This tutorial is most relevant to Sencha Touch 2.x.

Sencha Touch 2 brings a wide range of major enhancements and improvements to make building fast and powerful mobile web applications easier than ever. These improvements include vastly better Android performance, a new layout engine, the Sencha class system, a far more consistent API, and native packaging. But what does this mean for you and the lines of code in your application?

When upgrading a dependency of any application, such as a library or framework, developers normally need to think about a number of things:

- Changes to best practices: are there now better and more future-friendly ways to do things?
- Changes to underlying APIs used by the application: have parts of the API been renamed, become deprecated, or changed their return types?
- New possibilities: is new functionality now available for your application to make use of?
- Other interoperability issues and testing: did you make implicit assumptions about how the previous version worked that are no longer true? Do your application's tests still pass?

In this article, we'll take a very simple Sencha Touch application - the City Bars guide - and think about each of these areas, showing how to make it run on the PR2 release of the Sencha Touch 2 framework.

This release is still evolving, and some areas of the API are very likely to change as the framework approaches general release. There are also areas of the Sencha Touch 1.1 API that have yet to be migrated to Sencha Touch 2 at all - notably routes and profiles - so some applications will not yet be upgradable as-is. Nevertheless, much of the process we go through here will apply to your own upgrade efforts, regardless of the version you end up using.

The City Bars application itself uses the Yelp API to pull JSON data about city businesses.
The user interface comprises a card layout, containing a list of twenty matching businesses, and a disclosure-style transition to a detail page about a selected business. (In the Sencha Touch 1.1 version of the application, the detail page also included a map of the business's location. For this article, we've removed that second tab, due to known preview release issues in animation and map-in-tab layout.)

Getting started

The developer preview release of Sencha Touch 2 does not guarantee backwards compatibility with Sencha Touch 1.1, but our first task should be to at least try. After downloading Sencha Touch 2 PR2 and unzipping the SDK, we can drop it into the app, renaming it lib/touch2 alongside the existing lib/touch folder. As usual, the references to the SDK files are all in the index.html file. We can easily rename the paths in the JavaScript and stylesheet links to point to the new SDK:

    <script src="/lib/touch2/sencha-touch-all-debug.js"></script>
    <link href="/lib/touch2/resources/css/citybars.css" rel="stylesheet"/>

Note that we have used sencha-touch-all-debug.js, not sencha-touch.js. In Sencha Touch 2, the -all suffix denotes the complete library, containing all of the framework's components and libraries. One of the framework's biggest improvements in v2 is its class loader, which allows the application to download script files on demand. This technique would allow us to load the smaller sencha-touch.js file to bootstrap the application, and then have other components pulled from the server when needed. But to get this project going quickly, we'll simply use the master framework file. At this point it's also valuable to use the -debug version, which enables warnings about deprecated APIs and so forth.

Running the application in a desktop browser allows us to see console logging and script errors, and it's a good idea to do this before sending the upgraded application to real devices, which generally lack good diagnostic tools.
Here, we're using Chrome, but Safari will also suffice.

When we run this new application, we see a blank screen, and nothing on the console. Apparently the script runs successfully, but nothing displays! Has the application even been started?

Never fear: the Sencha Touch 2 'Getting Started' guide shows that there is a new best practice for starting an application. Rather than simply instantiating an application class, we're encouraged to use the Ext.application() method (thankfully with the same configuration object). Note the lowercase 'a'. So, to get things started, we simply change:

    var cb = new Ext.Application({...

to:

    Ext.application({...

This method not only instantiates the application, but also binds its launch event to the page's load event. (This was the missing step that caused our page to be blank.) It's also worth pointing out that the method does not return a reference to the new application (which we'd previously used as a global namespace of sorts), so we should take care to manually declare a namespace at the start of the script which we can then use to reference the app, and into which we can put useful references:

    var cb;
    Ext.application({
        launch: function() {
            cb = this;

(It's a known issue that the configuration's name property does not automatically create the namespace on its own.)

With that, our application now at least starts up, as witnessed by the deprecation warnings that appear in the console. And the app does now appear - or at least, the docked toolbar does.

Why does nothing happen once the user interface gets laid out? Well, if you look at the original code, you'll see our application is based on a series of asynchronous calls to determine the city name and then to look up business information. (As it happens, we've hard-coded the city name, but this pattern better enables us to later wire up the app for geolocation of the user, and then use a dynamic city name in the store proxy's URL.)
This callback sequence is tied to the main panel's afterrender event - in other words, it runs when we know the application's user interface is ready, so that we can display a loading spinner. Since no errors are being thrown, our first question should be whether we are still attaching our logic to the right event. A quick look at the Ext.Panel documentation confirms that afterrender is no longer available as an event - hence our code was never executed. In fact, the event is no longer relevant at all: because of changes to the layout engine, a component is available as soon as it has been instantiated. So all of the code that was in the afterrender listener:

    listeners: {
        'afterrender': function () {
            // when the viewport loads...

...can be moved out into the main part of launch(), just after the component is instantiated.

Success! Or at least we now have output on the console - and an error - which means it is now time to start looking at the API calls we are using, and ensure they are still available and appropriate in Sencha Touch 2.

API changes

Understanding changes to an API is sometimes a matter of trial and error. Sometimes the changes are obvious (as here, where an exception is thrown), sometimes they are raised as warnings by deprecation flags, and sometimes they are simply learnt by reading upgrade documentation (like this article!). In this walkthrough, we just push on through the exceptions that are raised, confirming the changes we need to make by consulting the documentation. Our first, as shown in the console above, is that the setLoading() method is not available on our root panel.
Looking at the code, we can see there is an undocumented mask() alternative we could use, but preferably we can create a panel-independent loading mask using Ext.LoadMask, and bind it to the businesses store, just after the store is registered:

    new Ext.LoadMask(Ext.getBody(), {
        store: 'businesses',
        msg: ''
    });

Its appearance and disappearance will then be taken care of by the store's own loading events, and we can remove all explicit setLoading() calls.

Refreshing the app, our next exception is:

    Uncaught TypeError: Object [object Object] has no method 'getDockedItems'

Sencha Touch 2 has changed the way docked and child items of containers are distinguished. Rather than using a distinct dockedItems configuration property, docked items are now listed in the main items property, and need to be given a docked property to indicate the docking position. So, in general, we need to change things like:

    {
        dockedItems: [
            {xtype: 'toolbar'}
        ],
        items: [
            {html: 'hello'},
            {html: 'world'}
        ]
    }

to:

    {
        items: [
            {xtype: 'toolbar', docked: 'top'},
            {html: 'hello'},
            {html: 'world'}
        ]
    }

...throughout the app. Along with this change, the getDockedItems() method has been removed, and we need to use the regular item accessors to reach the toolbar - in this app, for the purpose of updating its title dynamically. The simplest change is to replace:

    cards.listCard.getDockedItems()[0].setTitle(...);

with:

    cards.listCard.items.items[0].setTitle(...);

However, this assumes that the docked items are added to the beginning of the items array. For more resilience, you may prefer to use a ComponentQuery selector to reference child items. This one, for example, returns the first top-docked toolbar:

    cards.listCard.query('toolbar[docked="top"]')[0].setTitle(...);

And with that, we push past our final start-up exception and reach a working list view. So far, we've only changed around 5% of our original lines of code. Not too bad.
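If you have many views to migrate, the dockedItems-to-items transformation is mechanical enough to script. The helper below is purely hypothetical (it is not part of Sencha Touch); it sketches the rewrite as a plain config transformation, assuming each docked item should default to a 'top' position unless it already declares one:

```javascript
// Hypothetical migration helper: converts a v1-style config with a
// separate dockedItems array into the v2 shape, where docked children
// live in the main `items` array with a `docked` position property.
function migrateDockedItems(config, defaultPosition) {
  defaultPosition = defaultPosition || 'top';

  // Give each docked item a `docked` property (item's own value wins).
  var docked = (config.dockedItems || []).map(function (item) {
    return Object.assign({ docked: defaultPosition }, item);
  });

  // Copy the config, drop dockedItems, and prepend docked children.
  var result = Object.assign({}, config);
  delete result.dockedItems;
  result.items = docked.concat(config.items || []);
  return result;
}

var v2 = migrateDockedItems({
  dockedItems: [{ xtype: 'toolbar' }],
  items: [{ html: 'hello' }, { html: 'world' }]
});
// v2.items -> [{docked:'top', xtype:'toolbar'}, {html:'hello'}, {html:'world'}]
```

Note that prepending docked items preserves the items.items[0] access shown above, though the ComponentQuery approach remains the safer way to find them afterwards.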
Our next exception comes when we try to transition from the list to the detail page. It's slightly less helpful ('DOM Exception 8'), but a quick stack trace highlights that the issue lies on line 109 of our app, where we are using an update() method. We should already suspect something is afoot here, given the many warnings above indicating that update() has been deprecated. Although those warnings recommend using setHtml() instead, we were using the other previous meaning of the update() method, which was to apply data to a template - in this case, the data of the business's record to the template on the detail page. For this purpose, we should replace update() with a different new method, setData(). We can more or less find-and-replace here, noting that the original application also overrode the method on the tab panel to cascade updates down to each tab. We can still use that pattern, and, because docked items are now first-class members of their parent's items array, we can even override the method on the toolbar of the detail page to update it with the restaurant's name:

    {
        // the details card
        id: 'detailCard',
        items: [{
            // detail page also has a toolbar
            docked: 'top',
            xtype: 'toolbar',
            title: '',
            setData: function(data) {
                this.setTitle(data.name);
            },
            ...

Remember, we're removing the map functionality, so we can also remove the detail card's xtype: 'tabpanel', its tabBar property, and the Ext.Map child tab item.

Our application is no longer throwing exceptions, so we can probably declare that we've updated the app to match the new Sencha Touch 2 API. Now we can move on to dealing with more subtle alterations, such as those detailed in the remaining deprecation warnings - not to mention the slightly strange layout you may notice we now have on the detail page.

New Best Practices

Sometimes the warnings that Sencha Touch 2 generates are very explicit about how to upgrade API calls to make better use of the framework's updated architecture.
For example:

    Ext.regModel has been deprecated. Models can now be created by extending Ext.data.Model:
    Ext.define("MyModel", {extend: "Ext.data.Model", fields: []});

This warning hints at a significant change we can start to make to our code: the use of Ext.define() and Ext.create() to start capitalizing on the benefits of the framework's new class loader. This article won't go into detail about the system as a whole (for that, please see Jacky's excellent article), but for now it is sufficient to say that we should start using Ext.define() when defining components or classes (as with the models above) and Ext.create() when instantiating them. This means the Sencha Touch loader and build tools will be able to keep track of your class structure and instantiations when you come to use them.

So, for example, as the warning recommends, extending the model class to describe businesses can use Ext.define(), and:

    Ext.regModel("Business", {
        fields: [...

should become:

    Ext.define("Business", {
        extend: "Ext.data.Model",
        fields: [...

We should also hunt down uses of the new keyword, and consider replacing them with Ext.create(). For example:

    cb.cards = new Ext.Panel({...

should become:

    cb.cards = Ext.create('Ext.Panel', {...

...and even our recently added:

    new Ext.LoadMask(Ext.getBody(), {...

should become:

    Ext.create('Ext.LoadMask', Ext.getBody(), {...

At this point, we should also track down any remaining deprecation warnings relating to our application. This one in particular seems relevant:

    [DEPRECATE][Ext.dataview.List#bindStore] 'bindStore()' is deprecated, please use 'setStore' instead

...which is easily solved by updating:

    cards.dataList.bindStore(store);

to:

    cards.dataList.setStore(store);

However, you may also have noticed a selection of other deprecation warnings being emitted by the application. At some point, you may decide that none of the remaining warnings are actually caused by direct calls in your code.
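To see why registering classes by string name matters to a dynamic loader, here is a deliberately simplified sketch of the define/create pattern. This is NOT Sencha's implementation - the names and mechanics are invented for illustration - but it shows the key idea: when every definition and instantiation goes through a string-keyed registry, the framework (or a build tool) can observe which classes are needed and resolve or fetch them on demand:

```javascript
// Toy class registry illustrating the define/create pattern.
var registry = {};

function define(name, members) {
  function Cls(config) {
    Object.assign(this, config); // apply instance config
  }
  // Support single inheritance via an `extend` key naming a parent.
  if (members.extend) {
    Cls.prototype = Object.create(registry[members.extend].prototype);
  }
  // Copy the remaining members onto the prototype.
  Object.keys(members).forEach(function (key) {
    if (key !== 'extend') Cls.prototype[key] = members[key];
  });
  registry[name] = Cls;
  return Cls;
}

function create(name, config) {
  // A real loader could intercept unknown names here and fetch
  // their source before instantiating.
  return new registry[name](config);
}

define('Model', { fields: [] });
define('Business', { extend: 'Model', fields: ['name', 'phone'] });
var b = create('Business', { id: 1 });
// b.id -> 1; b.fields -> ['name', 'phone']; b is also an instance of Model
```

The real Ext.define() does far more (mixins, statics, dependency resolution), but the string-name indirection is the part that enables on-demand loading and build-time analysis.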
In our case, none of the remaining warnings come from direct calls in our own code; they appear because parts of the framework have not yet themselves been updated to use the new methods and patterns. As Sencha Touch 2 approaches a more mature release, these updates will all have been made, and such 'internal' warnings will cease to exist.

Final Tweaks

As we suspected, we should always expect the possibility of interoperability issues, especially when upgrading to the preview release of a framework. In our case, the biggest offender seems to be a problem with the layout of the detail page.

As you'll know, one of Sencha Touch 2's major improvements is a huge performance gain in the user-interface layout engine. This has been achieved by making more use of the target browsers' own CSS layout engines where appropriate, and in particular the CSS3 flexible box model. This can sometimes interfere with previous assumptions we've made about layout styling, and it is the likely cause of the issue we are seeing here.

Yet another advantage of using a desktop browser debugger is that we can explore the DOM of the app's user interface to figure out what has gone wrong with our particular layout. By right-clicking on any of the elements in this 'broken' page, we can explore their CSS properties to see why they are arranged horizontally instead of vertically as we wanted. In this case, the issue is that the x-container of the detail panel has a display: -webkit-box; property but no orientation (which defaults to horizontal). We can quickly fix this with an extra CSS rule on this panel to make the orientation explicit:

    .detail {
        -webkit-box-orient: vertical;
    }

We also notice that the HTML formatting (such as the larger <h2> heading on the detail page) is missing.
Our use of styleHtmlContent: true on the detail page is not being obeyed for some reason, so we can set it explicitly as part of the overridden setData() method:

    setData: function(data) {
        this.setStyleHtmlContent(true);
        // updating card cascades to update each tab
        ...

We should also remove the float property from the image to make sure it plays nicely with the cascading flex box:

    .detail .photo {
        float: none;
    }

Obviously, developers should expect the need for these kinds of tweaks to decline dramatically as the framework approaches its mature release.

Finally, it's worth mentioning custom theming. As you hopefully know, Sencha Touch uses Sass extensively to provide a theming framework that makes it easy to update an entire user interface's appearance with minimal changes. This approach relies on a set of Sencha Touch Sass modules, and it goes without saying that these have also been updated for Sencha Touch 2. What this means is that you will not be able to use previously compiled CSS files with Sencha Touch 2 apps; you will need to recompile your own Sass with the new modules from the Sencha Touch 2 SDK. You're unlikely to need to make any changes to your Sass file, though. In our case, we can simply change the Compass config.rb to point to the lib/touch2 SDK, recompile, and link to a new citybars.css file in our index.html. Et voila!

And all with only 22 very minor changes to the application - at least, as far as my diff tool claims! The full app is available in the citybars2 GitHub repository (although you will need to copy the Sencha Touch PR2 SDK into the app yourself to run it locally). You are strongly advised to add your own (free) Yelp API key, as using the one in the repository code will undoubtedly lead to readers collectively exceeding Yelp usage limits.

In Summary

Since Sencha Touch 2 is currently a preview, it does not yet have a complete and stable API.
However, we've been able to show that with a small number of changes, it is possible to start using the framework now for existing applications. To do so, you need to be aware of, and discover, the critical changes that have been made to the API. Further, the class loader system and other innovations mean that there will often be better patterns you should start to use; Ext.define() and Ext.create() are strongly recommended, for example.

Finally, be tolerant of the niggles introduced by a combination of a pre-release framework and assumptions you might have made when developing against Sencha Touch 1.1. These are generally very easy to work around or solve. And if not, the Sencha forums remain a valuable place to log and discuss such bugs and issues.

This has been a brief introduction to some of the updates you should be aware of when using Sencha Touch 2 PR2. We hope you enjoy working with it and benefiting from the major enhancements these small changes on your part can bring.

8 Comments

Mike Pacific (3 years ago): Fantastic tutorial. Do you guys have a general timeline yet for the Touch 2 preview/final releases? As it is, there are lots of question marks for those of us about to start a new app.

Ram (3 years ago): "we're removing map functionality" - did you mean that the map component was removed in ST2?

James Pearce, Sencha Employee (3 years ago): @Ram, no, Ext.Map has not been removed from the framework; it's just that there are some bugs with placing maps inside tabs in the current Preview Release. It simplified this article to leave that out for now.

Edmund Leung (3 years ago): @Mike, our goal is to release Touch 2 Beta by early next year or sooner. We will review our final release/GA date based on the feedback we get from the beta.

John (3 years ago): niggles lol

Ram (3 years ago): Nice article... thanks James.

Tom Sawyer (3 years ago): Hi, Sencha has lots of issues running on Android devices.
Not only is it sloppy, but it also does not render properly even on high-end smartphones such as the Samsung Galaxy Ace. How are you going to address this issue? If it isn't addressed, I don't see the point in investing time in learning the framework.

James Pearce, Sencha Employee (3 years ago): @Tom, are you referring to Sencha Touch 1.x or the 2 previews? We've spent most of this release focused on Android performance, so it would be worth checking out the improvements. (The blog announcement includes some demo videos of just that.)
http://www.sencha.com/learn/upgrading-to-sencha-touch-2-pr2
→ New → Web Site → ASP.NET Web Site).

Here's another new feature in Visual Studio 2005: you can create a project by specifying a folder on the file system (instead of specifying a web location) if you select File System in the Location drop-down list, as shown in Figure 2-3. This enables you to create an ASP.NET project without creating a related virtual application or virtual directory in the IIS metabase (the metabase is where IIS stores its configuration data). The project is loaded from a real hard disk folder, and executed by an integrated lightweight web server (called the ASP.NET Development Server) that handles requests on a TCP/IP port other than the one used by IIS (IIS uses port 80). The actual port number is determined randomly every time you press F5 to run the web site in debug mode.

This makes it much easier to move and back up projects, because you can just copy the project's folder and you're done; there's no need to set up anything from the IIS Management console. In fact, Visual Studio 2005 does not even require IIS unless you choose to deploy to an IIS web server, or you specify a web URL instead of a local path when you create the web site project. If you've developed with any previous version of ASP.NET or Visual Studio, I'm sure you will welcome this new option. I say this is an option because you can still create the project by using a URL as the project path - creating and running the site under IIS - by selecting HTTP in the Location drop-down list. I suggest you create and develop the site by using the File System location, with the integrated web server, and then switch to the full-featured IIS web server for the test phase. VS2005 includes a new deployment wizard that makes it easier to deploy a complete solution to a local or remote IIS web server.

For now, however, just create a new ASP.NET web site in a folder you want, and call it TheBeerHouse.
After creating the new web site, right-click Default.aspx and delete it. We'll make our own default page soon.

Creating the master page with the shared site design is not that difficult once you have a mock-up image (or a set of images if you made them separately). Basically, you cut out the logo and the other graphics and put them in the HTML page. The other parts of the layout, such as the menu bar, the columns, and the footer, can easily be reproduced with HTML elements such as DIVs. The template provided by TemplateMonster (and just slightly modified and expanded by me) is shown in Figure 2-4. From this picture you can cut out the header bar altogether and place some DIV containers over it: one for the menu links, one for the login box, and another for the theme selector (a drop-down list containing the names of the available themes). These DIVs will use absolute positioning so that you can place them right where you want them. It's easy to determine the correct top-left or top-right coordinates for them - you just hover the mouse cursor over the image opened in the graphics editor and then use the same x and y values you read there. Figure 2-5 provides a visual representation of this work, applied to the previous image.

After creating the web site as explained above, create a new master page file (select Website → Add New Item → Master Page, and name it Template.master), and then use the visual designer to add the ASP.NET server-side controls and static HTML elements to its surface. However, when working with DIV containers and separate stylesheet files, I've found that the visual designer is not able to give me the flexibility I desire. I find it easier to work directly in Source view, and write the code by hand.
As I said earlier, creating the master page is not much different from creating a normal page; the most notable differences are the @Master directive at the top of the file and the presence of ContentPlaceHolder controls where the .aspx pages will plug in their own content. What follows is the code that defines the standard HTML metatags and the site's header for the file Template.master:

    <%@ Master
    <html xmlns="" >
    <head id="Head1" runat="server">
        <meta http-
        <title>TheBeerHouse</title>
    </head>
    <body>
    <form id="Main" runat="server">
    <div id="header">
        <div id="header2">
            <div id="headermenu">
                <asp:SiteMapDataSource
                <asp:Menu
            </div>
        </div>
        <div id="loginbox">Login box here...</div>
        <div id="themeselector">Theme selector here...</div>
    </div>

As you can see, there is nothing in this first snippet of code that relates to the actual appearance of the header. That's because the appearance of the containers, text, and other objects will be specified in the stylesheet and skin files. The "loginbox" container will be left empty for now; we'll fill it in when we get to Chapter 4, which covers security and membership. The "themeselector" box will be filled in later in this chapter, as soon as we develop a control that displays the available styles from which users can select. The "headermenu" DIV contains a SiteMapDataSource, which loads the content of the Web.sitemap file that you'll create shortly. It also contains a Menu control, which uses the SiteMapDataSource as the data source for the items to create.
Proceed by writing the DIVs for the central part of the page, with the three columns:

    <div id="container">
        <div id="container2">
            <div id="rightcol">
                <div class="text">Some text...</div>
                <asp:ContentPlaceHolder
            </div>
            <div id="centercol">
                <div id="breadcrumb">
                    <asp:SiteMapPath
                </div>
                <div id="centercolcontent">
                    <asp:ContentPlaceHolder
                        <p> </p><p> </p><p> </p><p> </p>
                        <p> </p><p> </p><p> </p><p> </p>
                    </asp:ContentPlaceHolder>
                </div>
            </div>
        </div>
        <div id="leftcol">
            <div class="sectiontitle">
                <asp:Image
                Site News
            </div>
            <div class="text"><b>20 Aug 2005 :: News Header</b><br />
                News text...
            </div>
            <div class="alternatetext"><b>20 Aug 2005 :: News Header</b><br />
                Other news text...
            </div>
            <asp:ContentPlaceHolder
            <div id="bannerbox">
                <a href="" target="_blank">
                    Website Template supplied by Template Monster, a top global
                    provider of website design templates<br /><br />
                    <asp:Image
                </a>
            </div>
        </div>
    </div>

Note that three ContentPlaceHolder controls are defined in the preceding code, one for each column. This way, a content page will be able to add text in different positions. Also remember that filling a ContentPlaceHolder with content is optional; in some cases we'll have pages that just add content to the central column, using the default content defined in the master page for the other two columns. The central column also contains a sub-DIV with a SiteMapPath control for the breadcrumb navigation system.
The remaining part of the master page defines the container for the footer, with its subcontainers for the footer's menu (which exactly replicates the header's menu, except for the style applied to it) and the copyright notices:

    <div id="footer">
        <div id="footermenu">
            <asp:Menu
        </div>
        <div id="footertext">
            <small>Copyright © 2005 Marco Bellinaso &
            <a href="" target="_blank">Wrox Press</a><br />
            Website Template kindly offered by
            <a href="" target="_blank">Template Monster</a></small>
        </div>
    </div>
    </form>
    </body>
    </html>

Given how easy it is to add, remove, and modify links in the site's menus when employing the sitemap file and the SiteMapPath control, at this point you don't have to worry about which links you'll need. You can fill in the link information later: add a few preliminary links to use as a sample for now, and then come back and modify the file when you need to. Therefore, add a Web.sitemap file to the project (select Website → Add New Item → Site Map) and add the following XML nodes inside it:

    <?xml version="1.0" encoding="utf-8" ?>
    <siteMap xmlns="" >
        <siteMapNode title="Home" url="~/Default.aspx">
            <siteMapNode title="Store" url="~/Store/Default.aspx">
                <siteMapNode title="Shopping cart" url="~/Store/ShoppingCart.aspx" />
            </siteMapNode>
            <siteMapNode title="Forum" url="~/Forum/Default.aspx" />
            <siteMapNode title="About" url="~/About.aspx" />
            <siteMapNode title="Contact" url="~/Contact.aspx" />
            <siteMapNode title="Admin" url="~/Admin/Default.aspx" />
        </siteMapNode>
    </siteMap>

You may be wondering why the Home node serves as the root node for the others, instead of sitting at the same level as them. That would actually be an option, but I want the SiteMapPath control to always show the Home link before the rest of the path that leads to the current page, so it must be the root node.
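The breadcrumb trail can be derived purely from this sitemap structure. As a language-neutral illustration (plain JavaScript, not ASP.NET code; the function and data names are invented for the sketch), here is the essential algorithm: find the node whose url matches the current page, then report the titles of all its ancestors down from the root:

```javascript
// Sketch: compute a breadcrumb trail from a sitemap tree by locating
// the node matching `url` and collecting the titles on the way down.
function findTrail(node, url, trail) {
  var path = (trail || []).concat([node.title]);
  if (node.url === url) return path;          // found the current page
  var children = node.children || [];
  for (var i = 0; i < children.length; i++) {
    var found = findTrail(children[i], url, path);
    if (found) return found;                  // found somewhere below
  }
  return null;                                // not in this subtree
}

// A subset of the Web.sitemap above, as a plain object tree.
var siteMap = {
  title: 'Home', url: '~/Default.aspx',
  children: [
    { title: 'Store', url: '~/Store/Default.aspx',
      children: [{ title: 'Shopping cart', url: '~/Store/ShoppingCart.aspx' }] },
    { title: 'About', url: '~/About.aspx' }
  ]
};

var trail = findTrail(siteMap, '~/Store/ShoppingCart.aspx');
// trail -> ['Home', 'Store', 'Shopping cart']
```

Because Home is the root of the tree, every trail necessarily begins with it, which is exactly the behavior wanted from the SiteMapPath control.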
In fact, the SiteMapPath control does not work by remembering the previous pages; it just looks in the site map for an XML node that describes the current page, and displays the links of all its parent nodes.

It's time to create the first theme for the master page: TemplateMonster. Add a new folder to the project named App_Themes, and then a new subfolder under it called TemplateMonster. Select the App_Themes\TemplateMonster folder, and add a stylesheet file to this folder (select Website → Add New Item → Stylesheet, and name it Default.css). The name you give to the CSS file is not important, as all CSS files found in the current theme's folder are automatically linked by the .aspx page at runtime. For your reference, the code that follows includes part of the style classes defined in this file (refer to the downloadable code for the entire stylesheet):

    body { margin: 0px; font-family: Verdana; font-size: 12px; }
    #container { background-color: #818689; }
    #container2 { background-color: #bcbfc0; margin-right: 200px; }
    #header { padding: 0px; margin: 0px; width: 100%; height: 184px;
        background-image: url(images/HeaderSlice.gif); }
    #header2 { padding: 0px; margin: 0px; width: 780px; height: 184px;
        background-image: url(images/Header.gif); }
    #headermenu { position: relative; top: 153px; left: 250px; width: 500px;
        padding: 2px 2px 2px 2px; }
    #breadcrumb { background-color: #202020; color: White; padding: 3px;
        font-size: 10px; }
    #footermenu { text-align: center; padding-top: 10px; }
    #loginbox { position: absolute; top: 16px; right: 10px; width: 180px;
        height: 80px; padding: 2px 2px 2px 2px; font-size: 9px; }
    #themeselector { position: absolute; text-align: right; top: 153px;
        right: 10px; width: 180px; height: 80px; padding: 2px 2px 2px 2px;
        font-size: 9px; }
    #footer { padding: 0px; margin: 0px; width: 100%; height: 62px;
        background-image: url(images/FooterSlice.gif); }
    #leftcol { position: absolute;
        top: 184px; left: 0px; width: 200px;
        background-color: #bcbfc0; font-size: 10px; }
    #centercol { position: relative; margin-left: 200px; padding: 0px;
        background-color: white; height: 500px; }
    #centercolcontent { padding: 15px 6px 15px 6px; }
    #rightcol { position: absolute; top: 184px; right: 0px; width: 198px;
        font-size: 10px; color: White; background-color: #818689; }
    .footermenulink { font-family: Arial; font-size: 10px; font-weight: bold;
        text-transform: uppercase; }
    .headermenulink { font-family: Arial Black; font-size: 12px;
        font-weight: bold; text-transform: uppercase; }
    /* other styles omitted for brevity's sake */

Note how certain elements (such as "loginbox", "themeselector", "leftcol", and "rightcol") use absolute positioning. Also note that there are two containers with two different styles for the header. The former, header, is as wide as the page (if you don't specify an explicit width and don't use absolute positioning, a DIV will always have an implicit width of 100%), has a background image that is 1 pixel wide, and is (implicitly, by default) repeated horizontally. The latter, header2, is as wide as the Header.gif image it uses as a background, and is placed over the first container. The result is that the first container continues the background of the second container, which has a fixed width. This is required only because we want a dynamic layout that fills the whole width of the page; if we had used a fixed-width layout, we could have used just a single container.

Now add a skin file named Controls.skin (select the TemplateMonster folder and then select Website → Add New Item → Skin File). You will place into this file all the server-side styles to apply to controls of all types. Alternatively, you could create a different file for every control, but I find it easier to manage styles in a single file.
The code that follows contains two unnamed skins, for the TextBox and SiteMapPath controls, and two named (SkinID) skins for the Label control:

<asp:TextBox ... />
<asp:Label ... />
<asp:Label ... />
<asp:SiteMapPath ...>
   <PathSeparatorTemplate>
      <asp:Image ... />
   </PathSeparatorTemplate>
</asp:SiteMapPath>

The first three skins are mainly for demonstrative purposes, because you can get the same results by defining normal CSS styles. The skin for the SiteMapPath control, however, is something that you can't easily replicate with CSS styles, because this control does not map to a single HTML element. In the preceding code, this skin declares what to use as a separator for the links that lead to the current page — namely, an image representing an arrow.

Now that you have a complete master page and a theme, you can test them by creating a sample content page. To begin, add a new web page named Default.aspx to the project (select the project in Solution Explorer and choose Website ➪ Add New Item ➪ Web Form), and select the checkbox called Select Master Page in the Add New Item dialog box. You'll then be presented with a second dialog window from which you can choose the master page to use — namely, Template.master. When you select this option the page will contain just Content controls that match the master page's ContentPlaceHolder controls, and not the <html>, <body>, <head>, and <form> tags that would be present otherwise. You can put some content in the central ContentPlaceHolder, as shown here:

<%@ Page ... %>
<asp:Content ...></asp:Content>
<asp:Content ...>
   <asp:Image ... />
   Lorem ipsum dolor sit amet, consectetuer adipiscing elit...
</asp:Content>
<asp:Content ...></asp:Content>

You could have also added the Theme attribute to the @Page directive, setting it equal to "TemplateMonster". However, instead of doing this here, you can do it in the web.config file, once, and have it apply to all pages. Select the project, then Website ➪ Add New Item ➪ Web Configuration File.
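Because the attribute lists in the skin declarations above are elided, here is a hypothetical sketch of what a complete Controls.skin could contain. Every property value, SkinID name, and image path below is an illustrative assumption, not the book's actual code:

```
<asp:TextBox runat="server" BorderWidth="1px" BorderColor="#666666"
   Font-Names="Verdana" Font-Size="10px" />
<asp:Label runat="server" SkinID="FeedbackOK" ForeColor="Green" Font-Bold="true" />
<asp:Label runat="server" SkinID="FeedbackKO" ForeColor="Red" Font-Bold="true" />
<asp:SiteMapPath runat="server" PathSeparator=" ">
   <PathSeparatorTemplate>
      <asp:Image runat="server" ImageUrl="~/Images/Arrow.gif" />
   </PathSeparatorTemplate>
</asp:SiteMapPath>
```

Note that unnamed skins (those without a SkinID) apply to every control of that type in themed pages, whereas named skins apply only to controls that explicitly set a matching SkinID.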
Remove the MasterPageFile attribute from the code of the Default.aspx page, because you'll also put that in web.config, as follows:

<?xml version="1.0"?>
<configuration xmlns="">
   <system.web>
      <pages theme="TemplateMonster" masterPageFile="~/Template.master" />
      <!-- other settings here... -->
   </system.web>
</configuration>

To test the user-selectable theming feature described earlier in the chapter, we must have more than one theme. Thus, under the App_Themes folder, create another folder named PlainHtmlYellow (select the project, right-click Add Folder ➪ Theme Folder, and name it PlainHtmlYellow), and then copy and paste the whole Default.css file from the TemplateMonster folder, modifying it to make it look different. In the provided example I've changed most of the containers so that no background image is used, and the header and footer are filled with simple solid colors, like the left- and right-hand columns. Not only is the size of some elements different, but also the position. For the left- and right-hand columns in particular (which use absolute positioning), their positions are completely switched, so that the container named leftcol gets docked on the right border, and the rightcol container gets docked on the left. This is done by changing just a couple of style classes, as shown below:

#leftcol { position: absolute; top: 150px; right: 0px; width: 200px; background-color: #ffb487; font-size: 10px; }
#rightcol { position: absolute; top: 150px; left: 0px; width: 198px; color: White; background-color: #8d2d23; font-size: 10px; }

This is the power of DIVs and stylesheets: change a few styles, and content that used to be on the left of the page will be moved to the right. This was a pretty simple example, but you can push this much further and create completely different layouts, with some parts hidden and others made bigger, and so on.
As for the skin file, just copy and paste the whole Controls.skin file defined under TemplateMonster, and remove the definitions for the TextBox and SiteMapPath controls so that they will have the default appearance; you'll see the difference when we change the theme at runtime. If you later want to apply a non-default appearance to them, just go back and add a new style definition to this file, without modifying anything else.

You now have a master page with a couple of themes for it, so now you can develop a user control that will display the list of available themes and allow the user to pick one. Once you have this control, you will plug it into the master page, in the "themeselector" DIV container. Before creating the user control, create a new folder named "Controls", inside of which you'll put all your user controls so that they are separate from pages, for better organization (select the project, right-click Add Folder ➪ Regular Folder, and name it Controls). To create a new user control, right-click on the Controls folder, select Add New Item ➪ Web User Control, and name it ThemeSelector.ascx. The content of this .ascx file is very simple, and includes just a string of text and a DropDownList:

<%@ Control ... %>
...

Note that the drop-down list has the AutoPostBack property set to true, so that the page is automatically submitted to the server as soon as the user changes the selected value. The real work of filling the drop-down list with the names of the available themes, and of loading the selected theme, will be done in this control's code-beside file, and in a base page class that you'll see shortly.
In the code-beside file, you need to fill the drop-down list with an array of strings returned by a helper method, and then select the item that has the same value as the current page's Theme:

public partial class ThemeSelector : System.Web.UI.UserControl
{
   protected void Page_Load(object sender, EventArgs e)
   {
      ddlThemes.DataSource = Helpers.GetThemes();
      ddlThemes.DataBind();
      ddlThemes.SelectedValue = this.Page.Theme;
   }
}

The GetThemes method is defined in a Helpers.cs file that is located under another special folder, named App_Code. Files in this folder are automatically compiled at runtime by the ASP.NET engine, so you don't need to compile them before running the project. You can even modify the C# source code files while the application is running, hit refresh, and the new request will recompile the modified file into a new temporary assembly and load it. You'll read more about the new compilation model later in the book, and especially in Chapter 12 about deployment.

The GetThemes method uses the GetDirectories method of the System.IO.Directory class to retrieve an array with the paths of all folders contained in the ~/App_Themes folder (this method expects a physical path and not a URL — you can, however, get the physical path pointed to by a URL through the Server.MapPath method). The returned array of strings contains the entire path, not just the folder name, so you must loop through this array and overwrite each item with that item's folder name part (returned by the System.IO.Path.GetFileName static method). Once the array is filled for the first time, it is stored in the ASP.NET cache so that subsequent requests will retrieve it from there more quickly.
The following code shows the entire content of the Helpers class (App_Code\Helpers.cs); note the using directives for System.IO, System.Web, and System.Web.Caching that the class requires:

using System.IO;
using System.Web;
using System.Web.Caching;

namespace MB.TheBeerHouse.UI
{
   public static class Helpers
   {
      /// <summary>
      /// Returns an array with the names of all local Themes
      /// </summary>
      public static string[] GetThemes()
      {
         if (HttpContext.Current.Cache["SiteThemes"] != null)
         {
            return (string[])HttpContext.Current.Cache["SiteThemes"];
         }
         else
         {
            string themesDirPath = HttpContext.Current.Server.MapPath("~/App_Themes");
            // get the array of theme folders under /App_Themes
            string[] themes = Directory.GetDirectories(themesDirPath);
            for (int i = 0; i <= themes.Length - 1; i++)
               themes[i] = Path.GetFileName(themes[i]);
            // cache the array with a dependency on the folder
            CacheDependency dep = new CacheDependency(themesDirPath);
            HttpContext.Current.Cache.Insert("SiteThemes", themes, dep);
            return themes;
         }
      }
   }
}

Now that you have the control, go back to the master page and add the following line at the top of the file in the Source view to reference the external user control:

<%@ Register Src="Controls/ThemeSelector.ascx" TagName="ThemeSelector" TagPrefix="mb" %>

Then declare an instance of the control where you want it to appear — namely, within the "themeselector" container:

<div id="themeselector">
   <mb:ThemeSelector ... />
</div>

The code that handles the switch to a new theme can't be placed in the DropDownList's SelectedIndexChanged event handler, because that event fires too late in the page's life cycle. As I said in the "Design" section, the new theme must be applied in the page's PreInit event. Also, instead of recoding it for every page, we'll write that code once, in a custom base page. Our objective is to read the value of the DropDownList's selected index from within our custom base class, and then to apply the theme specified by the DropDownList. However, you can't access the controls and their values from the PreInit event handler, because it's still too early in the page's life cycle.
Therefore, you need to read the value of this control in a server event that occurs later: the Load event is a good place to do it. However, when you're in the Load event handler you won't know the specific ID of the DropDownList control, so you'll need a way to identify this control; you can then read its value by accessing the raw data that was posted back to the server, via the Request.Form collection. But there is still a remaining problem: you must know the ID of the control to retrieve its value from the collection, but the ID may vary according to the container in which you place it, and it's not a good idea to hard-code it because you might decide to change its location in the future. Instead, when the control is first created, you can save its client-side ID in a static field of a class, so that it will be maintained for the entire life of the application, between different requests (postbacks), until the application shuts down (more precisely, until the application domain of the application's assemblies is unloaded). Therefore, add a Globals.cs file to the App_Code folder, and write the following code inside it:

namespace MB.TheBeerHouse
{
   public static class Globals
   {
      public static string ThemesSelectorID = "";
   }
}

Then, go back to the ThemeSelector's code-beside file and add the code to save its ID in that static field:

public partial class ThemeSelector : System.Web.UI.UserControl
{
   protected void Page_Load(object sender, EventArgs e)
   {
      if (Globals.ThemesSelectorID.Length == 0)
         Globals.ThemesSelectorID = ddlThemes.UniqueID;

      ddlThemes.DataSource = Helpers.GetThemes();
      ddlThemes.DataBind();
      ddlThemes.SelectedValue = this.Page.Theme;
   }
}

You're now ready to create the custom base class for your pages. This will just be another regular class that you place under App_Code, and which inherits from System.Web.UI.Page. You override its OnPreInit method to do the following: check whether the current request is a postback.
If it is, check whether it was caused by the ThemeSelector drop-down list. As in ASP.NET 1.x, all pages with a server-side form have a hidden field named "__EVENTTARGET", which is set to the ID of the HTML control that caused the postback (if it is not a Submit button). To verify this condition, you can just check whether the "__EVENTTARGET" element of the Form collection contains the ID of the drop-down list, based on the ID read from the Globals class.

If the conditions of point 1 are all verified, retrieve the name of the selected theme from the Form collection's element with an ID equal to the ID saved in Globals, and use it to set the page's Theme property. Then, also store that value in a Session variable. This is done so that subsequent requests made by the same user will correctly load the newly selected theme, and will not reset it to the default theme.

If the current request is not a postback, check whether the Session variable used in point 2 is empty (null) or not. If it is not, retrieve that value and use it for the page's Theme property.
The following snippet translates this description into real code:

namespace MB.TheBeerHouse.UI
{
   public class BasePage : System.Web.UI.Page
   {
      protected override void OnPreInit(EventArgs e)
      {
         string id = Globals.ThemesSelectorID;
         if (id.Length > 0)
         {
            // if this is a postback caused by the theme selector's dropdownlist,
            // retrieve the selected theme and use it for the current page request
            if (this.Request.Form["__EVENTTARGET"] == id &&
               !string.IsNullOrEmpty(this.Request.Form[id]))
            {
               this.Theme = this.Request.Form[id];
               this.Session["CurrentTheme"] = this.Theme;
            }
            else
            {
               // if not a postback, or a postback caused by controls other than
               // the theme selector, set the page's theme with the value found
               // in Session, if present
               if (this.Session["CurrentTheme"] != null)
                  this.Theme = this.Session["CurrentTheme"].ToString();
            }
         }
         base.OnPreInit(e);
      }
   }
}

The last thing you have to do is change the default code-beside class for the Default.aspx page so that it uses your own BasePage class instead of the default Page class. Your custom base class, in turn, will call the original Page class. You only need to change one word, as shown below (change Page to BasePage):

public partial class _Default : MB.TheBeerHouse.UI.BasePage

A single page (Default.aspx) is not enough to test everything we've discussed and implemented in this chapter. For example, we haven't really seen the SiteMapPath control in practice, because it doesn't show any link until we move away from the home page, of course. You can easily implement the Contact.aspx and About.aspx pages if you want to test it. I'll take the Contact.aspx page as an example for this chapter, because I want to add an additional little bit of style to the TemplateMonster theme to further differentiate it from PlainHtmlYellow. The final page is represented in Figure 2-7. I won't show you the code that drives the page and sends the mail in this chapter, because that's covered in the next chapter.
I also will not show you the content of the .aspx file, because it is as simple as a Content control inside of which are some paragraphs of text and some TextBox controls with their accompanying validators. Instead, I want to direct your attention to the fact that the Subject textbox has a yellow background color: that's because it is the input control with the focus. Highlighting the active control helps users quickly see which control they are in, something that may not otherwise be immediately obvious if they are tabbing through controls displayed in multiple columns and rows. Implementing this effect is pretty easy: you just need to handle the onfocus and onblur client-side events of the input controls, and respectively apply or remove a CSS style on the control by setting its className attribute. The style class in the example sets the background color to yellow and the text color to blue. The class, shown below, should be added to the Default.css file of the TemplateMonster folder:

.highlight { background-color: Yellow; color: Blue; }

To add handlers for the onfocus and onblur JavaScript client-side events, you just add a couple of attribute name/value pairs to the control's Attributes collection, so that they will be rendered "as is" at runtime, in addition to the other attributes rendered by default by the control. You can add a new static method to the Helpers class created above, to wrap all the required code and call it more easily when you need it. The new SetInputControlsHighlight method takes the following parameters: a reference to a control, the name of the style class to be applied to the active control, and a Boolean value indicating whether only textboxes should be affected by this routine, or also DropDownList, ListBox, RadioButton, CheckBox, RadioButtonList, and CheckBoxList controls. If the control passed in is of the right type, this method adds the onfocus and onblur attributes to it. Otherwise, if it has child controls, it recursively calls itself.
This way, you can pass a reference to a Page (which is itself a control, since it inherits from the base System.Web.UI.Control class), a Panel, or some other type of container control, and have all child controls passed indirectly to this method as well. Following is the complete code for this method (note that the type checks for the non-TextBox controls must be performed only when onlyTextBoxes is false, otherwise the flag would have no effect):

public static class Helpers
{
   public static string[] GetThemes()
   { ... }

   public static void SetInputControlsHighlight(Control container,
      string className, bool onlyTextBoxes)
   {
      foreach (Control ctl in container.Controls)
      {
         if ((onlyTextBoxes && ctl is TextBox) ||
            (!onlyTextBoxes && (ctl is TextBox || ctl is DropDownList ||
            ctl is ListBox || ctl is CheckBox || ctl is RadioButton ||
            ctl is RadioButtonList || ctl is CheckBoxList)))
         {
            WebControl wctl = ctl as WebControl;
            wctl.Attributes.Add("onfocus",
               string.Format("this.className = '{0}';", className));
            wctl.Attributes.Add("onblur", "this.className = '';");
         }
         else
         {
            if (ctl.Controls.Count > 0)
               SetInputControlsHighlight(ctl, className, onlyTextBoxes);
         }
      }
   }
}

To run this code in the Load event of any page, override the OnLoad method in the BasePage class created earlier, as shown below:

namespace MB.TheBeerHouse.UI
{
   public class BasePage : System.Web.UI.Page
   {
      protected override void OnPreInit(EventArgs e)
      { ... }

      protected override void OnLoad(EventArgs e)
      {
         // add onfocus and onblur javascripts to all input controls on the form,
         // so that the active control has a different appearance
         Helpers.SetInputControlsHighlight(this, "highlight", false);
         base.OnLoad(e);
      }
   }
}

This code will always run, regardless of the fact that the PlainHtmlYellow theme does not define a "highlight" style class in its Default.css file. For this theme, the active control will simply not have any particular style. Little tricks like this one are easy and quick to implement, but they can really improve the user experience, and positively impress users.
Furthermore, the use of a custom base class for content pages greatly simplifies the implementation of future requirements.
Subject: Re: [boost] Notice: Boost.Atomic (atomic operations library)
From: Andrey Semashev (andrey.semashev_at_[hidden])
Date: 2009-12-03 12:15:52

Helge Bahmann wrote:
> On Wed, 2 Dec 2009, Vicente Botet Escriba wrote:
> >
> I'm not sure if the free-standing functions are of too much value
> (personally I dislike them for C++). I will certainly add them if
> someone wants them, but probably it would be preferable for them not to
> live in the root namespace "boost".

I think free-standing functions can be useful if one needs to operate on a POD type (which atomic<T> isn't). For example, one could safely use such functions with local statics.

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
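For readers unfamiliar with the free-standing style under discussion: a very similar interface later landed in C++11's <atomic>. The sketch below is my illustration, not Boost.Atomic code; it shows free functions operating on an atomic object through a pointer, including the local-static case mentioned in the reply above.

```cpp
#include <atomic>

// Free-function style: atomic_fetch_add(&obj, n) instead of obj.fetch_add(n).
// With a constant initializer, the local static needs no dynamic
// initialization, so there is no initialization race to worry about.
int next_id()
{
    static std::atomic<int> counter{0};
    return std::atomic_fetch_add(&counter, 1);  // returns the previous value
}
```

Calling next_id() from several threads yields unique, consecutive ids; the increment itself is the atomic operation, performed through a free function rather than a member call.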
Agenda See also: IRC log

everyone to complete action items from FTF
we should be working on schema and update soon
Shadi will update soon with resolutions from FTF
ET TF has started! all tool info will be entered in structured form in database

SAZ: what to do about some tools that could be deleted? We'll bring up at tomorrow's ETTF meeting.
... will contact Jim re his ER tools.
SAZ: Eric P suggested we extend schema to describe delivery unit. JK has drafted something...
JK: Has reviewed HTTP protocol. Created classes for each method.
... Content type and length property names are upper case, would like lower case.
SAZ: How were classes and properties chosen?
JK: description of logic - seemed straightforward.
(discussion of delivery unit)
(should we call it delivery unit or web resource?)
JL: we should make properties lower case
SAZ: At the FTF we decided that EARL can have multiple HTTP requests/responses that define what we're testing. Should not be a single resource.
JK: Glossary seems to be out of date with real usage.
SZ: A delivery unit is a series of requests/responses. This seems to be different from the glossary.
<JohannesK> thread on www-di about DUs starting with <>
SAZ: We should not confine our definition of delivery unit to a single HTTP request.
... Can use Dublin Core "has part".
... question - use namespaces to differentiate?
JL: We need a way to capture headers.
SAZ: We should remove the EARL HTTP request/response because they are just wrappers.
<scribe> ACTION: JK will research issue of request/response and delivery unit and send to list. [recorded in]
JK: description of namespace difficulties
SAZ: we need more discussion - bring up on next call
TRADE Dec 25 - 31, 2000

Steep rise in crude oil import

Import of crude oil increased by 118.10 per cent, thus doubling its share of the total import bill to 13 per cent during the first five months of 2000-01, according to Federal Bureau of Statistics (FBS) data for the month of November 2000. In the corresponding period of the last financial year, crude oil had accounted for only 6.97 per cent of the import bill. The country had spent $616.73 million in the current financial year on the import of 2,823,405 tons by the end of November, as against $282.77 million in the same period of last year. The steep rise in the share of crude oil is attributed to a 60.30 per cent increase in its import, volume-wise, as well as a 35.73 per cent increase in the per-ton rate. During the period under report, the total import bill amounted to $4.74 billion, 14.39 per cent more than in July-November 1999. An analysis of the FBS data shows that the petroleum group continues to race ahead of all other major groups of items. In dollar terms, imports in this group surged 62.74 per cent over the corresponding period of last year. Its import bill constituted more than one-third (33.98 per cent) of the total import bill, as compared to 23.88 per cent in July-November 1999. Conforming to the trend of the past several months, November 2000 saw a rise of 8.35 per cent over October 2000, and of 36.70 per cent over the corresponding month of 1999, in the quantity of crude oil imported by Pakistan. In terms of dollars, the import of crude oil went up by 71.03 per cent over November 1999. In November 2000, the rate of crude ($227.20 per ton) denoted only a slight rise over October 2000, but an increase of about $46 per ton when compared with November 1999.
Vision to globalize textile trade completed

Federal Minister for Commerce, Industries and Production Abdul Razak Dawood has said that the government has completed its vision for the globalization of textile trade and the removal of textile quotas from January 1, 2005. Presiding over a meeting regarding the new textile quota management policy for the coming years with the stakeholders, representatives of textile associations, and members of the Quota Supervisory Council, he said the quota management policy was one of the instruments to achieve the objectives of greater value addition and entry into new markets. The minister said that the government was also determined to check malpractice in trade through a comprehensive policy, and invited the free and frank views of participants for the finalization of the policy.

Tariff to be curtailed further

Tariff on the import of raw material will be further slashed to 30% in the next fiscal year, advisor to the finance ministry Ashfaq Hassan Khan said on Wednesday. He said that in order to make the country's industrial products competitive in international markets, the availability of raw material at reasonable prices was imperative. Realizing this fact, he maintained, the government had already reduced the tariff slab from 65% to 35%.

Carpet exports improve

The carpet industry has managed to regain the export volume of 1992, the year the late Iqbal Masih began his campaign against child labour. It has done so without eliminating child labour completely from the industry. The Iqbal Masih case brought about a decrease in Pakistani carpet exports. In 1992, the exports of the hand-knotted carpet industry stood at 229.6 million dollars. Owing to a frenzied media campaign against Pakistan on child labour, exports fell to a little over 171 million dollars in 1993 before finally slumping to 151.3 million dollars in 1994. However, during 1999-2000 Pakistan exported hand-knotted carpets worth 250 million dollars. In 1986, Iqbal Masih's father had sold him to a carpet-maker for Rs13,000.
For six years, Iqbal worked in the carpet factory of a local employer, who extracted 16 hours of hard work from him daily. During this labour, Iqbal's body stopped growing and he was threatened with dwarfism.

Govt asked to freeze wheat export

The federal government is freezing the ongoing process of wheat export on the recommendation of an inter-provincial Wheat Review Committee (WRC), which in its meeting held on Saturday concluded that crop production could be 25 per cent short this year. Earlier, the agriculture ministers of Sindh, Punjab, Balochistan and NWFP, during the WRC meeting, jointly asked the center to immediately stop further export of wheat, saying it could create problems for the provinces in meeting their annual food requirements next year.

PSM sales

The sales of Pakistan Steel during the Jan-Nov period this year were higher by 1.75 lakh tons in comparison to sales during the corresponding period in 1999, Lt-Col (retd) Afzal Khan, PS Chairman, said.

Wheat to Iran

The Trading Corporation of Pakistan (TCP) claimed on Monday that no price has been determined for selling 200,000 tons of wheat to Iran, as the terms and conditions are yet to be decided between the two sides.

Quota utilization

The US customs may impose an embargo on the import of textile goods after detecting that Pakistani exporters have already over-utilized several categories of the quota ceiling for the current year. The figures released by the US customs show that arrivals of many textile goods at the import stage are already higher than those being shown in the Export Promotion Bureau (EPB)'s provisional figures of textile quota utilization for the period January 1 to November 30, 2000.

Import under ATTA

Out of Rs1.52 billion of imports cleared under the Afghanistan Transit Trade Agreement, Rs199.1 million of the total consignments were of tea, ATTA figures for the July-November period of the financial year reveal.
HDF

There are two HDF formats, HDF4 and HDF5, which each have their own libraries and drivers. HDF4 is more common, but HDF5 is the next-generation format.

- hdf4 driver docs
- hdf5 driver docs

Building with HDF4

The NCSA HDF library can be downloaded from The NCSA HDF Home Page at the National Center for Supercomputing Applications. HDF 4.2r1 is generally the preferred version if working from source. The szip option is not widely used, and may be omitted for simplicity. If your OS distribution already contains a prebuilt HDF library, you can use the one from the distribution.

Open File Limits

Please note that the NCSA HDF library is compiled with several defaults which are defined in the hlimits.h file. For example, hlimits.h defines the maximum number of opened files:

# define MAX_FILE 32

If you need to open more HDF4 files simultaneously, you should change this value and rebuild the HDF4 library (there is no need to rebuild GDAL if it is already compiled with HDF4 support).

Incompatibility with NetCDF Libraries

The HDF4 libraries include an implementation of the netcdf API which can access HDF files. If building with HDF4 and NetCDF, it is necessary to build the HDF library with this disabled. If using Ubuntu/Debian dev packages, you must make sure that the libhdf4-alt-dev package is installed instead of the libhdf4-dev package. If building HDF4 manually, you have to add "--disable-netcdf" to "configure", and HDF4 will move its embedded NetCDF functions into a different private namespace to avoid name clashes.

./configure --disable-netcdf --disable-fortran

If building older versions of HDF4 (before HDF4.2r3), the macro HAVE_NETCDF needs to be defined instead. This can be accomplished by configuring the HDF4 libraries as follows:

export CFLAGS="-fPIC -DHAVE_NETCDF"
export CXXFLAGS="-fPIC -DHAVE_NETCDF"
export LIBS="-lm"
./configure --disable-fortran

The -fPIC and LIBS may not be necessary on all platforms.
Without this fix, either GDAL will crash intermittently when accessing netcdf files, or the build of GDAL will fail.

Additional linux workaround: if rebuilding HDF is not an option, you can also preload the netCDF library:

LD_PRELOAD=libnetcdf.so gdalinfo myfile.nc

Potential conflicts with internal libz

The HDF4 libraries depend on the libz library. If you build HDF4 from sources, you most likely use the libz library already installed in your system. Then, when you run the GDAL ./configure script using the option --with-libz=internal, requesting GDAL to use its internal version of libz, you may notice that HDF4 support is not enabled as you expected. This might signal a conflict between the internal libz and the version installed in your system against which the HDF libraries were linked. A more detailed description of the problem and its symptoms can be found in the summary of Ticket #1955.

Building with HDF5

- download from:

Building on Windows with MSVC

If your HDF5 source directory does not have an /include directory in its root, then you must do the following:

- in nmake.opt add a pointer to the directory containing the headers, such as HDF5_INCLUDE = $(HDF5_DIR)\src
- modify the EXTRAFLAGS parameter in gdal\frmts\hdf\makefile.vc to include this new definition, such as: EXTRAFLAGS = -I$(HDF5_INCLUDE) -DWIN32 -D_HDF5USEDLL_
- note that if you build against the static HDF5 lib (libhdf5.lib) you will have to use the "-D_HDF5USEDLL_" switch as above; but if you build against the dynamic HDF5 lib (hdf5.lib) you will have to use the "-DH5_BUILT_AS_DYNAMIC_LIB" switch instead
- you can test this by executing the following in gdal\frmts\hdf: nmake /f makefile.vc

Open Tickets

No results
In this blog, we will learn how we can read, display, and save video with OpenCV, using the video capture functionality in Python OpenCV. Previously we were working with OpenCV images; now we will be working with videos. Learn more about Python OpenCV from here.

What is a Video?

A video is just a collection of images that are rolled very fast (30 fps or 60 fps) on the screen, so we see a moving picture. The frames per second (fps) determine the smoothness of the video.

How to Capture Video in OpenCV Python?

We will answer this question in this blog. We have a webcam, and we want to access it to record video, or just to read a pre-recorded video in our Python program. But how do we do it? We can make use of Python OpenCV's cv2.VideoCapture() to read video frames directly from the webcam, and also to read a pre-recorded video. Let's see it in action!

Creating a Video Capture Instance

The first step to doing anything with video is to make an instance of the VideoCapture class present in the cv2 package.

import numpy as np
import cv2 as cv

cap = cv.VideoCapture(0)

The first two lines are the imports of NumPy and cv2. The instance is stored in the cap variable, which will be used later. The VideoCapture constructor takes one argument: the index of the camera to be used for recording. The default value is 0, which selects the default camera. If you have more than one camera and you want to use a specific one, provide your camera number as 1, 2, or 3 (the order in which the cameras were attached to the computer).

Check if the Camera is Opened

This check is done for error-handling purposes: it would not be good to start the recording process while the camera is not opened, since every read would simply fail. To avoid this, we check whether the camera was opened when we created the VideoCapture object.
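To make the frames-per-second idea concrete, here is a small sketch (plain Python, no OpenCV needed; the function names are mine, not part of any library) relating frame rate, frame count, and clip duration:

```python
# A video is just frames shown quickly: duration (s) x fps = frame count.
def frame_count(duration_seconds, fps):
    """Number of frames in a clip of the given length and frame rate."""
    return round(duration_seconds * fps)

def duration(n_frames, fps):
    """Length in seconds of a clip with n_frames frames at the given fps."""
    return n_frames / fps

# A 10-second clip at 30 fps holds 300 frames; the same clip at 60 fps, 600.
print(frame_count(10, 30))  # 300
print(frame_count(10, 60))  # 600
print(duration(900, 30))    # 30.0
```

This is also why a 60 fps video looks smoother than a 30 fps one: twice as many images are rolled past the eye in the same amount of time.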
print(cap.isOpened())

This will print True if the camera was opened, and False if it was not.

Open Camera in OpenCV Python

In the above section, we checked whether the camera is opened. What if it is not? We can open the camera using the open() method, passing the camera index:

cap.open(0)

To confirm whether the camera is opened, you can repeat the check above.

Reading Frames from Video in OpenCV

We have learned the nitty-gritty basics of capturing video using OpenCV; this will help us in reading frames of the video. Frames of a video can be read using the read() method. read() returns a tuple with two values: a status flag and the frame. The status flag is a boolean that tells us whether read() was able to get a frame, and the frame is the actual image from the video.

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("Can't retrieve frame")
        break

In the next blog, we will see how we can write the height and width of the video frame on each frame of the video. Learn more about video capture from the OpenCV official documentation.
https://hackthedeveloper.com/video-capture-opencv-python/
In this article, we'll present a couple of examples where we'll be using the int 0x2e instruction to interrupt the kernel and call an interrupt service routine. We'll also be using the sysenter instruction to do the same. The basic idea is to present both methods of transferring control from user mode to kernel mode with easy-to-use examples.

Let's first find the address where the IDT table is located in memory. We can do that by printing the value stored in the IDTR register. On the picture below we used "r idtr" to get the base address of the IDT, which is 0x8003f400. Then we dumped the memory at that address, which printed the interrupt descriptors.

Rather than dumping memory with the dd command, we can use the !idt command to display the whole IDT table in a more transparent way:

When dumping the memory with the dd command, two columns actually correspond to a single IDT entry on the image above (this is because each IDT entry is 8 bytes long). If we look at both of the pictures carefully, we can see that the same IDT entries are presented. Since we'll be using entry 0x2e, it's best to present it. Whenever the 0x2e software interrupt is triggered, the KiSystemService function in the kernel is called, which verifies that the right service number has been passed to it. KiSystemService is the gateway between user and kernel space.

The 0x2e Interrupt

At first, I thought I should call some ntdll library function, which would automatically interrupt the kernel in some way. But while experimenting with this approach, the first thing that bothered me was working my way through the system call function layers in order to get to the actual int instruction. This required several steps and jumps to get from the place where I called the function to the actual point of interest, the int instruction.
And then again, the sysenter instruction was called, not the int instruction, which was an additional problem. It was at that time that I decided I would just code my own assembly instructions to initiate certain system calls. Right after deciding that, I had to find the system call numbers to put in the eax register when performing a system call. You can find all the system call numbers for Windows NT/2000/XP/2003/Vista on the link here:. Basically, the table gives us the system call number we have to use on a particular system to invoke the needed system call; the names of the system calls that will actually be called are given in the first column. If we click on a system call, the line will expand and the function prototype will be shown, which is very useful when we need the prototype as soon as possible and have no time to lose. On the picture below, we can see that we're looking at the NtClose system call, which takes one parameter; we didn't present the system call numbers for every available operating system because the picture would be too large, so let's just look at the last three columns, which present the system call number for Windows XP SP0, SP1 and SP2 (different service packs). In all three cases, the system call number is 0x0019, which means the system call number didn't change between service packs (this is not guaranteed, so keep in mind that it can change). Okay: what does all of this mean? It means that if we store the number 0x19 in the register eax before executing the int 0x2e instruction, we'll essentially be calling the NtClose function in the kernel. Let's take a look at a simple example that we'll use to present a system call being made via the int 0x2e interrupt.
The source code of a simple C++ program can be seen below:

#include "stdafx.h"
#include <stdio.h>
#include <windows.h>
#include <Winternl.h>

int _tmain(int argc, _TCHAR* argv[]) {
    __asm {
        int 3
        mov eax, 19h
        int 2eh
    };

    getchar();
    return 0;
}

Note that the program doesn't actually do anything useful; we've only used the presented code to make it simple to view what happens once "int 0x2e" is executed. We're not particularly interested in what the program does. The first thing we need to do is compile and run the program, after which Windbg will catch the exception as follows:

The execution of the program is stopped at the "int 3" instruction. If we disassemble the instructions at that address, we can see the following:

If we step into the instructions now, the execution won't go into the "int 2e" instruction, because we're not dealing with a call instruction. The execution will go right to the "mov esi, esp" instruction without us being able to see the instructions in between. To be able to see those instructions as well, we need to set the right breakpoints, and for this we need a deep understanding of Windows internals. We need to be aware of the fact that whenever the "int 2e" instruction is executed, the KiSystemService function will be invoked. We can see that if we execute the "!idt -a" command in Windbg:

kd> !idt -a

2e: 8053d541 nt!KiSystemService

The KiSystemService function is located at the address 0x8053d541, so we can set the breakpoint there. On the picture below, we've set the breakpoint with the bp command and listed all currently set breakpoints with the bl command. Notice that only one breakpoint is set, and that it is exactly on the KiSystemService function. Then, we can use the t Windbg command to step through the instructions. At first, we'll be executing the "mov eax, 19h" instruction, which requests the KiSystemService function to call the NtClose function, as we've already determined.
Then the "int 2e" instruction is executed, but this time we're not simply stepping over it in one go, because we've set the appropriate breakpoints. Take a look at the picture below, where we've landed at the first instruction of the KiSystemService function. Also notice that we're in kernel mode, since we're executing instructions at an address higher than 0x80000000. Let's present the instructions that will be executed in the KiSystemService function:

If you look carefully, you can see that this is just initialization code and the value in the eax register isn't used anywhere. At the end of the code, there's a jump to the KiFastCallEntry function (at offset 0x8d). If we also disassemble the instructions at that address, we can see the following:

On the picture above, we can see the value in register eax being used, which means that the system call number is being inspected in some way. We've seen that when we invoke the 0x2e interrupt, the KiSystemService function is called. It uses the value in the register eax, which is passed from user mode to kernel mode, to determine which function to call in kernel mode. By using the 0x2e interrupt, we can successfully invoke system calls in kernel mode, which execute with kernel privileges.

The sysenter Instruction

Previously, we had to put the system call number into the eax register and invoke the "int 0x2e" interrupt to call a specific function in the kernel. With the sysenter instruction, we can invoke the same function in the kernel, just faster. Let's take a look at how it works, starting with the example we'll be using to examine the sysenter instruction internals.
The program is written in C++ and can be seen below:

#include "stdafx.h"
#include <stdio.h>
#include <windows.h>
#include <Winternl.h>

int _tmain(int argc, _TCHAR* argv[]) {
    HANDLE file;
    LPCWSTR filename = L"C:/temp.txt";

    /* create a file */
    file = CreateFile(filename, GENERIC_WRITE, 0, NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) {
        printf("File creation failed.\n");
    }
    else {
        printf("File created successfully.\n");
    }

    /* write some data to a file */
    char DataBuffer[] = "Some text being written to a file.";
    DWORD dwBytesToWrite = (DWORD)strlen(DataBuffer);
    DWORD dwBytesWritten = 0;
    BOOL bErrorFlag = FALSE;
    bErrorFlag = WriteFile(file, DataBuffer, dwBytesToWrite, &dwBytesWritten, NULL);
    if (bErrorFlag == FALSE) {
        printf("Writing to file failed.\n");
    }
    else {
        printf("Writing to file succeeded.\n");
    }

    /* close the file */
    CloseHandle(file);

    /* wait */
    getchar();

    return 0;
}

The program is very simple: all it does is create the file C:\temp.txt, if it doesn't already exist, and write some data to it. Then it closes the file and quits. If we compile and run the program, the following will be displayed in the console window:

There is also a new file C:\temp.txt created with the following contents:

We can see that the program above does exactly what we want it to do. Now let's add the "__asm{ int 3 };" code block at the beginning of the program, which will cause an interrupt to be generated and the program execution to pause. Actually, it's better if we add this instruction right before and after the CloseHandle function call, so we can start inspecting that function immediately. When we run the program, Windbg will catch it as presented on the picture below. We've also disassembled the code at the breakpoint address 0x004114d6. Notice the two "int 3" instructions, which surround the CloseHandle function call.
Let's also present the same instructions loaded into Ida:

When we enter the CloseHandle function, we'll be executing the following code:

At the end of the instructions above, the following instructions immediately follow:

Eventually the execution will lead to NtClose, where the file needs to be closed once we're done working with it. This code is presented on the picture below:

And at the call instruction on the picture above, we'll jump to the KiFastSystemCall function, as presented below:

Since the sysenter instruction is located at the address 0x7C90E512, we can set a breakpoint at that address as follows:

After that, we can use the g command to run the program until the breakpoint is hit. At that point, we can use the rdmsr command to display the values of the model-specific registers (MSRs). Here we must be aware of the fact that the 174, 175 and 176 MSRs are used to transfer control to kernel mode: 174 is the IA32_SYSENTER_CS MSR, 175 is the IA32_SYSENTER_ESP MSR, and 176 is the IA32_SYSENTER_EIP MSR. Let's dump the values of those registers, which can be seen on the picture below:

The 176 MSR specifies the linear address we'll jump to when executing the sysenter instruction. We can see that we'll jump to the address 8053d600, which is located in kernel mode. If we disassemble that address, we can determine that we're actually dealing with the KiFastCallEntry function, as can be seen below:

If we set a breakpoint on that location and execute the t command, we can see that the breakpoint is immediately hit. Here's the first difference between "int 0x2e" and the sysenter instruction: when using the "int 0x2e" interrupt, we jumped to offset 0x8d of the KiFastCallEntry function, while with sysenter we're jumping to the beginning of that function.
Conclusion

We can see that both the 0x2e interrupt and the sysenter instruction lead to the same point in kernel mode, which means that they are really used for the same thing, even though the actual procedure is a little different. In the next article we'll take a look at what happens in greater detail.
http://resources.infosecinstitute.com/the-sysenter-instruction-and-0x2e-interrupt/
Hi, I was trying to use a 4-site free fermion chain with PBC to do a benchmark. From the tight-binding model, we know that for a single fermion the GS energy is -2, and for 2 fermions it's also -2, because the next unoccupied level is at energy 0 for this 4-site problem. Firstly I tried directly using the "Fermion" site type provided at Everything is consistent. Then I tried defining my own site type; the only thing I changed was renaming "Fermion" to "CFermion" (copy, paste, and rename, that's all). It may sound weird, but the debugging process eventually led me to this trivial step. The one-fermion energy is still correct, but the two-fermion GS energy I got is -2.828..., which is even lower than the exact result. Here is the structure of my code:

using ITensors

"the copy paste part, but with the renaming"

let
    N = 4
    numbsweeps = 10
    sweeps = Sweeps(numbsweeps)
    maxdim!(sweeps,10,20,100)
    cutoff!(sweeps,1e-10)
    sites = siteinds("CFermion",N)
    # two-particle
    states = ["Occ","Occ","Emp","Emp"]
    psi0 = productMPS(sites,states)
    ampo = AutoMPO()
    for j = 1:N-1
        ampo += "C",j,"Cdag",j+1
        ampo += "C",j+1,"Cdag",j
    end
    ampo += "C",N,"Cdag",1
    ampo += "C",1,"Cdag",N
    H = MPO(ampo,sites)
    energy,psi = dmrg(H,psi0,sweeps)
    return
end

Thanks a lot for your help.

-Mason

Hi Mason,

Thanks for the question. I tried your code and made a "CFermion" site type, but then I consistently got the answer -2.0, which I take it is the correct answer. My steps to make the "CFermion" site type were:

1. copy-paste the entire src/physics/sitetypes/fermion.jl file into a file cfermion.jl
2. replace all instances of the string "Fermion" with the string "CFermion"
3. put the following at the top to make the code run outside the ITensors module:

using ITensors
import ITensors: space, op, op!, hasfermionstring, ValName, OpName, @ValName_str, @OpName_str, @SiteType_str, state

(the better practice I think is to put ITensors. in front of each method, so like ITensors.space, ITensors.op etc.
when it's outside of the module, so my approach above is sort of a lazy thing)

Then I did include("cfermion.jl") in your code, and it ran and gave the answer -2 each time. Hope that helps you to see what may be different about your code. Also I am using the latest "main" branch of the library, so at least version 0.2.3 plus a few recent other changes (which shouldn't affect this code, ideally).

Best,
Miles
http://itensor.org/support/3237/strange-inconsistency-in-free-fermion-benchmark
Appending Data to Spreadsheets

00:00 Now that you feel comfortable working with openpyxl syntax for reading Excel spreadsheets, it's time to learn how to start creating them. Before you try and create a complex spreadsheet, however, we'll start by appending to an existing spreadsheet.

00:13 Go ahead and make a new script. I'm going to call mine hello_append.py, since this will be a very short example.

00:21 So, like before, from openpyxl import load_workbook,

00:27 then grab that workbook with load_workbook() and pass in the filename. This one was "hello_world.xlsx". Then, the sheet will be the active worksheet. So workbook.active.

00:41 So, just like before, you can say sheet and—this time—put some data into "C1", and just put something in here, like "writing!".

00:49 Now, since this hasn't hit the worksheet yet, you need to save that, so say workbook.save(), pass in a filename and—to make sure nothing gets overwritten—say "hello_world_append.xlsx". Save this. So, looking at this, you're going to load the workbook and the sheet, and then now, you're going to insert a value into C1, and then you're going to save that workbook with a new name.

01:15 Go ahead and run this. python hello_append.py.

01:20 No errors and a new spreadsheet opened up. Okay. And look at that! You still see hello world!, and now, C1 has writing!. This might not seem like much, but it's a big step towards creating a new spreadsheet.

01:36 Let's go back to that first script and look into what happened a little deeper. So, I'm going to go back to hello_world.py, or actually, hello_openpyxl.py.

01:45 So, the big lines here are this third line where you made the new Workbook—and note that this Workbook is not actually an Excel workbook.

01:53 It's just openpyxl's model of a Workbook, that represents a spreadsheet. In these lines, you actually assign values to the Cell object.

02:02 Finally, this last line here is the most important. Without this, all of this code would stay within Python.
When you save the workbook, that's what actually creates the Excel spreadsheet, using the information that was stored in the Workbook model.

02:16 If this seemed a bit confusing, let's go over to the interpreter and see if we can make it a little clearer. I'm going to open up bpython, bring this up a bit and—like before—do the imports. So, from openpyxl import load_workbook.

02:30 Set the workbook up and set a sample: we'll say "hello_openpyxl.py".

02:37 And, of course, I grabbed the wrong sheet—or, the Python file. So, grab the "hello_world.xlsx" Excel spreadsheet.

02:46 Now, sheet will be a workbook.active. Something that might help here is to define a new function called print_rows().

02:54 And what this will do is for row in sheet.iter_rows()—and you'll want to go through the whole sheet, grab those values, set values_only equal to True, and we'll just print(row).

03:05 So now, if you call this

03:09 and make sure the parentheses are closed, you should see that first row prints out from the original spreadsheet. So, from that first script, you already know how you can access and set values to those cells.

03:20 So, for "A1", try setting this equal to "value", and now when you take a look at "A1"—

03:27 and make sure you grab the .value off of it—you'll see that it now says 'value'. Another way you can work with cells is by assigning them to a variable.

03:35 So you can say cell = sheet at "A1", and now when you take a look at cell.value, you should see 'value'. From here, you can change the values, so say, cell.value = "hey", and now if you take a look at it, you'll see, 'hey'.

03:52 So now, take a look at print_rows() and you should see 'hey', instead of 'hello'. Now, something interesting here—say, something like sheet at "B10" is something like "test". Now when you call print_rows(), something interesting is going to happen.

04:08 You're going to see all these blank rows show up now, until you get to row 10 with 'test' in column B.
openpyxl is only going to load into memory the cells of the workbook that it thinks are useful to you. So before any data was present in row 10, openpyxl only thought you needed that first row.

04:26 Now that you have something there, openpyxl needs placeholders for all of these extra cells. All right! Now you've practiced appending to an existing spreadsheet and taken a bit of a deeper dive into how openpyxl handles Excel spreadsheets.

04:42 In the next video, you're going to see how to manage the rows and columns of an Excel spreadsheet using openpyxl.
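The placeholder behavior described above can be verified without touching a file on disk. A sketch using an in-memory Workbook (assuming openpyxl is installed; nothing is written to disk):

```python
from openpyxl import Workbook

# Build a workbook entirely in memory
workbook = Workbook()
sheet = workbook.active

sheet["A1"] = "hey"
sheet["B10"] = "test"

# openpyxl now tracks ten rows, padding the untouched cells with None
rows = list(sheet.iter_rows(values_only=True))
print(len(rows))   # 10
print(rows[0][0])  # hey
print(rows[9][1])  # test
```

The None entries are exactly the "placeholders for all of these extra cells" mentioned in the transcript.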
https://realpython.com/lessons/appending-to-spreadsheets/
The Code Style Guide

For end-users, the most important parts of the software are functionality and UI/UX. But for developers, there is one more important aspect - code style. While ugly code can do everything that it has to do, developing it further may be a difficult task, especially if the developer didn't write the original code. Which one of the following do you prefer to read and work with?

MyPath = '/file.txt'
from pathlib import *
import os.path,sys

def check(p):
    """Uses os.path.exist
    """
    return os.path.exists(p)

def getF( p):
    """Not sure what this do, this just worked.
    """
    return Path(p )

result=[check(MyPath),getF(MyPath)]

or

import os.path
from pathlib import Path

FILE_PATH = '/file.txt'


def check_file_exists(path: str) -> bool:
    """Checks does file exists in path. Uses os.path.exists."""
    return os.path.exists(path)


def get_path_object(path: str) -> Path:
    """
    Returns Path object of the path provided in arguments.

    This is here for backward compatibility, will be removed in the future.
    """
    return Path(path)


result = [
    check_file_exists(FILE_PATH),
    get_path_object(FILE_PATH),
]

The second is definitely easier to read and understand. These scripts are small, and even with the first code snippet you can understand what the code does pretty quickly, but what if the project has thousands and thousands of files in a really complex folder structure? Do you want to work with code that looks like the first example? You can sometimes save hours if you write beautiful code that follows the style guidelines. The most important code style document for Python is PEP 8. This Python Enhancement Proposal lays out the majority of all Python code style guidelines.

Linters

But everyone makes mistakes, and there are so many style rules that it can be really difficult to remember and always follow them all. Luckily, we have amazing tools that help us - linters.
While there are many linters, we'd like code jam participants to use flake8. Flake8 points out the rules you broke in your code so you can fix them.

Guidelines

Basics

For indentation, you should use 4 spaces. Using tabs is not suggested, but if you do, you can't mix spaces and tabs. PEP 8 defines a maximum line length of 79 characters; however, we are not so strict - teams are welcome to choose a maximum line length between 79 and 119 characters. 2 blank lines should be left before functions and classes. Single blank lines are used to split sections and make logical breaks.

Naming

Module, file, function, and variable names (except type variables) should be lowercase and use underscores.

# File: my_module.py/mymodule.py

def my_function():
    my_variable = "value"

Class and type variable names should use the PascalCase style.

from typing import List

class MyClass:
    pass

ListOfMyClass = List[MyClass]

Constant names should use the SCREAMING_SNAKE_CASE style.

MY_CONSTANT = 1

You should avoid single-character names, as these might be confusing. But if you do use them, you should avoid characters that may look like zero or one in some fonts: "O" (uppercase o), "l" (lowercase L), and "I" (uppercase i).

Operators

If you have a chain of mathematical operations that you split into multiple lines, you should put the operator at the beginning of the line and not at the end of the line.

# No
result = (
    1 +
    2 *
    3
)

# Yes
result = (
    1
    + 2
    * 3
)

If you ever check whether something is None, you should use is and is not instead of the == operator.

# No
if variable == None:
    print("Variable is None")

# Yes
if variable is None:
    print("Variable is None")

You should prefer using <item one> is not <item two> over not <item one> is <item two>. Using the latter makes it harder to understand what the expression is trying to do.
# No
if not variable is None:
    print("Variable is not None")

# Yes - it is much easier to read and understand this than the previous
if variable is not None:
    print("Variable is not None")

Imports

Imports should be at the top of the file; the only things that should come before them are module comments and docstrings. You shouldn't import multiple modules in one line; give each module import its own line instead.

# No
import pathlib, os

# Yes
import os
import pathlib

Wildcard imports should be avoided in most cases. They clutter the namespace and make it less clear where functions or classes are coming from.

# No
from pathlib import *

# Yes
from pathlib import Path

You should use the isort imports order specification, which means:

- Group by type: the order of import types should be: __future__ imports, standard library imports, third-party library imports, and finally project imports.
- Group by import method: inside each group, first should come imports in the format import <package>, and after them from <package> import <items>.
- Order imports alphabetically: inside each import method group, imports should be ordered by package names.
- Order individual import items by type and alphabetically: in the from <package> import <items> format, <items> should be ordered alphabetically, starting with bare module imports.

Comments

Comments are really important because they help everyone understand what code does. In general, comments should explain why you are doing something if it's not obvious. You should aim to write code that makes it obvious what it is doing, and you can use comments to explain why and provide some context. Keep in mind that just as important as having comments is making sure they stay up to date. Out-of-date and incorrect comments confuse readers of your code (including future you). Comment content should start with a capital letter and be a full sentence(s). There are three types of comments: block comments, inline comments, and docstrings.
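Returning to the is None rule above: one reason `is` is preferred beyond style is that `==` can be overridden by a class, while `is` always checks identity. A small illustration (the AlwaysEqual class is invented for this example):

```python
class AlwaysEqual:
    """Toy class whose __eq__ claims equality with everything."""

    def __eq__(self, other):
        return True


obj = AlwaysEqual()

print(obj == None)  # True - __eq__ hijacks the comparison (flake8 flags this, too)
print(obj is None)  # False - identity cannot be overridden
```

So `variable == None` can silently lie, while `variable is None` is always a reliable identity check.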
Block comments

Probably the most common comment type. They should be indented to the same level as the code they describe. Each line in a block comment has to start with # followed by a single space. To separate paragraphs, use one line containing only #.

if variable is None or variable == 1:
    # If variable is None, something went wrong previously.
    #
    # Here starts a new important paragraph.

Inline comments

You should prefer block comments over inline comments and use inline comments only where really necessary. Never use inline comments to explain obvious things like what a line does. If you want to use an inline comment on a variable, think first; maybe you can use a better variable name instead. There should be at least two spaces between the code and the start of an inline comment. Just like block comments, inline comments also have to start with # followed by a single space.

# Do not use inline comments to explain things
# that the reader can understand even without the inline comment.
my_variable = "Value!"  # Assign value to my_variable

# Here a better variable name can be used, as shown in the second line.
x = "Walmart"  # Shop name
shop_name = "Walmart"

# Sometimes, if something is not obvious, then inline comments are useful.
# Example is from PEP 8.
x = x + 1  # Compensate for border

Docstrings

Last, but not least important comment type is the docstring, which is short for "documentation string". Docstring rules haven't been defined by PEP 8, but by PEP 257 instead. Docstrings should start and end with three quotes ("""). There are two types of docstrings: one-line docstrings and multiline docstrings. One-line docstrings have to start and end on the same line, while multiline docstrings start and end on different lines. A multiline docstring has two parts: a summary line and a longer description, which are separated by one empty line. The multiline docstring start and end quotes should be on different lines than the content.

# This is a one-line docstring.
"""This is a one-line module docstring."""

# This is a multiline docstring.
def my_function():
    """
    This is the summary line.

    This is the description.
    """

Too much for you? Do all these style rules make your head explode? We have something for you! We have a song! We have The PEP 8 Song (featuring lemonsaurus)! Great way to get started with writing beautiful code.
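A footnote to the docstring rules above: docstrings are more than comments, since Python stores them on the object at runtime, which is what help() reads. A quick sketch:

```python
def my_function():
    """
    This is the summary line.

    This is the description.
    """


# The docstring is attached to the function object itself
print(my_function.__doc__)

# help() formats and prints the same docstring
help(my_function)
```

This is another reason docstrings follow PEP 257 rather than ordinary comment rules: tools like help(), IDEs, and documentation generators read them programmatically.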
https://pythondiscord.com/events/code-jams/code-style-guide/
Python has functions like most other languages, but it does not have separate header files like C++ or interface/implementation sections like Pascal. When you need a function, just declare it, like this:

def buildConnectionString(params):

Note that the keyword def starts the function declaration, followed by the function name, followed by the arguments in parentheses. Multiple arguments (not shown here) are separated with commas. Also note that the function doesn't define a return datatype. Python functions do not specify the datatype of their return value; they don't even specify whether or not they return a value. In fact, every Python function returns a value; if the function ever executes a return statement, it will return that value, otherwise it will return None, the Python null value. The argument, params, doesn't specify a datatype. In Python, variables are never explicitly typed. Python figures out what type a variable is and keeps track of it internally. An erudite reader sent me an explanation of how Python compares to other programming languages in this respect.
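The point that every function returns a value can be checked directly: a function without a return statement gives back None. The function names below are just for illustration:

```python
def doubled(x):
    return x * 2


def no_return(x):
    x * 2  # computed, but never returned


print(doubled(3))    # 6
print(no_return(3))  # None - no return statement, so Python returns None
```

This is also why forgetting a return statement is a common source of silent bugs: the call succeeds, but every result is None.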
http://docs.activestate.com/activepython/3.6/dip/getting_to_know_python/declaring_functions.html
Wal-Mart Stores, Inc. (Symbol: WMT). So this week we highlight one interesting put contract, and one interesting call contract, from the January 2019 expiration for WMT. The put contract our YieldBoost algorithm identified as particularly interesting is at the $42.50 strike, which has a bid at the time of this writing of 80 cents. Collecting that bid as the premium represents a 1.9% return against the $42.50 commitment, or a 1% annualized rate of return (at Stock Options Channel we call this the YieldBoost). Turning to the other side of the option chain, we highlight one call contract of particular interest for the January 2019 expiration, for shareholders of Wal-Mart Stores, Inc. (Symbol: WMT) looking to boost their income beyond the stock's 2.9% annualized dividend yield. Selling the covered call at the $80 strike and collecting the premium based on the $2.18 bid annualizes to an additional 1.7% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost), for a total of 4.6% annualized rate in the scenario where the stock is not called away. Any upside above $80 would be lost if the stock rises there and is called away, but WMT shares would have to advance 14.2% from current levels for that to occur, meaning that in the scenario where the stock is called, the shareholder has earned a 17.4% return from this trading level, in addition to any dividends collected before the stock was called.
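The quoted YieldBoost figures follow from simple arithmetic: premium divided by committed capital, scaled to a yearly rate. A sketch of that calculation (the roughly 693 days to the January 2019 expiration is inferred to make the article's 1.9% and 1% figures line up; it is not stated in the article):

```python
def simple_return(premium, commitment):
    """Premium collected as a fraction of the capital committed."""
    return premium / commitment


def annualized(premium, commitment, days_to_expiration):
    """Scale the simple return to a 365-day rate."""
    return simple_return(premium, commitment) * 365 / days_to_expiration


# $0.80 premium against the $42.50 put commitment
print(round(simple_return(0.80, 42.50) * 100, 1))    # 1.9 (%)

# Assumed ~693 days until the January 2019 expiration
print(round(annualized(0.80, 42.50, 693) * 100, 1))  # 1.0 (%)
```

With those inputs the numbers reproduce the article's 1.9% simple return and roughly 1% annualized rate.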
http://www.nasdaq.com/article/interesting-january-2019-stock-options-for-wmt-cm763023
Arduino. Download the code:

MultipleServos.pde: This is the Arduino sketch. Copy and paste this code into your Arduino IDE software and upload it to the board.

servo.py: This is the Python module which talks directly to the Arduino sketch "MultipleServos." This script requires the pyserial module, available from Sourceforge. Save this script on your PC wherever you like, just be sure to name it "servo.py".

Customize the code:

Depending on your computer system and Arduino hardware setup, you may need to make a few modifications to the code.

Arduino: In the "MultipleServos" sketch, take note of the following three variables and make adjustments as necessary for your setup. See "Arduino Serial Servo Control" for more details regarding the minPulse and maxPulse variables.

int pinArray[4] = {2, 3, 4, 5};  // digital pins for the servos
int minPulse = 600;              // minimum servo position
int maxPulse = 2400;             // maximum servo position

Python: In the "servo.py" script, you'll most likely need to change the value of the usbport variable, which tells Python how to find your Arduino (on Windows, it'll be something like 'COM5'; on a Mac, '/dev/tty.usbserial-xxxxx'; on Linux, '/dev/ttyUSB0'). Try running ls /dev/tty* from a Mac or Linux terminal for a list of available ports. [ToDo: Modify the script to make this step unnecessary.]

Test the code:

Once your hardware is set up and the software is installed, you can test the system's basic functionality from the Python interactive interpreter, like so:

~/path/to/servo.py$ python
>>> import servo
>>> servo.move(2,150)

The servo.move() method takes two arguments, both integers. The first is the servo number you wish to move, 1-4.

- multijoystick.py: Allows joystick control of the servos, with each joystick axis controlling a single servo. This code could be the basis for a Wi-Fi RC vehicle of some kind.
- servorandom.py: The final servo sequence seen in the video, with individual servos moving to random positions and then waving "goodbye" in unison.

With any luck, you should now have everything up and running just like in the video!

How it works: In a command like servo.move(4, 90), the '4' means "Servo #4" and the '90' means "90 degrees." Tom Igoe's article, "Interpreting Serial Data," contains an excellent discussion of some of the problems involved in serial communication, and lists several issues that need to be addressed in every project, namely:

- How many bytes am I sending? Am I receiving the same number?
- Did I get the bytes in the right order?
- Are my bytes part of a larger variable?

Note that sending the characters '9' and '0' is not the same as sending the value 90; each character travels as its own ASCII code (the letter 'A', for instance, is '65'). We won't get too deep into this concept except to say that the implementation here sends the raw byte values '4' and '90', represented in Python as chr(4) and chr(90), and interpreted by the Arduino sketch as, once again, simply '4' and '90'.

Each command is sent as a small packet in which a header byte announces a new command, then the servo number, then the position — so the '4' is "Servo #4":

- Here comes a new servo command.
- Servo number to move.
- Position to move it to.

/** MultipleServos.pde (bare bones) **/ void setup() { // open serial connection ... } ... servoPosition = userInput[1]; // packet check if (servoPosition == 255) { servo = 255; }

If Arduino gets a complete packet with header, servo, and angle values, it calculates the correct pulseWidth for the commanded servo angle, and assigns that value to the appropriate servo. If the value of servo is not between 1 and 4, the loop exits without assigning any new values.

// compute pulseWidth from servoPosition
pulseWidth = minPulse + (servoPosition * (pulseRange/180));
// stop servo pulse at min and max
if (pulseWidth > maxPulse) { pulseWidth = maxPulse; }
if (pulseWidth < minPulse) { pulseWidth = minPulse; }
// assign new pulsewidth to appropriate servo
switch (servo) {
case 1: servo1[1] = pulseWidth; break;
case 2: servo2[1] = pulseWidth; break;
case 3: servo3[1] = pulseWidth; break;
case 4: servo4[1] = pulseWidth; break;
}
}
}

Finally, once each loop, whether it receives any serial data or not, Arduino checks to see if the servos need a pulse.
To hold their current positions, RC servos expect a pulse at 50Hz (every 20ms), so Arduino keeps track of the lastPulse. If the timer is up, all servos get a pulse.

// pulse each servo
if (millis() - lastPulse >= refreshTime) {
pulse(servo1[0], servo1[1]);
pulse(servo2[0], servo2[1]);
pulse(servo3[0], servo3[1]);
pulse(servo4[0], servo4[1]);
// save the time of the last pulse
lastPulse = millis();
}
}

// create the pulse
void pulse(int pin, int puls) {
digitalWrite(pin, HIGH); // start the pulse
delayMicroseconds(puls); // pulse width
digitalWrite(pin, LOW); // stop the pulse
}

- Tom Igoe, Making Things Talk: Practical Methods for Connecting Physical Objects
- Tom Igoe, "Serial Communication"
- Tom Igoe, "Interpreting Serial Data"
- Society of Robots, "Actuators and Servos"
- ITP Physical Computing, "Servo Lab"
- ITP Physical Computing, "Serial Lab"

Reader Comments

1. Allen Riddell | April 27th, 2008 at 12:07 pm This is really excellent. Thank you! p.s. any tips on how to extend servo wires? I'm finding that the wire is really thin and frays easily. Is there a name for that black terminal? Or does one need some sort of wire crimp? 2. Brian | April 27th, 2008 at 7:47 pm Hey Allen, thanks for the props! The best solution I've found so far is the servo wire extensions you get at your local hobby store. They come in several different lengths, although it looks like there might be a practical limit of a few feet. For longer wire runs or applications other than radio control, I'd probably chop the connectors off a couple of the short extensions and splice them to something beefier. There could be signal degradation and/or interference with the pulses over a longer run; you'll probably have to experiment a bit with setup and voltages if you're going beyond three feet or so. 3. John | May 12th, 2008 at 5:41 am Gidday, this looks great. I am wanting to make a motion platform for a flight simulator. Am looking at using 12V motors as servos.
will this system be able to be beefed up to run these, or will it do the job as is? My ? may sound silly, but be patient with me as I'm new to this. 4. Brian | May 12th, 2008 at 9:09 am Hey John, you could certainly use Arduino and a PC to control your project, but the hardware setup would be substantially different, primarily due to your need to isolate the 12V devices from the Arduino's 5V circuit. Your code would probably need to be different as well, although the modular theory presented here could still apply. Try posting your question over on ladyada's forum. You're sure to get a wealth of helpful responses over there. 5. Keith Chester | May 12th, 2008 at 10:39 am I've had to do a similar program that I will be posting about soon on my project blog. When you need to make sure information goes through but don't want to try and synchronize the serial port connection, send data both ways through arrays. Certain spots in the arrays are reserved for data for a certain spot. If you continually resend the array, then it will catch dropped bits if you reserve the first and last spots for a certain value. If the array does not begin or end with the right values, it ignores those values. If it does, it just takes the values of the arrays and applies them to your outputs. This works for inputs as well. 6. drewp | May 19th, 2008 at 3:16 am Hi, I'm doing similar stuff, but I'm controlling ShiftBrites instead. Here's my serial port setup code, which tries multiple port devices until it finds one that works: getserial.py 7. loonquawl | June 8th, 2008 at 7:43 am Hi. Great tutorial! It worked right from the start, and left me happily tinkering with the code, without hassle. I used some servos i had lying around, old ones and thus still analog. Now i'd like to use some stronger and faster ones, and i had my eyes on digital servos. From the vantage of control this should not make a difference, the whole 'digital' thing about digital servos is kinda overstated anyway, but those servos will be using a lot more juice (2 amp peak 7 per servo) – so i'd like to hook them to a separate current source, not simply give a wall wart to my diecimila, but running the power/ground cables of the servo directly off a wall wart – after all this preambling: I don't have much clue of electronics -> Is it possible to have the power run directly to the servos and have a branch of that supply run to the diecimila so the brunt of the current will be caught by the wall wart, but the control is still on the same level? 8. Brian | June 8th, 2008 at 7:18 pm loonquawl: I'm glad everything worked for you out of the box; thanks for the feedback! There are undoubtedly several different options for keeping the power separate. I'd probably just power the Arduino from USB, then use the wall wart to power the servos. I'm not sure about power requirements for digital servos, since I've never used them, but you can get a nice little 5V/3.3V Breadboard Power Supply kit from SparkFun that will run off the same 9V wall wart that works on the Diecimila. You might also want to take a look at Adafruit Industries' Arduino Motor Shield kit as well. At the very least, you'll get some ideas for circuits! 9. Popcorn | August 27th, 2008 at 11:35 am Hi John, as Brian says you will fry the Arduino if you even try to run a modest DC motor from it directly. As an experiment I tried to isolate some commercial hobby servos from my Arduino by using a Darlington Array IC (ULN2003). It did not work, for me at least. The array seems to monkey with the signal in a way that my non-electronics brain does not fathom. (I expect I will be enlightened) Can I suggest it might be better to run stepper motors, rather than servos for your flight simulator? There are many circuits that you could google to drive even power hungry stepper motors.
I'm guessing your motors are to move scenery, or perhaps even the 'pilot' so they would have to be reasonably powerful. Good luck on your project and please share your experiences with the community. Pop 10. Joe | October 2nd, 2008 at 3:40 am Hi John, your tutorial provides a great help for me, but I can not finish "multijoystick". When I key in "import servo" and "import pygame", and move my "joystick", my servo motors don't move like your video, could you tell me what's wrong? And could you teach me the other way to control a number of motors at the same time just like your video? 11. Freeman | October 30th, 2008 at 3:54 pm Hi, this is gold. I am in the process of building a robotic arm controlled by arduino, but as i am new to programming, i am wondering if there are graphical interface add-ons for controlling 6+ servos. (especially ones already written for robotic arms) Thanks in advance Freeman 12. Brian | October 30th, 2008 at 8:42 pm @Freeman: Thanks for stopping by. I haven't seen a GUI for controlling servos out in the wild, but that doesn't mean there isn't one. One of the goals I had in mind when creating servo.py was to allow enough flexibility to add a WxPython or TKinter GUI on top of it. Python is perfect for that kind of thing. Let me know what you come up with! 13. Freeman | November 4th, 2008 at 3:37 pm hi again, i have purchased my hardware for the robotic arm and am currently trying to run this code, but i can not seem to import the servo.py… in the command line interface it says no module was found. can you point me in the right direction? im running python 2.6 thanks again 14. Brian | November 4th, 2008 at 3:49 pm @Freeman: In order for import servo to work, the file servo.py must be in the same directory as the program calling it. So, if you're calling it from the interactive interpreter, you must run the python command (to launch the interpreter) from the same directory in which servo.py lives.
If you're having trouble with the import serial command, it just means you need to install the pyserial module (see text above for details). Hope that helps. 15. Freeman | November 4th, 2008 at 4:07 pm thank you, that was the problem, and that i had 2.6 instead of 2.5.2 running thanks again, this is a great start for me 16. Freeman | November 4th, 2008 at 8:17 pm I just want to say thank you again. I have edited your multijoystick code to run five servos with a game pad and it's working without a hitch. I could not believe that I can get so much done on my first day with the arduino. Thanks again Freeman 17. Chad | November 11th, 2008 at 9:49 pm Very nice indeed!!! This article has inspired me to try the same thing with my next bot. An arduino controlled rig from a remote laptop(linux of course) via bluetooth. I figure that bluetooth is the best option to send packets. Basically a rover similar to the SRV-1 by surveyor but home-brew. Wish list: -Full joystick control of a skid-steer tank-type chassis.(constant rotation servo control required) -Joystick hat button to control the pan and tilt of camera (most likely a cheapie until I feel the need for machine vision capabilities) -Additional joystick buttons for whatever I deem geek enough. Search lights. Blue led strobes, flares(JK) -All run through a python to arduino interface. -The icing on the cake would be an WxPython gui for graphical representation of current pin states. Eg search lights on. Yes I could see the lights are on but this is for geek cred ;) -While I'm day-dreaming it would be nice to get sensor feedback displayed in a gui or as an overlay on the video feed ala HUD. Items such remaining battery power represented in a graphical or numerical format, compass readings, etc. So… any progress on the integration of a WxPython gui with the arduino? :) 18. Brian | November 12th, 2008 at 10:08 am @Chad: I love it! Your surveyor bot sounds like one of my over-the-top harebrained schemes!
;) Instead of Bluetooth, you should check out XBee/ZigBee modules. The range is better than Bluetooth, and they interface easily with Arduino. (Oh, and they’re relatively cheap too.) You can get the modules from Sparkfun, Adafruit and others, along with XBee shields for Arduino. Have fun and keep me posted! 19. Mitchell | December 8th, 2008 at 2:41 pm Could this be used with analog / digital inputs such as GPS/Compass and micro switches? 20. Brian | December 9th, 2008 at 9:11 am @Mitchell: Absolutely! Check out Ladyada’s GPS datalogger shield for Arduino and her great Arduino tutorials for more info. 21. Silas Baronda | December 29th, 2008 at 4:04 am A lot of your source code can’t be found. servo.py and multijoystick.py in particular. 22. Brian | January 1st, 2009 at 10:04 pm @Silas: Sorry about that and thanks a bunch for the feedback! I recently switched the site over to a different server and apparently I didn’t have all the bugs worked out. You should be able to grab the code now. Let me know if you still have problems. 23. Adam | January 6th, 2009 at 9:02 pm thanks!! this is great. I used the arduino code, but instead of python i used visual basic 2005 to make a form control. After i got vb to send ascii characters it worked great!!! 24. Miguel | January 14th, 2009 at 11:54 am First of all allow me to congratulate you on this excellent work of yours. It’s an excellent tutorial on how to handle servos with Arduino. I have only one question. Can you please explain how can we send ascii equivalents to Arduino, ranging from 0 to 255, when the ascii table of Arduino only goes from 0 to 127? 25. Brian | January 15th, 2009 at 1:07 pm @Miguel: Thanks for your comments. The short answer to your question is, “No, I can’t explain it very well.” The long answer is that although ASCII is a 128-character code, it is a deprecated subset of Unicode, which supports 256 characters. 
Since the Python chr() function supports 256 characters, and since Arduino's serial library accepts these values, it works for our purposes. I specifically didn't get into the actual ASCII value conversions in this tutorial, since only the decimal equivalents were important to this application. My brain is exploding just thinking about it right now, so that's as deep as I'll get. Check out the Wikipedia page on ASCII if you really want the gory details. I don't necessarily like the way serial communication works, but I tried to explain how to make it work for the purposes of this application. Tom Igoe's article does a much better job of explaining it than I do. 26. daniel | January 31st, 2009 at 1:04 pm Excellent article to get started with servos. I really appreciated the post. I've been able to follow the tutorial, but when i move one servo the other one kind of shakes a little. I'm using Tower Hobbies TS-53 standard servos, and arduino diecimilanove. Any thoughts why this could happen?? Thanks in advance! 27. Brian | January 31st, 2009 at 1:41 pm @Daniel: Thanks for stopping by! I've encountered this phenomenon as well, and it can be difficult to troubleshoot. Basically what's happening (I think) is that the servo is "confused" by pulses it is either receiving or NOT receiving from the Arduino. The servos expect a pulse every 20ms or so, or they will start to "drift." You might want to experiment with the refreshTime variable to see if this changes anything. You might also try tweaking the minPulse and maxPulse variables. Start with just one servo and send it commands for full left and full right (0 and 180 degrees). The servo should not "buzz" at either of these settings. If it does, you probably need to shorten the pulse width. Sometimes operating multiple servos with rapid command inputs will cause uncommanded servos to buzz. I'm not sure if this is because Arduino is getting mixed signals or what.
You'll notice in the servorandom.py script that I've included the variable t = 0.23 #sleep time. As I recall, this was to space out the servo commands so they didn't all pile up in the buffer. I think if you shorten that time, you'll start to see servo buzz. Remember, your PC can send data a lot faster than a servo can move. You have to give it some time to do its thing. Finally, if you're using a joystick or something similar as an input device, it's hard to avoid some uncommanded "jitter." The joystick is constantly sending position data to the Arduino — even when it's centered — and I think this tends to overload the buffer. If you look closely at the video above, you'll notice that servo #1 (far left) tends to vibrate even when its axis is not being commanded by the joystick. I have not found a solution to that particular problem — in fact, from a vehicle-control standpoint, I think I basically wrote it off as a non-issue. Give those suggestions a try and report back if you get time. I'd like to hear how things turn out! 28. daniel | February 1st, 2009 at 1:57 pm Brian, Thank you very much for your reply, I will try adjusting those things, and try them with the servos in my plane to see if that's really an issue or not. Thanks a lot! 29. Jim | February 4th, 2009 at 7:06 am Hi Brian. Thanks for the excellent tutorial. In the comment (#27) you make above about joysticks and the jitter issue. This is often combated using a deadzone, where anything below 1% or 2% of the stick's motion is clamped to zero. In the tutorial you suggest that this method works well if you want to send 8-bit data, (i.e. 0-255). Can you describe how you would go about sending more? Perhaps a 10-bit number or even a float (for GPS NMEA data for instance?) Over the next few days i'm going to try and recreate your servo.py lib in processing. I'll let you know how i get on and send you a link when it's done. cheers Jim 30. Brian | February 4th, 2009 at 8:33 am Hey Jim, thanks for the tip.
I’ll try it out. I haven’t messed around with GPS at all, but it’s next on my list, either with Arduino or with something like a TinyTrak and the APRS network. Limor Fried (ladyada) has built a cool-looking GPS shield for Arduino that you might want to check out. I think it’s mostly just a datalogger, and from a quick glance at your website, it looks like you might want to do vehicle control. There’s a bunch of code for parsing NMEA sentences that’s blowing my mind just to look at right now. She’s also got a great forum over there. Limor’s actually an MIT EE grad, so if anyone can answer your question, she can. 31. Jim | February 4th, 2009 at 10:48 am Yes, respect due to Limor, she’s a great source of knowledge as as well as cool toys :) 32. daniel | February 4th, 2009 at 4:15 pm Hi Brian, thanks again for all. My next question is if you had tried this wireless with xbee. I’ve been trying to do that, but not with the result i’ve expected. If send moves more than 1 every 3 or 4 seconds it does get the message well. I’m using Xbee ZB 2mW ZigBee Pro RF modules at 9600. (I’ve also tried with 4800 and faster). (I’m actually getting two XBee Pro 50mW Series 2.5 RPSMA in the mail soon). Well thanks in advanced. 33. Brian | February 5th, 2009 at 10:35 pm Sorry Daniel, I haven’t used the Xbee radios yet. I’m afraid I can’t help with that one. 34. daniel | February 6th, 2009 at 11:13 am thanks anyway man, I’ll keep trying. 35. phil drummond | March 21st, 2009 at 12:57 pm This stuff looks great, however it is “asking” for something called “pygame” which as far as I can tell is not a “default” package… dependencies! Please check your dependencies when you project stuff out to the “learning” public. So far, all I have been able to do with this pygame requirement is get fully lost in the Python install nightmare… version this, requirement that… All I wanted was to try your cool application on my BoArduino! NOT re-learn the entire Python development system! 
By the way, the Sketch loads just fine, the pyserial stuff seems to work, but the multijoystick.py thing stopped me cold. 36. Hugo | March 22nd, 2009 at 7:11 am Thanks for the nice application. However, I'm not even able to run the python script. Can someone help me troubleshooting this? I'm far from an expert. Here is what I get when I start servo.py in command lines: 37. Brian | March 23rd, 2009 at 9:24 pm @Phil: The multijoystick.py script depends on three additional modules: servo.py, pyserial, and pygame, each of which is mentioned in the header comments of that script. You'll have to install all three before that script will work. Specifics about joystick control were beyond the scope of this article, as I mentioned in the introduction. For more information about controlling servos with a joystick, see the article entitled "Joystick Control of a Servo." For detailed instructions on installing Pygame on your system, consult the Pygame documentation. 38. Brian | March 23rd, 2009 at 9:32 pm @Hugo: Thanks for including your error messages. From the looks of things, you're running Python 3.0, and it doesn't seem to like the code syntax I've used in the servo.py script. I wrote that script using Python 2.5.x, and Python 3.0 is not backwards compatible with earlier versions. I haven't played around with 3.0 at all, so I'm not sure of all the differences, but it looks like my syntax needs to be altered a bit to use parentheses, as in this example from the Python 3.0 docs. So, instead of the old syntax, try the parenthesized version. I'm not sure if that will work, since I'm still running 2.5.x, but it's worth a shot! Let me know what happens. :) 39. Joel | March 24th, 2009 at 7:54 pm This project looks great, but I can't get the Python code to work. When I try to 'import servo' it returns an ImportError: DLL load failed for importing win32file. What is this? I'm running 2.6.1 and have downloaded pyserial. 40. Brian | March 24th, 2009 at 8:57 pm @Joel: Thanks for stopping by.
I'm afraid I don't run Python on Windows very often; I'm mostly a Linux guy. But I seem to recall needing the pywin32 module to run these scripts on Windows. Try installing that and see if it helps. Also, you need to launch the Python interpreter from the same directory in which servo.py resides, otherwise Python won't be able to find it. 41. Jeff | April 29th, 2009 at 7:57 am I installed pygame and pyserial but when i import them it has importerror: dll load issues. Is it just because I have a newer version of python than the modules support? 42. Jeff | April 30th, 2009 at 7:54 pm I get this when I try to run the code, everything is installed and I got rid of a bunch of other errors earlier but I still get this one.

Traceback (most recent call last):
File "C:\Python25\Lib\idlelib\servo.py", line 20, in
ser = serial.Serial(usbport, 9600, timeout=1)
File "C:\Python25\Lib\site-packages\serial\serialutil.py", line 171, in init
self.open()
File "C:\Python25\Lib\site-packages\serial\serialwin32.py", line 53, in open
raise SerialException("could not open port %s: %s" % (self.portstr, msg))
SerialException: could not open port COM5: (5, 'CreateFile', 'Access is denied.')

Any ideas? 43. Jeff | April 30th, 2009 at 8:05 pm I figured out the error. In the servo.py implementation it says ser = serial.Serial(usbport, 9600, timeout=1) it should be ser = serial.Serial(usbport, 9600) 44. foobahr | May 3rd, 2009 at 9:44 pm I also experienced some jitter issues and am now using the interrupt based library servo routines. It'd be interesting to re-work this code to use those, though I'm not sure about controlling more than two servos. More here:
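The three-byte serial protocol discussed in the article and in the comments above can be sketched as follows. This is a minimal reimplementation, not the actual servo.py; the header value and byte order follow the packet description in the article, and the clamping mirrors the min/max pulse check in the Arduino sketch:

```python
HEADER = 255  # announces the start of a new servo command

def build_packet(servo, angle):
    """Build the header/servo/angle packet the Arduino sketch expects."""
    if not 1 <= servo <= 4:
        raise ValueError("servo must be 1-4")
    angle = max(0, min(180, angle))  # keep the position in the 0-180 range
    return bytes([HEADER, servo, angle])

# Sending it with pyserial would look like this (the port name is
# system-specific, as noted in the article):
#   import serial
#   ser = serial.Serial('/dev/ttyUSB0', 9600)
#   ser.write(build_packet(2, 150))   # move servo #2 to 150 degrees
```

Note that in the original Python 2 script each byte was produced with chr(); in Python 3 a bytes object serves the same purpose, which also sidesteps the 2.x/3.x syntax issue raised in comment 38.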
http://principialabs.com/arduino-python-4-axis-servo-control/
The following class is accepted by JustIce (BCEL 5.2), whereas the Sun verifier (correctly) rejects it:

Compiled from "Test.java"
public class Test extends java.lang.Object{
public Test();
  Code:
   0: aload_0
   1: invokespecial #9; //Method java/lang/Object."<init>":()V
   4: return

public static void main(java.lang.String[]);
  Code:
   0: aload_0
   1: iconst_0
   2: caload
   3: return
}

In the "main" method we are trying to read a char from an array of Strings and this is of course type-incorrect. My take on the solution is that in the org.apache.bcel.verifier.structurals.InstConstraintVisitor class, in the visitCALOAD method, there are only two checks being made: whether the index is of int type, and whether there is really an array on the stack. What is missing is a check whether the array holds elements of 'char' type. Created attachment 19195 [details] the class illustrating the problem
https://bz.apache.org/bugzilla/show_bug.cgi?format=multiple&id=41069
Python Tkinter Listbox

The Listbox widget is used to display a list of items from which a user can select a number of items.

Syntax: Here is the simple syntax to create this widget:

w = Listbox ( master, option, ... )

Parameters:
master: This represents the parent window.
options: Here is the list of most commonly used options for this widget. These options can be used as key-value pairs separated by commas.

Methods: Methods on listbox objects include:

Example: Try the following example yourself:

from Tkinter import *

top = Tk()
Lb1 = Listbox(top)
Lb1.insert(1, "Python")
Lb1.insert(2, "Perl")
Lb1.insert(3, "C")
Lb1.insert(4, "PHP")
Lb1.insert(5, "JSP")
Lb1.insert(6, "Ruby")
Lb1.pack()
top.mainloop()

When the above code is executed, it produces the following result:
http://www.tutorialspoint.com/python/tk_listbox.htm
Using Fragments

A GraphQL fragment is a shared piece of query logic. We create a fragment with the gql helper. When it's time to embed the fragment in a query, we simply use the ...Name spread syntax in our GraphQL and embed the fragment inside our query GraphQL document. Imagine a view hierarchy in which a list component renders entries and each entry renders subcomponents such as VoteButtons: each component can declare a fragment for exactly the fields it needs. If our fragments include sub-fragments, then we can pass them into the gql helper as well.

Filtering With Fragments

We can also use the graphql-anywhere package to filter the exact fields from the entry before passing them to the subcomponent. So when we render a VoteButtons, we can simply filter the entry with its fragment. The filter() function will grab exactly the fields from the entry that the fragment defines.

Importing fragments when using Webpack

When loading .graphql files with graphql-tag/loader, we can include fragments using import statements. For example, importing someFragment.graphql will make its contents available to the current file. See the Webpack Fragments section for additional details.
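Since the inline code samples did not survive in this copy, here is an illustrative fragment and query. The type and field names (Entry, VoteButtonsFragment, score, vote, repoFullName) are assumptions for illustration, not necessarily the exact names from the original docs:

```graphql
# A fragment declaring exactly the fields the VoteButtons component needs.
fragment VoteButtonsFragment on Entry {
  score
  vote {
    choice
  }
}

# Embedding the fragment in a query with the ...Name spread syntax.
query EntryWithVotes($repoFullName: String!) {
  entry(repoFullName: $repoFullName) {
    ...VoteButtonsFragment
  }
}
```

With graphql-tag, the fragment document is interpolated into the query's template literal (gql`...${VoteButtonsFragment}`) so the spread can be resolved at query time.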
https://www.apollographql.com/docs/angular/features/fragments.html
This is only the second time I've used the forums. I've been learning C++ off and on for a little over two months and I am doing an exercise to calculate a person's BMI (body mass index). The compiler shows no errors or warnings, but when I run the program, right after I enter the first input, all the output text displays and I get the standard Windows runtime error message "bmi.exe has encountered a problem and needs to close". Here is the code.

//bmi.cpp--calculates body mass index
#include <iostream>
int main()
{
    using namespace std;
    int feet = 1;
    int inches = 1;
    int lbs = 1;
    const int IncHeight = (feet * 12) + inches;
    const int kgs = lbs / 2.2;
    const int MeterHeight = IncHeight * .0254;
    cout << "enter your height in feet ignoring the inches. ___\b\b\b";
    cin >> feet;
    cout << " Now enter the remaining inches. __\b\b";
    cin >> inches;
    cout << "Now your weight in pounds. ___\b\b\b";
    cin >> lbs;
    cout << "Your BMI (Body Mass Index) is " << kgs / (MeterHeight * MeterHeight) << ". Congratulations, ";
    cout << "you now have a magic number to brag about to people who have no clue what it means." << endl;
    return 0;
}

I think I might be having trouble understanding variable initialization again, but thanks for the help. Also, if there is an operator for raising to powers, not scientific notation, I would appreciate hearing about it since I can't seem to find it in the index.
https://www.daniweb.com/programming/software-development/threads/133740/need-help-with-runtime-error-in-simple-program
Hi, I'm working on a project about pelvic floor MRI image recognition. In this project I am trying to automatically recognize feature points and measure distances on MRI images, which will help with pelvic floor disease detection. Right now I have already trained a neural network with Keras and stored the model in a .h5 file. Since Slicer is the perfect open-source platform to implement this idea, I have been working hard to learn, but I'm still a beginner to Slicer. I have finished building the GUI of a scripted module. Now I am trying to put my model into the scripted module I created, and I'm seeking advice and ideas about how to do this. I'm now using

from pip._internal import main as _main
_main(['install', 'pandas'])

to load packages into Slicer and

model = load_model('./models/' + model_name + '.h5', compile=False)

to load my model. I'm wondering if my idea is ok and if there is a better way to do it? Does anyone have any suggestions? Thank you! Andrea
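A common refinement of the pip bootstrap shown above is to install only when the import fails, so module start-up stays fast once the packages are present. A generic sketch (note that recent Slicer versions also ship slicer.util.pip_install as the supported way to install packages into Slicer's Python):

```python
import importlib
import subprocess
import sys

def ensure_package(module_name, pip_name=None):
    """Import a module, installing it with pip first only if it's missing."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        # Install into the same Python that is running this script
        # (inside Slicer, prefer slicer.util.pip_install instead).
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", pip_name or module_name]
        )
        return importlib.import_module(module_name)

# e.g. pandas = ensure_package("pandas")
#      keras  = ensure_package("keras")
```

The model load itself can then stay as in the question, e.g. load_model(model_path, compile=False), called once and cached on the module's logic class so it isn't reloaded on every button press.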
https://discourse.slicer.org/t/loading-a-deep-learning-model-to-a-scripted-module/6068
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project. > > > I'd just like to add that the problem shows up in the C++ header > > > bits/locale_facets.tcc which uses strdup. This makes it impossible to > > > compile some otherwise-valid C++ programs with -ansi and the newlib > > > headers. Well, I think the problem is with libstdc++ and not newlib. Correctly nailing C89, C99, and the GNU extensions into namespaces has proven to be difficult. I think there is no reason for newlib to alter includes for this case based on __cplusplus. Instead, this should probably be fixed in the C++ library. > > `-ansi' > > In C mode, support all ANSI standard C programs. In C++ mode, > > remove GNU extensions that conflict with ISO C++. This is incorrect for GNU C++ post 3.0.x, as _GNU_SOURCE is defined in CPLUSPLUS_CPP_SPEC. This is currently a low-to-medium priority issue. -benjamin
http://gcc.gnu.org/ml/libstdc++/2002-08/msg00235.html
Sending an email from a web page is something which is required in just about every application. This is a generic email class, part of a class library we use. We are going to use the SmtpMail class, which is a part of the System.Web.Mail namespace. It has just one method – Send – and this is what we use, passing it an instance of the MailMessage class. The MailMessage class has a number of members, most of them self-explanatory. We are just going to use a few of them for this example.

using System.Web.Mail;

namespace Utility
{
    public class EmailClass
    {
        protected string Sender;
        protected string Recipient;
        protected string Subject;
        protected string Attachment;
        protected string Message;

        public EmailClass(string Sender, string Recipient, string Subject, string Message, string Attachment)
        {
            this.Sender = Sender;
            this.Recipient = Recipient;
            this.Subject = Subject;
            this.Message = Message;
            this.Attachment = Attachment;
        }

        public void SendEmail()
        {
            MailMessage mail = new MailMessage();
            mail.From = Sender;
            mail.To = Recipient;
            mail.Subject = Subject;
            mail.Body = Message;
            // I chose the mail format to be plain text, but you can choose
            // to have the message sent in HTML format by using MailFormat.Html
            mail.BodyFormat = MailFormat.Text;
            if (Attachment != "")
            {
                // MailAttachment constructor
                MailAttachment mailAttachment = new MailAttachment(Attachment);
                mail.Attachments.Add(mailAttachment);
            }
            SmtpMail.SmtpServer = "smtpServerName";
            SmtpMail.Send(mail);
        }
    }
}

You might see this error message:

System.Runtime.InteropServices.COMException (0x8004020E): The server rejected the sender address. The server response was: 454 5.7.3 Client does not have permission to submit mail to this server.

Exchange doesn't allow unauthenticated users to send mail via SMTP to protect against spammers, etc.
It seems that Everett, the next version of the .NET Framework, will take care of this problem – there is a Fields property on the MailMessage class which lets you pass authentication settings through to CDOSYS.
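For what it's worth, later versions of .NET address the authentication error above with System.Net.Mail (introduced in .NET 2.0), which lets you supply SMTP credentials explicitly. A sketch — the server name, port, addresses, and credentials here are placeholders:

```csharp
using System.Net;
using System.Net.Mail;

// Sketch for .NET 2.0 and later: System.Net.Mail replaces System.Web.Mail
// and supports authenticated SMTP, avoiding the "server rejected" error.
var message = new MailMessage("sender@example.com", "recipient@example.com")
{
    Subject = "Hello",
    Body = "Plain text body",
    IsBodyHtml = false
};

using (var client = new SmtpClient("smtp.example.com", 587))
{
    client.EnableSsl = true;
    client.Credentials = new NetworkCredential("username", "password");
    client.Send(message);
}
```

Attachments work much the same way as above, via the System.Net.Mail.Attachment class added to message.Attachments.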
http://odetocode.com/Articles/73.aspx
Home Energy Monitor: V2

My DIY home energy monitor has been running for almost a year now. It's been recording my electricity consumption every second, and everything is neatly archived in my AWS account. Still, though, there is room for improvement. It's time to look back, evaluate & improve the design. I've identified a few pain points that have to be fixed, so let's go! Read more about Energy Monitor v1 here.

What could be improved

Out of all the things I could improve about V1, there is only one really critical thing:

- WiFi stability: One of the main problems with V1 was that it couldn't recover from a lost WiFi connection. I would walk past it and see an invalid IP address on its display: 0.0.0.0, meaning it had lost its WiFi connection, and a power cycle was needed to solve it. Aargh!

Then there were also some nice-to-haves:

- Better display: The LCD is okay to show basic information, but it isn't very fancy. It can only show two lines of 16 characters, and you can't draw custom shapes on it. It's also quite bulky and makes the whole device unnecessarily large.
- DIN-rail mountable: Right now, my energy monitor sits on top of my internet router, which is conveniently mounted next to my electrical panel. I would prefer it if the energy monitor could be smaller so I could fit it on the DIN rail next to one of the circuit breakers.
- Better wiring: When I was building V1, I soldered the ESP32 directly onto a protoboard. I won't be making that mistake twice. It's nearly impossible (for me) to get the board off again.
- 12-bit ADC support: The ESP32 has a 12-bit ADC, but Emonlib (the library used to measure electricity consumption) only supports 10 bits. Bumping this up to 12 bits should increase the accuracy of the monitor.

Better display: ESP32 + OLED

I started my journey by looking for a new display. I found several small TFT panels on Adafruit that would work perfectly with the ESP32.
But then I came across this beauty: the NodeMCU ESP32 OLED Module, on some sites referred to as Wemos LOLIN32. It's an ESP32 development board with a built-in OLED display. Microcontroller and display on the same board? That meant I didn't have to figure out how to mount a separate display in my enclosure and how to connect it to the ESP32. Yay!

Aside from these practicalities, it's also quite cheap. You can find different models on AliExpress that go for around €10. The integrated OLED panel is 0.96" in size and has a resolution of 128x64. Not huge, but perfect for this use case.

While waiting for it to arrive, I broke out Sketch and started designing a simple user interface for it. I decided to show the current time, the WiFi signal strength, the current electricity consumption, and a progress meter at the bottom. Progress of what? Well, the ESP32 takes one measurement each second and sends it to AWS after 30 seconds. The progress bar shows how close it is to sending a new batch to AWS ;)

Better wiring

After selecting the new hardware, it was time to adjust the wiring. While building the previous version, I soldered my ESP32 directly onto a protoboard. Bad idea! It's nearly impossible to remove the ESP32 from the protoboard. As a result: the ESP32 used in V1 will be lost forever. Thank god these microcontrollers are cheap.

So for V2, I purchased some female pin headers to solder onto the protoboard instead. This means I can now easily swap the ESP32 should it be necessary. Or I could decommission the energy monitor and re-use it for another project. And here's how it looks with the ESP32 fitted onto the pin headers:

Improved stability: Arduino + FreeRTOS!

After improving the wiring, I started improving the software. The main issue with V1 was that it couldn't reconnect to the WiFi if it lost the connection. And because I'm using an unstable ISP-provided router, that happened quite often.
To fix this, I decided to use FreeRTOS, a system that allows you to create many tasks and let the scheduler worry about running them. I blogged about this before: Multitasking on ESP32 with Arduino and FreeRTOS.

To solve my WiFi problem, I created a task that checks the connection every 10 seconds. WiFi still connected? Great, the task suspends itself for 10 seconds and then runs again. If it's disconnected, it will try to reconnect. Simple:

```cpp
#include <Arduino.h>
#include "WiFi.h"

#define WIFI_NETWORK "YOUR-NETWORK-NAME"
#define WIFI_PASSWORD "YOUR-WIFI-PASSWORD"
#define WIFI_TIMEOUT 20000 // 20 seconds
#define DEVICE_NAME "home-energy-monitor" // hostname (assumed; defined elsewhere in the full project)

/**
 * Task: monitor the WiFi connection and keep it alive!
 *
 * When a WiFi connection is established, this task will check it every
 * 10 seconds to make sure it's still alive.
 *
 * If not, a reconnect is attempted. If this fails to finish within the
 * timeout, the ESP32 is sent to deep sleep in an attempt to recover from this.
 */
void keepWiFiAlive(void * parameter){
    for(;;){
        if(WiFi.status() == WL_CONNECTED){
            vTaskDelay(10000 / portTICK_PERIOD_MS);
            continue;
        }

        Serial.println("[WIFI] Connecting");
        WiFi.mode(WIFI_STA);
        WiFi.setHostname(DEVICE_NAME);
        WiFi.begin(WIFI_NETWORK, WIFI_PASSWORD);

        unsigned long startAttemptTime = millis();

        // Keep looping while we're not connected and haven't reached the timeout
        while (WiFi.status() != WL_CONNECTED &&
               millis() - startAttemptTime < WIFI_TIMEOUT){}

        // If we couldn't connect within the timeout period, retry in 30 seconds.
        if(WiFi.status() != WL_CONNECTED){
            Serial.println("[WIFI] FAILED");
            vTaskDelay(30000 / portTICK_PERIOD_MS);
            continue;
        }

        Serial.print("[WIFI] Connected: ");
        Serial.println(WiFi.localIP()); // printed separately: "string" + IPAddress doesn't concatenate correctly
    }
}
```

To start this task, I use:

```cpp
// ----------------------------------------------------------------
// TASK: Connect to WiFi & keep the connection alive.
// ----------------------------------------------------------------
xTaskCreatePinnedToCore(
    keepWiFiAlive,
    "keepWiFiAlive",  // Task name
    10000,            // Stack size (bytes)
    NULL,             // Parameter
    1,                // Task priority
    NULL,             // Task handle
    ARDUINO_RUNNING_CORE
);
```

This task has solved my WiFi connectivity issue! And thanks to FreeRTOS, I don't have to worry about when to run this code. I just rely on the scheduler to run it every 10 seconds.

12-Bit ADC compatibility

While I was improving the software, I also remembered that Emonlib - the library which converts raw readings into amps - only supports a 10-bit ADC, while the ESP32 has a 12-bit one. It doesn't seem like a big deal until you realize that a 10-bit ADC has 1024 discrete levels, while a 12-bit one has 4096. That's four times the resolution, and so a more accurate monitor.

To fix this, I forked Emonlib and created a version specifically for the ESP32. I then added it as a dependency to my PlatformIO project (in platformio.ini):

lib_deps =

I only changed the implementation of the calcVI and calcIrms functions. Their signatures remain the same, so the rest of my code could stay the same.

Smaller case & DIN rail

Next step: adapting the case. It could be made a lot smaller because the display is now integrated into the microcontroller board. Additionally, I wanted the option to put it on a DIN rail (a standardized way of mounting electrical hardware such as circuit breakers). A DIN-rail mount would allow me to put the energy monitor inside my breaker box. In Belgium, our breaker boxes usually have transparent covers so you can see each breaker, which is ideal for making the OLED visible.

Just like last time, I designed the case in Fusion360. I opted to go for a design where the back could be swapped out for different models. There are cutouts for the OLED display, the micro-USB port, and the headphone jack that connects the CT sensor. I also added standoffs for the ESP32 so I can securely screw it into place.
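Returning to the ADC point above: the 10-bit vs 12-bit difference is easy to quantify. A quick numeric check (the 3.3 V full-scale reference is an assumption for the ESP32; the exact reference depends on attenuation settings):

```python
VREF = 3.3  # volts, assumed full-scale reference

def adc_step(bits, vref=VREF):
    """Smallest voltage difference a `bits`-wide ADC can resolve."""
    return vref / (2 ** bits)

levels_10 = 2 ** 10           # 1024 discrete levels on a 10-bit ADC
levels_12 = 2 ** 12           # 4096 discrete levels on a 12-bit ADC
ratio = levels_12 // levels_10  # 12-bit is 4x finer-grained
```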
For the top lid, I designed four standoffs located in the corners of the case. They have a 45° tapered angle so that they can be 3D printed. After designing the enclosure, I designed two versions of the top lid: a flat one (which I'm currently using) and one with a DIN-rail mount.

After tweaking some of the margins, it was time to assemble everything. Starting by screwing the ESP32 into the base of the case. Putting on the top lid (without DIN rail for now). In comparison to the V1:

Cloud architecture

While it hasn't been changed, I want to mention the cloud infrastructure as well. Here is an overview of what I have running currently. In a nutshell:

- I'm using AWS IoT Core to connect my ESP32 to the cloud (it's basically an MQTT broker with fancy features).
- An IoT Rule is triggered every time my energy monitor sends a message to AWS. The rule writes each message to a DynamoDB table. I'm using the device's ID as the primary key and the timestamp as the sort key.
- Every night, CloudWatch triggers a Lambda function that gets yesterday's readings from DynamoDB, puts them in a CSV file, and archives them to S3. I've added Gzip compression to this step to keep storage costs low.
- A second Lambda function exposes a GraphQL API that is consumed by the web dashboard and the Ionic app.

Integrating with Home Assistant

I've been a huge fan of Home Assistant ever since a colleague of mine introduced me to it. I have it running on a Raspberry Pi that is booting from an external SSD. Naturally, I wanted my energy consumption to show up there as well. So that means the ESP32 has to connect to two services: AWS (to archive all my readings) and Home Assistant.

You can interface with Home Assistant in a number of ways. I decided to use MQTT because I already know how to use it on the ESP32. The only requirement is that you install a broker (or use an online service) on your Home Assistant machine. I chose to use the Mosquitto integration of HASS.IO.
Home Assistant gives device makers two options to integrate with MQTT: either the user has to configure the device manually in the configuration.yaml file, or the device can configure itself by using MQTT Discovery. Of course, I chose the latter -- no manual configuration for me.

To make it all work, you have to tell Home Assistant everything there is to know about your device: the name, type, measurement units, the icon, and so on. This can be done by posting a message to this MQTT topic:

homeassistant/sensor/home-energy-monitor-1/config

with the following contents:

```json
{
  "name": "home-energy-monitor-1",
  "device_class": "power",
  "unit_of_measurement": "W",
  "icon": "mdi:transmission-tower",
  "state_topic": "homeassistant/sensor/home-energy-monitor-1/state",
  "value_template": "{{ value_json.power }}",
  "device": {
    "name": "home-energy-monitor-1",
    "sw_version": "2.0",
    "model": "HW V2",
    "manufacturer": "Xavier Decuyper",
    "identifiers": ["home-energy-monitor-1"]
  }
}
```

This will automatically configure my energy monitor in Home Assistant and even adds it to the device registry. No additional work required! All that's left now is to periodically send the energy consumption to the topic homeassistant/sensor/home-energy-monitor-1/state:

```json
{ "power": 163 }
```

This is the end result:

Note: Home Assistant integration is optional and disabled by default in the firmware. If you want to enable it, open the config.h file, set HA_ENABLED to true, and enter the IP address of your Home Assistant instance as well as the login credentials for your MQTT broker.

Status & next steps

I've been running V2 of my energy monitor since January 3rd, 2020, and so far, it's been rock solid. The WiFi and MQTT connections are automatically reconnected if necessary. That's already a huge improvement.
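The discovery message described above can be assembled programmatically from any language. A small sketch that only builds the topic and payload (actually publishing them would need an MQTT client such as paho-mqtt and a reachable broker, which is why that line is left commented out):

```python
import json

DEVICE_ID = "home-energy-monitor-1"  # must match the object id in the topic

config_topic = f"homeassistant/sensor/{DEVICE_ID}/config"
state_topic = f"homeassistant/sensor/{DEVICE_ID}/state"

config_payload = json.dumps({
    "name": DEVICE_ID,
    "device_class": "power",
    "unit_of_measurement": "W",
    "state_topic": state_topic,
    "value_template": "{{ value_json.power }}",
})

parsed = json.loads(config_payload)  # round-trip to confirm it's valid JSON

# With paho-mqtt, publishing would look like:
# client.publish(config_topic, config_payload, retain=True)
```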
In terms of next steps, I only have three:

- Print & test the DIN rail mount
- Design & order a custom PCB to replace the protoboard and my bad soldering skills ;)
- Integrate it with my Home Assistant installation

So stay tuned for an update ;)

A note on calibration

The SCT013 sensors aren't very accurate with low current. You might see "idle" readings of a couple of watts even though no current is flowing through the wire. According to Jm Casler, this is caused by:

- The SCT013 is built for price, not for quality
- The cable's shielding can induce noise if it's shorted with the signal cable
- A filter capacitor is needed (preferably a ceramic one)

You can read his full (detailed) analysis here. Additionally, he shared the formula that he uses to calculate the calibration value for emonlib:

Calibration number = Number of turns of CT clamp / Burden resistor

So for a CT clamp with 2000 turns and a 68-ohm burden, that gives a calibration of 29.4. Thanks, man!

Source code

All of the source code for this project is available on GitHub. That includes the ESP32 firmware, the AWS Lambda functions (and Serverless configuration file), as well as the Fusion360 design files.
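The calibration formula above is simple enough to sanity-check in a couple of lines:

```python
def ct_calibration(turns, burden_ohms):
    """Emonlib calibration constant = CT turns / burden resistor value."""
    return turns / burden_ohms

# SCT013: 2000 turns through a 68-ohm burden resistor
cal = ct_calibration(2000, 68)
```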
https://savjee.be/2020/02/home-energy-monitor-v2/
This project is very similar to ATtiny13 – randomly flashing LED with PRNG based on LFSR. However, this time I used the BBS (Blum-Blum-Shub) algorithm as the PRNG (Pseudo Random Number Generator) to generate 16-bit (pseudo) random numbers. The least significant bit of the randomly generated number decides whether the LED is switched on or off. The code is on Github, click here.

Parts Required

- ATtiny13 – i.e. MBAVR-1 development board
- Resistor – 220Ω, see LED Resistor Calculator
- LED

```c
/**
 * Randomly flashing LED with PRNG based on BBS (Blum-Blum-Shub).
 */
#include <avr/io.h>
#include <util/delay.h>

#define LED_PIN  PB0
#define BBS_P    (13)
#define BBS_Q    (97)
#define BBS_SEED (123)
#define DELAY    (64)

static uint16_t m;
static uint16_t x;
static uint16_t r;

/* Initialize vector of BBS algorithm, where:
   -> p: prime number
   -> q: prime number
   -> seed: random integer greater than 1, which is co-prime to 'm',
      where 'm = p*q' and 'GCD(m, seed) == 1'
   Note that the value of 'm' must be less than '(2^16) - 1' */
static void
bbs_init(uint16_t p, uint16_t q, uint16_t seed)
{
    m = p * q;
    r = x = seed;
}

static uint16_t
bbs_next(void)
{
    return (r = (r * x) % m);
}

int
main(void)
{
    /* setup */
    DDRB |= _BV(LED_PIN);             // set LED pin as OUTPUT
    bbs_init(BBS_P, BBS_Q, BBS_SEED); // initialize BBS alg.

    /* loop */
    while (1) {
        if (bbs_next() & 1) {
            PORTB |= _BV(LED_PIN);
        } else {
            PORTB &= ~_BV(LED_PIN);
        }
        _delay_ms(DELAY);
    }
}
```
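To see what the LED will actually do, the generator can be modelled in a few lines of Python. Two caveats: textbook Blum-Blum-Shub squares the state (r = r*r mod m), whereas the C code multiplies by the fixed seed each step, so this is a variant; and on an 8-bit AVR the 16-bit product r*x can wrap before the modulo is taken, so real hardware may diverge from this pure-integer model of the intended recurrence.

```python
BBS_P, BBS_Q, BBS_SEED = 13, 97, 123

def bbs_stream(p=BBS_P, q=BBS_Q, seed=BBS_SEED):
    """Model of the C code's recurrence r = (r * x) % m, without overflow."""
    m = p * q      # 1261, comfortably below 2^16 - 1
    x = r = seed
    while True:
        r = (r * x) % m
        yield r

gen = bbs_stream()
first4 = [next(gen) for _ in range(4)]
led_states = [v & 1 for v in first4]  # the LSB drives the LED on/off
```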
https://blog.podkalicki.com/attiny13-randomly-flashing-led-with-prng-based-on-bbs/
[1, 'Yuvin Ng', 'Columbia College', 778]
[2, 'Ali', 'Douiglas College', 77238]
[3, 'Nancy', 'Douglas College', 7783390222]

My code:

```python
def delcustomer():
    import os
    f = open("customerlist.txt", "r+")
    entername = input("enter customer name")
    for line in f:
        if entername in line:
            f.delete(line)  # i know it shouldn't be like this but im seriously stuck
```

Is there any way to delete a line in a .txt file using Python? For example, if the name input is Yuvin Ng, I want to delete the whole line if it matches. How can I do that?

This post has been edited by baavgai: 11 July 2011 - 04:30 AM. Reason for edit: tagged
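One common approach (a sketch, not an official answer from this thread): file objects have no delete method, so instead read all the lines, keep only the ones that don't match, and write the survivors back. The demo file name below is invented for illustration.

```python
import os
import tempfile

def del_customer(path, name):
    """Remove every line containing `name` from the file at `path`."""
    with open(path, "r") as f:
        kept = [line for line in f if name not in line]
    with open(path, "w") as f:
        f.writelines(kept)

# Demo on a throwaway file:
demo = os.path.join(tempfile.gettempdir(), "customerlist_demo.txt")
with open(demo, "w") as f:
    f.write("[1, 'Yuvin Ng', 'Columbia College', 778]\n")
    f.write("[2, 'Ali', 'Douglas College', 77238]\n")

del_customer(demo, "Yuvin Ng")

with open(demo) as f:
    remaining = f.read()
```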
https://www.dreamincode.net/forums/topic/238903-question-how-can-i-delete-words-in-txt-using-python/
Sometimes I have had a string containing a hex value, like

```cpp
char s[10] = "0xFDE8";
```

that I would like to convert to an integer (in this case it would of course get the value 65000). I have not been able to find any standard C or C++ functions to do that, so I decided to write my own. It's declared as _httoi(const TCHAR *value) and is coded using TCHAR so it works both with and without UNICODE defined.

It's possible it could be done smarter and faster, but it works, and if you like it, it's yours to use for free. Here is the code for a small console application that uses the function and demonstrates its use.

```cpp
#include "stdafx.h"
#include <tchar.h>
#include <malloc.h>

int _httoi(const TCHAR *value)
{
  struct CHexMap
  {
    TCHAR chr;
    int value;
  };

  const int HexMapL = 16;
  CHexMap HexMap[HexMapL] =
  {
    {'0', 0}, {'1', 1}, {'2', 2}, {'3', 3},
    {'4', 4}, {'5', 5}, {'6', 6}, {'7', 7},
    {'8', 8}, {'9', 9}, {'A', 10}, {'B', 11},
    {'C', 12}, {'D', 13}, {'E', 14}, {'F', 15}
  };

  TCHAR *mstr = _tcsupr(_tcsdup(value));
  TCHAR *s = mstr;
  int result = 0;

  if (*s == '0' && *(s + 1) == 'X')
    s += 2;

  bool firsttime = true;
  while (*s != '\0')
  {
    bool found = false;
    for (int i = 0; i < HexMapL; i++)
    {
      if (*s == HexMap[i].chr)
      {
        if (!firsttime)
          result <<= 4;
        result |= HexMap[i].value;
        found = true;
        break;
      }
    }
    if (!found)
      break;
    s++;
    firsttime = false;
  }

  free(mstr);
  return result;
}

int main(int argc, char* argv[])
{
  TCHAR *test[4] = {_T("0xFFFF"), _T("0xabcd"), _T("ffff"), _T("ABCD")};

  for (int i = 0; i < 4; i++)
    _tprintf(_T("Hex String: %s is int: %d\n\r"), test[i], _httoi(test[i]));

  return 0;
}
```

Well, that's all there is to it. You can either copy the code from your browser, or download the project for Visual C++ 6.0.
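For comparison, languages with richer standard libraries do the same conversion in one call. In Python, for instance, int with an explicit base of 16 accepts exactly the inputs the article's test program uses, with or without a 0x prefix and in either case:

```python
def httoi(s):
    """Parse a hex string; base 16 accepts an optional '0x'/'0X' prefix."""
    return int(s, 16)

# Same test strings as the C++ demo program, plus the article's example
values = [httoi(s) for s in ("0xFFFF", "0xabcd", "ffff", "ABCD", "0xFDE8")]
```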
http://www.codeproject.com/KB/string/hexstrtoint.aspx
Using threads and threading

With .NET, you can write applications that perform multiple operations at the same time. Operations with the potential of holding up other operations can execute on separate threads.

Note

If you need more control over the behavior of the application's threads, you can manage the threads yourself. However, starting with the .NET Framework 4, multithreaded programming is greatly simplified with the System.Threading.Tasks.Parallel and System.Threading.Tasks.Task classes, Parallel LINQ (PLINQ), new concurrent collection classes in the System.Collections.Concurrent namespace, and a new programming model that is based on the concept of tasks rather than threads. For more information, see Parallel Programming and Task Parallel Library (TPL).

How to: Create and start a new thread

You create a new thread by creating a new instance of the System.Threading.Thread class and providing the name of the method that you want to execute on a new thread to the constructor. To start a created thread, call the Thread.Start method. For more information and examples, see the Creating threads and passing data at start time article and the Thread API reference.

How to: Stop a thread

To terminate the execution of a thread, use the Thread.Abort method. That method raises a ThreadAbortException on the thread on which it's invoked. For more information, see Destroying threads.

Beginning with the .NET Framework 4, you can use the System.Threading.CancellationToken to cancel a thread cooperatively. For more information, see Cancellation in managed threads.

Use the Thread.Join method to make the calling thread wait for the termination of the thread on which the method is invoked.

How to: Pause or interrupt a thread

You use the Thread.Sleep method to pause the current thread for a specified amount of time. You can interrupt a blocked thread by calling the Thread.Interrupt method. For more information, see Pausing and interrupting threads.
Thread properties

The following table presents some of the Thread properties:
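The create/start/join lifecycle described above is not specific to .NET; the same shape appears in most threading APIs. A minimal Python equivalent, purely for illustration (the worker function and its result list are invented names):

```python
import threading
import time

results = []

def worker(n):
    """Runs on its own thread, like the method passed to Thread's constructor."""
    time.sleep(0.01)       # analogous to Thread.Sleep: pause this thread
    results.append(n * n)

t = threading.Thread(target=worker, args=(7,))  # create the thread
t.start()                                       # start it (Thread.Start)
t.join()                                        # wait for it to finish (Thread.Join)
```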
https://docs.microsoft.com/en-us/dotnet/standard/threading/using-threads-and-threading
AS 3: URLLoader

>Vee< Aug 8, 2007 10:12 AM

Problem: I can't seem to get the completeHandler function to output the results of the variable to the textfield main_txt. The below code is stored in a URLLoaderDataFormat.as file. Below that is my code in echo.php.

1. Re: AS 3: URLLoader
kglad Aug 8, 2007 10:46 AM (in response to >Vee<)
doesn't that generate a compiler error?

3. Re: AS 3: URLLoader
>Vee< Aug 8, 2007 10:48 AM (in response to kglad)
Yes. For some reason I put it at the end of the attached code but it never updated it. Here it is: 1120: Access of undefined property main_txt.

4. Re: AS 3: URLLoader
kglad Aug 8, 2007 10:50 AM (in response to >Vee<)
well, that's a big hint. main_txt is undefined in that class. if it's on your timeline, you need to pass (to your class) a reference to the textfield.

5. AS 3: URLLoader
>Vee< Aug 8, 2007 11:32 AM (in response to kglad)
How do you pass the instance name of the txt field to that method/function if it's private? I am trying to get the response back to a field on the timeline. Every example I have uses trace instead of referring to an object (like the textfield) that exists on the timeline. I tried the below in the complete handler but to no avail:

6. Re: AS 3: URLLoader
kglad Aug 8, 2007 12:38 PM (in response to >Vee<)
are you instantiating a member of URLLoaderDataFormatExample from your main timeline? if so, pass a reference to the main timeline to your constructor and use that in your class file to reference the path/name to main_txt. if not, you can create a class that gives access to your main timeline.

7. AS 3: URLLoader
>Vee< Aug 8, 2007 12:53 PM (in response to kglad)
I'm using the attached code so far.
Is this how you pass a reference? (I wish Adobe would fix this forum so we could use [as][/as] tags to highlight.)

quote: pass a reference to the main timeline

```actionscript
import URLLoaderDataFormatExample;
var myGreeter:URLLoaderDataFormatExample = new URLLoaderDataFormatExample();
```

quote: If not, you can create a class that gives access to your main timeline.

Not sure how to do this. I'm guessing a public function used as a method?

```actionscript
public function sayVars():String {
    var echoText:String;
    return echoText;
}
```

8. Re: AS 3: URLLoader
kglad Aug 8, 2007 2:47 PM (in response to >Vee<)
if main_txt is on your main timeline try:

9. Re: AS 3: URLLoader
>Vee< Aug 8, 2007 7:02 PM (in response to >Vee<)
it is on my main timeline. I used all the code you provided and received this message: 1046: Type was not found or was not a compile-time constant: MovieClip.

10. Re: AS 3: URLLoader
kglad Aug 8, 2007 7:36 PM (in response to >Vee<)
you need to add another import statement to your class: import flash.display.MovieClip;

11. Re: AS 3: URLLoader
>Vee< Aug 9, 2007 8:38 AM (in response to >Vee<)
I get this now: 1120: Access of undefined property _mainTL.

13. AS 3: URLLoader
>Vee< Aug 9, 2007 2:21 PM (in response to kglad)
That was it... so I take it any time I want to add something to the display list I have to make a reference to the main TL, then send it back to my class as a param of an instance of that class. Is putting the "_" in front of the variable in the class file what makes it available to all functions/methods?

Just a heads up: the Safari browser for Mac produces this error: Error #2044: Unhandled IOErrorEvent:. text=Error #2036: Load Never Completed. When I visit your sample, the mandelbrot and the shape tween are completely blank (unless I'm missing something), and the buttons/radio btns do nothing. Let me know if you want me to retest them if you make a change. I love the sound displays. I hope you've gotten to use them on a project.

Best
Vee

14.
Re: AS 3: URLLoader
kglad Aug 9, 2007 7:37 PM (in response to >Vee<)
in a class, any time you need to reference an object on the timeline, you need to either pass a reference to the timeline or otherwise make the main timeline accessible. i like to create a public class that has static stage and root properties that reference the stage and root. i can then call those properties from any class and don't have to deal with reference passing.

p.s. thank you about the mandelbrot heads-up. i'd removed a font that i was sharing and that caused the problem (which is now fixed). and the shape tween really should have some explanation: you click the draw button and draw shape 1 (by mouse down and mouse move) and then click the draw button again to draw shape 2. then click the tween button to tween the first shape to the 2nd.

15. Re: AS 3: URLLoader
>Vee< Aug 10, 2007 12:34 PM (in response to kglad)
Does putting the "_" in front of a var make any kind of difference? Your method of making a public class sounds great. That's something that sounds very practical (a little out of the scope of what I know at the moment). Is there a good place to learn how to achieve a class like that?

Mandelbrot works now. I also used the shape tween too. The explanation helped. Here's some of my stuff if you want to take a look. As far as I can tell there's one dead link that needs to be updated.

16. Re: AS 3: URLLoader
kglad Aug 10, 2007 12:46 PM (in response to >Vee<)
no, that _ makes no difference. it's from a naming convention for what should be private class objects/variables.

17. Re: AS 3: URLLoader
>Vee< Aug 10, 2007 12:56 PM (in response to >Vee<)
OK, then I should start doing it that way from now on. Thanks again.
https://forums.adobe.com/thread/63447
How do I save a DOM as a local file using XML4J by IBM?

Created May 4, 2012

You might want to modify one of the XML4J samples to write to a file. This assumes you're using plain server-side code; no bean or servlet issues are addressed here. The XML4J sample dom/DomWriter will write XML to standard output. It comes with the source package (XML4J-src_3_0_1.zip). Here's what I did:

1. I added the necessary import.
2. I added a constructor that took a filename parameter.
3. I called the new constructor.
4. I also set 'canonical' to false.

```java
// ---cut import ---
import java.io.PrintWriter;
// --- cut ----

// -- cut constructor---
// (also needs java.io.OutputStreamWriter and java.io.FileOutputStream in scope)
public DOMWriter(String filename, String encoding, boolean canonical)
        throws Exception {
    out = new PrintWriter(
            new OutputStreamWriter(new FileOutputStream(filename), encoding),
            true);
    this.canonical = canonical;
}
// ---cut---
```
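For comparison, the same "serialize a DOM to a local file" task in a modern standard library looks like this with Python's xml.dom.minidom (a sketch, not XML4J; the file name and XML content are invented for the demo):

```python
import os
import tempfile
from xml.dom import minidom

# Parse a small document into a DOM tree
doc = minidom.parseString("<greeting><to>world</to></greeting>")

# Write it out, much like DOMWriter writing through a PrintWriter
path = os.path.join(tempfile.gettempdir(), "greeting_demo.xml")
with open(path, "w", encoding="utf-8") as out:
    doc.writexml(out, encoding="utf-8")

with open(path, encoding="utf-8") as f:
    saved = f.read()
```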
https://www.jguru.com/faq/view.jsp?EID=121168
24 September 2007 09:37 [Source: ICIS news]

SINGAPORE (ICIS news)--Petrochemical Corp of Singapore (PCS) will shut down its toluene and mixed xylene lines at its No 1 cracker between June and July next year, said a company source on Monday.

The scheduled shutdown at the Jurong Island-based plant was expected to last for about 30-35 days, he added. Expansion of production capacity was not being planned during the shutdown.

PCS could produce a total of about 150,000 tonnes/year of toluene and 75,000-80,000 tonnes/year of solvent grade xylene from its two crackers, the source added. The company is one of the leading suppliers of toluene and solvent grade xylene in southeast Asia.

Last week, toluene prices in Asia rose sharply by $35-40/tonne to $820-835/tonne FOB (free on board).

Solvent grade prices in Asia also firmed by $10-15/tonne to $825-835
http://www.icis.com/Articles/2007/09/24/9064321/pcs-to-shut-no.1-aromatics-mid-2008.html
On Tue, Sep 17, 2002 at 02:39:59PM -0400, Jeff Garzik wrote:
> Tom Rini wrote:
> > Right now there's a bit of a mess with all of the BIN_TO_BCD/BCD_TO_BIN
> > macros in the kernel. It's defined in a half dozen places, and worse
> > yet, not all places use them the same way. Most users do something
> > like:
> >
> > if ( ... )
> >     BIN_TO_BCD(x);
> >
> > But in a few places, it's used as:
> >
> > if ( ... )
> >     y = BIN_TO_BCD(x);
> >
> > The following creates include/linux/bcd.h which has the 'normal'
> > BIN_TO_BCD macros, as well as CONVERT_{BIN,BCD}_TO_{BCD,BIN},
> > which are for the second case.
>
> hmmm... removing all the private definitions certainly makes good sense,
> but having both CONVERT_foo and foo seems a bit wonky...
>
> IMO it would be better to have BIN_TO_BCD which returns a value, and
> __BIN_TO_BCD which has side effects but returns no value...

Well, this was done in part to minimize change. The version which returns no value is far more common than the one which does, and would require changing a lot more files (and would also make getting this into 2.4 harder too, which I would like to do someday if this gets into 2.5). The other reason is that CONVERT_foo makes it quite obvious what is being done, whereas __xxx at least in my mind has namespace implications (like how it's used in libc, etc. But kernel namespace isn't like the rest I
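For readers unfamiliar with the macros being consolidated: BCD stores each decimal digit in its own nibble. The two conversions, as they appear for 0-99 values in the kernel's include/linux/bcd.h, can be modelled in Python like this (a sketch for illustration, not the kernel source):

```python
def bcd_to_bin(val):
    """BCD_TO_BIN: low nibble is the ones digit, high nibble is the tens."""
    return (val & 0x0F) + ((val >> 4) * 10)

def bin_to_bcd(val):
    """BIN_TO_BCD: inverse mapping, valid for binary values 0..99."""
    return ((val // 10) << 4) | (val % 10)
```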
http://lkml.iu.edu/hypermail/linux/kernel/0209.2/0384.html
HA routers interact badly with l2pop

Bug Description

Since internal HA router interfaces are created on more than a single agent, this interacts badly with l2pop, which assumes that a Neutron port is located in a certain place in the network. We'll need to report to l2pop when an HA router transitions to an active state, so the port location is changed.

Patch is here: https:/

Yes I saw that Mathieu :) I'm working on a fix, close to the DVR approach for the binding. So a lot of redundant code; I'll rebase it on top of the race condition fix, and we will see how to manage the refactoring part.

Finally, I have been unable to reproduce the issue. With the arp responder set to False, and since regular mac learning still has a higher priority than the rule pushed by l2pop, I do not see any issue here. Tested with DVR/L2Pop. In my previous comment I wrote something about the arp response, but we don't care here :)

My environment is Ubuntu 14.04, with a source install of OpenStack Juno. I am running Linux bridges using VXLAN for tunnels. There is clearly a problem: HA router keepalived changes are not reflected in the fdb's on the various nodes. If the l3 node that is showing to be master in the keepalived process fails and causes another l3 node to be the new master, the fdb tables are never changed on the various nodes, causing a communications failure for the VMs.

Also, if the master network node (as decided by the keepalived processes) is rebooted, keepalived detects the failure and moves the VIP to a new node. Once the rebooted node comes online and reports that it is well, Neutron sets up the router namespace and the keepalived and conntrackd processes on this node; l2pop also sees the new node and sets the fdb's on all nodes to point to this node that just came alive. This causes the VMs to lose communication.

Well, I think I also hit this problem. I'm using the same environment, Linuxbridge+VxLAN. I am hitting this issue consistently in my setup currently.
I have a multi-node devstack setup with 1 controller, 2 network nodes, and 2 compute nodes. In my script I set up 1 router (HA), 2 networks (one subnet each), and 2 VMs (one on each subnet). Pings to/from Nova VMs fail because the packets are directed to the passive router instance instead of the active router instance. I assume this is from l2pop since the router has two ports associated with it. Once I turn off l2pop on all nodes, my script (and pings) work fine.

Sorry - to add to my setup above, I am running with VXLAN as well.

Sorry about that, did not intend to change the status.

So, the current issue is not due to mac learning but because the tunnels are not created to the correct host. It seems that the best/simplest way to fix this is to allow a port to be bound to several hosts. I know that Robert Kukura wanted to generalize this concept so that all ports could be bound to multiple hosts. This way distributed ports and single-bind ports are just sub-cases of a generalized concept.

Yes, I had a discussion with Robert about the refactoring work, especially the DB schema, the multiple port binding, and the way we could leverage it. Here is the refactoring bug: https:/

This could be backported, depending on how we end up solving it. If we do backport the fix, we could backport the fix for https:/

Let's split the bug to dedicate this one to an OVS implementation. The corresponding Linuxbridge bug is here: https:/

After discussion with Carl, moving this one to Liberty. We can backport it to Kilo once it merges in Liberty.

Change abandoned by Kyle Mestery (.

Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master

commit 91d3a0219a43a. This patch also changes the L3 HA failover test to use l2pop. Note that the test does not pass when using l2pop without this patch.

Closes-Bug: #1365476
Co-Authored-By: Assaf Muller <email address hidden>
Change-Id: I8475548947526d

@Liu: I removed the 'known issues' section.
For what it's worth, anyone can edit that page.

Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: feature/pecan

commit 966203f89dee8fe
Author: Sukhdev Kapur <email address hidden>
Date: Wed Jul 1 16:30:44 2015 -0700

    Neutron-Ironic integration patch

    This patch is in preparation for the integration of Ironic and Neutron.
    A new vnic_type is being added so that ML2 drivers can filter for all
    Ironic ports based upon a match for 'baremetal'. Nova/Ironic will set
    this vnic_type when issuing a port-create request to Neutron.
    (e.g. binding:vnic_type = 'baremetal')

    Change-Id: I25dc9472b31db0
    Partial-

commit 236e408272bcb9b
Author: Oleg Bondarev <email address hidden>
Date: Tue Jul 7 12:02:58 2015 +0300

    DVR: fix router scheduling

    Fix scheduling of DVR routers to not stop scheduling once the csnat
    portion was scheduled. See bug report for the failing scenario. This
    partially reverts commit 3794b4a83e68041 and fixes bug 1374473 by moving
    csnat scheduling after general dvr router scheduling, so double binding
    does not happen.

    Closes-Bug: #1472163
    Related-Bug: #1374473
    Change-Id: I57c06e2be732e4

commit e152f93878b9bb6
Author: Assaf Muller <email address hidden>
Date: Sat Aug 8 21:15:03 2015 +0300

    TESTING.rst love

    Change-Id: I64b569048f8f87

commit 633c52cca1b383a
Author: sridhargaddam <email address hidden>
Date: Wed Aug 5 10:49:33 2015 +0000

    Avoid dhcp_release for ipv6 addresses

    dhcp_release is only supported for IPv4 addresses [1] and not for IPv6
    addresses [2]. There will be no effect when it is called with an IPv6
    address. This patch adds a corresponding note and avoids calling
    dhcp_release for IPv6 addresses.
[1] http:// [2] http:// Change-Id: I8b8316c9d3d011

commit 2de8fad17402f38 Author: OpenStack Proposal Bot <email address hidden> Date: Mon Aug 10 06:11:06 2015 +0000 Imported Translations from Transifex For more information about this automatic import see: https:/ Change-Id: I2b423e83a7d0ac

commit fef79dc7b9162e0 Author: Henry Gessau <email address hidden> Date: Mon Aug 3 23:30:34 2015 -0400 Consistent layout and headings for devref The lack of convention for heading levels among the independently written devref documents was starting to make the Table of Contents look rather messy when rendered in HTML. This patch does not cover the "Neutron Internals" section since its layo...

Tested with a three-node setup by failing the HA network between the two snat nodes (ifconfig tap--xxxxx down). I see the standby node take over and program the correct IP addresses; however, the ha_router_ Therefore the "switch" in the datapath does not happen and the VMs cannot ping the new gateway (on the new snat). Am I testing this incorrectly? Is doing ifconfig ... down not enough to trigger a failover in the database?

There appears to be no issue. I was not waiting long enough for failover to register. Everything seems to work as expected.

Yes, it could take a while due to the need to wait for the server to be notified who's actually master. Thanks for testing this!

Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: stable/kilo commit 7c2727c4cdb79. (cherry picked from commit 91d3a0219a43a2c Conflicts: neutron/ Closes-Bug: #1365476 Co-Authored-By: Assaf Muller <email address hidden> Change-Id: I8475548947526d

This has some overlap with: https://bugs.launchpad.net/neutron/+bug/1367391 and https://bugs.launchpad.net/neutron/+bug/1372438
https://bugs.launchpad.net/neutron/+bug/1365476
Suppose I have two classes, each of them providing a Builder API.

// This is in Ivy Script
import example.Person.Builder;
Builder errorPersonBuilder = new Person.Builder(); // this line generates a compilation error saying class Person.Builder not found.
Builder workingPersonBuilder = new Builder(); // this works.

It is still OK if there is only one Builder inside the context. As soon as there is another:

// This is in Ivy Script
import example.Person.Builder;
import example.Company.Builder; // compilation error of collision: there are two classes named Builder.
Builder workingPersonBuilder = new Builder(); // this works, see above.
Builder anotherBuilder = new Builder(); // which builder?

If Ivy Script supported the same as normal Java, we could do:

import example.Person.Builder;
import example.Company.Builder;
Person.Builder personBuilder = new Person.Builder();
Company.Builder companyBuilder = new Company.Builder();

Should this be supported in Ivy Script?

asked 29.04.2016 at 07:13 Genzer Hawker 481●23●26●36 accept rate: 66%

Hi

This code here does not work in Java as well. The compiler says:

The import test.Person.Builder collides with another import statement

In Java and IvyScript the following works:

import test.Company;
import test.Person;
Person.Builder personBuilder = new Person.Builder();
Company.Builder companyBuilder = new Company.Builder();

If you are working with builders, we usually provide a static method to start building. E.g.:

public class Person {
    public static Builder build() {
        return new Builder();
    }
    public static class Builder {}
}

Then you do not have to import the Builder itself. Simply use:

Person person = Person.build().withName("Weiss").withGender(Gender.MALE).toPerson();

answered 29.04.2016 at 08:28 Reto Weiss ♦♦ 4.9k●19●28●57 accept rate: 74%

Oh, I didn't pay attention to the import of the Java context. It's true. You are right!
For the suggestion in your answer, yes, that's what I do too (combining a builder with a static factory method). However, some libraries, and even the JDK, still just use a plain Builder. Thanks!
https://answers.axonivy.com/questions/1756/inconveniences-when-using-inner-classes-in-ivy-script
Greasemonkey Hacks/Linkmania! From WikiContent Current revision Hacks 13–20: Introduction The Web revolves around links. Links take you to a site, let you navigate within it, and finally take you somewhere else. But not all links are created equal. Some links launch a new window without your permission. Some launch external applications. Some execute a piece of JavaScript code, which means they could do almost anything. And some links aren't even clickable. The first step to reclaiming your browser is taking control of links. Turn Naked URLs into Hyperlinks Make every URL clickable. Have you ever visited a page that displayed a naked URL that you couldn't click? That is, the URL is displayed as plain text on the page, and you need to manually copy the text and paste it into a new browser window to follow the link. I run into this problem all the time while reading weblogs, because many weblog publishing systems allow readers to submit comments (including URLs) but just display the comment verbatim without checking whether the comment includes a naked URL. This hack turns all such URLs into clickable links. The Code This user script runs on all pages. To ensure that it does not affect URLs that are already linked, it uses an XPath query that includes not(ancestor::a). To ensure that it does affect URLs in uppercase, the XPath query also includes "contains(translate(., 'HTTP', 'http'), 'http')]". Once we find a text node that definitely contains an unlinked URL, there could be more than one URL within it, so we need to convert all the URLs while keeping the surrounding text intact. We replace the text with an empty <span> element as a placeholder and then incrementally reinsert each non-URL text snippet and each constructed URL link. 
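Outside the DOM, the split-and-reinsert step described above can be modeled as a pure function over a text node's value (splitTextByUrls and its {type, value} return shape are illustrative names of mine, not part of the hack):

```javascript
// Split a text node's value into alternating plain-text and URL
// segments, mirroring the hack's exec() loop. In the real script the
// "text" parts become text nodes and the "link" parts become <a> elements.
function splitTextByUrls(text) {
    var urlRegex = /\b(https?:\/\/[^\s+"<>]+)/ig;
    var parts = [];
    var lastIndex = 0;
    var match;
    while ((match = urlRegex.exec(text)) !== null) {
        // text before the URL stays plain...
        parts.push({ type: "text", value: text.substring(lastIndex, match.index) });
        // ...and the URL itself gets wrapped in a link
        parts.push({ type: "link", value: match[0] });
        lastIndex = urlRegex.lastIndex;
    }
    parts.push({ type: "text", value: text.substring(lastIndex) });
    return parts;
}

// splitTextByUrls("see http://example.com/x for details") yields
// [{text: "see "}, {link: "http://example.com/x"}, {text: " for details"}]
// (values abbreviated).
```

The surrounding text is preserved exactly, which is why the hack can replace the original text node with a <span> and rebuild its contents piece by piece.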
Save the following user script as linkify.user.js:

// ==UserScript==
// @name Linkify
// @namespace
// @description Turn plain-text URLs into hyperlinks
// @include *
// ==/UserScript==

// based on code by Aaron Boodman
// and included here with his gracious permission

var urlRegex = /\b(https?:\/\/[^\s+\"\<\>]+)/ig;
var snapTextElements = document.evaluate("//text()[not(ancestor::a) " +
    "and not(ancestor::script) and not(ancestor::style) and " +
    "contains(translate(., 'HTTP', 'http'), 'http')]", document, null,
    XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
for (var i = snapTextElements.snapshotLength - 1; i >= 0; i--) {
    var elmText = snapTextElements.snapshotItem(i);
    if (urlRegex.test(elmText.nodeValue)) {
        var elmSpan = document.createElement("span");
        var sURLText = elmText.nodeValue;
        elmText.parentNode.replaceChild(elmSpan, elmText);
        urlRegex.lastIndex = 0;
        for (var match = null, lastLastIndex = 0;
             (match = urlRegex.exec(sURLText)); ) {
            elmSpan.appendChild(document.createTextNode(
                sURLText.substring(lastLastIndex, match.index)));
            var elmLink = document.createElement("a");
            elmLink.setAttribute("href", match[0]);
            elmLink.appendChild(document.createTextNode(match[0]));
            elmSpan.appendChild(elmLink);
            lastLastIndex = urlRegex.lastIndex;
        }
        elmSpan.appendChild(document.createTextNode(
            sURLText.substring(lastLastIndex)));
        elmSpan.normalize();
    }
}

Running the Hack

Before installing the user script, go to, an article published by Mark Nottingham on his weblog. I followed Mark's weblog for many years, and the only thing I disliked was that the links I posted in the comments section would be displayed as plain text (i.e., not as hyperlinks). The comments section at the end of this article has several contributed URLs that display as plain text, as shown in Figure 2-1. Now, install the user script (Tools → Install This User Script), and refresh. All the URLs in the comments section are now real hyperlinks, as shown in Figure 2-2.
Hacking the Hack

You might want to distinguish between links that were part of the original page and links that were created by this script. You can do this by adding a custom style to the elmLink element. Change this line:

var elmLink = document.createElement("a");

to this:

var elmLink = document.createElement("a");
elmLink.title = 'linkified by Greasemonkey!';
elmLink.style.textDecoration = 'none';
elmLink.style.borderBottom = '1px dotted red';

The linkified URLs will now be underlined with a dotted red line, as shown in Figure 2-3.

Force Offsite Links to Open in a New Window

Keep your browser organized by automatically opening each site in its own window.

I originally wrote this user script after someone posted a request to the Greasemonkey script repository. I personally like to open links in a new tab in the current window, but some people prefer to open a separate window for each site. Offsite Blank lets you do this automatically, by forcing offsite links to open in a new window.

The Code

This user script runs on remote web sites (but not, for example, on HTML documents stored on your local machine that you open from the File → Open menu). Since search engines exist to provide links to other pages, and I find it annoying for search result links to open new windows, I've excluded Google and Yahoo! by default. The code itself breaks down into four steps:

- Get the domain of the current page.
- Get a list of all the links on the page.
- Compare the domain of each link to the domain of the page.
- If the domains don't match, set the target attribute of the link so that it opens in a new window.
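Steps 3 and 4 boil down to a one-line host comparison; as a standalone sketch (isOffsite is an illustrative name, not from the script):

```javascript
// Step 3 as a standalone predicate: a link counts as offsite when it
// has a host and that host differs from the current page's host.
// (In the real script these values come from location.host and link.host.)
function isOffsite(linkHost, currentHost) {
    return Boolean(linkHost) && linkHost !== currentHost;
}

// Note that a plain string comparison treats subdomains of one site
// as offsite: isOffsite("developers.slashdot.org", "slashdot.org") is true.
```

When the predicate is true, the script simply sets link.target = "_blank" and lets the browser do the rest.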
Save the following user script as offsiteblank.user.js:

// ==UserScript==
// @name Offsite Blank
// @namespace
// @description Force offsite links to open in a new window
// @include http*://*
// @exclude http://*.google.tld/*
// @exclude http://*.yahoo.tld/*
// ==/UserScript==

var sCurrentHost = location.host;
var arLinks = document.links;
for (var i = arLinks.length - 1; i >= 0; i--) {
    var elmLink = arLinks[i];
    if (elmLink.host && elmLink.host != sCurrentHost) {
        elmLink.target = "_blank";
    }
}

Running the Hack

After installing the user script (Tools → Install This User Script), go to. Click on one of the links in the navigation bar, such as "About us." The link will open in the same window, as normal. Go back to, scroll to the bottom of the page, and click the Plone Powered link to visit. Since the link points to a page on another site, Firefox will automatically open the link in a new window, as shown in Figure 2-4.

Hacking the Hack

This hack is somewhat naive about what constitutes an offsite link. For example, if you visit and click a link on, the script will force the link to open in a new window, because it considers developers.slashdot.org to be a different site than. We can fix this by modifying the user script to compare only the last part of the domain name.
Save the following user script as offsiteblank2.user.js:

// ==UserScript==
// @name Offsite Blank 2
// @namespace
// @description Force offsite links to open in a new window
// @include http*://*
// @exclude http*://*.google.tld/*
// @exclude http*://*.yahoo.tld/*
// ==/UserScript==

var NUMBER_OF_PARTS = 2;
var sCurrentHost = window.location.host;
var arParts = sCurrentHost.split('.');
if (arParts.length > NUMBER_OF_PARTS) {
    sCurrentHost = [arParts[arParts.length - NUMBER_OF_PARTS],
        arParts[arParts.length - 1]].join('.');
}
var arLinks = document.getElementsByTagName('a');
for (var i = arLinks.length - 1; i >= 0; i--) {
    var elmLink = arLinks[i];
    var sHost = elmLink.host;
    if (!sHost) {
        continue;
    }
    var arLinkParts = sHost.split('.');
    if (arLinkParts.length > NUMBER_OF_PARTS) {
        sHost = [arLinkParts[arLinkParts.length - NUMBER_OF_PARTS],
            arLinkParts[arLinkParts.length - 1]].join('.');
    }
    if (sHost != sCurrentHost) {
        elmLink.target = "_blank";
    }
}

This script is still naive about what constitutes an offsite link; it's just naive in a different way than the first script. On sites such as amazon.co.uk, this script thinks the current domain is co.uk, instead of amazon.co.uk. You can further refine this behavior by changing the NUMBER_OF_PARTS constant at the top of the script from 2 to 3.
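The last-N-labels trick, and its co.uk pitfall, can be tried in isolation; lastLabels is my own helper name, and Array slice is used in place of the script's explicit indexing:

```javascript
// Keep only the last n dot-separated labels of a hostname, mirroring
// the NUMBER_OF_PARTS logic in the script above.
function lastLabels(host, n) {
    var parts = host.split(".");
    return parts.length > n ? parts.slice(-n).join(".") : host;
}

// With n = 2, subdomains of one site now compare equal:
// lastLabels("developers.slashdot.org", 2) === "slashdot.org"
// ...but country-code domains collapse too far:
// lastLabels("www.amazon.co.uk", 2) === "co.uk"
// Raising n to 3 fixes amazon.co.uk, at the cost of splitting
// ordinary three-label hosts differently:
// lastLabels("www.amazon.co.uk", 3) === "amazon.co.uk"
```

There is no single value of n that handles both .com and .co.uk correctly, which is exactly the trade-off the NUMBER_OF_PARTS constant exposes.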
This means that if a user right-clicks the link and tries to open it in a new window, the popup function is undefined and the user gets an error message. Likewise, if the user attempts to save or print the contents of the hyperlink, the browser first has to download it, which it cannot do because the href doesn't contain a URL; it contains a random JavaScript statement. There is no reason for web developers ever to do this. You can easily write annoying pop-up windows and retain the benefits of regular hyperlinks, by adding an onclick handler to a regular hyperlink: <a href="" onclick="window.open(this.href); return false;"> go to youngpup.net </a> Using Greasemonkey, we can scan for javascript: links that appear to open pop-up windows and then change them to use an onclick handler instead. The Code This user script runs on every page. It loops through every link on the page, looking for javascript: URLs. If the link's href attribute begins with javascript:, the script checks whether it appears to open a pop-up window by looking for something that looks like a URL after the javascript: keyword. Since the overwhelming majority of web authors that use "javascript:" links use them to open pop-up windows, this should not have too many false positives. If the script determines that the link is trying to open a pop-up window, it attempts to reconstruct the target URL. It changes the surrounding <a> element to use a normal href attribute and move the JavaScript code to the onclick event handler. 
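The detection-and-extraction step can be exercised outside the browser. This sketch mirrors the script's check (extractPopupUrl is an illustrative name; the regex uses the same character class as the script's urlRegex):

```javascript
// If an href uses the javascript: pseudo-protocol and contains
// something URL-shaped, return that URL; otherwise return null.
function extractPopupUrl(href) {
    if (href.toLowerCase().indexOf("javascript:") !== 0) {
        return null; // not a javascript: pseudo-link
    }
    // Stop at whitespace, quotes, parentheses, and angle brackets,
    // so the match ends cleanly inside popup('...') calls.
    var match = href.match(/\bhttps?:\/\/[^\s+"<>'()]+/i);
    return match ? match[0] : null;
}

// extractPopupUrl("javascript:popup('http://youngpup.net/')")
//   returns "http://youngpup.net/"
// extractPopupUrl("javascript:popupHomePage()") returns null —
// no URL-shaped text, so there is nothing to fix.
```

A null result is the false-negative case: the link really is a pop up, but the target URL simply isn't present in the href for any script to recover.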
Save the following user script as popupfixer.user.js:

// ==UserScript==
// @name Popup Window Fixer
// @namespace
// @description Fixes javascript: pseudo-links that pop up windows
// @include *
// ==/UserScript==

const urlRegex = /\b(https?:\/\/[^\s+\"\<\>\'\(\)]+)/ig;
var candidates = document.getElementsByTagName("a");
for (var cand = null, i = 0; (cand = candidates[i]); i++) {
    if (cand.getAttribute("onclick") == null &&
        cand.href.toLowerCase().indexOf("javascript:") == 0) {
        var match = cand.href.match(urlRegex);
        if (!match) { continue; }
        cand.href = match[0];
    }
}

Running the Hack

Before installing the script, create a file called testpopup.html with the following contents:

<html>
<head>
<script type="text/javascript">
function popup(url) {
    window.open(url);
}
</script>
</head>
<body>
<a href="javascript:popup('')">Aaron's home page</a>
</body>
</html>

Save the file and open it in Firefox (File → Open…). When you hover over the link, the status bar displays the JavaScript. When you click the link, it does indeed open. However, if you right-click the link and select "Open in new tab," the link fails, because the pop-up function is defined only on the current page, not in the new tab. Now, install the user script (Tools → Install This User Script) and refresh the test page. Hover over the link again, and you will see in the status bar that the link now points directly to the URL, as shown in Figure 2-5. This means you can right-click the link and open it in a new tab, save the target page to your local computer, or print it. When you click the link directly, it still opens an annoying pop-up window, as the developer intended.

Hacking the Hack

There are various problems with this code, all stemming from the fact that it is hard to know for sure that a "javascript:" link is supposed to open a pop up.
For example, this script cannot detect any of these pop-up links:

<a href="javascript:popup('foo.html')">
<a href="javascript:popup('foo/bar/')">
<a href="javascript:popupHomePage()">

All we can say for sure is that hyperlinks that use the "javascript:" pseudo-protocol are not really hyperlinks. So, maybe they shouldn't look like hyperlinks. If they didn't, you'd be less confused when they didn't act like hyperlinks.

Save the following user script as popupstyler.user.js:

var candidates = document.links;
for (var cand = null, i = 0; (cand = candidates[i]); i++) {
    if (cand.href.toLowerCase().indexOf("javascript:") == 0) {
        with (cand.style) {
            background = "#ddd";
            borderTop = "2px solid white";
            borderLeft = "2px solid white";
            borderRight = "2px solid #999";
            borderBottom = "2px solid #999";
            padding = ".5ex 1ex";
            color = "black";
            textDecoration = "none";
        }
    }
}

Uninstall popupfixer.user.js, install popupstyler.user.js, and refresh the test page. (In general, you can run the two scripts simultaneously, just not in this demonstration.) The "javascript:" link now appears to be a button, as shown in Figure 2-6. There is no complete solution that will work with every possible "javascript:" pop-up link, since there are so many variations of JavaScript code to open a new window. In theory, you could redefine the window.open function and manually call the JavaScript code in each link, but this could have serious unintended side effects if the link did something other than open a window. Your best bet is a combination of fixing the links you can fix and styling the rest.

—Aaron Boodman

Remove URL Redirections

Cut out the middleman and make links point directly to where you want to go.

Many portal sites use redirection links for links that point to other sites. The link first goes to a tracking page on the portal, which logs your click and sends you on your way to the external site.
Not only is this an invasion of privacy, but it's also slower, since you need to load the tracking page before you are redirected to the page you actually want to read. This hack detects such redirection links and converts them to direct links that take you straight to the final destination.

The Code

This user script runs on all pages, except for a small list of pages where it is known to cause problems with false positives. It uses the document.links collection to find all the links on the page and checks whether the URL of the link includes another URL within it. If it finds one, it extracts it and unescapes it, and replaces the original URL.

Save the following user script as nomiddleman.user.js:

// ==UserScript==
// @name NoMiddleMan
// @namespace
// @description Rewrites URLs to remove redirection scripts
// @include *
// @exclude*
// @exclude http://*bloglines.com/*
// @exclude*
// @exclude http://*wists.com/*
// ==/UserScript==

// based on code by Albert Bachand
// and included here with his gracious permission

for (var i = 0; i < document.links.length; i++) {
    var link, temp, start, url, qindex, end;
    link = document.links[i];
    // special case for Google results (assumes English language)
    if (link.text == 'Cached' || /Similar.*?pages/.exec(link.text)) {
        continue;
    }
    temp = link.href.toLowerCase();
    // ignore javascript links and GeoURL
    if (temp.indexOf('javascript:') == 0 ||
        temp.indexOf('geourl.org') != -1) {
        continue;
    }
    // find the start of the (last) real url
    start = Math.max(temp.lastIndexOf('http%3a'),
        temp.lastIndexOf('http%253a'), temp.lastIndexOf('http:'));
    if (start <= 0) {
        // special case: handle redirect url without a 'http:' part
        start = link.href.lastIndexOf('www.');
        if (start < 10) {
            start = 0;
        } else {
            link.href = link.href.substring(0, start) + 'http://' +
                link.href.substring(start);
        }
    }
    // we are most likely looking at a redirection link
    if (start > 0) {
        url = link.href.substring(start);
        // check whether the real url is a parameter
        qindex = link.href.indexOf('?');
        if (qindex > -1 && qindex < start) {
            // it's a parameter, extract only the url
            end = url.indexOf('&');
            if (end > -1) {
                url = url.substring(0, end);
            }
        }
        // handle Yahoo's chained redirections
        var temp = url;
        url = unescape(url);
        while (temp != url) {
            temp = url;
            url = unescape(url);
        }
        // and we're done
        link.href = url.replace(/&amp;/g, '&');
    }
}

Running the Hack

Before installing the user script, go to and search for greasemonkey. In the list of search results, each linked page is really a redirect through Yahoo!'s servers, as shown in Figure 2-7. Now, install the user script (Tools → Install This User Script), go back to, and execute the same search. The link to the search result page now points directly to, as shown in Figure 2-8. There are a variety of ways that sites can redirect links through a tracking page. This script doesn't handle all of them, but it handles the most common cases and a few special cases used by popular sites (such as Yahoo!). The author maintains a weblog at where you can check for updates to this script.

Warn Before Opening PDF Links

Make your browser double-check that you really want to open that monstrous PDF.

How many times has this happened to you? You're searching for something, or just browsing, and click on a promising-looking link. Suddenly, your browser slows to a crawl, and you see the dreaded "Adobe Acrobat Reader" splash screen. Oh no, you've just opened a PDF link, and your browser is launching the helper application from hell. This hack saves you the trouble, by popping up a dialog box when you click on a PDF file to ask you if you're sure you want to continue. If you cancel, you're left on the original page and can continue browsing in peace.

Tip: This hack is derived from a Firefox extension called TargetAlert, which offers more features and customization options. Download it at.
The Code

This user script runs on all pages. It iterates through the document.links collection, looking for links pointing to URLs ending in .pdf. For each link, it attaches an onclick handler that calls the window.confirm function to ask you if you really want to open the PDF document.

Save the following user script as pdfwarn.user.js:

// ==UserScript==
// @name PDF Warn
// @namespace
// @description Ask before opening PDF links
// @include *
// ==/UserScript==

// based on code by Sencer Yurdagül and Michael Bolin
// and included here with their gracious permission

for (var i = document.links.length - 1; i >= 0; i--) {
    var elmLink = document.links[i];
    if (elmLink.href && elmLink.href.match(/^[^\\?]*pdf$/i)) {
        var sFilename = elmLink.href.match(/[^\/]+pdf$/i);
        elmLink.addEventListener('click', function(event) {
            if (!window.confirm('Are you sure you want to ' +
                'open the PDF file "' + sFilename + '"?')) {
                event.stopPropagation();
                event.preventDefault();
            }
        }, true);
    }
}

Running the Hack

After installing the user script (Tools → Install This User Script), go to and search for census filetype:pdf. At the time of this writing, the first search result is a link to a PDF file titled "Income, Poverty, and Health Insurance Coverage in the United States." Click the link, and Firefox will pop up a warning dialog asking you to confirm opening the PDF, as shown in Figure 2-9. If you click OK, the link will open, Firefox will launch the Adobe Acrobat plug-in, and you will see the PDF without further interruption. If you click Cancel, you'll stay on the search results page, where you can click "View as HTML" to see Google's version of the PDF file converted to HTML.

Avoid the Slashdot Effect

Add web cache links to Slashdot articles.

Reading Slashdot is one of my guilty pleasures. It is a guilty pleasure that I share with tens of thousands of other tech geeks.
People who have been linked from a Slashdot article report that Slashdot sends as many as 100,000 visitors to their site within 24 hours. Many sites cannot handle this amount of traffic. In fact, the situation of having your server crash after being linked from Slashdot is known as the Slashdot effect. Tip Read more about the Slashdot effect at. This hack tries to mitigate the Slashdot effect by adding links to Slashdot articles that point to various global web caching systems. Instead of visiting the linked site, you can view the same page through a third-party proxy. If the Slashdot effect has already taken hold, the linked page might still be available in one of these caches. The Code This user script runs on all Slashdot pages, including the home page. The script adds a number of CSS rules to the page to style the links we're about to add. Then, it constructs three new links—one to Coral Cache, one to MirrorDot, and one to the Google Cache—and adds them after each external link in the Slashdot article. 
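Of the three, the Coral Cache link is the simplest to construct: the script just appends ".nyud.net:8090" to the link's hostname. As a standalone sketch (coralUrl is an illustrative name; the real script mutates anchor.host directly):

```javascript
// Rewrite a URL to its Coral Cache equivalent by appending
// ".nyud.net:8090" to the hostname, keeping path and query intact.
function coralUrl(link) {
    var u = new URL(link); // WHATWG URL, available in browsers and Node
    u.host = u.hostname + ".nyud.net:8090";
    return u.toString();
}

// coralUrl("http://example.com/story?id=1")
//   returns "http://example.com.nyud.net:8090/story?id=1"
```

Because Coral resolves any hostname of the form original-host.nyud.net, the transform needs no lookup table or server-side support; the MirrorDot and Google Cache links instead prepend their own query URLs to the original address.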
Save the following user script as slashdotcache.user.js: // ==UserScript== // @name Slashdot Cache // @namespace GreaseMonkey/ // @description Adds links to web caches on Slashdot // @include* // @include http://*.slashdot.tld/* // ==/UserScript== // based on code by Valentin Laube // and included here with his gracious permission var coralcacheicon = '+ 'oAAAAKCAYAAACNMs%2B9AAAAgUlEQVQY042O0QnCQBQEZy0sFiEkVVxa8GxAuLOLgD3cV'+ 'RKwAytYf05JkGgGFt7H8nZkG10UgBNwZE0F7j77JiIJGPlNFhGzgwOQd%2FQytrEJdjtb'+ 'rs%2FORAqRZBvZBrQxby2nv5iHniqokquUgM%2FH8Hadh57HNG05rlMgFXDL0vE%2FL%2'+ 'BEXVN83HSenAAAAAElFTkSuQmCC'; var mirrordoticon = '+ 'AAAAKCAYAAACNMs%2B9AAAAbklEQVQY05WQMRKEMAwDNzzqUobWv%2BBedvcK3EKZV4km'+ 'BiYFE9RYI3mssZIkRjD1Qnbfsvv2uJjdF6AApfELkpDEZ12XmHcefpJEiyrAF%2Fi1G8H'+ '3ajZPjOJVdPfMGV3N%2FuGlvseopprNdz2NFn4AFndcO4mmiYkAAAAASUVORK5CYII%3D'; var googleicon = '+ 'AKCAIAAAACUFjqAAAAiklEQVQY02MUjfmmFxPFgAuIxnz7jwNcU9BngSjae%2FbDxJUPj'+ '1z%2BxMDAYKPLlx8u72wswMDAwASRnrjyIQMDw%2BoW3XfbbfPD5SFchOGCHof2nHmPaT'+ 'gTpmuEPA8LeR6GsKHSNrp8E1c%2B3Hv2A8QKG10%2BiDjUaRD7Qmsuw51GlMcYnXcE4Aq'+ 'SyRn3Abz4culPbiCuAAAAAElFTkSuQmCC'; var backgroundimage = '+ 'DEAAAAOCAYAAACGsPRkAAAAHXRFWHRDb21tZW50AENyZWF0ZWQgd2l0aCBUaGUgR0lNUO'+ '9kJW4AAAC7SURBVEjH7daxDYMwEEbhh11cAxKSKYEV0qeKMgETZBbPkgmYIEqVPisAJZa'+ 'QTOPCUprQZYAY8Sb4P11zGcD9dT0BFuhIpx6wt%2FPjnX0BTxEpjako8uLv1%2FvV49xM'+ 'CGEBLgqwIlI2dZsEAKDIC5q6RURKwCqgM6ZCa01Kaa0xpgLo1CZLsW23YgcdiANxIH4g%'+ '2FOqTHL%2FtVkDv3EyMMSlAjBHnZoBeATaEsIzTkMxF%2FOoZp2F7O2y2hwfwA3URQvMn'+ 'dliTAAAAAElFTkSuQmCC'; function addGlobalStyle(css) { var head, style; head = document.getElementsByTagName('head')[0]; if (!head) { return; } style = document.createElement('style'); style.type = 'text/css'; style.innerHTML = css; head.appendChild(style); } addGlobalStyle('' + 'a.coralcacheicon, a.mirrordoticon, a.googleicon { \n' + ' padding-left: 15px; background: center no-repeat; \n' + '} \n' + 'a.coralcacheicon { \n' + ' background-image: url(' +coralcacheicon + '); \n' + 
'} \n' +
'a.mirrordoticon { \n' +
' background-image: url(' + mirrordoticon + '); \n' +
'} \n' +
'a.googleicon { \n' +
' background-image: url(' + googleicon + '); \n' +
'} \n' +
'a.coralcacheicon:hover, a.mirrordoticon:hover, ' +
'a.googleicon:hover { \n' +
' opacity: 0.5; \n' +
'} \n' +
'div.backgroundimage { \n' +
' display:inline; \n' +
' white-space: nowrap; \n' +
' padding:3px; \n' +
' background:url(' + backgroundimage + ') center no-repeat; \n' +
'}');

var link, anchor, background;
for (var i = 0; i < document.links.length; i++) {
    link = document.links[i];
    // filter relative links
    if (link.getAttribute('href').substring(0, 7) != 'http://') {
        continue;
    }
    // filter all other links
    if (link.parentNode.nodeName.toLowerCase() != 'i' &&
        (link.parentNode.nodeName.toLowerCase() != 'font' ||
         link.parentNode.color != '#000000' ||
         link.parentNode.size == '2') &&
        (!link.nextSibling || !link.nextSibling.nodeValue ||
         link.nextSibling.nodeValue.charAt(1) != '[')) {
        continue;
    }
    // add background
    background = document.createElement('div');
    background.className = 'backgroundimage';
    link.parentNode.insertBefore(background, link.nextSibling);
    // add mirrordot link
    anchor = document.createElement('a');
    anchor.href = '?' + link.href;
    anchor.title = 'MirrorDot - Solving the Slashdot Effect';
    anchor.className = 'mirrordoticon';
    background.appendChild(anchor);
    // add coral cache link
    anchor = document.createElement('a');
    anchor.href = link.href;
    anchor.host += '.nyud.net:8090';
    anchor.title = 'Coral - The NYU Distribution Network';
    anchor.className = 'coralcacheicon';
    background.appendChild(anchor);
    // add google cache link
    anchor = document.createElement('a');
    anchor.href = ':' + link.href;
    anchor.title = 'Google Cache';
    anchor.className = 'googleicon';
    background.appendChild(anchor);
    // add a space so it wraps nicely
    link.parentNode.insertBefore(document.createTextNode(' '),
        link.nextSibling);
}

Running the Hack

After installing the user script (Tools → Install This User Script), go to. In the summary of each article, you will see a set of small icons next to each link, as shown in Figure 2-10. The first icon points to the MirrorDot cache for the linked page, the second icon points to the Coral Cache version of the link, and the third points to the Google Cache version. If the linked page is unavailable because of the Slashdot effect, you can click any of the cache links to attempt to view the link. For example, the MirrorDot link takes you to a page on MirrorDot () that looks like Figure 2-11. The Coral Cache system works on demand: the page is not cached until someone requests it. MirrorDot works by polling Slashdot frequently to find new links before the Slashdot effect takes the linked site down. Google Cache works in conjunction with Google's standard web crawlers, so brand-new pages might not be available in the Google Cache if Google had not indexed them before they appeared on Slashdot.

Convert UPS and FedEx Tracking Numbers to Links

Make it easier to track packages.

All major package-delivery companies have web sites that allow you to track the status of packages. This is especially useful for online shoppers.
Unless you're buying downloadable software, pretty much everything you buy online needs to be shipped one way or another. Unfortunately, not all online retailers are as web-savvy as one might hope. This hack scans web pages for package tracking numbers and then converts them to links that point to the page on the delivery company's web site that shows the shipment's current status. The Code This user script runs on all pages. It is similar to "Turn Naked URLs into Hyperlinks" [Hack #13]. It scans the page for variations of package numbers that are not already contained in an <a> element and then constructs a link that points to the appropriate online tracking site. These patterns are converted into links to UPS ( ): - 1Z 999 999 99 9999 999 9 - 9999 9999 999 - T999 9999 999 This pattern is converted into a link to FedEx (): - 9999 9999 9999 The following patterns are converted into links to the United States Postal Service ( ): - 9999 9999 9999 9999 9999 99 - 9999 9999 9999 9999 9999 Save the following user script as tracking-linkify.user.js: // ==UserScript== // @name UPS/FedEx tracking numbersFedEx Tracking Linkify // @namespace // @description Link package tracking numberstracking numbers to appropriate site // @include * // ==/UserScript== // Based on code by Justin Novack and Logan Ingalls // and included here with their gracious permission // Originally licensed under a Create Commons license // Visit for details var UPSRegex = new RegExp('/\b(1Z ?[0-9A-Z]{3} ?[0-9A-Z]{3} ?[0-9A-Z]{'+ '2} ?[0-9A-Z]{4} ?[0-9A-Z]{3} ?[0-9A-Z]|[\\dT]\\d\\d\\d ?\\d\\d\\d\\d '+ '?\\d\\d\\d)\\b', 'ig'); var FEXRegex = new RegExp('\\b(\\d\\d\\d\\d ?\\d\\d\\d\\d ?\\d\\d\\d\\'+ 'd)\\b', 'ig'); var USARegex = new RegExp('\\b(\\d\\d\\d\\d ?\\d\\d\\d\\d ?\\d\\d\\d\\'+ 'd ?\\d\\d\\d\\d ?\\d\\d\\d\\d ?\\d\\d|\\d\\d\\d\\d ?\\d\\d\\d\\d ?\\d'+ '\\d\\d\\d ?\\d\\d\\d\\d ?\\d\\d\\d\\d)\\b', 'ig'); function UPSUrl(t) { return ''+ 't_by=status&tracknums_displayed=1&TypeOfInquiryNumber=T&loc=e'+ 
        'n_US&InquiryNumber1=' + String(t).replace(/ /g, '') +
        '&track.x=0&track.y=0';
}
function FEXUrl(t) {
    return ''+
        'e=english&cntry_code=us&initial=x&tracknumbers=' +
        String(t).replace(/ /g, '');
}
function USAUrl(t) {
    return ''+
        '2w/output?CAMEFROM=OK&strOrigTrackNum=' +
        String(t).replace(/ /g, '');
}
//::') + ')]';
var candidates = document.evaluate(xpath, document, null,
    XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
//var tO = new Date().getTime();
for (var cand = null, i = 0; (cand = candidates.snapshotItem(i)); i++) {
    // UPS Track
    if (UPSRegex.test(cand.nodeValue)) {
        var span = document.createElement('span');
        var source = cand.nodeValue;
        cand.parentNode.replaceChild(span, cand);
        UPSRegex.lastIndex = 0;
        for (var match = null, lastLastIndex = 0;
             (match = UPSRegex.exec(source)); ) {
            span.appendChild(document.createTextNode(
                source.substring(lastLastIndex, match.index)));
            var a = document.createElement('a');
            a.setAttribute('href', UPSUrl(match[0]));
            a.setAttribute('title', 'Linkified to UPS');
            a.appendChild(document.createTextNode(match[0]));
            span.appendChild(a);
            lastLastIndex = UPSRegex.lastIndex;
        }
        span.appendChild(document.createTextNode(
            source.substring(lastLastIndex)));
        span.normalize();
    }
    // USPS Track
    if (USARegex.test(cand.nodeValue)) {
        var span = document.createElement('span');
        var source = cand.nodeValue;
        cand.parentNode.replaceChild(span, cand);
        USARegex.lastIndex = 0;
        for (var match = null, lastLastIndex = 0;
             (match = USARegex.exec(source)); ) {
            span.appendChild(document.createTextNode(
                source.substring(lastLastIndex, match.index)));
            var a = document.createElement('a');
            a.setAttribute('href', USAUrl(match[0]));
            a.setAttribute('title', 'Linkified to USPS');
            a.appendChild(document.createTextNode(match[0]));
            span.appendChild(a);
            lastLastIndex = USARegex.lastIndex;
        }
        span.appendChild(document.createTextNode(
            source.substring(lastLastIndex)));
        span.normalize();
    }
    // FedEx Track
    if (FEXRegex.test(cand.nodeValue)) {
        var span = document.createElement('span');
        var source = cand.nodeValue;
        cand.parentNode.replaceChild(span, cand);
        FEXRegex.lastIndex = 0;
        for (var match = null, lastLastIndex = 0;
             (match = FEXRegex.exec(source)); ) {
            span.appendChild(document.createTextNode(
                source.substring(lastLastIndex, match.index)));
            var a = document.createElement('a');
            a.setAttribute('href', FEXUrl(match[0]));
            a.setAttribute('title', 'Linkified to FedEx');
            a.appendChild(document.createTextNode(match[0]));
            span.appendChild(a);
            lastLastIndex = FEXRegex.lastIndex;
        }
        span.appendChild(document.createTextNode(
            source.substring(lastLastIndex)));
        span.normalize();
    }
}

Running the Hack

Before installing the script, create a file called testlinkify.html with the following contents:

<html>
<head>
<title>Test Linkify</title>
</head>
<body>
<p>UPS tracking numbers:</p>
<ul>
<li>Package 1Z 999 999 99 9999 999 9 sent</li>
<li>Package 9999 9999 999 sent</li>
<li>Package T999 9999 999 sent</li>
</ul>
<p>FedEx tracking numbers:</p>
<ul>
<li>Package 9999 9999 9999 sent</li>
</ul>
<p>USPS tracking numbers:</p>
<ul>
<li>Package 9999 9999 9999 9999 9999 99 sent</li>
</ul>
</body>
</html>

Save the file and open it in Firefox (File → Open…). It lists a number of variations of (fake) package tracking numbers in plain text, as shown in Figure 2-12. Now, install the user script (Tools → Install This User Script), and refresh the test page. The script has converted the package tracking numbers to links to their respective online tracking sites, as shown in Figure 2-13. If you hover over a link, you will see a tool tip that lets you know that the tracking number was automatically converted to a link.

Follow Links Without Clicking Them

Hover over any link for a short time to open it in a new tab in the background.
This hack was inspired by dontclick.it (), a site that demonstrates some user interaction techniques that don't involve clicking. The site is written in Flash, which annoys me, but it gave me the idea of lazy clicking: the ability to open links just by moving the cursor over the link and leaving it there for a short time. I don't claim that it will cure your carpal tunnel syndrome, but it has changed the way I browse the Web.

The Code

This user script runs on nonsecure web pages. By default, it will not run on secure web pages, because it has been my experience that most secure sites, such as online banking sites, are very unweblike and don't support opening links in new tabs. The script gets a list of all the links (which Firefox helpfully maintains for us in the document.links collection) and attaches three event handlers to each link:

- Mouseover - When you move your cursor to a link, the script starts a timer that lasts for 1.5 seconds (1500 milliseconds). When the timer runs down, it calls GM_openInTab to open the link in a new tab.
- Mouseout - If you move your cursor off a link within 1.5 seconds, the onmouseout event handler cancels the timer, so the link will not open.
- Click - If you actually click a link within 1.5 seconds, the onclick event handler cancels the timer and removes all three event handlers. (Note that you can click a link without leaving the page; for example, holding down the Ctrl key while clicking will open the link in a new tab.)

This means that if you manually follow a link, the auto-open behavior disappears and the link will not open twice.
Save the following user script as autoclick.user.js:

// ==UserScript==
// @name AutoClick
// @namespace
// @description hover over links for 1.5 seconds to open in a new tab
// @include http://*
// ==/UserScript==
var _clickTarget = null;
var _autoclickTimeoutID = null;
function mouseover(event) {
    _clickTarget = event.currentTarget;
    _autoclickTimeoutID = window.setTimeout(autoclick, 1500);
}
function mouseout(event) {
    _clickTarget = null;
    if (_autoclickTimeoutID) {
        window.clearTimeout(_autoclickTimeoutID);
    }
}
function clear(elmLink) {
    if (!elmLink) { return; }
    elmLink.removeEventListener('mouseover', mouseover, true);
    elmLink.removeEventListener('mouseout', mouseout, true);
    elmLink.removeEventListener('click', click, true);
}
function click(event) {
    var elmLink = event.currentTarget;
    if (!elmLink) { return false; }
    clear(elmLink);
    mouseout(event);
}
function autoclick() {
    if (!_clickTarget) { return; }
    GM_openInTab(_clickTarget.href);
    clear(_clickTarget);
}
for (var i = document.links.length - 1; i >= 0; i--) {
    var elmLink = document.links[i];
    if (elmLink.href && elmLink.href.indexOf('javascript:') == -1) {
        elmLink.addEventListener('mouseover', mouseover, true);
        elmLink.addEventListener('mouseout', mouseout, true);
        elmLink.addEventListener('click', click, true);
    }
}

Running the Hack

Before running this hack, you'll need to set up Firefox so that it doesn't bring new tabs to the front. Go to Tools → Options → Advanced. Under Tabbed Browsing, make sure "Select new tabs opened from links" is not checked, as shown in Figure 2-14. Now, install the user script (Tools → Install This User Script), and go to. Hover over any link for a short time (1.5 seconds to be precise), and the link will open in a new tab in the background, as shown in Figure 2-15. The script is smart enough not to reopen links you've already opened. If you move your cursor away from the link you just opened, then move it back to the link, it will not open a second time.
The script is also smart enough not to auto-open links you've already clicked. If you move to another link and click while holding down the Ctrl key (or the Command key on Mac OS X), Firefox will open the link in a new tab in the background. If you move away from the link and then move back, it will not auto-open no matter how long you hover over it.
Select your printer grid and scale your objects accordingly; use the rotation buttons to quickly rotate objects 90 degrees in any direction, and auto-center them on the plate. Export your selected objects as OBJs in a ZIP file locally with the push of a button. Export to 3DSHOOK to get an STL object and its real cost. Send to any public printing service like Shapeways, 3D Hubs, i.materialise, etc., or stream the file directly to your printer.

Install:

Put the PYC file under the scripts directory:
Mac: <username>/Library/Preferences/Autodesk/maya/2015-x64/scripts
PC: C:\Users\<username>\Documents\maya\2015-x64\scripts

And run (or middle-click+drag to create a shelf button):

from print_plugin import *
Print_exporter()

Please use the Feature Requests to give me ideas. Please use the Support Forum if you have any questions or problems. Please rate and review in the Review section.
Qt fails to cross compile (compiling for 32 bit on 64 bit system)

Hi, I'm trying to compile Qt for linux-32 on my Debian 64 bit system. I am using Debian's multiarch, so I have all the necessary packages installed for cross compiling to 32 bit (I have done it many times in the past). At the moment, I am failing to cross compile Qt. It fails at the ./configure stage. My configure is like this:

./configure "-debug-and-release" "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-platform" "linux-g++-32" "-developer-build" "-fontconfig" "-qt-freetype" "-qt-libpng" "-glib" "-qt-xcb" "-dbus" "-qt-sql-sqlite" "-gtkstyle" "-prefix" "/home/john/bin/Qt/5.6.0/" -v

It seems that for some reason the linker cannot find any of the 32 bit libraries. The first time it fails, it's because it cannot find lgthread:

g++ -m32 -Wl,-O1 -o glib glib.o -lgthread-2.0 -pthread -lglib-2.0
/usr/bin/ld: cannot find -lgthread-2.0
collect2: error: ld returned 1 exit status
Makefile:84: recipe for target 'glib' failed
make: *** [glib] Error 1
Glib disabled.

If I ask it to continue, it continues failing on multiple libs, such as for example:

g++ -c -m32 -pipe -O2 -Wall -W -fPIC -I. -I../../../mkspecs/linux-g++-32 -o xcb.o xcb.cpp
xcb.cpp: In function ‘int main(int, char**)’:
xcb.cpp:39:23: warning: unused variable ‘t’ [-Wunused-variable]
xcb_connection_t *t = xcb_connect("", &primaryScreen);
^
g++ -m32 -Wl,-O1 -o xcb xcb.o -lxcb
/usr/bin/ld: cannot find -lxcb
collect2: error: ld returned 1 exit status
Makefile:84: recipe for target 'xcb' failed
make: *** [xcb] Error 1
xcb disabled.
The test for linking against libxcb failed!

In fact, there are multiple failures like this happening. Now, I have installed all the necessary libraries, 32 and 64 bit, that are required to build qt5 - apt-get build-dep qt5-default, as instructed by you, plus many others I thought were necessary.
But now I'm coming to the realisation that the problem is somewhere in the linking stage, where Qt simply cannot find the 32 bit libraries. These libraries are there, and in the path. For example:

ls -l /lib/i386-linux-gnu/libglib-2.0.so*
lrwxrwxrwx 1 root root 36 Oct 7 2015 /lib/i386-linux-gnu/libglib-2.0.so -> /lib/i386-linux-gnu/libglib-2.0.so.0
lrwxrwxrwx 1 root root 23 Jul 13 17:46 /lib/i386-linux-gnu/libglib-2.0.so.0 -> libglib-2.0.so.0.4800.1
-rw-r--r-- 1 root root 1211332 Jul 13 17:46 /lib/i386-linux-gnu/libglib-2.0.so.0.4800.1

So what is going on here? Is this a bug with 5.6.0? Or some incompatibility with cross compilation? Thanks!

No one? I'm sure I can't be the only one cross compiling Qt for x32 on x64!

- small_bird: You need to install the packages which support 32-bit development; you can easily find the answer on the internet.
vmod-dbrw User Manual

This edition of the vmod-dbrw User Manual, last updated 5 August 2017, documents vmod-dbrw Version 2.1.

1 Introduction to vmod-dbrw

Vmod-dbrw is a module for Varnish Cache which implements database-driven rewrite rules. These rules may be similar to RewriteRule directives implemented by the mod_rewrite module in Apache or to Redirect directives of its mod_alias module. What distinguishes the vmod-dbrw rules from these, is that they are handled by Varnish, before the request reaches the httpd server, and that they are stored in an SQL database, which makes them easily manageable. Some web sites implement thousands of rewrite rules. The purpose of this module is to facilitate deploying and handling them.

2 Overview

Rewrite rules are stored in a MySQL or PostgreSQL database. The vmod-dbrw module does not impose any restrictions on its schema. It only needs to know the SQL query which is to be used to retrieve data. This query is supplied to the module, along with the credentials for accessing the database, by calling the config function in the vcl_recv subroutine of the Varnish configuration file.

Once the module is configured, the rewrite function can be called in the appropriate place of the Varnish configuration file. Its argument is a list of variable assignments separated with semicolons, each assignment having the form name=value. When called, rewrite expands the SQL query registered with the prior call to config by replacing each $name construct (a variable reference) with the corresponding value from its argument. Similarly to the shell syntax, the variable reference can also be written as ${name}. This latter form can be used in contexts where the variable reference is immediately followed by a letter, digit or underscore, to prevent it from being counted as a part of the name. The expanded query is then sent to the database server.
If it returns a non-empty set, it is further handled depending on the number of fields it contains. If the returned set has one or two columns, only the first tuple is used and rewrite returns the value of its first column. Otherwise, if the returned set has three or more columns, the regular expression matching is performed.

For the purpose of this discussion, let's refer to the columns as follows: result, regexp, value and flags. The flags column is optional. Any surplus columns are ignored.

For each returned tuple, the value column undergoes variable expansion, using the same algorithm as when preparing the query, and the resulting string is matched with the regexp column, which is treated as an extended POSIX regular expression. If the value matches the expression, the result column is expanded by replacing backreferences: each occurrence of $digit (where digit stands for a decimal digit from ‘0’ through ‘9’) is replaced with the contents of the digits parenthesized subexpression in regexp. For compatibility with the traditional usage, the \digit notation is also allowed. The resulting value is then returned to the caller.

Optional flags column is a comma-separated list of flags that modify regular expression handling:

- ‘NC’, ‘nocase’: Treat regexp as a case-insensitive regular expression.
- ‘case’: Treat regexp as case-sensitive (default).
- ‘QSA’, ‘qsappend’: Treat the resulting value as a URL; append any query string from the original value to it.
- ‘QSD’, ‘qsdiscard’: Treat the resulting value as a URL; discard any query string attached to the original value.
- ‘redirect=code’, ‘R=code’: On success, set the ‘X-VMOD-DBRW-Status’ header to code, which must be a valid HTTP status code.

If regexp or value is NULL, strict matching is assumed (see strict matching). If flags is NULL, it is ignored.
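The result-set interpretation described above can be modeled in Python. This is an illustrative sketch, not the module's actual implementation: the function and variable names are invented, only the NC/nocase flag is modeled, and the QSA/QSD query-string handling is omitted.

```python
import re

def interpret_result_set(rows, variables):
    """Model of how a 3+-column result set is interpreted: for each
    (result, regexp, value, flags) tuple, expand $name references in the
    value column, match it against regexp, and on the first success
    expand $digit (or \\digit) back-references in the result column."""
    def expand(s):
        # replace $name / ${name} with the corresponding variable value
        return re.sub(r'\$\{?(\w+)\}?', lambda m: variables[m.group(1)], s)

    for result, regexp, value, flags in rows:
        flagset = {f.strip() for f in (flags or '').split(',') if f.strip()}
        reflags = re.IGNORECASE if flagset & {'NC', 'nocase'} else 0
        m = re.search(regexp, expand(value), reflags)
        if m:
            # substitute captured subexpressions for $digit / \digit
            return re.sub(r'[$\\](\d)',
                          lambda b: m.group(int(b.group(1))) or '', result)
    return None

# One candidate row, case-insensitive thanks to the NC flag:
rows = [('http://example.org/new/$1', '^/old/(.*)$', '$url', 'NC')]
print(interpret_result_set(rows, {'url': '/OLD/page'}))
```

The first row whose expanded value matches its regexp wins, mirroring the tuple-by-tuple iteration order described above.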
3 Configuration

- function: void config (string dbtype, string params, string query)

This function configures the module and provides it with the data necessary to connect and use the database. It is normally called from the vcl_recv subroutine. Arguments:

- dbtype: Type of the database to use. Valid values are ‘mysql’ and ‘pgsql’.
- params: Database connection parameters. This is a list of ‘name=value’ assignments separated with semicolons. The value part can be any sequence of characters, excepting white space and semicolon. If any of these is to appear in it, they must either be escaped by prepending them with a backslash, or the entire value enclosed in a pair of (single or double) quotes. The following escape sequences are allowed for use in value: If a backslash is followed by a symbol other than listed above, it is removed and the symbol following it is reproduced verbatim.

Valid parameters are:

- ‘debug=n’: Set debugging level. Argument is a decimal number.
- ‘server=host’: Name or IP address of the database server to connect to. If not defined, localhost (‘127.0.0.1’) is assumed. For MySQL databases, if host begins with a slash, its value is taken to be the full pathname of the local UNIX socket to connect to.
- ‘port=n’: Port number on the ‘server’ to connect to. Default is ‘3306’ for MySQL and 5432 for Postgres.
- ‘database=name’: The name of the database to use.
- ‘config=filename’ (MySQL-specific): Read database access credentials and other parameters from the MySQL options file filename.
- ‘group=name’ (MySQL-specific): Read credentials from section name of the options file supplied with the config parameter. Default section name is ‘client’.
- ‘cacert=filename’: Use secure connection to the database server via SSL. The filename argument is a full pathname of the certificate authority file.
- ‘options=string’ (Postgres-specific): Connection options.
- ‘user=name’: Database user name.
- ‘password=string’: Password to access the database.
- query: The SQL query to use. It can contain variable references (in the form $name or ${name}), which will be replaced with the actual value of the name argument to the function rewrite.

The example below configures vmod-dbrw to use the MySQL database ‘rewrite’, with the user name ‘varnish’ and password guessme.

import dbrw;

sub vcl_recv {
    dbrw.config("mysql",
        "database=rewrite;user=varnish;password=guessme",
        {"SELECT dest FROM redirects WHERE host='$host' AND url='$url'"});
}

4 Writing Queries

The query supplied to the config function depends on the database schema and on the kind of matching required. To ensure the best performance of the module it is important to design the database and the query so that the database lookup is as fast as possible.

Suppose that you plan to use vmod-dbrw to implement redirection rules based on strict matching (see strict matching). The simplest database structure for this purpose (assuming MySQL) will be:

CREATE TABLE redirects (
    id INT AUTO_INCREMENT,
    host varchar(255) NOT NULL DEFAULT '',
    url varchar(255) NOT NULL DEFAULT '',
    dest varchar(255) DEFAULT NULL,
    PRIMARY KEY (host,url)
);

The columns and their purpose are:

- id: An integer uniquely identifying the row. It is convenient for managing the table (e.g. deleting the row).
- host: Host part of the incoming request.
- url: URL part of the incoming request.
- dest: Destination URL to redirect to.

The rewrite function is to look for a row that has ‘host’ and ‘url’ matching the incoming request and to redirect it to the URL in the ‘dest’ column. The corresponding query is:

SELECT dest FROM redirects WHERE host='$host' AND url='$url'

The variables ‘host’ and ‘url’ are supposed to contain the actual host and URL parts of the incoming request.

Handling regular expression matches is a bit trickier. Your query should first return the rows that could match the request. Then the vmod-dbrw engine will do the rest, by iterating over them and finding the one that actually does.
It will iterate over the rows in the order they were returned by the database server, so it might be necessary to sort them by some criterion beforehand. The following is an example table structure:

CREATE TABLE rewrite (
    id INT AUTO_INCREMENT,
    host varchar(255) NOT NULL DEFAULT '',
    url varchar(255) NOT NULL DEFAULT '',
    dest varchar(255) DEFAULT NULL,
    value varchar(255) DEFAULT NULL,
    pattern varchar(255) DEFAULT NULL,
    flags char(64) DEFAULT NULL,
    weight int NOT NULL DEFAULT '0',
    KEY source (host,url)
);

The meaning of id, host, and dest is the same as in the previous example. The meaning of url is described below. Other columns are (see regex matching):

- value: The value to be compared with the pattern.
- pattern: Regular expression to use.
- flags: Optional flags.
- weight: Relative weight of this row in the set. Rows will be sorted by this column, in ascending order.

The simplest way to select candidate rows is by their ‘host’ column:

SELECT dest,pattern,value,flags FROM rewrite WHERE host='$host' ORDER BY weight

One can further abridge the returned set by selecting only those rows whose url column is a prefix of the requested URL:

SELECT dest,pattern,value,flags FROM rewrite WHERE host='$host' AND LOCATE(url,'$url')=1 ORDER BY weight

Furthermore, the url column can contain a SQL wildcard pattern, in which case the query will look like:

SELECT dest,pattern,value,flags FROM rewrite WHERE host='$host' AND '$url' LIKE url ORDER BY weight

5 The rewrite Function

- function: string rewrite (string args)

This function is the workhorse of the module. It rewrites its argument using the database configured in the previous call to config and returns the obtained value. To do so, it performs the following steps:

- Parameter parsing: The args parameter must be a list of name=value pairs separated by semicolons. The function parses this string and builds a symbol table.
- Variable expansion: Using the symbol table built in the previous stage, each occurrence of $name or ${name} is replaced with the actual value of the variable name from the table. Expanding an undefined variable is considered an error.
- Establishing the database connection: Unless the connection has already been established by a prior call to rewrite, the function establishes it using the parameters supplied earlier in a call to config. If the connection fails, the function returns NULL immediately. Database connections are persisting and thread-specific. This means that each thread keeps its own connection to the database and attempts to re-establish it if it goes down for some reason.
- Query execution: The query is sent to the server and the resulting set collected from it.
- Result interpretation: The resulting set is interpreted as described in result interpretation. This results in a single value being returned to the caller.

Assuming the database structure similar to the one discussed in the previous chapter, the following example illustrates how to use rewrite to redirect the incoming request.

sub vcl_recv {
    dbrw.config("mysql",
        "database=rewrite;user=varnish;password=guessme",
        {"SELECT dest FROM redirects WHERE host='$host' AND url='$url'"});
    set req.http.X-Redirect-To = dbrw.rewrite("host=" + req.http.Host + ";" +
        "url=" + req.url);
    if (req.http.X-Redirect-To != "") {
        return(synth(301, "Redirect"));
    }
}

Further handling of the 301 response should be performed in a traditional way, e.g.:

import std;

sub vcl_synth {
    if (resp.status == 301) {
        set resp.http.Location = req.http.X-Redirect-To;
        if (req.http.X-VMOD-DBRW-Status != "") {
            set resp.status = std.integer(req.http.X-VMOD-DBRW-Status, 301);
        }
        return (deliver);
    }
}

The X-VMOD-DBRW-Status header, if set, contains the status code to be returned to the client (see X-VMOD-DBRW-Status). Notice the use of the vmod_std module to cast it to integer.
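The first two steps of rewrite, parameter parsing and variable expansion, can be modeled in Python. This is a simplified sketch with invented function names; it does not implement the quoting and backslash escapes accepted elsewhere, and real use would of course happen inside the module, not in VCL.

```python
import re

def parse_args(args):
    """Parse the rewrite() argument, a list of name=value pairs
    separated by semicolons, into a symbol table (a dict)."""
    table = {}
    for pair in args.split(';'):
        if pair:
            name, _, value = pair.partition('=')
            table[name] = value
    return table

def expand_query(query, table):
    """Replace each $name or ${name} reference with its value;
    expanding an undefined variable is treated as an error."""
    def repl(m):
        name = m.group(1) or m.group(2)
        if name not in table:
            raise KeyError('undefined variable: ' + name)
        return table[name]
    return re.sub(r'\$(?:(\w+)|\{(\w+)\})', repl, query)

table = parse_args('host=example.com;url=/a/b')
print(expand_query(
    "SELECT dest FROM redirects WHERE host='$host' AND url='$url'", table))
```

The expanded string is what would then be sent to the database server in the query-execution step.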
6 How to Report a Bug

As the purpose of bug reporting is to improve software, please be sure to include detailed information when reporting a bug. The minimum information needed is:

- Module version you use.
- A description of the bug.
- Conditions under which the bug appears.
- It is often helpful to send the contents of the config.log file along with your bug report. This file is created after running ./configure in the vmod-dbrw source root directory.

Appendix A. Concept Index

This is a general index of all issues discussed in this manual
Authors: Robert Griesemer & Rob Pike. Last updated: July 18, 2016. Discussion at.

We propose to add alias declarations to the Go language. An alias declaration introduces an alternative name for an object (type, function, etc.) declared elsewhere. Alias declarations simplify splitting up packages because clients can be updated incrementally, which is crucial for large-scale refactoring. They also facilitate multi-package "components" where a top-level package is used to provide a component's public API with aliases referring to the component's internal packages. Alias declarations are important for the Go implementation of the "import public" feature of Google protocol buffers. They also provide a more fine-grained and explicit alternative to "dot-imports".

Suppose we have a library package L and a client package C that depends on L. During refactoring of code, some functionality of L is moved into a new package L1, which in turn may require updates to C. If there are multiple clients C1, C2, ..., many of these clients may need to be updated simultaneously for the system to build. Failing to do so will lead to build breakages in a continuous build environment. This is a real issue in large-scale systems such as we find at Google because the number of dependencies can go into the hundreds if not thousands. Client packages may be under control of different teams and evolve at different speeds. Updating a large number of client packages simultaneously may be close to impossible. This is an effective barrier to system evolution and maintenance.

If client packages can be updated incrementally, one package (or a small batch of packages) at a time, the problem is avoided. For instance, after moving functionality from L into L1, if it is possible for clients to continue to refer to L in order to get the features in L1, clients don't need to be updated at once.

Go packages export constants, types (incl. associated methods), variables, and functions.
If a constant X is moved from a package L to L1, L may trivially depend on L1 and re-export X with the same value as in L1.

package L

import "L1"

const X = L1.X // X is effectively an alias for L1.X

Client packages may use L1.X or continue to refer to L.X and still build without issues.

A similar work-around exists for functions: Package L may provide wrapper functions that simply invoke the corresponding functions in L1. Alternatively, L may define variables of function type which are initialized to the functions which moved from L to L1:

package L

import "L1"

var F = L1.F // F is a function variable referring to L1.F

func G(args…) Result { return L1.G(args…) }

It gets more complicated for variables: An incremental approach still exists but it requires multiple steps. Let's assume we want to move a variable V from L to L1. In a first step, we declare a pointer variable Vptr in L1 pointing to L.V:

package L1

import "L"

var Vptr = &L.V

Now we can incrementally update clients referring to L.V such that they use (*L1.Vptr) instead. This will give them full access to the same variable. Once all references to L.V have been changed, L.V can move to L1; this step doesn't require any changes to clients of L1 (though it may require additional internal changes in L and L1):

package L1

var Vptr = &V

var V T = ...

Finally, clients may be incrementally updated again to use L1.V directly after which we can get rid of Vptr.

There is no work-around for types, nor is it possible to define a named type T in L1 and re-export it in L and have L.T mean the exact same type as L1.T.

Discussion: The multi-step approach to factor out exported variables requires careful planning. For instance, if we want to move both a function F and a variable V from L to L1, we cannot do so at the same time: The forwarder F left in L requires L to import L1, and the pointer variable Vptr introduced in L1 requires L1 to import L. The consequence would be a forbidden import cycle.
Furthermore, if a moved function F requires access to a yet unmoved V, it would also cause a cyclic import. Thus, variables will have to be moved first in such a scenario, requiring multiple steps to enable incremental client updates, followed by another round of incremental updates to move everything else.

To address these issues with a single, unified mechanism, we propose a new form of declaration in Go, called an alias declaration. As the name suggests, an alias declaration introduces an alternative name for a given object that has been declared elsewhere, in a different package. An alias declaration in package L makes it possible to move the original declaration of an object X (a constant, type, variable, or function) from package L to L1, while continuing to define and export the name X in L. Both L.X and L1.X denote the exact same object (L1.X).

Note that the two predeclared types byte and rune are aliases for the predeclared types uint8 and int32. Alias declarations will enable users to define their own aliases, similar to byte and rune.

The existing declaration syntax for constants effectively permits constant aliases:

const C = L1.C // C is effectively an alias for L1.C

Ideally we would like to extend this syntax to other declarations and give it alias semantics:

type T = L1.T // T is an alias for L1.T
func F = L1.F // F is an alias for L1.F

Unfortunately, this notation breaks down for variables, because it already has a given (and different) meaning in variable declarations:

var V = L1.V // V is initialized to L1.V

Instead of "=" we propose the new alias operator "=>" to solve the syntactic issue:

const C => L1.C // for regularity only, same effect as const C = L1.C
type T => L1.T // T is an alias for type L1.T
var V => L1.V // V is an alias for variable L1.V
func F => L1.F // F is an alias for function L1.F

With that, a general alias specification is of the form:

AliasSpec = identifier "=>" PackageName "." identifier .
Per the discussion at, and based on feedback from adonovan@golang, to avoid abuse, alias declarations may refer to imported and package-qualified objects only (no aliases to local objects or "dot-imports"). Furthermore, they are only permitted at the top (package) level, not inside a function. These restrictions do not hamper the utility of aliases for the intended use cases. Both restrictions can be trivially lifted later if so desired; we start with them out of an abundance of caution.

An alias declaration may refer to another alias. The LHS identifier (C, T, V, and F in the examples above) in an alias declaration is called the alias name (or alias for short). For each alias name there is an original name (or original for short), which is the non-alias name declared for a given object (e.g., L1.T in the example above). Some more examples:

import "oldp"

var v => oldp.V // local alias, not exported

// alias declarations may be grouped
type (
    T1 => oldp.T1 // original for T1 is oldp.T1
    T2 => oldp.T2 // original for T2 is oldp.T2
    T3 [8]byte    // regular declaration may be grouped with aliases
)

var V2 T2 // same effect as: var V2 oldp.T2

func myF => oldp.F // local alias, not exported
func G => oldp.G

type T => oldp.MuchTooLongATypeName

func f() {
    x := T{} // same effect as: x := oldp.MuchTooLongATypeName{}
    ...
}

The respective syntactic changes in the language spec are small and concentrated. Each declaration specification (ConstSpec, TypeSpec, etc.) gets a new alternative which is an alias specification (AliasSpec). Grouping is possible as before, except for functions (as before). See Appendix A1 for details. The short variable declaration form (using ":=") cannot be used to declare an alias.
Discussion: Introducing a new operator ("=>") has the advantage of not needing to introduce a new keyword (such as "alias"), which we can't really do without violating the Go 1 promise (though r@golang and rsc@golang observe that it would be possible to recognize "alias" as a keyword at the package-level only, when in const/type/var/func position, and as an identifier otherwise, and probably not break existing code). The token sequence "=" ">" (or "==" ">") is not a valid sequence in a Go program since ">" is a binary operator that must be surrounded by operands, and the left operand cannot end in "=" or "==". Thus, it is safe to introduce "=>" as a new token sequence without invalidating existing programs.

As proposed, an alias declaration must specify what kind of object the alias refers to (const, type, var, or func). We believe this is an advantage: It makes it clear to a user what the alias denotes (as with existing declarations). It also makes it possible to report an error at the location of the alias declaration if the aliased object changes (e.g., from being a constant to a variable) rather than only at where the alias is used.

On the other hand, mdempsky@golang points out that using a keyword would permit making changes in a package L1, say change a function F into a type F, and not require a respective update of any alias declarations referring to L1.F, which in turn might simplify refactoring. Specifically, one could generalize import declarations so that they can be used to import and rename specific objects. For instance:

import Printf = fmt.Printf

or

import Printf fmt.Printf

One might even permit the form

import context.Context

as a shorthand for

import Context context.Context

analogously to the renaming feature available to imports already. One of the issues to consider here is that imported packages end up in the file scope and are only visible in one file. Furthermore, currently they cannot be re-exported.
It is crucial for aliases to be re-exportable. Thus alias imports would need to end up in package scope. (It would be odd if they ended up in file scope: the same alias may have to be imported in multiple files of the same package, possibly with different names.) The choice of token (“=>”) is somewhat arbitrary, but both “A => B” and “A -> B” conjure up the image of a reference or forwarding from A to B. The token “->” is also used in Unix directory listings for symbolic links, where the lhs is another name (an alias) for the file mentioned on the RHS. dneil@golang and r@golang observe that if “->” is written “in reverse” by mistake, a declaration “var X -> p.X” meant to be an alias declaration is close to a regular variable declaration “var X <-p.X” (with a missing “=”); though it wouldn’t compile. Many people expressed a preference for “=>” over “->” on the tracking issue. The argument is that “->” is more easily confused with a channel operation. A few people would like to use “@” (as in @lias). For now we proceed with “=>” - the token is trivially changed down the road if there is strong general sentiment or a convincing argument for any other notation. An alias declaration declares an alternative name, the alias, for a constant, type, variable, or function, referred to by the RHS of the alias declaration. The RHS must be a package-qualified identifier; it may itself be an alias, or it may be the original name for the aliased object. Alias cycles are impossible by construction since aliases must refer to fully package-qualified (imported) objects and package import cycles are not permitted. An alias denotes the aliased object, and the effect of using an alias is indistinguishable from the effect of using the original; the only visible difference is the name. An alias declaration may only appear at the top- (package-) level where it is valid to have a keyword-based constant, type, variable, or function declaration. Alias declarations may be grouped. 
The same scope and export rules (capitalization for export) apply as for all other identifiers at the top-level. The scope of an alias identifier at the top-level is the package block (as is the case for an identifier denoting a constant, type, variable, or function). An alias declaration may refer to unsafe.Pointer, but not to any of the unsafe functions. A package is considered “used” if any imported object of a package is used. Consequently, declaring an alias referring to an object of a package marks the package as used. Discussion: The original proposal permitted aliases to any (even local) objects and also to predeclared types in the Universe scope. Furthermore, it permitted alias declarations inside functions. See the tracking issue and earlier versions of this document for a more detailed discussion. Alias declarations are a source-level and compile-time feature, with no observable impact at run time. Thus, libraries and tools operating at the source level or involved in type checking and compilation are expected to need adjustments.

reflect package

The reflect package permits access to values and their types at run-time. There’s no mechanism to make a new reflect.Value from a type name, only from a reflect.Type. The predeclared aliases byte and rune are mapped to uint8 and int32 already, and we would expect the same to be true for general aliases. For instance:

    fmt.Printf("%T", rune(0))

prints the original type name int32, not rune. Thus, we expect no API or semantic changes to package reflect.

go/* std lib packages

The packages under the go/* std library tree which deal with source code will need to be adjusted. Specifically, the packages go/token, go/scanner, go/ast, go/parser, go/doc, and go/printer will need the necessary API extensions and changes to cope with the new syntax. These changes should be straightforward. Package go/types will need to understand how to type-check alias declarations.
It may also require an extension to its API (to be explored). We don’t expect any changes to the go/build package.

go doc

The go doc implementation will need to be adjusted: It relies on package go/doc which now exposes alias declarations. Thus, godoc needs to have a meaningful way to show those as well. This may be a simple extension of the existing machinery to include alias declarations.

Other tools operating on source code

A variety of other tools operate on or inspect source code, such as go vet, go lint, goimport, and others. What adjustments need to be made must be decided on a case-by-case basis. There are many open questions that need to be answered by an implementation. To mention a few of them: Are aliases represented somehow as “first-class” citizens in a compiler and go/types, or are they immediately “resolved” internally to the original names? For go/types specifically, adonovan@golang points out that a first-class representation may have an impact on the go/types API and potentially affect many tools. For instance, type switches assuming only the kinds of objects now in existence in go/types would need to be extended to handle aliases, should they show up in the public API. The go/types’ Info.Uses map, which currently maps identifiers to objects, will require special attention: Should it record the alias-to-object references, or only the original names? At first glance, since an alias is simply another name for an object, it would seem that an implementation should resolve them immediately, making aliases virtually invisible to the API (we may keep track of them internally only for better error messages). On the other hand, they need to be exported and might need to show up in go/types’ Info.Uses map (or some additional variant thereof) so that tools such as guru have access to the alias names. To be prototyped. Alias declarations facilitate the construction of larger-scale libraries or “components”.
For organizational and size reasons it often makes sense to split up a large library into several sub-packages. The exported API of a sub-package is driven by internal requirements of the component and may be only remotely related to its public API. Alias declarations make it possible to “pull out” the relevant declarations from the various sub-packages and collect them in a single top-level package that represents the component's API. The other packages can be organized in an “internal” sub-directory, which makes them virtually inaccessible through the go build command (they cannot be imported). TODO(gri): Expand on use of alias declarations for protocol buffer's “import public” feature. TODO(gri): Expand on use of alias declarations instead of “dot-imports”. The syntax changes necessary to accommodate alias declarations are limited and concentrated. There is a new declaration specification called AliasSpec:

    AliasSpec = identifier “=>” PackageName “.” identifier .

An AliasSpec binds an identifier, the alias name, to the object (constant, type, variable, or function) the alias refers to. The object must be specified via a (possibly qualified) identifier. The aliased object must be a constant, type, variable, or function, depending on whether the AliasSpec is within a constant, type, variable, or function declaration. Alias specifications may be used with any of the existing constant, type, variable, or function declarations. The respective syntax productions are extended as follows, with the extensions marked in bold:

    ConstDecl = “const” ( ConstSpec | “(” { ConstSpec “;” } “)” ) .
    ConstSpec = IdentifierList [ [ Type ] “=” ExprList ] | AliasSpec .

    TypeDecl = “type” ( TypeSpec | “(” { TypeSpec “;” } “)” ) .
    TypeSpec = identifier Type | AliasSpec .

    VarDecl = “var” ( VarSpec | “(” { VarSpec “;” } “)” ) .
    VarSpec = IdentList ( Type [ “=” ExprList ] | “=” ExprList ) | AliasSpec .

    FuncDecl = “func” FunctionName ( Function | Signature ) | “func” AliasSpec .
For completeness, we mention several alternatives. Do nothing (wait for Go 2). The easiest solution, but it does not address the problem. Permit alias declarations for types only, use the existing work-arounds otherwise. This would be a “minimal” solution for the problem. It would require the use of work-arounds for all other objects (constants, variables, and functions). Except for variables, those work-arounds would not be too onerous. Finally, this would not require the introduction of a new operator since “=” could be used. Permit re-export of imports, or generalize imports. One might come up with a notation to re-export all objects of an imported package wholesale, accessible under the importing package name. Such a mechanism would address the incremental refactoring problem and also permit the easy construction of some sort of “super-package” (or component), the API of which would be the sum of all the re-exported package APIs. This would be an “all-or-nothing” approach that would not permit control over which objects are re-exported or under what name. Alternatively, a generalized import scheme (discussed earlier in this document) may provide a more fine-grained solution.
https://go.googlesource.com/proposal/+/master/design/16339-alias-decls.md
Pairs Trading using Data-Driven Techniques: Simple Trading Strategies Part 3

Pairs trading is a nice example of a strategy based on mathematical analysis. We’ll demonstrate how to leverage data to create and automate a pairs trading strategy. Download Ipython Notebook here.

Underlying Principle

Let’s say you have a pair of securities X and Y that have some underlying economic link, for example two companies that manufacture the same product, like Pepsi and Coca Cola. You expect the ratio or difference in prices (also called the spread) of these two to remain constant with time. However, from time to time, there might be a divergence in the spread between these two pairs, caused by temporary supply/demand changes, large buy/sell orders for one security, reaction to important news about one of the companies, etc. You are making a bet that the spread between the two stocks will eventually converge, by either the outperforming stock moving back down or the underperforming stock moving back up, or both — your trade will make money in all of these scenarios. If both the stocks move up or move down together without changing the spread between them, you don’t make or lose any money. Hence, pairs trading is a market neutral trading strategy enabling traders to profit from virtually any market conditions: uptrend, downtrend, or sideways movement.

Explaining the Concept: We start by generating two fake securities.

    import numpy as np
    import pandas as pd
    import statsmodels
    from statsmodels.tsa.stattools import coint
    # just set the seed for the random number generator
    np.random.seed(107)
    import matplotlib.pyplot as plt

Let’s generate a fake security X and model its daily returns by drawing from a normal distribution. Then we perform a cumulative sum to get the value of X on each day.
    # Generate daily returns
    Xreturns = np.random.normal(0, 1, 100)
    # sum them and shift all the prices up
    X = pd.Series(np.cumsum(Xreturns), name='X') + 50
    X.plot(figsize=(15,7))
    plt.show()

Now we generate Y which has a deep economic link to X, so the price of Y should vary pretty similarly to X. We model this by taking X, shifting it up and adding some random noise drawn from a normal distribution.

    noise = np.random.normal(0, 1, 100)
    Y = X + 5 + noise
    Y.name = 'Y'
    pd.concat([X, Y], axis=1).plot(figsize=(15,7))
    plt.show()

Cointegration

Cointegration, very similar to correlation, means that the ratio between two series will vary around a mean. The two series, Y and X, follow the following relationship:

    Y = ⍺ X + e

where ⍺ is the constant ratio and e is white noise. Read more here.

For pairs trading to work between two timeseries, the expected value of the ratio over time must converge to the mean, i.e. they should be cointegrated. The time series we constructed above are cointegrated. We’ll plot the ratio between the two now so we can see how this looks.

    (Y/X).plot(figsize=(15,7))
    plt.axhline((Y/X).mean(), color='red', linestyle='--')
    plt.xlabel('Time')
    plt.legend(['Price Ratio', 'Mean'])
    plt.show()

Testing for Cointegration

There is a convenient test that lives in statsmodels.tsa.stattools. We should see a very low p-value, as we've artificially created two series that are as cointegrated as physically possible.

    # compute the p-value of the cointegration test
    # will inform us as to whether the ratio between the 2 timeseries is stationary
    # around its mean
    score, pvalue, _ = coint(X, Y)
    print pvalue

    1.81864477307e-17

Note: Correlation vs. Cointegration

Correlation and cointegration, while theoretically similar, are not the same. Let’s look at examples of series that are correlated, but not cointegrated, and vice versa. First let's check the correlation of the series we just generated.

    X.corr(Y)

    0.951

That’s very high, as we would expect.
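As an aside, the constant ratio ⍺ in Y = ⍺ X + e can be estimated directly from the data. The post itself never does this step explicitly, so the following is a hedged sketch using np.polyfit (an ordinary least-squares fit; statsmodels' OLS would work equally well):

```python
import numpy as np

# Rebuild the two toy series from above (same construction, seed 107).
np.random.seed(107)
X = np.cumsum(np.random.normal(0, 1, 100)) + 50
Y = X + 5 + np.random.normal(0, 1, 100)

# Least-squares fit of Y = alpha*X + c; np.polyfit returns [slope, intercept].
alpha, intercept = np.polyfit(X, Y, 1)
residuals = Y - (alpha * X + intercept)

# By construction alpha should be close to 1 and the residuals (the "e" term)
# should be centred on zero -- that is what makes the pair cointegrated.
print(alpha)
print(residuals.mean())
```

The residual series is exactly the quantity a cointegration test checks for stationarity.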
But how would two series that are correlated but not cointegrated look? A simple example is two series that just diverge.

    ret1 = np.random.normal(1, 1, 100)
    ret2 = np.random.normal(2, 1, 100)
    X_diverging = pd.Series(np.cumsum(ret1), name='X')
    Y_diverging = pd.Series(np.cumsum(ret2), name='Y')
    pd.concat([X_diverging, Y_diverging], axis=1).plot(figsize=(15,7))
    plt.show()
    print 'Correlation: ' + str(X_diverging.corr(Y_diverging))
    score, pvalue, _ = coint(X_diverging, Y_diverging)
    print 'Cointegration test p-value: ' + str(pvalue)

    Correlation: 0.998
    Cointegration test p-value: 0.258

A simple example of cointegration without correlation is a normally distributed series and a square wave.

    Y2 = pd.Series(np.random.normal(0, 1, 800), name='Y2') + 20
    Y3 = Y2.copy()
    # square wave: alternate between 30 and 10 every 100 points
    Y3[0:100] = 30
    Y3[100:200] = 10
    Y3[200:300] = 30
    Y3[300:400] = 10
    Y3[400:500] = 30
    Y3[500:600] = 10
    Y3[600:700] = 30
    Y3[700:800] = 10
    Y2.plot(figsize=(15,7))
    Y3.plot()
    plt.ylim([0, 40])
    plt.show()
    # correlation is nearly zero
    print 'Correlation: ' + str(Y2.corr(Y3))
    score, pvalue, _ = coint(Y2, Y3)
    print 'Cointegration test p-value: ' + str(pvalue)

    Correlation: 0.007546
    Cointegration test p-value: 0.0

The correlation is incredibly low, but the p-value shows perfect cointegration!

How to make a pairs trade?

Because two cointegrated time series (such as X and Y above) drift towards and apart from each other, there will be times when the spread is high and times when the spread is low. We make a pairs trade by buying one security and selling another. This way, if both securities go down together or go up together, we neither make nor lose money — we are market neutral. Going back to X and Y above that follow Y = ⍺ X + e, such that the ratio (Y/X) moves around its mean value ⍺, we make money on the ratio of the two reverting to the mean. In order to do this we’ll watch for when X and Y are far apart, i.e. ⍺ is too high or too low:

- Going Long the Ratio: This is when the ratio ⍺ is smaller than usual and we expect it to increase.
In the above example, we place a bet on this by buying Y and selling X.

- Going Short the Ratio: This is when the ratio ⍺ is large and we expect it to become smaller. In the above example, we place a bet on this by selling Y and buying X.

Note that we always have a “hedged position”: a short position makes money if the security sold loses value, and a long position will make money if a security gains value, so we’re immune to overall market movement. We only make or lose money if securities X and Y move relative to each other.

Using Data to find securities that behave like this

The best way to do this is to start with securities you suspect may be cointegrated and perform a statistical test. If you just run statistical tests over all pairs, you’ll fall prey to multiple comparisons bias. Multiple comparisons bias is simply the fact that there is an increased chance to incorrectly generate a significant p-value when many tests are run, because we are running a lot of tests. If 100 tests are run on random data, we should expect to see 5 p-values below 0.05. If you are comparing n securities for cointegration, you will perform n(n-1)/2 comparisons, and you should expect to see many incorrectly significant p-values, whose number grows as n grows. To avoid this, pick a small number of pairs you have reason to suspect might be cointegrated and test each individually. This will result in less exposure to multiple comparisons bias. So let’s try to find some securities that display cointegration. Let’s work with a basket of US large cap tech stocks from the S&P 500, and flag a pair as cointegrated when the test p-value falls below a small cutoff. This method is prone to multiple comparison bias and in practice the securities should be subject to a second verification step. Let’s ignore this for the sake of this example.
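To make the n(n-1)/2 figure concrete, here is a small sketch (not from the original post) of how quickly the number of pairwise tests, and hence the expected count of spurious p-values, grows:

```python
# Each of the n*(n-1)/2 pairwise tests has an alpha chance (e.g. 5%) of a
# false positive on random data, so the expected number of spuriously
# "significant" pairs grows quadratically with n.
def expected_false_positives(n, alpha=0.05):
    comparisons = n * (n - 1) // 2
    return comparisons, comparisons * alpha

for n in (10, 50, 100):
    comparisons, expected = expected_false_positives(n)
    print(n, comparisons, expected)
```

With 100 securities you would run 4950 tests and expect roughly 247 of them to look significant at the 0.05 level purely by chance, which is why a shortlist plus a second verification step matters.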
    def find_cointegrated_pairs(data):
        n = data.shape[1]
        score_matrix = np.zeros((n, n))
        pvalue_matrix = np.ones((n, n))
        keys = data.keys()
        pairs = []
        for i in range(n):
            for j in range(i+1, n):
                S1 = data[keys[i]]
                S2 = data[keys[j]]
                result = coint(S1, S2)
                score = result[0]
                pvalue = result[1]
                score_matrix[i, j] = score
                pvalue_matrix[i, j] = pvalue
                if pvalue < 0.02:
                    pairs.append((keys[i], keys[j]))
        return score_matrix, pvalue_matrix, pairs

Note: We include the market benchmark (SPX) in our data — the market drives the movement of so many securities that often you might find two seemingly cointegrated securities; but in reality they are not cointegrated with each other but both cointegrated with the market. This is known as a confounding variable and it is important to check for market involvement in any relationship you find.

    from backtester.dataSource.yahoo_data_source import YahooStockDataSource
    from datetime import datetime

    startDateStr = '2007/12/01'
    endDateStr = '2017/12/01'
    cachedFolderName = 'yahooData/'
    dataSetId = 'testPairsTrading'
    instrumentIds = ['SPY','AAPL','ADBE','SYMC','EBAY','MSFT','QCOM',
                     'HPQ','JNPR','AMD','IBM']
    ds = YahooStockDataSource(cachedFolderName=cachedFolderName,
                              dataSetId=dataSetId,
                              instrumentIds=instrumentIds,
                              startDateStr=startDateStr,
                              endDateStr=endDateStr,
                              event='history')
    data = ds.getBookDataByFeature()['Adj Close']
    data.head(3)

Now let’s try to find cointegrated pairs using our method.

    # Heatmap to show the p-values of the cointegration test
    # between each pair of stocks
    scores, pvalues, pairs = find_cointegrated_pairs(data)
    import seaborn
    m = [0,0.2,0.4,0.6,0.8,1]
    seaborn.heatmap(pvalues, xticklabels=instrumentIds,
                    yticklabels=instrumentIds, cmap='RdYlGn_r',
                    mask = (pvalues >= 0.98))
    plt.show()
    print pairs

    [('ADBE', 'MSFT')]

Looks like ‘ADBE’ and ‘MSFT’ are cointegrated. Let’s take a look at the prices to make sure this actually makes sense.
    S1 = data['ADBE']
    S2 = data['MSFT']
    score, pvalue, _ = coint(S1, S2)
    print(pvalue)
    ratios = S1 / S2
    ratios.plot()
    plt.axhline(ratios.mean())
    plt.legend(['Ratio'])
    plt.show()

The ratio does look like it moved around a stable mean. The absolute ratio isn’t very useful in statistical terms. It is more helpful to normalize our signal by treating it as a z-score. The z-score is defined as:

    Z Score (Value) = (Value — Mean) / Standard Deviation

WARNING: In practice this is usually done to try to give some scale to the data, but this assumes an underlying distribution, usually normal.

    def zscore(series):
        return (series - series.mean()) / np.std(series)

    zscore(ratios).plot()
    plt.axhline(zscore(ratios).mean())
    plt.axhline(1.0, color='red')
    plt.axhline(-1.0, color='green')
    plt.show()

It’s now easier to observe that the ratio moves around the mean, but is sometimes prone to large divergences from the mean, which we can take advantage of. Now that we’ve talked about the basics of the pairs trading strategy, and identified cointegrated securities based on historical price, let’s try to develop a trading signal. First, let’s recap the steps in developing a trading signal using data techniques:

- Collect reliable data and clean data
- Create features from data to identify a trading signal/logic — features can be moving averages or ratios of price data, correlations or more complex signals; combine these to create new features
- Generate a trading signal using these features, i.e. which instruments are a buy, a sell or neutral

Step 1: Setup your problem

Here we are trying to create a signal that tells us if the ratio is a buy or a sell at the next instant in time, i.e. our prediction variable Y:

    Y = Ratio is buy (1) or sell (-1)
    Y(t) = Sign( Ratio(t+1) — Ratio(t) )

Note we don’t need to predict actual stock prices, or even the actual value of the ratio (though we could), just the direction of the next move in the ratio.

Step 2: Collect Reliable and Accurate Data

Auquan Toolbox is your friend here!
You only have to specify the stock you want to trade and the datasource to use, and it pulls the required data and cleans it for dividends and stock splits. So our data here is already clean. We are using the following data from Yahoo at daily intervals for trading days over the last 10 years (~2500 data points): Open, Close, High, Low and Trading Volume.

Step 3: Split Data

Don’t forget this super important step to test the accuracy of your models. We’re using the following Training/Validation/Test split:

- Training: 7 years ~ 70%
- Test: ~ 3 years 30%

    ratios = data['ADBE'] / data['MSFT']
    print(len(ratios))
    train = ratios[:1762]
    test = ratios[1762:]

Ideally we should also make a validation set but we will skip this for now.

Step 4: Feature Engineering

What could relevant features be? We want to predict the direction of the ratio's move. We’ve seen that our two securities are cointegrated, so the ratio tends to move around and revert back to the mean. It seems our features should be certain measures of the mean of the ratio and the divergence of the current value from that mean, to be able to generate our trading signal.
Let’s use the following features:

- 60 day Moving Average of Ratio: measure of rolling mean
- 5 day Moving Average of Ratio: measure of current value of mean
- 60 day Standard Deviation
- z-score: (5d MA — 60d MA) / 60d SD

    ratios_mavg5 = train.rolling(window=5, center=False).mean()
    ratios_mavg60 = train.rolling(window=60, center=False).mean()
    std_60 = train.rolling(window=60, center=False).std()
    zscore_60_5 = (ratios_mavg5 - ratios_mavg60)/std_60

    plt.figure(figsize=(15,7))
    plt.plot(train.index, train.values)
    plt.plot(ratios_mavg5.index, ratios_mavg5.values)
    plt.plot(ratios_mavg60.index, ratios_mavg60.values)
    plt.legend(['Ratio','5d Ratio MA', '60d Ratio MA'])
    plt.ylabel('Ratio')
    plt.show()

    plt.figure(figsize=(15,7))
    zscore_60_5.plot()
    plt.axhline(0, color='black')
    plt.axhline(1.0, color='red', linestyle='--')
    plt.axhline(-1.0, color='green', linestyle='--')
    plt.legend(['Rolling Ratio z-Score', 'Mean', '+1', '-1'])
    plt.show()

The z-score of the rolling means really brings out the mean-reverting nature of the ratio!

Step 5: Model Selection

Let’s start with a really simple model. Looking at the z-score chart, we can see that whenever the z-score feature gets too high, or too low, it tends to revert back. Let’s use +1/-1 as our thresholds for too high and too low, then we can use the following model to generate a trading signal:

- Ratio is buy (1) whenever the z-score is below -1.0, because we expect the z-score to go back up to 0, hence the ratio to increase
- Ratio is sell (-1) when the z-score is above 1.0, because we expect the z-score to go back down to 0, hence the ratio to decrease

Step 6: Train, Validate and Optimize

Finally, let’s see how our model actually does on real data.
Let’s see what this signal looks like on actual ratios:

    # Plot the ratios and buy and sell signals from z score
    plt.figure(figsize=(15,7))
    train[60:].plot()
    buy = train.copy()
    sell = train.copy()
    buy[zscore_60_5 > -1] = 0
    sell[zscore_60_5 < 1] = 0
    buy[60:].plot(color='g', linestyle='None', marker='^')
    sell[60:].plot(color='r', linestyle='None', marker='^')
    x1,x2,y1,y2 = plt.axis()
    plt.axis((x1,x2,ratios.min(),ratios.max()))
    plt.legend(['Ratio', 'Buy Signal', 'Sell Signal'])
    plt.show()

The signal seems reasonable: we seem to sell the ratio (red dots) when it is high or increasing and buy it back when it's low (green dots) and decreasing. What does that mean for actual stocks that we are trading? Let’s take a look.

    # Plot the prices and buy and sell signals from z score
    plt.figure(figsize=(18,9))
    S1 = data['ADBE'].iloc[:1762]
    S2 = data['MSFT'].iloc[:1762]
    S1[60:].plot(color='b')
    S2[60:].plot(color='c')
    buyR = 0*S1.copy()
    sellR = 0*S1.copy()
    # When buying the ratio, buy S1 and sell S2
    buyR[buy!=0] = S1[buy!=0]
    sellR[buy!=0] = S2[buy!=0]
    # When selling the ratio, sell S1 and buy S2
    buyR[sell!=0] = S2[sell!=0]
    sellR[sell!=0] = S1[sell!=0]
    buyR[60:].plot(color='g', linestyle='None', marker='^')
    sellR[60:].plot(color='r', linestyle='None', marker='^')
    x1,x2,y1,y2 = plt.axis()
    plt.axis((x1,x2,min(S1.min(),S2.min()),max(S1.max(),S2.max())))
    plt.legend(['ADBE','MSFT', 'Buy Signal', 'Sell Signal'])
    plt.show()

Notice how we sometimes make money on the short leg and sometimes on the long leg, and sometimes both. We’re happy with our signal on the training data. Let’s see what kind of profits this signal can generate. We can make a simple backtester which buys 1 ratio (buy 1 ADBE stock and sell ratio x MSFT stock) when the ratio is low, sells 1 ratio (sell 1 ADBE stock and buy ratio x MSFT stock) when it’s high, and calculates the PnL of these trades.
    # Trade using a simple strategy
    def trade(S1, S2, window1, window2):
        # If window length is 0, algorithm doesn't make sense, so exit
        if (window1 == 0) or (window2 == 0):
            return 0
        # Compute rolling mean and rolling standard deviation
        ratios = S1/S2
        ma1 = ratios.rolling(window=window1, center=False).mean()
        ma2 = ratios.rolling(window=window2, center=False).mean()
        std = ratios.rolling(window=window2, center=False).std()
        zscore = (ma1 - ma2)/std
        # Simulate trading
        # Start with no money and no positions
        money = 0
        countS1 = 0
        countS2 = 0
        for i in range(len(ratios)):
            # Sell short if the z-score is > 1
            if zscore[i] > 1:
                money += S1[i] - S2[i] * ratios[i]
                countS1 -= 1
                countS2 += ratios[i]
                print('Selling Ratio %s %s %s %s'%(money, ratios[i], countS1, countS2))
            # Buy long if the z-score is < -1
            elif zscore[i] < -1:
                money -= S1[i] - S2[i] * ratios[i]
                countS1 += 1
                countS2 -= ratios[i]
                print('Buying Ratio %s %s %s %s'%(money, ratios[i], countS1, countS2))
            # Clear positions if the z-score is between -0.75 and 0.75
            elif abs(zscore[i]) < 0.75:
                money += S1[i] * countS1 + S2[i] * countS2
                countS1 = 0
                countS2 = 0
                print('Exit pos %s %s %s %s'%(money, ratios[i], countS1, countS2))
        return money

    trade(data['ADBE'].iloc[:1763], data['MSFT'].iloc[:1763], 60, 5)

    1783.375

So that strategy seems profitable! Now we can optimize further by changing our moving average windows, by changing the thresholds for buy/sell and exit positions, etc., and check for performance improvements on validation data. We could also try more sophisticated models like Logistic Regression, SVM etc. to make our 1/-1 predictions. For now, let’s say we decide to go forward with this model, which brings us to

Step 7: Backtest on Test Data

Backtesting is simple: we can just use our function from above to see the PnL on test data.

    trade(data['ADBE'].iloc[1762:], data['MSFT'].iloc[1762:], 60, 5)

    5262.868

The model does quite well! This makes our first simple pairs trading model.
Avoid Overfitting

Before ending the discussion, we’d like to give special mention to overfitting. Overfitting is the most dangerous pitfall of a trading strategy. An overfit algorithm may perform wonderfully on a backtest but fail miserably on new unseen data — this means it has not really uncovered any trend in the data and has no real predictive power. Let’s take a simple example. In our model, we used rolling parameter estimates and may wish to optimize the window length. We may decide to simply iterate over all possible, reasonable window lengths and pick the length based on which our model performs the best. Below we write a simple loop to score window lengths based on the pnl of training data and find the best one.

    # Find the window length 0-254
    # that gives the highest returns using this strategy
    length_scores = [trade(data['ADBE'].iloc[:1762],
                           data['MSFT'].iloc[:1762], l, 5)
                     for l in range(255)]
    best_length = np.argmax(length_scores)
    print ('Best window length:', best_length)

    ('Best window length:', 40)

Now we check the performance of our model on test data and we find that this window length is far from optimal! This is because our original choice was clearly overfitted to the sample data.

    # Find the returns for test data
    # using what we think is the best window length
    length_scores2 = [trade(data['ADBE'].iloc[1762:],
                            data['MSFT'].iloc[1762:], l, 5)
                      for l in range(255)]
    print (best_length, 'day window:', length_scores2[best_length])
    # Find the best window length based on this dataset,
    # and the returns using this window length
    best_length2 = np.argmax(length_scores2)
    print (best_length2, 'day window:', length_scores2[best_length2])

    (40, 'day window:', 1252233.1395)
    (15, 'day window:', 1449116.4522)

Clearly fitting to our sample data doesn't always give good results in the future.
Just for fun, let's plot the length scores computed from the two datasets:

    plt.figure(figsize=(15,7))
    plt.plot(length_scores)
    plt.plot(length_scores2)
    plt.xlabel('Window length')
    plt.ylabel('Score')
    plt.legend(['Training', 'Test'])
    plt.show()

We can see that anything between 20–50 would be a good choice for the window. To avoid overfitting, we can use economic reasoning or the nature of our algorithm to pick our window length. We can also use Kalman filters, which do not require us to specify a length; this method will be covered in another notebook later.

Next Steps

In this post, we presented some simple introductory approaches to demonstrate the process of developing a pairs trading strategy. In practice one should use more sophisticated statistics, some of which are listed here:

- Hurst exponent
- Half-life of mean reversion inferred from an Ornstein–Uhlenbeck process
- Kalman filters
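Of the statistics listed above, the half-life of mean reversion is easy to sketch. The following is a hedged illustration (not from the original post): fit delta_x[t] = lambda * x[t-1] + c and read the half-life off lambda, the discrete-time analogue of the Ornstein–Uhlenbeck decay rate.

```python
import numpy as np

def half_life(series):
    # Fit delta_x[t] = lam * x[t-1] + c; for a mean-reverting series lam < 0,
    # and a deviation decays by half in roughly -ln(2)/lam periods.
    x = np.asarray(series, dtype=float)
    lagged = x[:-1]
    delta = np.diff(x)
    lam = np.polyfit(lagged, delta, 1)[0]  # slope of delta vs lagged level
    return -np.log(2) / lam

# Toy AR(1) series with coefficient 0.5: lam is about -0.5,
# so the half-life should come out near ln(2)/0.5 ~ 1.4 periods.
np.random.seed(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t-1] + np.random.normal()
print(half_life(x))
```

Applied to the ADBE/MSFT ratio series, a short half-life would justify short lookback windows; a very long one would suggest the pair reverts too slowly to trade this way.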
https://medium.com/auquan/pairs-trading-data-science-7dbedafcfe5a
import ceylon.test.engine.spi {
    TestExtension
}
import ceylon.test.event {
    TestRunStartedEvent,
    TestRunFinishedEvent,
    TestStartedEvent,
    TestFinishedEvent,
    TestSkippedEvent,
    TestAbortedEvent,
    TestErrorEvent,
    TestExcludedEvent
}

"Represents a listener which will be notified about events that occur during a test run.

 Example of a simple listener, which triggers an alarm whenever a test fails:

     shared class RingingListener() satisfies TestListener {
         shared actual void testError(TestErrorEvent event) => alarm.ring();
     }

 ... such a listener can be used directly when creating a [[TestRunner]]:

     TestRunner runner = createTestRunner {
         sources = [`module com.acme`];
         extensions = [RingingListener()];
     };

 ... or better declaratively with usage of the [[testExtension]] annotation:

     testExtension(`class RingingListener`)
     module com.acme;
"
shared interface TestListener satisfies TestExtension {

    "Called before any tests have been run."
    shared default void testRunStarted(
        "The event object."
        TestRunStartedEvent event) {}

    "Called after all tests have finished."
    shared default void testRunFinished(
        "The event object."
        TestRunFinishedEvent event) {}

    "Called when a test is about to be started."
    shared default void testStarted(
        "The event object."
        TestStartedEvent event) {}

    "Called when a test has finished, whether the test succeeds or not."
    shared default void testFinished(
        "The event object."
        TestFinishedEvent event) {}

    "Called when a test has been skipped, because its condition wasn't fulfilled."
    shared default void testSkipped(
        "The event object."
        TestSkippedEvent event) {}

    "Called when a test has been aborted, because its assumption wasn't met."
    shared default void testAborted(
        "The event object."
        TestAbortedEvent event) {}

    "Called when a test will not be run, because some error has occurred. For example an invalid test function signature."
    shared default void testError(
        "The event object."
        TestErrorEvent event) {}

    "Called when a test is excluded from the test run due to a [[TestFilter]]."
    shared default void testExcluded(
        "The event object."
        TestExcludedEvent event) {}

}
https://modules.ceylon-lang.org/repo/1/ceylon/test/1.3.3/module-doc/api/TestListener.ceylon.html
CC-MAIN-2022-21
refinedweb
257
52.15
This is a discussion on Re: Lookup answer depends on query source - DNS.

So if the client uses a DNS resolver, the DNS server that is responsible for the name translation will only "view" the DNS resolver's IP, not the client's IP? If yes, then the view in the DNS doesn't mean anything for that client. Will the DNS resolver tell the DNS server who is in fact requesting the name translation?

2008/10/14 Kevin Darcy
> Mr. Chow Wing Siu wrote:
> > Dear all,
> >
> > I wanna setup as follows:
> >
> > Use-case ONE)
> > Query source for xxxx.com from: Lookup IP answer:
> > *.hk 111.111.111.111
> > *.cn 111.111.111.112
> > *.uk 111.111.111.113
>
> Use-case one is going to be very difficult and unreliable, since not all
> client addresses have reverse-resolution, and, even if they do, it might
> map back to a gTLD (e.g. .com) rather than a ccTLD. What do you do if
> the query-source maps back to a .com name?
>
> Also, BIND has no provision for basing view matches on the
> reverse-lookup of the client's address. This is a potentially dangerous
> thing to do, since if you let arbitrary clients trigger your nameserver
> to perform reverse lookups, you're creating the possibility of DoS or
> DoS-amplification attacks. At the very least, you might be introducing
> an unacceptable amount of delay into the name-resolution process, since
> the reverse namespace is, notoriously, even less well-maintained than
> the regular "forward" (name-to-address) namespace, and often subject to
> long lookup delays due to broken/stale delegations, etc.
> > Use-case TWO) > > Query source for xxxx.com from: Lookup IP answer: > > 222.111.111.111 (from country A) 111.111.111.111 > > 226.111.111.111 (from country A) 111.111.111.111 > > 228.123.111.111 (from country A) 111.111.111.111 > > 222.111.222.111 (from country B) 111.111.111.112 > > 226.111.222.111 (from country B) 111.111.111.112 > > 228.125.111.111 (from country B) 111.111.111.112 > > > > Using BIND view to setup seems to be not so good. Isn't it? > > Then, do anyone knows how to setup as above? > > > Use-case two is perfectly doable using "match-clients" views in BIND. > You'd have to constantly maintain the match-clients clauses, however, > which would more likely be address *ranges*, as opposed to specific > addresses (as you show above). > > Be aware that with any view-based approach you have *multiple* copies of > each zone in memory at any given time. So this may be considered a > non-scalable solution. > > Note that, with either use-case, if your goal is to perform > network-level optimization, be aware that the identity of a client's > *resolver* may not reflect the actual location of the client *itself*. A > web browser in India, for instance, may be using a DNS resolver, over a > fast internal corporate link, in the United States, so do you really > want to "optimize" that by giving out U.S. addresses to an Indian client? > > Ultimately, it would be nice if someone could come up with an > "on-behalf-of" DNS extension, which would carry information about the > actual *client's* address (as opposed to whatever resolver happens to be > processing the client's request) so that an optimizing nameserver would > know exactly what it's trying to optimize for. But, even if someone were > to come up with such a thing, it would take probably a decade or more > before it was actually deployed everywhere... > > In the meantime, we have the "nifty tricks" of companies like Akamai to > give us something that more-or-less meets these requirements. For a price. 
> > - Kevin
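For reference, Kevin's "perfectly doable" use-case two maps onto BIND views roughly like this in named.conf. This is only an illustrative sketch: the view names, address ranges, and zone file names are placeholders chosen to match the example addresses in the thread, and views are matched in order, so ranges must not overlap (or the more specific view must come first).

```
view "country-a" {
    match-clients { 222.111.111.0/24; 226.111.111.0/24; 228.123.111.0/24; };
    zone "xxxx.com" {
        type master;
        file "xxxx.com.country-a";   // A records pointing at 111.111.111.111
    };
};

view "country-b" {
    match-clients { 222.111.222.0/24; 226.111.222.0/24; 228.125.111.0/24; };
    zone "xxxx.com" {
        type master;
        file "xxxx.com.country-b";   // A records pointing at 111.111.111.112
    };
};
```

As Kevin notes, each view carries its own copy of the zone in memory, so this approach does not scale to large numbers of client classes.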
http://fixunix.com/dns/544640-re-lookup-answer-depends-query-source.html
<oXygen/> comes with the TEI DTDs, XML Catalog, XSL stylesheets and document templates so that you can start working right away with TEI documents. You can edit documents guided by the context sensitive content assistant, easily validate the edited documents and transform them to HTML or PDF for presentation. We cooperate with the TEI Consortium and we are committed to keep the TEI support up to date in future <oXygen/> releases. Templates are documents containing a predefined structure. They provide starting points on which to rapidly build new documents that repeat the same basic characteristics. <oXygen/> installs a rich set of templates for a number of XML applications, including a set of predefined templates for TEI XML documents. <oXygen/> is able to recognize the TEI documents based either on the root element name or namespace. When you switch to the Author mode, the editor loads both the set of CSS files and the available actions that were associated in the TEI configuration. The action set include operations for emphasising text, creating lists, tables, sections and paragraphs. More than this, you can create your own operations for inserting or deleting XML document fragments. You can create tables, join or split cells, add or remove rows easily. A TEI table example. The caret is positioned between two cells. Choosing one of the TEI templates, <oXygen/> inserts a TEI document skeleton and assists you in validation and tag entry as you go along. It is so easy to transform the TEI documents in PDF or HTML: just select one of the predefined scenarios for TEI and click on the "Transform now" button.
http://www.oxygenxml.com/tei_editor.html
write a series of short Tips and Tricks. In this second post I will provide some information about the web.config file located in the RazorModules folder. One of the neat features of Razor/WebPages is that the parser is highly configurable. Its configuration is controlled by a new section in web.config (shown in Figure 1). In theory the section can be placed in the web.config file in the web root, but as it is only of interest to Razor it can be placed in a custom web.config file in DesktopModules. The file distributed with the Razor Host module is installed into the ~/DesktopModules/RazorModules folder. 1: <system.web.webPages.razor> 2: <pages pageBaseType="DotNetNuke.Web.Razor.DotNetNukeWebPage"> 3: <namespaces> 4: <add namespace="Microsoft.Web.Helpers" /> 5: <add namespace="WebMatrix.Data" /> 6: </namespaces> 7: </pages> 8: </system.web.webPages.razor> The entry on line 2 of the file tells the Razor parser that the Razor base type is DotNetNukeWebPage (not WebPage), and the namespaces nodes tell the parser to automatically “include” the specified namespaces. If you want/need to use other namespaces – maybe from helpers that you wrote yourself – or that you found on the Internet - then adding the namespaces here means that you do not need “@using” statements in the .cshtml file(s). By convention, the RazorHost module is installed in the RazorModules folder and as long as any other Razor module is located in a subfolder of RazorModules then it will “inherit” the settings in this custom web.config. So, if you want to place your Razor script somewhere else – say DesktopModules/MyModules then you will need to deploy your own version of the web.config which contains the <pages> node at the least. This post.
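To see the effect of those namespace entries, consider a Razor script dropped into a RazorModules subfolder. Because Microsoft.Web.Helpers is registered in the web.config above, its helpers (the Twitter helper ships with that package) can be called with no @using line; the account name here is just an illustration:

```
@* No "@using Microsoft.Web.Helpers" needed - the namespace comes from web.config *@
<div class="follow-us">
    @Twitter.Profile("dnnsoftware")
</div>
```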
http://www.dnnsoftware.com/community-blog/cid/136311
servicemix-scripting

Overview

The ServiceMix Scripting component provides support for processing scripts using JSR-223 compliant scripting languages. The component is currently shipping with:

Groovy (1.5.6)
JRuby (1.1.2)
Rhino JavaScript (1.7R1)

Namespace and xbean.xml

The namespace URI for the servicemix-scripting JBI component is http://servicemix.apache.org/scripting/1.0. This is an example of an xbean.xml file with a namespace definition with prefix scripting.

<beans xmlns:scripting="http://servicemix.apache.org/scripting/1.0">
  <!-- add scripting:endpoint here -->
</beans>

Endpoint types

The servicemix-scripting component defines a single endpoint type:

scripting:endpoint :: The scripting endpoint can be used to use scripts to handle exchanges or send new exchanges.
http://servicemix.apache.org/docs/4.4.x/jbi/components/servicemix-scripting.html
I am working with react-native and I want to add an image to a map marker. The image I am getting is from a remote URL and I want to set that image on the map marker.

How to set an image for a UIImageView in a UITableViewCell asynchronously for each cell?

I created a custom view class:

import UIKit

class CustomView: UIView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        // Initialization code
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }

    /*
    // Only override drawRect: if you perform custom drawing.
    // An empty implementation adversely affects performance during animation.
    override func drawRect(rect: CGRect) {
        // Drawing code
    }
    */
}

But I don't know how to include the created custom view in a Storyboard, because I heard that we can add custom views in Storyboard design.

How to use the below code in Swift?

[actionBtn addTarget:self action:@selector(myAction:) forControlEvents:UIControlEventTouchUpInside];
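For the last question, the Objective-C addTarget call translates to Swift's #selector syntax. Roughly (the myAction handler is illustrative, and must be exposed to Objective-C, e.g. marked @objc in Swift 4):

```
actionBtn.addTarget(self, action: #selector(myAction(_:)), for: .touchUpInside)

@objc func myAction(_ sender: UIButton) {
    // handle the tap
}
```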
https://tech.queryhome.com/167261/react-native-add-image-on-map-marker
AIO_FSYNC(3)              Linux Programmer's Manual              AIO_FSYNC(3)

NAME
       aio_fsync - asynchronous file synchronization

SYNOPSIS
       #include <aio.h>

       int aio_fsync(int op, struct aiocb *aiocbp);

       Link with -lrt.

RETURN VALUE
       On success (the sync request was successfully queued) this function
       returns 0. On error, -1 is returned, and errno is set appropriately.

ATTRIBUTES
       ┌────────────┬───────────────┬─────────┐
       │ Interface  │ Attribute     │ Value   │
       ├────────────┼───────────────┼─────────┤
       │ aio_fsync()│ Thread safety │ MT-Safe │
       └────────────┴───────────────┴─────────┘

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008.

SEE ALSO
       aio_cancel(3), aio_error(3), aio_read(3), aio_return(3),
       aio_suspend(3), aio_write(3), lio_listio(3), aio(7), sigevent(7)

COLOPHON
       This page is part of release 4.11 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and the
       latest version of this page, can be found at.

Linux                            2015-03-02                      AIO_FSYNC(3)

Pages that refer to this page: aio_cancel(3), aio_error(3), aio_read(3), aio_return(3), aio_suspend(3), aio_write(3), lio_listio(3), aio(7), sigevent(7)
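A minimal usage sketch: queue a sync for an open descriptor, then poll aio_error() until the request completes. The helper name is illustrative, error handling is trimmed, and polling with usleep is just the simplest option; aio_suspend(3) avoids the busy-wait.

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Queue an asynchronous fsync for fd, then wait for it to complete.
   Returns 0 on success, -1 on failure.  Link with -lrt on older glibc. */
int sync_and_wait(int fd)
{
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;                /* only the descriptor is needed */

    if (aio_fsync(O_SYNC, &cb) != 0)   /* -1 here means nothing was queued */
        return -1;

    while (aio_error(&cb) == EINPROGRESS)
        usleep(1000);                  /* aio_suspend(3) avoids busy-waiting */

    /* aio_return() gives the status of the completed fsync operation */
    return aio_return(&cb) == 0 ? 0 : -1;
}
```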
http://man7.org/linux/man-pages/man3/aio_fsync.3.html
Wouldn't it seem more fitting to add the domain to the client's email address than adding it to your own? Or create a new account for them? That way they have total control of it if they wish. Otherwise, to answer your question: the best way to keep it the way you set it up is probably just to have multiple accounts. Add the extension 1, 2, 3 etc. or something.

Of course, it's possible. All you have to do is to create a DNS record IN A pointing to the IP of your new server. See here, for example:

I don't believe you can log the activities to the event log, but what you can do is use the -xml parameter to output the changes in XML format. You could then use this to log to the event log via a Powershell script, for example.

There can only be one Access-Control-Allow-Origin response header, and that header can only have one origin value. Therefore, in order to get this to work, you need to have some code that: grabs the Origin request header; checks if the origin value is one of the whitelisted values; and, if it is valid, sets the Access-Control-Allow-Origin header with that value. I don't think there's any way to do this solely through the web.config.

if (ValidateRequest()) {
    Response.Headers.Remove("Access-Control-Allow-Origin");
    Response.AddHeader("Access-Control-Allow-Origin", Request.UrlReferrer.GetLeftPart(UriPartial.Authority));
    Response.Headers.Remove("Access-Control-Allow-Credentials");
    Response.AddHeader("Access-Control-Allow-Credentials", "true");
    Response.Headers.Remove("Acce

Try editing your /var/www/vhosts/domain1.com/conf/vhost.conf file. If it doesn't exist, create it.
Then add the following line:

php_admin_value open_basedir "/var/www/vhosts/domain1.com/httpdocs:/var/www/vhosts/domain2.com/httpdocs"

Save the file and then reload your apache configuration by running this command on the command line, assuming you have privileges:

/usr/local/psa/admin/sbin/websrvmng -u --vhost-name=domain1.com

If you use something like node http-proxy or nginx you can run each of the separate sites on different local ports and then map the domain name back. I use the http proxy but others love nginx. For that to work, you'll need to add DNS entries resolving to the IP address that your reverse proxy is running on. Then, for example, the port 80 request will get mapped back to localhost:3000. This allows each app to run in isolation and in separate processes from the other sites. They can't crash each other. You also don't have namespace issues where both apps claim the same path. My app.js for the http proxy looks like this:

var proxyPort = 80;
var http = require('http');
var httpProxy = require('http-proxy');
var options = {
    router: {
        'localhost': '127.0.0.1:3000',

Here are some screenshots taken from the Mac OSX version of the GitHub desktop program. Here I am making the first commit, but you can see both changes have been made prior to the commit: Here I am making the second commit: Here you can see that each commit was accepted individually: Note: Some names have been blanked out for privacy. Assuming that the Windows version of GitHub has the same options, I would download the desktop program and try that.

The account that the "assembly" is running under must have permissions to do what you're asking it to do. Either that or you must have a portion of the code in your assembly begin running under the context of a user that does have the permissions to do the copy. This really has nothing to do with SQL Server. It simply comes down to whether the account executing the code has permission to do what you're telling it to do.
You have a few options that I know of, mostly depending on whether traffic goes to the same ip or not. When setting up DNS entries you can specify a wildcard for subdomains: *.example.com, which makes it so that any request for any subdomain that isn't matched by another DNS record goes to example.com. So, having:

*.example.com <ip A>
blog.example.com <ip B>

would make blog.example.com go to <ip B> and example.com and all other subdomains go to <ip A>. This means you could have the possibility of giving each new subdomain its own ip (very unlikely). You can also catch them all at the same ip and handle it there. As you mentioned, you could add a new virtual host for each new subdomain created. However, that's kind of a heavy solution, and I think it

Simplest way is to use that function: To get something more you probably should play with functions like "system" and unix tools like the "host" command and parse their output. Or directly query the DNS - you can read more about it here: - basically you're doing a query for a domain that looks like this: 5.2.0.192.in-addr.arpa, where 5.2.0.192 should be replaced with the address you're interested in.

You just need one rule, as QUERY_STRING is automatically copied over after the redirect.

RewriteCond %{HTTP_HOST} !^example.org$ [NC]
RewriteRule ^{REQUEST_URI} [R=301,L]

I found the solution. On IIS, I had to add https in bindings, associating it to use the IIS Development Certificate for SSL in my development environment. Also, in the process I found that both of the following will work for me:

127.0.0.1 somedomain.com, alternate-domain.com, subdomain.domain.com

or

127.0.0.1 somedomain.com alternate-domain.com subdomain.domain.com

Spaces and commas both work, and I can add multiple domain and subdomain combinations on the same line. Also, if you are moving from a non-secure domain to a secure domain/subdomain, an SSL certificate needs to be set up on the web server for it, else it would end up showing an error in the browser. Cheers!
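The in-addr.arpa name mentioned above (octets reversed, suffix appended) can be produced with Python's standard ipaddress module; a quick illustration, not part of the original answer:

```python
import ipaddress

# The PTR name used for a reverse lookup: the IPv4 octets reversed,
# with ".in-addr.arpa" appended.
addr = ipaddress.ip_address("192.0.2.5")
print(addr.reverse_pointer)  # 5.2.0.192.in-addr.arpa
```

The same attribute works for IPv6 addresses, where it produces the nibble-reversed ip6.arpa form.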
If you are using some versions of Apache, I know as late as version 2.2, you will need to include the line:

NameVirtualHost *:80

before your virtual host lines. I can't be sure that this is your problem, but I just had the exact same thing happening to me on Arch, which apparently doesn't have the latest Apache.

Are you using an SMTP server to send out the mail? Other hosts might think your headers look spammy. This article goes over setting up email to use SMTP instead of the mail() function.

If both domains point to the same server, and Tomcat is used for both as your public facing web server, then you just need to modify the part inside your Grails. In a Controller, you can check which domain was used to access your application:

def uri = new java.net.URI(request.getHeader("referer"))
def domainName = uri.getHost()

Untested code: I believe you can do the following

RewriteCond %{HTTP_HOST} ^(www.)?site1.com$
RewriteRule ^site2/(.*)$ [R=301]

This will (or should) redirect the client from to.

cloud-storage-analytics@google.com needs to have write permission for your LogBucket, as described here: If you need the HTTP request log, follow the documentation: If you want more data or response, you need to write (or find) a servlet filter that will dump the requested data to a file.

You should add a database connection to the authorization server, and in a MyApp::Application.config.to_prepare block instruct the relevant doorkeeper models to connect via those credentials. See.

So based on the comments above I have solved this problem. I went to the web module of my application and to its lib folder. Right click -> add project. Found my EJB module and selected it. Not sure if this is a good idea, but for now it seems it works.

The auth argument is to set the Authorization header (basic http authentication by default). The API expects credentials in the body:

r = requests.post(url, data=xml, headers={'Content-Type': 'application/xml'})

where xml=b"<authCommand><userID>sdm@company.com</userID>...".
Ajax requests should have an extra header HTTP_X_REQUESTED_WITH, and the value would be xmlhttprequest, so you can add a check:

$_SERVER['HTTP_X_REQUESTED_WITH'] == 'xmlhttprequest'

Please note that this can still be imitated by a curl library; it's just an extra line of security.

According to this thread, you need to specify the proxies dictionary as {"protocol" : "ip:port"}, so your proxies file should look like

{"https": "84.22.41.1:3128"}
{"http": "194.126.181.47:81"}
{"http": "218.108.170.170:82"}

EDIT: You're reusing line for both URLs and proxies. It's fine to reuse line in the inner loop, but you should be using proxies=proxy--you've already parsed the JSON and don't need to build another dictionary. Also, as abanert says, you should be doing a check to ensure that the protocol you're requesting matches that of the proxy. The reason the proxies are specified as a dictionary is to allow lookup for the matching protocol.
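The "lookup for the matching protocol" in that last answer can be sketched in a few lines: a proxies dict keyed by protocol, and a proxy chosen only when its key equals the URL scheme. pick_proxy is an illustrative helper, not part of the requests library:

```python
from urllib.parse import urlparse

def pick_proxy(url, proxies):
    # requests-style matching: the proxies dict is keyed by protocol,
    # so a request only uses the proxy whose key equals the URL scheme.
    return proxies.get(urlparse(url).scheme)

proxies = {
    "http": "http://194.126.181.47:81",
    "https": "http://84.22.41.1:3128",
}
print(pick_proxy("http://example.com/", proxies))   # http://194.126.181.47:81
print(pick_proxy("https://example.com/", proxies))  # http://84.22.41.1:3128
```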
http://www.w3hello.com/questions/How-can-I-separate-access-logs-from-requests-of-two-different-domains-of-the-same-server-
Test::Class::Filter::Tags - Selectively run only a subset of Test::Class tests that include/exclude the specified tags.

    # define a test baseclass, to avoid boilerplate
    package MyTests::Base;

    # load the filter class. This will both add the filter to Test::Class, as
    # well as importing the 'Tags' attribute into the current namespace
    use Test::Class::Filter::Tags;

    # and, of course, inherit from Test::Class
    use base qw( Test::Class );

    1;

    package MyTests::Wibble;

    # using custom baseclass, don't have to worry about importing attribute
    # class for each test
    use base qw( MyTests::Base );

    # can specify both Test and Tags attributes on test methods
    sub t_foo : Test( 1 ) Tags( quick fast ) {
    }

    sub t_bar : Test( 1 ) Tags( loose ) {
    }

    sub t_baz : Test( 1 ) Tags( fast ) {
    }

    1;

    #
    # in Test::Class driver script, or your Test::Class baseclass
    #

    # load the test classes, in whatever manner you normally use
    use MyTests::Wibble;

    $ENV{ TEST_TAGS } = 'quick,loose';
    Test::Class->runtests;

    # from the test above, only t_foo and t_bar methods would be run, the
    # first because it has the 'quick' tag, and the second because it has
    # the 'loose' tag. t_baz doesn't have either tag, so it's not run.

    # Alternatively, can specify TEST_TAGS_SKIP, in a similar fashion,
    # to *not* run tests with the specified tags

When used in conjunction with Test::Class tests that also define Attribute::Method::Tags tags, this class allows filtering of the tests that will be run. If $ENV{ TEST_TAGS } is set, it will be treated as a list of tags, separated by any combination of whitespace or commas. The tests that will be run will only be the subset of tests that have at least one of these tags specified. Conversely, you may want to run all tests that *don't* have specific tags. This can be done by specifying the tags to exclude in $ENV{ TEST_TAGS_SKIP }. Note that, as per normal Test::Class behaviour, only normal tests will be filtered.
Any fixture tests (startup, shutdown, setup and teardown) will still be run, where appropriate, whether they have the given attributes or not. By using this class, you'll get the 'Tags' attribute imported into your namespace. This is required to be able to add the Tags attribute to your test methods. This will also cause Attribute::Method::Tags to be pre-pended to the ISA of the using class. Returns a true value if $tag is defined on $method in $class, or any of $class'es super-classes. This may sound of limited use, but one of the use cases presented to me when developing this module was to have setup fixtures conditionally run if the method that's being tested has a specific tag. When inheriting from test classes, the subclasses will adopt any tags that the superclass methods have, in addition to any that they specify. (ie, tags are additive: when subclass and superclass have the same method with different tags, the tags for the subclass method will be those from both). This class is implemented via the Test::Class filtering mechanism. This class supplies the 'Tags' attribute to consuming classes. Note that this will also be pre-pended to the @ISA for any consuming class. Mark Morgan <makk384@gmail.com> Please send bugs or feature requests through to bugs-Test-Class-Filter-Tags@rt.cpan.org or through the web interface . This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Test-Class-Filter-Tags/lib/Test/Class/Filter/Tags.pm
Surface charts

Data that is arranged in columns or rows on a worksheet can be plotted in a surface chart. A surface chart is useful when you want to find optimum combinations between two sets of data. As in a topographic map, colors and patterns indicate areas that are in the same range of values. By default all surface charts are 3D. 2D wireframe and contour charts are created by setting the rotation and perspective.

from openpyxl import Workbook
from openpyxl.chart import (
    SurfaceChart,
    SurfaceChart3D,
    Reference,
    Series,
)
from openpyxl.chart.axis import SeriesAxis

wb = Workbook()
ws = wb.active

data = [
    [None, 10, 20, 30, 40, 50,],
    [0.1, 15, 65, 105, 65, 15,],
    [0.2, 35, 105, 170, 105, 35,],
    [0.3, 55, 135, 215, 135, 55,],
    [0.4, 75, 155, 240, 155, 75,],
    [0.5, 80, 190, 245, 190, 80,],
    [0.6, 75, 155, 240, 155, 75,],
    [0.7, 55, 135, 215, 135, 55,],
    [0.8, 35, 105, 170, 105, 35,],
    [0.9, 15, 65, 105, 65, 15],
]

for row in data:
    ws.append(row)

c1 = SurfaceChart()
ref = Reference(ws, min_col=2, max_col=6, min_row=1, max_row=10)
labels = Reference(ws, min_col=1, min_row=2, max_row=10)
c1.add_data(ref, titles_from_data=True)
c1.set_categories(labels)
c1.title = "Contour"

ws.add_chart(c1, "A12")

from copy import deepcopy

# wireframe
c2 = deepcopy(c1)
c2.wireframe = True
c2.title = "2D Wireframe"

ws.add_chart(c2, "G12")

# 3D Surface
c3 = SurfaceChart3D()
c3.add_data(ref, titles_from_data=True)
c3.set_categories(labels)
c3.title = "Surface"

ws.add_chart(c3, "A29")

c4 = deepcopy(c3)
c4.wireframe = True
c4.title = "3D Wireframe"

ws.add_chart(c4, "G29")

wb.save("surface.xlsx")
http://openpyxl.readthedocs.io/en/latest/charts/surface.html
Watchdog failed? too much timer.alarm and timer.chrono?

Hello all, I'm facing a big problem with the watchdog. A LoPy had been running for 14 hours when it started restarting in a loop, triggered by the watchdog timer. The watchdog was set to 10 s, which gave me time to try some code to understand what was happening.

My watchdog:

from gc import collect
from machine import Timer, WDT

class Watchdog:
    def __init__(self):
        self.wdt_feeder_period = 3  # seconds
        self.wdt_timeout = 10  # seconds
        self.wdt = WDT(timeout=(self.wdt_timeout*1000))  # timeout in milliseconds
        self.__feeder = Timer.Alarm(self.feed_the_dog, self.wdt_feeder_period, periodic=True)

    def feed_the_dog(self, alarm):
        self.wdt.feed()
        collect()

watchdog = Watchdog()

When the device reboots, I have a few seconds to send it code (with the CTRL+B shortcut) like this:

from time import sleep

for _ in range(100):
    watchdog.wdt.feed()
    sleep(0.2)

This one leaves the device available. But this one does not:

class Clock:
    def __init__(self):
        self.seconds = 0
        self.__alarm = Timer.Alarm(self._seconds_handler, 1, periodic=True)

    def _seconds_handler(self, alarm):
        self.seconds += 1
        watchdog.wdt.feed()

clock = Clock()

It's like the Timer.Alarm doesn't want to run(?). I tried using a thread to feed the (watch)dog and that runs fine. But I don't know why the watchdog (and Timer.Alarm) won't run. After many restarts, the device seems stable and doesn't restart. But it's clear that my code is not stable. And I don't know if such behaviour is because I use Timer.Alarm in two modules permanently, and two Timer.Chrono sometimes. While waiting for any idea to explain this behaviour, I will try to replace Timer.Alarm with a thread. But it would be great if anybody could find the reason (and maybe a solution) for this silly watchdog. Thanks

I thought that I would mention another trick that I use, when using the REPL while code is running. The content of main runs in a try block and I have catch blocks like so:

except KeyboardInterrupt as e:
    # the wdt will give us 30 mins to play in the repl ...
    wdt = WDT(timeout=30*60*1000)
    print('CAUTION: Watchdog will reboot in 30 minutes!')
    print('Node stopped')
except Exception as e:
    sys.print_exception(e)
    # game over man, expect reboot soon!
    wdt = WDT(timeout=5000)
    pycom.heartbeat(False)
    utime.sleep_ms(100)
    pycom.rgbled(0x7f0000)  # red
    print('FATAL: Watchdog will reboot soon!')

Catching the keyboard interrupt means that I get 30 mins to play in the REPL console before the thing will reboot, in case I wander off and forget to restart it.

I will have the result in a few days (if our device doesn't restart), but it seems that the reason for our reboot problem was the misuse of Timer.Alarm creation from an interrupt. Since we made some modifications there, the device seems to have better stability.

I set self.cbit_timer = None so I can check to see if it is None in the destructor for the Node object and cancel it in the destructor if it isn't None. I prefer to call cancel() on it first, though that may be excessive.

@g0hww thanks for your code example and experience feedback. I see that you use

self.cbit_timer.cancel()
self.cbit_timer = None

to disable the timer. In your opinion, is that necessary even though the periodic argument of the alarm creation was set to False? To be sure that the alarm is deleted? In our case, our device can stay unused for a long time (some hours), so no main function runs while it does nothing. So I can't use the same idea as you, and for now I have used a thread to solve (I hope) this problem. But I'm curious to understand whether or not I can use the alarm object. For later alarm coding, I will use your solution for creating/deleting alarms!

After concluding that my codebase needed a fair bit of refactoring to make it stable, and it has been stable for a long time now (though I haven't tried any FW updates for about 1 year), I ended up with the following approach.
My class Node provides the following 2 methods of relevance:

def cbit_cb(self, void):
    try:
        if self.cbit_timer != None:
            self.cbit_timer.cancel()
            self.cbit_timer = None
        self.cbit_active = False
        self.wdt.feed()
        #print('Node.cbit_cb() complete')
    except Exception as e:
        print('Node.cbit_cb() - failed!')
        sys.print_exception(e)

def cbit(self):
    with self.cbit_lock:
        try:
            if self.running and not self.cbit_active:
                self.cbit_timer = Timer.Alarm(self.cbit_cb, s=1.0, periodic=False)
                self.cbit_active = True
                #print('Node.cbit_() starting cbit check')
        except Exception as e:
            print('Node.cbit() - failed to set cbit timer!')
            sys.print_exception(e)

and my main() has this:

with Navigator(pyt, node, 10) as nav:
    while node.is_running():
        #print('Doing work')
        if nav.work():
            node.cbit()
        gc.collect()
        node.try_uptime_status()
        node.try_memory_status()
        node.try_battery_status(pyt)

The intent with this code is to not rely on a repeating timer callback to kick the dog. Instead, when the main thread succeeds in performing its primary task, it calls node.cbit() which sets up a one-shot timer callback which actually kicks the dog. This ensures that the primary work is being done successfully and avoids the problematic scenario in which a repeating timer callback keeps kicking the dog when everything else has turned to poo.

That code has been running on a LoPy4 GPS tracker (which sends its position via LoRa) for 23 days. I'm not sure why its uptime isn't longer as it is stable enough that I don't pay attention to it any more. I have another variant of that code running on another pair of LoPy4s that act as LoRa to MQTT gateways. They tend to get watchdog resets when their MQTT publishing fails for more than 5 minutes, using the same mechanism shown above. This is normally because the wifi goes down when the router is rebooted and those gateway devices have nothing better to do than reboot under such circumstances.
Of course there is more code to handle deletion of the node object and the cancellation of any active timers, etc. Hope this helps.
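The "one-shot feed only after real work" pattern from this thread can be sketched in desktop Python, with threading.Timer standing in for the Pycom Timer.Alarm. All names here (FakeWatchdog, cbit, on_alarm) are illustrative, not the Pycom API:

```python
import threading

class FakeWatchdog:
    """Stand-in for machine.WDT so the pattern can run off-device."""
    def __init__(self):
        self.feeds = 0

    def feed(self):
        self.feeds += 1

def cbit(wdt, state):
    """Arm a single one-shot feed, but only if one isn't already pending."""
    if not state.get("active"):
        state["active"] = True

        def on_alarm():
            # The feed happens once, from the one-shot callback, instead of
            # from a periodic alarm that feeds unconditionally forever.
            wdt.feed()
            state["active"] = False

        state["timer"] = threading.Timer(0.01, on_alarm)
        state["timer"].start()

wdt = FakeWatchdog()
state = {}
cbit(wdt, state)        # main loop did real work, so schedule one feed
cbit(wdt, state)        # a second call while armed is ignored
state["timer"].join()   # Timer is a Thread subclass, so join() works
print(wdt.feeds)        # 1
```

The point of the design is the same as in the forum posts: if the main loop stops doing real work, no new one-shot feed is ever armed, and the watchdog is allowed to reboot the device.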
https://forum.pycom.io/topic/6871/watchdog-failed-too-much-timer-alarm-and-timer-chrono/5
QtWebEngine: No such file or directory

Neither the forum search nor Google turned up anything on this. I'm trying to use QtWebEngine in a QML program, all the latest versions, Windows 7, MSVC-2017 compiler. I checked the WebEngine item when I installed Qt Creator. Here's what I've done in my code:

- Added webengine to the QT variable in the project file.
- Added import QtWebEngine 1.7 to the QML file.
- Added #include <QtWebEngine> to the main.cpp file.
- Added QtWebEngine::initialize(); to the main function, after creating the app.

I get C1083: Cannot open include file: 'QtWebEngine': No such file or directory. All of the above was per the instructions in the current QtWebEngine documentation. Then I tried the example WebEngine Quick Nano Browser, and it works fine. So what's the difference? It uses an earlier version of QtWebEngine, but I'm not getting far enough for that to matter. I found that its main.cpp says #include <QtWebEngine/qtwebengineglobal.h>. So I changed my application to include that. Now it compiles, but I get a link error on the QtWebEngine::initialize() function. But I still can't see any explanation why the quicknanobrowser project should find the initialize function but my project doesn't. Any ideas?

- aha_1980 Qt Champions 2018

@pderocco have you re-run qmake after your .pro file changes? In QtCreator: Build > Run Qmake. Afterwards I'd do a full rebuild (Build > Rebuild all)

Yes, that seemed to work. The build system doesn't seem to be very good at knowing what depends on what.

- mrjj Lifetime Qt Champion

@pderocco Hi, it's more about qmake having a cache to avoid re-parsing the whole .pro file. It does detect changes (like adding a file) but other changes require a manual update.
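For reference, the .pro lines in question look roughly like this for a Qt Quick app of that era (module names per Qt 5.11; as the thread concludes, qmake must be re-run after editing the file for the include paths to appear):

```
# .pro additions for a Qt Quick app that embeds QtWebEngine
QT += qml quick webengine

SOURCES += main.cpp
```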
https://forum.qt.io/topic/96698/qtwebengine-no-such-file-or-directory/4
Lexical scoping is a topic that frightens many programmers. One of the best explanations of lexical scoping can be found in Kyle Simpson's book You Don't Know JS: Scope and Closures. However, even his explanation is lacking because he doesn't use a real example. One of the best real examples of how lexical scoping works, and why it is important, can be found in the famous textbook, "The Structure and Interpretation of Computer Programs" (SICP) by Harold Abelson and Gerald Jay Sussman. Here is a link to a PDF version of the book: SICP. SICP uses Scheme, a dialect of Lisp, and is considered one of the best introductory computer science texts ever written. In this article, I'd like to revisit their example of lexical scoping using JavaScript as the programming language.

The example Abelson and Sussman used is computing square roots using Newton's method. Newton's method works by determining successive approximations for the square root of a number until the approximation comes within a tolerance limit for being acceptable. Let's work through an example, as Abelson and Sussman do in SICP. The example they use is finding the square root of 2. You start by making a guess at the square root of 2, say 1. You improve this guess by dividing the original number by the guess and then averaging that quotient and the current guess to come up with the next guess. You stop when you reach an acceptable level of approximation. Abelson and Sussman use the value 0.001.

Here is a run-through of the first few steps in the calculation:

Square root to find: 2
First guess: 1
Quotient: 2 / 1 = 2
Average: (2 + 1) / 2 = 1.5

Next guess: 1.5
Quotient: 2 / 1.5 = 1.3333
Average: (1.3333 + 1.5) / 2 = 1.4167

Next guess: 1.4167
Quotient: 2 / 1.4167 = 1.4118
Average: (1.4167 + 1.4118) / 2 = 1.4142

And so on until the guess comes within our approximation limit, which for this algorithm is 0.001.
After this demonstration of the method the authors describe a general procedure for solving this problem in Scheme. Rather than show you the Scheme code, I'll write it out in JavaScript:

function sqrt_iter(guess, x) {
  if (isGoodEnough(guess, x)) {
    return guess;
  } else {
    return sqrt_iter(improve(guess, x), x);
  }
}

Next, we need to flesh out several other functions, including isGoodEnough() and improve(), along with some other helper functions. We'll start with improve(). Here is the definition:

function improve(guess, x) {
  return average(guess, (x / guess));
}

This function uses a helper function average(). Here is that definition:

function average(x, y) {
  return (x + y) / 2;
}

Now we're ready to define the isGoodEnough() function. This function serves to determine when our guess is close enough to our approximation tolerance (0.001). Here is the definition of isGoodEnough():

function isGoodEnough(guess, x) {
  return (Math.abs(square(guess) - x)) < 0.001;
}

This function uses a square() function, which is easy to define:

function square(x) {
  return x * x;
}

Now all we need is a function to get things started:

function sqrt(x) {
  return sqrt_iter(1.0, x);
}

This function uses 1.0 as a starting guess, which is usually just fine. Now we're ready to test our functions to see if they work. We load them into a JS shell and then compute a few square roots:

> .load sqrt_iter.js
> sqrt(3)
1.7321428571428572
> sqrt(9)
3.00009155413138
> sqrt(94 + 87)
13.453624188555612
> sqrt(144)
12.000000012408687

The functions seem to be working well. However, there is a better idea lurking here. These functions are all written independently, even though they are meant to work in conjunction with each other. We probably aren't going to use the isGoodEnough() function with any other set of functions, or on its own. Also, the only function that matters to the user is the sqrt() function, since that's the one that gets called to find a square root.
The solution here is to use block scoping to define all the necessary helper functions within the block of the sqrt() function. We are going to remove square() and average() from the definition, as those functions might be useful in other function definitions and aren't as limited to use in an algorithm that implements Newton's Method. Here is the definition of the sqrt() function, now with the other helper functions defined within the scope of sqrt():

function sqrt(x) {
  function improve(guess, x) {
    return average(guess, (x / guess));
  }
  function isGoodEnough(guess, x) {
    return (Math.abs(square(guess) - x)) < 0.001;
  }
  function sqrt_iter(guess, x) {
    if (isGoodEnough(guess, x)) {
      return guess;
    } else {
      return sqrt_iter(improve(guess, x), x);
    }
  }
  return sqrt_iter(1.0, x);
}

We can now load this program into our shell and compute some square roots:

> .load sqrt_iter.js
> sqrt(9)
3.00009155413138
> sqrt(2)
1.4142156862745097
> sqrt(3.14159)
1.772581833543688
> sqrt(144)
12.000000012408687

Notice that you cannot call any of the helper functions from outside the sqrt() function:

> sqrt(9)
3.00009155413138
> sqrt(2)
1.4142156862745097
> improve(1,2)
ReferenceError: improve is not defined
> isGoodEnough(1.414, 2)
ReferenceError: isGoodEnough is not defined

Since the definitions of these functions (improve() and isGoodEnough()) have been moved inside the scope of sqrt(), they cannot be accessed at a higher level. Of course, you can move any of the helper function definitions outside of the sqrt() function to have access to them globally, as we did with average() and square(). We have greatly improved our implementation of Newton's Method, but there is still one more thing we can do to improve our sqrt() function: simplify it even more by taking advantage of lexical scope. The concept behind lexical scope is that when a variable is bound to an environment, other procedures (functions) that are defined in that environment have access to that variable's value.
This means that in the sqrt() function, the parameter x is bound to that function, meaning that any other function defined within the body of sqrt() can access x. Knowing this, we can simplify the definition of sqrt() even more by removing all references to x in the inner function definitions, since x is now a free variable accessible by all of them. Here is our new definition of sqrt():

function sqrt(x) {
  function isGoodEnough(guess) {
    return (Math.abs(square(guess) - x)) < 0.001;
  }
  function improve(guess) {
    return average(guess, (x / guess));
  }
  function sqrt_iter(guess) {
    if (isGoodEnough(guess)) {
      return guess;
    } else {
      return sqrt_iter(improve(guess));
    }
  }
  return sqrt_iter(1.0);
}

The only references to parameter x are in computations where x's value is needed. Let's load this new definition into the shell and test it:

> .load sqrt_iter.js
> sqrt(9)
3.00009155413138
> sqrt(2)
1.4142156862745097
> sqrt(123+37)
12.649110680047308
> sqrt(144)
12.000000012408687

Lexical scoping and block structure are important features of JavaScript and allow us to construct programs that are easier to understand and manage. This is especially important when we begin to construct larger, more complex programs.

Originally published by Michael McMillan.

Variables in JavaScript have two types of scoping, i.e. local and global scope. If a variable is declared inside a function, then it is a local variable, and if the variable is declared outside the function then it is a global variable. The scope of variables is defined by their position in the code. JavaScript follows lexical scoping for functions. Lexical scope means any child scope has access to the variables defined in the parent's scope, i.e. inner functions can access global variables.

var a = 5;

function sum() {
  return a + 6;
}

console.log(sum()); // 11

In the above example, function sum() is using the global variable "a" to perform the addition. That means variable "a" is in scope for sum().
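A closure is the flip side of the lexical scoping described above: the inner function keeps access to the parent's variables even after the parent has returned. This example is my own addition, not from either article:

```javascript
// makeAdder returns a function that closes over its parameter n.
function makeAdder(n) {
  return function (m) {
    return m + n; // n is resolved lexically, from makeAdder's scope
  };
}

const add5 = makeAdder(5);
const add10 = makeAdder(10);
console.log(add5(2));  // 7
console.log(add10(2)); // 12
```

Each call to makeAdder creates a fresh scope, so add5 and add10 remember different values of n.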
https://morioh.com/p/1ae1acd3b914
Ok, I've tried about near everything and I cannot get this to work. I've written this code about 6 different ways. The problem I'm running into is that all of the code I'm writing results in the following behavior: (1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over, leaving it basically an empty renamed file. What's left in the 'upload_to' path is 2 files, one that is the actual image, and one that has the name of the image but is empty, and of course the ImageField path is set to the empty file that Django tried to create. In case that was unclear, I'll try to illustrate:

## Image generation code runs....
/Upload
  generated_image.jpg 4kb

## Attempt to set the ImageField path...
/Upload
  generated_image.jpg 4kb
  generated_image_.jpg 0kb

ImageField.Path = /Upload/generated_image_.jpg

How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...

model.ImageField.path = generated_image_path

...but of course that doesn't work. And yes I've gone through the other questions here like this one as well as the django doc on File.

UPDATE: After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not exhibit this behavior. I am stumped. Here is the code which runs successfully on XP...

f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()

I have some code that fetches an image off the web and stores it in a model.
The important bits are:

from django.core.files import File  # you need this somewhere
import urllib

# The following actually resides in a method of my model
result = urllib.urlretrieve(image_url)  # image_url is a URL to an image
# self.photo is the ImageField
self.photo.save(
    os.path.basename(self.url),
    File(open(result[0], 'rb'))
)
self.save()

That's a bit confusing because it's pulled out of my model and a bit out of context, but the important parts are: Let me know if you have questions or need clarification.

Edit: for the sake of clarity, here is the model (minus any required import statements):

class CachedImage(models.Model):
    url = models.CharField(max_length=255, unique=True)
    photo = models.ImageField(upload_to=photo_path, blank=True)

    def cache(self):
        """Store image locally if we have a URL"""
        if self.url and not self.photo:
            result = urllib.urlretrieve(self.url)
            self.photo.save(
                os.path.basename(self.url),
                File(open(result[0], 'rb'))
            )
            self.save()

Super easy if the model hasn't been created yet. First, copy your image file to the upload path (assumed = 'path/' in the following snippet). Second, use something like:

class Layout(models.Model):
    image = models.ImageField('img', upload_to='path/')

layout = Layout()
layout.image = "path/image.png"
layout.save()

Tested and working in django 1.4; it might work also for an existing model.
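One subtlety in the answers above: os.path.basename(self.url) keeps any query string in the derived filename (e.g. "cat.jpg?size=large"). A stdlib-only sketch of a safer derivation — note the answers use Python 2's urllib, while this sketch uses Python 3's urllib.parse:

```python
import os
from urllib.parse import urlparse

def filename_from_url(url, default="download"):
    """Derive a bare filename from a URL, ignoring any query string."""
    path = urlparse(url).path   # strips ?size=large and #fragments
    name = os.path.basename(path)
    return name or default      # fall back when the URL has no path component

print(filename_from_url("http://example.com/media/cat.jpg?size=large"))  # cat.jpg
```

The result can then be passed as the first argument to the ImageField's save() in place of os.path.basename(self.url).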
https://pythonpedia.com/en/knowledge-base/1308386/programmatically-saving-image-to-django-imagefield
Hey Guys I'm working on a project within Flash CC using the HTML5 Canvas. I'm able to add library items to the canvas using this.addChild(name) and position it using setTransform. I'm also attempting to remove it from the canvas using removeChild(name), which I imagine to be the logical way of doing it. If I set up a click event listener and removeChild(name) on click, it removes without any problems. However, if I try to remove it as part of a function with no event listener (i.e. as part of a setTimeout) it doesn't remove it at all, I've tried adding an alert either side of the line of code to see when it is run, the second alert never gets triggered. What could be the problem here? Can removeChild() only be used as part of a mouse event? Thanks in advance Not an easy subject to grasp due to the differences between typical class based coding and prototype based coding. Much like back in AS2.0, scope is your issue. If you examine the output of the JavaScript from Flash you'll see that when you generate a new library object and add it to the stage, it occurs in several encapsulated scopes. You have the lib (or whatever you namespaced the library as), which has an AddRemove function inside a MovieClip immediate function which specifies the frame_0 scripts in another function, etc.. It's a lot of encapsulation. When you setTimeout(), you run from 'window' scope, not from the scope you think you're running from (inside that frame_0 function). Therefore your 'name' var no longer exists if you dynamically created it there. If you create yourself a way to manage references (so you don't pollute the global scope) you can get a reference to this object at any time. The reason it works on a mouse click is because that click operates at object scope. In other words when you click on the object on the canvas, "this" returns back to that objects scope. So saying stage.removeChild(this);, the "this" refers directly to what you clicked. 
Code seems to work better if you don't understand prototype scope, so here's a quick and dirty example of me escaping the encapsulated scope issue by directly creating a new 'ball' object from my library at the top-most level (global), which is 'window':

New HTML5 Canvas document, object in library with linkage set to 'ball':

// acquire new 'ball' from library linkage, assign to global (window)
// variable 'myBall', as addChild returns the DisplayObject added
window.myBall = stage.addChild(new lib.ball());

// display on stage (stage is inherited from window)
stage.addChild(myBall);

// set timeout in this context but function will run from
// the window context, not this context (a function inside
// the frame_0 function inside lib.AddRemove)
setTimeout(removeBall, 1000);

function removeBall() {
    // I am running from window scope here, luckily I
    // created myBall at this scope so I can access it
    console.log(stage.removeChild(myBall));
}

Because I saved myBall to 'window', I have global access to it. While this works, it isn't a good idea; it just proves the scope issue. When you use CreateJS directly, the scope you create objects from is created by you yourself, so you know how to access items. I myself am a bit confused about the scope in which Flash wishes to operate.
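The "manage references" idea mentioned above can be sketched without any framework. The registry below is plain JavaScript; stage and lib in the commented usage are stand-ins for the CreateJS objects, not the real API:

```javascript
// A tiny registry so dynamically created objects don't each need a window global.
const registry = {};

function register(name, obj) {
  registry[name] = obj;
  return obj;
}

function unregister(name) {
  const obj = registry[name];
  delete registry[name];
  return obj; // caller can hand this to stage.removeChild(...)
}

// Usage sketch (stage/lib are hypothetical stand-ins for the Flash-generated ones):
// register("ball", stage.addChild(new lib.ball()));
// setTimeout(function () { stage.removeChild(unregister("ball")); }, 1000);
```

Because the setTimeout callback only references the registry functions, it works no matter which scope it ends up running in.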
https://forums.adobe.com/thread/1450009
Write a method that takes an array of numbers in. Your method should return the third greatest number in the array. You may assume that the array has at least three numbers in it.

def third_greatest(nums)
  nums_order = Array.new
  until nums.length == nil
    nums.sort { |a,b| a <=> b
    greatest = nums.pop
    nums_order.push(greatest)
    }
  end
  return nums_order[2]
end

Test I use: third_greatest([5, 3, 7]) returns the following:

NoMethodError: undefined method `>' for [7]:Array
	from (irb):107:in `sort'
	from (irb):107:in `third_greatest'
	from (irb):115
	from /usr/bin/irb:12:in `'

I've tried the sort without 'a,b' conditions ...and also playing around with the until condition ...i.e. while nums.length > 0 ... until nums.length == 0 (don't know if those make sense, however, because wouldn't an empty array == nil?) Little help please and thanks!
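For comparison, here is one minimal working version — my own sketch, not the only possible fix. Sorting and reversing avoids the pop/until loop entirely:

```ruby
# Return the third greatest number in an array (assumes length >= 3).
def third_greatest(nums)
  nums.sort.reverse[2]
end

puts third_greatest([5, 3, 7])    # 3
puts third_greatest([2, 1, 5, 4]) # 2
```

Note this counts duplicates separately: third_greatest([9, 9, 8, 7]) is 8, because the sorted-descending array is [9, 9, 8, 7].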
https://www.ruby-forum.com/t/return-3rd-greatest-number-given-array-of-numbers-getting-nomethoderror/244072
Daniel Pozmanter 2004-12-21

Ok. So fiddling around in SimpleDebugger, I came across a problem. How to add variables to each document without overriding OnNew? Possibilities are a set of arrays, with functions to access them:

DrFrame.GetVariable(DocNumber, VariableName)
DrFrame.AddVariable(DocNumber, VariableName, Value)

Another possibility is to have a function called in OnNew called DrFrame.PluginOnNew. A plugin that wants extra code executed in OnNew must define a function PluginOnNew(DrFrame), that, if it exists, will be run every time OnNew is called. What thoughts?

It looks like this may end up turning 3.7.9 into 3.8.0, as it looks like what Franz wants done in drscript will use some common code between the drscript dialog and the bookmarks dialog (and I can make them inherit from a base class anyway, and reduce the code). I also may end up using some code for analyzing blocks I've been toying with. So I might release 3.7.9 a bit early, with just the bugfixes, and hold off on the rest.

Franz Steinhaeusler 2004-12-21

You mean in a plugin?:

def OnPreferences(DrFrame):
    ->some code

#if it is needed
def PluginOnNew(DrFrame):
    ->some code

def PluginOpenFile(DrFrame):
    ->some code

Then, for DrText or rather drSTC, there could also be: PluginStcOnChar and PluginStcOnKeyDown.

Daniel Pozmanter 2004-12-22

Ok, I think the way to do this is:

AddOnNewFunction(pluginname, functionname, function, ToEnd=True)

Looks like also: AddOnOpenFunction, AddOnChar, AddOnKeyDown, with the same arguments. Basically, have this create an array of functions to be executed at the start or end of each of these handlers. Can you think of any more? Also, do you think this would be a performance hindrance? (I am worried about that in the OnChar/KeyDown bits). I think OnOpen is an excellent place (OnSave as well). AddOnOpenFunction

Franz Steinhaeusler 2004-12-24

I only mentioned the OnChar/KeyDown function because it could minimize the Keyboard Macros plugin a lot (no need to overwrite OnOpen and OnNew).
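The AddOnNewFunction proposal above could be backed by a small per-event registry. The names and behavior here are my reading of the thread, not DrPython's actual API:

```python
class HookRegistry:
    """Per-event lists of plugin callbacks, run at the start or end of a handler."""

    def __init__(self):
        self.hooks = {}  # event name -> list of (pluginname, function)

    def add(self, event, pluginname, function, to_end=True):
        funcs = self.hooks.setdefault(event, [])
        if to_end:
            funcs.append((pluginname, function))
        else:
            funcs.insert(0, (pluginname, function))

    def run(self, event, *args):
        """Call every registered callback for this event, in order."""
        return [function(*args) for _pluginname, function in self.hooks.get(event, [])]

# Usage sketch: OnNew would call registry.run("OnNew", frame) at its start or end.
registry = HookRegistry()
registry.add("OnNew", "SimpleDebugger", lambda frame: "debugger saw %s" % frame)
print(registry.run("OnNew", "doc1"))  # ['debugger saw doc1']
```

On the performance worry: when no plugin registers for OnChar/OnKeyDown, the per-event list is empty, so the overhead is a single dictionary lookup per keystroke.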
http://sourceforge.net/p/drpython/discussion/283804/thread/9b7e3169/
Red Hat Bugzilla – Full Text Bug Listing

Spec URL:
SRPM URL:

Description: Hi, I just finished packaging sugar-bounce. I highly appreciate a review. sugar-bounce is an activity for the sugar learning environment. sugar-bounce is a game analogous to the arcade game Pong; however, it takes place within a three-dimensional box with physical effects such as gravity. Additionally, bounce features an editor which allows children to create their own version of the game.

I will take this on.

Hi Kalpa, this is not a noarch package. You are missing a build step to compile the c++ library _pong.so. Have a look in the INSTALL document of the source package. Brendan

Hi Brendan, would you please explain it in detail. btw, build needs in the INSTALL are all listed in the spec.

Your package contains a C library which must be compiled in the %build section and installed in the appropriate location. Being compiled means that the package is architecture specific - so you have to remove the BuildArch tag. You will be doing something along the lines of this in your build and prep sections, and make sure the files are installed in the correct place.

%prep
%setup -q -n Bounce-%{version}
sed -i -e 's|-fPIC -O2|%{optflags}|' pongc/Makefile

%build
# build C binaries
pushd pongc
make
popd

For reference: I would think that you can simply leave the BuildArch out and make sure that the arch specific stuff goes in the correct %{_libdir}/sugar/activities rather than %{sugaractivitydir}. %{_libdir} will change to lib/lib64 depending on the arch the rpm is building for. You can test it out in mock:

mock -r fedora-16-i386 <src.rpm>
mock -r fedora-16-x86_64 <src.rpm>

I'm new to Sugar so I would suggest contacting one of the other Sugar maintainers on how to package arch specific Sugar Activities. ()

Yes, the BuildArch line can just be dropped.

fixed the issues
Spec URL:
SRPM URL:

Sorry, incorrect srpm link; here is the correct one:
Spec URL:
SRPM URL:

Hi Kalpa, still a few issues here.
- When I run the application I'm getting the following:

fedora16:~ $ python /usr/share/sugar/activities/Bounce.activity/bounce.pyc
Traceback (most recent call last):
  File "/usr/share/sugar/activities/Bounce.activity/bounce.py", line 28, in <module>
    from pongc.pongc import *
ImportError: No module named pongc.pongc

pongc.py is missing from your package.
- You also are using both %buildroot and $RPM_BUILDROOT
- defattr(....) present in %files section. This is OK if packaging for EPEL5; otherwise not needed
- remove rm -rf $RPM_BUILD_ROOT in your %install section

No progress for more than a year. Is this ticket still reviewable?

Nothing has happened for almost two years. I close this ticket, adding FE-DEADREVIEW.
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=739263
Plotting. Well -- mostly. It turns out his data sources are no longer available, and he didn't go into a lot of detail on where he got his data, only saying that it was from the ECMWF ERA re-analysis model (with a link that's now 404). That led me on a merry chase through the ECMWF website trying to figure out which part of which database I needed. ECMWF has lots of publicly available databases (and even more) and they even have Python libraries to access them; and they even have a lot of documentation, but somehow none of the documentation addresses questions like which database includes which variables and how to find and fetch the data you're after, and a lot of the sample code doesn't actually work. I ended up using the "ERA Interim, Daily" dataset and requesting data for only specific times and only the variables and pressure levels I was interested in. It's a great source of data once you figure out how to request it.

Access ECMWF Public Datasets (there's also Access MARS, and I'm not sure what the difference is) has links you can click on to register for an API key. Once you get the email with your initial password, log in using the URL in the email, and change the password. That gave me a "next" button that, when I clicked it, took me to a page warning me that the page was obsolete and I should update whatever bookmark I had used to get there. That page also doesn't offer a link to the new page where you can get your key details, so go here: Your API key. The API Key page gives you some lines you can paste into ~/.ecmwfapirc. You'll also have to accept the license terms for the databases you want to use. That sets you up to use the ECMWF api.

Install the Python API
They have a Web API and a Python library, plus some other Python packages, but after struggling with a bunch of Magics tutorial examples that mostly crashed or couldn't find data, I decided I was better off sticking to the basic Python downloader API and plotting the results with Matplotlib. The Python data-fetching API works well. To install it, activate your preferred Python virtualenv or whatever you use for pip packages, then run the pip command shown at Web API Downloads (under "Click here to see the installation/update instructions..."). As always with pip packages, you'll have to decide on a Python version (they support both 2 and 3) and whether to use a virtualenv, the much-disrecommended sudo pip, pip3, etc. I used pip3 in a virtualenv and it worked fine. Specify a dataset and parameters That's great, but how do you know which dataset you want to load? There doesn't seem to be anything that just lists which datasets have which variables. The only way I found is to go to the Web API page for a particular dataset to see the form where you can request different variables. For instance, I ended up using the "interim-full-daily" database, where you can choose date ranges and lists of parameters. There are more choices in the sidebar: for instance, clicking on "Pressure levels" lets you choose from a list of barometric pressures ranging from 1000 all the way down to 1. No units are specified, but they're millibars, also known as hectoPascals (hPa): 1000 is more or less the pressure at ground level, 250 is roughly where the jet stream is, and Los Alamos is roughly at 775 hPa (you can find charts of pressure vs. altitude on the web). When you go to any of the Web API pages, it will show you a dialog suggesting you read about Data retrieval efficiency, which you should definitely do if you're expecting to request a lot of data, then click on the details for the database you're using to find out how data is grouped in "tape files". 
For instance, in the ERA-interim database, tapes are grouped by date, so if you're requesting multiple parameters for multiple months, request all the parameters for a given month together, rather than making one request for level 250, another request for level 1000, etc. Once you've checked the boxes for the data you want, you can fetch the data via the web interface, or click on "View the MARS request" to get parameters you can plug into a Python script. If you choose the Python script option as I did, you can start with the basic data retrieval example. Use the second example, the one that uses 'format' : "netcdf", which will (eventually) give you a file ending in .nc.

Requesting a specific area

You can request only a limited area, "area": "75/-20/10/60", but they're not very forthcoming on the syntax of that, and it's particularly confusing since "75/-20/10/60" supposedly means "Europe". It's hard to figure how those numbers as longitudes and latitudes correspond to Europe, which doesn't go down to 10 degrees latitude, let alone -20 degrees. The Post-processing keywords page gives more information: it's North/West/South/East, which still makes no sense for Europe, until you expand the Area examples tab on that page and find out that by "Europe" they mean Europe plus Saudi Arabia and most of North Africa.

Using the data: What's in it?

Once you have the data file, assuming you requested data in netcdf format, you can parse the .nc file with the netCDF4 Python module -- available as Debian package "python3-netcdf4", or via pip -- to read that file:

import netCDF4
data = netCDF4.Dataset('filename.nc')

But what's in that Dataset? Try running the preceding two lines in the interactive Python shell, then:

>>> for key in data.variables:
...     print(key)
...
longitude
latitude
level
time
w
vo
u
v

You can find out more about a parameter, like its units, type, and shape (array dimensions).
Let's look at "level":

>>> data['level']
<class 'netCDF4._netCDF4.Variable'>
int32 level(level)
    units: millibars
    long_name: pressure_level
unlimited dimensions:
current shape = (3,)
filling on, default _FillValue of -2147483647 used
>>> data['level'][:]
array([ 250,  775, 1000], dtype=int32)
>>> type(data['level'][:])
<class 'numpy.ndarray'>

Levels has shape (3,): it's a one-dimensional array with three elements: 250, 775 and 1000. Those are the three levels I requested from the web API and in my Python script. The units are millibars.

More complicated variables

How about something more complicated? u and v are the two components of wind speed.

>>> data['u']
<class 'netCDF4._netCDF4.Variable'>
int16 u(time, level, latitude, longitude)
    scale_factor: 0.002161405503194121
    add_offset: 30.095301438361684
    _FillValue: -32767
    missing_value: -32767
    units: m s**-1
    long_name: U component of wind
    standard_name: eastward_wind
unlimited dimensions: time
current shape = (30, 3, 241, 480)
filling on

u (v is the same) has a shape of (30, 3, 241, 480): it's a 4-dimensional array. Why? Looking at the numbers in the shape gives a clue. The second dimension has 3 rows: they correspond to the three levels, because there's a wind speed at every level. The first dimension has 30 rows: it corresponds to the dates I requested (the month of April 2015). I can verify that:

>>> data['time'].shape
(30,)

Sure enough, there are 30 times, so that's what the first dimension of u and v corresponds to. The other dimensions, presumably, are latitude and longitude. Let's check that:

>>> data['longitude'].shape
(480,)
>>> data['latitude'].shape
(241,)

Sure enough! So, although it would be nice if it actually told you which dimension corresponded with which parameter, you can probably figure it out. If you're not sure, print the shapes of all the variables and work out which dimensions correspond to what:

>>> for key in data.variables:
...
...     print(key, data[key].shape)

Iterating over times

data['time'] has all the times for which you have data (30 data points for my initial test of the days in April 2015). The easiest way to plot anything is to iterate over those values:

timeunits = JSdata.data['time'].units
cal = JSdata.data['time'].calendar
for i, t in enumerate(JSdata.data['time']):
    thedate = netCDF4.num2date(t, units=timeunits, calendar=cal)

Then you can use thedate like a datetime, calling thedate.strftime or whatever you need. So that's how to access your data. All that's left is to plot it -- and in this case I had Geert Barentsen's script to start with, so I just modified it a little to work with a slightly changed data format, and then added some argument parsing and runtime options.

Converting to Video

I already wrote about how to take the still images the program produces and turn them into a video: Making Videos (that work in Firefox) from a Series of Images. However, it turns out ffmpeg can't handle files that are named with timestamps, like jetstream-2017-06-14-250.png. It can only handle one sequential integer. So I thought, what if I removed the dashes from the name, and used names like jetstream-20170614-250.png with %8d? No dice: ffmpeg also has the limitation that the integer can have at most four digits. So I had to rename my images. A shell command works: I ran this in zsh but I think it should work in bash too.

cd outdir
mkdir moviedir
i=1
for fil in *.png; do
  newname=$(printf "%04d.png" $i)
  ln -s ../$fil moviedir/$newname
  i=$((i+1))
done

ffmpeg -i moviedir/%4d.png -filter:v "setpts=2.5*PTS" -pix_fmt yuv420p jetstream.mp4

The -filter:v "setpts=2.5*PTS" controls the delay between frames -- I'm not clear on the units, but larger numbers have more delay, and I think it's a multiplier, so this is 2.5 times slower than the default. When I uploaded the video to YouTube, I got a warning, "Your videos will process faster if you encode into a streamable file format."
I then spent half a day trying to find a combination of ffmpeg arguments that avoided that warning, and eventually gave up. As far as I can tell, the warning only affects the 20 seconds or so of processing that happens after the 5-10 minutes it takes to upload the video, so I'm not sure it's terribly important. Results Here's a video of the jet stream from 2012 to early 2018, and an earlier effort with a much longer 6.0x delay. And here's the script, updated from the original Barentsen script and with a bunch of command-line options to let you plot different collections of data: jetstream.py on GitHub. [ 14:18 May 14, 2018 More programming | permalink to this entry | ]
https://shallowsky.com/blog/programming/ecmwf-weather-data.html
Hi, new to the forum, so go easy :) I'm very new to Java and I'm a little stuck... I have to create a program which uses classes and objects; all of the objects need to be stored in 1 array. So, I have a super class called Student, then 2 subclasses called Female and Male. It should look something like this:

public class Student
    student number;
    name;

public class Female extends Student(String sGender)
    String gender;
    gender = sGender;

public class Male extends Student(String sGender)
    String gender;
    gender = sGender;

Sorry if that looks a little crude. Anyway, my problem is: if I create an array of Student[], then populate it with Females and Males, they lose any unique methods, and every object can only use the methods which are in Student. I know I can put every method in the super class (Student), which are then inherited by the subclasses (Male and Female), but this seems wrong because it takes more code than it should. Any help would be great.
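One standard way out, sketched below with my own class and method names (not the poster's code): keep shared state in Student, and either downcast with instanceof when you need a subtype-only method, or — usually cleaner — declare a common method in Student and override it in the subclasses.

```java
// Sketch: shared fields live in Student; subtype-only methods need a cast.
class Student {
    String name;
    Student(String name) { this.name = name; }
}

class Female extends Student {
    Female(String name) { super(name); }
    String uniqueMethod() { return name + " (female-only behavior)"; }
}

class Male extends Student {
    Male(String name) { super(name); }
    String uniqueMethod() { return name + " (male-only behavior)"; }
}

public class Main {
    public static void main(String[] args) {
        Student[] students = { new Female("Ada"), new Male("Alan") };
        for (Student s : students) {
            if (s instanceof Female) {
                System.out.println(((Female) s).uniqueMethod());
            } else if (s instanceof Male) {
                System.out.println(((Male) s).uniqueMethod());
            }
        }
    }
}
```

If every subclass has some version of the behavior, prefer declaring uniqueMethod() (perhaps abstract) in Student and overriding it — then the loop needs no instanceof checks at all.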
https://www.daniweb.com/programming/software-development/threads/331313/stuck-with-extends-and-arrays
Calibre now depends on OrderedDict, which needs python 2.7

Bug Description

There is a build error on Ubuntu 10.10 (Maverick) 64bit. It is because calibre 0.7.54 (2011-04-08) now depends on collections. but minimum requirement of python version is 2.6. We should not depend on collections. of collections. A package name of library in Natty is 'python-

-------
* * Running resources
* Creating scripts.pickle
Traceback (most recent call last):
  File "setup.py", line 99, in <module>
    sys.
  File "setup.py", line 85, in main
    command.
  File "/home/
    self.
  File "/home/
    cmd.run(opts)
  File "/home/
    from calibre.
  File "/home/
    from calibre.
  File "/home/
    from calibre.
  File "/home/
    from calibre.
  File "/home/
    from calibre.
    from collections import OrderedDict
ImportError: cannot import name OrderedDict

python 2.7 features were needed as of version 0.7.51. I now try to maintain python 3 forward compatibility in new calibre code, which needed various features from 2.7, like OrderedDict, importlib, io.BytesIO etc. Maintaining private versions of these is too much work. Incidentally, I hope the earthquake/tsunami did not affect you.

In setup.py it is written:

def check_version_
    vi = sys.version_info
    if vi[0] == 2 and vi[1] > 5:
        return None
    return 'calibre requires python >= 2.6'

It should be changed to the proper value. Yes, I'm fine, and so busy operating sinsai.info, which is the ushahidi crisis response platform, with other OSS developers.

Glad to hear it. I've fixed the function. Calibre has required Python 2.7 for quite some time. All calibre binary installs bundle Python 2.7. All calibre binary installs also bundle collections. You will need to update the Python version installed on Ubuntu. Python 2.7 specific features are used elsewhere.
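The corrected check could look like the sketch below. The function name in the bug report is truncated, so this name, and the tuple comparison, are mine, not calibre's actual code:

```python
import sys

def check_python_version(version_info=None, minimum=(2, 7)):
    """Return None if the interpreter is new enough, else an error string."""
    vi = version_info if version_info is not None else sys.version_info
    if (vi[0], vi[1]) >= minimum:
        return None
    return 'calibre requires python >= %d.%d' % minimum

print(check_python_version((2, 6, 0)))  # calibre requires python >= 2.7
print(check_python_version((2, 7, 3)))  # None
```

Passing the version tuple in (instead of always reading sys.version_info) is only there to make the check testable.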
https://bugs.launchpad.net/calibre/+bug/761226
Does anyone know about the book "PL/SQL Annotated Archives" by Loney? I wanted to buy a book for DBA scripts. We do have some here, but honestly it is very hard for me to know what the values (results) mean, and what the results should look like. For example, the results of this query:

select initcap(namespace) Library_Name,
       gets Gets,
       gethitratio Get_Hit_Ratio,
       pins Pins,
       pinhitratio Pin_Hit_Ratio,
       reloads Reloads, invalidations Invalid
from v$librarycache
/

Is there any other book or site where I can read what these mean? (Not just for this script; there are lots of them, and I am trying to understand.)

Thanks
Sonali

Does anyone know a good book on Oracle Networking? Thanks

Oracle Class notes are best if you can get them. If not, there is an OSBORNE publishers book on networking. That was on SQL*Net; not sure they updated it for Net8.

Go to Kevin Loney's site and throw the question to his e-mail or publishers; they will get back to you on the terminology he used in the book if he didn't mention it anywhere in the book and it's hard to interpret for DBAs in the field. They have to... then only it makes sense for what he has written...
Reddy, Sam

Oracle has a lot of dynamic performance views, and the example you gave is just one of them (it's impossible for your book to discuss them all). The names after the column names are just aliases; the fields are best explained in the Oracle documentation: [url][/url] That link refers to Oracle 8.1.6, but I think there's no difference. You can also download all of the documentation in PDF format. It's much easier to search for specific information.
Adriano.

Thank you, sreddy and Adriano. I should have been more specific: I am not talking about the aliases but the actual fields. Also my main problem (question) is how do I know whether what is returned by these queries is correct, or bad and in need of database changes? Here is an example with the results from my test database. (This is not the only query; I was trying to understand many others that we use here.)
    SQLWKS> select initcap(namespace) Library_Name,
         2> gets Gets,
         3> gethitratio Get_Hit_Ratio,
         4> pins Pins,
         5> pinhitratio Pin_Hit_Ratio,
         6> reloads Reloads, invalidations Invalid
         7> from v$librarycache
         8> /
    LIBRARY_NAME          GETS GET_HIT_RA       PINS PIN_HIT_RA    RELOADS    INVALID
    --------------- ---------- ---------- ---------- ---------- ---------- ----------
    Sql Area           3665223 .860848849    9395870 .880328378     124178       4872
    Table/Procedure    1662037 .976147342    2750160  .93404711      76232          0
    Body                  1992 .858433735       2272 .685739437        432          0
    Trigger                192    .671875        192    .453125         19          0
    Index                 3179 .000629129       3180 .000628931          0          0
    Cluster              30177 .997912317      40014 .996751137          8          0
    Object                   0          1          0          1          0          0
    Pipe                     0          1          0          1          0          0
    8 rows selected.

For example, in these results what should the value of GETHITRATIO be? If it is wrong, what do I need to change? This is the kind of information I wanted to read about, so that I know. Thanks again, Sonali

Well, I never measured performance using V$LIBRARYCACHE before, but I just read about that view and I will give it a try:

- GETHITRATIO and PINHITRATIO give the ratio between the number of times an item (a lock or a pin) was found in the library cache and the number of times it was requested. Since Oracle searches for recently used data/SQL in the SGA before accessing the database or recompiling, to improve performance it is desirable to have a high ratio here (near 1, or 100%).
- RELOADS gives the number of reloads of an object from disk because another pin was performed since creation of the object handle. It is desirable to have a low number here (in other words, minimum access to disk).
- INVALID gives the number of times an object was marked invalid because a dependent object was modified. I think it is a good idea to minimize this parameter too.

Analyzing your results, your database seems OK. You have a high number of reloads, but it is difficult to say why; it only suggests that your data is being constantly updated (maybe the triggers?)
The ratio for the indexes is low, but I don't understand why. Maybe because the indexes are not used often and are not always available in memory, or because index access goes directly to the datafiles instead of memory. It seemed intuitive, but I'm not that sure; if someone could give more objective information, we'd all be pleased. That stuff gave me a doubt, too: how EXACTLY does access to the SGA work before information is retrieved from the database? Does it depend on the object type? Is there any preference for any particular kind of object? I searched the Oracle documentation, but it was vague. Any links, documents, bookmarks? Adriano.

You had better read the Oracle documentation "Oracle8i Designing and Tuning for Performance" once. There you will come across the terminology and the meanings of the columns in the views, etc. You will feel better at interpreting results after reading the manual.

Hi Sonali, one of the best known books that I have come across for Oracle networking is the Exam Cram Oracle DBA networking guide by Barbara Ann Pascavage. I must agree that Barbara has explained in simple English everything you wanted to know about networking and Net8. Although the book is for exam purposes, I often end up using it for troubleshooting the network. No doubt the book has received 5 stars in Amazon reviews. HTH

Sonali, there are some things that you have to know and be clear on to understand the tuning issues and concepts. First find a book or some URL that explains the Oracle architecture. This is the most important and critical thing for understanding the tuning options and issues. Next go to the URL that adrianomp posted on this thread and check the contents. That explains everything from the parameter options of init.ora to the views. You can then take a performance tuning book and try to make some sense out of these concepts. There are a number of sites that talk in depth about performance tuning. Check those sites too.
In Oracle it would not help you to tune just one parameter; it is a matter of striking a balance, and most of the time that can only be achieved on a trial and error basis. So follow the tips and you will be better off. Good luck, Sam

Thanks Sam. Life is a journey, not a destination!

Hi Sonali, regarding Oracle networking, please read OCP Test: Network Administration by Comdex. If you have any queries, please write to me at rohitsn@altavista.com. Regards, Rohit Nirkhe, Oracle DBA, OCP
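The rules of thumb discussed in this thread can be sketched as a small check over rows from v$librarycache. This is an illustrative sketch, not from the thread: the 0.9 threshold and the warning texts are my own assumptions, not Oracle's numbers, and should be tuned for your own system.

```python
def check_library_cache(rows):
    """rows: list of (namespace, gethitratio, pinhitratio, reloads, invalidations)

    Returns a list of human-readable warnings for namespaces whose
    ratios fall below a rule-of-thumb threshold, or that show
    invalidations.
    """
    warnings = []
    for name, get_hit, pin_hit, reloads, invalid in rows:
        if get_hit < 0.9:  # illustrative threshold, not an Oracle rule
            warnings.append("%s: GETHITRATIO %.2f is below 0.9" % (name, get_hit))
        if pin_hit < 0.9:
            warnings.append("%s: PINHITRATIO %.2f is below 0.9" % (name, pin_hit))
        if invalid > 0:
            warnings.append("%s: %d invalidations" % (name, invalid))
    return warnings
```

Feeding it the Sql Area and Cluster rows from the listing above would flag the first and pass the second; as the thread notes, low ratios for rarely used namespaces (like Index) may be harmless.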
http://www.dbasupport.com/forums/showthread.php?7429-Oracle-Book
Closure coding convention suggestion

Hi Jack, I'm certainly not as productive / good as you are (that's an understatement, I'm way way less productive than you are) but while reading your code there are certain things I wish you'd done that would make life much easier for the reader. On many occasions you use a closure to protect the local variables when you define a top-level object. For instance, while reading EventManager.js, at the beginning I see

Code:
Ext.EventManager = function(){
............. 600 lines below
}

and at first glance

Code:
function(){...}

reads like a constructor. Not until I see something like

Code:
var pub = {.........}

Code:
return pub;

Code:
}();

do I realize it is actually an immediately invoked closure. So my humble opinion is that if we wrap the function(){} literal in parentheses, then it would be very clear from the start that it is not a constructor but a closure. So instead of

Code:
function() {
...... 600 lines ......
}();

we would write

Code:
(function() {
...... 600 lines ......
})();

Considering how I am constantly trying to trim bytes, adding extra ones that serve no purpose doesn't look too appealing. At the top of the file and in the docs it is defined with @singleton (right before the closure starts), which should signal for you that it is a closure (since it can't be a constructor).
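The difference under discussion is easy to see in a standalone sketch. This is illustrative code, not from the Ext source; the Counter names are my own.

```javascript
// Both styles build the same singleton from an immediately invoked
// closure; the wrapping parentheses only change how the code *reads*.

var CounterA = function () {   // looks like a constructor until you
    var count = 0;             // spot the trailing () far below
    return {
        next: function () { return ++count; }
    };
}();

var CounterB = (function () {  // the leading paren announces up front
    var count = 0;             // that this is not a constructor
    return {
        next: function () { return ++count; }
    };
})();
```

Either way, count stays private to the closure and only next is exposed.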
http://www.sencha.com/forum/showthread.php?3005-Closure-coding-convention-suggestion
iceoryx_utils

Maintainers: Eclipse Foundation, Inc. Authors: Eclipse

Eclipse iceoryx utils overview

The iceoryx utils are our basic building blocks, the foundation of iceoryx. There is a wide variety of building blocks grouped into categories or namespaces, depending on where or how they are used.

Categories

Structure

The following sections have a column called internal to indicate that the API is not stable and can change at any time. You should never rely on it, and you get no support if you do and your code does not compile after a pull request. The column maybe obsolete marks classes which may be removed soon.

CXX

STL constructs which are not in the C++11 standard are contained here, as well as constructs which help us in our daily life. We deviate here from our coding guidelines and follow the C++ STL coding guidelines, to give the user a painless transition from the official STL types to ours. The API should also be identical to the corresponding STL types, but we have to make exceptions. For instance, we do not throw exceptions, we try to avoid undefined behavior, and we do not use dynamic memory. In these cases we adjusted the API to our use case. Most of the headers provide some example code on how the class should be used.

Concurrent

If you have to write concurrent code, never use concurrency constructs like mutex, atomic, thread, semaphore etc. directly. Most use cases can be solved by using an ActiveObject, which uses our FiFo as a building block, or a queue which is thread-safe when combined with smart_lock. To learn more about active objects see Prefer Using Active Objects Instead Of Naked Threads.

Overview of the available queues: (table not reproduced)

Design pattern

Error handling

The Error-Handler is a central instance for collecting all errors and reacting to them. All error enums are collected in the file error-handling.hpp.
The Error-Handler has different error levels; for more information see error-handling.md.

Log

For information about how to use the logger API, see error-handling.md.

POSIX wrapper

We abstract POSIX resources following the RAII idiom and by using our Creation pattern. Try to use these abstractions exclusively, or add a new one when using POSIX resources like semaphores, shared memory, etc.

Units

Never use physical properties like speed or time directly as an integer or float in your code. Otherwise you run into problems like this function: void setTimeout(int timeout). What is the unit of the argument? Seconds? Minutes? If you use our Duration you see it directly in the code:

    void setTimeout(const Duration & timeout);

    setTimeout(11_s);  // 11 seconds
    setTimeout(5_ms);  // 5 milliseconds
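A minimal sketch of how such a Duration type with user-defined literals can be built. This is illustrative only; the names, members, and conversions here are assumptions for the example, not the real iceoryx_utils API.

```cpp
#include <cstdint>

class Duration
{
public:
    // factory methods make the unit explicit at construction time
    static constexpr Duration milliseconds(std::uint64_t ms) { return Duration(ms); }
    static constexpr Duration seconds(std::uint64_t s) { return Duration(s * 1000U); }

    constexpr std::uint64_t toMilliseconds() const { return m_ms; }

private:
    explicit constexpr Duration(std::uint64_t ms) : m_ms(ms) {}
    std::uint64_t m_ms;
};

// user-defined literals give the 11_s / 5_ms spelling shown above
constexpr Duration operator"" _s(unsigned long long s) { return Duration::seconds(s); }
constexpr Duration operator"" _ms(unsigned long long ms) { return Duration::milliseconds(ms); }

// the unit is now visible at every call site
inline void setTimeout(const Duration& timeout) { (void)timeout; /* ... */ }
```

Because the constructor is private, callers are forced through the named factories or literals, so a bare setTimeout(11) no longer compiles.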
https://index.ros.org/p/iceoryx_utils/
I’ve recently bought two Grove PIR motion sensors and I’d like to use them from a Raspberry Pi. Looking at your git code (link removed because as a new user I still can not post a link) it seems to me no one is testing the return value of the isDetected() function, and it always prints “PIR sensor detected some stuff”. That said, I’m not able to test your PIR motion sensors even with other working code, such as the one at this link: (link removed because as a new user I still can not post a link, search for raspberry pi parent detector)

I’ve connected VCC to pin 2 of the Raspberry Pi, GND to pin 6 and D1 to pin 7, and run the following Python code:

[code]import RPi.GPIO as GPIO
import time

sensorPin = 7

GPIO.setmode(GPIO.BOARD)
GPIO.setup(sensorPin, GPIO.IN)

prevState = False
currState = False

while True:
    time.sleep(0.1)
    prevState = currState
    currState = GPIO.input(sensorPin)
    if currState != prevState:
        newState = "HIGH" if currState else "LOW"
        print "GPIO pin %s is %s" % (sensorPin, newState)[/code]

I’m not able to detect a movement (I have tested your code and my code with both of my two Grove PIR motion sensors). Do you have any suggestions for me and my Raspberry setup?
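One way to check the polling logic without any hardware is to factor the state-change detection out into a pure function. This is an illustrative sketch of the loop in the post above, not Seeed's code; the function name is my own.

```python
def state_changes(samples):
    """Yield (index, "HIGH"/"LOW") whenever the sampled pin level changes.

    samples is any iterable of truthy/falsy pin readings, so the edge
    detection can be unit-tested with plain lists before wiring it to
    GPIO.input() on the Pi.
    """
    prev = None
    for i, cur in enumerate(samples):
        if prev is not None and cur != prev:
            yield (i, "HIGH" if cur else "LOW")
        prev = cur
```

If this reports edges for a fake sample list but the real sensor never produces a HIGH reading, the problem is on the wiring or sensor side rather than in the loop.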
https://forum.seeedstudio.com/t/testing-grove-pir-motion-sensor-on-raspberry-pi/17952
Faster Compiles, Less Dependence

Slow Compiles, More Dependence

Why does my C++ project take so long to compile? Why is it that when I change a lowly object or base class I somehow force a re-compile of the entire known universe? The answer to these questions is almost certainly one or more of the following:

- Header files refer to other header files
- Classes contain classes or are derived from other classes, ad infinitum
- The project is just huge
- Slow computers

Of course a lot of these ideas and opinions depend on the compiler and operating system you are using. I use Microsoft Visual C++ on Windows NT 4.0, but anything not related to the machine or OS will affect any C++ compiler on any operating system. By reducing header file includes, by reducing class dependencies, and by breaking the project into smaller components, you can greatly increase your turnaround from edit, through compile, and into debug.

Header Files

If header file A includes header file B then the compiler will have to check the date and time of the files to see if A needs rebuilding in relation to B. In any case both header files will have to be opened by the compiler, read into memory and parsed. And if file B contains include directives of its own... The fix is easy to implement; it's called an include guard, and most header files already have them. The difference is that the include guard should also be placed where file B is included, and not just within B itself. By using include guards before we include the file, we save the compiler the trouble of opening the file, parsing the include guard in the file, and having to parse the entire file looking for the end of the include guard.
    ///////////////////////////////////////////////////////////Start
    #ifndef FILEB_H
    #define FILEB_H
    //
    // File B declarations
    //
    #endif // FILEB_H
    ////////////////////////////////////////////////////////////End

    ///////////////////////////////////////////////////////////Start
    #ifndef FILEA_H
    #define FILEA_H

    //
    // Only include file B if it is not already
    #ifndef FILEB_H
    #include "FileB.h"
    #endif // FILEB_H

    //
    // File A declarations
    //
    #endif // FILEA_H
    ////////////////////////////////////////////////////////////End

MS Visual C++ makes use of a new (vendor specific) pragma that purportedly removes the need for doing this; I have no evidence that it either works or doesn't. The method described above is compiler independent and works even if you do not have the rights or the desire to modify the header files you include. The Microsoft compiler has one irritating habit in one of its nicer features: when you create a new class by right clicking on the root of the class view, it adds an include guard that looks like a GUID to the resultant header file. Why does it do this? Surely the designer of this feature knows what a traditional include guard looks like, and I am equally sure that my include guards do not need global uniqueness!

Class Dependencies

Class A needs class B for some reason, and because of that, when you make a change to class B, everything that relies on it re-compiles. This takes some thought to get around. The first pass at getting out of this mess is to remove anything you can forward declare; pointers and references are prime candidates for removal.

    // MyClass.h
    #ifndef MYCLASS_H
    #define MYCLASS_H

    #include "YourClass.h"

    class CMyClass
    {
    public:
        explicit CMyClass( CYourClass & );
    private:
        CYourClass *m_pYours;
    };
    #endif // MYCLASS_H

The above code is typical. The header file YourClass.h does not need to be included in MyClass.h; a better way is to forward declare the class CYourClass and remove the include directive as follows.
    // MyClass.h
    #ifndef MYCLASS_H
    #define MYCLASS_H

    class CYourClass;

    class CMyClass
    {
    public:
        explicit CMyClass( CYourClass & );
    private:
        CYourClass *m_pYours;
    };
    #endif // MYCLASS_H

Now that YourClass.h is removed, clients of CMyClass should no longer need recompiling every time CYourClass changes. Private members do not form part of a class interface (protected data members form as much of the class interface as public ones do); one aim is to remove private data and members altogether. The most widely used method is to gather your private data and members and to place them in a structure that is private to the class implementation. In the class declaration have a forward declaration of the type, and in the private section of the class have a pointer to it.

    ...
    private:
        class CPrivateData *m_ppriv;
    ...

In the constructor you should initialise m_ppriv to be a dynamically allocated instance of CPrivateData, and in the destructor you should delete it.

    CMyClass::CMyClass() : m_ppriv( new CPrivateData ) {}
    CMyClass::~CMyClass() { delete m_ppriv; }

Private member functions can also be moved into CPrivateData. Problems arise when a private function calls a virtual function: if the private function has been moved to a different class there is no linkage. There are ways and means of overcoming this problem, but if you need to know more on this subject I strongly recommend you read Large Scale C++ Software Design.

Huge Project

A huge project, hundreds of thousands of lines of code. Try dividing the project into two distinct areas: those parts that change and those that don't. Those parts that do not change should be moved to some sort of library. The libraries should then be divided up into the bits that are not used in the product very often and those that are. The library code that is used often is suitable for either statically linked libraries or statically linked DLLs. The library code that the project uses least often should be dynamically loaded by your project.
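Putting the pieces of the private-data idiom together in one place gives a sketch like the following. The CMyClass/CPrivateData names follow the article; the contents of CPrivateData and the Count accessor are invented for illustration.

```cpp
// MyClass.h -- clients see only a pointer to an as-yet-undefined type
class CMyClass
{
public:
    CMyClass();
    ~CMyClass();
    int Count() const;                       // example accessor for the sketch
private:
    CMyClass( const CMyClass & );            // copying would double-delete
    CMyClass &operator=( const CMyClass & ); // m_ppriv, so forbid it
    class CPrivateData *m_ppriv;
};

// MyClass.cpp -- the private layout can change without touching MyClass.h,
// so clients of CMyClass never recompile when it does
class CPrivateData
{
public:
    CPrivateData() : m_nCount( 0 ) {}
    int m_nCount;
};

CMyClass::CMyClass() : m_ppriv( new CPrivateData ) {}
CMyClass::~CMyClass() { delete m_ppriv; }
int CMyClass::Count() const { return m_ppriv->m_nCount; }
```

Note the suppressed copy operations: with a raw owning pointer, the compiler-generated copy would leave two objects deleting the same CPrivateData.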
This will speed your link times and will also have a positive effect on your application load speed. In MSVC (I am sure this is true of many compilers) there is the ability to use pre-compiled header files; we ran tests with this feature. We had automatic use of pre-compiled headers in a project of about 200,000 lines of code. The pre-compiled header did indeed speed compilation, but the real improvement came when we specified the PCH file settings ourselves and picked out those source files that couldn't use it. The speed increase was dramatic, from about 28 minutes down to something like 10. Don't use the automatic pre-compiled header option.

The latest versions of MSVC also have the ability to contain sub-projects and to have dependencies between them; only add sub-projects you really need. By having lots of sub-projects you are potentially adding an enormous amount of code to the dependency checking required for a compile. When you are debugging a DLL, if you set the DLL as the active project and your project EXE as the target, you may find that the debugger starts quicker. Some DLLs do not have to be built as debug: there are cases where heap objects are not passed across the DLL boundaries and so there is no direct need for a debug DLL. Not having the debug information can be a problem, but if the DLL loads and executes quicker then it is sometimes worth the loss. DebugHlp is a good example of this; compiled debug or release, it has the same functionality. The only time DebugHlp needs to be compiled for debug is if there is a bug in it.
Multi-processor machines with SCSI hard drives are getting cheaper and should be well within reach of even the smallest software house. We have tested various configurations over the years and have found that the hard drive is the compiler's bottleneck. Get the best you can afford; get two if possible, one for the OS, tools etc. and the other for just your projects and your compiler output. SMP is next best: if you don't want to be twiddling your thumbs whilst the compiler spends time number crunching, SMP is the only way to go. Microsoft Developer Studio up to version 6.0 does not perform any sort of multi-processor magic, but having a second processor allows you to continue editing code in the meantime. I am fairly sure that SMP helps even a single-threaded compiler, because the other processor can still perform the housekeeping. SMP cannot be used with Windows 95/98 because there is no support for it in the OS. RAM is good up to a point. We found that going from 32MB to 64MB made a big difference, 64MB to 96MB was almost as good, and going to 128MB made very little difference. That said, we only buy developer machines with 128MB as standard, and I have had 256MB at home for well over two years, so I do believe it's worth having.
https://gipsysoft.com/articles/fcomp/fcomp.shtml
Consider the following Python function.

    def blast(n):
        if n > 0:
            print n
            blast(n-1)
        else:
            print "Blast off!"

What is the output from the call?

    blast(5)

The following mechanism helps us understand what is happening:

What is the output of the following?

    def rp1( L, i ):
        if i < len(L):
            print L[i],
            rp1( L, i+1 )
        else:
            print

    def rp2( L, i ):
        if i < len(L):
            rp2( L, i+1 )
            print L[i],
        else:
            print

    L = [ 2, 3, 5, 7, 11 ]
    rp1(L,0)
    rp2(L,0)

Note that the entirety of list L is not copied to the top of the stack. Instead, a reference (an alias) to L is placed on the stack.

The factorial function is

    n! = n * (n-1) * (n-2) * ... * 1

and

    0! = 1

This is an imprecise definition because the "..." is not formally defined. Writing this recursively helps to clear it up:

    n! = n * (n-1)!    (for n >= 1)

and

    0! = 1

The factorial is now defined in terms of itself, but on a smaller number! Note how this definition now has a recursive part and a non-recursive part, the base case 0! = 1.

We'll add output code to the implementation to help visualize the recursive calls in a different way.

The Fibonacci sequence starts with the values 0 and 1. Each new value in the sequence is obtained by adding the two previous values, producing

    0, 1, 1, 2, 3, 5, 8, 13, 21, ...

Recursively, the value, f(n), of the sequence is defined as

    f(n) = f(n-1) + f(n-2),    with f(0) = 0 and f(1) = 1

This leads naturally to a recursive function, which we will complete in lecture.

Fractals are often defined using recursion. How do we draw a Sierpinski triangle like the one shown below?

Define the basic principle

Define the recursive step

    def draw_sierpinski( ):

Remember the lego homework? We wanted to find a solution based on mix and match. While a non-recursive solution exists, the recursive solution is easier to formulate. Given a list of legos and a lego we would like to match:

Define the basis step(s): when should we stop?
Define the recursive step

    def match_lego(legolist, lego):

Consider the following recursive version of binary search:

    def binary_search_rec( L, value, low, high ):
        if low == high:
            return L[low] == value
        mid = (low + high) / 2
        if value < L[mid]:
            return binary_search_rec( L, value, low, mid-1 )
        else:
            return binary_search_rec( L, value, mid, high )

    def binary_search(L, value):
        return binary_search_rec(L, value, 0, len(L)-1)

Here is an example of how this is called:

    print binary_search( [ 5, 13, 15, 24, 30, 38, 40, 45 ], 13 )

Note that we have two functions, with binary_search acting as a "driver function" for binary_search_rec, which does the real work. Is the code right?

The fundamental idea of merge sort is recursive: We repeat our use of the merge function in class.

    def merge_sort(L):

Comparing what we write to our earlier non-recursive version of merge sort shows that the primary job of the recursion is to organize the merging process!

Define the multiplication function, mult, in terms of the addition function we've just defined, together with +1, -1 and equality tests. Now, define the integer power function, a**n, in terms of the mult function you just wrote, together with +1, -1, and equality.

Euclid's algorithm for finding the greatest common divisor is one of the oldest known algorithms. If a and b are positive integers, with a >= b, then let r be the remainder of dividing a by b. If r == 0, then b is the GCD of the two integers. Otherwise, the GCD of a and b equals the GCD of b and r. How do we know that gcd is proceeding toward the base case (as required by our "rules" of writing recursive functions)?

Specify the recursive calls and return values from our merge_sort implementation for the list

    L = [ 15, 81, 32, 16, 8, 91, 12 ]
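Euclid's rule translates directly into a recursive function. This is one possible sketch of an answer, not the course's official solution.

```python
def gcd(a, b):
    """Greatest common divisor of positive integers a and b, a >= b."""
    r = a % b          # remainder of dividing a by b
    if r == 0:         # base case: b divides a evenly
        return b
    return gcd(b, r)   # recursive step: same GCD, strictly smaller remainder
```

It proceeds toward the base case because the second argument strictly decreases on every call (r < b), and a positive integer cannot decrease forever.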
http://www.cs.rpi.edu/~sibel/csci1100/fall2014/course_notes/lec21_recursion.html
Thinking of Monads as imperative languages

Haskell provides do notation, which is syntactic sugar to facilitate imperative programming. The Monad of a do block specifies the imperative language, which is typically equipped with actions (you might call them the verbs of the language).

    ioExample :: IO ()
    ioExample = do
      putStr "What is your name? "   -- an action which prints to the console
      name <- getLine                -- an action which retrieves a line from the console
      putStrLn ("Hello, " ++ name)

    stateExample :: State Int ()
    stateExample = do
      x <- get       -- an action which retrieves the current state
      put (x + 1)    -- an action which overrides the current state

These examples which I have defined are actions themselves. In other words, I have created new verbs in their respective languages, which have a meaning defined as a combination of other actions. We can invoke these actions within a do block of the appropriate language.

    invokeIOExample :: IO ()
    invokeIOExample = do
      ioExample
      otherIOStuff

    invokeStateExample :: State Int ()
    invokeStateExample = do
      stateExample
      otherStateStuff

Monad laws represent the well-foundedness of refactoring. When definitions are inlined or factored out, the monad laws guarantee that the meaning stays the same. I won't illustrate this, but it's important to think about if you haven't already.

Monad transformers, then, are language enhancers. They add additional actions to an underlying language. You might call them language transformers.

The Pause transformer

Today we are going to write a monad transformer. This transformer will augment the underlying language with one additional action: pause. This way, we will be able to build (for example) an IO action which can run for a while, and then pause itself when it reaches a checkpoint. At that point, it will produce a new action which can be run to continue where it left off, until the next pause point.

Let's start off simple. Fairly simple stuff, though currently a bit crufty.
We would like to be able to step these actions, or perhaps we would like to ignore the pauses and run them to completion.

    -- show Try implementing these
    runN :: Monad m => Int -> Pause m -> m (Pause m)
    runN = undefined

    fullRun :: Monad m => Pause m -> m ()
    fullRun = undefined

    -- show Check the result
    main = do
      rest <- runN 2 pauseExample1
      putStrLn "=== should print through step 2 ==="
      Done <- runN 1 rest
      -- remember, IO Foo is just a recipe for Foo, not a Foo itself
      -- so we can run that recipe again
      fullRun rest
      fullRun pauseExample1

    runN :: Monad m => Int -> Pause m -> m (Pause m)
    runN 0 p = return p
    runN _ Done = return Done
    runN n (Run m)
      | n < 0     = fail "Invalid argument to runN" -- ewww I just used fail.
      | otherwise = m >>= runN (n - 1)

    fullRun :: Monad m => Pause m -> m ()
    fullRun Done = return ()
    fullRun (Run m) = m >>= fullRun

Nifty.

Making Pause a real transformer

We accomplished what we wanted with the earlier code, but the way we had to write pauseExample1 was ugly. We want to create a "real" monad transformer, with a pause action. For this example, it is as simple as adding a result to the Done constructor.

    -- show Given this definition...
    import Control.Monad
    import Control.Monad.Trans.Class

    data PauseT m r
      = RunT (m (PauseT m r))
      | DoneT r

    -- show implement these
    instance (Monad m) => Monad (PauseT m) where
      -- return :: Monad m => a -> PauseT m a
      return a = undefined
      -- (>>=) :: Monad m => PauseT m a -> (a -> PauseT m b) -> PauseT m b
      DoneT r >>= f = undefined
      RunT m >>= f = undefined

    instance MonadTrans PauseT where
      -- lift :: Monad m => m a -> PauseT m a
      lift m = undefined

    pause :: Monad m => PauseT m ()
    pause = undefined

    -- bonus exercise, implement joinP
    -- double bonus: without relying on PauseT's monad instance
    -- triple bonus: explain in English what joinP *means* for the Pause monad
    joinP :: Monad m => PauseT m (PauseT m a) -> PauseT m a
    joinP = undefined

    -- show ...and see if it compiles.
    main = putStrLn "it compiles"

UPDATE: Whoops, /r/haskell pointed out that there are problems with this representation of PauseT. Skip down below and see if you can figure out what's wrong. Then, please move on to Part 2, where we will build a better abstraction that obeys laws and works as you'd expect.

    -- UPDATE: This implementation disobeys MonadTrans laws,
    -- and pause isn't quite right.

Great, now we can use pause, and lift anything else from the base language.

    import Control.Monad
    import Control.Monad.Trans.Class

    data PauseT m r
      = RunT (m (PauseT m r))
      | DoneT r

    example2 :: PauseT IO ()
    example2 = do
      lift $ putStrLn "Step 1"
      pause
      lift $ putStrLn "Step 2"
      pause
      lift $ putStrLn "Step 3"

    fullRunT :: PauseT IO r -> IO r
    fullRunT (DoneT r) = return r
    fullRunT (RunT m) = m >>= fullRunT

    runNT :: Int -> PauseT IO r -> IO (PauseT IO r)
    runNT 0 p = return p
    runNT _ d@DoneT{} = return d
    runNT n (RunT m) = m >>= runNT (n - 1)

    main = do
      fail "your turn"
      fail "Fill in this main method with your own experiments"

If you don't like all the lifting, then try using a typeclass like MonadIO. Not always the best idea, but it's an option worth knowing about. I used the IO monad because it is fairly easy to understand a procedural idea like "pause" in those terms. What would PauseT mean if it were on top of the State monad? The List monad? Food for thought.

Enhancements

PauseT adds a way for us to pause ourselves, waiting patiently for whoever is "in charge" to resume us. Who is this mysterious "in charge" person? What if we could communicate with her? Next time, we will learn about coroutines, and elaborate a simple interface for bidirectional communication. (Then we can implement PauseT in terms of this new abstraction, by setting the "to" and "from" fields to the trivial message, ().)
https://www.schoolofhaskell.com/school/to-infinity-and-beyond/pick-of-the-week/coroutines-for-streaming/part-1-pause-and-resume
[Share] Position a ui.control in a view

Lol, here we go again. This is simple code, but very useful when testing. It can be useful beyond that also, but it's very limited and basic. It's just a base idea, or the impetus for a better idea. I think about a parser that could do something like c = br-bc, meaning Center = bottom right - bottom Center. Or c = c + 10L, so centered + 10 points to the left. It could also support percentages etc. But the biggest trick of all is to keep it simple. I mean the API. I am sure some will say x,y is simple enough already; personally I think it's a pain. Anyway, still food for thought...

    # Phuket2 , Pythonista Forums (Python proficiency, not much)
    # works for python 2 or 3
    import ui

    def _make_button(title):
        btn = ui.Button()
        btn.width = 80
        btn.height = 32
        btn.border_width = .5
        btn.title = title
        return btn

    def do_position(obj, pos_code='', pad=0):
        '''
        do_position: a very simple positioning function for ui elements.
        As simple as it is, it can be useful, especially for testing.
        args:
            1. obj - a ui object instance such as a ui.Button
            2. pos_code - 1 to x letters or combinations in the set
               ('c', 't', 'l', 'b', 'r'). e.g. 'c' will position the obj
               in the middle of the parent view. 'tc' or 'ct' will
               position the obj at the top center of the screen.
            3. pad - gives a type of margin from the edges. For t and l,
               pad is added; for b and r, pad is subtracted. c is not
               affected by pad.
        returns:
            a tuple (boolean, ui.Rect)
            if False, exited early (e.g. not yet added to a view) and
            the ui.Rect is set as (0, 0, 0, 0).
            if True, the ui.Rect is set to the frame of the object,
            regardless of whether it moved or not.
            In both cases the ui.Rect is a valid ui.Rect.
        '''
        # if the control is not added to a view, i.e. no superview, we
        # cant do much other than return
        if not obj.superview:
            return (False, ui.Rect())

        # we only reference superview.bounds. hopefully this is correct!
        r = ui.Rect(*obj.superview.bounds)

        # in the func we only deal with lowercase pos_code. we normalise
        # the case so upper or lower or a mixture can be used as input
        pos_code = pos_code.lower()

        # c for center is a special case. does not know if you want
        # vertical or horizontal center. we delete c from input if it
        # exists and make sure its the first operation. then we dont get
        # side effects
        if 'c' in pos_code:
            pos_code = 'c' + pos_code.replace('c', '')

        ...
        elif code is 'r':
            obj.y = r.max_y - (obj.height + pad)

        return (True, ui.Rect(*obj.frame))

    hide_title_bar = False
    style = ''
    w = 800
    h = 600
    f = (0, 0, w, h)
    pos_list = ['c', 'tl', 'tc', 'tr', 'lc', 'cr', ...]

@phuket2 I like the idea of this! Here's a thought: for vertical alignment you have another term you can use: Top, Middle, Bottom.

@cook, if you pass a single char like 't' or 'b', the x value is not changed, for example. So the object will be positioned at this location without changing its x value. So I am pretty sure you can do as you suggest now. Each char in the pos_code is evaluated as a single operation. Center 'c' in the stream of chars is treated differently: if c exists in the stream, it's deleted and appended to the front of the stream. This is just because of the way I handle c, which affects both the x and y in a single operation. Without me patching this, the order would become important. 'bc' would just result in the object being centered instead of being centered at the bottom. Did I understand you correctly? If not, please say; I am eager to improve this.

elif code is 'r': appears twice in the code above.

    | tl | tc | tr |
    | ml | mc | mr |
    | bl | bc | br |

The code should first center the control in the view and then:

    pos_code = (pos_code or '').lower()
    if 't' in pos_code:
        move control to top of view
    elif 'b' in pos_code:
        move control to bottom of view
    if 'l' in pos_code:
        move control to left of view
    elif 'r' in pos_code:
        move control to right of view

This means that:
- pos_code of 'mc', 'c', 'm', 'junk', or even the empty string or None will center the control in X and Y.
- A pos_code of 'R' or 'r' will right-justify the control in the view.
- A pos_code of 'BR', 'RB', 'rb', 'r...B' or 'rabid' will place the control at the bottom right of the view.
- 't' takes precedence over 'm' or 'b' if there are multiple in pos_code.
- 'l' takes precedence over 'c' or 'r' if there are multiple in pos_code.

Of course, a pos_code of 'Black Label' will put the control at the bottom left of the view, where it belongs.

@ccc, lol, I am going to have at least 3 or 4 more whiskeys before trying to digest your comments 😱😬 But really — is the following in line with achieving your suggestions?

    import ui

    def _make_button(title):
        btn = ui.Button()
        btn.width = 80
        btn.height = 32
        btn.border_width = .5
        btn.title = title
        return btn

    def do_position(obj, pos_code='', pad=0):
        if not obj.superview:
            return (False, ui.Rect())

        # we only reference superview.bounds. hopefully this is correct!
        r = ui.Rect(*obj.superview.bounds)
        # obj.center = r.center()

        if 'c' in pos_code:
            pos_code = 'c' + pos_code.replace('c', '')
        pos_code = (pos_code or '').lower()

        # ... (the dispatch is truncated in this capture; one surviving
        # branch:)
        #     'm': obj.y = (r.height / 2) - (obj.height / 2)
        return (True, ui.Rect(*obj.frame))

    hide_title_bar = False
    style = ''
    w = 800
    h = 600
    f = (0, 0, w, h)
    pos_list = ['tl', 'tc', 'tr', 'ml', 'mc', 'rm', ...]

    def do_position(obj, pos_code='', pad=0):
        if not obj.superview:
            return (False, ui.Rect())

        # we only reference superview.bounds. hopefully this is correct!
        r = obj.superview.bounds
        obj.center = r.center()
        pos_code = (pos_code or '').lower()
        if 'l' in pos_code:
            obj.x = r.min_x + pad
        elif 'r' in pos_code:
            obj.x = r.max_x - (obj.width + pad)
        if 't' in pos_code:
            obj.y = r.min_y + pad
        elif 'b' in pos_code:
            obj.y = r.max_y - (obj.height + pad)
        return (True, obj.frame)

@ccc, ok, I see what you mean now. I replaced my function with your version and it worked as expected. Shorter code also, but the same in functionality (basically). But wouldn't my approach be better suited to expanding the functionality? I am not making a statement, I am asking.
This function is just sort of ok — mainly good for testing. Maybe this function can't be expanded to be really functional in the real world, and I know the problem: sizing is not considered. I have often thought about the grid system used in the old battleship game for both positioning and sizing objects. Ok, it's just a grid, but maybe it's a better route? Not sure.

Oh yeah man... The code I gave you was merely focused on gracefully dealing with garbage input for the nine boxes that your question was about. Laying out components in a UI is a massive topic and (like almost all UI-related issues) not a topic that I have any expertise in. You might check out the different layout managers that Java uses to get some ideas: the GridBag might be of special interest. I think that Pythonista's approach does a nice job of letting you lay things out well without going cross-eyed trying to understand the complexities of layout managers. Build the battleship game and I will kick your butt... you on the beach and me in the mountains. One last item...

    for i in range(len(pos_code)):
        code = pos_code[i]

    # becomes...

    for code in pos_code:

@ccc, I agree — Pythonista gives you a big head start, but it's wanting. Though I am sure that's not because of anything other than time. But imagine if the wrench menu were available in the designer and there was a Designer module. Say you want to evenly distribute 3 objects in a view or a subview: at the moment it's a pain (a manual job). But if we could run some code from the wrench menu, with a designer module exposing selected items, superview, etc., making views in the designer could be so easy. That's only one example, but you could do a lot. I do understand this would be a fairly large undertaking for omz — well, I think. Maybe he is in good shape for this. One thing that gives me a hint is that you can copy, and then paste, attrs to another object.
Also, copying between different UI files seems to work properly — to the root, or a subview, or a subview of a subview.

Many battleship games are still out there. I am not saying I am great; I just know I loved it as a kid. For our generation it was a lot of fun. It will probably come back into fashion at some point, the way the Marvel comics have. Who would have thought 😱😬

@ccc @phuket2 you had a lot going on here with this and I didn't see it. Sorry! Ian's original post got me thinking about layout a lot, and I started on something as well. I really like how you can do a lot of auto layout in HTML with CSS or JavaScript, and it's really useful. I think the same is possible for ui. So far I have:
- distribute horizontally (equal distribution) with padding is possible
- distribute vertically (equal distribution) with padding is possible
- adjust width/height/x/y by percent (rather than by points or pixels)

Working on a grid distribution. None of this is too hard or rocket science — just a bunch of numbers. I'm sure anyone could do it... It's really different than what @phuket2 has above though...!! I don't know why I didn't think to do this before... I seem to use a kind of distribution for doing buttons at times, but having it in a module would be much easier. Will share once I've got the kinks worked out... With what I have already I could do @phuket2's battleship grid without the headache! But I want to improve the functions to make it even more straightforward.

Another approach might be a more Pythonic implementation of iOS layout constraints. These are incredibly powerful, though a bit of a pain to use.

Yet another approach: years ago, I tried my hand at implementing layout controllers sort of similar to the Java equivalents. One difficulty was that there was no way at the time to specify a "preferred size" or minimum size... though now, with custom attributes, it is possible.
The idea was to override add_subview to properly handle layout of the elements; then different container types would lay out the subviews appropriately. I implemented a flow container and a box layout before I lost interest... see uicontainer.py and BoxLayout.py.

@Phuket2 As I mentioned in another forum post, I have added the example file position_img_shape_in_custom_view.py to the textlayout repository. You can do a git pull if you have already got this. It is not just limited to pyui — we could use it in custom views for placement of various shapes and images. Maybe I will post an example later. This example illustrates placing images and shapes in a custom view using the textlayout module. The textlayout module is generic enough to use in other types of applications. Here is the portion of code illustrating the layout of images and shapes. Only one image or shape (element) is shown at a time, and you can tap to change the current element.

    layout_text = '''
    ********
    i--*****
    |--*****
    **s--***
    ********
    ****s--*
    ********
    ********
    '''

    ui_element_map = {
        'i': ('image', Img),
        's': ('shape', Shape)
    }
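Since Pythonista's ui module only exists on-device, the nine-box logic discussed in this thread can be sanity-checked off-device with plain Python. This is a sketch under stated assumptions, not the forum code verbatim: the dict-based control, the (x, y, w, h) bounds tuple, and the function name are stand-ins for ui objects.

```python
# A desktop-friendly sketch of the nine-box positioning logic from the
# thread, using a plain dict for the control and an (x, y, w, h) tuple
# for the parent bounds instead of Pythonista's ui objects.

def do_position(obj, bounds, pos_code='', pad=0):
    x, y, w, h = bounds
    # Start centered, then let t/b and l/r pull the control to an edge.
    obj['x'] = x + (w - obj['w']) / 2
    obj['y'] = y + (h - obj['h']) / 2
    # Garbage input degrades gracefully: unknown letters are ignored.
    pos_code = (pos_code or '').lower()
    if 'l' in pos_code:
        obj['x'] = x + pad
    elif 'r' in pos_code:
        obj['x'] = x + w - (obj['w'] + pad)
    if 't' in pos_code:
        obj['y'] = y + pad
    elif 'b' in pos_code:
        obj['y'] = y + h - (obj['h'] + pad)
    return obj

btn = {'w': 80, 'h': 32}
# Bottom-right with a 4-point margin: x = 800 - 84, y = 600 - 36.
print(do_position(dict(btn), (0, 0, 800, 600), 'br', pad=4))
```

Because the edge checks override the initial centering, 'mc', 'junk' or None all center the control, and 'rabid' lands bottom-right — exactly the behaviour described in the bullet list above.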
https://forum.omz-software.com/topic/3402/share-position-a-ui-control-in-a-view
The round function in Python returns a floating-point number that is a rounded version of the specified number. This article will explore the concept in detail. The following pointers will be covered:
- Python round()
- Practical Application
- Rounding NumPy Arrays
- Rounding Pandas Series and DataFrame

So let us get started then,

Round Function In Python

The method round(x, n) will return the value x rounded to n digits after the decimal point.

Example:
    round(7.6 + 8.7, 1)
Output:
    16.3

round() gives you the closest value:

Example:
    round(6.543231, 2)
Output:
    6.54

Sometimes it doesn't give the output you might expect. Because 2.675 cannot be represented exactly in binary floating point, it is actually stored as a value just below 2.675, which then rounds down:

Example:
    round(2.675, 2)  # you might expect 2.68, but it returns 2.67
Output:
    2.67

Sometimes it gives the expected output:

Example:
    round(8.875, 2)
Output:
    8.88

Moving on with this article on Round Function In Python.

Python round()

The round function in Python rounds off a decimal value to a given number of digits. If we don't provide n, i.e. the number of digits after the decimal point, it will round off the number to the nearest integer. Roughly speaking, it rounds up if the next digit is >= 5 and down if it is < 5 (for exact .5 ties, Python 3 actually rounds to the nearest even integer).

round() without the second parameter:

    # int
    print(round(12))

    # float
    print(round(66.6))
    print(round(45.5))
    print(round(92.4))

Output:
    12
    67
    46
    92

Now, if the second parameter is provided, the last kept decimal digit is increased by 1 when the digit after it is >= 5; otherwise it stays as provided.

round() with the second parameter:

    # when the next digit is 5
    print(round(3.775, 2))
    # when the next digit is >= 5
    print(round(3.776, 2))
    # when the next digit is < 5
    print(round(3.773, 2))

Output:
    3.77
    3.78
    3.77

Moving on with this article on Round Function In Python.
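The surprises above come from binary floating point: 2.675 cannot be stored exactly, so round() is correctly rounding the slightly-smaller stored value. When exact decimal rounding matters (for example, with money), the standard library's decimal module — which the article doesn't cover — sidesteps the problem. A small illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

# Constructing a Decimal from the float exposes what is really stored:
# a value slightly below 2.675.
print(Decimal(2.675))

# Constructing it from a string keeps the value exact, so half-up
# rounding to two places gives the result most people expect.
print(Decimal('2.675').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # 2.68
```

The same string-vs-float distinction explains every "wrong" round() result in this section: the function is rounding the stored binary value, not the decimal literal you typed.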
Practical Applications

Some applications of the round function include limiting numbers to a fixed number of digits — for example, we usually keep 2 or 3 digits after the decimal point — and representing fractions as readable decimals:

    b = 2 / 6
    print(b)
    print(round(b, 2))

Output:
    0.3333333333333333
    0.33

In this era of data science and computing, we often store data as a NumPy array or as a pandas data frame, where rounding plays a very important role in presenting results accurately. Similar to the round function in Python, NumPy and pandas take two arguments — the data we want to round and how many digits to keep after the decimal point — and apply this to all the rows and columns. Let's look at some examples.

Moving on with this article on Python: Round Function.

Rounding NumPy Arrays

To install NumPy you can use:

    pip3 install numpy

If you are using an Anaconda environment, it will already be installed. To round all the values of a NumPy array, we pass the data as an argument to the np.around() function.

Now we will create a NumPy array of 3×4 size containing floating-point numbers, as below:

    import numpy as np
    np.random.seed(444)

    data = np.random.randn(3, 4)
    print(data)

Output:
    [[ 0.35743992  0.3775384   1.38233789  1.17554883]
     [-0.9392757  -1.14315015 -0.54243951 -0.54870808]
     [ 0.20851975  0.21268956  1.26802054 -0.80730293]]

For example, the following rounds all of the values in data to three decimal places:

    import numpy as np
    np.random.seed(444)

    data = np.random.randn(3, 4)
    print(np.around(data, decimals=3))

Output:
    [[ 0.357  0.378  1.382  1.176]
     [-0.939 -1.143 -0.542 -0.549]
     [ 0.209  0.213  1.268 -0.807]]

Note how the element 0.20851975 at 3×1 rounds up to 0.209 rather than truncating to 0.208 — the digits after the third decimal place (51975...) are more than half a unit — while 0.3775384 at 1×2 rounds to 0.378, as you would expect.
So, if you need to round data to some other form, NumPy has many methods:
- numpy.ceil()
- numpy.floor()
- numpy.trunc()
- numpy.rint()

The np.ceil() function rounds every value in the array to the nearest integer greater than or equal to the original value:

    print(np.ceil(data))

Output:
    [[ 1.  1.  2.  2.]
     [-0. -1. -0. -0.]
     [ 1.  1.  2. -0.]]

To round every value down to the nearest integer, use np.floor():

    print(np.floor(data))

Output:
    [[ 0.  0.  1.  1.]
     [-1. -2. -1. -1.]
     [ 0.  0.  1. -1.]]

You can also truncate each value to its integer component with np.trunc():

    print(np.trunc(data))

Output:
    [[ 0.  0.  1.  1.]
     [-0. -1. -0. -0.]
     [ 0.  0.  1. -0.]]

Finally, to round to the nearest integer using the "rounding half to even" strategy, use np.rint():

    print(np.rint(data))

Output:
    [[ 0.  0.  1.  1.]
     [-1. -1. -1. -1.]
     [ 0.  0.  1. -1.]]

Moving on with this article on Python: Round Function.

Rounding Pandas Series and DataFrame

Pandas is another popular library, used by data scientists to analyze data. Similar to NumPy, we can install it with the following command:

    pip3 install pandas

The two main data structures of pandas are DataFrame and Series: a DataFrame is basically like a table in a database, and a Series is a column. We can round these objects using Series.round() and DataFrame.round().
    import pandas as pd
    import numpy as np
    np.random.seed(444)

    series = pd.Series(np.random.randn(4))
    print(series)

Output:
    0    0.357440
    1    0.377538
    2    1.382338
    3    1.175549
    dtype: float64

    print(series.round(2))

    0    0.36
    1    0.38
    2    1.38
    3    1.18
    dtype: float64

Moving on with this article on Python: Round Function.

DataFrame:

    import pandas as pd
    import numpy as np
    np.random.seed(444)

    df = pd.DataFrame(np.random.randn(3, 3), columns=["Column 1", "Column 2", "Column 3"])
    print(df)
    print(df.round(3))

Output:
       Column 1  Column 2  Column 3
    0  0.357440  0.377538  1.382338
    1  1.175549 -0.939276 -1.143150
    2 -0.542440 -0.548708  0.208520

       Column 1  Column 2  Column 3
    0     0.357     0.378     1.382
    1     1.176    -0.939    -1.143
    2    -0.542    -0.549     0.209

For a DataFrame we can specify a different precision for each column: the round function can accept a dictionary or a Series, so we can provide different precision for different columns.

    print(df.round({"Column 1": 1, "Column 2": 2, "Column 3": 3}))

Output:
       Column 1  Column 2  Column 3
    0       0.4      0.38     1.382
    1       1.2     -0.94    -1.143
    2      -0.5     -0.55     0.209

Summary

In this article, we covered what the round function is and how it is implemented in core Python. We also covered some downsides of the round function, how we can work around them, and how rounding is used in libraries that are widely used in data science. Thus we have come to the end of this article on 'Round Function In Python'.
https://www.edureka.co/blog/round-function-in-python/
PathFindNextComponent function Parses a path and returns the portion of that path that follows the first backslash. Syntax Parameters - pszPath [in] Type: PTSTR A pointer to a null-terminated string that contains the path to parse. This string must not be longer than MAX_PATH characters, plus the terminating null character. Path components are delimited by backslashes. For instance, the path "c:\path1\path2\file.txt" has four components: c:, path1, path2, and file.txt. Return value Type: PTSTR Returns a pointer to a null-terminated string that contains the truncated path. If pszPath points to the last component in the path, this function returns a pointer to the terminating null character. If pszPath points to the terminating null character or if the call fails, this function returns NULL. Remarks PathFindNextComponent walks a path string until it encounters a backslash ("\\"), ignores everything up to that point including the backslash, and returns the rest of the path. Therefore, if a path begins with a backslash (such as \path1\path2), the function simply removes the initial backslash and returns the rest (path1\path2). Examples The following simple console application passes various strings to PathFindNextComponent to demonstrate what the function recognizes as a path component and to show what is returned. To run this code in Visual Studio, you must link to Shlwapi.lib and define UNICODE in the preprocessor commands in the project settings. #include <windows.h> #include <iostream> #include <shlwapi.h> #pragma comment(lib, "shlwapi.lib") // Link to this file. int main() { using namespace std; PCWSTR path = L"c:\\path1\\path2\\file.txt"; // This loop passes a full path to PathFindNextComponent and feeds the // results of that call back into the function until the entire path has // been walked. while (path) { PCWSTR oldPath = path; path = PathFindNextComponent(path); // The path variable pointed to the terminating null character. 
if (path == NULL) { wcout << L"The terminating null character returns NULL\n\n"; } // The path variable pointed to a path with only one component. else if (*path == 0) { wcout << L"The path " << oldPath << L" returns a pointer to the terminating null character\n"; } else { wcout << L"The path " << oldPath << L" returns " << path << L"\n"; } } // These calls demonstrate the results of different path forms. // Note that where those paths begin with backslashes, those // backslashes are merely stripped away and the entire path is returned. PCWSTR path1 = L"\\path1"; wcout << L"The path " << path1 << L" returns " << PathFindNextComponent(path1); PCWSTR path2 = L"\\path1\\path2"; wcout << L"\nThe path " << path2 << L" returns " << PathFindNextComponent(path2); PCWSTR path3 = L"path1\\path2"; wcout << L"\nThe path " << path3 << L" returns " << PathFindNextComponent(path3); wcout << L"\nThe path " << L"c:\\file.txt" << L" returns " << PathFindNextComponent(L"c:\\file.txt"); return 0; } OUTPUT: =========== The path c:\path1\path2\file.txt returns path1\path2\file.txt The path path1\path2\file.txt returns path2\file.txt The path path2\file.txt returns file.txt The path file.txt returns a pointer to the terminating null character The terminating null character returns NULL The path \path1 returns path1 The path \path1\path2 returns path1\path2 The path path1\path2 returns path2 The path c:\file.txt returns file.txt Requirements Send comments about this topic to Microsoft Build date: 11/28/2012
http://msdn.microsoft.com/en-us/library/bb773591(v=vs.85).aspx
Hey! Welcome back! Are you ready to make some more code for our simple Nodebot? I sure hope so! Last time, we covered using modern JavaScript techniques and making sure our code is consistent. Now that we've got our code modernised, cleaned up, and using ESLint rules, we can progress on making a Node.JS Server and a simple Web Interface for our Nodebot! If you've missed some of the earlier parts of this series and need to catch up, you can read about JavaScript and Nodebots and Making A Simple Nodebot, then learn about Using Modern JavaScript. Anyway, let's get going!

What we'll be making

We're about to step up our Nodebot game here, since we'll be making a very simple interface using HTML, CSS and JavaScript. To serve the HTML, we'll also be using Node.JS and Express to make our Server, Webpack to bundle our client-side JavaScript, and Socket.IO to communicate events from the client-side to the server, and vice versa. We're just going to make a simple web page with a button, and this will be used to toggle the LED on and off. That's all we need for this; it's a simple build to get you into making a Node server and controlling our simple Nodebot with a web interface. With that being said, we've got a lot of work to do, so let's get started!

Updating Our Simple Nodebot and Editorconfig

To start this off, we'll be updating our Simple Nodebot and its code, since we no longer require a physical button; we'll be using one through HTML, JS and WebSockets instead. So, the first thing we need to do is remove the button and its wires from our Nodebot. Next, we'll remove the button's code in our index.js file. Again, we no longer need it, so we'll just delete all of our button code, including the toggle functionality.

// @flow
/* eslint-disable no-console */
import { Board, Led } from 'johnny-five';

// [...] Our setup code lives here...
// When the board is connected, turn on the LED connected to pin 9
board.on('ready', function() {
  console.log('[johnny-five] Board is ready.');

  // Make a new Led object and connect it to pin 9
  const led = new Led(9);

  // Switch it on!
  led.on();

  // [...] The rest of our code lives here...no more button code!
});

We've now removed any instance of code for the physical button! Next, before we dive into making more code, we should add some more settings to our .editorconfig file. This is opinionated, but I suggest using tabs instead of spaces when working in HTML. The size of the indent is up to you.

# root = true

[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[*.json]
insert_final_newline = ignore

[*.html]
indent_style = tab
indent_size = 4

One last thing we need to add is in our .flowconfig: some bits of our code need to be ignored by Flow, otherwise it will be a bit upset, so we'll update it with this line of code.

[options]
suppress_comment= \\(.\\|\n\\)*\\flow-disable-next-line

Now if we add // flow-disable-next-line before a line of code, Flow will ignore the following line and won't give an error! Now we've set up our previous code for what's to come next! Let's get going and make our server and interface!

Setting up the Node Server and HTML

Let's get underway with making a Node server! Before we begin, we need to install an extra dependency called Express, a JavaScript Web Application framework. With Express, we can build a web application that can serve our interface, and can handle requests as well as serve files easily. To do this, we add it to our dependencies with Yarn.

yarn add express

We'll need to update our code in index.js, as this will be our server file as well as our code for the Arduino, neatly bundled in one file. Let's make our Server!
// @flow
/* eslint-disable no-console */
import { Board, Led } from 'johnny-five';
import express from 'express';
import { Server } from 'http';

// Make a new Express app
const app = express();
// flow-disable-next-line
const http = Server(app);

// Set the server to port 8000 and send the HTML from there.
http.listen(8000, () => {
  console.log('Server running on Port 8000');
});

// [...] Our Johnny-Five code lives here...

With our code above, we've imported Express and a Server module from the http package in Node.JS, which comes as a default. We then make a new Express application, and then make a new HTTP Server with the Express app. For this instance, we'll hard-code the Server to run on Port 8000 and listen for any requests. Now let's make our HTML to serve on the root of the Server! The HTML for this is quite simple: it's simply a button wrapped in a div element that acts as a container, which will be centered on the page using CSS.

<!doctype html>
<html>
<head>
	<meta charset="utf-8">
	<meta http-
	<title>Simple Nodebot Interface</title>
	<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
	<div class="page-wrap">
		<!-- LED Toggle button for controlling the LED on the Server-Side -->
		<button class="led-toggle" id="led-toggle">Turn Off LED</button>
	</div>
	<script src="/static/bundle.js"></script>
</body>
</html>

Quick and simple HTML! We've assigned an ID to our button so we can use JavaScript to get the button by its ID. We've also added a CSS class to the button for styling, and wrapped it in a div.page-wrap element so it's perfectly centered on the page! Now we just need to apply some CSS to our content: we apply a simple reset to our elements (note: we're making a simple project, we can be a bit dirty with our CSS!), and use CSS Flexbox to completely center the button on the page.
<style>
/**
 * Quick and dirty reset
 */
* {
	margin: 0;
	padding: 0;
	border: 0;
}

/**
 * Set box-sizing
 */
html {
	box-sizing: border-box;
}

*,
*:before,
*:after {
	box-sizing: inherit;
}

/**
 * Set up page background colour and typography
 */
body {
	background: #22cece;
	font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
	font-size: 1rem; /* 16px */
	line-height: 1.5;
}

/**
 *
 * Page Wrapper Element
 *
 * Set to flexbox and make sure all items are aligned to the center.
 *
 */
.page-wrap {
	width: 100%;
	height: 100vh;
	display: flex;
	flex-direction: column;
	align-items: center;
	justify-content: center;
}

/**
 * Set styles for the button.
 */
.led-toggle {
	background: #33aaee;
	color: #0066aa;
	border: 2px solid currentColor;
	border-radius: 8px;
	padding: 20px;
	font-weight: 600;
	font-size: 1.5em; /* 24px */
	width: 280px;
}
</style>

Now we've made our HTML and our CSS, we can set this file to show on root through localhost:8000. Update your index.js file with a line to display the HTML file when the user goes to the root. Another thing we should add is a line using our dist folder (which we'll get to) and displaying it as /static on the server.

// [...] Our Previous Server Code lives here...

// We have a dist directory, but use '/static' to fetch it.
app.use('/static', express.static('dist'));

// When the client is on the root, show the HTML file.
app.get('/', (req, res) => {
  res.sendFile(`${__dirname}/index.html`);
});

// Set the server to port 8000 and send the HTML from there.
http.listen(8000, () => {
  console.log('Server running on Port 8000');
});

// [...] Our Johnny-Five code lives here...

One other thing we can do is watch our files for any changes, and allow the code to reload upon change so we can continuously edit and see the changes instantly. For this, we'll be using a dependency called Nodemon. To install Nodemon, we can add it to Yarn as a dependency.
yarn add nodemon

Now we need to update our start script in package.json. We'll be using Nodemon to execute babel-node and watch for any changes in the server code.

"start": "nodemon -e js --ignore index.compiled.js --ignore dist --exec babel-node index.js",
"test": "eslint ./ && flow"

Now, if we run our yarn start script, it will start our Johnny-Five code as well as our server, and run on port 8000! So, if we jump on our browser (I'm using Google Chrome), we'll see our simple interface of a big fat button in the browser! Now we've made our interface and our server, let's get on to making our client-side scripts!

Installing Webpack and Writing the Client Side Scripts

Since we're writing in ES6 with Babel, we should use a bundler to compile all our ES6 code and make it work for browsers, especially since not a lot of ES6 is available in browsers yet! For this, we'll be using an awesome bundler called Webpack. To install Webpack, we need to install it as a development dependency in Yarn.

yarn add --dev webpack

We now need to make a file called webpack.config.babel.js, and we'll be writing our Webpack configuration in ES6, since we can. This is what will take our code written for the browser using ES6 and compile it to browser-friendly code.

import path from 'path';

export default {
  entry: [
    './browser.js',
  ],
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
  module: {
    loaders: [
      {
        test: /\.js?$/,
        loader: 'babel-loader',
        exclude: /node_modules/,
      },
    ],
  },
  resolve: {
    extensions: ['.js'],
  },
  devtool: 'source-map',
};

Let's do a rundown of what the Webpack config code above is doing, so we can make sense of what's happening here:

- Our entry key is used to tell Webpack what file, or files, we want it to take and bundle into our browser script.
- output is used to define specifics for the compiled file. In this case, we want it to be compiled into a file called bundle.js and output into a new folder called dist.
- Webpack uses the module feature to tell it what tools and dependencies to use on what files. In this case, we tell it to use babel-loader for JavaScript files.
- resolve is used to find modules using aliases. For instance, modules imported via Node.JS are absolute to Webpack, so this helps resolve them. Another example is files imported without a .js extension; Webpack will accept the files without the extension.
- devtool is an option in Webpack to help debug files that are compiled. In this case, we want a source map along with our bundled JavaScript.

Now we've got an understanding of what Webpack does! The next thing we need to do is add a dev script. With this, we'll tell Webpack to bundle our client-side scripts and, when any changes take place, to re-bundle the scripts.

"start": "nodemon -e js --ignore index.compiled.js --ignore dist --exec babel-node index.js",
"dev": "webpack -w",
"test": "eslint ./ && flow"

Now let's start making our client-side scripts! Make a new file called browser.js and we'll start making our JavaScript for the client-side!

// @flow
/* eslint-disable no-console */
import 'babel-polyfill';

// Get the button for the LED toggle in the DOM
// eslint-disable-next-line no-undef
const ledToggle = document.getElementById('led-toggle');

// Set browser-side lightOn variable as true by default
let lightOn = true;

We start off by importing the babel-polyfill package, which will provide supporting scripts for ES6 features in an ES5 environment. Next, we define an ledToggle constant that will find the button in the HTML document by its ID. Finally, since the LED is on by default on the server's end, we'll set it as the default on the client end. Now, while our server is running in one Terminal pane, let's open another pane and run yarn dev. Now our client-side JavaScript is bundled, with a source map, and placed in a new dist directory! Now we need to make our client-side code communicate with the server-side, so we can control our Nodebot!
Let's take a deep dive into Websockets!

Using Socket.IO to Communicate between the Browser and Server

Let's talk briefly about Websockets. Websockets allow us to communicate events between two points, specifically a server and a client. This is really useful since the communication is instant, with low latency, unlike HTTP. A great example is chat applications: some use Websockets to communicate between users instantly over specific rooms on the service! We're going to be using Socket.IO to allow our client-side code to control our Nodebot. We need to install two dependencies for websocket communication: the main socket.io library, and socket.io-client for the client-side.

yarn add socket.io socket.io-client

In our index.js file, we'll import socket.io to our server, and wrap our server instance into a new socket.io instance.

// @flow
/* eslint-disable no-console */
import { Board, Led } from 'johnny-five';
import express from 'express';
import { Server } from 'http';
import socketIO from 'socket.io';

// Make a new Express app
const app = express();
// flow-disable-next-line
const http = Server(app);

// Make a new Socket.IO instance and listen to the http port
const io = socketIO.listen(http);

Now when our Server is running, it will listen for socket communication from the client side on port 8000! The next thing we need to do is write events within our Johnny-Five code so that when communication happens on the client-side, our Nodebot will act accordingly!

// [...] Our previous code lives here...

// Set the server to port 8000 and send the HTML from there.
http.listen(8000, () => {
  console.log('Server running on Port 8000');
});

// Set `lightOn` to true as a default since our LED will be on
let lightOn = true;

// Make a new Board Instance
const board = new Board();

// When the board is connected, turn on the LED connected to pin 9
board.on('ready', function() {
  // [...] Our initial Johnny-Five code lives here...
  /**
   *
   * Server-Side Socket Events
   *
   * When the client-side has connected, output the status to console.
   * Then, send the current LED status to the client-side.
   *
   * When the LED is toggled from the client-side, update the lightOn variable.
   * Depending on the boolean sent, turn on or off the LED.
   *
   */
  io.on('connect', (socket) => {
    console.log('[socket.io] Client has connected to server!');
    socket.emit('light-state', lightOn);

    socket.on('light-toggle', (lightToggle) => {
      lightOn = lightToggle;

      // Log the current state of the LED.
      console.log(`[socket.io] Light is now ${lightOn ? 'on' : 'off'}.`);

      if (lightOn) {
        led.on();
      } else {
        led.stop().off();
      }
    });
  });

  // [...] Our REPL and Exit code live here...
});

With the code above, we'll know when the client has connected to the server with a log to our CLI console, and we'll send (emit) the current state of the LED to the client (in case our LED is off when we view our button in the browser!). When the server receives an event called 'light-toggle', which we'll use for controlling the LED, it will set the lightOn variable to the value that has been sent, and log whether our LED is on or off. With the updated variable, this will turn our LED on or off when the button is pressed! Now for the client-side sockets: we'll need to make events for telling the server the client-side has connected and receiving the current state of the LED, as well as events for when the button is pressed.

// @flow
/* eslint-disable no-console */
import 'babel-polyfill';
import socketIOClient from 'socket.io-client';

// Set socketIO to the window location given by the server
// eslint-disable-next-line no-undef
const io = socketIOClient(window.location.name);

// [...] Our previous client-side code lives here...

/**
 *
 * Client-Side Socket Events
 *
 * On connection, tell the server that the client side has connected.
 * Upon connection, a 'light-state' event may be sent from the server.
 * Set the lightOn variable to the current state on the server side.
 *
 */
io.on('connect', () => {
  console.log('[socket.io] Client has Connected!');

  // Get robot's current light state from the server side.
  io.on('light-state', (lightState) => {
    console.log(`[socket.io] Light is currently ${lightState ? 'on' : 'off'}.`);
    lightOn = lightState;

    if (ledToggle && !lightOn) {
      ledToggle.innerHTML = 'Turn On LED';
    }
  });
});

/**
 *
 * If the LED toggle button is found in the DOM, check when it's clicked.
 * When clicked, update the lightOn boolean, then send the value down to the
 * server and update the robot's LED state.
 *
 */
if (ledToggle) {
  ledToggle.addEventListener('click', () => {
    if (lightOn) {
      lightOn = false;
      ledToggle.innerHTML = 'Turn On LED';
    } else {
      lightOn = true;
      ledToggle.innerHTML = 'Turn Off LED';
    }

    // Send new value down to the server.
    io.emit('light-toggle', lightOn);
  });
}

When setting up socket.io-client, we need to set it to our local server. A simple way to do that is to get the address from window.location.name, which will be the root URL for the server. We then tell the server, via websockets, that the client has connected, receive the current LED state, and update certain things if necessary. If the LED toggle button exists in the HTML, we can then listen for when it's clicked. Every time it is clicked, it will update the client-side lightOn variable, as well as the button's text, and send the new value to the server, which will then turn the Nodebot's LED on or off!

Testing and Making a Production Version of The Server

Now that we've made all this come together, let's give our simple Nodebot interface a go. With our server running and Webpack compiling our code, let's head over to the server on port 8000 in our browser and toggle the LED on and off! Another little thing you can do is to use a browser on a tablet or smartphone to control the LED using the simple interface we've just made, provided your device is on the same Wi-Fi network your computer, running the Node server, is connected to.
Important Note: to access via your smartphone or tablet, make sure that you have allowed file sharing on your computer. You will then be able to view the web interface using your machine's hostname or IP address. If you're stuck on where to find your machine's IP address, use ifconfig on macOS or hostname -i on Linux systems.

The next thing we need to do is make sure our code is clean and has no errors with ESLint and Flow: simply run yarn test and it will check our code. We should get no errors! Yay!

Lastly, we need to make a production-ready, compiled version of our code. Firstly, we need to install a dependency called rimraf, which will delete our previously compiled code so we can make a clean build.

yarn add --dev rimraf

Next, all we need to do is make a build script in package.json which will compile the code and make it production ready.

"start": "nodemon -e js --ignore index.compiled.js --ignore dist --exec babel-node index.js",
"dev": "webpack -w",
"build": "rimraf lib dist && babel index.js --out-file index.compiled.js --ignore browser.js && webpack browser.js",
"test": "eslint ./ && flow"

Now, it's worth pointing out that this is just for practice; when using our server code in production scenarios, we can't just leave babel-node running all the time, as it's prone to memory leaks! Instead, we'll have Babel compile the code and let that run continuously if we want to. We've also got Webpack to compile our client-side scripts as well.

The last thing we need to do is make a script to run the compiled code for our server. Simply make a production script in package.json and tell node to run our compiled code.

"start": "nodemon -e js --ignore index.compiled.js --ignore dist --exec babel-node index.js",
"dev": "webpack -w",
"build": "rimraf lib dist && babel index.js --out-file index.compiled.js --ignore browser.js && webpack browser.js",
"test": "eslint ./ && flow",
"production": "node index.compiled.js"

Now if we run yarn production, our Server and Nodebot will be up and running, ready to serve our interface and toggle with!
Stuck with any of the above?

If you're stuck with any of the code above, don't worry! I have attached a Git repository containing the final code, so if you're unsure about anything, you can happily use my code as a reference.

Give Yourself a pat on the back!

Well done! You've just made a simple Nodebot with web controls over a Server! We've come a long way over the last few lessons, and have learned a lot. We've reached the end of the basics of using JavaScript with hardware: from understanding what JavaScript and Nodebots are, to making a Simple Nodebot, using current JavaScript techniques, and making a simple web app. Well done for making it so far!

We're not done yet...

You didn't honestly think that was the end of the series? Gosh no! We're going to take our code to the next level after this! We'll still be making simple Nodebots, but we'll also be using a JS Framework for our Interface and use it to make a small Web App, as well as using client-side application states to control our next Nodebot! It's going to be awesome!

Can't wait for the next part?

If you're really excited for the next part, you can get it early by pledging to my Patreon! With Patreon, you can get perks such as behind-the-scenes looks and early releases of creations and articles, as well as patron-only posts! Alternatively, if you want to support my work, I also have a PayPal! Be sure to follow me on Twitter, Like my page on Facebook, or subscribe to my mailing list to get updates!
https://www.hackster.io/IainIsCreative/javascript-with-hardware-part-three-simple-web-interface-0cfbf0
Notes

You can subscribe to an email version of this summary by sending an empty message to perl6-digest-subscribe@netthink.co.uk. Please send corrections and additions to bwarnock@capita.com. There were 44 messages across 10 threads, with 26 authors contributing.

The Modules Plan (21 posts)

The discussion continued with talk about CPAN, namespaces, and implementations.

Perl 6 Internals (1 post)

Simon Cozens gave an update on what he's up to with Perl 6.

    The other front is, of course, code. I have started writing some code which sets up vtables, marshals access to an object's method through its vtable, and helps you write new object types. I'm also trying to develop an integer object type. All it does at the moment is the infrastructure that allows you to create a new integer PMC and get and set its value.

(1 post)

Uri Guttman suggested that some perl ops should be written in Perl.

(11 posts)

Numerous folks continued the discussion on the Coding Conventions PDD.

Perl 6 Language (2 posts)

Michael Schwern asked that implicit @_ passing be removed. Damian replied that it would be, although as a side-effect to some new behavior.

(1 post)

Garrett Goebel asked whether subroutine signatures will apply to methods in Perl 6.

(1 post)

John Siracusa asked if properties were tempable.

(1 post)

I brought up an inconsistency in the visibility of Perl 5's my, our, and local, implicitly asking if it could be changed for Perl 6.

(3 posts)

Raptor requested a way to preserve leading white space in Here Docs. Michael Schwern pointed out that this is possible with the new functionality.

Last Words

Quiet week aside, Perl 6 is alive and well. Dan Sugalski and Simon Cozens are finishing some last minute work that could be the seed for the Perl 6 internals. Larry and Damian are working on the next Apocalypse and Exegesis, respectively. They should be released in about a week.

Bryan C. Warnock
http://www.perl.com/pub/2001/08/p6pdigest/20010818.html
NAME
       usleep - suspend execution for microsecond intervals

SYNOPSIS
       #include <unistd.h>

       int usleep(useconds_t usec);

ERRORS
       EINTR  Interrupted by a signal.

       EINVAL usec is not smaller than 1000000.  (On systems where that is
              considered an error.)

CONFORMING TO
       4.3BSD, POSIX.1-2001.  POSIX.1-2001 declares this function obsolete;
       use nanosleep(2) instead.

NOTES
       The interaction of this function with other timer functions such as
       timer_create(3), timer_delete(3), timer_getoverrun(3),
       timer_gettime(3), timer_settime(3), ualarm(3) is unspecified.

SEE ALSO
       alarm(2), getitimer(2), nanosleep(2), select(2), setitimer(2),
       ualarm(3), sleep(3), time(7)

COLOPHON
       This page is part of release 2.77 of the Linux man-pages project.  A
       description of the project, and information about reporting bugs,
       can be found at.

2007-07-26                                                          USLEEP(3)
http://manpages.ubuntu.com/manpages/hardy/man3/usleep.3.html
08-21-2012 12:39 PM - edited 08-21-2012 05:08 PM

I have created a .aspx page and I have added C# code from the first example in the SDK to it - "Getting a table name" - to try and make a connection to the database and log in. This is the code that I have right now:

<%@ Page Language="C#" %>
<%@ Import Namespace="Act.Framework" %>

<script runat="server">
ActFramework ACTFM = new ActFramework();
ACTFM.LogOn("D:\\ACTDB\\SchillerGroundsCareDev.pad","username","password");
</script>

I have been going through forum posts and found a few that said to copy certain .dll files over from the /Global Cache/ folder to the /Program Files/ACT/ directory, and I did this. I found that it was necessary to use this line for referencing the ACT.Framework assembly or it would throw an error.

<%@ Import Namespace="Act.Framework" %>

I'm now getting an error on this line of code:

ACTFM.LogOn("D:\\ACTDB\\SchillerGroundsCareDev.pad","username","password);

The error that I'm getting is as follows:

CS1519: Invalid token '(' in class, struct, or interface member declaration

Is there something wrong with the way that I am passing the path to the .pad file? What else could be wrong that would cause this line to throw an error? Could I be missing a reference to an assembly? I am able to connect to the same data source using Visual Studio using this information (from my .udl file):

Provider=ACTOLEDB2.1;Data Source=D:\ACTDB\SchillerGroundsCareDev.pad;User ID=ACTService;Password=!act2012sgc!#;Persist Security Info=True

Any help that might get me pointed in the right direction is appreciated. Thanks!

08-22-2012 06:59 AM

I could be wrong, but just at a glance it appears there's a quotation mark missing after password.

08-22-2012 11:23 AM

That was my not copying the entire line when I pasted it in my post.
Here is the same code, even more simplified, to try to catch any syntax errors:

<%@ Page Language="C#" %>
<%@ Import Namespace="Act.Framework" %>
<%@ Import Namespace="Act.Shared.Collections" %>

<script runat="server">
ActFramework ACTFM = new ActFramework();
string pathDB = "D:\\ACTDB\\SchillerGroundsCareDev.pad";
string userID = "username";
string pwd = "password";
ACTFM.LogOn(pathDB, userID, pwd);
</script>

I am getting an error for this line:

ACTFM.LogOn(pathDB, userID, pwd);

What could I be doing wrong? Could there be a problem with how I am passing the path to the database?

08-22-2012 12:30 PM
https://community.act.com/t5/Act-Developer-s-Forum/Beginner-help-error-connecting-with-SDK/td-p/215992
CC-MAIN-2017-47
refinedweb
538
66.84
Hello guys, I am implementing a Radix Sort using an array of queues. So far I have created the array and printed it, and it is working fine. I have also extracted the LSD from each integer. Now my target is to store the integers in the queues according to the LSD. (This would be the first pass.) This is supposed to be processed in the for loop between lines 29 and 35. The problem is that when I dequeue, the integers are being displayed in the same way as at the beginning! I know my problem is in my enqueue but I don't know how to do it. Here is the code:

import java.util.*;

public class MainProgram {
    public static void main(String[] args) {
        Scanner userinput = new Scanner(System.in);
        System.out.print("Please enter the size of the array. ");
        int count = userinput.nextInt(); //user enters the size of the array.
        int[] numbers = new int[count]; //array containing the integers.
        int r = 0;

        QueueLinkedList[] queue = new QueueLinkedList[10]; //created an instance of the queue. 10 bins needed for the Radix Sort.
        for (int i = 0; i < queue.length; i++) {
            queue[i] = new QueueLinkedList();
        }

        for (int index = 0; index < numbers.length; index++) //generating random integers.
        {
            numbers[index] = (int)(Math.random()*1001);
            System.out.print(index + " ");
            System.out.println(numbers[index]);
        }

        for (int i = 0; i < numbers.length; i++) //identifying the LSD and storing the integers in queues according to the LSD.
        { //the mistake is in this loop!
            numbers[r] = (numbers[i] % 10)/1;
            // System.out.println(numbers[r]);
            queue[r].enqueue(numbers[i]);
            //queue[r].dequeue();
        }

        for (int i = 0; i < queue.length; i++) { //dequeue and print the integers. Here the integers should be sorted by the LSD. 1st PASS.
            queue[r].dequeue(); //however output is not good!
        }
    }
}

Thanks in advance for your help.
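One way to get the distribution pass working is sketched below. It uses java.util.ArrayDeque in place of the poster's QueueLinkedList class (whose source isn't shown), but the enqueue/dequeue logic is the same. The key point: the digit numbers[i] % 10 should pick which queue to enqueue into, while the queue stores the whole number; overwriting numbers[r] with the digit destroys the data.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class RadixFirstPass {

    // One distribution/collection pass of LSD radix sort: enqueue each
    // number into the bin matching its last digit, then dequeue bins 0..9
    // in order to get the numbers sorted by that digit.
    public static int[] lsdPass(int[] numbers) {
        List<ArrayDeque<Integer>> bins = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            bins.add(new ArrayDeque<>());
        }
        for (int n : numbers) {
            bins.get(n % 10).addLast(n); // enqueue the number, not the digit
        }
        int[] out = new int[numbers.length];
        int k = 0;
        for (ArrayDeque<Integer> bin : bins) {
            while (!bin.isEmpty()) {
                out[k++] = bin.pollFirst(); // dequeue in FIFO order
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] result = lsdPass(new int[]{23, 11, 305, 40});
        System.out.println(java.util.Arrays.toString(result)); // prints [40, 11, 23, 305]
    }
}
```

A full radix sort would repeat this pass for the tens, hundreds, and so on, dividing by the current place value before taking % 10.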
http://www.javaprogrammingforums.com/whats-wrong-my-code/20522-radix-sort.html
In this blog, we will learn how to implement the transition effect in OpenCV. Let's Code Transition Effect in OpenCV!

Steps for Transition Effect in OpenCV

- Load two images, which will take part in the transition effect.
- Image manipulation (if you want a certain effect).
- Transition between the two images using the addWeighted() OpenCV function.

Transition Effect Algorithm

Import Packages

For implementing the Transition Effect in OpenCV Python, we will have to import the Computer Vision package.

import cv2 as cv

The second package is NumPy, for working with image matrices and the creation of nD arrays. For the Transition Effect, NumPy is very important, as it is used in the creation of the rotation matrix and also in arithmetic and logical operations on image matrices.

import numpy as np

We also need to import the time package, as it will help in pacing the transition effect.

import time

Load Images

Read images in OpenCV using the imread() function. Read the two images that will take part in the transition effect.

img1 = cv.imread('./img1.jpg')
img2 = cv.imread('./img2.jpg')

Blending Logic

There is a function in OpenCV that helps blend two images on the basis of percentage. The addWeighted() function in OpenCV takes five parameters:

- First Image
- Alpha value (opacity for the first image)
- Second Image
- Beta value (opacity for the second image)
- Gamma value (weight for every pixel after the blend, 0 for normal output)

The alpha value represents the opacity value of the first image. The beta value represents the opacity value of the second image. The gamma value represents the weight added to every pixel after the blend.

Maths behind the function

We have to choose the percentages such that they follow the rule: alpha + beta = 1. If we choose the alpha value as 0.7, i.e. 70%, the beta value then should be 0.3, i.e. 30%.

0.7 + 0.3 = 1.0

Creating Transition Effect

np.linspace() is a NumPy function for generating linearly spaced numbers between two numbers.
np.linspace(0,10,5) - this call will generate 5 numbers between 0 and 10, all evenly spaced.

We will use the linspace function in the loop to generate different values of alpha and beta for the opacity of the images.

for i in np.linspace(0,1,100):
    alpha = i
    beta = 1 - alpha
    output = cv.addWeighted(img1,alpha,img2,beta,0)

Alpha is assigned the value of 'i', which changes alpha's value in every iteration. The beta value will also change with each iteration, as beta depends on the value of alpha: beta = 1 - alpha. The alpha and beta values will always sum to 1. The addWeighted() function then takes the two images and the alpha, beta, and gamma values to generate a new blended image. This process continues till the loop ends, or we forcefully end the process with an 'ESC' keypress.

Simple Transition Effect Source code

import cv2 as cv
import numpy as np
import time

while True:
    img1 = cv.imread('./img1.jpg')
    img2 = cv.imread('./img2.jpg')

    for i in np.linspace(0,1,100):
        alpha = i
        beta = 1 - alpha
        output = cv.addWeighted(img1,alpha,img2,beta,0)
        cv.imshow('Transition Effect ',output)
        time.sleep(0.02)

        if cv.waitKey(1) == 27:
            break

cv.destroyAllWindows()

Create Trackbar for Transition Effect in OpenCV

Earlier we were dependent on the loop to see the transition effect. We can also create a trackbar that will control the alpha value, and the transition will be applied on that basis. You can change the range of the values of the trackbar in order to get a smoother or faster transition. Change the sleep time as well when you change the range of the trackbar.

Transition Effect OpenCV Documentation

Learn more about the transition and blending OpenCV functions from the official OpenCV documentation.
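The trackbar approach described above can be sketched as follows. This is a hedged sketch, not code from the original post: the window name, the 0 to 100 trackbar range, and the blend_weights() helper are assumptions of mine, and the GUI part only runs when OpenCV and both images are actually available.

```python
import os

try:
    import cv2 as cv  # OpenCV may not be installed everywhere
except ImportError:
    cv = None


def blend_weights(pos, max_pos=100):
    """Map a trackbar position (0..max_pos) to an (alpha, beta) pair."""
    alpha = pos / max_pos
    return alpha, 1 - alpha


def on_change(pos):
    # Called by OpenCV every time the trackbar moves.
    alpha, beta = blend_weights(pos)
    output = cv.addWeighted(img1, alpha, img2, beta, 0)
    cv.imshow('Transition Effect', output)


# Only start the GUI when OpenCV and both images are present.
if cv is not None and os.path.exists('./img1.jpg') and os.path.exists('./img2.jpg'):
    img1 = cv.imread('./img1.jpg')
    img2 = cv.imread('./img2.jpg')
    cv.namedWindow('Transition Effect')
    # Arguments: trackbar name, window name, initial value, max value, callback.
    cv.createTrackbar('Alpha x100', 'Transition Effect', 0, 100, on_change)
    on_change(0)
    while cv.waitKey(20) != 27:  # loop until ESC is pressed
        pass
    cv.destroyAllWindows()
```

Since OpenCV trackbars only take integer positions, the sketch uses 0 to 100 and divides by 100 to recover the fractional alpha value.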
https://hackthedeveloper.com/transition-effect-opencv-python/
A few helpful tools to make working with the Falcon framework a real joy!

Project description

Falcon Helpers

A number of helpful utilities to make working with the Falcon Framework a breeze.

Quickstart

$ pip install falcon-helpers

import falcon
import falcon_helpers

api = falcon.API(
    middlewares=[
        falcon_helpers.middlewares.StaticsMiddleware()
    ]
)

0.17.1 - 2020-11-04

0.17.0 - 2018-10-17
- [FEAT] Add some useful logging features
- [FEAT] Add Logging to MultiMiddleware
- [BUG] Fix User REPR
- [BUG] Report Integrity Errors with Useful Messages

0.16.0 - 2018-06-25

0.15.3 - 2018-06-18
- [FEAT] Fetching a file pointer in storage allows you to set the mode.

0.15.2 - 2018-06-06
- [FEAT] Support passing S3 configuration to storages
- [BREAK] Default to using V4 of the AWS presigned key

0.15.1 - 2018-06-06
- [FEAT] Allow column_filters to use non-entity columns

0.14.0 - 2018-06-01
- [BREAK] Remove Statics Middleware
- [NEW] Add a simple Sentry Plugin
- [NEW] Create a server CLI

0.13.0 - 2018-05-22

0.12.0 - 2018-04-15
- [FEAT] Create Key Based Filtering

0.11.4 - 2018-04-05
- [FEAT] Allow specifying your own default page size for ListBase

0.11.3 - 2018-03-31
- [FEAT] Allow passing additional data to generate auth token

0.11.2 - 2018-03-30
- [BUG] Remove Stray PDB

0.11.1 - 2018-03-30
- [FEAT] Add hook for deleting an object in CrudBase

0.11.0 - 2018-03-29
- [FEAT] Add filter by field name on ListBase
- [FEAT] Allow turning off auto-marshalling
- [BUG] Session closing could fail with exceptions

0.10.1 - 2018-03-05
- [FEAT] Added a remove function to storage backends

0.10.0 - 2018-03-03
- [NEW] We now have a CI system with CodeCoverage
- [FEAT] You can now use auth_marshal=False to turn off auto JSON marshaling to Marshmallow
- [FEAT] Added a few helpful functions on auth.user
- [BUG] Fixed object deletion of CrudBase (which was what kicked the CI setup into high-gear)

0.9.6 - 2018-03-02
- [BUG] Forgot a self

0.9.5 - 2018-03-01
- [NEW] get_object was implemented for CrudBase
- [FEAT] has_permission now supports an enum type
- [NEW] kwargs is now used on CrudBase

0.9.3 - 2018-02-28
- [BUG] Fix an issue with binary file opening
- [BUG] Utilize the correct exception with CRUD Base

0.9.2 - 2018-02-27
- [CHANGE] Add in fuzzy testing for nullable ORM columns

0.9.1 - 2018-02-24
- [BUG] Add the Falcon-Multipart Requirement

0.9.0 - 2018-02-23
- [FEAT] Added support for downloading
- [CHANGE] Renamed contrib.upload to contrib.storage

0.8.0 - 2018-02-23

0.7.0 - 2018-02-15
- [NEW] Added a CRUD Base Library
- [FEAT] Added a token generation method to the user
- [CHANGE] Cleaned up the REPR for permissions entity
- [CHANGE] Only close the SA session when failure occurs
- [FIX] auth_required accepts the proper arguments

0.6.1 - 2017-12-15
- [BUG] Add a req/resp to failed action functions
- [FEAT] Make ParseJWTMiddleware available at the middleware level
- [BUG] Allow setting of the get_id function

0.6.0 - 2017-12-15
- [NEW] Added a global SQLAlchemy Scoped Session to facilitate testing and other items
- [CHANGE] AuthRequiredMiddleware was split into two and there is a new ParseJWTMiddleware
- [BUG] Cleaned up a number of issues with the way SQLAlchemy ORM is being used

0.5.0 - 2017-12-02
- [NEW] A brand-spanking new permission system with users, groups, and permissions
- [FEAT] Post-login redirect is now configurable.
- [FEAT] Create a simple redirection resource
- [FEAT] Jinja2 Middleware can take application globals to inject into the template
- [FEAT] Added a mixin for testing entities

0.4.2 - 2017-10-25
- Enable Auth Middleware to always run. Helpful when the entire application is an API that requires authentication.

0.4.1 - 2017-10-19
- Fix issue with importing Marshmallow Middleware

0.3.1 - 2017-10-09
- [FEAT] Add a number of helpful SQLAlchemy Features

0.3.0 - 2017-10-07
- [FEAT] Setup SQLAlchemy
- [BUG] Install cryptography for JWT's with RSA algo

0.2.1 - 2017-10-07
- Fix issue when using HS256 tokens for authentication

0.2.0 - 2017-09-23
- Release the Package and update the source location

0.1.0 - 2017-08-22
- Added StaticsMiddleware
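Middleware components like the StaticsMiddleware in the quickstart are plain classes that expose the hooks Falcon calls around each request. Here is a hedged sketch of the shape such a component takes; the class name and its timing behavior are my own illustration, not falcon-helpers' actual implementation:

```python
import time


class TimingMiddleware:
    """Illustrative Falcon-style middleware: record how long a request takes.

    Falcon calls process_request before routing and process_response after
    the responder runs. This sketch assumes req.context behaves like a dict,
    and resp exposes set_header; no falcon import is needed to define it.
    """

    def process_request(self, req, resp):
        # Stash the start time on the request context
        req.context['start'] = time.monotonic()

    def process_response(self, req, resp, resource, req_succeeded):
        # Report elapsed time as a response header
        elapsed = time.monotonic() - req.context['start']
        resp.set_header('X-Elapsed', '%.6f' % elapsed)
```

You would register it the same way as in the quickstart, e.g. `falcon.API(middlewares=[TimingMiddleware()])`.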
https://pypi.org/project/falcon-helpers/
Create an editor window that can be painted into.

For Canvas mode, this determines if the buffer will be saved to a disk file after every stroke. Good for painting textures and viewing the results in shaded display in the model view.

Does a fast undo in Canvas mode. This is a special undo because we are not using any history when we paint in Canvas mode, so we provide a single level undo for the Canvas.

Parameters: First string: command. Second string: editorName. Third string: editorCmd. Fourth string: updateFunc. Call the command when something changes in the editor. The command should have this prototype:

command(string $editor, string $editorCmd, string $updateFunc, int $reason)

The possible reasons could be:
0: no particular reason
1: scale color
2: buffer (single/double)
3: axis
4: image displayed
5: image saved in memory

Query only. Returns the top level control for this editor. Usually used for getting a parent to attach popup menus. Caution: It is possible, at times, for an editor to exist without a control. This flag returns "NONE" in such a case.

Sets the display appearance of the model panel. Possible values are wireframe, points, boundingBox, smoothShaded, flatShaded. This flag may be used with the -interactive and -default flags. Note that only wireframe, points, and boundingBox are valid for the interactive mode.

For Scene mode, this determines if fog will be displayed in the Paint Effects panel when refreshing the scene. If fog is on, but this is off, fog will only be drawn on the strokes, not the rest of the scene.

Specifies the name of a selectionConnection object which the editor will use as its source of content. The editor will only display items contained in the selectionConnection object. This is a variant of the -mainListConnection flag in that it will force a change even when the connection is locked. This flag is used to reduce the overhead when using the -unlockMainConnection, -mainListConnection, -lockMainConnection flags in immediate succession.
Specifies the name of a selectionConnection object which the editor will synchronize with its highlight list. Not all editors have a highlight list. For those that do, it is a secondary selection list.

Locks the current list of objects within the mainConnection, so that only those objects are displayed within the editor. Further changes to the original mainConnection are ignored.

Specifies the name of a selectionConnection object which the editor will use as its source of content. The editor will only display items contained in the selectionConnection object.

Starts a new image in edit mode, setting the resolution to the integer values (X,Y) and clearing the buffer to the floating point values (R,G,B). In Query mode, this returns the (X,Y) resolution of the current Image.

Specifies the panel that the editor belongs to. By default if an editor is created in the create callback of a scripted panel it will belong to that panel. If an editor doesn't belong to a panel it will be deleted when the window that it is in is deleted.

In Canvas mode, this rolls the image by the floating point values (X,Y). X and Y are between 0 (no roll) and 1 (full roll). A value of .5 rolls the image 50% (ie. the border moves to the center of the screen).

Save the current Editor Image to memory. Saved Editor Images are stored in an Editor Image Stack. The most recently saved image is stored in position 0, the second most recently saved image in position 1, and so on... To set the current Editor Image to a previously saved image use the di/displayImage flag.

Specifies the name of a selectionConnection object which the editor will synchronize with its own selection list. As the user selects things in this editor, they will be selected in the selectionConnection object. If the object undergoes changes, the editor updates to show the change.

Query only flag. Returns the MEL command that will edit an editor to match the current editor state.
The returned command string uses the string variable $editorName in place of a specific name.

By default the last image is cached for undo. If this is set false, then undoing will be disabled in canvas mode and undo in scene mode will force a full refresh. Less memory will be used if this is set false before the first clear or refresh of the current scene.

Derived from mel command maya.cmds.dynPaintEditor

Example:

import pymel.core as pm

pm.dynPaintEditor( 'editor' )
pm.dynPaintEditor( 'editor', e=True, ni=(640, 480, 1.0, 0.5, 0.2) )
http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/functions/pymel.core.effects/pymel.core.effects.dynPaintEditor.html#pymel.core.effects.dynPaintEditor
Nepomuk is centred around a main 'Resource' class. For simple, non-high-performance access to Nepomuk, it is recommended that you use the Resource class. Everything in Nepomuk is a Resource. Each Resource has a number of properties associated with it, which are stored as (key, value) pairs. The keys are referred to as properties or predicates, and the values are referred to as the objects. Every file, folder, contact or tag is a Resource.

Every tag in Nepomuk is a Resource. In fact, the Tag class is also derived from the Resource class.

Every resource in Nepomuk can be given a numeric rating. It is generally done on a scale of 0-10.

using namespace Nepomuk2;

File fileRes( urlOfTheFile );
int rating = fileRes.rating();
rating = rating + 1;
fileRes.setRating( rating );
https://techbase.kde.org/index.php?title=Projects/Nepomuk/QuickStart&diff=74137&oldid=74136
Hi Richard,

LGTM, for the most part. One minor detail: should we document that the
URLs should only be used for newstyle connections? I don't honestly
think that using oldstyle is a good idea anymore, so we might as well
drop it and assume that people don't want to try oldstyle anymore, but
then...

On Tue, Jun 11, 2019 at 12:53:30PM +0100, Richard W.M. Jones wrote:
> For further information about discussion around this standard, see
> this thread on the mailing list:
>
>
> Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
> ---
>  doc/Makefile.am |   2 +-
>  doc/uri.md      | 171 ++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 172 insertions(+), 1 deletion(-)
>
> diff --git a/doc/Makefile.am b/doc/Makefile.am
> index 7f0284c..fa8a4b0 100644
> --- a/doc/Makefile.am
> +++ b/doc/Makefile.am
> @@ -1 +1 @@
> -EXTRA_DIST = README proto.md todo.txt
> +EXTRA_DIST = README proto.md todo.txt uri.md
> diff --git a/doc/uri.md b/doc/uri.md
> new file mode 100644
> index 0000000..e06cec5
> --- /dev/null
> +++ b/doc/uri.md
> @@ -0,0 +1,171 @@
> +# The NBD Uniform Resource Identifier (URI) format
> +
> +## Introduction
> +
> +This document describes the standard URI format that clients may use
> +to refer to an export located on an NBD server.
> +
> +## Convention
> +
> +"NBD" stands for Network Block Device and refers to the protocol
> +described in the adjacent protocol document also available online at
> +<>
> +
> +"URI" stands for Uniform Resource Identifier and refers to the standard
> +introduced in [RFC 3986]() and
> +subsequent IETF standards.
> +
> +The key words "MUST", "MUST NOT", "REQUIRED", "SHALL",
> +"SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED",
> +"MAY", and "OPTIONAL" in this document are to be interpreted as
> +described in [RFC 2119]().
> +The same words in lower case carry their natural meaning.
> +
> +## Related standards
> +
> +All NBD URIs MUST also be valid URIs as described in
> +[RFC 3986]() and any subsequent
> +IETF standards describing URIs. This means that any parsing, quoting
> +or encoding issues that may arise when making or parsing an NBD URI
> +must be answered by consulting IETF standards.
> +
> +This standard defers any question about how the NBD protocol works to
> +the NBD protocol document available online at
> +<>
> +
> +## NBD URI components
> +
> +An NBD URI consists of the following components:
> +
> +    +------- Scheme (required)
> +    |
> +    |     +------- Authority (optional)
> +    |     |
> +    |     |                 +------- Export name (optional)
> +    |     |                 |
> +    v     v                 v
> +    nbd://example.com:10809/export
> +
> +    nbd+unix:///export?socket=nbd.sock
> +                      ^
> +                      |
> +                      +---- Query parameters
> +
> +## NBD URI scheme
> +
> +One of the following scheme names SHOULD be used to indicate an NBD URI:
> +
> +* `nbd`: NBD over an unencrypted or opportunistically TLS encrypted
> +  TCP/IP connection.
> +
> +* `nbds`: NBD over a TLS encrypted TCP/IP connection. If encryption
> +  cannot be negotiated then the connection MUST fail.
> +
> +* `nbd+unix`: NBD over a Unix domain socket. The query parameters
> +  MUST include a parameter called `socket` which refers to the name of
> +  the Unix domain socket.
> +
> +* `nbds+unix`: NBD over a TLS encrypted Unix domain socket. If
> +  encryption cannot be negotiated then the connection MUST fail. The
> +  query parameters MUST include a parameter called `socket` which
> +  refers to the name of the Unix domain socket.
> +
> +Other URI scheme names MAY be used but not all NBD clients will
> +understand them or even recognize that they refer to NBD.
> +
> +## NBD URI authority
> +
> +The authority field SHOULD be used for TCP/IP connections and SHOULD
> +NOT be used for Unix domain socket connections.
> +
> +The authority field MAY contain the `userinfo`, `host` and/or `port`
> +fields as defined in [RFC 3986]()
> +section 3.2.
> +
> +The `host` field may be a host name or IP address. Literal IPv6
> +addresses MUST be formatted in the way specified by
> +[RFC 2732]().
> +
> +If the `port` field is not present then it MUST default to the NBD
> +port number assigned by IANA (10809).
> +
> +The `userinfo` field is used to supply a username for certain less
> +common sorts of TLS authentication. If the `userinfo` field is not
> +present but is needed by the client for TLS authentication then it
> +SHOULD default to a local operating system credential if one is
> +available.
> +
> +It is up to the NBD client what should happen if the authority field
> +is not present for TCP/IP connections, or present for Unix domain
> +socket connections. Options might include failing with an error,
> +ignoring it, or using defaults.
> +
> +## NBD URI export name
> +
> +If the version of the NBD protocol in use needs an export name, then
> +the path part of the URI except for the leading `/` character MUST be
> +passed to the server as the export name.
> +
> +For example:
> +
> +    NBD URI                          Export name
> +    ----------------------------------------------------
> +    nbd://example.com/disk           disk
> +    nbd+unix:///disk?socket=sock     disk
> +    nbd://example.com/               (empty string)
> +    nbd://example.com                (empty string)
> +    nbd://example.com//disk          /disk
> +    nbd://example.com/hello%20world  hello world
> +
> +Note that export names are not usually paths, they are free text
> +strings. In particular they do not usually start with a `/`
> +character, they may be an empty string, and they may contain any
> +Unicode character.
> +
> +## NBD URI socket parameter
> +
> +If the scheme name indicates a Unix domain socket then the query
> +parameters MUST include a `socket` key, referring to the Unix domain
> +socket which on Unix-like systems is usually a special file on the
> +local disk.
> +
> +On platforms which support Unix domain sockets in the abstract
> +namespace, and if the client supports this, the `socket` parameter MAY
> +begin with an ASCII NUL character. When the URI is properly encoded
> +it will look like this:
> +
> +    nbd+unix:///?socket=%00/abstract
> +
> +## NBD URI query parameters related to TLS
> +
> +If TLS encryption is to be negotiated then the following query
> +parameters MAY be present:
> +
> +* `tls-type`: Possible values include `anon`, `x509` or `psk`. This
> +  specifies the desired TLS authentication method.
> +
> +* `tls-hostname`: The optional TLS hostname to use for certificate
> +  verification. This can be used when connecting over a Unix domain
> +  socket since there is no hostname available in the URI authority
> +  field; or when DNS does not properly resolve the server's hostname.
> +
> +* `tls-verify-peer`: This optional parameter may be `0` or `1` to
> +  control whether the client verifies the server's identity. By
> +  default clients SHOULD verify the server's identity if TLS is
> +  negotiated and if a suitable Certificate Authority is available.
> +
> +## Other NBD URI query parameters
> +
> +Clients SHOULD prefix experimental query parameters using `x-`. This
> +SHOULD NOT be used for query parameters which are expected to be
> +widely used.
> +
> +Any other query parameters which the client does not understand SHOULD
> +be ignored by the parser.
> +
> +## Clients which do not support TLS
> +
> +Wherever this document refers to encryption, authentication and TLS,
> +clients which do not support TLS SHOULD give an error when
> +encountering an NBD URI that requires TLS (such as one with a scheme
> +name `nbds` or `nbds+unix`).
> --
> 2.22.0
>
>

--
<Lo-lan-do> Home is where you have to wash the dishes.
  -- #debian-devel, Freenode, 2004-09-22
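The export-name rule in the quoted spec can be exercised with a short sketch built on Python's standard urllib; the helper name is mine, not part of the proposal:

```python
from urllib.parse import urlsplit, unquote

def nbd_export_name(uri):
    """Export name = URI path minus one leading '/', percent-decoded,
    per the 'NBD URI export name' section of the proposed spec."""
    path = urlsplit(uri).path
    if path.startswith('/'):
        path = path[1:]
    return unquote(path)

# The examples from the table in the patch
print(nbd_export_name('nbd://example.com/disk'))           # disk
print(nbd_export_name('nbd+unix:///disk?socket=sock'))     # disk
print(nbd_export_name('nbd://example.com'))                # (empty string)
print(nbd_export_name('nbd://example.com//disk'))          # /disk
print(nbd_export_name('nbd://example.com/hello%20world'))  # hello world
```

Note that urlsplit also exposes the authority and query components, so the same approach extends to the `socket` and TLS query parameters.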
https://lists.debian.org/nbd/2019/06/msg00013.html
As you would expect, the data is exactly the same, since both consume the same underlying vSAN Management API. So, how do we get started? First, you will need access to the vSAN RVC source code. The easiest way is to use the vCenter Server Appliance (VCSA), which automatically bundles the latest version of RVC based on the particular release of vCenter Server. All RVC source code can be found in /opt/vmware/rvc/lib/rvc, and files with the .rb extension are the Ruby source files. The primary vSAN RVC commands can be found in /opt/vmware/rvc/lib/rvc/modules/vsan.rb, and the four other vSAN sub-namespaces such as vsan.health, vsan.iscsi_target, vsan.perf and vsan.stretchedcluster can be found in the modules/vsan directory. For your convenience, I have listed them below:

- /opt/vmware/rvc/lib/rvc/modules/vsan/health.rb
- /opt/vmware/rvc/lib/rvc/modules/vsan/iscsi_target.rb
- /opt/vmware/rvc/lib/rvc/modules/vsan/perf.rb
- /opt/vmware/rvc/lib/rvc/modules/vsan/stretchedcluster.rb

Now that you know where to look for the source code, we just need to search for the specific command we are interested in. Since the vsan.check_limits command is part of the main vSAN RVC module vsan.rb, we can open that file and look for the specific operation, which in this case is called check_limits. If we do a search for it, we should see the following, which defines the RVC operation. Directly underneath, we can then see the implementation of the check_limits method and the specific vSAN Management APIs that are used.

Note: It is a fairly common practice to reference other helper functions within a given method, so you may need to search for other functions as you go through a particular RVC command. For example, check_limits calls the query_limits function, which does the actual data gathering before it is processed and displayed back to the users.
Here is the final PowerCLI VSANCheckLimits.ps1 sample script, and below is a high-level breakdown of what is going on within the script:

L19-21 - Process the required parameter, which is a string containing the name of the vSAN Cluster
L26-32 - Retrieve all ESXi hosts and ensure each host is connected and vSAN is enabled on the host before proceeding
L34 - Get access to the vsanInternalSystem manager for each ESXi host, which will be used to call specific vSAN Management APIs
L37-38 - Retrieve the RDT data using the QueryVsanStatistics() API method and process the results, which will be returned as a JSON object
L41-56 - Process the JSON information and extract the data that we are interested in, referencing the vsan.rb source code
L60-61 - Retrieve the component and usage data using the QueryPhysicalVsanDisks() API method and process the results, which will be returned as a JSON object
L65-82 - Process the JSON data and extract the information that we are interested in, referencing the vsan.rb source code
L103-108 - Construct the data that will be shown for each ESXi host using a custom psobject (for display purposes)
L111 - Display the results, which should match the RVC vsan.check_limits command

Although you have the source code in front of you, it may still take you a few tries to extract or interpret the information. I am no Ruby expert, so it also took me a few rounds of trial and error before getting the output right. Do not get frustrated; just take your time. If you get stuck, the best recommendation is to take a break and walk away for a few hours. Nine times out of ten, when you come back refreshed, you will have a different perspective and will get to the answer pretty quickly. At least that has been the case for me when I get stuck. Happy Automating!
Additional vSAN Management API Resources:

- vSAN Management API Quick Reference
- Getting started w/the new PowerCLI 6.5.1 Get-VsanView cmdlet
- Correlating vSAN perf metrics from vSphere Web Client to both PowerCLI & vSAN Mgmt API
- Managing & silencing vSAN Health Checks using PowerCLI
- SMART drive data now available using vSAN Management 6.6 API
http://www.virtuallyghetto.com/2017/06/how-to-convert-vsan-rvc-commands-into-powercli-andor-other-vsphere-sdks.html
How to add a StatusBar in your React Native application

I'm sure that at some point you've stumbled upon the problem with the status bar. By default, React Native draws the container view from the top left corner of the screen without taking the status bar into account. To resolve this issue, we give the container view a top margin, but as you know, the height of the status bar in iOS and Android is different. Luckily, there is a React Native component called <StatusBar>. Let's move on to the solution!

Importing the <StatusBar> component

import { StatusBar } from 'react-native'

How to use

<View>
  <StatusBar barStyle="light-content" />
</View>

In this example, I'm using only one of the <StatusBar> props:

- barStyle - sets the color of the status bar text. It can be either light-content or dark-content.

Here is the full list of props:

- animated - whether the transition between status bar property changes should be animated. Supported for backgroundColor, barStyle and hidden.
- backgroundColor - the background color of the status bar. (Android only)
- barStyle - sets the color of the status bar text.
- hidden - whether the status bar is hidden.
- networkActivityIndicatorVisible - whether the network activity indicator should be visible. (iOS only)
- showHideTransition - the transition effect when showing and hiding the status bar using the hidden prop. Defaults to 'fade'. (iOS only)
- translucent - whether the status bar is translucent. When translucent is set to true, the app will draw under the status bar. This is useful when using a semi-transparent status bar color. (Android only)

That was a short example of how to use the <StatusBar> component in your React Native apps! Happy coding! 👻
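Building on the barStyle prop, here is a sketch of a plain helper that picks light-content or dark-content from a background color's brightness. The function name and the luma threshold are my own choices for illustration, not part of React Native:

```javascript
// Hypothetical helper: choose a StatusBar barStyle for a given
// background color (hex string like '#1a1a2e').
function barStyleFor(hexColor) {
  const r = parseInt(hexColor.slice(1, 3), 16);
  const g = parseInt(hexColor.slice(3, 5), 16);
  const b = parseInt(hexColor.slice(5, 7), 16);
  // Rec. 601 luma approximation; dark backgrounds get light text
  const luma = 0.299 * r + 0.587 * g + 0.114 * b;
  return luma < 128 ? 'light-content' : 'dark-content';
}

// Usage inside a component:
//   <StatusBar barStyle={barStyleFor('#1a1a2e')} />
```

This keeps the status bar text readable when the screen's background color changes at runtime, e.g. with a theme switch.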
https://seishin.me/how-to-add-a-statusbar-in-your-react-native-application/
Eclipse Community Forums

exportSrc
Patrick Geremia - 2011-08-11T18:28:07-00:00

Re: exportSrc
Dave Russo - 2011-08-11T20:59:57-00:00

If xyz.c has not been added to any library or executable in the package (via some addObjects() call), it will not be added to the package's sources. You can either:

1. add "xyz.c" to the Pkg.otherSrcs array,
2. add "xyz.c" to the Pkg.otherFiles array, or
3. add "xyz.c" to the release's otherFiles array (as you already noted)

We intentionally did not add all *.c and *.h files in the package's directory to the list of source files since this might inadvertently place proprietary sources in a release. Moreover, we don't know all the extensions that represent sources. You must either explicitly build the file or add it to one of the arrays above for it to show up in a "source" release.

If you want to add all *.c files in the package's directory, you can use the Java file methods to list all such files and add these as above. There is no harm in having the same file appear in more than one list; e.g., even if "xyz.c" would have been considered a source file via the addObjects() methods, it does not hurt to also put "xyz.c" in the otherSrcs array. For example:

function ls(root, regExp)
{
    if (regExp == null) {
        regExp = /.*/;
    }

    /*
     * ======== myFilter ========
     * Return true if a file matches regExp; otherwise, return false.
     */
    function myFilter(dir, file)
    {
        return (file.match(regExp) != null ? true : false);
    }

    /* create the filter */
    var filter = java.io.FilenameFilter({accept : myFilter});
    var file = new java.io.File(root);
    var list = file.list(filter);

    /* display the list */
    for (var i = 0; i < list.length; i++) {
        print(list[i]);
    }
}

ls(".", /.*\.c/)

On 8/11/2011 11:28 AM, Patrick Geremia wrote:
>
http://www.eclipse.org/forums/feed.php?mode=m&th=234617&basic=1
Benchmarking Java Lists Performance

LinkedList is slightly slower than ArrayList when adding items to the end of the list. It is also slower when retrieving items with an index (random access).

Let's examine the performance characteristics of the list implementations ArrayList and LinkedList. We perform the testing for a variety of list sizes, from 100 items to 1M items. We use the JMH test harness to conduct the test on a single-core machine. The results are presented below.

ArrayList Add Performance

This test measures the performance of creating the list and populating it with a specified number of items. The test code is shown below; nothing fancy. A specified number of integers is created using the Random class and collected into a particular type of List. For this test, we have included ArrayList, LinkedList, and Vector.

@State(Scope.Thread)
static public class MyState {
    @Param("100")
    public int NSIZE;
}

@Benchmark
public void test_createArrayList(MyState state) {
    Random random = new Random();
    List<Integer> list = random
        .ints(state.NSIZE)
        .collect(ArrayList::new, List::add, List::addAll);
}

@Benchmark
public void test_createLinkedList(MyState state) {
    Random random = new Random();
    List<Integer> list = random
        .ints(state.NSIZE)
        .collect(LinkedList::new, List::add, List::addAll);
}

@Benchmark
public void test_createVector(MyState state) {
    Random random = new Random();
    List<Integer> list = random
        .ints(state.NSIZE)
        .collect(Vector::new, List::add, List::addAll);
}

The performance of the operation is shown below. As mentioned before, we tested from 100 through 1M items, as shown on the X-axis. The Y-axis is the throughput in ops per second and is shown in log scale since there is a steep drop-off as the size increases. There are a couple of conclusions we can draw from this graph.
As always, results are specific for this run of the code. (In other words, your mileage may vary.)

- LinkedList adding is slower as the number of items increases.
- There's no perceptible difference between ArrayList and Vector.

ArrayList Loop Performance

Let's now check the performance of the ArrayList in a loop. Since iterating over a List is one of the most important operations in programs in general, this aspect is quite important. We have tested the following ways of iterating a list. The code loops over a List and fetches each item; thus, it represents reasonably real-world code.

Enhanced For-Each

This is a simple for-each loop. To prevent the JVM from optimizing the code out of existence, we maintain a loop variable, increment it each loop, and return it.

@Benchmark
public int test_forEach(MyState state) {
    int n = 0;
    for (int num : state.list) {
        n++;
    }
    return n;
}

A Regular Indexed for Loop

This loop iterates over all elements in order and fetches each element.

@Benchmark
public int test_forLoop(MyState state) {
    int n = 0;
    for (int i = 0; i < state.list.size(); i++) {
        int value = state.list.get(i);
        n++;
    }
    return n;
}

Loop With Iterator

This loop uses the normal Collection.iterator method to retrieve an iterator and fetches the item from it.

@Benchmark
public int test_iterator(MyState state) {
    int n = 0;
    for (Iterator<Integer> it = state.list.iterator(); it.hasNext(); ) {
        int num = it.next();
        n++;
    }
    return n;
}

Process Items in a Stream

This loop uses the new Java 8 streams facility with a lambda.

@Benchmark
public int test_stream(MyState state) {
    return state.list.stream().reduce(0, (x, y) -> x + y);
}

Here are the results. As before, the X-axis is the size of the ArrayList and the Y-axis is the throughput in ops/s, with a higher number indicating faster performance. Also, note that the Y-axis is log scale. Nothing unusual here. The performance of the loop drops as the size of the list grows.
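The cost of index-based get() can also be felt outside a benchmark harness. The sketch below is a rough illustration, not a rigorous JMH benchmark (timings will vary by machine and are subject to JIT and GC noise):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class RandomAccessDemo {

    // Sum all elements via index-based get():
    // O(n) total for ArrayList, O(n^2) total for LinkedList.
    static long sumByIndex(List<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 20_000;
        List<Integer> array = new ArrayList<>();
        List<Integer> linked = new LinkedList<>();
        for (int i = 0; i < n; i++) {
            array.add(i);
            linked.add(i);
        }

        long t0 = System.nanoTime();
        long s1 = sumByIndex(array);
        long t1 = System.nanoTime();
        long s2 = sumByIndex(linked);
        long t2 = System.nanoTime();

        // Same result, very different cost
        System.out.println("sum = " + s1 + " (ArrayList:  " + (t1 - t0) / 1_000_000 + " ms)");
        System.out.println("sum = " + s2 + " (LinkedList: " + (t2 - t1) / 1_000_000 + " ms)");
    }
}
```

The reason for the gap is structural: ArrayList implements the RandomAccess marker interface and resolves get(i) with array arithmetic, while LinkedList must walk node links from the nearest end on every call.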
A surprise here is that the performance of the ArrayList stream is about 20% lower than the other methods. Again, your mileage may vary.

LinkedList Loop Performance

These are the same tests as above on a LinkedList, ranging in size from 100 items to 10,000 items. A few observations are in order here. The for-loop is far slower than the other methods, which is to be expected since LinkedList is not a RandomAccess list. ArrayList has O(1) performance for retrieval while LinkedList has O(N) performance, and that is reflected here. Also, the streams processing pipeline is about 20% slower than the for-each and iterator methods.

Summary

In this article, we examined some performance characteristics of the ArrayList and LinkedList, including creation, adding, and retrieval in a loop. The LinkedList is slightly slower when adding items to the end of the list. It is also slower when retrieving items with an index (random access).

If you enjoyed this article and want to learn more about Java Collections, check out this collection of tutorials and articles on all things Java Collections.

Published at DZone with permission of Jay Sridhar, DZone MVB. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/benchmarking-java-lists-performance
DAO and Access97 WHERE clause fails
- From: "v.davis2" <v.davis2@xxxxxxx>
- Date: Fri, 8 Jun 2007 17:17:51 -0700

Hi all,

I am attempting to use Access97 as the database to hold the results of a python script. I seem to be able to make simple SELECT clauses work (like SELECT * FROM TableName), but I have not been able to figure out how to add a WHERE clause to that (e.g., SELECT * FROM TableName WHERE myFieldName = 34). This fails, complaining that the wrong number of parameters is present. I have tried DAO36 and I have tried the ADO version with the same results. Therefore I have to conclude it is my screwup! Help in the forum or via email would sure be appreciated! (v.davis2@xxxxxxx)

Here is the skeleton code:

    import win32com.client

    daoEngine = win32com.client.Dispatch('DAO.DBEngine.35')
    sDBname = 'vpyAnalyzeDirectorySize.mdb'
    sDB = 'c:\\documents and settings\\vic\\my documents\\tools\\python25\\_myscripts\\' + sDBname
    daoDB = daoEngine.OpenDatabase(sDB)

    sSQL1 = 'SELECT * FROM T_Index2DirName'
    daoRS = daoDB.OpenRecordset(sSQL1)  # this works FINE and I can play with the record set
    # <snip>

    hsDB = hash(sDB)
    sSQL3 = 'SELECT * FROM T_Index2DirName WHERE iIndex = hsDB'  # names are all correct in mdb file
    daoRStest = daoDB.OpenRecordset(sSQL3)  # this FAILS, even though the record is there
    daoRS.Close()

    Traceback (most recent call last):
      File "C:\Documents and Settings\Vic\My Documents\Tools\python25\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py", line 310, in RunScript
        exec codeObject in __main__.__dict__
      File "C:\Documents and Settings\Vic\My Documents\Tools\python25\_MyScripts\TestForPosting01.py", line 14, in <module>
        daoRStest = daoDB.OpenRecordset(sSQL3)  # this FAILS, even though record is there
      File "C:\Documents and Settings\Vic\My Documents\Tools\python25\lib\site-packages\win32com\gen_py\00025E01-0000-0000-C000-000000000046x0x5x0.py", line 523, in OpenRecordset
        , Type, Options, LockEdit)
    com_error: (-2147352567, 'Exception occurred.', (0, 'DAO.Database', 'Too few parameters. Expected 1.', 'jeterr35.hlp', 5003061, -2146825227), None)

- Follow-Ups:
  - Re: DAO and Access97 WHERE clause fails - From: v.davis2
  - Re: DAO and Access97 WHERE clause fails - From: John Machin
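A likely diagnosis, based on the error text rather than anything confirmed in this message: the name hsDB sits inside the SQL string literal, so the Jet engine treats it as an unresolved query parameter, hence "Too few parameters. Expected 1." A hedged sketch of a fix is to substitute the Python value into the statement text before calling OpenRecordset:

```python
# Sketch only: the table and column names come from the post; the fix itself
# is my suggestion, not something stated in the thread.
hsDB = hash('c:\\some\\path\\vpyAnalyzeDirectorySize.mdb')  # stand-in value

# Broken: Jet sees the literal identifier "hsDB" and asks for a parameter.
broken = 'SELECT * FROM T_Index2DirName WHERE iIndex = hsDB'

# Fixed: format the numeric value into the SQL text itself.
fixed = 'SELECT * FROM T_Index2DirName WHERE iIndex = %d' % hsDB
```

The same daoDB.OpenRecordset(fixed) call should then run without the parameter error, assuming iIndex really is a numeric column.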
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2007-06/msg01098.html
Section 2.3 Strings, Objects, Enums, and Subroutines

Another reason for considering classes and objects at this point is so that we can introduce enums. An enum is a data type that can be created by a Java programmer to represent a small collection of possible values. Technically, an enum is a class and its possible values are objects. Enums will be our first example of adding a new type to the Java language. We will look at them later in this section.

2.3.1 Built-in Subroutines and Functions

These subroutines are "built into" the Java language. String is also a type, and literal strings such as "Hello World" represent values of type String. The object System.out is just one possible destination to which information can be sent, and System.out.print is the subroutine that sends information to that particular destination.

(As one final general note, you should be aware that subroutines in Java are often referred to as methods. Generally, the term "method" means a subroutine that is contained in a class or in an object. Since this is true of every subroutine in Java, every subroutine in Java is a method. The same is not true for other programming languages. Nevertheless, the term "method" is mostly used in the context of object-oriented programming, and until we start doing real object-oriented programming in Chapter 5, I will prefer to use the more general term, "subroutine.")

The parameter of System.exit tells the computer why the program was terminated. A parameter value of 0 indicates that the program ended normally. Any other value indicates that the program was terminated because an error was detected. But in practice, the value of the parameter is usually ignored.

The Math class provides a number of mathematical functions. For the trigonometric functions, the angle is expressed in radians, not degrees. For Math.floor, even though the return value is mathematically an integer, it is returned as a value of type double, rather than of type int as you might expect. For example, Math.floor(3.76) is 3.0. The function Math.round(x) returns the integer that is closest to x.
- Math.random(), which returns a randomly chosen double in the range 0.0 <= Math.random() < 1.0. (The computer actually calculates so-called "pseudorandom" numbers, which are not truly random but are random enough for most purposes.)

For these functions, the type of the parameter -- the x or y inside the parentheses -- can be any numeric type. Here is a sample program that performs some computations and reports how long they took, measured in fractions of a second:

    /**
     * This program performs some mathematical computations and displays
     * the results. It then reports the number of seconds that the
     * computer spent on this task.
     */
    public class TimedComputation {

And here is an applet that simulates this program. If you run it several times, you should see a different random number in the output each time, and you might see different run times.

2.3.2 Operations on Strings

A value of type String is an object. That object contains data, namely the sequence of characters that make up the string. It also contains subroutines. All of these subroutines are in fact functions. For example, every string object contains a function named length that computes the number of characters in that string. Suppose that advice is a variable that refers to a String. For example, advice might have been declared and assigned a value as follows:

    String advice;
    advice = "Seize the day!";

Then advice.length() is a function call that returns the number of characters in the string "Seize the day!". In this case, the return value would be 14. In general, for any string variable str, the value of str.length() is an int equal to the number of characters in the string that is the value of str. Note that this function has no parameter; the particular string whose length is being computed is the value of advice. You can even apply the function to a literal string, saying:

    System.out.print("The number of characters in ");
    System.out.println("the string \"Hello World\" is ");
    System.out.println( "Hello World".length() );

The String class defines a lot of functions. Here are some that you might find useful.
Assume that s1 and s2 refer to values of type String:

- For the case-changing functions s1.toUpperCase(), s1.toLowerCase(), and s1.trim(), note that the value of s1 is not modified. Instead a new string is created and returned as the value of the function. The returned value could be used, for example, in an assignment statement such as "smallLetters = s1.toLowerCase();". To change the value of s1, you could use an assignment "s1 = s1.toLowerCase();".

For the concatenation operator, +, note that no space is added automatically -- if you want a space in the concatenated string, it has to be somewhere in the input data, as in "Hello " + "World". You can actually concatenate values of any type onto a String using the + operator. The value is converted to a string, just as it would be if you printed it to the standard output, and then it is concatenated onto the string. For example, the expression "Number" + 42 evaluates to the string "Number42". And the statements

    System.out.print("After ");
    System.out.print(years);
    System.out.print(" years, the value is ");
    System.out.print(principal);

can be replaced by the single statement:

    System.out.print("After " + years + " years, the value is " + principal);

Obviously, this is very convenient. It would have shortened some of the examples presented earlier in this chapter.

2.3.3 Introduction to Enums

Java comes with eight built-in primitive types and a large set of types that are defined by classes, such as String. But even this large collection of types is not sufficient to cover all the possible situations that a programmer might have to deal with. So, an essential part of Java, just like almost any other programming language, is the ability to create new types. For the most part, this is done by defining new classes; you will learn how to do that in Chapter 5. But we will look here at one particular case: the ability to define enums (short for enumerated types). Enums are a recent addition to Java. They were only added in Version 5.0.
Many programming languages have something similar, and many people believe that enums should have been part of Java from the beginning. Technically, an enum is considered to be a special kind of class, but that is not important for now. In this section, we will look at enums in a simplified form. In practice, most uses of enums will only need the simplified form that is presented here.

An enum is a type that has a fixed list of possible values, which is specified when the enum is created. In some ways, an enum is similar to the boolean data type, which has true and false as its only possible values. However, boolean is a primitive type, while an enum is not. The definition of an enum type has the (simplified) form:

    enum enum-type-name { list-of-enum-values }

This definition cannot be inside a subroutine. You can place it outside the main() routine of the program. The enum-type-name can be any simple identifier. This identifier becomes the name of the enum type, in the same way that "boolean" is the name of the boolean type and "String" is the name of the String type. Each value in the list-of-enum-values must be a simple identifier, and the identifiers in the list are separated by commas. For example, here is the definition of an enum type named Season whose values are the names of the four seasons of the year:

    enum Season { SPRING, SUMMER, FALL, WINTER }

By convention, enum values are given names that are made up of upper case letters, but that is a style guideline and not a syntax rule. Enum values are not variables. Each value is a constant that always has the same value. In fact, the possible values of an enum type are usually referred to as enum constants.
Note that the enum constants of type Season are considered to be "contained in" Season, which means -- following the convention that compound identifiers are used for things that are contained in other things -- the names that you actually use in your program to refer to them are Season.SPRING, Season.SUMMER, Season.FALL, and Season.WINTER. Once an enum type has been created, it can be used to declare variables in exactly the same ways that other types are used. For example, you can declare a variable named vacation of type Season with the statement:

    Season vacation;

After declaring the variable, you can assign a value to it using an assignment statement. The value on the right-hand side of the assignment can be one of the enum constants of type Season. Remember to use the full name of the constant, including "Season"! For example:

    vacation = Season.SUMMER;

You can print out an enum value with an output statement such as System.out.print(vacation). The output value will be the name of the enum constant (without the "Season."). In this case, the output would be "SUMMER". Because an enum is technically a class, the enum values are technically objects. As objects, they can contain subroutines. One of the subroutines in every enum value is named ordinal(). When used with an enum value, it returns the ordinal number of the value in the list of values of the enum. The ordinal number simply tells the position of the value in the list. That is, Season.SPRING.ordinal() is the int value 0, Season.SUMMER.ordinal() is 1, Season.FALL.ordinal() is 2, and Season.WINTER.ordinal() is 3. (You will see over and over again that computer scientists like to start counting at zero!) You can, of course, use the ordinal() method with a variable of type Season, such as vacation.ordinal() in our example. Right now, it might not seem to you that enums are all that useful. As you work through the rest of the book, you should be convinced that they are.
For now, you should at least appreciate them as the first example of an important concept: creating new types. Here is a little example that shows enums being used in a complete program:

    public class EnumDemo {

        // Define two enum types -- remember that the definitions
        // go OUTSIDE the main() routine!

        enum Day { SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY }
        enum Month { JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC }

        public static void main(String[] args) {

            Day tgif;     // Declare a variable of type Day.
            Month libra;  // Declare a variable of type Month.

            tgif = Day.FRIDAY;   // Assign a value of type Day to tgif.
            libra = Month.OCT;   // Assign a value of type Month to libra.

            System.out.print("My sign is libra, since I was born in ");
            System.out.println(libra);   // Output value will be: OCT
            System.out.print("That's the ");
            System.out.print( libra.ordinal() );
            System.out.println("-th month of the year.");
            System.out.println("   (Counting from 0, of course!)");

            System.out.print("Isn't it nice to get to ");
            System.out.println(tgif);   // Output value will be: FRIDAY

            // You can concatenate enum values onto Strings!
            System.out.println( tgif + " is the " + tgif.ordinal()
                                    + "-th day of the week.");
        }
    }
http://math.hws.edu/javanotes/c2/s3.html
Exploring and Transforming H2O DataFrames in R and Python

In this code-heavy tutorial, learn how to ingest datasets for building models using H2O DataFrames, with both R and Python code.

Sometimes you may need to ingest a dataset for building models. Your first task will be to explore all the features and their types. Once that is done, you may want to change the feature types to whatever your models require.

To view the distribution of each feature in a graphical format, you can do the following:

    import pylab as pl
    df.as_data_frame().hist(figsize=(20,20))
    pl.show()

The result looks like this in a Jupyter notebook:

Note: If you have more than 50 features, you might have to trim your DataFrame to fewer features in order to have an effective visualization.

You can also convert a list of columns to factor/categorical by passing the frame and the column names to a helper function.

That's it -- enjoy!

Published at DZone with permission of Avkash Chauhan, DZone MVB.
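The article's helper function was elided; here is a hedged sketch of what such a helper usually looks like. asfactor() is a real H2OFrame method, but the helper's name, convert_columns_to_factors, and its signature are my own:

```python
def convert_columns_to_factors(df, columns):
    # Replace each named column with its categorical (factor) version.
    # `df` is assumed to be an H2OFrame supporting df[col] and asfactor().
    for col in columns:
        df[col] = df[col].asfactor()
    return df

# Usage (assuming an H2OFrame named df):
# df = convert_columns_to_factors(df, ["cylinders", "origin"])
```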
https://dzone.com/articles/exploring-amp-transforming-h2o-data-frame-in-r-and
Sorry, I had the sqrt there to make it more efficient, but I removed it while debugging. Right now I'm running the code to find all the primes below 1,000,000. What the program does is look for factors by dividing each number by 2, then 3, then 4, and so on until it reaches the original number. I'll change the variable names; then it will make more sense. This code is a bit more up to date:

    import math
    from math import sqrt

    primecount = 1
    current = 1
    end = raw_input("What number do you wish to find all the primes below? ")
    end = int(end)
    while current < end:  # while loop two
        divisor = 2  # every number is divisible by 1, so I have to start at 2
        z = int(sqrt(current)) + 1  # no set of factors has both above its square root
        while divisor < current:  # while loop number one
            if current % divisor == 0:  # exits if there is a factor
                divisor = current  # exits while loop one
            elif divisor == current - 1:  # gives an exit point
                if current % divisor == 0:  # makes sure it's not divisible one last time
                    divisor = current  # exits while loop one
                else:
                    print current, "The number", primecount, "prime."  # prints prime numbers
                    divisor = current  # exits while loop one
                    primecount = primecount + 1  # helps count primes
            else:
                divisor = divisor + 1  # keeps while loop one going
        current = current + 1  # moves while loop two closer to end

If I left any unchanged: a is divisor, y is end, x is current.

I fixed the code and made it a ton faster. It used to take about 4 hours to find all the primes below 1,000,000, but now it takes less than a minute, and it recognizes all primes but 2 and 3. Here is the code:

    import math
    from math import sqrt

    b = 1
    x = 1
    y = raw_input("What number do you wish to find all the primes below? ")
    y = int(y)
    while x < y:
        while x == 2 or x == 3:
            print x, "The number", b, "prime."
            b = b + 1
            x = x + 1
        a = 2
        z = int(sqrt(x)) + 1
        while a < z:
            if x % a == 0:
                a = x
            elif a == z - 1:
                if x % a == 0:
                    a = x
                else:
                    print x, "The number", b, "prime."
                    a = x
                    b = b + 1
            else:
                a = a + 1
        x = x + 1

Nice that you posted such original code, even if it is not so hugely fast. I tinkered with it slightly to make the exits from the while loops normal, skip the even numbers, and rename your variables again myself. I compared it to a personal implementation of a sieve algorithm that uses generator functions to generate the multiples (also not as efficient as a plain sieve). Sieve algorithms are of course much faster.

    import time
    import itertools

    # Returns an iterator for generating all primes < stop.
    def primes_gen(stop):
        if stop is None:
            print("This algorithm doesn't support unbounded ranges")
            raise StopIteration
        # Find all primes by iterating though the candidates and checking
        # if each is a known composite (multiple of some prime). If not, build
        # a composite iterator based on it and add it to the heap of iterators
        # to check against.
        # Based loosely on.
        md = {}

        def add_to_multi_dict(d, key, value):
            if key in d:
                d[key].append(value)
            else:
                d[key] = [value]

        # This iterator skips all multiples of 2 and 3.
        # Can apply an optional multiplier.
        def better_iterator(n, mult=1):
            cycle = itertools.cycle([2, 4])
            if (n + 1) % 3:
                next(cycle)
            while True:
                yield n * mult
                n += next(cycle)

        itmaker = better_iterator
        it = itmaker(5)
        cur = next(it)
        yield 2
        yield 3
        while cur < stop:
            if cur in md:
                all_iters = md[cur]
                del md[cur]
                for i in all_iters:
                    n = next(i)
                    add_to_multi_dict(md, n, i)
            else:
                yield cur
                if stop is None or cur * cur < stop:
                    it2 = itmaker(cur, cur)
                    n = next(it2)
                    add_to_multi_dict(md, n, it2)
            cur = next(it)

    def primes(stop):
        limit = candidate = 1
        primes_list = []
        for candidate in 2, 3:
            if candidate < stop:
                primes_list.append(candidate)
        while candidate < stop:
            divisor = 3
            if limit * limit < candidate:
                limit += 1
            while True:
                if candidate % divisor == 0:
                    break
                elif divisor >= limit - 1:
                    if candidate % divisor == 0:
                        break
                    else:
                        primes_list.append(candidate)
                        break
                else:
                    divisor += 2
            candidate += 2
        return primes_list

    # Runs the function described by the string s, then prints out how long
    # it took to run and returns the result (added by pyTony).
    def timing(s):
        t = time.clock()
        r = eval(s)
        print("%s took %f ms (%i results)." % (s, 1000 * (time.clock() - t), len(r)))
        return r

    if __name__ == '__main__':
        max_prime = 1000000
        printing = False
        m = len(str(max_prime))
        for primefunction in 'primes', 'primes_gen':
            print('Using %s upto %i' % (primefunction, max_prime))
            pr = timing('list(%s(%s))' % (primefunction, 1000000))
            if printing:
                print('\n'.join('%6i: %*i' % (ind, m, prime)
                                for ind, prime in enumerate(pr, 1)))

    """Output:
    01.12.2011 11:54:19,92 K:\test >primes_hovestar.py
    Using primes upto 1000000
    list(primes(1000000)) took 20984.728785 ms (78498 results).
    Using primes_gen upto 1000000
    list(primes_gen(1000000)) took 1960.589405 ms (78498 results).
    """
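The reply mentions that plain sieve algorithms are faster still. For reference, a minimal Sieve of Eratosthenes (my sketch, not code from the thread) looks like this:

```python
def sieve_primes(stop):
    """Return all primes below stop using a plain boolean sieve."""
    if stop <= 2:
        return []
    is_prime = [True] * stop
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(stop ** 0.5) + 1):
        if is_prime[n]:
            # Cross off every multiple of n, starting at n*n.
            for multiple in range(n * n, stop, n):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]
```

On a modern machine this finds the 78498 primes below 1,000,000 (the same count as in the timing output above) in well under a second.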
https://www.daniweb.com/software-development/python/threads/397449/quick-debugging
Indexed multidimensional arrays

I wrote some APL- and Egison-inspired code to work with multidimensional arrays. The code is written in Python. I plan to implement some variation of this in Lever 0.10.0.

    a = tensor(0, [
        tensor(1, [1, 0, 0, 5]),
        tensor(1, [0, 1, 0, 2]),
        tensor(1, [0, 0, 1, 2]),
        tensor(1, [0, 0, 0, 1]),
    ])
    b = tensor(0, [
        tensor(1, [1, 2]),
        tensor(1, [3, 4]),
        tensor(1, [0, 2]),
        tensor(1, [1, 0]),
    ])
    c = l(a, {1:cnt("k")}) * l(b, {0:"k"})

The line producing the value c is doing matrix multiplication expressed with tensors and tensor index relabeling. The rows in a and the columns in b become a contraction pair after the multiplication. Here are the printouts for a, b and c:

    ╭1 0 0 5╮
    │0 1 0 2│
    │0 0 1 2│
    ╰0 0 0 1╯

    ╭1 2╮
    │3 4│
    │0 2│
    ╰1 0╯

    ╭6 2╮
    │5 4│
    │2 2│
    ╰1 0╯

The code for printing out the tensors was sufficiently challenging to write as well. You can try all of this code yourself, because everything here has been supplied along with this blog post. tensors.py is more of a study than a production-quality library. I wrote it to see how to put this together.

Definition of the data structure

The library represents tensors with a multidimensional array. It disregards the order of the layout in order to produce tensors. I thought this was unusual, but it is similar to how the underlying ordering is disregarded in set data structures. The choice of information presented in each parameter inside the data structure is not perfectly planned out. There may be a better way to structure this that I have not found yet.

It is assumed that none of the dimensions in an array ever reduce to zero length. This is prohibited, as it would mean that the tensor collapses into nothing.

Pretty printing

I start by describing the pretty printing, because being able to see what is happening is very useful during development.
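As a sanity check (my addition, independent of the library), the printout for c above is exactly the ordinary matrix product of a and b:

```python
a = [[1, 0, 0, 5],
     [0, 1, 0, 2],
     [0, 0, 1, 2],
     [0, 0, 0, 1]]
b = [[1, 2],
     [3, 4],
     [0, 2],
     [1, 0]]

def matmul(a, b):
    # Contract a's columns against b's rows, i.e. sum over the shared index k.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

c = matmul(a, b)  # -> [[6, 2], [5, 4], [2, 2], [1, 0]]
```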
    signature = None
    if all(isinstance(i, int) and i >= 0 for i in shape):
        if len(shape)/2 <= max(shape):
            shape = range(max(shape)+1)
            signature = ""
    if signature is None:
        signature = "{" + ", ".join(repr(i) for i in self.shape) + "}"
    indexer = tensor_reshape(self.shape, self.indices, shape, self.indices)
    return "".join(generate_table(indexer, self.values)) + signature

The shape describes the order in which the dimensions are laid out in the values list. If the shape proposes that this is a mostly linear tensor with natural numbers as indices, then we print the table as if it were fully populated with indices. Otherwise the table is generated in the order in which it has been laid out, with a trailing signature.

    odds = indexer[1::2]
    even = indexer[0::2]
    rows = []
    stride = 1
    for count, _ in odds:
        stride *= count
        rows.append(stride)
    row_stride = stride
    cols = []
    for count, _ in even:
        stride *= count
        cols.append(stride)
    col_stride = stride
    if len(rows) < len(cols):
        rows.append(row_stride)
    if len(rows) > len(cols):
        cols.append(col_stride)

We use a display where the tensor has been interleaved such that even-ranked indices are columns and odd-ranked indices are rows. The above code calculates stride counts to later determine where to put the parentheses.

    col_size = [1 for col in range(row_stride)]
    for j, index in enumerate(take_indices(odds + even)):
        k = j % len(col_size)
        col_size[k] = max(col_size[k], len(repr(values[index])))

The column sizes are computed so that every column can be spaced to an equal width. After this point we have everything we need, so the printing can start.

    view1 = [u"\u256D", u"\u2502", u"\u2570",
             u"\u256E", u"\u2502", u"\u256F"]
    view2 = [u"\u239B", u"\u239C", u"\u239D",
             u"\u239E", u"\u239F", u"\u23A0"]
    view = view1

First I tried to use the characters purpose-made for drawing long parentheses, but I ended up settling on the graphic characters because they look much better in the terminal. The view2 is the alternative visual I tried.
For curious people, this is how the alternative visual looks:

    ⎛1 0 0 5⎞
    ⎜0 1 0 2⎟
    ⎜0 0 1 2⎟
    ⎝0 0 0 1⎠

Some monospace renderers render this properly, but others produce small jitters that hinder readability.

    sp = ""
    for j, i in enumerate(take_indices(odds + even)):
        k = j % len(col_size)
        yield sp
        for z, n in enumerate(reversed(rows)):
            if j % n == 0:
                p = (j - j % row_stride)
                uc = (p % cols[-1-z] == 0)
                lc = ((p + row_stride) % cols[-1-z] == 0)
                if uc and lc:
                    yield u"\u27EE".encode('utf-8')
                elif uc:
                    yield view[0].encode('utf-8')
                elif lc:
                    yield view[2].encode('utf-8')
                else:
                    yield view[1].encode('utf-8')
        yield repr(values[i]).rjust(col_size[k])
        sp = " "
        for z, n in enumerate(rows):
            if (j+1) % n == 0:
                p = (j - j % row_stride)
                uc = ((p + row_stride) % cols[z] == 0)
                lc = (p % cols[z] == 0)
                if uc and lc:
                    yield u"\u27EF".encode('utf-8')
                elif uc:
                    yield view[5].encode('utf-8')
                elif lc:
                    yield view[3].encode('utf-8')
                else:
                    yield view[4].encode('utf-8')
        if (j+1) % len(col_size) == 0:
            sp = "\n"

The upper part determines how to print the opening parentheses, and the lower part determines how to print the closing parentheses. To get the uc and lc conditions right I had to think very carefully about what the cols and rows strides mean. I quickly forgot what they mean after I got this to work.

Unary operations

Every unary operation is applied to each element directly, so it is trivial to write an implementation for applying them.

    def unary_operation(op, self):
        return Tensor(self.shape,
            [op(value) for value in self.values], self.indices)

The binary operations are much harder to implement.
Binary operations

Most of the magic below happens inside tensor_reshape and take_indices.

    def binary_operation(op, self, other):
        self = to_tensor(self)
        other = to_tensor(other)
        contracts = set()
        shape = []
        indices = {}
        for key in self.shape:
            if key in indices:
                continue
            count1 = self.indices.get(key, 1)
            count2 = other.indices.get(key, 1)
            assert count1 == count2 or count1 == 1 or count2 == 1, "shape error"
            indices[key] = max(count1, count2)
            shape.append(key)
        for key2 in other.shape:
            if key2 in indices:
                continue
            if is_contracting(key2) and contracting_pair(key2) in indices:
                key1 = contracting_pair(key2)
                contracts.add(key1)
            else:
                key1 = key2
                shape.append(key1)
            count1 = self.indices.get(key1, 1)
            count2 = other.indices.get(key2, 1)
            assert count1 == count2 or count1 == 1 or count2 == 1, "shape error"
            indices[key1] = max(count1, count2)
        indexer1 = tensor_reshape(self.shape, self.indices, shape, indices)
        indexer2 = tensor_reshape(other.shape, other.indices, shape, indices)
        indices1 = take_indices(indexer1, 0)
        indices2 = take_indices(indexer2, 0)
        values = [op(self.values[i], other.values[j])
            for i, j in itertools.izip(indices1, indices2)]
        num = Tensor(shape, values, indices)
        for key in contracts:
            num = fold_dimension(contracting_fold(key), num, key)
        return num

The code mentions contracting pairs: they are pairs of indices that are programmed to fold and merge away from the result when they meet each other. This design has been heavily inspired by Egison's tensor paper, which is an attempt to implement Einstein summation notation in a programming language. It describes symbols that name each dimension; they behave more like annotations than actual indices representing dimensions. But the way I have presented it here gives it whole new interesting meanings. It is as if tensors never needed to have their dimensions ordered, and the choice of symbols for the basis, and the order of those symbols, was always an unavoidable representational detail.
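The stride-0 spreading that binary_operation relies on is ordinary broadcasting; here is a tiny standalone illustration (my sketch, not the library's code):

```python
def broadcast_add(row, col):
    # A length-n "row" and a length-m "column" combine into an m-by-n table:
    # each side is reused (stride 0) along the dimension it does not have.
    return [[c + r for r in row] for c in col]

table = broadcast_add([1, 2, 3], [10, 20])  # -> [[11, 12, 13], [21, 22, 23]]
```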
Folding and indexing

Folding and indexing both "destroy" an index, but only in the sense that it is no longer needed in the result. The index is folded into a length of 1, meaning that the value does not expand in that direction. It is as if every index that we know and do not know about were present in every tensor simultaneously.

    def fold_dimension(op, self, rank=0):
        if rank not in self.indices:
            return self
        strides = tensor_strides(self.shape, self.indices)
        count, stride = strides.pop(rank)
        width = count*stride
        shape = [i for i in self.shape if i != rank]
        indices = dict((i, self.indices[i]) for i in shape)
        indexer = [strides[i] for i in shape]
        values = [
            reduce(op, (self.values[j] for j in range(i, i+width, stride)))
            for i in take_indices(indexer, 0)]
        if len(shape) == 0:
            return values[0]
        return Tensor(shape, values, indices)

    def index_dimension(self, rank, index):
        if rank not in self.indices:
            return self
        strides = tensor_strides(self.shape, self.indices)
        count, stride = strides.pop(rank)
        assert 0 <= index < count, "index range error"
        width = count*stride
        shape = [i for i in self.shape if i != rank]
        indices = dict((i, self.indices[i]) for i in shape)
        indexer = [strides[i] for i in shape]
        values = [self.values[i] for i in take_indices(indexer, index*stride)]
        if len(shape) == 0:
            return values[0]
        return Tensor(shape, values, indices)

The contraction mechanics automate folding, so fold does not need to be called very often.

Reindexing

In a traditional APL-style arrays implementation we would be transposing and stacking indices to adjust how they compose. In this tensors library we need a concept for reindexing indices instead, as physical reordering of the array would not do it.
    def l(self, new_names):
        strides = tensor_strides(self.shape, self.indices)
        contracts = set()
        indices = {}
        new_strides = {}
        shape = []
        for rank, (count1, stride1) in strides.iteritems():
            rank = new_names.get(rank, rank)
            if is_contracting(rank) and contracting_pair(rank) in indices:
                key = contracting_pair(rank)
                contracts.add(rank)
            if rank in new_strides:
                count2, stride2 = new_strides[rank]
                assert count1 == count2, "shape error"
                new_strides[rank] = count1, stride1+stride2
            else:
                new_strides[rank] = count1, stride1
                indices[rank] = count1
                shape.append(rank)
        indexer = []
        for rank in shape:
            indexer.append(new_strides[rank])
        values = [self.values[i] for i in take_indices(indexer, 0)]
        num = Tensor(shape, values, indices)
        for key in contracts:
            num = fold_dimension(contracting_fold(key), num, key)
        return num

The reindexing allows triggering a contraction, but it may also be used for taking a diagonal and for transposing the tensor. It compacts a lot of functionality into a short notation.

Shuffling

Reversing, rolling and slicing all use the shuffling mechanism. The shuffler does exactly what it says: it shuffles the values inside a tensor.
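To make the relabeling idea concrete with plain nested lists (my illustration, separate from the library): renaming indices permutes dimensions, and mapping two indices onto the same name walks the diagonal:

```python
m = [[0, 1, 2],
     [3, 4, 5],
     [6, 7, 8]]

# Relabel (i, j) -> (j, i): a transpose.
transpose = [[m[i][j] for i in range(3)] for j in range(3)]

# Relabel both dimensions to the same index: the diagonal.
diagonal = [m[i][i] for i in range(3)]
```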
    def reverse_dimension(self, rank):
        if rank not in self.indices:
            return self
        offset = 0
        strides = tensor_strides(self.shape, self.indices)
        indices = {}
        shuffles = []
        for index in self.shape:
            count, stride = strides[index]
            if index != rank:
                shuffle = (count, stride, stride-stride*count, count-1)
            else:
                offset += stride*(count-1)
                shuffle = (count, -stride, -stride+stride*count, count-1)
            indices[index] = count
            shuffles.append(shuffle)
        values = [self.values[i] for i in take_shuffle(shuffles, offset)]
        return Tensor(self.shape, values, indices)

    def roll_dimension(self, rank, amount):
        if rank not in self.indices:
            return self
        offset = 0
        strides = tensor_strides(self.shape, self.indices)
        indices = {}
        shuffles = []
        for index in self.shape:
            count, stride = strides[index]
            if index != rank:
                shuffle = (count, stride, stride-stride*count, count-1)
            else:
                amount %= count
                offset += stride*amount
                shuffle = (count, stride, stride-stride*count, count-1-amount)
            indices[index] = count
            shuffles.append(shuffle)
        values = [self.values[i] for i in take_shuffle(shuffles, offset)]
        return Tensor(self.shape, values, indices)

    def slice_dimension(self, rank, start, stop):
        if rank not in self.indices:
            return self
        offset = 0
        strides = tensor_strides(self.shape, self.indices)
        indices = {}
        shuffles = []
        for index in self.shape:
            count, stride = strides[index]
            if index != rank:
                shuffle = (count, stride, stride-stride*count, count-1)
            else:
                assert 0 <= start < stop <= count, "slice error"
                offset += stride*start
                count = stop-start
                shuffle = (count, stride, stride-stride*count, count-1)
            indices[index] = count
            shuffles.append(shuffle)
        values = [self.values[i] for i in take_shuffle(shuffles, offset)]
        return Tensor(self.shape, values, indices)

The name 'roll' refers to the old APL operation that rotates the values, as if on a fortune wheel. The 'slice' corresponds to the like-named Python operation that takes a slice from a list.

Tensor generation and construction

We may either generate tensors automatically, or feed in values to fill into a tensor.
    def generate_tensor(dimensions, function):
        shape = []
        counts = []
        indices = {}
        for key, count in dimensions:
            assert key not in indices, "shape error"
            indices[key] = count
            counts.append(count)
            shape.append(key)
        values = [function(state) for state in index_generator(counts)]
        return Tensor(shape, values, indices)

    def index_generator(counts):
        state = [0] * len(counts)
        while True:
            yield state
            for i in range(len(counts)):
                state[i] += 1
                if state[i] < counts[i]:
                    break
                else:
                    state[i] = 0
            else:
                break

The main tool for constructing tensors is supposed to be this function tensor.

    def tensor(index, objects):
        objects = [to_tensor(n) for n in objects]
        shape = []
        indices = {}
        for t in objects:
            for col, count in t.indices.iteritems():
                assert not is_contracting_pair(index, col), "cannot re-introduce a rank"
                if col not in indices:
                    shape.append(col)
                    indices[col] = count
                assert indices[col] == count or count == 1, "shape error"
                if is_contracting(col):
                    assert contracting_pair(col) not in indices, "polarity error"
                indices[col] = max(indices[col], count)
        values = [t.values[i]
            for t in objects
            for i in take_indices(tensor_reshape(
                t.shape, t.indices, shape, indices))]
        indices[index] = len(objects)
        shape.append(index)
        return Tensor(shape, values, indices)

It takes the index and the objects to be put into that index. To make this trivial, I require that you cannot reintroduce an index that is already present in the objects. I also require that the input objects are unable to trigger a contraction.

The tensor reshaping & indexing methods

A lot of the core functionality is packed into a few functions that are then reused across the program. These are the functions that index the underlying data structure. The tensor_reshape takes an old tensor and reshapes it into the shape of a new tensor.
    def tensor_reshape(shape, indices, newshape, newindices):
        vector = tensor_strides(shape, indices)
        new_vector = []
        for index in newshape:
            if is_contracting(index):
                coindex = contracting_pair(index)
                if coindex in vector:
                    count1, stride = vector.get(coindex, (1,0))
                else:
                    count1, stride = vector.get(index, (1,0))
            else:
                count1, stride = vector.get(index, (1,0))
            count2 = newindices[index]
            assert count1 == count2 or count1 == 1, (index, count1, count2)
            new_vector.append((count2, stride))
        return new_vector

The tensor_strides calculates a stride for each index and places them into a map for easy access. It sets stride=0 for everything that has only one element, because such dimensions are not really present in the array. This tweak makes it easy to spread a single element to fill a row or a column. It would also suggest that vectors are an odd form of a quotient: a real number divided into multiple pieces across an index.

    def tensor_strides(shape, indices, stride=1):
        vector = {}
        for index in shape:
            count = indices.get(index, 1)
            vector[index] = (count, (stride if count > 1 else 0))
            stride *= count
        return vector

The take_indices and take_shuffle are functions that I have been on the borderline of merging together, but they seem to have their own distinct places where they are useful, so I have kept them separate. They generate offsets and indices into the flat array that represents the tensor.
def take_indices(vector, offset=0):
    state = [0] * len(vector)
    while True:
        yield offset
        for i in range(len(vector)):
            offset += vector[i][1]
            state[i] += 1
            if state[i] < vector[i][0]:
                break
            else:
                offset -= state[i] * vector[i][1]
                state[i] = 0
        else:
            break

def take_shuffle(shuffles, offset=0):
    state = [0] * len(shuffles)
    while True:
        yield offset
        for i in range(len(shuffles)):
            count, step, skip, skipi = shuffles[i]
            offset += (step if state[i] != skipi else skip)
            state[i] += 1
            if state[i] < count:
                break
            else:
                state[i] = 0
        else:
            break

Contraction

Some of the operations above react to the contracting pairs with a fold. The contracting pair provides the folding operation and can be any object. I expect, though, that it is a natural number or a symbol representing an invariant quantity.

def is_contracting(index):
    return True

def is_contracting_pair(a, b):
    if isinstance(a, cnt) and a.index == b:
        return True
    if isinstance(b, cnt) and b.index == a:
        return True
    return False

def contracting_pair(index):
    if isinstance(index, cnt):
        return index.index
    else:
        return cnt(index)

Originally no symbol was a contracting pair, so there is a bit of that history left in the code. I did not clean it up, because eventually there comes a time when you have to stop and move on. It came when I verified that this is a workable concept.

The one last remaining thing was to provide a tool to represent contraction. I originally thought about negative values for this purpose, but I found that to be problematic because zero is a valid rank. We are taught to treat zero as a number like every other, with the only difference that it is a neutral element.
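As an aside, take_indices can be exercised directly with hand-built (count, stride) vectors: reordering the entries walks the same flat array in transposed order, and a zero stride repeats elements, which is how broadcasting falls out of this representation:

```python
def take_indices(vector, offset=0):
    # vector is a list of (count, stride) pairs; yields flat-array offsets.
    state = [0] * len(vector)
    while True:
        yield offset
        for i in range(len(vector)):
            offset += vector[i][1]
            state[i] += 1
            if state[i] < vector[i][0]:
                break
            else:
                offset -= state[i] * vector[i][1]
                state[i] = 0
        else:
            break

print(list(take_indices([(2, 1), (3, 2)])))  # [0, 1, 2, 3, 4, 5]
print(list(take_indices([(3, 2), (2, 1)])))  # transposed: [0, 2, 4, 1, 3, 5]
print(list(take_indices([(2, 0), (3, 2)])))  # broadcast: [0, 0, 2, 2, 4, 4]
```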
class cnt(object):
    def __init__(self, index):
        self.index = index

    def __eq__(self, other):
        if isinstance(other, cnt):
            return self.index == other.index
        return False

    def __ne__(self, other):
        return not (self == other)

    def __hash__(self):
        return hash(self.index)

    def __repr__(self):
        return "cnt({})".format(self.index)

With the contraction rules I default to addition, and I do the contraction eagerly. This is different from the Egison paper. I decided to associate the operation with the invariant quantity used for contraction.

class csym(object):
    def __init__(self, op):
        self.op = op

    def __repr__(self):
        return "<csym({}) {}>".format(self.op, id(self))

The purpose of this post

This is the first time I have tried a concept called 'semantic linefeeds' when writing a blog post. I suppose it helped me to explain my line of thought better and made the text easier to write. It also gave it a different tone. Thumbs up, a good idea, learned from Brandon Rhodes.

You may also like to look at arrays.py. That was the prototype leading to this beauty here.

But why did I write this post? I spent three days getting this through, and if I always spent that much on blogging I would be blogging half of my time. I think that this is something unusual that I do not see every day. I would like to know what smart people think about this approach. A lot of the time I get good feedback at irc.freenode.net#proglangdesign and I am grateful for that. But I guess it would be interesting to hear what a mathematician thinks about the concepts and claims about tensors presented here.
http://boxbase.org/entries/2018/jul/23/multidimensional_arrays/
snf-ganeti: commits at cd42d0ade48164a80954403fb47e358932724a4d

Implement the new live migration backend functions
2009-01-21T10:03:01+00:00 Guido Trotter ultrotter@google.com
MigrationInfo, AcceptInstance and AbortMigration are implemented as hypervisor specific functions, and by default they do nothing (as they're not always necessary). This patch also converts hv_base.MigrateInstance docstring to epydoc, adds a missing @type to the GetInstanceInfo docstring, and removes an unneeded empty line.
Reviewed-by: iustinp

KVM: save and remove the KVM runtime
2009-01-21T09:55:22+00:00 Guido Trotter ultrotter@google.com
At instance startup time we save the kvm runtime, and at stop time we delete it. This patch also includes a function to load the kvm runtime, which is unused yet.
Reviewed-by: iustinp

KVM: split KVM runtime generation and startup
2009-01-21T09:55:08+00:00 Guido Trotter ultrotter@google.com

Add calls in the intra-node migration protocol
2009-01-21T09:54:11+00:00 Guido Trotter ultrotter@google.com

Update the objects.Disk formatting method
2009-01-21T08:33:01+00:00 Iustin Pop iustin@google.com
With the addition of minors, this needs to show them too.
Reviewed-by: ultrotter

KVM: add a _CONF_DIR
2009-01-20T18:12:21+00:00 Guido Trotter ultrotter@google.com

KVM: Remove sockets after shutdown
2009-01-20T18:12:01+00:00 Guido Trotter ultrotter@google.com
Abstract the monitor and serial socket naming in two functions, and reuse them to cleanup the files after shutdown.
Reviewed-by: iustinp

KVM: fix class docstring
2009-01-20T18:11:48+00:00 Guido Trotter ultrotter@google.com
Reviewed-by: iustinp

Xen: use epydoc in MigrateInstance docstring
2009-01-20T18:11:35+00:00 Guido Trotter ultrotter@google.com
Reviewed-by: iustinp

ShutdownInstance: report hypervisor error
2009-01-20T17:50:42+00:00 Guido Trotter ultrotter@google.com
When StopInstance raises an HypervisorError, report it in the logged message to ease with debugging.
Reviewed-by: iustinp

ConfigObject docstring, close an open parenthesis
2009-01-20T17:50:27+00:00 Guido Trotter ultrotter@google.com
Reviewed-by: iustinp

Fix a typo in luxi's docstring
2009-01-20T17:50:08+00:00 Guido Trotter ultrotter@google.com
Reviewed-by: iustinp

Update the logging output of job processing
2009-01-20T17:19:58+00:00 Iustin Pop iustin@google.com

.gitignore: Don't exclude whole /autotools/ dir, but only files
2009-01-20T16:47:25+00:00 Michael Hanselmann hansmi@google.com
This way newly added files will be not be excluded by default. Fixes also a small whitespace error in utils.py.
Reviewed-by: iustinp

Convert RenameInstance to (status, data)
2009-01-20T16:26:57+00:00 Iustin Pop iustin@google.com
This allows the rename failures to show the ouput of OS scripts.
Reviewed-by: ultrotter

Update gitignore rules
2009-01-20T16:26:46+00:00 Iustin Pop iustin@google.com
As per Michael's comment, gitignore should not ignore a couple of real files from the autotools/ directory.
Reviewed-by: ultrotter

Fix adding of disks to an instance
2009-01-20T14:20:35+00:00 Iustin Pop iustin@google

Fix burnin problems when using http checks
2009-01-20T14:20:24+00:00 Iustin Pop iustin@google

Make cluster-verify check the drbd minors space
2009-01-20T14:20:15+00:00 Iustin Pop iustin@google

Fix a couple of epydoc warnings
2009-01-20T14:20:03+00:00 Iustin Pop iustin@google.com
Reviewed-by: ultrotter

DRBD: check for in-use minor during Create
2009-01-20T11:18:31+00:00 Iustin Pop iustin@google.com

Add a TailFile function
2009-01-20T11:18:20+00:00 Iustin Pop iustin@google.com
This patch adds a tail file function, to be used for parsing and returning in the job log OS installation failures.
Reviewed-by: ultrotter

Unify some unittest functions
2009-01-20T11:18:10+00:00 Iustin Pop iustin@google

Some small fixes in cmdlib
2009-01-20T10:12:00+00:00 Iustin Pop iustin@google.com
Reviewed-by: ultrotter

Convert AddOSToInstance to (status, data)
2009-01-20T10:11:48+00:00 Iustin Pop iustin@google.com
This allows the install and reinstall instance to return (hopefully) relevant log files from the OS create scripts.
Reviewed-by: ultrotter

Convert the start instance rpc to (status, data)
2009-01-20T10:11:36+00:00 Iustin Pop iustin@google.com
This will record the failure cause in starting up the instance in the job log (and thus to the user).
Reviewed-by: ultrotter

Fix handling of failures in create instance disks
2009-01-19T17:22:32+00:00 Iustin Pop iustin@google.com

Move the default MAC prefix to the constants file
2009-01-19T14:35:03+00:00 Iustin Pop iustin@google.com
Instead of having the default live in the gnt-cluster script, we move it to the constants file. The patch also fixes a typo on constants.py.
Reviewed-by: ultrotter

Use instance.all_nodes instead of hand-building it
2009-01-19T14:33:13+00:00 Iustin Pop iustin@google.com
This patch replaces a few obvious uses of [instance.primary_node] + list(instance.secondary_nodes) (or similar usage) with the new instance.all_nodes.
Reviewed-by: ultrotter

Fix non-drbd instance creation
2009-01-19T14:32:21+00:00 Iustin Pop iustin@google

Small simplification in MapLVsByNode
2009-01-19T11:10:52+00:00 Iustin Pop iustin@google.com
We don't need to pre-create the node entries in lvmap, since they will be created at recursion time.
Reviewed-by: ultrotter

Split the block device creation in two parts
2009-01-19T11:10:42+00:00 Iustin Pop iustin@google

Combine the two _CreateBlockDevOnXXX functions
2009-01-19T11:10:29+00:00 Iustin Pop iustin@google.com
Since only two boolean parameters differ between these two functions, we combine them as to have less code duplication. This will be needed in the future as we will need to split off the recursive part off.
Reviewed-by: ultrotter

Switch call_blockdev_create call to (status, data)
2009-01-19T11:10:19+00:00 Iustin Pop iustin@google.com
This allows errors to be visible at the user level instead of just node daemon logs.
Reviewed-by: ultrotter

Small change in the instance disk creation path
2009-01-19T11:10:10+00:00 Iustin Pop iustin@google

Block device creation cleanup
2009-01-19T11:10:01+00:00 Iustin Pop iustin@google

Use the same root for both _data and _meta LVs
2009-01-19T10:43:11+00:00 Iustin Pop iustin@google

Fix LUExportInstance
2009-01-16T16:24:26+00:00 Iustin Pop iustin@google.com

burnin: only call self.GrowDisks() if needed
2009-01-16T13:09:43+00:00 Iustin Pop iustin@google.com
In case we pass --disk-grow 0[,0..] then we should not call GrowDisks as it prints confusing log lines.
Reviewed-by: imsnah

Instance: add a new all_nodes property
2009-01-16T11:02:57+00:00 Iustin Pop iustin@google.com
Since we often need the list of all nodes of an instance, we add a new "all_nodes" property that returns all nodes of the instance, and we switch secondary_nodes to a simpler implementation based on this new function.
Reviewed-by: ultrotter
https://git.minedu.gov.gr/itminedu/snf-ganeti/-/commits/cd42d0ade48164a80954403fb47e358932724a4d?format=atom
Hello, I have the following issue: when generating a table using the Case Summaries procedure, I try to hide some rows and then make them reappear using ShowAllLabelsAndDataInDimensionAt, but the rows do not reappear. So the problem is that I can hide rows but cannot show them again. I think the ShowAllLabelsAndDataInDimensionAt method does not work. After the script ends it writes "Done", and the table has no rows.

Here is the script:

import SpssClient

SpssClient.StartClient()

# Read output and content
objOutputDoc = SpssClient.GetDesignatedOutputDoc()
objOutputItems = objOutputDoc.GetOutputItems()

# Find input tables
for index in xrange(objOutputItems.Size()):
    objOutputItem = objOutputItems.GetItemAt(index)
    if objOutputItem.GetType() == SpssClient.OutputItemType.PIVOT \
            and objOutputItem.GetSubType() == "Report" \
            and u"Case Summaries" in objOutputItem.GetDescription():
        objPivotTable = objOutputItem.GetSpecificType()
        objRowLabels = objPivotTable.RowLabelArray()
        for i in xrange(objRowLabels.GetNumRows()):
            objRowLabels.HideLabelsWithDataAt(i, 2)
            objRowLabels.ShowAllLabelsAndDataInDimensionAt(i, 2)

print("Done")
SpssClient.StopClient()

Should I keep the second line of the for loop in the same loop? Is it a script problem or an issue with the ShowAllLabelsAndDataInDimensionAt method? I also tried to apply the ShowAll method to an SpssPivotTable, but that doesn't seem to work either. Could you kindly have a look and give me some suggestions?

Answer by AlexandraSachelarescu | Mar 25, 2015: I forgot to mention that I run SPSS Statistics 23 64-bit.

Answer by JonPeck | Mar 25, 2015: Please post an spv file that shows the problem or email it to me directly (peck AT us.ibm.com).
https://developer.ibm.com/answers/questions/226359/$%7B$value.user.profileUrl%7D/
Created on 2021-03-02 11:41 by mattip, last changed 2021-03-02 14:54 by skoslowski.

If I have a module "parent", and I add another module "child" with a method "f" to it:

child = PyModule_Create(...);
PyModule_AddObject(parent, "child", child);

then I can call

import parent
parent.child.f()

but importing like this

from parent.child import f

raises a ModuleNotFoundError: ... 'parent' is not a package

This came up in PyTorch and in pybind11, and in various other places like stackoverflow. A complete example is attached. If this is intentional, it might be nice to emit a warning when calling PyModule_AddObject with a module.

>>> import parent.child

first imports "parent" (successfully) but then fails, because the import code has no knowledge of where to find ".child". This is because a) the module "parent" is not marked as a package (hence the error message), and b) even if it were a package, there is no (ext) module file to locate and load.

If you instead run

>>> from parent import child

only "parent" is imported, and "child" is retrieved as an attribute - which it actually is. The import code itself will add such an attribute, too [1]. However, that happens after the submodule was located and loaded. Attribute lookup on the parent is not part of the submodule import itself.

A (hacky) work-around would be to add an entry for "parent.child" in sys.modules. This prevents the import machinery from running. A (more) proper solution would be to mark "parent" as a package and install some importlib hooks. See [2] for an example from Cython-land. Finally there is PyImport_AppendInittab() [3], which could possibly be used to register "parent.child". I have never tried that. Even if this worked, it would be unsupported and probably not without side-effects.

[1]
[2]
[3]
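The behavior described here can be reproduced in pure Python, with types.ModuleType standing in for the extension modules; the sketch below also demonstrates the sys.modules work-around (the module names are made up for illustration):

```python
import sys
import types

# Stand-ins for the C extension modules from the report.
parent = types.ModuleType("parent")
child = types.ModuleType("parent.child")
child.f = lambda: "hello"
parent.child = child                # roughly what PyModule_AddObject does
sys.modules["parent"] = parent

import parent                       # fine: resolved from sys.modules
assert parent.child.f() == "hello"  # fine: plain attribute access

failed_as_expected = False
try:
    from parent.child import f      # import machinery wants a real submodule
except ModuleNotFoundError as e:
    failed_as_expected = "'parent' is not a package" in str(e)
assert failed_as_expected

# The (hacky) work-around: register the submodule under its dotted name.
sys.modules["parent.child"] = child
from parent.child import f
assert f() == "hello"
```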
https://bugs.python.org/issue43367
Batch multiple HTTP requests together, to issue them in parallel over multiple TCP connections.

requests (array | object): An array or object containing requests, in string or object form.

import http from "k6/http";
import { check } from "k6";

export default function() {
  let responses = http.batch([
    "",
    "",
    "",
  ]);
  check(responses[0], {
    "main page status was 200": res => res.status === 200,
  });
};

Batching three URLs for parallel fetching

Request objects

You can also use objects to hold information about a request. Here is an example where we do that in order to send a POST request, plus use custom HTTP headers by adding a Params object to the request:

import http from "k6/http";
import { check } from "k6";

export default function() {
  let req1 = {
    method: "GET",
    url: "",
  };
  let req2 = {
    method: "GET",
    url: "",
  };
  let req3 = {
    method: "POST",
    url: "",
    body: {
      hello: "world!",
    },
    params: {
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
    },
  };
  let responses = http.batch([req1, req2, req3]);
  // httpbin.org should return our POST data in the response body, so
  // we check the third response object to see that the POST worked.
  check(responses[2], {
    "form data OK": (res) => JSON.parse(res.body)["form"]["hello"] == "world!",
  });
};

Note that the requests in the example above may happen in any order, or simultaneously. There is no guarantee that e.g. req1 will happen before req2 or req3.

Named requests

Finally, you can also send in named requests by using an object instead of an array as the parameter to http.batch().
In the following example we do this, and we also show that it is possible to mix string URLs and request objects:

import http from "k6/http";
import { check } from "k6";

export default function() {
  let requests = {
    "front page": "",
    "features page": {
      method: "GET",
      url: "",
      params: { headers: { "User-Agent": "k6" } },
    },
  };

  let responses = http.batch(requests);

  // when accessing results, we use the name of the request as index
  // in order to find the corresponding Response object
  check(responses["front page"], {
    "front page status was 200": res => res.status === 200,
  });
};
https://docs.k6.io/docs/batch-requests
Error

No data produced from map xxxxx please check source profile and make sure it matches source data.

Cause

This is a common error that typically occurs when either:

- The incoming document does not match the source profile referenced in the map
- None of the mapped fields actually contain a value, resulting in no output document created

Solution

Source Data Mismatch

- If you are retrieving data from an application connector with a response profile defined in the Operation component (e.g. Salesforce, Database, NetSuite, etc.), make sure the same profile is referenced in the Map. For example:
- Tips for Flat File data:
  - For delimited files, make sure the number of columns in the file matches the number of elements defined on the profile. Some common issues found in delimited flat files, such as a comma-delimited CSV file, that result in a column mismatch include:
    - Make sure there are no unescaped commas in the actual data values that result in additional columns. If there can be commas present within data values, those columns must be properly escaped, typically with double quotes.
    - Similarly, if there can be line breaks present within data values, those columns must be properly escaped.
  - For data-positioned files, double-check the start column and length for mapped elements. Remember the first position on the line is 0, not 1.
  - Make sure the actual data and profile configuration align with respect to Column Headers. If the profile expects column headers, it will ignore the first line of data. This can happen when splitting flat file data with column headers by line and not selecting the option to "Keep Headers".
- Tips for EDI data:
  - Make sure the delimiter and/or segment terminator in the data matches the options defined on the profile.
- Tips for XML data:
  - Make sure the root element matches that defined in the profile.
  - If namespaces are used, make sure those are configured appropriately within the profile.
- Tips for JSON data:
  - Make sure the root element matches that defined in the profile. One common scenario is when a query operation may return one or more records and the actual data may begin either with a single object or with an array of objects. These need to be handled by separate profiles.
- Tips for Database data:
  - Make sure it is the exact profile referenced in the Database query operation. The document data actually contains a reference to the profile's component ID.
- Additional troubleshooting tips:
  - Pay extra attention if you are manipulating the documents between the connector operation and the map. This includes using Data Process shapes (e.g. split, combine, custom scripting) or Message shapes. Be sure to carefully inspect the source data passed into the map shape to make sure it matches the source profile.

No Values Mapped

Carefully compare the mapping rules to the source data to make sure at least one source element is populated, or check whether there are any Default Values configured on destination profile elements.
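As a quick way to spot the delimited-file column mismatches described above, here is a small standalone Python sketch (this is not Boomi functionality; the expected column count is something you would read off your flat file profile):

```python
import csv
import io

def find_bad_rows(text, expected_columns, has_header=True):
    """Return (line_number, column_count) for rows that don't match the profile."""
    rows = list(csv.reader(io.StringIO(text)))
    start = 2 if has_header else 1
    if has_header:
        rows = rows[1:]
    return [(i, len(row)) for i, row in enumerate(rows, start=start)
            if len(row) != expected_columns]

# An unescaped comma adds a column; a properly quoted comma does not.
data = 'id,name,city\n1,Acme, Inc,Boston\n2,"Beta, LLC",Chicago\n'
print(find_bad_rows(data, expected_columns=3))  # [(2, 4)]
```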
https://community.boomi.com/docs/DOC-1347
The stylesheets get chained together in a pipeline, one after the other. The only issue with one stylesheet using another is order in the pipeline. Your XSP must be transformed by any logicsheet that uses another logicsheet before it is transformed by that other logicsheet. xsp.xsl is the least dependent, most depended-upon logicsheet; esql uses it. Here's the part I'm not sure about: whichever order they are declared in cocoon.xconf is the order you should declare your logicsheets in: yourlogicsheet > esql > xsp or xsp < esql < yourlogicsheet. There was once a time when processing order was determined by the order the namespaces appear on the root element. I don't think that is any longer the case.

Tim

On Thu, Jul 17, 2003 at 04:23:38PM +0200, Olivier Billard wrote:
> Hi
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200307.mbox/%3C20030717103229.A3259@pirouline.sts.jhu.edu%3E
I thought I'd write a few posts about the controls I've created for Brad's Mix06 demo, which you can download from his blog:

Full source code of the demo:
Control source code:

One of the most reusable pieces of the demo is the accordion control. Plus, I'm French, so it looked like a natural way to start this series of posts.

(This is a snapshot of the control, an image, not a live sample)

The control is a client-side Atlas control. It does pretty much what you expect, which is to show only one of its panes at a time. It has a nice transition effect when switching to a new pane. Here's how you use it...

As usual with Atlas client-side controls, the control itself does not do any rendering. This part is left to the page developer and is completely free-form, which gives you total control over the rendering and helps separate layout and semantics from behavior. So the first thing to do is to create a container element (a div will do) and put the different headers and content panes inside of that. For accessibility reasons, I recommend you use A tags in the headers. Divs are fine for the panes themselves.

<div id="accordion1">
    <a href="BLOCKED SCRIPT;">Pane 1</a>
    <div id="Pane 1">
        This is Pane 1 in the accordion control.
    </div>
    <a href="BLOCKED SCRIPT;">Pane 2</a>
    <div id="Pane 2">
        This is Pane 2 in the accordion control.
    </div>
    <a href="BLOCKED SCRIPT;">Pane 3</a>
    <div id="Pane 3">
        This is Pane 3 in the accordion control.
    </div>
</div>

The accordion control will look at the contents of its associated element and will consider every other sub-element as a header or a content pane. So let's associate an accordion control to the div above:

<dice:accordion

And... that's it, we're done. One nice thing to note is that if JavaScript is off on the user's browser, the contents of the accordion will just show with all panes fully deployed. In a future post, I'll explain how such a control is built and in particular how I included the animation effect.
UPDATE: Here's the scaffolding you need in your page for the accordion control (or any Atlas control) to work... You need a script manager on the page, which will manage the necessary script inclusions. Here, the accordion needs the Glitz library for the animations, so we're adding that:

<atlas:ScriptManager
    <Scripts>
        <atlas:ScriptReference
        <atlas:ScriptReference
    </Scripts>
</atlas:ScriptManager>

Replace Dice.js with accordion.js if you're using the accordion-only version of the script file (dice.js has the other controls used in Brad's demo). If you're not using ASP.NET, you'll have to manually add <script> tags to your page that point to Atlas.js and the Glitz script file.

And of course the Atlas markup itself must be in a <script type="text/xml-script"> section:

<script type="text/xml-script">
    <page xmlns:
        <components>
            <dice:accordion
        </components>
    </page>
</script>

Note the namespace declaration here so that we can use the dice:accordion tag. I should have included that from the start, but I wanted to show only what's relevant to the sample. My mistake, it's corrected now.

UPDATE 2: The Atlas Control Toolkit now has a far more advanced Accordion control, which should be preferred over this one for real-world use.

Billy: yes, you can set the viewIndex from the xml-script tag. This way, it will change without playing the animation. If you're not using xml-script, just set the index before you call initialize. Maybe what you want to do is make sure only the pane that you want is shown, even before the scripts are initialized and the control gets a chance to hide the other panes? If that's the case, you'll have to do that in the HTML and CSS, by setting the overflow to hidden and the height to 1px on the other panes.

John: I have no idea what dll you're talking about. If you're using the Toolkit's Accordion, you should ask this question on the toolkit forums:

No, this one was written way before.
I'll update the post to make that clearer and to point people to the toolkit one (which should be used instead of this sample).

Nick: of course. Just set the current pane before initialization (which is easy if you're using xml-script; just write the attribute). When the page loads, all panes will be visible until initialization (this is desirable because if people visit your page with JavaScript disabled, they will see all panes and should still be able to use your application), but as soon as initialization happens, the panes other than the current one will collapse without animating.
http://weblogs.asp.net/bleroy/archive/2006/03/28/441343.aspx
- NAME
- SYNOPSIS
- DESCRIPTION
- DataSet::Collection
- RESPONSIBILITIES
- TODO
- AUTHOR
- ADDITIONAL MODULES
- LICENSE AND COPYRIGHT

NAME

PDL::Graphics::Prima::DataSet - the way we think about data

SYNOPSIS

    -distribution => ds::Dist(
        $data,
        plotType => ppair::Lines,
        binning  => bt::Linear,
    ),

    -lines => ds::Pair(
        $x, $y,
        plotTypes => [ppair::Lines, ppair::Diamonds]
    ),

    -contour => ds::Grid(
        $matrix,
        # Specify your bounds in one of these three ways
        bounds => [$left, $bottom, $right, $top],
        y_edges => $ys, x_edges => $xs,
        x_bounds => [$left, $right],
        y_bounds => [$bottom, $top],
        # Unnecessary if you want the default palette
        plotType => pgrid::Matrix(palette => $palette),
    ),

    -image => ds::Image(
        $image,
        format => 'string',
        ... ds::Grid bounder options ...
        # Unnecessary at the moment
        plotType => pimage::Basic,
    ),

    -function => ds::Func(
        $func_ref,
        xmin => $left, xmax => $right,
        N_points => 200,
    ),

DESCRIPTION

PDL::Graphics::Prima fundamentally conceives of two different kinds of data representations. There are pairwise representations, such as the line plot used to visualize a time series, and there are gridded representations, such as raster images used to visualize heat maps (or images). Any data that you want to represent must have some way of conceiving of itself as either pairwise or gridded.

Of course, there are plenty of things we want to visualize that are not pairwise data or grids. For example, what if we want to plot the distribution of scores on an exam? In this case, we would probably use a histogram. When you think about it, a histogram is just a pairwise visual representation. In other words, to visualize a distribution, we have to first map the distribution into a pairwise representation, and then choose an appropriate way to visualize that representation, in this case a histogram.

So, we have two fundamental ways to represent data, but many possible data sets. For pairwise representations, we have ds::Pair, the basic pairwise DataSet.
ds::Dist is a derived DataSet which includes a binning specification that bins the distribution into bin centers (x) and heights (y) to get a pairwise representation. ds::Func is another derived DataSet that generates evenly sampled data based on the axis bounds and evaluates the supplied function at those points to get a pairwise representation. ds::Image provides a simple means for visualizing images, and ds::Grid provides a means for mapping a gridded collection of data into an image, using palettes.

Base Class

The DataSet base class provides a few methods that work for all datasets. These include accessing the associated widget and drawing the data.

- widget

  The widget associated with the dataset.

- draw

  Calls all of the drawing functions for each plotType of the dataSet. This also applies all the global drawing options (like color, for example) that were supplied to the dataset.

- compute_collated_min_max_for

  This function is part of the collated min/max chain of function calls that leads to reasonable autoscaling. The Plot widget asks each dataSet to determine its collated min and max by calling this function, and this function simply aggregates the collated min and max results from each of its plotTypes. In general, you needn't worry about collated min and max calculations unless you are trying to grapple with the autoscaling system or are creating a new plotType.

  working here - link to more complete documentation of the collation and autoscaling systems.

- new

  This is the universal constructor that is called by the short-name constructors introduced below. This handles the uniform packaging of plotTypes (for example, allowing the user to say plotType => ppair::Diamonds instead of the more verbose plotTypes => [ppair::Diamonds]). In general, you (the user) will not need to invoke this constructor directly.

- check_plot_types

  Checks that the plotType(s) passed to the constructor or added at runtime are built on the data type that we expect.
  Derived classes must specify their plotType_base_class key before calling this function.

- init

  Called by new to initialize the dataset. This function is called on the new dataset just before it is returned from new. If you create a new dataSet, you should provide an init function that performs the following:

  - supply a default plotType

    If the user supplied something to either the plotType or plotTypes keys, then new will make sure you already have that something in an array reference in $self->{plotTypes}. However, if they did not supply either key, you should supply a default. You should have something that looks like this:

        $self->{plotTypes} = [pset::CDF] unless exists $self->{plotTypes};

  - check the plot types

    After supplying a default plot type, you should check that the provided plot types are derived from the acceptable base plot type class. You would do this with code like this:

        $self->check_plot_types(@{$self->{plotTypes}});

    This is your last step to validate or pre-calculate anything. For example, you must provide functions to return your data, and you should probably make guarantees about the kinds of data that such accessors return, such as the data always being a piddle. If that is the case, then it might not be a bad idea to say in your init function something like this:

        $self->{data} = PDL::Core::topdl($self->{data});

- change_data

  Sets the data to the given data by calling the derived class's _change_data method. Unlike _change_data, this method also issues a ChangeData notification to the widget. This means that you should only use this method once the dataset has been associated with a widget. Each class expects different arguments, so you should look at the class's documentation for details on what to send to the method.

Pair

Pairwise datasets are collections of paired x/y data.
A typical Pair dataset is the sort of thing you would visualize with an x/y plot: a time series such as the series of high temperatures for each day in a month, or the x- and y-coordinates of a bug walking across your desk. PDL::Graphics::Prima provides many ways of visualizing Pair datasets, as discussed under "Pair" in PDL::Graphics::Prima::PlotType.

The dimensions of pluralized properties (i.e. colors) should thread-match the dimensions of the data. An important exception to this is ppair::Lines, in which case you must specify how you want properties to thread. The default plot type is ppair::Diamonds.

- ds::Pair - short-name constructor

      ds::Pair($x_data, $y_data, option => value, ...)

  The short-name constructor to create pairwise datasets. The x- and y-data can be either piddles or array references (which will be converted to a piddle during initialization).

- expected_plot_class

  Pair datasets expect plot type objects that are derived from PDL::Graphics::Prima::PlotType::Pair.

- get_xs, get_ys, get_data

  Returns piddles with the x, y, or x-y data. The last function returns two piddles in a list.

- get_data_as_pixels

  Uses the reals_to_pixels functions for the x- and y-axes to convert the values of the x- and y-data to actual pixel positions in the widget.

- change_data

  Changes the data to the piddles passed in. For example,

      $scatter_plot->dataSets->{'data'}->change_data($xs, $ys);

Distribution

Distributions are unordered collections of sample data. The typical use case of a distribution is that you have a population of things and you want to analyze their aggregate properties. For example, you might be interested in the distribution of tree heights at your Christmas Tree Farm, or the distribution of your students' (or your classmates') test scores from the mid-term. Common ways of visualizing distributions are to plot their cumulative distribution functions or their histograms, but those are actually classic pairwise data visualization approaches.
That means that what we really need are means for converting unordered sets of data into pairwise data. Distributions, therefore, let you specify the means by which your unordered data should be transformed into pairwise data, and the pairwise plot types to visualize the resulting transformed data. In an object-oriented sense, the Distribution class is derived from the Pairwise class because a distribution is visualized using pairwise plot types.

Note that the shape of pluralized properties (i.e. colors) should thread-match the shape of the data excluding the data's first dimension. That is, if I want to plot the cumulative distributions for three different batches using three different line colors, my data would have shape (N, 3) and my colors piddle would have shape (3).

PDL::Graphics::Prima's notion of distributions is not yet finalized and is open to suggestion. If you find yourself using distribution plots regularly, you should give me feedback on what works and what doesn't. Thanks!

- ds::Dist - short-name constructor

    ds::Dist($data, option => value, ...)

The short-name constructor to create distributions. The data can be either a piddle of values or an array reference of values (which will be converted to a piddle during initialization).

In addition to the standard keys, there is also the binning key. The binning key expects either a standard binning approach using one of the pre-defined forms, or a subroutine reference that performs the binning in a customized fashion. The binning types are all functions that expect key/value pairs that include min and max for the lower and upper threshold of the binning, drop_extremes to indicate if the data outside the min/max range should be included in the first and last bins, and normalize to indicate if the binning should be normalized to 1, for some appropriate definition of normalization. Other keys may also be allowed.
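As a concrete sketch of this constructor (assuming the plot function from PDL::Graphics::Prima::Simple and the bt::Linear binning type described below; the sample data here is made up):

```perl
use PDL;
use PDL::Graphics::Prima::Simple;

# Made-up sample data: 200 roughly normally distributed test scores
my $scores = grandom(200) * 10 + 75;

# Bin the scores linearly between 40 and 100 and plot the histogram
plot(
    -scores => ds::Dist(
        $scores,
        binning => bt::Linear(min => 40, max => 100, normalize => 0),
    ),
);
```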
If you want to write a customized binning function, it should accept two arguments: the data to bin and the distribution object. It should return a pair of piddles representing the x and y coordinates to plot. In addition, if the binning routine knows how to calculate properties for specific plot types, it can specify the plot type and any properties that it would provide for that plot type. For example, if you write a binning routine that knows how to calculate the bin boundaries for the Histogram plot type, your return statement could look like this:

    return ($x, $y, Histogram => { binEdges => $bounds } );

If your binning routine uses the number of points in a Symbols plot type to represent something, it could specify those:

    return ($x, $y, Symbols => { N_points => $n_points } );

If you have a means for calculating the error on your bins, you could include the error bar data:

    return ($x, $y, ErrorBars => { y_err => $count_err } );

These properties will be applied to the relevant plot types just before drawing and autoscaling operations, and any dataset operation that makes use of your supplied function should examine the additional parameters and act accordingly.

The standard binning types include:

- bt::CDF

Generates a cumulative distribution from the data. The default min is the data's minimum, the default max is the data's maximum, the binning will not drop_extremes by default (i.e. drop_extremes => 0), and the binning normalizes the data (i.e. normalize => 1). You can also specify if you want an increasing or decreasing representation by specifying a boolean value for the increasing key (the default is increasing, i.e. true). In the context of the CDF, normalization refers to the curve running from y = 0 to y = N - 1 (not normalized) or from y = 0 to y = 1 (normalized). Bear in mind that this interacts with your choice to drop the extremes or not. In producing the CDF, bad values are simply skipped.
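Using the keys just described, a decreasing, unnormalized CDF might be requested like this (a sketch; $heights is illustrative data):

```perl
# Sketch: a cumulative distribution running from y = 0 to y = N - 1
# (normalize => 0) and decreasing from left to right (increasing => 0)
my $ds = ds::Dist(
    $heights,
    binning => bt::CDF(
        normalize  => 0,
        increasing => 0,
    ),
);
```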
- bt::Linear

Generates a histogram from the data with linear spacing. The default min is the data's minimum and the default max is the data's maximum. In this case, normalization means that the "integral" of the histogram is 1, which means that the sum of the heights times the widths is 1.

- bt::Log

Generates a histogram from the data with logarithmic spacing. The default min is the data's smallest positive value and the default max is the data's maximum value. If none of the data is positive, the binning type croaks. As with linear binning, normalization means that the "integral" of the histogram is 1, which means that the sum of the heights times the widths is 1.

- bt::StrictLog

Identical to bt::Log, except that it croaks if it encounters any negative values. You can use this in place of bt::Log to sanity check your data.

- bt::NormFit

"Fits" the distribution between the specified min and max (defaults to the data's min and max) to a normal distribution. This bin type does not pay attention to the drop_extremes key, but it cares about the normalize key. If unspecified (the default), the curve will be scaled so that the area underneath it is the number of data points being fit. If normalized, the curve will be scaled so that the area under the curve will be 1. You can also specify the number of points to use in generating the curves by including the N_points key/value pair. I am pondering allowing the curve's min/max to take the current axis bounds min/max if the axes are not autoscaling. Thoughts appreciated.

- get_data, get_xs, get_ys

Returns the binned data, just the x-values, or just the y-values. For all of these, the binning function is applied to the current dataset. However, for the x- or y-getters, the other piece of data is discarded.

Grid

Grids are collections of data that is regularly ordered in two dimensions. Put differently, it is a structure in which the data is described by two indices.
The analogous mathematical structure is a matrix and the analogous visual is an image. PDL::Graphics::Prima provides a few ways to visualize grids, as discussed under "Grids" in PDL::Graphics::Prima::PlotType. The default plot type is pgrid::Color.

This is the least well thought-out dataSet. As such, it may change in the future. All such changes will, hopefully, be backwards compatible.

At the moment, there is only one way to visualize grid data: pseq::Matrix. Although I can conceive of a contour plot, it has yet to be implemented. As such, it is hard to specify the dimension requirements for dataset-wide properties. There are a few dataset-wide properties discussed in the constructor, however, so see them for some examples.

- ds::Grid - short-name constructor

    ds::Grid($matrix, option => value, ...)

The short-name constructor to create grids. The data should be a piddle of values or something which topdl can convert to a piddle (such as an array reference of array references). The current cross-plot-type options include the bounds settings. You can either specify a bounds key or one key from each column:

    x_bounds     y_bounds
    x_centers    y_centers
    x_edges      y_edges

- bounds

The value associated with the bounds key is a four-element anonymous array:

    bounds => [$left, $bottom, $right, $top]

The values can either be scalars or piddles that indicate the corners of the grid plotting area. If the latter, it is possible to thread over the bounds by having the shape of (say) $left thread-match the shape of your grid's data, excluding the first two dimensions. That is, if your $matrix has a shape of (20, 30, 4, 5), the piddle for $left can have shapes of (1), (4), (4, 1), (1, 5), or (4, 5).

At the moment, if you specify bounds, linear spacing from the min to the max is used. In the future, a new key may be introduced to allow you to specify the spacing as something besides linear.
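Putting the bounds key to use, here is a minimal sketch (the data is made up, and the plot function from PDL::Graphics::Prima::Simple is assumed):

```perl
use PDL;
use PDL::Graphics::Prima::Simple;

# Made-up 20x20 grid of values
my $matrix = rvals(20, 20);

plot(
    -surface => ds::Grid(
        $matrix,
        # scalars for the left, bottom, right, and top of the plotting area
        bounds => [0, 0, 1, 1],
    ),
);
```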
- x_bounds, y_bounds

The values associated with x_bounds and y_bounds are anonymous arrays with two elements containing the same sorts of data as the bounds array.

- x_centers, y_centers

The value associated with x_centers (or y_centers) should be a piddle with increasing values of x (or y) that give the mid-points of the data. For example, if we have a matrix with shape (3, 4), x_centers would have 3 elements and y_centers would have 4 elements:

       -------------------
    y3 | d03 | d13 | d23 |
       -------------------
    y2 | d02 | d12 | d22 |
       -------------------
    y1 | d01 | d11 | d21 |
       -------------------
    y0 | d00 | d10 | d20 |
       -------------------
         x0    x1    x2

Some plot types may require the edges. In that case, if there is more than one point, the plot guesses the scaling of the spacing between points (choosing between logarithmic or linear), and appropriate bounds for the given scaling are calculated using interpolation and extrapolation. The plot will croak if there is only one point (in which case interpolation is not possible). If the spacing for your grid is neither linear nor logarithmic, you should explicitly specify the edges, as discussed next.

At the moment, the guess work assumes that all the scalings for a given Grid dataset are either linear or logarithmic, even though it's possible to mix the scaling using threading. (It's hard to do that by accident, so if that last bit seems confusing, then you probably don't need to worry about tripping on it.) Also, I would like for the plot to croak if the scaling does not appear to be either linear or logarithmic, but that is not yet implemented.

- x_edges, y_edges

The value associated with x_edges (or y_edges) should be a piddle with increasing values of x (or y) that give the boundary edges of the data.
For example, if we have a matrix with shape (3, 4), x_edges would have 3 + 1 = 4 elements and y_edges would have 4 + 1 = 5 elements:

    y4 -------------------
       | d03 | d13 | d23 |
    y3 -------------------
       | d02 | d12 | d22 |
    y2 -------------------
       | d01 | d11 | d21 |
    y1 -------------------
       | d00 | d10 | d20 |
    y0 -------------------
      x0    x1    x2    x3

Some plot types may require the data centers. In that case, if there are only two edges, a linear interpolation is used. If there are more than two points, the plot will try to guess the spacing, choosing between linear and logarithmic, and use the appropriate interpolation. The note above regarding guess work for x_centers and y_centers applies here, also.

- expected_plot_class

Grids expect plot type objects that are derived from PDL::Graphics::Prima::PlotType::Grid.

- get_data

Returns the piddle containing the data.

- change_data

Changes the data to the piddle passed in. For example,

    $map_plot->dataSets->{'intensity'}->change_data($new_intensity);

- guess_scaling_for

Takes a piddle and tries to guess the scaling from the spacing. Returns a string indicating the scaling, either "linear" or "log", as well as the spacing term.

working here - clarify that last bit with an example

Image

Images are like Grids (they are derived from Grids, actually) but they have a specified color format. Since they have a color format, this means that they need to hold information for different aspects of each color, so they typically have one more dimension than Grids. That is, where a grid might have dimensions M x N, an rgb or hsv image would have dimensions 3 x M x N. The default image format is rgb. Currently supported image formats are rgb (red-green-blue), hsv (hue-saturation-value), and prima (Prima's internal color format, which is a packed form of rgb). As Images are derived from Grids, any method you can call on a Grid you can also call on an Image.
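As a sketch of that Grid/Image relationship (the image data here is made up; the color_format key is discussed below):

```perl
use PDL;

# Made-up rgb image data: 3 x 100 x 100 (color dimension first)
my $image = random(3, 100, 100);

my $ds = ds::Image($image, color_format => 'rgb');

# Image is derived from Grid, so Grid methods work on it too:
my $raw = $ds->get_data;
```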
Differences and details specific to Images include:

- ds::Image - short-name constructor

    ds::Image($image, option => value, ...)

Creates an Image dataset. A particularly important key is the color_format key, which indicates the format of the $image piddle. When it comes to drawing the image, the data will be converted to a set of Prima colors, which means that the first dimension will be reduced away. Values associated with keys should be thread-compatible with the dimensions starting from the second dimension, so if your image has dims 3 x M x N, values associated with your various keys should be thread-compatible with an M x N piddle. Note that color formats are case insensitive.

At the moment there is no way to add new color formats, but you should expect a color format API to come at some point in the not-distant future. It will very likely make use of PDL::Graphics::ColorSpace, so if you want your own special color format to be used for Images, you should contribute to that project.

- change_data

Sets the image to the new image data. Expects a piddle with the new data and an optional format specification. If no specification is given, the current format is used.

Func

PDL::Graphics::Prima provides a special pair dataset that takes a function reference instead of a set of data. The function should take a piddle of x-values as input and compute and return the y-values. You can specify the number of data points by supplying

    N_points => value

in the list of key-value pairs that initialize the dataset. Most of the functionality is inherited from PDL::Graphics::Prima::DataSet::Pair, but there are a few exceptions.

- ds::Func - short-name constructor

    ds::Func($subroutine, option => value, ...)

The short-name constructor to create function datasets. The subroutine must be a reference to a subroutine, or an anonymous sub.
For example,

    # Reference to a subroutine,
    # PDL's exponential function:
    ds::Func (\&PDL::exp)

    # Using an anonymous subroutine:
    ds::Func ( sub { my $xs = shift; return $xs->exp; })

- change_data

Sets the function and/or the number of points to evaluate. The basic usage looks like this:

    $plot->dataSets->{'curve'}->change_data(\&some_func, $N_points);

Either of the arguments can be undefined if you want to change only the other. That means that you can change the function without changing the number of evaluation points like this:

    $plot->dataSets->{'curve'}->change_data(\&some_func);

and you can change the number of evaluation points without changing the function like this:

    $plot->dataSets->{'curve'}->change_data(undef, $N_points);

- get_xs, get_ys

These functions override the default Pair behavior by generating the x-data and using that to compute the y-data. The x-data is uniformly sampled according to the x-axis scaling.

- compute_collated_min_max_for

This function is supposed to provide information for autoscaling. This is a sensible thing to do for the y-values of functions, but it makes no sense for the x-values since these are taken from the x-axis min and max already. This could be smarter, methinks, so please give me your ideas if you have them. :-)

Annotation

PDL::Graphics::Prima provides a generic annotation dataset that is used for adding drawn or textual annotations to your plots.

- ds::Note - short-name constructor for annotations

    ds::Note(plotType, plotType, ..., drawing => option, drawing => option)

The short-name constructor to create annotations. This expects a list of annotation plot types followed by a list of general drawing options, such as line width or color. For example,

    ds::Note(
        pnote::Region( # args here ),
        pnote::Text('text', # args here ),
        ... more note objects ...
        # Dataset drawing options
        color => cl::LightRed,
        ...
    );

Unlike other dataset short-form constructors, you do not need to specify the plotTypes key explicitly, though if you did it would do what you mean. That is, the previous example would give identical results to this:

    ds::Note(
        plotTypes => [
            pnote::Region( # args here ),
            pnote::Text('text', # args here ),
            ... more note objects ...
        ],
        # Dataset drawing options
        color => cl::LightRed,
        ...
    );

The former is simply offered as a convenience for this more long-winded form.

DataSet::Collection

The dataset collection is the thing that actually holds the datasets in the plot widget object. The Collection is a tied hash, so you access all of its data members as if they were hash elements. However, it does some double-checking for you behind the scenes to make sure that whenever you add a dataset to the Collection, you add a real DataSet object and not some arbitrary thing.

working here - this needs to be clarified

RESPONSIBILITIES

The datasets and the dataset collection have a number of responsibilities, and there are a number of things for which they are not responsible.

The dataset container is responsible for:

- knowing the plot widget

The container always maintains knowledge of the plot widget to which it belongs. Put a bit differently, a dataset container cannot belong to multiple plot widgets (at least, not at the moment).

- informing datasets of their container and plot widget

When a dataset is added to a dataset collection, the collection is responsible for informing the dataset of the plot object and the dataset collection to which the dataset belongs.

Datasets themselves are responsible for:

- knowing and managing the plotTypes

The datasets are responsible for maintaining the list of plotTypes that are to be applied to their data.

- knowing per-dataset properties

Drawing properties can be specified on a per-dataset scope. The dataset is responsible for maintaining a list of these properties and providing them to the plot types when they perform drawing operations.
- knowing the dataset container and the plot widget

All datasets know the dataset container and the plot widget to which they belong. Although they could retrieve the widget through a method on the container, the

- informing plotTypes' plot widget

The plot types all know the widget (and dataset) to which they belong, and it is the

- managing the drawing operations of plotTypes

Although datasets themselves do not need to draw anything, they do call the drawing operations of the different plot types that they contain.

- knowing and supplying the data

A key responsibility for the dataSets is holding the data that are drawn by the plot types. Although the plot types may hold specialized data, the dataset holds the actual data that underlies the plot types and provides a specific interface for the plot types to access that data.

On the other hand, datasets are not responsible for knowing or doing any of the following:

- knowing axes

The plot object is responsible for knowing the x- and y-axis objects. However, if the axis system is changed to allow for multiple x- and y-axes, then this burden will shift to the dataset, as it will need to know which axis to use when performing data <-> pixel conversions.

TODO

- Add optional bounds to function-based DataSets.
- Capitalization for plotType, etc.
- Use PDL documentation conventions for signatures, ref, etc.
- Additional dataset: a two-tone grid. Imagine that you want to overlay the population density of a country and the average rainfall (at the granularity of counties, let's say). You could use the intensity of the red channel to indicate population and the intensity of blue to indicate rainfall. Highly populated areas with low rainfall would be bright red, while highly populated areas with high rainfall would be purple, and sparsely populated areas with high rainfall would be blue.
The color scale would be indicated with a square with a color gradient (rather than a horizontal or vertical bar with a color gradient, as in a normal ColorGrid). Anyway, this differs from a normal grid dataset because it would require two datasets, one for each tone.
https://metacpan.org/pod/PDL::Graphics::Prima::DataSet
Archived:Geolocation in Flash Lite

With the evolution of Symbian^3 devices in the market, Flash Lite 4 is now supported on Nokia devices. Nokia N8 is the first device from the Nokia family to support it. Flash Lite 4 is a giant leap from the prominent Flash Lite 3 paradigm, offering better performance and support for multi-touch, the accelerometer, and so on. This article concentrates on creating a simple map-based application using the Geolocation API in Flash Lite 4. This code or software was built using Adobe Flash CS5 and successfully tested on Adobe Device Central CS5. This code example is written in ActionScript 3 for Flash Lite 4 devices. In this example, we acquire GPS coordinates from the device sensors and display a Google Map of that location.

Requirements

1. Adobe Flash Toolkit CS5
2. Google Maps API for Flash AS3 - available at
3. Google Maps API serial/key

Implementation

Inside Flash CS5, create a new Flash Lite 4 document of size 360 x 425 px. Create a textfield (named locTxt) where we will display the input coordinates. We shall add the map element via code.

Note: For information on using the Google Maps API for Flash in general, this is a good read.
In the first frame on the timeline, paste:

    import flash.sensors.Geolocation;
    import flash.events.GeolocationEvent;
    import flash.geom.Point; // needed for the new Point() call below
    import com.google.maps.LatLng;
    import com.google.maps.Map;
    import com.google.maps.MapEvent;
    import com.google.maps.MapType;

    var geo:Geolocation = new Geolocation();
    geo.addEventListener(GeolocationEvent.UPDATE, showLoc);
    geo.setRequestedUpdateInterval(100);

    var lat:Number;
    var long:Number;

    var map:Map = new Map();
    map.key = "Your-API_KEY-goes-here";
    map.sensor = "true";
    map.setSize(new Point(stage.stageWidth, stage.stageHeight - 20));
    map.x = 15;
    map.y = 15;

    function showLoc(e:GeolocationEvent):void
    {
        trace("loc info came");
        locTxt.text = e.latitude.toString() + ", " + e.longitude.toString();
        lat = e.latitude;
        long = e.longitude;
        map.addEventListener(MapEvent.MAP_READY, onMapReady);
        this.addChild(map);
    }

    function onMapReady(event:Event):void
    {
        map.setCenter(new LatLng(lat, long), 14, MapType.NORMAL_MAP_TYPE);
    }

Output

Executing this code inside Flash CS5 opens the desktop run-time Flash player, which will show errors of missing Geolocation classes. Hence, open the same compiled SWF inside Device Central CS5.

Testing

Device Central CS5 offers a wide range of emulation methods for Flash content on phones or devices in general. As seen in the image, there are new options in the Device Central screen. The point of focus for us here is, however, the ability to get GPS coordinates while testing. One of the panels on the right (Geolocation) gives the option to send GPS coordinates as a signal into the emulation toolkit. Simply enter the latitude and longitude in the appropriate textfields and press the SEND TO DEVICE button. This triggers an input GPS signal for the device. The image below shows the network traffic monitoring panel that describes the quantity of inbound and outbound traffic.

Author

--manikantan 07:53, 17 October 2010 (UTC)

For more tutorials from the same author
http://developer.nokia.com/community/wiki/Archived:Geolocation_in_Flash_Lite_4
Carsten Ziegeler wrote:
> Vadim Gritsenko wrote:
>
>>> a) what exactly do you want to revert? Both parts?
>>
>> Ideally, both. Change to CIncludeTransformer sounds more offending, though.
>
> I still don't get why, really.

I don't like it for two reasons:

* CInclude is not the only way to inject external content into the pipeline. Alternatively, you can use:
  - xinclude, include transformers
  - aggregate
  - file, xmldb, blob, etc source
  - sql transformer
  - i18n transformer

Should all of the above be modified to have the same features?

* It mixes concerns. The task of CInclude is to inject external content into the pipeline, not to process it. If you want to do some processing, do it as a next step (or previous step). Otherwise, we can keep its functionality growing and end up with JavascriptXSLTCIncludeTransformer, which in addition to inclusion, will filter XML through Javascript and process output with XSLT :)

>>> b) where is a working solution that fills the gap then?
>>
>> If you add couple of lines to that one, yes:
>>
>
> Hmm, wasn't it you who said that it's better to split up the
> functionality (separation of concerns). So removing comments is imho a
> different concern as cleaning up.

I see comment / namespace removal and indenting as a single task - lexical normalization of xml :-)

Thanks,
Vadim
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200509.mbox/%3C431702A6.4040101@reverycodes.com%3E
From: Martin Wille (mw8329_at_[hidden])
Date: 2007-03-19 16:41:24

Peter Dimov wrote:
> I'd like to start working on a Boost implementation of my threading
> proposal N2178:
>
> but, as you'll see if you take a look at it, it differs from the current
> Boost.Threads and Howard Hinnant's competing N2184 in a number of
> important places, and having two competing libraries in Boost that do
> the same thing may not be a good idea.

I don't think it's a bad idea. We already have libraries with overlapping functionalities in Boost and that didn't cause major harm so far.

Threading is subject to rather intense debating. In order to proceed towards an agreement, being able to test the proposals against real applications would be of great benefit.

Perhaps, such a library should, if it is shipped as part of Boost, be labeled as a supplement to a standard proposal. This would be an indicator that there may be less long-term support for such a library than users would normally expect for other Boost libraries. Proper wording for that would need to be found.

> So what would be the best way to proceed?

Implement (the C++ interface of) it in the boost::n2178 namespace?

Regards,
m

Send instant messages to your online friends

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2007/03/118278.php