Hi, I have an input DTO like this:

    [Route("/users/named/{UserName}", Verbs = "GET")]
    public class FindUser : IReturn
    {
        public string UserName { get; set; }
    }

If I provide username = "\domain\username" on the caller side, I get a 404, as if the backslash were treated as a slash. I then changed the route to add a wildcard:

    [Route("/users/named/{UserName*}", Verbs = "GET")] // add wildcard

Now I don't get a 404, but in UserName on the server side I receive "domain/username" -- that is, the \ is turned into a /. I verified with Fiddler that, on the client side, the correct backslash is sent. Is it a bug? Is there a way to get the \ on the input DTO? Thank you, Enrico

Don't put it on the /path/info; send user data on the query string.

Unfortunately, the API signature is public and used all over by clients we don't control, so I cannot move the parameter from the path to the query string. Is it a bug that ServiceStack changes \ to /, or is it the desired behaviour? And if so, why? Thank you, Enrico

This is the default behavior for ASP.NET; you shouldn't use backslashes in the /path/info. This workaround in your Web.config may work:

    <uri>
      <schemeSettings>
        <add name="http" genericUriParserOptions="DontUnescapePathDotsAndSlashes" />
      </schemeSettings>
    </uri>
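As a language-neutral illustration (not from the thread -- Python's urllib is used here purely to show what the encoded URL looks like on the wire), a backslash in a path segment percent-encodes as %5C; whether the server decodes it back to a backslash before route matching depends on settings like the genericUriParserOptions workaround above.

```python
from urllib.parse import quote, unquote

# A backslash is not safe in a URL path segment; percent-encode it.
user = r"domain\username"
encoded = quote(user, safe="")   # backslash becomes %5C
path = f"/users/named/{encoded}"

print(path)              # /users/named/domain%5Cusername
print(unquote(encoded))  # domain\username
```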
https://forums.servicestack.net/t/backslashes-treated-converted-to-slashes/5894
CC-MAIN-2018-51
refinedweb
216
75.1
The ASP.NET pipeline was devised when the very first version of ASP.NET came out back in 2000. Since then, very few changes have occurred to the pipeline, and none of them structural. In the end, the ASP.NET pipeline is still articulated as a list of modules that process any incoming request up to generating some output for the client browser. Over four releases of ASP.NET -- the latest being ASP.NET 3.5 -- the only type of change that has occurred to the pipeline is the addition of new modules to the standard list that processes each request. Every ASP.NET application is free to attach its own modules and to remove any of the standard ones. You should note, though, that attaching new modules is less dangerous than removing existing ones. As an example, consider that if you detach the authentication HTTP module, no authentication will be possible for any request directed at the application.

HTTP modules are extremely powerful tools, and a significant number of the features we find today in the ASP.NET MVC Framework or ASP.NET AJAX wouldn't be possible without them. An HTTP module is a special class that implements a fairly simple interface -- IHttpModule. The interface counts only two methods -- Init and Dispose.

    void Init(HttpApplication app);
    void Dispose();

These methods are invoked only once in the application's lifetime -- upon application loading and unloading. Essentially, the methods in the interface serve the purpose of attaching and detaching the module to and from the pipeline. The pipeline fires a number of events to registered modules during the processing of the request. By handling one or more of these events, each module can implement its own logic and do its own thing. For example, an HTTP module can post-process the HTML, or whatever output, produced by a page request. The page processing is just one step in the request lifecycle.
An HTTP module that wants to validate or modify the output of a request as generated by the selected HTTP handler will register its own handler for the EndRequest event, as shown below:

    public void Init(HttpApplication app)
    {
        app.EndRequest += new EventHandler(OnEndRequest);
    }

    public void OnEndRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication) sender;
        HttpContext ctx = app.Context;
        DoCustomProcessing(ctx.Response);
    }

The EndRequest event is the closing event in the pipeline and fires when all has been done, just before the output is sent back to the browser. The module can still access the output stream and make updates. The output stream is accessible through the Response object. It should be noted, though, that the OutputStream property exposed by the Response object is a write-only stream, meaning that you can write to it but not read from it.

As it turns out, this approach to post-processing is only partially effective. It doesn't work, for example, if you want to read, verify, and then enter some updates. What would be a better approach? You must register a handler for another event -- PostRequestHandlerExecute. The event fires when the page handler has completed and the output for the browser has been generated.

    public void Init(HttpApplication app)
    {
        app.PostRequestHandlerExecute += new EventHandler(OnPostRequestHandlerExecute);
    }

A few other things will happen between now and the EndRequest event. One of the intermediate steps, however, has to do precisely with post-processing. The Response object features a property named Filter which evaluates to a Stream object. If you assign a custom stream object to the property, the generated output will be written through your custom stream, thus giving you a chance to parse and update it.
You can register your output filter at any time in the page or request lifecycle:

    Response.Filter = new YourStream(Response.Filter);

Here's an excerpt of the code required to parse the output:

    public class YourStream : MemoryStream
    {
        private Stream _outputStream;

        public YourStream(Stream outputStream)
        {
            _outputStream = outputStream;
        }

        public override void Write(byte[] buffer, int offset, int count)
        {
            // buffer contains the output to parse:
            // inspect or modify it here, then forward it to the
            // wrapped stream (not to ourselves, which would recurse)
            _outputStream.Write(buffer, offset, count);
        }
    }

The trick leverages the ability of .NET streams to be chained together. This is the most powerful and effective way to post-process the response of an ASP.NET page before it is sent out. And, more importantly, you get it by writing only a single additional component and registering it declaratively into the system.
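The filter-chaining idea is not .NET-specific. Here is a minimal sketch of the same pattern in Python (the UppercaseFilter class and its transformation are invented for illustration): a wrapper that rewrites bytes before forwarding them to the stream it wraps.

```python
import io

class UppercaseFilter:
    """Illustrative output filter: wraps a writable stream and
    transforms bytes before forwarding them downstream."""

    def __init__(self, output_stream):
        self._output = output_stream

    def write(self, data: bytes) -> int:
        # Parse/modify the buffer here, then forward it to the
        # wrapped stream -- the same pattern as Response.Filter.
        return self._output.write(data.upper())

sink = io.BytesIO()
filtered = UppercaseFilter(sink)
filtered.write(b"<html>hello</html>")
print(sink.getvalue())  # b'<HTML>HELLO</HTML>'
```

Because each filter holds a reference to the stream it wraps, filters compose: wrapping one filter in another pipes the output through both, which is exactly why .NET lets you assign `new YourStream(Response.Filter)`.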
http://www.drdobbs.com/windows/post-processing-the-output-of-aspnet-pag/212001499
CC-MAIN-2017-09
refinedweb
740
55.64
We have a real problem with excessive reflow called from within DHTML, commonly in code called from setTimeout(). See bug 118933 among many others. One possible solution is to NOT generate a reflow event for every CSS position/etc. change (in nsCSSFrameConstructor::StyleChangeReflow). Instead, reflow (if needed) when the code called by setTimeout returns, or if more than N ms have elapsed (I suggest 50 or 100 ms as a first cut - it can be tuned). To be more detailed, we might put the reflow commands into a separate queue (a la the mTimeoutReflowCommands queue), suppress duplicates in that queue, and then process that queue off either a timer or right before we go to sleep after a setTimeout callback. There are some other possible solutions, like adding a general delay to processing reflows if other things are active, or perhaps event prioritization (which is another RFE in Bugzilla). Also see lots of other DHTML bugs, such as bug 118933.

I chatted with dbaron for a while on IRC. I'm not sure if there's a win to be had here - it depends on whether the reflows are getting flushed during the running of the code called by setTimeout()'s callback. He did remark that we appeared not to be reflowing very incrementally, from glancing at the jprof in bug 118933. I'm going to instrument and see what I can figure out.

What about bug #64516 comment #21 by attinasi/buster for a possible solution?

Looking around a bit, there's a rather under-used mechanism for batching reflows (used in only one place, in Editor). For this purpose we'd need to put a maximum time to batch the reflows for, and there may be some additional issues, but it looks like a good place to start. I'm going to try a patch that batches reflows around calls to JS from RunTimeout and see what happens. (The methods are nsIPresShell::BeginReflowBatching(), nsIPresShell::EndReflowBatching(PRBool reflow), and a query method that no one uses.)

I have my batched-reflows up and running (without the timeout).
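The queue-and-deadline idea in the first comment can be modeled abstractly. This toy Python sketch (all names invented; no relation to the actual Gecko classes) queues reflow requests, suppresses duplicates, and flushes either on demand -- as when the setTimeout callback returns -- or once a deadline has passed:

```python
import time

class ReflowQueue:
    """Toy model of batched reflow: coalesce duplicate requests and
    flush them when the callback returns or after max_delay seconds."""

    def __init__(self, max_delay=0.05):
        self.max_delay = max_delay
        self.pending = []          # ordered, duplicate-free targets
        self.first_request = None  # time of oldest unflushed request

    def request_reflow(self, frame):
        if frame not in self.pending:      # suppress duplicates
            self.pending.append(frame)
        if self.first_request is None:
            self.first_request = time.monotonic()
        if time.monotonic() - self.first_request >= self.max_delay:
            return self.flush()            # deadline passed: reflow now
        return []

    def flush(self):
        """Called when the timeout callback returns (or on deadline)."""
        done, self.pending, self.first_request = self.pending, [], None
        return done

q = ReflowQueue(max_delay=1.0)
for _ in range(100):           # 100 style changes to the same element...
    q.request_reflow("div#clock")
print(q.flush())               # ...coalesce into a single reflow target
```

The deadline keeps a long-running script from starving the screen of updates entirely, which is the tuning knob the comment suggests setting to 50-100 ms.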
It does appear to properly batch the 100 reflows generated per timeout in the testcase for bug 129079. However, this doesn't appear to help much; my assumption is that when the 100 reflow commands unbatch, they still cause a hundred reflows to occur. The next thing (I guess) is to see if the 100 reflows can be coalesced at all. I'll upload my (trivial) little patch for this so far.

Created attachment 72830 [details] [diff] [review] current test patch, needs more work

I'm looking at merging reflows in nsPresShell::ProcessReflowCommands to help DHTML (which seems to spend all its time in reflow). Can I just merge the nsHTMLReflowCommand paths (with common-element elimination), or will incremental reflow of a common parent reflow all the children? (That would allow me to throw away reflows for the same common parent (root?) node.) I'm slogging through the code, but some guidance wouldn't hurt.

Created attachment 72873 [details] [diff] [review] Updated patch, much faster, almost certainly would cause regressions

This patch cuts the time for the test case (for me) from ~100000 ms to ~9300 ms. The Debug/viewer demos/DHTML testcase is faster too. This WILL cause regressions; I cheated. I probably need to merge reflow command framelists.

Don't we already have some sort of reflow coalescing code? (But I think it works at an earlier stage.)

is a test case for bug 70156, and it very nicely demonstrates the beauty of this patch. resource:///res/samples/test12.html (more fixed pos demo) doesn't display anything for me until I resize. 11 does, sometimes. Some test urls: (fireworks) is an interesting test.

Created attachment 72893 [details] conversation between dbaron and shaver about this bug

This is a conversation I just had with shaver on IRC about this bug.

Nominating for 1.0 -- go strong, kudos to rjesup! I wouldn't hold 1.0 for this, but if it can be done safely and well, without too much opportunity cost, I'm all for it. Just one driver's opinion.
/be

Addendum to conversation: we looked a little bit at doing the mark-divergent-subpath-as-dirty thing, but that looked kind of evil. Less evil -- probably trivial for blocks, of as-yet-unknown non-triviality for tables -- would be to store the reflow commands in a tree, and teach the block and table code how to reflow all the tree-children in a reflow pass. I'm going to poke at the code overnight and see if I can hack up a patch. Randall has certainly laid down an impressive gauntlet for us. =)

Please test pageload with that patch... I'm seeing two issues: 1) a long time to unsuppress painting, and 2) much longer time taken (as far as I can tell).

I just read dbaron's and shaver's comments. I agree we need to merge paths or otherwise make sure all the right frames get reflowed (I strongly suspected as much; in fact I was surprised it works as well as it does without it). It seems we have two or three possibilities (I'm still trying to grok the code in BlockFrame/Viewport/etc.):

1) Merge the lists, throwing away duplicates. I'm not sure we can keep it a list; that may depend on how we access it. Probably a tree structure makes the most sense, or a list structure that makes a virtual tree. I haven't yet determined whether order is critical, but I imagine it is (requiring a tree, or the extra info).

2) Mark the nodes when the coalescing takes place, using a new (different) flag. Then let the reflow start at the top and work its way down the existing tree. The downside is that with flat structures (lots of children), you have a lot of nodes to examine. At the cost of more memory in the nodes, we could still encode it without the perf hit.

3) Do the coalescing only for the easy cases (target frames all have a common parent), or some such.

I'm sure there are more solutions to the issue....

adding mozilla1.0, targeting 1.0

Some ideas about a tree for reflow framelists. I haven't coded this, just written some comments: a reflow array consists of an array of pointers.
If the low bit is set, then it's not an nsIFrame*; it's an nsAutoVoidArray holding a list of children. Each entry in that array of children is another reflow array. This is efficient for the common case of a single or small number of reflows. Also, it works well for cases where there's a large overlap. They're also pretty easy to build from the reflow command arrays: you can reuse each mPath array after removing the first N entries (where N is the amount it shares with its parent). You could avoid that overhead with more work, but I don't think it's worth it. GetNext goes away. In its place, an iterator on the stack stores where in the child array we are (assuming there is more than one). Places where GetNext is called become something like:

    while (NS_OK == iterator.GetNextReflowChild(&reflowChild)) ...

or some such. Just some ideas before I crash.

More testcases:

adding nsbeta1. this is a good one to fix for DHTML performance.

Taking bug. The proper method for combining reflow paths (I think): build a tree that contains all the target nodes (with parents), with no duplication. All reflows must be for the same reflow command type (note: we could relax this by storing types in the tree). When walking the tree in Reflow(), when we come across a node, we check whether it's a target. If so, we do a reflow from that point using the normal dirty flags. This will reflow anything marked dirty under the target, so long as there's a chain of DIRTY_CHILDREN flags between it and the target. Then we check the list of children for the node and iterate into each of them. While in theory we could end up traversing frames more than once, in practice it's unlikely that a frame would both be marked as dirty and be on the path to a target node. If we're really worried about this, we could refuse to merge paths that have a target above another target. I think in practice it will be rare, and it would work correctly anyway.
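The path-merging described here can be sketched in miniature. In this toy model (plain strings stand in for nsIFrame pointers; it is not the real nsReflowTree code), root-to-target reflow paths that share a prefix are merged into one tree, so shared ancestors are walked only once:

```python
def merge_reflow_paths(paths):
    """Merge root-to-target paths into a nested dict; the last frame
    on each path is a reflow target, marked with a 'T' entry, as in
    the (T) annotations of the tree dumps."""
    tree = {}
    for path in paths:
        node = tree
        for frame in path:
            node = node.setdefault(frame, {})
        node["T"] = True  # last frame on the path is a reflow target
    return tree

paths = [
    ["root", "body", "div1", "clockhand1"],
    ["root", "body", "div1", "clockhand2"],
    ["root", "body", "div2"],
]
tree = merge_reflow_paths(paths)
# "root" and "body" appear once each instead of once per command.
print(sorted(tree["root"]["body"]))  # ['T' is only set on targets]
```

Walking the merged tree visits each shared ancestor once instead of once per reflow command, which is the point of coalescing.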
This means that the code that currently looks a lot like:

    if (I_am_target) {
      // reflow myself
      // reflow any children with DIRTY or DIRTY_CHILDREN flags
    } else {
      GetNext(&nextFrame);
      // find the frame
      // call the frame's Reflow()
    }

will become more like:

    // this lets me iterate through the reflow children; initialized
    // from state within the reflowCommand
    nsReflowIterator reflowIterator(aReflowState.reflowCommand);
    nsIFrame *childFrame;

    // See if the reflow command is targeted at us
    PRBool I_am_target = reflowIterator.IsTarget();
    if (I_am_target) {
      // we're off the tree
      aReflowState.reflowCommand->SetCurrentNode(nsnull);
      // reflow myself
      // reflow any children with DIRTY or DIRTY_CHILDREN flags
    }
    while (reflowIterator.NextReflowNode(&childFrame)) {
      // Make sure the iterator in the next level down can find its data.
      // Possible optimization: if DIRTY (not DIRTY_CHILDREN), and
      // a reflow of a DIRTY node above handles reflowing all sub-frames,
      // we may be able to skip it. Not a big deal in practice, so ignore
      // for now.
      aReflowState.reflowCommand->SetCurrentNode(reflowIterator);
      // call the child frame's Reflow()
    }

The one trick to worry about is the current reflow node in the tree. We need to set the current node to null so that any children that look at the reflow tree will not think they're targets, and won't think they have children in the tree (even if they are targets or have children - that will be handled when the current target gets control back and goes through its children in its iterator).

BTW, things like blobs and vectorsine run _immensely_ faster with the patch (from the dynamic-core.net tests).

I didn't completely follow comment 22, but one thing I worry you might not be accounting for is that when something dirty is reflowed, it will often cause other things to be marked dirty during the reflow (e.g., if something changes size, or if a floater moves). So we really want to process everything in one pass.
Your comment "While in theory we could end up traversing frames more than once" scared me.

I cut a branch for this, because trading patches is going to start sucking _very_ soon.

    : layout; cvs tag -b REFLOW_TREE_COALESCING_20020308_BRANCH
    : layout; cvs tag REFLOW_TREE_COALESCING_20020308_BASE

N.B. here are waterson's reflow docs, VERY useful for understanding this bug:

Another thing from shaver's and my conversation last night: some Reflow() methods reflow all their children (not just dirty ones), such as nsInlineFrame.cpp, regardless of whether they're a target or not. In order to make things work correctly if there's a target inside there (keep the tree and iterators in sync with the frames being walked), we need to find the correct tree node for the child we're entering to reflow. The iterator needs a method |iter.SelectChild(nsIFrame *foo)|, which finds the tree node in the current list being iterated that matches frame foo, or none if there is no match.

Sample tree built from the chasing-clock demo I mention in comment 10. Tree dump:

    +-- 0x8834e50
    +-- 0x8834f84
    +-- 0x8835178
    +-- 0x8834e88
    +-- 0x8837e58
    +-- 0x8842c2c (T)
    +-- 0x8843028 (T)
    +-- 0x88433ec (T)
    +-- 0x8843738 (T)
    +-- 0x8843a84 (T)
    +-- 0x822e140
    +-- 0x884411c (T)
    +-- 0x8843dd0 (T)
    +-- 0x884411c (T)
    +-- 0x88444a8 (T)
    +-- 0x88447f4 (T)

So it looks like we go from 61 frame-reflow calls to 16 here. Promising!

Created attachment 73443 [details] [diff] [review] First VERY rough cut at a patch. Compiles, does not run. For Shaver

Note: this really is for Shaver; I'll check this into the branch once I get it limping along. Currently dies at startup due to a null reflowCommand (makes sense, can easily be fixed, but the sun is coming up...).

Checked in a very rough first cut of the patch to the branch. (There may be some build issues with content; shaver has resolved them and will check in in the morning.)
Runs (there's some minor editor weirdness; the every-other-character-updates bug is back, I think, with this patch). Most browsing and DHTML seems to work correctly otherwise, other than it being slow because it's dumping the tree every time, and it's still running in compatibility mode (no tree merging). There are known problem spots, mostly where I added XXX's. Also, each spot where I modified the flow needs to be reviewed for correctness, and I haven't even looked at source style issues yet. But it runs.

Checked in some updates. Full tree merging is now enabled and working (mostly; there are a few regressions, especially with cnn.com). DHTML works. The nsReflowTree code has a bug with adding chunks, so I set the chunk size to 110 for the moment. That will work for all but the worst sites (vectorsine and blobs).

Shaver checked in a fix for the reflow chunking, and I'm about to check in a fix so that it actually deletes commands when it merges them, and implements IsATarget (in a very simplistic way; we probably want to use a hash instead of an array).

FYI, tests on the same machine I tested the "cheat" patch on (full opt, with tree dumping on each reflow) show almost identical performance for the current branch as the cheat patch, around 10x faster than the trunk. Looks like the perf win holds up with the merged tree. Still a bunch of detail work and regression testing to do.

I'm running the current branch with (in nsReflowTree.h) KIDS_CHUNK_SIZE = 300 instead of 10, to work around a bug.
Current known regressions:
- Editor: the "every other character echoes when typing" bug
- image and some text don't show up
- cnn.com: the main page is ok, but stories often don't show the story text/image

From
a) sliding an element 100px to the right, setTimeout set to 0 ms between steps
b) sliding an element 100px to the right, setTimeout set to 30 ms between steps
c) calculate the first 91380 elements of the harmonic series
d) sliding 100 elements at once (20 steps, setTimeout set to 30 ms per step)
e) move GIFs with many transparent parts above each other

Time in ms - NOTE: this is with a preliminary version of the branch, with trees being dumped at every reflow(!). Compiled with -O2, Linux RH 7.2, trunk 20020311xx, 450MHz P3, remote X display(!). Also note: take those numbers with a grain of salt, since I don't know which values the site displays by default. I'll re-run on the same machine tomorrow from the trunk to check. Note that we're still 1/2 to 1/3 the speed of IE on case d.

FEH, durn thing don't like nice tabs.

- Do we know why (c) got slower?

Those numbers (other than my test) were from some unknown build and system reported by the site. When I get a chance (or have the regressions fixed) I'll run numbers before and after on the same machine. BTW, I notice we are getting some merging of reflows into trees when loading pages (such as bugzilla.mozilla.org, which merges 4 reflows), so I do think we're going to get a bit of pageload improvement from this patch.

is a test case from the .perf group.

are we thinking this is going to be baked enough to land in the next week or two? I was under the impression that good, solid patches that are approved land. Then they bake along w/ everything else for a long time on the branch before a release is rolled out. --pete

Things are looking pretty good right now, though there are a handful of regressions we need to sort out (some things aren't getting reflowed correctly in a batched case).
I'd certainly like to have this at a landable state within 2 weeks, but I'd be more confident if we could enlist some help from a layout expert to help us better understand the nature of the things we're seeing. I'm going to be in California next week (San Jose area), so maybe I can pop in and spend some quality time with a Mountain View layout hacker.

pete: things have to be solid and baked before they're ready to land on the trunk (1.0 development is on the trunk right now, not a branch); we're long past the point of throwing something against the trunk to see if it sticks.

I would hold mozilla1.0 for this fix, speaking only for myself and not for all drivers. DHTML has suffered for too long because of a basic design flaw that was actually contested long ago by hyatt and evaughan, to no avail (the perpetrator is long gone). Rather than "prematurely optimize", we're doing it just in time for 1.0, and things look close enough, and good enough, that I'd wait. /be

The more time I spend in this code and talking to dbaron and co., the more I understand why the perpetrator in question doesn't want to use the XUL dirty-bit system, which I think is the hyatt-evaughan lobbying you describe. How well does XUL's model handle an LXR-like page with many thousands of children below a single parent? Not sure I want to find out this close to 1.0. =) Anyway, I'm heartened by your support, and hope that we'll have something very soon that's ready for wide-scale testing. In the meantime, people who want to start banging on the branch are very much invited to do so and report troublesome sites. I've created a bug, marked as blocking this one, for tracking regressions discovered in testing of this branch. I'll try to merge against the tip and prepare test builds on at least Linux (other platform volunteers, please speak up!) tomorrow or Friday.

I didn't mean to bash the unnamed perp from the past too much, or to endorse dirty bits naively applied to wide, bushy trees.
Only to say that this ain't rocket science, and that terrible DHTML perf has been a stain on Gecko's escutcheon for too long. /be

I don't think shaver meant to remove bug 130760 (the companion regressions tracking bug) as a blocker of this bug. Putting it back. Slap me down if I'm wrong.

I don't know if any work done here is related, but: bug 87808 disappeared on the February 2nd build but reappeared somewhere between the 6th and the 9th of February. Did something get checked in and backed out that anybody knows about?

I'm a bit more cautious than brendan. Maybe it's because I understand just enough about reflow to fear it :-). If we're going to try to land this in 1.0, then I think it needs to land at least a couple of weeks before the putative 1.0 ship, and someone needs to be committed to putting a lot of energy into fighting any resulting fires. rjesup, shaver, this means you :-). As long as we understand that, I'm OK with this.

IMO this is important enough that we could involve all (most) of Netscape's QA people in a DHTML/layout testing marathon to get lots of eyes on builds with these changes in them once the diff is done.

Is this bug/patch specific to setTimeout, or does it also apply to inline JavaScript and JavaScript event handlers?

jst, that sounds terrific.

Pulled the current ./layout from branch REFLOW_TREE_COALESCING_20020308_BRANCH and rebuilt from the top. (I also tried this on win32, but MSVC was not happy with the nested classes in nsReflowTree.h.) I hate to report this after seeing that 'clock' DHTML demo running so smoothly in Mozilla, but I gave this a quick A-B run (the only changes between build A and build B being the changes in layout). So, anyway, with the build without these changes, on redhat6.1/500MHz/128MB, I got an avg. median of 1315 msec. With the changes and a rebuild, I got an avg. median time of 1451 msec, which is ~9% slower.
Most of the deltas ranged from ~20% to 0% difference, with the most notable slowdown (before=3536, after=5391; change=+50%) being for content equivalent to <> This assumes, of course, that I didn't make any bonehead errors, which is always possible. I note also that the console was chirping "Releasing 1 reflows batched" during the test, which can't help (but isn't likely the key factor). But I don't think this is all bad news. Obviously I didn't crash :-), and most of the pages appeared correctly (although some were missing various parts of the normal content). [Sidenote: I am away till Monday, so I can't answer questions about the above till then. Ping kerz if you want some more help testing this in the meantime.]

I certainly plan to do a careful analysis of pageload perf before this lands, including comparison of pageload runs with and without the branch. There probably are optimizations we can do for the "normal" case of a single branch/target in the tree. In debug builds, you can get lots of analysis of the reflow patterns. See: and you can also set these vars:

    setenv NSPR_LOG_MODULES frame:21
    setenv NSPR_LOG_FILE /tmp/reflow-tracing.log

Note: you get a lot....

JRGM: I'll remove the fprintf's from the opt builds for you. Note I still haven't even made an initial "clean up" pass; there are a lot of opts, like not creating the hash until you have a second item, etc., that will speed things up. Also moving the reflowIterators deeper into the control flow in Reflow() methods, and allocation pools/etc. for the trees and such.

I currently suspect two major regressions: editor repaints are funky or non-existent (in HTML only; XUL seems ok, which makes some sense), and table rows/etc. don't always get laid out (tinderbox, bugzilla.mozilla.org), which interestingly works ok if you remove the

    document.forms['f'].id.focus();

from the bugzilla page. Very interesting.....

Here's another test JS app. Runs fast in IE - dog slow in current Mozilla builds.
I don't have the branch so I have no way to test it out. Cleaned up a bunch of code on the branch. No effect on the major regressions. Editor echo issues seem to be linked to the frame being marked as dirty, so when you type a character it doesn't issue a new reflow - but there is no reflow for the editor frame active, which is why we see "Batch release of 0 frames". (This is for text widgets that don't echo at all, like bugzilla.mozilla.org.) Checked in a fix to the DIRTY bit maintenance in nsBoxLayoutState.cpp:UnWind(). Didn't solve the regression, but was definitely a bug. I pulled the branch and tried to compile. I use VC70 to build a static build. I had to comment out 2 lines in nsReflowTree.h to suppress all msvc errors: //friend class nsReflowTree::Node; //friend class nsReflowTree::Node::Iterator; I'm restarting a full build now; I only tried layout with this patch. Created attachment 74532 [details] [diff] [review] patch to build the branch with msvc add nsReflowTree to makefile.win, comment out problematic "friend" declarations I just took a look at the changes to nsBlockFrame.cpp in the patch (yes, I should have been reading diffs earlier, but...), and found what appears to me to be a significant problem in the case where a block has multiple children in the reflow path tree. You're calling PrepareChildIncrementalReflow for all the children in the reflow tree before reflowing any of them. This says to me that there's either a major case where you're reflowing too much or a major case where you're reflowing too little. (I haven't looked closely enough to figure out which.) First, I'll describe (roughly) how I think this should be handled in the block code, and then I'll try to explain why the code currently on the branch can't work. I think what needs to happen is that a good bit of the incremental reflow logic needs to be moved down into ReflowDirtyLines.
In particular, most of PrepareChildIncrementalReflow (although not the part that causes incremental reflows to change into resize reflows) can't be called from the top-level reflow anymore, but, rather, needs to be done in ReflowDirtyLines. In particular, the marking of a line dirty that PrepareChildIncrementalReflow does can't happen until after all the lines before that line have been reflowed. Then, when we get to the line that has the next reflow target, we should check if that line is dirty. If it *is* dirty, we have to somehow merge the dirty reflow with the incremental reflow targeted at some descendant of that line -- in some cases that might mean just doing a dirty reflow, but I'm not sure if that would work if the relevant incremental reflow is a style-change reflow. So, what's wrong with the current code? I think one of two things is probably happening. Either: 1) We're reflowing too much. In other words, whenever the reflow tree splits, we do a full dirty reflow on the entire subtree for each child that's on a path other than the first path, rather than following that path down to the target of the reflow. Or: 2) We're reflowing too little. In other words, if the incremental reflow of the first child that's in the reflow tree causes damage to the line that's the second child in the reflow tree, we're ignoring that damage and just targeting the reflow down the path that we have, which may only touch a small portion of what was dirtied by the change to an earlier line (e.g., a floater being pushed down). I'm thinking that this will be somewhat difficult to fix in the presence of the BRS_INLINEINCRREFLOW optimization, which allows the possibility that you could have two children in the reflow path within the same line of a block.
If we eliminate this by changing text inputs and textareas to be reflow roots (something I've wanted to do for a while that would both improve performance and reduce code complexity), then these changes would become much easier. Or am I missing something that makes this all work? David: Thanks whole-heartedly for the analysis; not knowing all the side-effects of the Reflow/etc routines was a major problem when reworking this for tree-based reflow. I'll go over your comments in more detail tomorrow; they look VERY helpful. I think the editor/text optimizations might be a significant win for cleaning up the code, as you suggest. My apologies for not spending more time on it this weekend; personal life (and taxes) intervened. I'm back on it full time tomorrow. One interesting perf regression for the branch versus the trunk: typing in the URL-bar is about 3x or 4x slower on the branch (Linux/X11/x86/-O2). dbaron: if you were looking at the patch, I should note that the recent set of changes I checked in added a ReflowDirtyLines() after each PrepareChildIncrementalReflow() (i.e. I moved it into the loop). Now, that may not be sufficient; I'm looking at it more closely today. In fact, looking at it for a few minutes, I see that RetargetInlineIncrementalReflow() must be written for trees, and hasn't been. I'll work on that now. Thanks again, and keep the commentary (or fixes) coming. I've made some major checkins and fixed editor widgets (text lines and areas). This also seems to have fixed most of the "doesn't lay out all of tables" bugs. Note: this exposes a number of things that were previously not getting invoked, and there's at least one assertion I added firing. Also, I'm doing most of my testing in non-merged versions, so there may be MERGE_REFLOW=1 bugs (to turn off merging but still use trees, change the define in nsPresShell.cpp). Mention regressions (and there may be more now) in bug 130760. 
Fixed most if not all of the assertions firing - they were mostly due to not selecting the correct tree node when reflowing each line, and in some cases reflowing lines more than once. The main remaining regressions (incorrect spacing (cnn.com), lines move when you type) are now fixed, and the assertions seem to be gone (very incomplete testing, though). New regressions (perhaps due to under-reflowing now): on bugzilla bug pages, some of the text widgets paint their text (in the wrong spot) but not their frames/etc. Resize fixes this. It's getting a lot closer, and is highly usable now. Still needing to be done: merge to tip. (Is there a standard method/procedure for updating or replacing a branch with one based off the current tip?) I'll attach a patch based on the diffs against the base. People who've been considering trying this: it's a good time to start thinking about it. Created attachment 75808 [details] [diff] [review] Current patch that makes up branch (against ...._BASE) Does not include the msvc patch yet New branch, REFLOW_TREE_COALESCING_20020325_BRANCH (layout ONLY). To add, cd layout (VERY IMPORTANT), cvs update -r REFLOW_TREE_COALESCING_20020325_BRANCH I'm going to attach an additional patch that enables reflow merging in dom for SetTimeout() callbacks (which is what helps DHTML). Note that this seems to currently be over-reflowing, so temporarily you may not see the full improvement. I'm concentrating on correctness first; we know this is the right way to improve performance from previous tests. Please report regressions in bug 130760. Biggest known regression involves extra white space at the top of some websites. I'll be doing a layout regression run tomorrow morning early. Created attachment 76134 [details] [diff] [review] Patch to dom to enable batching of reflows in SetTimeout() callbacks. Layout regression test results (current branch plus dom patch): 12 tests out of ~2385 failed.
Also there are a couple of reported crashes from jrgm, and the known problems with netscape.com and bugzilla bug pages. I'm also building an optimized tree to test performance. I'll list the failed tests on bug 130760. I'm about to commit some more small fixes to the branch. Note that currently, I can't find any regressions if I set MERGE_REFLOWS to 0 in nsPresShell.cpp, which means the remaining bugs are in the case where a node has multiple children, or a node is both a target and a parent of a target. I'm going to re-run the layout regression tests with MERGE_REFLOWS of 0 to see how close we are in the non-merged case. NOTE: we may well still be reflowing too much (even with MERGE_REFLOWS at 0). Other known issues:
- RetargetInlineIncrementalReflow() definitely needs to be rewritten more, though I'm not sure of the correct algorithm.
- It would be nice not to have to call ReflowDirtyLines for each child of a block that's in the tree. Currently we do in order to make sure the current reflow tree node is correct for the lines we're reflowing.
- I'm not sure if I need to call BuildFloaterList() after each call to ReflowDirtyLines().
Created attachment 76574 [details] [diff] [review] patch that matches the current branch For anyone who wants to easily look at the changes or apply against tip, this is the current branch as a patch (BRANCH vs. BASE). Created attachment 76590 [details] [diff] [review] Updated patch to match branch Moved updated IncrementalDamageConstrained() into nsBlockReflowContext.cpp to match trunk changes that occurred on 3/12. (Had been in nsBlockFrame.cpp, commented out during the merge for the 3/25 branch.) I am shocked at what's going on in this bug. <start rant> How on earth can this thing make it into 1.0??? My prediction is that it will take two milestones to fix the regressions if this gets checked in. I really urge the drivers to keep the promises from the mozilla manifesto.
We contend that a milestone where only fixes for topcrash and other severe bugs (e.g., dataloss) get checked in, with lots of regression testing and code review, is necessary before we can confidently brand a mozilla1.0. In my opinion this is the largest change layout has seen in months. It scares me that everybody whom I recognize for intimate knowledge of reflow issues stays away from this bug. So do I. Further, I don't think that it is a good idea to start such a large change and then require people with layout knowledge to spend their time on it: it is not only the QA that would need to spend time, but also people who would fix the layout topcrashes otherwise, and I think that's a far better use of their time and expertise. <stop rant> even a single failed testcase in the layout tests is enough to stop a checkin. I agree with bernd (and what Marc said in email), and I don't think this should go in 1.0 at this point. I've tried to hint gently at this over the past week or so after seeing the current state of the patch (see comment 60), but I didn't want to discourage it too much, since I think it is the correct approach, and I think it should land on the trunk sometime after 1.0 branches. I should have been more discouraging about the potential for this to make 1.0 earlier, and I'm sorry for letting rjesup work so hard on it in the hope that it might make 1.0 without being clearer. I've looked at the patch quickly and I think it still has some serious problems -- including the potential for making a *single* incremental reflow pass O(N^2) in the number of children of an element (rather than O(N) of them, as most of the O(N^2) algorithms in layout are). I think it's probably worth getting some careful review before doing too much testing, since I think review is going to turn up issues that are going to require some pretty significant changes to the patch (such as comment 60 above, which I don't think has been fully addressed).
Having said that, I really don't have time to do that review now. While I would love to see this in 1.0, I agree that it is quite a risky patch for this late in the game. I've been ready to say "drop this from 1.0", but it's not for me to decide how important fixing DHTML is for 1.0. Brendan certainly thinks it's important (and I do too). However, the complexity of the reflow code is such that this will need more baking than we're likely to want. (It doesn't help that Waterson is away.) It also doesn't help that I'm learning the complexities of reflow as I do this, and because of sabbaticals and workloads none of the people who know reflow already have had enough time to really help (which is not meant to be a complaint; this idea/patch came up at the last minute, and I'm grateful for the help I've gotten - in fact, I'm amazed that it's as close to solid as it is, given the dangers of reflow). I will keep hammering away at this regardless. If this is not taken for 1.0, it should go into trunk as the 1st post-1.0 patch, and be strongly considered as one of the first patches to migrate from trunk to the 1.0 branch. -- Randell, posting using the reflow branch As Jesup said "... how important fixing DHTML is for 1.0". Many major DHTML perf bugs have been postponed due to this bug, and now this one also won't make 1.0. Furthermore, bug 117061 is also not fixed for 1.0 - what remains is a very disappointing mozilla1.0 milestone in view of DHTML. I'd even suggest postponing the Mozilla 1.0 release one release farther -> make a Mozilla 0.9.10 with this code in. I think we're going to have to accept that Mozilla fumbled the DHTML ball for 1.0 and look to the future. I have been waiting for Mozilla 1.0 for three years! I don't care to wait one month more to have a REAL Mozilla 1.0...
Well, mozilla is already real in its current version and so are the major DHTML / timer problems we have in 1.0. Bug 73348 should make some interesting reading, and is probably an interesting source for testcases. I wonder if it would be possible to put in place "hooks" into the trunk that would allow someone to install (post 1.0) some xpi which could enable this feature. This way folks who want to can make the choice of less stability (perhaps) in return for faster DHTML. Sorta similar to how SVG was set up for a while, where you had builds that were "ready" for SVG and turned it on after some xpi file was installed. Just a thought. What bernd and dbaron said -- this is post-1.0 material by a large measure. I understand how to do what rjesup is trying to do with block frames (and probably even enough to make sure that block and box play nicely), but I don't know how to do it with tables. karnaze or bernd should comment on that. I'll take a look at this patch (rjesup, stop mangling the brace style!), and try to comment more constructively anon. > stop mangling the brace style Yay, waterson is back! :) So I'm thinking that a better approach here might be to change the `reflow queue' data structure into a `dirty tree'. AppendReflowCommand would simply add a path to the tree, the leaf of which would refer to the reflow command (or the subset of the data that was useful during the reflow; e.g., the reflow type). During a reflow, clean nodes would be removed from the tree as they were flowed. This: 1. Automatically coalesces reflows when the command is posted. One fly in the ointment is dealing with two commands with different types (e.g., `resize' and `style change') posted to the same node: in this case, I think we could determine a precedence relationship among the reflow types (perhaps dirty < content changed < style changed) and evict the command with lower precedence.
Presumably _any_ reflow (including resize and global style change) could clean up the `dirty tree', avoiding the (probably rare) case where a global reflow is processed while the reflow queue is not empty. 2. Allows for a natural implementation of `interruptible reflow'. An in-progress incremental reflow could dead-reckon that it's spent too much time, and return to the main event loop. Doing so would leave the frame tree and the corresponding reflow structures with their invariants maintained. 3. Seems cleaner than the current approach, which grovels the reflow queue to coalesce commands. The `dirty tree' could be maintained as part of the pres context, eliminating the need for the reflow command on the nsHTMLReflowState object. Presumably, CancelReflowCommand could be implemented using the same logic that removes a reflow path from the tree when the reflow is completed. Thoughts? Perhaps in your example:
-- The global resize reflow propagates resize reflows to descendant frames
-- The style-changed descendant merges the incoming resize reflow with its own reflow type, getting a style-change reflow
-- Frames on the path from the root to the style-changed descendant need to reflow considering *both* that they are resize-reflowed *and* that they have a style-reflowed child.
Presumably this means in some places we'll have to separate logic that checks the reflow type into two parts: one part that considers the reflow type for this frame, and another part that considers the merge of the reflow types for the children. The latter part consists of the code that runs even when the current frame is not the reflow target. Did that make any sense? I'm never quite sure how well I understand reflow.
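To make the `dirty tree' proposal concrete, here is a minimal sketch of path insertion with automatic coalescing. Frame, DirtyNode, and AddReflowPath are hypothetical stand-ins for the real nsIFrame and reflow-command types, not code from the patch:

```cpp
#include <map>
#include <vector>

// Hypothetical stand-in for nsIFrame: just enough to walk to the root.
struct Frame { Frame* parent; };

struct DirtyNode {
    Frame* frame;
    bool isTarget = false;               // a reflow command terminates here
    std::map<Frame*, DirtyNode*> kids;   // children along active reflow paths

    explicit DirtyNode(Frame* f) : frame(f) {}
    ~DirtyNode() { for (auto& k : kids) delete k.second; }
};

// Add the path root..target to the tree. Because shared ancestors reuse
// the same DirtyNode, two commands posted under the same container
// coalesce automatically, which is what AppendReflowCommand would get
// for free under this scheme.
void AddReflowPath(DirtyNode* root, Frame* target) {
    std::vector<Frame*> chain;           // target's ancestor chain
    for (Frame* f = target; f; f = f->parent) chain.push_back(f);
    DirtyNode* node = root;              // root represents chain.back()
    for (auto it = chain.rbegin() + 1; it != chain.rend(); ++it) {
        DirtyNode*& kid = node->kids[*it];
        if (!kid) kid = new DirtyNode(*it);
        node = kid;
    }
    node->isTarget = true;               // leaf refers to the reflow command
}
```

During a reflow, clean nodes would be removed from this structure as they were flowed, and CancelReflowCommand could prune a path by searching the subtree for remaining targets.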
:-) (Or I get beaten up by reviewers) Maintaining a tree to start with makes sense; we chose the current mechanism to minimize disruption of the code (in the hope it might make 1.0, and to minimize unexpected side-effects). As you said, it allows for some useful side-effects as well (especially in the issue of interruptible reflow, which is a big problem when loading large pages - you may be looking at another window/tab while the page is loading, but you get frozen anyways). CancelReflow would need us to maintain in each node the number of targets in all children, or (perhaps better) search the tree to see if the branch can be pruned (search for targets). Which is better depends on how often CancelReflow is done. It may make more sense to maintain the info in the tree and avoid possible O(n^2) effects on Cancel. Merging the different types may be possible as suggested. Worst case would be to separate reflows by type - i.e. only coalesce reflows of the same type. If order of operation is important, you can maintain a (small) list of reflow trees and only allow addition to the last tree in the list (with a new tree being added if the tree reflow types don't match). Just to be clear, there are two cases here. 1. Merging two incremental reflows targeted at the same frame; e.g., an incremental reflow of type `style change' targeted at frame f, followed by an incremental reflow of type `dirty' targeted at frame f. (As an aside, merging incremental reflows where the second reflow is targeted at frame g, where g is an ancestor or descendant of f, is another story. We have several bugs in the current system that occur when this happens, which replacing the incremental reflow queue with a single incremental reflow tree may be able to fix. More later.) I believe this problem could be dealt with by assigning a precedence to the reflow types, and promoting the reflow command to the `most destructive'.
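The precedence merge for case 1 might look like this in miniature; the enum values are illustrative, ordered as suggested above (dirty < content changed < style changed), and are not the real reflow-command constants:

```cpp
// Illustrative reflow types ordered by increasing destructiveness,
// per the suggested precedence: dirty < content changed < style changed.
// (Names are made up, not the actual Mozilla constants.)
enum ReflowType {
    eReflowType_Dirty = 0,
    eReflowType_ContentChanged = 1,
    eReflowType_StyleChanged = 2
};

// Two commands targeted at the same frame merge to the more destructive
// type; the command with lower precedence is evicted.
inline ReflowType MergeReflowType(ReflowType a, ReflowType b) {
    return a > b ? a : b;
}
```

With this in place, posting a `dirty' reflow to a frame that already has a pending `style change' reflow would simply leave the style-change command in the tree.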
I think minimal change would be required in the Reflow methods that handle processing for the target of an incremental reflow. 2. Merging a global reflow (resize, style change) with one or more incremental reflows. I don't think this is going to be much of a problem in real-life: I suspect that the incremental reflow queue is currently empty most of the time we do a global reflow. Nevertheless, a simple solution here would be to process (flush) any incremental reflows before doing the global reflow. Although it may imply that many frames are reflowed twice, it would certainly not be any less efficient than the mechanism we have now (which processes the reflow queue after the global reflow). A more efficient solution might involve modifying the Reflow methods to unilaterally check the incremental reflow tree as the reflow progresses, regardless of the reflow `reason'. This could be implemented later. > CancelReflow would need us to maintain in each node the number of targets > in all children, or (perhaps better) search the tree to see if the branch > can be pruned (search for targets). Ideally, the number of nodes in the tree would be small, so I'd suggest the latter as a first cut. If that turns out to be inefficient, we can maintain an index (i.e. hash table) of current target frames, so removal would be O(1). > 2. Merging a global reflow (resize, style change) with one or more incremental > reflows. > > I don't think this is going to be much of a problem in real-life: I hate to butt in to a conversation that's obviously well over my head, but I've been trying to follow this discussion and this part is confusing to me. It sounds to me as if a global reflow ought to have a strictly greater level of "destruction" than an incremental reflow. In which case, surely simply _aborting_ the pending incremental reflows and doing the global reflow instead would have the desired effect? 
> It sounds to me as if a global reflow ought to have a strictly greater level of > "destruction" than an incremental reflow. The point of this discussion (see comment 86) is that this isn't true. A resize reflow (or dirty reflow) doesn't clear as much information as a style-changed reflow, so a style-changed reflow on a subtree still requires clearing more information than would be done by a resize-reflow on the whole document. At least I think so. Ahhh, okay. Sorry for the spam. An interesting observation about the URL for this bug (which moves 100 elements by their left attribute): When tracing through the blockframe that all 100 are children of (in the reflow tree), I found that all of them are being marked as resize reflows; i.e. PrepareChildIncrementalReflow() does FindLineFor(), and none of the 100 come back as a match, so it does a PrepareResizeReflow() on each one, which marks all the lines as dirty. Perhaps this is an issue with other DHTML animations as well? (Note: my testing is with the current branch). Just checked in some significant improvements to the reflow tree stuff, mostly in nsBlockFrame.cpp. I found that the DHTML entries weren't being picked up as incremental absolute frame reflows properly (which was why they turned into resize reflows). This _may_ (or may not) have been happening on the trunk for DHTML; if so, we may have a way to make a simple, safe patch that may improve DHTML perf a bit (nowhere near what this branch does, but it may be a small, safe win). I'll look into it tonight. This solves some of the few remaining editor regressions. I also removed most of the usage of BRS_ISINLINEINCREFLOW, and created a wrapper for ReflowDirtyLines() to include processing we need to do on each call. rjesup, the more I look at your changes, the more I think dbaron was right wrt. removing the inline incremental reflow stuff before we proceed here.
I've been following your changes to the block reflow code, and I am starting to feel like things are going in the wrong direction. It looks to me like we're calling ReflowDirtyLines multiple times from Reflow (via ReflowDirty), and I'm not sure that's appropriate. If you don't mind, I'd like to take a stab at implementing dbaron's idea of `reflow roots' to eliminate the inline incremental reflow optimization. I think that should allow us to proceed here with minimal intrusion on the block frame code path. As dbaron pointed out, the bulk of what ought to happen then is moving logic from PrepareChildIncrementalReflow to ReflowDirtyLines. Also, with respect to fixing other bugs with absolute positioning, can we fix those independently of this patch? I.e., could you file another bug, make a test case, so we can fix that separately on the trunk: don't fall into your old habit of lumping the kitchen sink into a single bug! ;-) Bug 135146 covers implementation of reflow roots. Go right ahead with work on BRS_ISINLINEINCREFLOW elimination; in my latest changes I started in that direction, but you understand this far better than I. I'll see if the absolute position stuff is broken in trunk and if so open a bug on it (and supply a patch). (I just found that issue in my branch 10 minutes before I left last night). Again, it won't provide more than a marginal improvement I believe. Perhaps something that notices that the only thing that changed was position might help (I think someone commented on that in a reflow/DHTML bug somewhere that's been mentioned here). The ReflowDirty/ReflowDirtyLines stuff - I agree we should probably only call it once; BRS_ISINLINEINCREFLOW was the main reason I wasn't (or at least the main reason I was unsure if I could get away with calling it once). 
Note that the way I have it currently, if a blockframe has multiple children in the tree, it calls ReflowDirty for each child - which is what would have happened at that point if the reflows hadn't been merged. If we can merge those calls to ReflowDirty, then we'll get a nice perf improvement in some cases. The current design should be no worse than the trunk code, but isn't optimal. Made a checkin to the branch that merges in waterson's bug 135146 changes (including a change to nsReflowTree.cpp to work with 135146). Also rewrote RetargetInlineIncrementalReflow() to work correctly with trees (recurses to do the actual retargeting). In preparation for waterson's comments/updates to the branch (feel free to either post a patch or simply update the branch). We also could merge the branch forward to the tip again at this point. Created attachment 78744 [details] [diff] [review] proposed modifications I'd like to propose something along these lines (this patch isn't really working yet, but it should be suitable for flavor). I'm loath to summarize before it's working, but... I've lifted the nsReflowTree idea from the rjesup/shaver patch, but promoted the nsReflowTree object to encapsulate the node/subtree idea. (I've simplified it a bit, too, using nsSmallVoidArray instead of a one-off data structure.) I've removed the reflowCommand member from nsHTMLReflowState, and replaced it with a `tree' member which is of type nsReflowTree. As reflow descends through the frame hierarchy, the nodes are pulled off the nsHTMLReflowState's `tree' member in parallel. (I think I'd like to rename nsReflowTree to nsReflowPath, or maybe just the member variable of nsHTMLReflowState to `path', but that can wait.) I've removed the notion of a `table' of reflow targets, and simply stored the reflow commands in the nsReflowTree at appropriate points in the path.
When an incremental reflow arrives at a frame, it `knows' it is a target by virtue of the fact that nsReflowTree::mReflowCommand is non-null. Like the rjesup/shaver patch, each point in the reflow flow-of-control that previously assumed a single descendant now must iterate through the nsHTMLReflowState::tree's children. Furthermore, I've tried to make logic changes so that a `reflow target' iterates through nsHTMLReflowState::tree's children. In other words, even if you've been targeted for an incremental reflow, there may still be paths `beneath' you that need to be dealt with. This patch was made against a two- or three-day-old tree, so don't try to apply it to the trunk. It's for expository purposes only! :-) waterson: check your code for nsViewPortFrame.cpp - I think you're reflowing mFrames.FirstChild potentially more than once, unless there can only be one child in mFrames for this (which is quite possible). Also, isHandled seems to be confused with your patch. Created attachment 78854 [details] [diff] [review] more functional version of above This patch actually works, to some extent. I think nsViewportFrame is better now, along with enough other things that you can actually browse. That said, table timeout reflows are still broken, as are incremental reflows to combobox child frames. I've XXX commented some scary stuff in the box code. I'm sure that there are some subtleties with nested reflows to block frames that are broken (i.e., block frame as a target with a child as a target). And MathML and SVG are both on the floor. Patch is still made against an (essentially) mozilla-1.0 tree; I'll update to the trunk tomorrow if the builds smell good. Created attachment 79014 [details] [diff] [review] working patch against current trunk The above patch passes my personal (very basic) smoketests and should be ready for public consumption. Most notably, I've ripped out the table `timeout' reflow code, which should be subsumed by the reflow tree.
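The flow-of-control change described above (loop over every child of the path node, and treat a non-null command as "this frame is a target") can be sketched with made-up types; this is only an illustration of the shape of the loop, not the actual nsHTMLReflowState code:

```cpp
#include <vector>

struct Frame;  // forward declaration

struct ReflowCommand { /* reflow type, target, etc. */ };

// Simplified stand-in for the nsReflowPath node carried on the reflow
// state: a non-null command means "this frame is a target".
struct ReflowPath {
    Frame* frame = nullptr;
    ReflowCommand* command = nullptr;
    std::vector<ReflowPath*> children;   // one entry per branch below us
};

struct Frame {
    int reflowCount = 0;

    void Reflow(ReflowPath* path) {
        ++reflowCount;
        if (path && path->command) {
            // This frame is an incremental reflow target: handle the
            // command here...
        }
        // ...and then descend *every* branch of the path, not just a
        // single "next frame" - even if we were a target ourselves,
        // there may still be paths beneath us that need to be dealt with.
        if (path) {
            for (ReflowPath* child : path->children)
                child->frame->Reflow(child);
        }
    }
};
```

A container with two targeted descendants thus reflows once itself and then visits both branches in a single pass.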
Created attachment 79016 [details] [diff] [review] as above, but diff -w (for human perusal) Checked in changes on REFLOW_20020412_BRANCH. This should build on Linux and Win32 (although I haven't tried the latter), but you'll need to *disable MathML*. (Just haven't gotten around to getting that working yet.)
- Mail headers don't show up when initially displaying a message, probably due to the fact that I've stuffed style reflows targeted at a box.
- I don't think it's necessary to remove subtrees from the nsHTMLReflowState::path during block reflow. (Why did we ever do that, anyway?)
- I should make sure that this isn't leaking nsReflowPath objects all over the place. I suspect it is.
Could you test the bug 116593 and bug 81546 testcases; does this by any chance fix those?
- Mail folder pane doesn't update when emptying trash.
- Mail message pane needs resize-reflow to get rid of horizontal scrollbar on some messages.
- won't let you look up any words. Looks like a dead presentation shell.
Stealing from rjesup, setting component to `Layout'. Ran the page loader tests on the (admittedly not all the way working) branch; here's what I saw:

BASE
  Avg. Median :  711 msec
  Minimum     :  171 msec
  Average     :  724 msec
  Maximum     : 2309 msec

BRANCH
  Avg. Median :  699 msec
  Minimum     :  150 msec
  Average     :  711 msec
  Maximum     : 2184 msec

So, a marginal speedup (I'll be generous and call it 2%), presumably due to the fact that we collapse multiple incremental reflows targeted to different containers in the tree. heikki: this does appear to fix bug 116593 (adding as dependency); however, it does not appear to fix bug 81546.
The only problem that I notice now is that we appear to be missing at least one reflow on some pages. In other words, loading any large page (like this bug report), and then doing a slight vertical resize, causes the page's width to jump slightly. Created attachment 79386 [details] [diff] [review] patch incorporating above fixes This patch (against the current trunk) incorporates the above fixes, as well as fixing the nsReflowPath leak. Created attachment 79562 [details] [diff] [review] ready for prime time? This patch (made against this morning's trunk):
- Fixes some more memory leaks (nsReflowPath::Remove was not deleting the subtree it removed),
- Makes sure that the nsBoxToBlockAdaptor prunes the incremental reflow subtree from the path after performing the reflow. This ensures that any subsequent reflows (e.g., due to addition of scrollbars in a scroll frame!) are treated as resize reflows. This fixes the extraneous horizontal scrollbar that would suddenly disappear as soon as the window was resized.
- Fixes a crash in nsViewportFrame where an incremental reflow targeted at the viewport was being incorrectly propagated as an incremental reflow to the viewport's children.
AFAICT, it's a wrap, and ready for some heavy-duty testing. These changes have been mirrored onto the REFLOW_20020412_BRANCH. (N.B., I still need to fix MathML and SVG, but I'll wait to do that until I hear a `yea' or `nay' from some people...) Oops:
1. I still need to handle incremental reflows targeted at combobox frames.
2. After thinking about a brief exchange that dbaron and I had a few days ago, I think it's probably safe to remove all of the reflow retargeting code in the block frame. (I managed to convince myself that the situation there was specific to text frames -- because there was an inline element with a scrollframe.)
3. This patch contains a fix for another bug (cf. CalculateBlockSideMargins in nsHTMLReflowState); can't recall which.
4.
I need to deal with the case where more than one reflow command is targeted at the same frame. (I think I'll just kick that command back into the reflow queue and process it in a separate pass -- I have yet to hit that case, FWIW.)

5. Need to double-check the viewport frame: does anyone else ever post UserDefined reflows? (The Viewport assumes that this means `reflow my fixed frames'; the previous code did this for any (?) user-defined reflow.)

Created attachment 79566 [details] testcase showing nsBlockFrame changes still need work

This is a test case that demonstrates why you should do something like what I described in comment 60. Steps to reproduce:
* Load testcase.
* Click "Go"
In trunk build:
* Text rewraps around float, as it should.
In branch build:
* Text doesn't rewrap around float.

This method in nsReflowPath.h is never defined:

  // Find or create a child of this node corresponding to forFrame.
  // XXX better name
  nsReflowPath *EnsureChild(nsIFrame *aFrame);

Fixed the unused method.

My assumption is that PrepareChildIncrementalReflow would mark just those lines dirty that contained frames along the reflow path (or, in the case that the reflow was targeted at a floater, it would mark _all_ frames dirty); e.g.,

  1: clean -----------------------------
  2: dirty -----XXXXX-------------------
  3: clean -----------------------------
  4: clean -----------------------------
  5: dirty --------------XXXXXXX--------
  6: clean -----------------------------

If reflowing line 2 impacted line 3, line 3 would be marked dirty and reflowed. If there was no impact, we'd continue on to line 4, which would be skipped. Line 5 would be reflowed. If the reflow in line 2 was targeted at a placeholder frame, then we'd bail out to PrepareResizeReflow, which should mark all wrapped lines as dirty. Clearly, I don't understand this as well as I should.
:-) I think it's safe to remove the last vestiges of the inline incremental reflow retargeting stuff, but I still haven't thought hard enough about it (and was hoping to defer it to a second pass). But, removal would certainly make implementing your proposal in comment #60 simpler. Anyway, thanks for the test case. I'll keep pounding on this.

Great work, Chris! I'm updating to the latest stuff with a clean tree and will start doing some real testing/debugging. I'm somewhat tied up with driver issues and getting worldgate stuff done. How does perf on DHTML look with the new stuff in an opt build?

Also, while I'm on the topic, how do we pick up the reflow path again when it goes through a floater? I haven't looked into how we do that in general, though, so it might be a silly question...

dbaron: I think that the reason your test case doesn't work is because only the width changes on the floated <div>. If I also adjust the height of the <div> (e.g., setting the height to 301px as well as increasing the width), then your test case works fine (cf. your comments in nsBlockFrame::RememberFloaterDamage). Alternatively, forcing a jump in the debugger to execute the code in RememberFloaterDamage at an opportune time makes the test case work, as well. IOW, if we could notice horizontal changes to floats, it seems to me like things ought to `just work': passing an incremental reflow to a child frame seems like it should be sufficient to account for damage caused by changes to previous frames in the flow.

Sorry, I'm a little out-of-sync. My last two comments didn't take into account what you'd written in comment #120. You wrote:

>.

So let's look at specifically how this is different from an incremental reflow where we reflow frames not along the reflow path. There are two cases:

1. The line contains a block frame. In this case, the block reflow context will set the reflow reason to resize (rather than incremental).
This causes nsBlockFrame::Reflow to PrepareResizeReflow instead of PrepareChildIncrementalReflow. The substantial difference here is that PrepareResizeReflow will dirty all the lines that are impacted by floaters. PrepareChildIncrementalReflow will dirty only the line (well, now lines) along the reflow path -- relying on PropagateFloaterDamage to detect cases where a line must be `lazily' dirtied to account for float movement.

2. The line contains inline frames. In this case, all of the frames in the line will be reflowed anyway, so it's a moot point (modulo weird cases involving box frames that want to be treated like inlines; hence my concern for retargeting reflows).

> ?))

I don't have much to say about this other than it smells right. It would also allow us to combine reflows targeted at the same frame in a useful way (instead of what I propose in point 4 of comment #115, above).

>.

If RememberFloaterDamage worked, I think you'd be right. As is, I think that the aState.IsImpactedByFloater check force-dirties lines that might not otherwise be reflowed.

> Also, while I'm on the topic, how do we pick up the reflow path again when it
> goes through a floater? I haven't looked into how we do that in general,
> though, so it might be a silly question...

In general, the reflow path is maintained in the nsHTMLReflowState object (just like the reflow command object used to be). The nsHTMLReflowState object's ctor takes a child frame; when the reflow reason is `incremental', the ctor finds the next branch in the path that contains the child frame. You'll see that there are some places where I had to explicitly add a reflow reason of `resize' to the nsHTMLReflowState's ctor (e.g., when computing collapsing margins) so that the ctor doesn't try to pluck from the path -- and wind up dereferencing a null pointer.

Working from your patch on trunk:

nsPresShell.cpp: In ~IncrementalReflow() the loop needs to start at Count()-1, not Count().
nsReflowPath.cpp: In ~nsReflowPath() the loop needs to start at Count()-1, not Count().

nsReflowPath.h: RemoveChild() should be more like this:

  void RemoveChild(nsIFrame *aFrame) {
    iterator iter = FindChild(aFrame);
    if (iter.mIndex != -1)
      Remove(iter);
  }

The whole way mIndex is used worries me. It looks rife for errors (mostly not checking for equality with EndIterator() (or mIndex == -1) before calling things like Remove()). Perhaps these have been fixed on the branch (last time I tried the branch I couldn't get it to work, probably because the rest of my tree was newer). The throbber seems to not work with the patch (and my fixes as per above), and as per the comments in the code, RetargetInlineIncrementalReflow() isn't done (though from the comments I'm not sure you think it's necessary, so I won't work on fixing it).

Yeah, I fixed those off-by-ones on the branch earlier today. I've fixed a few other things on the branch, too -- probably time for a new patch. nsReflowPath::Remove(iterator&) actually bounds-checks the iterator it's passed, so I don't think extra checking is needed in nsReflowPath::RemoveChild(nsIFrame*). Not sure I understand your other concerns about mIndex? The throbber is working for my trunk+patch build, as well as my branch build...any other changes in your tree?

Created attachment 79750 [details] [diff] [review] updated to this morning's trunk, minor bug fixes.

Current. Fixes off-by-one errors; removes the spurious bug fix mentioned in point 3 of comment #115.

RemoveChild() needs to check for FindChild() not finding the child (I hit this one and crashed); checking for mIndex == -1 seems to be the easiest way. Remove() does have an assertion of mIndex >= 0 (which triggered in the crash I had above); I guess there's no need to do more so long as we're sure no one is going to call Remove() with an iterator at the end (mIndex of -1). I have no changes in my tree except for the ones I mentioned; I pulled it around 4/15 at 11:30 am edt.
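The FindChild/Remove pattern being debated above can be shown in a standalone sketch. Everything here (PathNode, Frame, the -1 "not found" index) is a simplified stand-in for the real nsReflowPath/nsIFrame types, not actual Gecko code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-ins for the real Gecko types.
struct Frame {};

// A node in a reflow-path-like tree: each child subtree corresponds to
// one child frame of this node's frame.
struct PathNode {
    Frame *frame = nullptr;
    std::vector<PathNode *> children;

    ~PathNode() {
        for (PathNode *child : children)
            delete child;
    }

    // Return the index of the child subtree for aFrame, or -1 if there
    // is none (analogous to an iterator whose mIndex is -1).
    int FindChild(Frame *aFrame) const {
        for (std::size_t i = 0; i < children.size(); ++i)
            if (children[i]->frame == aFrame)
                return static_cast<int>(i);
        return -1;
    }

    // Bounds-checked removal: an out-of-range ("not found") index is a
    // silent no-op, which is what makes RemoveChild safe to call
    // unconditionally -- the caller needs no separate found/not-found test.
    void Remove(int aIndex) {
        if (aIndex >= 0 && aIndex < static_cast<int>(children.size())) {
            delete children[aIndex];
            children.erase(children.begin() + aIndex);
        }
    }

    void RemoveChild(Frame *aFrame) { Remove(FindChild(aFrame)); }
};
```

With the check inside Remove, the crash hit above (removing a child that FindChild never located) becomes impossible by construction.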
I'll re-apply your new patch and see. Throbber not working could have been a trunk breakage. I got this, though I'm not sure if it's related to the patch, when loading neowin.net:

###!!! ASSERTION: nsFrameManager::GenerateStateKey didn't find content by type!: 'index > -1', file /home/jesup/src/mozilla_trunk/mozilla/layout/html/base/src/nsFrameManager.cpp, line 2261
###!!! Break: at file /home/jesup/src/mozilla_trunk/mozilla/layout/html/base/src/nsFrameManager.cpp, line 2261

rjesup: thanks for testing this. I think you've got an older patch (or at least an older version of nsReflowPath.cpp). This is what it looks like in my tree:

  void
  nsReflowPath::Remove(iterator &aIterator)
  {
    NS_ASSERTION(aIterator.mNode == this, "inconsistent iterator");
    if (aIterator.mIndex >= 0 && aIterator.mIndex < mChildren.Count()) {
      delete NS_STATIC_CAST(nsReflowPath *, mChildren[aIterator.mIndex]);
      mChildren.RemoveElementAt(aIterator.mIndex);
    }
  }

neowin.net triggers frame manager assertion in a clean tree, so it's not related specifically to these changes.

Yes, it must be the older patch that tripped me up on that (I used the "ready for prime time" patch, which is now obsolete). I'll be updating today. BTW, I was doing some testing on as well as a bunch of the testcases mentioned here, and we do show significant DHTML perf improvements, even without the patch here to the DOM to use batching for SetTimeout() callbacks. There are significant regressions with (waterson has seen these). Other than that, it looks very good.

I don't think this is directly relevant to this bug as described, but it might be an interesting test to see if this style of code can actually get reflow right in a case where the current code *doesn't*. If it's irrelevant to this bug (e.g. because it's table-related) then I'm sorry for mentioning it here.
The webpage at mp3.wuffies.net (obLegal: this is my personal site, for my personal use: please don't download the mp3s from it, because that's illegal) is a table in which the first column grows substantially wider when its last row is reached. The table is also dynamically generated by a process which is often slow (well, a couple of seconds slow). Mozilla usually renders the first half of the table, then re-renders afterwards when the last row comes in. When this happens, artefacts of the original reflow are left around (specifically, bits of the second column persist too far to the left, where the first column is now). I thought this might be a good testbed for a reflow-related change... sorry if it's offtopic for this bug.

Updated to waterson's most recent patch against trunk. Throbber works fine now. No other changes in the tree other than my DOM patch to batch SetTimeout() callbacks (which, BTW, I did have in my tree when I was doing that perf testing earlier today). I don't see any regressions now on dynamic-core.net with this patch. I'm going to do some profiling on the patch and see if there are any new DHTML hotspots.

The dynamic-core.net problem appears to be timing- and cache-dependent (*sob*). waterson's linux and win32 builds should show up on in a few minutes at:

Reproducible crash with bug-129115-reflow 2002041810 test build on linux:
1) go to resource:///res/samples/test12.html (more fixed pos demo)
2) resize window so you get a scrollbar in the main frame
3) drag scrollbar thumb
trunk cvs 2002-04-19 doesn't crash

just downloaded the latest build 2002041803. I tried the following sites about which others reported problems, and didn't seem to have a problem. These web pages showed up instantly. I should mention that I have a DSL connection. I have lost track of what was putback to which gate (branch or trunk). Anyway some putback seems to have fixed these bugs.
Are the Win32 experimental reflow builds that have just shown up in the /nightly/experimental directory debug builds? Because I see a significant DHTML slowdown in the tests I've tried with my PII-400, 128Mhz, Win98. The URL in comment #12 seems about 1/2 the speed, and doesn't complete properly. The 4th box never reaches its final stopping point, and the rest don't appear at all. The fireworks example in comment #11 doesn't display properly (doesn't seem to work). The dynamic core link in comment #20 doesn't display anything, but hogs all the CPU. Test a) in comment #35 gave me a result of 550000 for the experimental reflow build, but 390000 for the 2002041617 Win32 daily on the same machine.

Actually the examples on dynamic-core.net worked very well last week (well at least they didn't hang mozilla). But since bug 129953 was backed out they don't work at all anymore. You probably knew about this though, sorry for spamming. Testing- reflow/ makes hardly any sense since bug 129953 is still open.

The builds that dawn posted are optimized. Note that they were branched from the trunk on 4/12 (although I'm not sure if anything has regressed between now and then). I'm seeing better performance (or at worst, no change) on every test that has been mentioned in the bug; however, I'm running on Linux @ 700MHz. I'll try some slower Win32 boxes today.

Some DHTML examples for testing ...
(click on open cyborg)
on the following examples just click in the grey area to start the animation:
(click on company)
(click on a red bumper)
(dragging the scroller)

That's great -- thanks for the links. I'd appreciate it more, though, if people could use these builds for normal browsing and report any crashers or layout glitches that they've found. On the branch, I've fixed the crasher that Tuukka reported, as well as the incremental table reflow problem on dynamic-core.net that caused some of the demos not to appear in the list.
I'm probably going to cut a new branch today (hopefully with fixes for bug 138057 and some of the other issues mentioned in comment 115). But in the meantime, please keep banging on the experimental bits. Thanks!

bug-129115-reflow/linux crash in. (1_0_0/2002041809/Linux does not crash.)

smaug: thanks. The crash on w3.org is the same as test20.html, so that's fixed.

Wow, Win32 really sucks! :-) Is this because of pavlov's bug 129953? If so, I agree with comment 139.

I think pav's win32 patch (backed out on branch, but checked into trunk I think) helps DHTML a LOT. I have it on the RC2 Not Suck list, so I think it makes sense to test with it. Note that 137706 is for enabling the same thing on Linux and Mac.

Using the last patch uploaded here (plus the DOM patch), I see some significant layout issues when loading the ebay "sell item" page. (Go to ebay, select sell, log in, select (say) art & antiquities. Note that most of the page isn't visible until you resize.)

Created attachment 80515 [details] [diff] [review] against 2002-04-22 trunk. Includes fixes for reflow omission in table row group, crasher in fixed frame.

I've created REFLOW_20020422_BRANCH, based off this morning's trunk. I'll post experimental builds as soon as possible (probably tomorrow morning).

Linux, Win32, and Mac builds from REFLOW_20020422_BRANCH (attachment 80515 [details] [diff] [review] versus this morning's trunk) have just been posted to: <>

Note that I _didn't_ back out pavlov's image stuff on this branch as the Win32 build I tested seemed to work reasonably well. Please continue to test, and make notes here. thanks!

Hopefully I am not wrong on this one. But bug 49095 works for me using a copy from: But the bug is there with Mozilla RC1...

Sometimes the "final painting" is missing. So you have to resize or otherwise get the Moz to repaint itself. This happens (rarely) for example in. Maybe this is the same problem as #147.
Tested:
Linux REFLOW_20020422_BRANCH : errors
Linux Trunk 2002042221       : ok

I'm testing this out on mac - on my iMac - and I'm not seeing much of a difference, if at all with the latest build from the ftp site. I guess at least it's not slower :) I wish there was an example I could point to on mac and say 'see? it's much faster in the test build' but going over the examples posted here I could see no measurable difference in speed. One oddity which is probably just my machine was that I could not see the autocomplete widget the first time I launched the test build. It showed up fine on subsequent launches though.

Okay, it sounds like there may still be some table incremental reflow bugs lurking in there. I couldn't reproduce problems on either ebay or the .fi site, above. Any other reports greatly appreciated.

It's a lot faster now, but still not close to IE.

If you're looking for table testcases, try the mp3.wuffies.net site I mentioned above. The problem occurs if it loads slow enough that you get an incremental reflow half-way through the table. If you don't get a reflow in that situation directly from the server (I almost always get one the first time I load it - the server-side code does a bunch of directory scanning of stuff that probably isn't in the OSs disk cache) then try simulating it on an artificially hobbled slow server - eg make the server pause for 5s before the letter 'L'. That shows an incremental reflow bug on all mozilla builds I've tried up to and including RC1 (I only use linux, so it might be OS-dependant, but that seems unlikely). I know that probably isn't directly related to this bug, but you did ask for incremental table reflow testcases :)

Adding topembed also. My ebay table problem (see bug 130760) did appear to be an incremental reflow issue, but it was before the current patch. I'll see if I can make it happen again with the current one. Try slow websites like realtor.com as well.
Just two nice testing URLs: and nice findings on

Just so people understand: making the changes described in this bug will by no means fix all of Mozilla's DHTML problems. It might not even fix very many of them. At this point, the most useful thing to do is to test the patch in that bug (which lays the groundwork for other improvements), and look for _reproducible regressions_ in _normal layout_. Zarro regressions means that we can land the patch on the trunk, and continue with `the next big DHTML win'. thanks.

Created attachment 80898 [details] [diff] [review] patch for review

This patch leaves combobox `broken' (but I can't seem to construct a test case that causes it to misbehave: dead code?). It deals with multiple incremental reflows with different reflow types being targeted at the same frame (even though this didn't seem to cause any problems). It also includes the patch from bug 138057. This is ready for review, IMO. There appears to still be a table (?) incremental reflow bug in here, but I figure there's enough code to go over that reviewers might as well get started if this is to have any chance of landing this millennium: I can work on tracking down the table problem in parallel.

Using Build ID 2002042219 on NT SP6a: Opening paints the left column on the page OK momentarily, but after reflowing the center column the left column is left partially covered and does not reflow unless I re-size the page, scrolling vertically, or let Windows repaint it (by opening another app over the Mozilla window and then minimizing that other app). FWIW, this page also tries to load a shockwave flash applet at the top. I don't have that plugin installed.

There have been recent checkins in the trunk that, combined with this patch, give us an enormous boost in DHTML performance especially on Windows. See bug #21762#c103. A new experimental build would be great (I'm still learning to do this myself, at least on linux...)
Created attachment 81450 [details] [diff] [review] updated patch

This patch was updated to be current with the trunk as of 2002-04-28 (accounts for some changes to block frame, etc. that have gone into the tree over the last week). Also, this fixes a problem with nested reflows targeted at block frames that dbaron pointed out to me. See <> for details. dbaron: I juggled the reflow reason mangling logic in nsBlockReflowContext::ReflowBlock and nsLineLayout::ReflowFrame to cover the cases we'd discussed...

There is a very interesting thing about the test at The animation moves perfectly until the big image with the baby isn't completely loaded. Then the performance dramatically degrades. Does this provide any valuable information?

Created attachment 81810 [details] [diff] [review] (non compiled) patch with minor changes to tables

I installed the patch dealing with tables and removed some remaining timeout reflow stuff (e.g. eReflowType_Timeout was in a case statement and 4 methods were still declared), changed the style/comments somewhat, and raised a question (see "question for waterson") for the case when a cell is both a target and has children which are targets. I did not install all of the other files and so it may not compile. I didn't change anything that would cause any additional testing. I like it very much. r=karnaze for the table files.

I read the patch and it is looking pretty good. The plan of migrating to the reflow tree/path and from there on doing further specific tweakings with the mindset of the new framework sounds attractive. Have you been maintaining a branch (or another recent patch) that I can apply (for tracing purposes in the debugger)? The bug has come a long way, and so did the comments... (and the patches...) Mind opening a follow-up bug to ease slow transfers?

I tried to compile with attachment 81450 [details] [diff] [review] applied to my tree, but SVG didn't build with that.
I got the following lines:

nsSVGOuterSVGFrame.cpp:307: `struct nsHTMLReflowState' has no member named `reflowCommand'
nsSVGOuterSVGFrame.cpp:308: `struct nsHTMLReflowState' has no member named `reflowCommand'
nsSVGOuterSVGFrame.cpp:312: `struct nsHTMLReflowState' has no member named `reflowCommand'
nsSVGOuterSVGFrame.cpp:306: warning: `class nsIFrame * target' might be used uninitialized in this function
nsSVGOuterSVGFrame.cpp:310: warning: `class nsIFrame * nextFrame' might be used uninitialized in this function
make[4]: *** [nsSVGOuterSVGFrame.o] Error 1

before the build broke. BTW, against what state should attachment 81810 [details] [diff] [review] be applied? It seems I couldn't even apply the patch successfully, so I also couldn't try compiling :(

Created attachment 81924 [details] [diff] [review] incorporate karnaze's changes; add code-level documentation

karnaze: I rolled your changes into the main patch. A style change reflow targeted at a table cell should subsume any other incremental reflows targeted at the cell's children. So I think that should be fine. Also added some code-level documentation based on private feedback from attinasi.

Created attachment 81952 [details] [diff] [review] with patch for SVG

This should fix the SVG build bustage.

[This bug is already too long and might need a continuation bug as the fixups continue] I am hitting the assertion below at startup. The problem seems to be an incorrect reflow reason being used. In #2, |reason| is initial, but aReflowState.reason is said to be incremental in the constructor in #1, causing the lookup of the path (in GetSubtreeFor) to fail (since the real reason should have been initial). From the trace, the problem goes back to IncrementalReflow::Dispatch(), where the root reflow state is always defaulted to incremental.
In this case the box overwrites the reason back to what it should be, but in the general case I wonder if this isn't suggesting that there are some frames that are never reflowed with the initial reason. ==== nsBoxToBlockAdaptor::Reflow() [...] #1 nsHTMLReflowState reflowState(aPresContext, aReflowState, mFrame, nsSize(size.width, NS_INTRINSICSIZE)); #2 reflowState.reason = reason; #3 reflowState.path = path; ==== NTDLL! 77f9eea9() nsDebug::Assertion(const char * 0x022fc984, const char * 0x022fc974, const char * 0x022fc924, int 207) line 291 + 13 bytes nsHTMLReflowState::nsHTMLReflowState(nsIPresContext * 0x012cb658, const nsHTMLReflowState & {...}, nsIFrame * 0x03eb6490, const nsSize & {...}) line 207 + 35 bytes nsBoxToBlockAdaptor::Reflow(nsBoxLayoutState & {...}, nsIPresContext * 0x012cb658, nsHTMLReflowMetrics & {...}, const nsHTMLReflowState & {...}, unsigned int & 0, int 0, int 0, int 0, int 0, int 1) line 793 nsBoxToBlockAdaptor::RefreshSizeCache(nsBoxToBlockAdaptor * const 0x03eb64ec, nsBoxLayoutState & {...}) line 363 + 49 bytes nsBoxToBlockAdaptor::GetMinSize(nsBoxToBlockAdaptor * const 0x03eb64ec, nsBoxLayoutState & {...}, nsSize & {...}) line 504 nsSprocketLayout::GetMinSize(nsSprocketLayout * const 0x03b43fe8, nsIBox * 0x03da3aa0, nsBoxLayoutState & {...}, nsSize & {...}) line 1373 nsContainerBox::GetMinSize(nsContainerBox * const 0x03da3aa0, nsBoxLayoutState & {...}, nsSize & {...}) line 536 + 38 bytes nsBoxFrame::GetMinSize(nsBoxFrame * const 0x03da3aa0, nsBoxLayoutState & {...}, nsSize & {...}) line 1119 + 20 bytes nsStackLayout::GetMinSize(nsStackLayout * const 0x0141cdd0, nsIBox * 0x03da38b4, nsBoxLayoutState & {...}, nsSize & {...}) line 124 nsContainerBox::GetMinSize(nsContainerBox * const 0x03da38b4, nsBoxLayoutState & {...}, nsSize & {...}) line 536 + 38 bytes nsBoxFrame::GetMinSize(nsBoxFrame * const 0x03da38b4, nsBoxLayoutState & {...}, nsSize & {...}) line 1119 + 20 bytes nsBoxFrame::Reflow(nsBoxFrame * const 0x03da387c, nsIPresContext * 
0x012cb658, nsHTMLReflowMetrics & {...}, const nsHTMLReflowState & {...}, unsigned int & 0) line 951 nsRootBoxFrame::Reflow(nsRootBoxFrame * const 0x03da387c, nsIPresContext * 0x012cb658, nsHTMLReflowMetrics & {...}, const nsHTMLReflowState & {...}, unsigned int & 0) line 243 nsContainerFrame::ReflowChild(nsIFrame * 0x03da387c, nsIPresContext * 0x012cb658, nsHTMLReflowMetrics & {...}, const nsHTMLReflowState & {...}, int 0, int 0, unsigned int 0, unsigned int & 0) line 784 + 31 bytes ViewportFrame::Reflow(ViewportFrame * const 0x03da3840, nsIPresContext * 0x012cb658, nsHTMLReflowMetrics & {...}, const nsHTMLReflowState & {...}, unsigned int & 0) line 603 IncrementalReflow::Dispatch(nsIPresContext * 0x012cb658, nsHTMLReflowMetrics & {...}, const nsSize & {...}, nsIRenderingContext & {...}) line 943 PresShell::ProcessReflowCommands(int 0) line 6368 PresShell::FlushPendingNotifications(PresShell * const 0x03b774f0, int 0) line 5175 nsXULDocument::FlushPendingNotifications(nsXULDocument * const 0x013e7d30, int 1, int 0) line 2503 nsXBLResourceLoader::NotifyBoundElements() line 300 nsXBLResourceLoader::StyleSheetLoaded(nsXBLResourceLoader * const 0x03cd2f58, nsICSSStyleSheet * 0x04609450, int 1) line 209 CSSLoaderImpl::InsertSheetInDoc(nsICSSStyleSheet * 0x04609450, int 2, nsIContent * 0x00000000, int 1, nsICSSLoaderObserver * 0x03cd2f58) line 1196 InsertPendingSheet(void * 0x046098a0, void * 0x03ead438) line 757 nsVoidArray::EnumerateForwards(int (void *, void *)* 0x01ce83c0 InsertPendingSheet(void *, void *), void * 0x03ead438) line 660 + 21 bytes CSSLoaderImpl::Cleanup(URLKey & {...}, SheetLoadData * 0x03eb5070) line 821 CSSLoaderImpl::SheetComplete(nsICSSStyleSheet * 0x00000000, SheetLoadData * 0x03eb5070) line 914 CSSLoaderImpl::ParseSheet(nsIUnicharInputStream * 0x03bc3a50, SheetLoadData * 0x03eb5070, int & 1, nsICSSStyleSheet * & 0x04609450) line 949 CSSLoaderImpl::DidLoadStyle(nsIStreamLoader * 0x03eb5840, nsString * 0x03c3f240 {"/* * The contents of this file 
are subject to the Netscape Public * License Version 1.1 (the "License"); you may not use t"}, SheetLoadData * 0x03eb5070, unsigned int 0) line 984 + 27 bytes SheetLoadData::OnStreamComplete(SheetLoadData * const 0x03eb5070, nsIStreamLoader * 0x03eb5840, nsISupports * 0x00000000, unsigned int 0, unsigned int 2298, const char * 0x03bc2028) line 741 nsStreamLoader::OnStopRequest(nsStreamLoader * const 0x03eb5844, nsIRequest * 0x03ebb200, nsISupports * 0x00000000, unsigned int 0) line 163 nsJARChannel::OnStopRequest(nsJARChannel * const 0x03ebb204, nsIRequest * 0x03eb5b54, nsISupports * 0x00000000, unsigned int 0) line 606 + 49 bytes nsOnStopRequestEvent::HandleEvent() line 213 nsARequestObserverEvent::HandlePLEvent(PLEvent * 0x03ebf954) line 116 PL_HandleEvent(PLEvent * 0x03ebf954) line 596 + 10 bytes PL_ProcessPendingEvents(PLEventQueue * 0x01237e50) line 526 + 9 bytes _md_EventReceiverProc(HWND__ * 0x02bc07f8, unsigned int 49488, unsigned int 0, long 19103312) line 1077 + 9 bytes USER32! 77e148dc() USER32! 77e14aa7() USER32! 77e266fd() nsAppShellService::Run(nsAppShellService * const 0x03ae9550) line 451 main1(int 1, char * * 0x00304e70, nsISupports * 0x00000000) line 1431 + 32 bytes main(int 1, char * * 0x00304e70) line 1780 + 37 bytes mainCRTStartup() line 338 + 17 bytes KERNEL32! 77e992a6() Another problem that I find is that the path ends up null anyway for the case where a frame tries to reflow a child for which no reflow command was dispatched. To be more precise, suppose that there is a recorded path from root R to frame F, and continuing downward to F's child f. Now suppose that F wishes to reflow another child g (a sibling of f) using the pseudo-code: for child in (f, g) { setup the child's ReflowState( in:F's reflowState, in:child ) -->problem is: this will give a path for f, but will give a null path for g since no reflow path for g was registered earlier. I am seeing some crasher in the JavaScripted MathML editor as a result of such a scenario ... 
}

I've fixed the issue that you've described in comment 170. (I wasn't seeing the assertions because I've been running in an optimized build...the box frame needs to create the reflow state as `resize' in this case as it does the path munging by hand.)

WRT the issue in comment 171, the container frame should convert the reflow reason to `dirty' in that case. The `new rules' for the reflow reason are:

- A reflow reason of `incremental' implies that the current frame is along the reflow path, and therefore path will not be null.
- A frame should convert the reflow reason to `dirty' (or possibly `resize') if it needs to propagate damage from an incremental reflow to siblings or children.
- A frame must propagate a reflow reason of `style changed' to all of its children.

Longer term, it may make sense to eliminate the `dirty' reflow reason, and replace it with `incremental' and a null path. (I didn't want to do that in this pass, however -- too much change in one fell swoop leads to confused code reviewers, I've found from being on the receiving end.) If you could point me to the `JS MathML editor', and let me know how to reproduce the crash, I'd be happy to try to fix the problem.

The JS MathML editor auto-installs as a .xpi:

Steps to reproduce the crash after launching the editor:
1. click somewhere in the editor window to position the caret
2. click the menu-item '(\square)' in the toolbar (this inserts it)
3. make a selection with the inner \square
4. click the menu-item Insert -> Matrix -> 2x2 (this inserts it)
5. make a selection with the \square in the first cell
6. click the menu-item '(\square)' in the toolbar -> crash

Regarding the 'new rules', it might be safer to fold a bit of that logic into the reflow state constructor then, what do you think? Since the constructor is testing to see if the frame is in the path, it might as well adjust its default reason accordingly (while the caller can still tweak it as the box is doing).
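The rules above can be condensed into a small decision function. This is an illustrative sketch with invented names (ReasonForChild is not a real Gecko API), showing how a constructor or container could default a child's reflow reason from path membership, as suggested:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Simplified stand-ins; the enum values mirror the reasons discussed.
enum ReflowReason { eReason_Initial, eReason_Incremental, eReason_Resize,
                    eReason_Dirty, eReason_StyleChange };

struct Frame {};

// One level of the reflow path: the child frames that lie on it.
struct ReflowPath {
    std::vector<Frame *> children;
    bool HasChild(Frame *f) const {
        return std::find(children.begin(), children.end(), f) != children.end();
    }
};

// Pick the reason for reflowing a child, per the rules above:
// - `style changed' propagates to every child unchanged;
// - `incremental' is kept only for children on the (non-null) path,
//   and demoted to `dirty' for everyone else;
// - all other reasons pass through as-is.
ReflowReason ReasonForChild(ReflowReason parentReason,
                            const ReflowPath *path, Frame *child) {
    if (parentReason == eReason_StyleChange)
        return eReason_StyleChange;
    if (parentReason == eReason_Incremental)
        return (path && path->HasChild(child)) ? eReason_Incremental
                                               : eReason_Dirty;
    return parentReason;
}
```

Centralizing the decision like this would keep the "incremental implies non-null path" invariant from being re-derived at every call site, which is essentially the constructor suggestion above.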
Comment on attachment 81952 [details] [diff] [review] with patch for SVG

Thanks for adding the comments and assertions :) What do you think about adding this assertion

  NS_ASSERTION(reason == eReflowReason_Incremental
                 ? path != nsnull
                 : path == nsnull,
               "path required for incremental reflow, disallowed for other reflow types");

to the nsHTMLReflowState constructors, as a POSTCONDITION? Just an idea - I don't mean to make a fuss or anything, take it or leave it (though I do believe it is good to assert the invariant somewhere).

So, I looked this stuff over, ran it as dogfood for a few days, found it to be very very good. sr=attinasi, assuming that you forward to me documentation on this change for inclusion into the layout docs.

Created attachment 82108 [details] [diff] [review] fix for assertions, MathML incremental reflow

Okay, I've fixed the MathML incremental reflow problem (knock on wood): I've made it so that the MathML container frame will not incrementally reflow children not along the reflow path (which seems like the right thing anyway). The MathML editor seems to work fine with this change -- if only I could figure out how to install the fancy fonts on Linux.

Also, I've cleaned up the bajillion assertions that my previous patch caused (shame on me, I was running an opt build that didn't show any of them). Basically, there were a half-dozen places where we'd hit that assertion in a `benign' way (because immediately after creating the reflow state, we'd whack the reflow reason). I've made changes so that we now compute the reflow reason up-front, and pass it to the reflow state's ctor.
And while we could enforce that as a post-condition of the nsHTMLReflowState's ctors, it seems superfluous since each of the non-root ctors is explicitly _setting_ path to nsnull when reason != eReflowReason_Incremental. (It's not going to change before we leave the scope of the ctor.)

That said, I _did_ add assertions to make sure that path != nsnull when constructing an nsHTMLReflowState with eReflowReason_Incremental. This required me to juggle logic in about a half-dozen places where we were getting `benign' assertions; e.g.,

nsHTMLReflowState childRS(...);
childRS.reason = /* anything but eReflowReason_Incremental */

This is `benign' because although the ctor panics, we immediately whack the reason to something else. To avoid assert-botch noise, I changed these to:

nsReflowReason reason = /* twiddle this however */;
nsHTMLReflowState childRS(..., reason);

rbs: the reflow reason twiddling is ad hoc enough that I'm not sure it can be merged systematically into the reflow state object. For example, in some places, we actually _do_ convert an incremental reflow to a dirty reflow; in other places, we just don't flow the child frames if they're not along the path.

+ nsIFrame* childFrame;
+ for (childFrame = mFrames.FirstChild();
+      childFrame != nsnull;
+      childFrame->GetNextSibling(&childFrame)) {
+   if (aReflowState.reason == eReflowReason_Incremental) {
+     // Don't incrementally reflow child frames that aren't along the
+     // reflow path.
+     // XXXwaterson inverting this loop logic (e.g., so that we
+     // iterate through the reflow path's children) would be more
+     // performant, but I don't want to be too intrusive here.
+     if (! aReflowState.path->HasChild(childFrame))
+       continue;
+   }

I see that you are skipping some reflows now. I am suspicious that this might have side-effects with stretchy MathML frames or frames which are context-sensitive (i.e., frames for which changes outside can still affect them).
E.g., if the baseFrame of an accentFrame is changed, then the accentFrame still needs to be reflowed (extended or shrunk) along with the baseFrame to ensure that the accent continues to cover the base accurately. So let's continue to keep the contract that MathML isn't doing incremental reflow for now, given that not all the details of what to do have been worked out. Let's just reflow the whole lot and move on; i.e., if the incremental reason is where the crash comes from, what about setting it to resize if path->HasChild() is false? BTW, there are other similar loops to watch out for (e.g., in mfencedFrame). The rest of the patch looks okay to me; r=rbs applies from now on to your patch, as I am going off-line now. I agree with attinasi that the `new rules' need to be documented (e.g., the non-null command which means `this == target'!).

I'm hoping to address rbs's comments off-line (because this bug is just too darn big at this point). I'll post a summary here when we resolve the issues. In the meantime, I've created a new branch: REFLOW_20020502_BRANCH, that is this morning's trunk plus attachment 82108 [details] [diff] [review]. I've just posted a win32 binary to <> I'll post a Linux binary later tonight. If all goes well, I will try to land this early next week.

Created attachment 82259 [details] [diff] [review] better fix for mathml incremental reflow

This incorporates a less aggressive (and more comprehensive) fix for handling incremental reflows that arrive at MathML frames:

- The prior patch would flow only the child frames along the incremental reflow path. This patch reflows all the child frames, using eReflowReason_Dirty for child frames that don't lie along the reflow path.
- The prior patch only tackled nsMathMLContainerFrame; this patch also handles maction, mfenced, and mroot frames.
Comment on attachment 82259 [details] [diff] [review] better fix for mathml incremental reflow

r=rbs

r=rods for forms controls

Created attachment 82719 [details] [diff] [review] after review with kin

kin and I spent most of the afternoon going through the block-and-line changes. I've made some minor mods where obvious problems came up. Overall, I think we realized that the Viewport is a sewer that needs a bit of an overhaul (I don't know if it's necessary to do that in this patch, though). I wasn't able to remember or explain my box changes very well: I'll probably want to refresh my memory and add a big comment there.

Created attachment 82759 [details] partial review comments from dbaron

Here are review comments on the first part of the patch (previous version). I didn't get through the whole thing, and I may not have time to in the next few days, so I figure some comments are better than none. Nothing major here, though.

waterson: do you mind opening a continuation bug? Since this one is already weird, and with the usual fixups that might happen even after the checkin, it is not going to be convenient to revisit the bug to double-check things out.

Created attachment 82984 [details] [diff] [review] incorporate dbaron's comments, bz & kin testing

dbaron wrote:

>|#if defined(DEBUG_dbaron) || defined(DEBUG_waterson)
>| NS_NOTREACHED("We don't have a line for the target of the reflow. "
>| "Being inefficient");
>|#endif
>Do we still hit this?

The only place that I've hit it is when we reflow the combobox (press the `edge case' button on <>, for example). I really think that this is a problem in the combobox, though.

>How about an assertion that the child isn't in the reflow path, and a
>note to remove it if we hit the assertion because we've started putting
>inlines in the reflow path again?

In the patch I'll attach shortly, I've put the code back. If we're going to leave RetargetInlineIncrementalReflow in the code, we might as well leave this there, too.
>We have a bug on that extra reflow to get the overflow area, right?
>Maybe it should be cited in the code? (And it looks like there's a
>similar problem in nsInlineFrame.)

Not sure.

>+ // But...if the incremental reflow command is a StyleChanged
>+ // reflow and its target is the current block, change the reason
>+ // to `style change', so that it propagates through the entire
>+ // subtree.
>+ nsHTMLReflowCommand* rc = mOuterReflowState.path->mReflowCommand;
>
>This needs a big XXX comment saying that this is really insufficient
>(the damage loss problem), and it should probably point to the comment
>at the beginning of nsBlockFrame::ReflowDirtyLines, and vice versa.

What is the `damage loss problem'?

>| if (state & NS_FRAME_IS_DIRTY)
>|   reason = eReflowReason_Dirty;
>
>I wonder if this check should be right after the initial setting of the
>reason to resize rather than within the check of the parent reflow
>state's type. What if a resize causes floater damage that marks
>something dirty? Shouldn't it get a dirty reflow? I doubt it really
>matters, though. (Where do we differentiate dirty and resize?)

The only difference for the block frame is whether or not PrepareResizeReflow gets called. (PrepareResizeReflow will pre-emptively mark lines dirty -- based on floater damage, wrapping, percentage-aware children, etc.) In the `dirty' case, it's not called, and so only the lines that are currently marked as dirty get reflowed (modulo floater damage propagation, etc).

>nsHTMLReflowCommand.{h,cpp}
>
>You should remove the mPrevSiblingFrame member completely, since it's
>unused.

Done.

In other news... I've fixed the incremental reflow problem with <select> frames that bz pointed out (breaking the test case in bug 82873). Looks like we'll need that code I ignored in nsComboboxControlFrame, after all! :-) rods, it'd be great if you could re-review this stuff. kin noticed some assertions firing with the sidebar.
Caught a case where the nsHTMLFrameOuterFrame needed to convert an incremental reflow to a dirty if the child wasn't on the reflow path. Also, needed to tweak the nsBoxToBlockAdaptor to pass back the `fixed' reflow path from CanSetMaxElementSize since RefreshSizeCache can cause the adaptor to reflow its frame.

kin also discovered that resizing a frame group would cause a crash. I think I checked the nsHTMLReflowState ctors in every subdirectory of layout except for layout/document. Oops.

After thinking about it some more, I think that I'm a dolt for ignoring rbs's suggestion in comment 174. I've changed the logic in the nsHTMLReflowState ctors so that if the child frame isn't along the path, it automatically converts the reflow reason from `incremental' to `dirty'. This allowed me to undo the reflow reason twiddling in MathML, as well as some other places. It also makes it a lot harder to walk off the end of the reflow path by mistake, which should mean I'll have fewer crash bugs to deal with when this gets checked in.

From what I can tell it looks right. r=rods

Comment on attachment 82984 [details] [diff] [review] incorporate dbaron's comments, bz & kin testing

r=kin@netscape.com on the block and line changes.

Changes checked in. May the Force be with us.

Checkins in the nicer bonsai format for future lookup/reference:
i was a little fast: Here results from MSIE6, same PC bz pointed out that the other numbers may be averages, so i ran on MSIE6 too. The numbers even here. But we're VERY good at sliding elements 100px to the right ;) R.K.Aa, Adding my old scrolling credits demo - but this time the scrolling DOM effect on top of the mozilla.org DOM tree. So we can measure what happens. This represents the scenario where Style Animation would be running on top of an existing web site (existing DOM tree). (credits on top of mozilla.org home page DOM) (credits running alone) The following sample now makes sense: NOw you can drag IFRAMES with ("Sidebar Tabs") Panels inside and have a nice experience. The scrolling demo that I sent is weird it's really slow, but I checked many other cases and it's really great improvement!!. Before, was not only about slow, we had this "dhtml hangs" behavior when the reflow time was bigger than the setTimeout time window. Now panels and windows on top of sites is possible and the dHTML door is open again. This is a amazing start! :) Mozilla DHTML community will be glad and I am sure the future will be nice again. As an evangelist for Gecko, a new world of demonstrations will be showcased now. This is a for sure requirement to other products using Mozilla. Thanks!! I run almost all testcases posted here in Win2K, except for those posted in comment #192 or later which were already tested after the recent fix. Comparing 20020511 to the performance of 20020509 build, 10 cases have noticeable to extremely noticeable improvementwhile in 5 cases there isn't any noticeable difference. IE6 does it very well in all cases, usually being about 2x faster (where performance results are available) except for some tests in alladyn.art.pl where there is performance parity (those tests didn't gain anything from the patch). So IE is still faster but the gap is much smaller than before. Verified on 2002-05-23-08-trunk. 
Will this fix the extremely fast cursor flicker in Netscape.com's home page search field? Mgalli believes it is because the scrolling news is making a bunch of setTimeout calls, each of which causes a reflow/update to the entire document. No, I don't believe it addresses that problem. Mass removing adt1.0.0, and adding adt1.0.1 because, we are now on 1.0.1. Adding adt1.0.1+ and mozilla1.0.1+ for checkin to 1.0.1 per the ADT and Drivers. Let's set up a tracking bug for any issues that come up and then announce this change and direct people to this tracking bug so they can update it with any issues they are seeing. Created attachment 86515 [details] [diff] [review] patch for the branch This is a patch for the MOZILLA_1_0_BRANCH. It incorporates the dependent bug fixes, as well. Chris, is the patch different from the one that landed on the trunk or is it just all of the patches merged together. If it's different, does it need a review? Changes landed on MOZILLA_1_0_BRANCH. stummala - can you pls verify the fix on the branch? thanks! I'm covering for stummala since he is on leave. I performed lots of tests on branch & IE6 for direct comparison & looking at results I think I will have to re-open this bug & mark it fixed [Removing Verified] so it does not go out of our watch radar. I'm going to attach my results table. Marking it Fixed [Removing verified to keep it on watch list] Created attachment 87245 [details] Comparison Test Results between Branch & Other Browser. Test results are attached of multiple tests I performed. Test results are direct comparison of branch & The Other Browser. Created attachment 87424 [details] Comparison Test Results between Branch & The Other Browser. Markus Hubner sent me some more test links, so I performed those tests as well for direct comparison between two browsers. Attached are results. 
Those numbers in the results file will be considerably improved using a current trunk build - great improvements through bug 124685, bug 141786, bug 85595. Prashant, could you please add tests with a current trunk to your results. stummala - can you pls verify this one on the 1.0 branch (also check around for possible regressions), then replace "fixed1.0.1" with "verified1.0.1"? thanks! Jaime Rodriguez, As I mentioned earlier I'm covering this bug for "stummala" since he is on leave. I attached testing comparison table to the bug & our branch is not really performing as expected. I re-opened this bug & marked it fixed to keep it on radar. It is really tough to decide if we really want to replace "fixed1.0.1" with "verified1.0.1" since branch is not performing greatly compared to other browser. Please check my last attachment for comparison test results. Desale, surely you should be comparing to prior versions of *this* browser (ie, prior to this fix) to see if there's a substantial improvement. If this fix has resulted in vast performance increases but we still don't quite beat IE, I'd say that's plenty to mark this bug verified. After all, this bug is "Reflow coalescing within JS called from SetTimeout" not "Be faster than IE on all DHTML testcases". If we have a crasher bug, we don't test for it to be fixed by looking to see if we crash more often than IE or not, and we don't refuse to verify any crasher bug if our MTBF is worse than IE. Each bug has to be evaluated on its own merits. Test builds with this fix against builds without it, and if the builds with it are substantially faster and have no regressions, this bug can be marked verified. Created attachment 88042 [details] Comparison Testing Results amoung 3 different Browser builds & The Other Browser. Stuart, what you said is correct & we should be comparing browser with prior versions of browser , but evaluating performance is not either BLACK or WHITE thing as it is in other bugs right ? 
If there is some crash, we can say it is fixed if there is no crash any more. In case of performance, there is no such sound result & we should be defining our level of satisfaction with performance. You really have a point that we should compare versions before check in & after check in. Hence I did it now. I'm also including results on trunk, which are better than branch. Here is attached comparison testing results table. replacing "fixed1.0.1" with "verified1.0.1" since branch comparison showed lot of improvement. Trunk was best of all three. Adding hong@netscape to cc list since hong's team have good resources to perform performance testing. hong please check out my results comparison & perform your set of testing, & please take appropriate action on this bug. Marking as Verified per Comment #217 From Prashant Desale. *** Bug 128901 has been marked as a duplicate of this bug. ***
https://bugzilla.mozilla.org/show_bug.cgi?id=129115
- Requited
- Requite

Re*quite" (rē-kwīt"), v. t. [imp. & p. p. {Requited}; p. pr. & vb. n. {Requiting}.] [Pref. re- + quit.] To repay; in a good sense, to recompense; to return (an equivalent) in good; to reward; in a bad sense, to retaliate; to return (evil) for evil; to punish. [1913 Webster]

He can requite thee; for he knows the charms
That call fame on such gentle acts as these. --Milton. [1913 Webster]

Thou hast seen it; for thou beholdest mischief and spite, to requite it with thy hand. --Ps. x. 14. [1913 Webster]

Syn: To repay; reward; pay; compensate; remunerate; satisfy; recompense; punish; revenge. [1913 Webster]

The Collaborative International Dictionary of English. 2000.
http://en.academic.ru/dic.nsf/cide/147822/Requited
In this exercise, you use Oracle Directory Services Manager to create a local store and add an entry to it. Then you create an adapter for an LDAP directory and an adapter for a database. The prerequisites for setting up Oracle Virtual Directory adapters are as follows:

- An instance of Oracle Directory Services Manager. You need to know the URL.
- An instance of Oracle Virtual Directory.
- An instance of Oracle Internet Directory with some user entries. You can use the instance from the Oracle Internet Directory tutorial.
- An Oracle Database. For this exercise, you can use the Oracle Database associated with Oracle Internet Directory, although you would not do that on a production system. When an Oracle Database is installed, it already has the HR example schema that we will use in this exercise.

For the Oracle Virtual Directory, Oracle Internet Directory, and Oracle Database, you will need to supply the following information:

- Hostname
- Port
- Administrator's name

Create a local store adapter for dc=oracle,dc=com, as follows:

1. Access Oracle Directory Services Manager, as described in "Accessing Oracle Directory Services Manager".
2. Click the Adapter tab.
3. On the Adapter page:
   - Click the Create Adapter icon and choose Local Store Adapter.
   - Enter the Adapter name LSA.
   - Leave Template set to Default.
   - Click Next.
4. On the Settings page:
   - Enter the Adapter Suffix/Namespace dc=oracle,dc=com.
   - Enter data/localDB for Database File.
   - Use the default values for the rest of the fields on the Settings page.
   - Click Next.
5. Review the summary page and click Finish if everything looks correct.

Note: If, for some reason, you decide to delete the adapter and create a new one, use a different Adapter name and a different Database File name.
Create an entry in the local store as follows:

1. Using a text editor, create an LDIF file that looks like this:

   version: 1
   dn: dc=oracle,dc=com
   objectclass: top
   objectclass: domain
   dc: oracle

2. Access Oracle Directory Services Manager, as described in "Accessing Oracle Directory Services Manager".
3. Click the Data Browser tab.
4. Highlight dc=oracle,dc=com under Client View.
5. Click the Import LDIF icon.
6. Browse to the LDIF file you created and click Open.

Create an LDAP adapter as a branch (cn=Users,dc=mydomain,dc=com), as follows:

1. Access Oracle Directory Services Manager, as described in "Accessing Oracle Directory Services Manager".
2. Click the Adapter tab.
3. On the Adapter page:
   - Click the Create Adapter icon and choose LDAP. Since we will be connecting to an OID server, leave the adapter template at Default.
   - Enter LDAP as the name.
   - Click Next.
4. On the Connection page:
   - Click the Add Host icon.
   - Leave Use DNS for Auto Discovery set to No.
   - Enter hostname and port values for your LDAP server.
   - For server proxy Bind DN and proxy password, enter the admin DN (typically cn=orcladmin) and password for your LDAP server.
   - Use the default values for the rest of the fields on the page.
   - Click Next.
5. You should see "Success!! Oracle Virtual Directory connected to all hosts." on the Connection Test page. Click Next.
6. On the Name Space page:
   - Set Pass Through Credentials to Always.
   - Set the remote base to where you wish to connect in the remote directory tree. Browse to the Users container, cn=Users,dc=mydomain,dc=com.
   - Set the Mapped Namespace to ou=LDAP,dc=oracle,dc=com.
   - Use the default values for the rest of the fields on the page.
   - Click Next.
7. Review the Summary page. Click Finish.
8. Click the Data Browser tab.
9. On the Data Browser page:
   - Click the Refresh icon.
   - Expand the containers under Adapter Browser to view the entries.
   - Expand ou=LDAP,dc=oracle,dc=com under Client View to view the entries as they appear to a client.
10. Click the Adapter tab.
11. Highlight the LDAP adapter and click the Routing tab.
On the Routing tab:

- Under General Settings, select No for Visibility so that this adapter will look like a normal branch to an LDAP client.
- Click Apply.

Go to the Data Browser tab, refresh, and verify that the data tree from the LDAP adapter is visible. Expand the containers under Client View to see if they have changed.

Create a database adapter that maps the Oracle DB sample HR schema as a branch, as follows:

1. Access Oracle Directory Services Manager, as described in "Accessing Oracle Directory Services Manager".
2. Click the Adapter tab.
3. On the Adapters page:
   - Click the Create Adapter icon. The Adapter navigation tree appears.
   - Select Database from the Adapter Type list.
   - Enter DB as the adapter name.
   - Leave the Adapter Template set to Default.
   - Click Next. The Connection screen appears.
4. On the Connection screen:
   - For Adapter Suffix/Namespace, enter ou=db,dc=oracle,dc=com.
   - For URL type, select Use Predefined Database.
   - For Database type, select the proper driver type for your database, such as Oracle Thin Drivers. JDBC Driver Class and Database URL will fill in automatically.
   - For Host, enter the hostname/IP address of your database (sta00730).
   - For Port, enter the port of your database (5521).
   - For Database name, enter dapmain.
   - For Database user, enter HR.
   - For Database password, enter the password (welcome1).
   - Click Next, which takes you to the Mapped Database Tables page.
5. On the Mapped Database Tables page:
   - Click Browse.
   - Scroll down to HR, expand the container, and click EMPLOYEES.
   - Click OK. The Map Database Tables page will now show HR.EMPLOYEES.
   - Click Next to go to the Map Object Classes page.
6. On the Map Object Classes page:
   - Click the Create a New Object Class icon.
   - Enter Object Class inetorgperson.
   - Enter RDN Attribute UID.
   - Click OK.
   - Highlight the object class you just created and click the Add Mapping Attribute icon.
7. On the Add Mapping Attribute page:
   - Enter the LDAP attribute uid and the Database Table:Field HR.EMPLOYEES:EMAIL.
   - Leave Datatype blank.
   - Click OK.
Map the LDAP attribute givenname to HR.EMPLOYEES:FIRST_NAME. Click Next. Click Finish. The new DB adapter appears on the Adapter page.

On the Adapter page, select the new Database adapter and click the Routing tab. On the Routing page:

- Under General Settings, select No for Visibility so that this adapter will look like a normal branch to an LDAP client.
- Set the DB adapter criticality to False so that if the DB is not available, OVD still responds.
- Click Apply.

You should see three adapters listed on the left side of the Adapter page: one for the local store, one for LDAP, and one for the database adapter. Click each adapter to make sure that it displays the correct namespace and configuration information you set in the adapter configuration setup. Go to the Data Browser, click the refresh icon, and observe the Client View and Adapter Browser.
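Given the mappings above (uid from EMAIL, givenname from FIRST_NAME, object class inetorgperson, namespace ou=db,dc=oracle,dc=com), a row of the HR.EMPLOYEES sample table such as first name Steven with email SKING would surface to LDAP clients roughly like the entry below. This is an illustrative sketch of the shape of the result, not captured server output:

```
dn: uid=SKING,ou=db,dc=oracle,dc=com
objectclass: inetorgperson
uid: SKING
givenname: Steven
```

You could confirm this with an ldapsearch against the Oracle Virtual Directory host and port, using ou=db,dc=oracle,dc=com as the search base.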
http://docs.oracle.com/cd/E15523_01/oid.1111/e10276/ovd_adapt.htm
Perhaps the main attraction of Scala-Native is to have compound value types, or structs. Obviously the JVM and JS runtimes don't support them. Now maybe I'm missing something obvious, but I don't see why they couldn't exist as compile-time fictions. When used within a method / function, no heap object would need to be created, and the runtime could just perform operations on the natively supported platform components. The struct could be passed as multiple parameters, its atomic platform components. The only place there would have to be object creation is for return types. They would have to be returned as tuples behind the scenes. However, particularly in code with a lot of tail loop recursion, function return is only a small percentage of processor operations, so you could get maybe up to 95% of the benefits of compound value types even on platforms where they are not natively supported.

I notice that both the System V and the Microsoft x86-64 calling conventions only use one register for return values, using the stack for 2nd and subsequent values. I've often wondered if this was a mistake. But maybe the reason for this is that in typical code far more values are passed as parameters than are returned from functions, supporting the proposition that native compound return types are not vital to fairly efficient code. There would then be the question of dealing with collections of compound value types, but my guess is that this should be doable using byte arrays. Again, maybe there is something I'm missing, so I'm happy to be corrected if my thinking is naive.

We can't use byte arrays on the JVM unless we're willing to pay a big cost for reconstructing values with the right type on every access. However, I think it would be easy to use type classes to direct the creation of a struct of arrays instead of an array of structs. Something close to what Array already does (with ClassTag).

class FlatArray[T:StructRep](init: => T) { ...
}

This would be backed by a normal Array for Ts that are not structs, and by several arrays for structs (one per field), which would avoid boxing. Adapting collections to make use of this would probably be a little harder (it would require using the type class everywhere). Currently ArrayBuffer, for example, doesn't even bother with Array's ClassTag and uses an internal Array[AnyRef]. In general, I agree that it would be very nice to have value classes that erase to several values when used in parameters, class fields and array elements. Probably related: Automating ad hoc data representation transformations.

Exploring better syntax for structs and making them more natural is one of the goals for the 0.4 release of Scala Native. We've found tuple-like CStructN[T1, ..., TN] to be too cumbersome in practice. Some initial design notes are already available. Implementation-wise, passing structs around is generally a complete non-issue on Scala Native. LLVM provides excellent support for this and we'll just use built-in struct and array value types. Passing structs across the C boundary requires some additional care to respect the C ABI, but it's feasible to support it.

@densh My big concern is maintaining a common code base across native, JVM and JS. Will I be able to write higher-level code that uses a common interface, while utilising Scala Native structs in my Scala Native versions? Currently my source code is 99% platform independent and only 1% JVM / Scala.js specific. Can Scala Native maintain a common syntax with the JVM / JS platforms while still leveraging native efficiency? If we had compound data types this would not only be a great boon for the JVM / JS platforms but would make keeping a common syntax much easier.

@LPTK Do we have to reconstruct values from the wrapped byte Array every time?
The map, flatMap and foreach functions on the compound-value collection class could act on the constituent members without ever reconstructing the compound value. I use Vec2(x: Double, y: Double) a lot in my code, and at one point I created a specialist collection for Vec2's using a wrapped Array[Double], but then I started using other classes such as:

case class Dist2(x: Dist, y: Dist)
class Dist(val metres: Double) extends AnyVal

So I scrapped my specialist collection code and started wondering about a more generic solution. Should I create a general wrapped array class for the product of 2 Doubles, for a struct of any number of Doubles, or a more general case for all compound value types? Anyway, I presume this must be a common problem that deserves a standardised language / library solution.

I don't know a way to not do so, unless you use an unsafe "off-heap memory" hack. @densh can probably confirm (or contradict).

@LPTK OK, what's wrong with this in terms of efficiency? It's pretty rough and ready, but hopefully you get the idea.
trait AtomVal[A] { def numBytes: Int def loadVal(byteBuf: java.nio.ByteBuffer, posn: Int): A def storeVal(byteBuf: java.nio.ByteBuffer, posn: Int, value: A): Unit } class ValSeq2[A, B](length: Int)(implicit val ev1: AtomVal[A], ev2: AtomVal[B]) { @inline def elemLen: Int= ev1.numBytes + ev2.numBytes @inline def bytesLen = elemLen * length val arr: Array[Byte] = new Array[Byte](elemLen * length) val buf = java.nio.ByteBuffer.wrap(arr) def storeAt(posn: Int, inp1: A, inp2: B): Unit = { val offset: Int = posn * elemLen ev1.storeVal(buf, offset, inp1) ev2.storeVal(buf, offset + ev1.numBytes, inp2) } def loadCompound(posn: Int): (A, B) = { val offset: Int = posn * elemLen val r1 = ev1.loadVal(buf, offset) val r2 = ev2.loadVal(buf, offset + ev1.numBytes) (r1, r2) } def map[R](f: (A, B) => R): Seq[R] = { var acc: Seq[R] = Seq() var i: Int = 0 while(i < bytesLen) { val el1: A = ev1.loadVal(buf, i) i += ev1.numBytes val el2: B = ev2.loadVal(buf, i) i += ev2.numBytes acc :+= f(el1, el2) } acc } } object ValSeq2 { def apply[A, B](len: Int)(implicit ev1: AtomVal[A], ev2: AtomVal[B]): ValSeq2[A, B] = new ValSeq2[A, B](len) } package object pExp { implicit object IntImplicitAtomVal extends AtomVal[Int] { val numBytes = 4 override def loadVal(byteBuf: java.nio.ByteBuffer, posn: Int): Int = byteBuf.getInt(posn) override def storeVal(byteBuf: java.nio.ByteBuffer, posn: Int, value: Int): Unit = byteBuf.putInt(posn, value) } implicit object DoubleImplicitAtomVal extends AtomVal[Double] { val numBytes = 8 override def loadVal(byteBuf: java.nio.ByteBuffer, posn: Int): Double = byteBuf.getDouble(posn) override def storeVal(byteBuf: java.nio.ByteBuffer, posn: Int, value: Double): Unit = byteBuf.putDouble(posn, value) } implicit object BooleanImplicitAtomVal extends AtomVal[Boolean] { val numBytes = 1 override def loadVal(byteBuf: java.nio.ByteBuffer, posn: Int): Boolean = byteBuf.get(posn) match { case 1 => true case 0 => false case _ => throw new Exception("Wrong value in Boolean Byte") 
}
override def storeVal(byteBuf: java.nio.ByteBuffer, posn: Int, value: Boolean): Unit = byteBuf.put(posn, if(value) 1.toByte else 0.toByte)
}
}

object ExpApp extends App {
val c = ValSeq2[Int, Double](4)
c.storeAt(0, -457, 2.12)
c.storeAt(3, -5, 800.8)
println(c.loadCompound(0))
println(c.loadCompound(3))
val c2 = ValSeq2[Int, Boolean](3)
c2.storeAt(0, 100, true)
c2.storeAt(1, 789, true)
c2.storeAt(2, 5, false)
println(c2.loadCompound(1))
println(c2.loadCompound(2))
println(c2.map((i, b) => if (b) i * 2 else i))
}

It deserves a JVM-level solution, which is already in the works. Look at Project Valhalla. Project Valhalla is not only about compound value types but also about specializing generics to avoid boxing as much as is feasible while retaining full functionality (OK, except identity-related features, as lack of identity is the foundation of value types). Using byte arrays on the JVM and JS for representing value classes would prohibit them from having reference-type members. Value types as proposed in Project Valhalla can contain reference types, primitive types as well as other (nested) value types - so it seems the JVM will be able to fully support value types of any form.

OTOH, it doesn't seem that optimizations for compound value types in JS are feasible. While JS has a DataView type that provides various methods for viewing the buffer as double, long, int, etc., it doesn't expose any method to save or load references to/from a byte buffer. Remember also that C of course allows storing references (or pointers) in structs. Therefore, if you want a way of encoding compound value types that would work (i.e., would e.g. allow flat arrays of value types) on all platforms (native, JVM and JS), then it wouldn't be compatible with the C structs already supported by Scala-Native.

Your loadCompound method reconstructs a tuple of values on every access. At this point, why not just store the tuple in the first place?
Also, collections will want to abstract over the structs that they store, without resorting to ad-hoc `ValSeqN` alternatives. As soon as you make collections generic, you can't even use the appropriate `FunctionN` function in methods like `foreach`. You'll have to use `Function1` everywhere, so you'll need to box here too.

@tarsa Ok, here's a new platform-independent file:

```scala
sealed trait AtomVal[A] { def numBytes: Int }

object AtomVal {
  implicit case object IntAtom extends AtomVal[Int] { def numBytes: Int = 4 }
  implicit case object DoubleAtom extends AtomVal[Double] { def numBytes: Int = 8 }
}

abstract class Struct2[A, B, R](implicit val ev1: AtomVal[A], val ev2: AtomVal[B]) {
  def elemLen: Int = ev1.numBytes + ev2.numBytes
  val apply: (A, B) => R
}

class Struct3[A, B, C, R](implicit val ev1: AtomVal[A], val ev2: AtomVal[B], val ev3: AtomVal[C]) {
  def elemLen: Int = ev1.numBytes + ev2.numBytes + ev3.numBytes
}
```

Here's a Scala.js implementation, but one that can have exactly the same interface as a JVM implementation:

```scala
class ValSeq2[A, B, R](length: Int)(implicit ev: Struct2[A, B, R]) {
  @inline def elemLen = ev.elemLen
  def atom1Len = ev.ev1.numBytes
  def bytesLen = elemLen * length

  val buf = new typedarray.ArrayBuffer(bytesLen)
  val vw = new typedarray.DataView(buf)

  def storeElem(ind: Int, inp1: A, inp2: B): Unit = {
    storeAtom1(ind, inp1); storeAtom2(ind, inp2)
  }

  def getElem(ind: Int): R = ev.apply(getAtom1(ind), getAtom2(ind))

  def sMap[C, D, R2](f: (A, B) => (C, D))(implicit evR2: Struct2[C, D, R2]): ValSeq2[C, D, R2] = {
    val vs2 = new ValSeq2[C, D, R2](length)
    (0 until length).foreach { i =>
      val a = getAtom1(i)
      val b = getAtom2(i)
      val (c, d) = f(a, b)
      vs2.storeAtom1(i, c)
      vs2.storeAtom2(i, d)
    }
    vs2
  }

  def getTuple(ind: Int): (A, B) = (getAtom1(ind), getAtom2(ind))

  def storeAtom1(ind: Int, inp: A): Unit = {
    val byteInd = ind * elemLen
    ev.ev1.asInstanceOf[AnyRef] match {
      case AtomVal.IntAtom    => vw.setInt32(byteInd, inp.asInstanceOf[Int])
      case AtomVal.DoubleAtom => vw.setFloat64(byteInd, inp.asInstanceOf[Double])
    }
  }

  def getAtom1(ind: Int): A = {
    val byteInd = ind * elemLen
    ev.ev1.asInstanceOf[AnyRef] match {
      case AtomVal.IntAtom    => vw.getInt32(byteInd).asInstanceOf[A]
      case AtomVal.DoubleAtom => vw.getFloat64(byteInd).asInstanceOf[A]
    }
  }

  def storeAtom2(ind: Int, inp: B): Unit = {
    val byteInd = ind * elemLen + atom1Len
    ev.ev2.asInstanceOf[AnyRef] match {
      case AtomVal.IntAtom    => vw.setInt32(byteInd, inp.asInstanceOf[Int])
      case AtomVal.DoubleAtom => vw.setFloat64(byteInd, inp.asInstanceOf[Double])
    }
  }

  def getAtom2(ind: Int): B = {
    val byteInd = ind * elemLen + atom1Len
    ev.ev2.asInstanceOf[AnyRef] match {
      case AtomVal.IntAtom    => vw.getInt32(byteInd).asInstanceOf[B]
      case AtomVal.DoubleAtom => vw.getFloat64(byteInd).asInstanceOf[B]
    }
  }
}
```

I'd like to get rid of the A and B type parameters on the `ValSeq2`. There must be a way to derive A and B from the implicit `ev: Struct2[_, _, R]` parameter. Here are some example classes:

```scala
case class ExID(v1: Int, v2: Double)
object ExID {
  implicit object Ints2Implicit extends Struct2[Int, Double, ExID] {
    override val apply = ExID.apply
  }
}

case class ExDD(v1: Double, v2: Double)
object ExDD {
  implicit object Ints2Implicit extends Struct2[Double, Double, ExDD] {
    override val apply = ExDD.apply
  }
}
```

And here's a simple example using the `sMap` method:

```scala
object EJsApp {
  def main(args: Array[String]): Unit = {
    val v = new ValSeq2[Int, Double, ExID](3)
    v.storeElem(0, 27, 56.001)
    v.storeElem(1, -987, -1987.001)
    v.storeElem(2, 2, 2.02)
    (0 until 3).foreach(i => println(v.getElem(i)))

    val v2 = v.sMap[Double, Double, ExDD]((a, b) => (a + 3.0, b - 56))
    (0 until 3).foreach(i => println(v2.getElem(i)))
  }
}
```

@LPTK I've just studied your response and I think your suggestion of using multiple arrays is a superior solution. Either way, the problem of collections of value structs seems eminently solvable.
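For reference, here is a minimal sketch (my own, with hypothetical names) of the "multiple arrays" layout suggested above, i.e. a structure-of-arrays: one primitive array per field, specialized here to the `Int`/`Double` pair used in the examples. Loads never allocate a tuple, and both JVM and JS backends can compile the field accesses to plain primitive reads.

```scala
// Hypothetical structure-of-arrays version of the Int/Double ValSeq2:
// one primitive array per field instead of one interleaved byte buffer.
final class SoaSeq2(val length: Int) {
  private val field1 = new Array[Int](length)
  private val field2 = new Array[Double](length)

  def store(i: Int, a: Int, b: Double): Unit = {
    field1(i) = a
    field2(i) = b
  }

  // Fields are read individually, so no (Int, Double) pair is boxed per access.
  def f1(i: Int): Int = field1(i)
  def f2(i: Int): Double = field2(i)

  // Traversal passes the fields as separate arguments; only the closure call
  // itself may box, not the element representation.
  def foreach(f: (Int, Double) => Unit): Unit = {
    var i = 0
    while (i < length) { f(field1(i), field2(i)); i += 1 }
  }
}
```

Unlike the byte-buffer design, a structure-of-arrays can also hold reference-typed fields (an `Array[AnyRef]` column), which addresses tarsa's objection about reference members.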
And I would still assert my original contention that we could have 80-90% of the benefits of structs on the current JVM and JS platforms. I do think that having value classes without object references would be very useful. Such data is a lot easier to serialise and persist, although your multiple-arrays approach would allow refs anyway.

@RichType Given that your proposition is incompatible with both Scala's `AnyVal` and Scala-Native's `CStruct`, you basically need a completely separate construct. What's stopping you from implementing it as e.g. a macro? Even if it would be in a simplified form, you would still have a concrete mechanism to benchmark.

I think that the idea of unrolling a compound parameter into multiple method parameters severely limits the polymorphism opportunities. A method with a different number of arguments can't override a method from its superclass. Overall, (I know it may sound rude, but) I think the proposed idea is too cumbersome to use and too limited to become common. A small community and (relatively) high complexity would mean lots of bugs and a high maintenance cost for little benefit. OTOH maybe I'm somewhat biased, because I'm counting on Project Valhalla, which is a complete and powerful solution.
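For context (my own illustration, not from the thread): Scala's existing `AnyVal` value classes are restricted to a single underlying field, which is precisely why a compound pair such as `(Int, Double)` cannot be made allocation-free today and why the thread reaches for byte buffers, multiple arrays, or Project Valhalla.

```scala
// Today's AnyVal value classes may wrap exactly one field; in most call
// positions the compiler passes the underlying Int unboxed.
final class UserId(val raw: Int) extends AnyVal {
  def next: UserId = new UserId(raw + 1)
}

// A two-field analogue is rejected by scalac with an error along the lines of
// "value class needs to have exactly one val parameter":
//   class Point(val x: Int, val y: Double) extends AnyVal   // does not compile
```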
https://contributors.scala-lang.org/t/why-cant-we-have-compound-value-types-on-jvm-and-js/1205
This action might not be possible to undo. Are you sure you want to continue? Selenium Documentation . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . .3 Accessors/Assertions . . . . . . . 7. . . . . .4 Locating UI Elements . . . . . . . . . . . .4 Locator Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 8 9 . 6. . . . . . . . . . . . . . . . . . . . . .10 Organizing Your Test Scripts . . Actual Content? 7. . 6. . . . . . . . . . . . . . . . . . . . . .5 Location Strategy Tradeoffs . . . . . .2 Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Introducing Test Design . . . . . . . 6. . . . . . . .12 Handling HTTPS and Security Popups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7. . . . . 6. . . . . . . . . . . Verify? Element vs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7. . . . . . . . . . . . JavaScript and Selenese Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9. . . . . . . . . . . . . . . . . . . . . . . .7 Reporting Results . . . . . . . . . . . .9 Server Options .5 Programming Your Test . . . . . . . . . Alerts. . . . Selenium-Grid User-Extensions 9. . . .6 Testing Ajax Applications . . . . . . 105 ii . . . . . . . . . . . . . . . . . . .4 From Selenese to a Program . . . . . . 9. .7 5. . . . . . . . . . . . . . . . . . . .8 5. . . . . . . . . . . . . . . . . . . . . 7. . .1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6. . . . . . . . . . . 7. . . . . . . . . . . . . . . . . . . . . . . . . 6. . 6. . . . . . . . . . . . . . . . . . . . . . . . . . .2 How Selenium-RC Works . . . . . . . . . . . . . . . . .The Selenese Print Command . . . . . . . . . 
.6 Using User-Extensions With Selenium RC . . . . Store Commands and Selenium Variables . . . . . . . . . . . . . . 9. . . . . . . . . .11 Selenium-RC Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Popups. . . . . . . . .10 Specifying the Path to a Specific Browser . . . . . . . . . . . . . . . . . . . and Multiple Windows . . . . . . . . .8 Adding Some Spice to Your Tests . . . . . . . . . . . . . . . . . . . . .5 Using User-Extensions With Selenium-IDE 9.13 Supporting Additional Browsers and Browser Configurations 6. 7 Test Design Considerations 7. . . . . . . . . . . . . . . .4 5. . . . . . . . . .3 Verifying Expected Results: Assert vs. . . . . . . . . . . . . . . . . . . . . .9 5. . . . 6. . . . . . . . . . . . . . . . . . . . . . . . 6. . . . . . . . . . 6. . . . . . . . . . . . . . . .11 Organizing Your Test Suites . . . . . . . . . . . . . . . . . . . . . . .8 Bitmap Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10 6 The “AndWait” Commands . . . . . 7. echo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 What to Test? . . . The waitFor Commands in AJAX applications Sequence of Evaluation and Flow Control . . . . . 7. . . . . . . . . . . . . . . . . . . . . . . .6 5. . . . . . . 7. . . . . . 7. . . . . .NET client driver configuration 11 Java Client Driver Configuration 105 11. . . . . .3 Installation . . . . . . . . . . . . . . . 9. . . . . . . . . . . . . . . . . . . . . . . . . 7. . . . . . 6. . . . .5 5. . . . . . . . . . . . . . . . . . . . .6 Learning the API . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 UI Mapping . . . . .1 Configuring Selenium-RC With Eclipse . . . . 44 44 44 45 46 46 47 49 49 49 51 53 57 61 62 64 67 71 71 75 76 76 83 83 83 85 85 86 87 87 89 89 89 89 90 93 95 95 95 95 96 97 97 101 Selenium-RC 6. . . . . . . . . 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . .12 Handling Errors . .14 Troubleshooting Common Problems . . . . .9 Solving Common Web-App Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 10 . . . . . . . . . . . . . . . . .2 Configuring Selenium-RC With Intellij . . . . . . . . . . . . . . . . . . . . .11. . . . . . . . . . . . . . . . . . . .1 Useful XPATH patterns . . . . . . . 121 12 Python Client Driver Configuration 133 13 Locating Techniques 137 13. . . . . . . 137 iii . . . . . 137 13.2 Starting to use CSS instead of XPATH . . . . . . . . . . . . . . . . iv . Selenium Documentation.0 Contents: CONTENTS 1 . Release 1. Release 1.Selenium Documentation.0 2 CONTENTS . is unmatched by available proprietary tools. Whether you are brand-new to Selenium. There are planned areas we haven’t written yet. Thanks very much for reading. we have written the beginning chapters first so newcomers can get started more smoothly. It’s quite different from other tools. We feel its extensibility and flexibility.CHAPTER ONE NOTE TO THE READER Hello. and welcome to Selenium! The Documentation Team would like to welcome you. we have aimed to write so that those completely new to test automation will be able to use this document as a stepping stone. We are very excited to promote Selenium and. Also. to expand its user community. This document will be a ‘live’ document on the SeleniumHQ website where frequent updates will occur as we complete the additional planned documentation. In short. We have worked very. very hard on this document. we believe this documentation will truly help to spread the knowledge around. along with its tight integration with the browser. We truly believe you will be similarly excited once you learn how Selenium approaches test automation. Please realize that this document is a work in progress. experienced users and “newbies” will benefit from our Selenium User’s Guide. hopefully. or have been using it for awhile. 
We have also already added some valuable information that more experienced users will appreciate. – the Selenium Documentation Team 3 . However. we really want to “get the word out” about Selenium. No doubt. Why? We absolutely believe this is the best tool for web-application testing. and to thank you for being interested in Selenium. Release 1.Selenium Documentation. Note to the Reader .0 4 Chapter 1. If an application has a very tight deadline. it can be argued that disciplined testing and quality assurance practices are still underdeveloped in many organizations.. software applications today are written as web-based applications to be run in an Internet browser. if the application’s user interface will change considerably in the near future. and it’s imperative that the testing get done within that time frame. perhaps most. there is currently no test automation available. then any automation would need to be rewritten. Test automation means using a tool to run repeatable tests against the target application whenever necessary.CHAPTER TWO INTRODUCING SELENIUM 2. However. or lack the skills to perform. Utilizing these alternatives would in most cases greatly improve the efficiency of their software development by adding efficiencies to their testing. In an era of continuously improving software processes. Test automation is often the answer. At times. Also. such as eXtreme programming (XP) and Agile. There are times when manual testing may be more appropriate. For instance. For the short term. The effectiveness of testing these applications varies widely among companies and organizations. automation has specific advantages for improving the long-term efficiency of a software team’s testing processes. sometimes there simply is not enough time to build test automation. manual testing may be more effective. however there are alternatives to manual testing that many organizations are unaware of.1 To Automate or Not to Automate? That is the Question! 
Is automation always advantageous? When should one decide to automate test cases? It is not always advantageous to automate test cases.2 Test Automation for Web Applications Many. Software testing is often conducted manually. 5 . then manual testing is the best solution. this is effective. 2 Selenium-RC (Remote Control) Selenium-RC allows the test automation developer to use a programming language for maximum flexibility and extensibility in developing test logic. We hope this guide will assist in “getting the word out” that quality assurance and software testing have many options beyond what is currently practiced. Selenium-IDE has a recording feature. It operates as a Firefox add-on and provides an easy-to-use interface for developing and running individual test cases or entire test suites. which allows the user to pick from a list of assertions and verifications for the selected location. These operations are highly flexible. 2. Although Selenium-IDE is a Firefox only add-on. 6 Chapter 2. There are a number of commercial and open source tools available for assisting with the development of test automation. teaches its most widely used features.Selenium Documentation. 2. the programming language’s iteration support can be used to iterate through the result set. Also. if the application under test returns a result set. For instance. Selenium provides a rich set of testing functions specifically geared to the needs of testing of a web application. This guide introduces Selenium.1 Selenium-IDE Selenium-IDE is the Integrated Development Environment for building Selenium test cases. Each one has a specific role in aiding the development of web application test automation. tests created in it can also be run against other browsers by using Selenium-RC and specifying the name of the test suite on the command line. One of Selenium’s key features is the support for executing one’s tests on multiple browser platforms. 
Selenium is possibly the most widelyused open source solution. Most are related to the repeatability of the tests and the speed at which the tests can be executed. We hope this user’s guide and Selenium itself provide a valuable aid to boosting the reader’s efficiency in his or her software testing processes.0 There are many advantages to test automation. and provides useful advice in best practices accumulated from the Selenium community. It also has a context menu (right-click) integrated with the Firefox browser. Many examples are provided. It is our hope that this guide will get additional new users excited about using Selenium for test automation. allowing many options for locating UI elements and comparing expected test results against actual application behavior. Introducing Selenium .4 Selenium Components Selenium is composed of three major tools. technical information on the internal structure of Selenium and recommended uses of Selenium are provided as contributed by a consortium of experienced Selenium users. Release 1. This user’s guide will assist both new and experienced Selenium users in learning effective techniques in building test automation for web applications. Selenium-IDE also offers full editing of test cases for more precision and control.4.4. and if the automated test program needs to run tests on each element in the result set.3 Introducing Selenium Selenium is a robust set of tools that supports rapid development of test automation for web-based applications. calling Selenium commands to run tests on each item. which will keep account of user actions as they are performed and store them as a reusable script to play back. 2. 2. Selenium-RC support for multiple programming and scripting languages allows the 2. run tests Start browser. perhaps. run tests Start browser. Release 1.4. Java. run tests Start browser. Linux. Selenium’s strongest characteristic when compared with proprietary test automation tools and other open source solutions. 
Supported Browsers 7 . 2. 2. When tests are sent to the hub they are then redirected to an available Selenium-RC. run tests Under development Start browser. Mac Windows.0 Selenium-RC provides an API (Application Programming Interface) and library for each of its supported languages: HTML. With Selenium-Grid multiple instances of Selenium-RC are running on various operating system and browser configurations. there may be technical limitations that would limit certain features.3 Selenium-Grid Selenium-Grid allows the Selenium-RC solution to scale for large test suites or test suites that must be run in multiple environments.5. each of these when launching register with a hub. run tests Partial support possible** Operating Systems Windows.0 Beta-1: Record and playback tests Selenium-RC Start browser. and Ruby. There are multiple ways in which one can add functionality to Selenium’s framework to customize test automation for one’s specific testing needs. Linux. Perl. Linux.0 Beta-1 & 1. Linux. Mac Windows Windows Mac Mac Windows. ** Selenium-RC server can start any executable.. but depending on browser security settings. PHP. Python.6 Flexibility and Extensibility You’ll find that Selenium is highly flexible. with the entire test suite theoretically taking only as long to run as the longest individual test. Mac Windows. This allows for running tests in parallel. This is.Selenium Documentation. which will launch the browser and run the test. This ability to use Selenium-RC with a highlevel programming language to develop test cases also allows the automated testing to be integrated with a project’s automated build environment. 2. run tests Start browser. C#.5 Supported Browsers Browser Firefox 3 Firefox 2 IE 8 IE 7 Safari 3 Safari 2 Opera 9 Opera 8 Google Chrome Others Selenium-IDE 1. run tests Start browser.0 Beta-2: Record and playback tests 1. run tests Start browser. 
This allows users to customize the generated code to fit in with their own test frameworks.7 About this Book This reference documentation targets both new users of Selenium and those who have been using Selenium and are seeking additional knowledge. Architecture diagrams are provided to help illustrate these points. or configurations.Selenium Documentation. Selenium Commands Describes a subset of the most useful Selenium commands in detail. User extensions Presents all the information required for easily extending Selenium. We do not assume the reader has experience in testing beyond the basics. It introduces the novice to Selenium test automation. https requests. Selenium-IDE allows for the addition of user-defined “user-extensions” for creating additional commands customized to the user’s needs. verifications and assertions can be made against a web application. Also. This section allows you to get a general feel for how Selenium approaches test automation and helps you decide where to begin. We cover examples of source code showing how to report defects in the application under test. The remaining chapters of the reference present: Selenium Basics Introduces Selenium by describing how to select the Selenium component most appropriate for your testing tasks. this section describes some configurations available for extending and customizing how the Selenium-IDE supports test case development. Also provides a general description of Selenium commands and syntax. that Selenium-RC supports are described. pop-ups and the opening of new windows. Also. We explain how your test script can be “exported” to the programming language of your choice. Finally.. Many examples are presented in both a programming language and a scripting language. A number of solutions to problems which are often difficult for the new user. The experienced Selenium user will also find this reference valuable. 
This chapter also describes useful techniques for making your scripts more readable when interpreting defects caught by your Selenium tests. 8 Chapter 2. This includes handling Security Certificates. This chapter shows what types of actions. Test Design Considerations Presents many useful techniques for using Selenium efficiently. Finally. Selenium-RC Explains how to develop an automated test program using the Selenium-RC API. Introducing Selenium . Selenium is an Open Source project where code can be modified and enhancements can be submitted for contribution. 2. Selenium-IDE Teaches how to build test cases using the Selenium Integrated Development Environment. The various modes. We also cover techniques commonly asked about in the user forums such as how to implement data-driven tests (tests where one can vary the data between different test passes). the installation and setup of Selenium-RC is covered here. Release 1. are described in this chapter.0 test writer to build any logic they need into their automated testing and to use a preferred programming or scripting language of one’s choice. Selenium-Grid This chapter is not yet developed. This includes scripting techniques and programming techniques for use with Selenium-RC. along with their trade-offs and limitations. 8. Each chapter originally had a primary author who kicked off the intial writing.8. 2. but in the end. 2. and to Amit Kumar for participating in our discussions and for assisting with reviewing the document. The Documentation Team 9 . He also set us up with everything we needed on the SeleniumHQ website for developing and releasing this user’s guide. Release 1. His enthusiasm and encouragement definitely helped drive this project. Their reviewing and editorial contributions proved invaluable. Also thanks goes to Andras Hatvani for his advice on publishing solutions. 
his support has been invaluable.2 Current Authors • Mary Ann May-Pumphrey • Peter Newhook In addition to the original team members who are still involved (May ‘09). Their enthusiasm and dedication has been incredibly helpful. we must recognize the Selenium Developers.8.Selenium Documentation. and the continued efforts of the current developers.3 Acknowledgements A huge special thanks goes to Patrick Lightbody. 2. and Peter have recently made major contributions. Patrick has helped us understand the Selenium community–our audience. the reader. Without the vision of the original designers. Mary Ann is actively writing new subsections and has provided editorial assistance throughout the document.8 The Documentation Team 2. We hope they continue to be involved. As an administrator of the SeleniumHQ website. And of course. They have truly designed an amazing tool.0 2. each of us made significant contributions to each chapter throughout the project. Mary Ann.8. we would not have such a great tool to pass on to you.1 The Original Authors • Dave Hunt • Paul Grandjean • Santiago Suarez Ordonez • Tarun Kumar The original authors who kickstarted this document are listed in alphabetical order. Peter has provided assistance with restructuring our most difficult chapter and has provided valuable advice on topics to include. Each of us contributed significantly by taking a leadership role in specific areas. Introducing Selenium .0 10 Chapter 2.Selenium Documentation. Release 1. the Selenium community is encouraging the use Selenium-IDE and RC and discouraging the use of Selenium-Core. Ajax functionality. Selenium-IDE is also very easy to install. It’s simple to use and is recommended for lesstechnical users. event handling. 11 . One can run test scripts from a web-browser using the HTML interface TestRunner. In addition Selenium commands support testing of window size.1 Getting Started – Choosing Your Selenium Tool Most people get started with Selenium-IDE. 
At the time of writing (April 09) it is still available and may be convenient for some.2 Introducing Selenium Commands 3. such as testing each element of a variable length list requires running the script from a programming language. It’s an easy way to get familiar with Selenium commands quickly. running and developing tests. submitting forms. Support for Selenium-Core is becoming less available and it may even be deprecated in a future release. The Selenium-IDE can serve as an excellent way to train junior-level employees in test automation. and table data among other things. Since the development of SeleniumIDE and Selenium-RC. Some testing tasks are too complex though for the Selenium-IDE. Selenium-Core is another way of running tests. The command set is often called selenese. pop up windows. one can test the existence of UI elements based on their HTML tags. Selenium-IDE does not support iteration or condition statements. When programming logic is required Selenium-RC must be used. alerts. test for broken links. it does not support iteration. selection list options. For example. The IDE allows developing and running tests without the need for programming skills as required by Selenium-RC. Finally. 3. mouse position.org) lists all the available commands. test for specific content. This is what we recommend. You may also run your scripts from the Selenium-IDE.2. See the chapter on Selenium-IDE for specifics.html. Selenium-Core also cannot switch between http and https protocols. and many other web-application features.CHAPTER THREE SELENIUM BASICS 3. You can develop your first script in just a few minutes. If one has an understanding of how to conduct manual testing of a website they can easily transition to using the Selenium-IDE for both. Similar to Selenium-IDE. This is the original method for running Selenium commands. more are using these tools rather than Selenium-Core. input fields. any tests requiring iteration. 
In selenese.1 Selenium Commands – Selenese Selenium provides a rich set of commands for fully testing your web-app in virtually any way you may imagine. It has limitations though. These commands essentially create a testing language. However. The Command Reference (available at SeleniumHQ. 3. They will succeed immediately if the condition is already true.g. but they verify that the state of the application conforms to what is expected. In some cases both are required. “verifyText” and “waitForText”. labels. you can “assertText”. • Actions are commands that generally manipulate the state of the application. and the commands themselves are described in considerable detail in the section on Selenium Commands. This allows a single “assert” to ensure that the application is on the correct page. Selenium Basics . and still in others the command may take no parameters at all. however they are typically • a locator for identifying a UI element within a page. they will fail and halt the test if the condition does not become true within the current timeout setting (see the setTimeout action below). Examples include “make sure the page title is X” and “verify that this checkbox is checked”. This suffix tells Selenium that the action will cause the browser to make a call to the server. etc. When an “assert” fails. However. It depends on the command. the test will continue execution.0 A command is what tells Selenium what to do. “waitFor” commands wait for some condition to become true (which can be useful for testing Ajax applications). “storeTitle”. and ” waitFor”. selenium variables. they consist of the command and two parameters. the test is aborted. e. followed by a bunch of “verify” assertions to test form field values. “verify”. • Assertions are like Accessors. When a “verify” fails. Locators. All Selenium Assertions can be used in 3 modes: “assert”. For example: verifyText //div//a[2] Login The parameters are not always required. 
the execution of the current test is stopped. They do things like “click this link” and “select that option”.2. • Accessors examine the state of the application and store the results in variables. Selenium commands come in three “flavors”: Actions.Selenium Documentation. and that Selenium should wait for a new page to load. Accessors and Assertions. Here are a couple more examples: goBackAndWait verifyTextPresent type type Welcome to My Home Page (555) 666-7066 ${myVariableAddress} id=phone id=address1 The command reference describes the parameter requirements for each command. Many Actions can be called with the “AndWait” suffix.g. For example. or has an error. Release 1. • a text pattern for verifying or asserting expected page content • a text pattern or a selenium variable for entering text in an input field or for selecting an option from an option list. 12 Chapter 3. text patterns. e. “clickAndWait”. If an Action fails. logging the failure. Parameters vary.2 Script Syntax Selenium commands are simple. in others one parameter is required. They are also used to automatically generate Assertions. Commonly Junit is used to maintain a test suite if one is using Selenium-RC with Java.html" >Login</a></td></tr> <tr><td><a href= ". An example tells it all. If using an interpreted language like Python with Selenium-RC than some simple programming would be involved in setting up a test suite.3 Test Suites A test suite is a collection of tests. 3. Nunit could be employed. Release 1. Test suites can also be maintained when using Selenium-RC. but they should be present. Test Suites 13 ./SearchValues.Priority 1</title> </head> <body> <table> <tr><td><b>Suite Of Tests</b></td></tr> <tr><td><a href= "./SaveValues.html" >Test Searching for Values</a></td></tr> <tr><td><a href= ". from the Selenium-IDE. if C# is the chosen language.3. 
the second is a target and the final column contains a value.html" >Test Save</a></td></tr> </table> </body> </html> A file similar to this would allow running the tests all at once./Login. The first column is used to identify the Selenium command.Selenium Documentation. Since the whole reason for using Sel-RC is to make use of programming logic for your testing this usually isn’t a problem. Here is an example of a test that opens a page. Additionally. This is done via programming and can be done a number of ways. The second and third columns may not require values depending on the chosen Selenium command. one after another. This consists of an HTML table with three columns. Each table row represents a new Selenium command. <html> <head> <title>Test Suite Function Tests .0 Selenium scripts that will be run from Selenium-IDE may be stored in an HTML text file format. 3.. With a basic knowledge of selenese and Selenium-IDE you can quickly produce and run testcases. Often one will run all the tests in a test suite as one continuous batch-job. The syntax again is simple. When using Selenium-IDE. test suites also can be defined using a simple HTML file. An HTML table defines a list of tests where each row defines the filesystem path to each test. click/clickAndWait performs a click operation. verifyTable verifies a table’s expected contents.5 Summary Now that you’ve seen an introduction to Selenium. as defined by its HTML tag. is present on the page. menu. Chapter 3 gets you started and then guides you through all the features of the Selenium-IDE. we’ll show you a few typical Selenium commands. Release 1. We recommend beginning with the Selenium IDE and its context-sensitive. and optionally waits for a new page to load. verifyText verifies expected text and it’s corresponding HTML tag are present on the page.0 3. Selenium Basics . 14 Chapter 3. verifyTextPresent verifies expected text is somewhere on the page. in present on the page. 
3.4 Commonly Used Selenium Commands

To conclude our introduction of Selenium, we'll show you a few typical Selenium commands. These are probably the most commonly used commands for building tests.

open — opens a page using a URL.
click/clickAndWait — performs a click operation, and optionally waits for a new page to load.
verifyTitle/assertTitle — verifies an expected page title.
verifyTextPresent — verifies expected text is somewhere on the page.
verifyElementPresent — verifies an expected UI element, as defined by its HTML tag, is present on the page.
verifyText — verifies expected text and its corresponding HTML tag are present on the page.
verifyTable — verifies a table's expected contents.
waitForPageToLoad — pauses execution until an expected new page loads. Called automatically when clickAndWait is used.
waitForElementPresent — pauses execution until an expected UI element, as defined by its HTML tag, is present on the page.

3.5 Summary

Now that you've seen an introduction to Selenium, you're ready to start writing your first scripts. We recommend beginning with the Selenium IDE and its context-sensitive, right-click menu. This will allow you to get familiar with the most common Selenium commands quickly, and you can have a simple script done in just a minute or two. With a basic knowledge of selenese and Selenium-IDE you can quickly produce and run test cases. The next chapter gets you started and then guides you through all the features of the Selenium-IDE.

Chapter 3. Selenium Basics

CHAPTER FOUR

SELENIUM-IDE

4.1 Introduction

This chapter is all about the Selenium IDE and how to use it effectively. It's an easy-to-use Firefox plug-in and is generally the most efficient way to develop test cases. It records your interactions with a website; this is not only a time-saver, but also an excellent way of learning Selenium script syntax.

4.2 Installing the IDE

Using Firefox, first download the IDE from the SeleniumHQ downloads page. When downloading from Firefox, you'll be presented with the following window. Select Install Now. The Firefox Add-ons window pops up, first showing a progress bar, and when the download is complete, displays the following. Restart Firefox. After Firefox reboots you will find the Selenium-IDE listed under the Firefox Tools menu.

4.3 Opening the IDE

To run the Selenium-IDE, simply select it from the Firefox Tools menu. It opens as follows with an empty script-editing window and a menu for loading, or creating new test cases.

4.4 IDE Features

4.4.1 Menu Bar

The File menu allows you to create, open and save test case and test suite files. The Edit menu allows copy, paste, delete, undo and select all operations for editing the commands in your test case. The Options menu allows the changing of settings. You can set the timeout value for certain commands, add user-defined user extensions to the base set of Selenium commands, and specify the format (language) used when saving your test cases. The Help menu is the standard Firefox Help menu; only one item on this menu–UI-Element Documentation–pertains to Selenium-IDE.

4.4.2 Toolbar

The toolbar contains buttons for controlling the execution of your test cases, including a step feature for debugging your test cases. The right-most button, the one with the red-dot, is the record button.

Speed Control: controls how fast your test case runs.
Run All: Runs the entire test suite when a test suite with multiple test cases is loaded.
Run: Runs the currently selected test. When only a single test is loaded this button and the Run All button have the same effect.
Pause/Resume: Allows stopping and re-starting of a running test case.
Step: Allows one to "step" through a test case by running it one command at a time. Use for debugging test cases.
TestRunner Mode: Allows you to run the test case in a browser loaded with the Selenium-Core TestRunner. The TestRunner is not commonly used now and is likely to be deprecated. This button is for evaluating test cases for backwards compatibility with the TestRunner. Most users will probably not need this button.
Apply Rollup Rules: This advanced feature allows repetitive sequences of Selenium commands to be grouped into a single action. Detailed documentation on rollup rules can be found in the UI-Element Documentation on the Help menu.
Record: Records the user's browser actions.

4.4.3 Test Case Pane

Your script is displayed in the test case pane. It has two tabs, one for displaying the command and their parameters in a readable "table" format. The Source tab displays the test case in the native format in which the file will be stored. By default, this is HTML although it can be changed to a programming language such as Java or C#, or a scripting language like Python. See the Options menu for details. The Source view also allows one to edit the test case in its raw form, including copy, cut and paste operations.

The Command, Target, and Value entry fields display the currently selected command along with its parameters. These are entry fields where you can modify the currently selected command. The first parameter specified for a command in the Reference tab of the bottom pane always goes in the Target field. If a second parameter is specified by the Reference tab, it always goes in the Value field. If you start typing in the Command field, a drop-down list will be populated based on the first characters you type; you can then select your desired command from the drop-down.

4.4.4 Log/Reference/UI-Element/Rollup Pane

The bottom pane is used for four different functions–Log, Reference, UI-Element, and Rollup–depending on which tab is selected.

Log

When you run your test case, error messages and information messages showing the progress are displayed in this pane automatically, even if you do not first select the Log tab. These messages are often useful for test case debugging. Notice the Clear button for clearing the Log. Also notice the Info button is a drop-down allowing selection of different levels of information to display.

Reference

The Reference tab is the default selection whenever you are entering or modifying Selenese commands and parameters in Table mode. In Table mode, the Reference pane will display documentation on the current command. When entering or modifying commands, whether from Table or Source mode, it is critically important to ensure that the parameters specified in the Target and Value fields match those specified in the parameter list in the Reference pane. The number of parameters provided must match the number specified, the order of parameters provided must match the order specified, and the type of parameters provided must match the type specified. If there is a mismatch in any of these three areas, the command will not run correctly. While the Reference tab is invaluable as a quick reference, it is still often necessary to consult the Selenium Reference document.

UI-Element and Rollup

Detailed information on these two panes (which cover advanced features) can be found in the UI-Element Documentation on the Help menu of Selenium-IDE.

4.5 Building Test Cases

There are three primary methods for developing test cases. Frequently, a test developer will require all three techniques.

4.5.1 Recording

Many first-time users begin by recording a test case from their interactions with a website. When Selenium-IDE is first opened,
you will need for a selected UI element on the current web-page.click command Here are some “gotchas” to be aware of: • The type command may require clicking on some other area of the web page for it to record. for testing your currently selected UI element. and second parameter (again. first parameter (if one is required by the Command). Building Test Cases 23 . Your comment will appear in purple font. Now use the Command field to enter the comment. i. <!– your comment here –>. if one is required). Now use the command editing text fields to enter your new command and its parameters. Source View Since Source view provides the equivalent of a WYSIWYG editor. and enter the HTML tags needed to create a 3-column row containing the Command. you must create empty comments.Selenium Documentation. Release 1. simply modify which line you wish– command. An empty command will cause an error during execution. Right-click and select Insert Comment.5. These comments are ignored when the test case is run. Be sure to save your test before switching back to Table view. parameter.0 4. Target.5. or comment. and Value fields. In order to add vertical white space (one or more blank lines) in your tests. Source View Select the point in your test case where you want to insert the comment. Right-click and select Insert Command. Source View Select the point in your test case where you want to insert the command.e. Add an HTML-style comment. Table View Select the point in your test case where you want to insert the comment. Insert Comment Comments may be added to make your test case more readable.3 Editing Insert Command Table View Select the point in your test case where you want to insert the command. Edit a Command or Comment Table View Simply select the line to be changed and edit it using the Command. 4.. When you open an existing test case.6 Running Test Cases The IDE allows many options for running your test case. select a command.Selenium Documentation.portal. 
Run Any Single Command Double-click any single command to run it by itself. Stop and Start The Pause button can be used to stop the test case while it is running. close down the IDE and restart it (you don’t need to close the browser itself). This will fix the problem.7 Using Base URL to Run Test Cases in Different Domains The Base URL field at the top of the Selenium-IDE window is very useful for allowing test cases to be run across different domains. Selenium-IDE will then create an absolute URL by appending the open command’s argument onto the end of the value of Base URL. You can double-click it to see if it runs correctly. Start from the Middle You can tell the IDE to begin running from a specific command in the middle of the test case. Note: At the time of this writing. the Open. 4.news.html: 24 Chapter 4.4 Opening and Saving a Test Case The File=>Open. Save and Save As menu commands behave similarly to opening and saving files in most other programs. Execution of test cases is very flexible in the IDE. and you can do a batch run of an entire test suite. Suppose that a site named. It lets you immediately test a command you are constructing. run it one line at a time. To set a breakpoint. where at times. Run a Test Case Click the Run button to run the currently displayed test case.portal. However. when you are not sure if it is correct. Any test cases for these sites that begin with an open statement should specify a relative URL as the argument to open rather than an absolute URL (one starting with a protocol such as http: or https:). This is useful when writing a single command. such operations have their own menu entries near the bottom. Save.0 4. For example. You can run a test case all at once. nothing happens.5. This also is used for debugging. This is also available from the context menu. To set a startpoint. 4. and Save As items are only for files. right-click. The icon of this button then changes to indicate the Resume button. select a command. 
4.5.4 Opening and Saving a Test Case

The File=>Open, Save and Save As menu commands behave similarly to opening and saving files in most other programs. Note: At the time of this writing, the Open, Save, and Save As items are only for test case files. Test suite files can also be opened and saved via the File menu; however, such operations have their own menu entries near the bottom.

When you open an existing test case, Selenium-IDE displays its Selenium commands in the test case pane. There is a bug where, at times, when the IDE is first opened and you then select File=>Open, nothing happens. If you see this, close down the IDE and restart it (you don't need to close the browser itself). This will fix the problem.

4.6 Running Test Cases

The IDE allows many options for running your test case. You can run a test case all at once, stop and start it, run it one line at a time, run a single command you are currently developing, and you can do a batch run of an entire test suite. Execution of test cases is very flexible in the IDE.

Run a Test Case: Click the Run button to run the currently displayed test case.

Run a Test Suite: Click the Run All button to run all the test cases in the currently loaded test suite.

Stop and Start: The Pause button can be used to stop the test case while it is running. The icon of this button then changes to indicate the Resume button. To continue click Resume.

Stop in the Middle: You can set a breakpoint in the test case to cause it to stop on a particular command. This is useful for debugging your test case. To set a breakpoint, select a command, right-click, and from the context menu select Toggle Breakpoint.

Start from the Middle: You can tell the IDE to begin running from a specific command in the middle of the test case. This also is used for debugging. To set a startpoint, select a command, right-click, and from the context menu select Set/Clear Start Point.

Run Any Single Command: Double-click any single command to run it by itself. This is useful when writing a single command. It lets you immediately test a command you are constructing, when you are not sure if it is correct. You can double-click it to see if it runs correctly. This is also available from the context menu.

4.7 Using Base URL to Run Test Cases in Different Domains

The Base URL field at the top of the Selenium-IDE window is very useful for allowing test cases to be run across different domains. Suppose that a site named http://news.portal.com had an in-house beta site. Any test cases for these sites that begin with an open statement should specify a relative URL as the argument to open rather than an absolute URL (one starting with a protocol such as http: or https:). Selenium-IDE will then create an absolute URL by appending the open command's argument onto the end of the value of Base URL. For example, a test case that opens /about.html would be run against http://news.portal.com/about.html. This same test case with a modified Base URL setting would be run against the beta site instead.

4.8 Debugging

Debugging means finding and fixing errors in your test case. This is a normal part of test case development. We won't teach debugging here as most new users to Selenium will already have some basic experience with debugging. If this is new to you, we recommend you ask one of the developers in your organization.

4.8.1 Breakpoints and Startpoints

The Sel-IDE supports the setting of breakpoints and the ability to start and stop the running of a test case, from any point within the test case. That is, one can run up to a specific command in the middle of the test case and inspect how the test case behaves at that point. To do this, set a breakpoint on the command just before the one to be examined. To set a breakpoint, select a command, right-click, and from the context menu select Toggle Breakpoint. Then click the Run button to run your test case from the beginning up to the breakpoint.

It is also sometimes useful to run a test case from somewhere in the middle to the end of the test case or up to a breakpoint that follows the starting point. For example, suppose your test case first logs into the website and then performs a series of tests and you are trying to debug one of those tests. However, you only need to login once, but you need to keep rerunning your tests as you are developing them. You can login once, then run your test case from a startpoint placed after the login portion of your test case. That will prevent you from having to manually logout each time you rerun your test case. To set a startpoint, select a command, right-click, and from the context menu select Set/Clear Start Point. Then click the Run button to execute the test case beginning at that startpoint.

4.8.2 Stepping Through a Testcase

To execute a test case one command at a time ("step through" it), follow these steps:

1. Start the test case running with the Run button from the toolbar.
2. Immediately pause the executing test case with the Pause button.
3. Repeatedly select the Step button.

4.8.3 Find Button

The Find button is used to see which UI element on the currently displayed webpage (in the browser) is used in the currently selected Selenium command. This is useful when building a locator for a command's first parameter (see the section on locators in the Selenium Commands chapter). It can be used with any command that must identify a UI element on a webpage, i.e. click, clickAndWait, type, and certain assert and verify commands, among others.

From Table view, select any command that has a locator parameter. Click the Find button. Now look on the webpage displayed in the Firefox browser. There should be a bright green rectangle enclosing the element specified by the locator parameter.

4.8.4 Page Source for Debugging

Often, when debugging a test case, you simply must look at the page source (the HTML for the webpage you're trying to test) to determine a problem. Firefox makes this easy. Simply right-click the webpage and select Page Source. The HTML opens in a separate window. Use its Search feature (Edit=>Find) to search for a keyword to find the HTML for the UI element you're trying to test.

Alternatively, select just that portion of the webpage for which you want to see the source. Then right-click the webpage and select View Selection Source. In this case, the separate HTML window will contain just a small amount of source, with highlighting on the portion representing your selection.

4.8.5 Locator Assistance

Whenever Selenium-IDE records a locator-type argument, it stores additional information which allows the user to view other possible locator-type arguments that could be used instead. This feature can be very useful for learning more about locators, and is often needed to help one build a different type of locator than the type that was recorded.
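As a sketch of what such locator alternatives look like for a single recorded element (the markup and id are illustrative), each of the following locator-type arguments identifies the same link:

```html
<!-- recorded element on the page under test -->
<a id="downloads-link" href="downloads.html">Downloads</a>

<!-- equivalent locator-type arguments for the Target field -->
<table>
  <tr><td>click</td><td>id=downloads-link</td><td></td></tr>
  <tr><td>click</td><td>link=Downloads</td><td></td></tr>
  <tr><td>click</td><td>//a[@id='downloads-link']</td><td></td></tr>
</table>
```

Choosing among alternatives like these is exactly what the drop-down described below assists with.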
This locator assistance is presented on the Selenium-IDE window as a drop-down list accessible at the right end of the Target field (only when the Target field contains a recorded locator-type argument). Below is a snapshot showing the contents of this drop-down for one command. Note that the first column of the drop-down provides alternative locators, whereas the second column indicates the type of each alternative.

4.9 Writing a Test Suite

A test suite is a collection of test cases which is displayed in the leftmost pane in the IDE. The test suite pane can be manually opened or closed via selecting a small dot halfway down the right edge of the pane (which is the left edge of the entire Selenium-IDE window if the pane is closed). The test suite pane will be automatically opened when an existing test suite is opened or when the user selects the New Test Case item from the File menu. In the latter case, the new test case will appear immediately below the previous test case.

Selenium-IDE does not yet support loading pre-existing test cases into a test suite. Users who want to create or modify a test suite by adding pre-existing test cases must manually edit a test suite file.

A test suite file is an HTML file containing a one-column table. Each cell of each row in the <tbody> section contains a link to a test case. The example below is of a test suite containing four test cases:

    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>Sample Selenium Test Suite</title>
    </head>
    <body>
    <table cellpadding="1" cellspacing="1" border="1">
      <thead>
        <tr><td>Test Cases for De Anza A-Z Directory Links</td></tr>
      </thead>
      <tbody>
        <tr><td><a href="./a.html">A Links</a></td></tr>
        <tr><td><a href="./b.html">B Links</a></td></tr>
        <tr><td><a href="./c.html">C Links</a></td></tr>
        <tr><td><a href="./d.html">D Links</a></td></tr>
      </tbody>
    </table>
    </body>
    </html>

Note: Test case files should not have to be co-located with the test suite file that invokes them. On Mac OS and Linux systems, that is indeed the case. However, at the time of this writing, a bug prevents Windows users from being able to place the test cases elsewhere than with the test suite that invokes them. The bug has been filed as SIDE-230.

4.10 User Extensions

User extensions are JavaScript files that allow one to create his or her own customizations and features to add additional functionality. Often this is in the form of customized commands, although this extensibility is not limited to additional commands. Perhaps the most popular of all Selenium-IDE extensions is one which provides flow control in the form of while loops and primitive conditionals.

To use an extension, put the pathname to its location on your computer in the Selenium Core extensions field of Selenium-IDE's Options=>Options=>General tab. After selecting the OK button, you must close and reopen Selenium-IDE in order for the extensions file to be read. Any change you make to an extension will also require you to close and reopen Selenium-IDE.

4.11 Format

Format, under the Options menu, allows you to select a language for saving and displaying the test case. The default is HTML. If you will be using Selenium-RC to run your test cases, this feature is used to translate your test case into a programming language. Select the language, i.e. Java, PHP, you will be using with Selenium-RC for developing your test programs. Then simply save the test case using File=>Save. Your test case will be translated into a series of functions in the language you choose. Essentially, program code supporting your test is generated for you by Selenium-IDE.

Also, note that if the generated code does not suit your needs, you can alter it by editing a configuration file which defines the generation process. Each supported language has configuration settings which are editable. This is under the Options=>Options=>Format tab.

Note: At the time of this writing, this feature is not yet supported by the Selenium developers. However the author has altered the C# format in a limited manner and it has worked well.

4.12 Executing Selenium-IDE Tests on Different Browsers

While Selenium-IDE can only run tests against Firefox, tests developed with Selenium-IDE can be run against other browsers, using a simple command-line interface that invokes the Selenium-RC server. The -htmlSuite command-line option is the particular feature of interest. This topic is covered in the Run Selenese tests section of the Selenium-RC chapter.

4.13 Troubleshooting

Below is a list of image/explanation pairs which describe frequent sources of problems with Selenium-IDE:

This problem occurs occasionally when Selenium IDE is first brought up. The solution is to close and reopen Selenium IDE.
You've used File=>Open to try to open a test suite file. Use File=>Open Test Suite instead.

This type of error may indicate a timing problem, i.e., the element specified by a locator in your command wasn't fully loaded when the command was executed. Try putting a pause 5000 before the command to determine whether the problem is indeed related to timing. If so, investigate using an appropriate waitFor* or *AndWait command immediately before the failing command.

Whenever your attempt to use variable substitution fails, as is the case for the open command above, it indicates that you haven't actually created the variable whose value you're trying to access. This is sometimes due to putting the variable in the Value field when it should be in the Target field, or vice versa. In the example above, the two parameters for the store command have been erroneously placed in the reverse order of what is required. For any Selenese command, the first required parameter must go in the Target field, and the second required parameter (if one exists) must go in the Value field.

One of the test cases in your test suite cannot be found. Make sure that the test case is indeed located where the test suite indicates it is located. Also, make sure that your actual test case files have the .html extension both in their filenames and in the test suite file where they are referenced.

Your extension file's contents have not been read by Selenium-IDE. Be sure you have specified the proper pathname to the extensions file via Options=>Options=>General in the Selenium Core extensions field. Also, Selenium-IDE must be restarted after any change to either an extensions file or to the contents of the Selenium Core extensions field.

Selenium-IDE is very space-sensitive! An extra space before or after a command will cause it to be unrecognizable.

This type of error message makes it appear that Selenium-IDE has generated a failure where there is none. In the example above, note that the parameter for verifyTitle has two spaces between the words "System" and "Division." The page's actual title has only one space between these words. Thus, Selenium-IDE is correct to generate an error. The problem is that the log file error messages collapse a series of two or more spaces into a single space, which is confusing.
CHAPTER FIVE

SELENESE SELENIUM COMMANDS

Selenium commands, often called selenese, are the set of commands that run your tests. A sequence of these commands is a test script. Here we explain those commands in detail, and we present the many choices you have in testing your web application when using Selenium.

5.1 Verifying Page Elements

Verifying UI elements on a web page is probably the most common feature of your automated tests. Selenese allows multiple ways of checking for UI elements. It is important that you understand these different methods because these methods define what you are actually testing. For example, will you test that:

1. an element is present somewhere on the page?
2. specific text is somewhere on the page?
3. specific text is at a specific location on the page?

For example, if you are testing a text heading, the text and its position at the top of the page are probably relevant for your test. If, however, you are testing for the existence of an image on the home page, and the web designers frequently change the specific image file along with its position on the page, then you only want to test that an image (as opposed to the specific image file) exists somewhere on the page.

5.1.1 Assertion or Verification?

Choosing between assert and verify comes down to convenience and management of failures. An assert will fail the test and abort the current test case, whereas a verify will fail the test and continue to run the test case. If you're not on the correct page, you'll probably want to abort your test case so that you can investigate the cause and fix the issue(s) promptly. On the other hand, you may want to check many attributes of a page without aborting the test case on the first failure, as this will allow you to review all failures on the page and take the appropriate action. The best use of this feature is to logically group your test commands, and start each group with an assert followed by one or more verify test commands. An example follows:

    open           /download/
    assertTitle    Downloads
    verifyText     //h2       Downloads
    assertTable    1.2.1      Selenium IDE
    verifyTable    1.2.2      June 3, 2008
    verifyTable    1.2.3      1.0 beta 2

The above example first opens a page and then asserts that the correct page is loaded by comparing the title with the expected value. Only if this passes will the following command run and verify that the text is present in the expected location. The test case then asserts the first column in the second row of the first table contains the expected value, and only if this passed will the remaining cells in that row be verified.

5.1.2 verifyTextPresent

The command verifyTextPresent is used to verify specific text exists somewhere on the page. It takes a single argument–the text pattern to be verified. For example:

    verifyTextPresent    Marketing Analysis

This would cause Selenium to search for, and verify, that the text string "Marketing Analysis" appears somewhere on the page currently being tested. Use verifyTextPresent when you are interested in only the text itself being present on the page. Do not use this when you also need to test where the text occurs on the page.

5.1.3 verifyElementPresent

Use this command when you must test for the presence of a specific UI element, rather than its content. This verification does not check the text, only the HTML tag. One common use is to check for the presence of an image.

    verifyElementPresent    //div/p/img

This command verifies that an image, specified by the existence of an <img> HTML tag, is present on the page, and that it follows a <div> tag and a <p> tag. The first (and only) parameter is a locator for telling the Selenese command how to find the element. Locators are explained in the next section.

verifyElementPresent can be used to check the existence of any HTML tag within the page. One can check the existence of links, paragraphs, divisions <div>, etc. Here are a few more examples:

    verifyElementPresent    //div/p
    verifyElementPresent    //div/a
    verifyElementPresent    id=Login
    verifyElementPresent    link=Go to Marketing Research
    verifyElementPresent    //a[2]
    verifyElementPresent    //head/title

These examples illustrate the variety of ways a UI element may be tested. Again, locators are explained in the next section.

5.1.4 verifyText

Use verifyText when both the text and its UI element must be tested. verifyText must use a locator. If one chooses an XPath or DOM locator, one can verify that specific text appears at a specific location on the page relative to other UI components on the page.
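To make the distinction between these commands concrete, here is a sketch (the HTML fragment and expected text are illustrative): verifyTextPresent passes wherever the text occurs on the page, while verifyElementPresent and verifyText tie the check to the page structure.

```html
<!-- page fragment under test -->
<div>
  <p><img src="logo.png"></p>
  <h2>Marketing Analysis</h2>
</div>

<!-- test case rows -->
<table>
  <tr><td>verifyTextPresent</td>   <td>Marketing Analysis</td> <td></td></tr>
  <tr><td>verifyElementPresent</td><td>//div/p/img</td>        <td></td></tr>
  <tr><td>verifyText</td>          <td>//div/h2</td>           <td>Marketing Analysis</td></tr>
</table>
```

If the heading were moved elsewhere on the page, the first row would still pass but the third would fail, which is exactly the difference you choose between.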
    verifyText    //table/tr/td/div/p    This is my text and it occurs right after the div inside the table.

5.2 Locating Elements

For many Selenium commands, a target is required. This target identifies an element in the content of the web application, and consists of the location strategy followed by the location in the format locatorType=location. The locator type can be omitted in many cases. The various locator types are explained below with examples for each.

5.2.1 Default Locators

You can choose to omit the locator type in the following situations:

• Locators starting with "document" will use the DOM locator strategy. See Locating by DOM.
• Locators starting with "//" will use the XPath locator strategy. See Locating by XPath.
• Locators that start with anything other than the above or a valid locator type will default to using the identifier locator strategy. See Locating by Identifier.

5.2.2 Locating by Identifier

This is probably the most common method of locating elements and is the catch-all default when no recognised locator type is used. With this strategy, the first element with the id attribute value matching the location will be used. If no element has a matching id attribute, then the first element with a name attribute matching the location will be used.
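A sketch of the identifier strategy against a hypothetical form (all names and values are illustrative):

```html
<!-- page fragment -->
<form id="loginForm">
  <input name="username" type="text">
</form>

<!-- "loginForm" matches the form by its id attribute;
     "username" falls back to the name attribute, since no element
     has an id of "username" -->
<table>
  <tr><td>verifyElementPresent</td><td>loginForm</td><td></td></tr>
  <tr><td>type</td><td>username</td><td>testuser</td></tr>
</table>
```

Writing identifier=loginForm would be equivalent; the label may be omitted because identifier is the default strategy.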
This type of locator is more limited than the identifier locator type, but also more explicit. Use it when you know an element’s id attribute.

5.2.4 Locating by Name

The name locator type will locate the first element with a matching name attribute. If multiple elements have the same value for a name attribute, then you can use filters to further refine your location strategy. The default filter type is value (matching the value attribute).

The three types of locators above allow Selenium to test a UI element independent of its location on the page. So if the page structure and organization is altered, the test will still pass. One may or may not want to also test whether the page structure changes. In the case where web designers frequently alter the page, but its functionality must be regression tested, testing via id and name attributes, or really via any HTML property, becomes very important.

5.2.5 Locating by XPath

XPath is the language used for locating nodes in an XML document. As HTML can be an implementation of XML (XHTML), Selenium users can leverage this powerful language to target elements in their web applications. XPath extends beyond (as well as supporting) the simple methods of locating by id or name attributes, and opens up all sorts of new possibilities, such as locating the third checkbox on the page.

One of the main reasons for using XPath is when you don’t have a suitable id or name attribute for the element you wish to locate. You can use XPath to either locate the element in absolute terms (not advised), or relative to an element that does have an id or name attribute. Absolute XPaths contain the location of all elements from the root (html) and as a result are likely to fail with only the slightest adjustment to the application.
By finding a nearby element with an id or name attribute (ideally a parent element) you can locate your target element based on the relationship. This is much less likely to change and can make your tests more robust. Since only xpath locators start with “//”, it is not necessary to include the xpath= label when specifying an XPath locator. XPath locators can also be used to specify elements via attributes other than id and name. With the sample page above:

• xpath=/html/body/form[1] (3) - Absolute path (would break if the HTML was changed only slightly)
• //form[1] (3) - First form element in the HTML
• xpath=//form[@id='loginForm'] (3) - The form element with @id of 'loginForm'
• xpath=//form[input/@name='username'] (4) - First form element with an input child element with @name of 'username'
• //input[@name='username'] (4) - First input element with @name of 'username'
• //form[@id='loginForm']/input[1] (4) - First input child element of the form element with @id of 'loginForm'
• //input[@name='continue'][@type='button'] (7) - Input with @name 'continue' and @type of 'button'
• //form[@id='loginForm']/input[4] (7) - Fourth input child element of the form element with @id of 'loginForm'

These examples cover some basics, but in order to learn more, the following references are recommended:

• W3Schools XPath Tutorial
• W3C XPath Recommendation
• XPath Tutorial - with interactive examples.

There are also a couple of very useful Firefox Add-ons that can assist in discovering the XPath of an element:

• XPath Checker - suggests XPath and can be used to test XPath results.
• Firebug - XPath suggestions are just one of the many powerful features of this very useful add-on.

5.2.6 Locating Hyperlinks by Link Text

This is a simple method of locating a hyperlink in your web page by using the text of the link. If two links with the same text are present, then the first match will be used.

1 <html>
2  <body>
3   <p>Are you sure you want to do this?</p>
4   <a href="continue.html">Continue</a>
5   <a href="cancel.html">Cancel</a>
6  </body>
7 </html>

• link=Continue (4)
• link=Cancel (5)

5.2.7 Locating by DOM

The Document Object Model represents an HTML document and can be accessed using JavaScript. This location strategy takes JavaScript that evaluates to an element on the page, which can be simply the element’s location using the hierarchical dotted notation. Since only dom locators start with “document”, it is not necessary to include the dom= label when specifying a DOM locator. With the sample page above:

• dom=document.getElementById('loginForm') (3)
• dom=document.forms['loginForm'] (3)
• dom=document.forms[0] (3)
• document.forms[0].username (4)
• document.forms[0].elements['username'] (4)
• document.forms[0].elements[0] (4)

5.2.8 Locating by CSS

CSS (Cascading Style Sheets) is a language for describing the rendering of HTML and XML documents. CSS uses Selectors for binding style properties to elements in the document. These Selectors can be used by Selenium as another locating strategy. A good reference exists on W3Schools. With the sample page above:

• css=input.required[type="text"] (4)

For more information about CSS Selectors, the best place to go is the W3C publication. You’ll find additional references there.

Note: Most experienced Selenium users recommend CSS as their locating strategy of choice as it’s considerably faster than XPath and can find the most complicated objects in an intrinsic HTML document.
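Locator expressions like the XPath ones above can be explored offline. The sketch below is not Selenium code (Selenium evaluates locators inside the browser); it uses Python's standard xml.etree module, which understands a useful subset of XPath, against the sample login form:

```python
import xml.etree.ElementTree as ET

# The numbered login-form sample from the locator sections, as well-formed XML.
page = ET.fromstring("""
<html>
  <body>
    <form id="loginForm">
      <input name="username" type="text" />
      <input name="password" type="password" />
      <input name="continue" type="submit" value="Login" />
      <input name="continue" type="button" value="Clear" />
    </form>
  </body>
</html>
""")

# //form[@id='loginForm'] -- the form element with @id of 'loginForm'
form = page.find(".//form[@id='loginForm']")
print(form.get("id"))            # loginForm

# //input[@name='username'] -- first input element with @name of 'username'
username = page.find(".//input[@name='username']")
print(username.get("type"))      # text

# //form[@id='loginForm']/input[1] -- first input child of that form
first_input = page.find(".//form[@id='loginForm']/input[1]")
print(first_input.get("name"))   # username
```

Trying locators against a static copy of the page this way can be a cheap sanity check before wiring them into a test, though the browser's full XPath engine supports more than this subset.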
5.3 Matching Text Patterns

Like locators, patterns are a type of parameter frequently required by Selenese commands. Examples of commands which require patterns are verifyTextPresent, verifyTitle, verifyText, verifyAlert, assertConfirmation, and verifyPrompt. And as has been mentioned above, link locators can utilize a pattern. Patterns allow one to describe, via the use of special characters, what text is expected rather than having to specify that text exactly. There are three types of patterns: globbing, regular expressions, and exact.

5.3.1 Globbing Patterns

Most people are familiar with globbing as it is utilized in filename expansion at a DOS or Unix/Linux command line, such as ls *.c. In this case, globbing is used to display all the files ending with a .c extension that exist in the current directory. Globbing is fairly limited. Only two special characters are supported in the Selenium implementation:

* which translates to “match anything,” i.e., nothing, a single character, or many characters.
[ ] (character class) which translates to “match any single character found inside the square brackets.” A dash (hyphen) can be used as a shorthand to specify a range of characters (which are contiguous in the ASCII character set). A few examples will make the functionality of a character class clear:

[aeiou] matches any lowercase vowel
[0-9] matches any digit
[a-zA-Z0-9] matches any alphanumeric character

In most other contexts, globbing includes a third special character, the ?. However, Selenium globbing patterns only support the asterisk and character class.

To specify a globbing pattern parameter for a Selenese command, one can prefix the pattern with a glob: label. However, because globbing patterns are the default, one can also omit the label and specify just the pattern itself.

Below is an example of two commands that use globbing patterns. The actual link text on the page being tested was “Film/Television Department”; by using a pattern rather than the exact text, the click command will work even if the link text is changed to “Film & Television Department” or “Film and Television Department”. The glob pattern’s asterisk will match “anything or nothing” between the word “Film” and the word “Television”.

click         link=glob:Film*Television Department
verifyTitle   glob:*Film*Television*

The actual title of the page reached by clicking on the link was “De Anza Film And Television Department - Menu”. By using a pattern rather than the exact text, the verifyTitle will pass as long as the two words “Film” and “Television” appear (in that order) anywhere in the page’s title. For example, if the page’s owner should shorten the title to just “Film & Television Department,” the test would still pass. Using a pattern for both a link and a simple test that the link worked (such as the verifyTitle above does) can greatly reduce the maintenance for such test cases.
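Globbing of this kind can be tried out with Python's standard fnmatch module. Note the approximation: fnmatch also supports the ? wildcard, which Selenese globbing does not, and Selenium performs its own matching in JavaScript.

```python
import fnmatch

# glob:*Film*Television* against the page title from the example above
title = "De Anza Film And Television Department - Menu"
print(fnmatch.fnmatchcase(title, "*Film*Television*"))   # True

# character classes behave the same way as in the Selenese examples
print(fnmatch.fnmatchcase("a", "[aeiou]"))               # True
print(fnmatch.fnmatchcase("7", "[0-9]"))                 # True
print(fnmatch.fnmatchcase("%", "[a-zA-Z0-9]"))           # False
```

fnmatchcase is used rather than fnmatch so the comparison stays case-sensitive on every platform.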
5.3.2 Regular Expression Patterns

Regular expression patterns are the most powerful of the three types of patterns that Selenese supports. Regular expressions are also supported by most high-level programming languages, many text editors, and a host of tools, including the Linux/Unix command-line utilities grep, sed, and awk. In Selenese, regular expression patterns allow a user to perform many tasks that would be very difficult otherwise. For example, suppose your test needed to ensure that a particular table cell contained nothing but a number. regexp:[0-9]+ is a simple pattern that will match a decimal number of any length.

To specify a regular expression pattern for a Selenese command, prefix the pattern with regexp: or regexpi:. The former is case-sensitive; the latter is case-insensitive.
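These Selenese patterns can be exercised directly with Python's re module, since the special characters carry the same meaning. This is an offline illustration only; Selenium applies the patterns inside the browser.

```python
import re

# regexp:[0-9]+ -- a table cell that must contain nothing but a number
print(bool(re.fullmatch(r"[0-9]+", "4690")))   # True
print(bool(re.fullmatch(r"[0-9]+", "46a0")))   # False

# the sunrise-time check used later in this section:
# regexp:Sunrise: *[0-9]{1,2}:[0-9]{2} [ap]m
pattern = r"Sunrise: *[0-9]{1,2}:[0-9]{2} [ap]m"
print(bool(re.search(pattern, "Sunrise: 6:05 am")))   # True
```

re.search mirrors verifyTextPresent (a match anywhere in the text), while re.fullmatch mirrors a check that the whole value matches the pattern.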
Whereas Selenese globbing patterns support only the * and [ ] (character class) features, Selenese regular expression patterns offer the same wide array of special characters that exist in JavaScript. Below are a subset of those special characters:

PATTERN   MATCH
.         any single character
[ ]       character class: any single character that appears inside the brackets
*         quantifier: 0 or more of the preceding character (or group)
+         quantifier: 1 or more of the preceding character (or group)
?         quantifier: 0 or 1 of the preceding character (or group)
{1,5}     quantifier: 1 through 5 of the preceding character (or group)

A few examples will help clarify how regular expression patterns can be used with Selenese commands. The first one uses what is probably the most commonly used regular expression pattern, .* (“dot star”). This two-character sequence can be translated as “0 or more occurrences of any character” or more simply, “anything or nothing.” It is the equivalent of the one-character globbing pattern * (a single asterisk).

click         link=regexp:Film.*Television Department
verifyTitle   regexp:.*Film.*Television.*

The example above is functionally equivalent to the earlier example that used globbing patterns for this same test. The only differences are the prefix (regexp: instead of glob:) and the “anything or nothing” pattern (.* instead of just *).

The more complex example below tests that the Yahoo! Weather page for Anchorage, Alaska contains info on the sunrise time:

open                http://weather.yahoo.com/forecast/USAK0012.html
verifyTextPresent   regexp:Sunrise: *[0-9]{1,2}:[0-9]{2} [ap]m

Let’s examine the regular expression above one part at a time:

Sunrise: *    The string Sunrise: followed by 0 or more spaces
[0-9]{1,2}    1 or 2 digits (for the hour of the day)
:             The character : (no special characters involved)
[0-9]{2}      2 digits (for the minutes)
[ap]m         “a” or “p” followed by “m” (am or pm)

5.3.3 Exact Patterns

The exact type of Selenium pattern is of marginal usefulness. It uses no special characters at all. So, if one needed to look for an actual asterisk character (which is special for both globbing and regular expression patterns), the exact pattern would be one way to do that. For example, if one wanted to select an item labeled “Real *” from a dropdown, the following code might work or it might not. The asterisk in the glob:Real * pattern will match anything or nothing, so if there was an earlier select option labeled “Real Numbers,” it would be the option selected rather than the “Real *” option.
select    //select    glob:Real *

In order to ensure that the “Real *” item would be selected, the exact: prefix could be used to create an exact pattern:

select    //select    exact:Real *

And, of course, globbing patterns and regular expression patterns are sufficient for the vast majority of us.

5.4 The “AndWait” Commands

The difference between a command and its AndWait alternative is that the regular command (e.g. click) will do the action and continue with the following command as fast as it can, while the AndWait alternative (e.g. clickAndWait) tells Selenium to wait for the page to load after the action has been done.

The AndWait alternative is always used when the action causes the browser to navigate to another page or reload the present one. Be aware, if you use an AndWait command for an action that does not trigger a navigation/refresh, your test will fail. This happens because Selenium will reach the AndWait’s timeout without seeing any navigation or refresh being made, causing Selenium to raise a timeout exception.

5.5 The waitFor Commands in AJAX applications

In AJAX driven web applications, data is retrieved from the server without refreshing the page. Using andWait commands will not work, as the page is not actually refreshed. Pausing the test execution for a certain period of time is also not a good approach, as the web element might appear later or earlier than the stipulated period depending on the system’s responsiveness, load, or other uncontrolled factors of the moment, leading to test failures. The best approach would be to wait for the needed element in a dynamic period and then continue the execution as soon as the element is found. This is done using waitFor commands, such as waitForElementPresent or waitForVisible, which wait dynamically, checking for the desired condition every second and stopping as soon as the condition is met.

5.6 Sequence of Evaluation and Flow Control

When a script runs, it simply runs in sequence, one command after another. Selenese, by itself, does not support condition statements (if-else, etc.) or iteration (for, while, etc.). Many useful tests can be conducted without flow control. However, for a functional test of dynamic content, possibly involving multiple pages, programming logic is often needed. When flow control is needed, there are three options:

1. Run the script using Selenium-RC and a client library such as Java or PHP to utilize the programming language’s flow control features.
2. Run a small JavaScript snippet from within the script using the storeEval command.
3. Install the goto_sel_ide.js extension.

Most testers will export the test script into a programming language file that uses the Selenium-RC API (see the Selenium-IDE chapter). However, some organizations prefer to run their scripts from Selenium-IDE whenever possible (such as when they have many junior-level people running tests for them, or when programming skills are lacking). If this is your case, consider a JavaScript snippet or the goto_sel_ide.js extension.

5.7 Store Commands and Selenium Variables

One can use Selenium variables to store constants at the beginning of a script. Also, when combined with a data-driven test design (discussed in a later section), Selenium variables can be used to store values passed to your test program from the command-line, from another program, or from a file.

The plain store command is the most basic of the many store commands and can be used to simply store a constant value in a selenium variable. It takes two parameters, the text value to be stored and a selenium variable. Use the standard variable naming conventions of only alphanumeric characters when choosing a name for your variable.

store    paul@mysite.org    userName

Later in your script, you’ll want to use the stored value of your variable. To access the value of a variable, enclose the variable in curly brackets ({}) and precede it with a dollar sign like this:

verifyText    //div/p    ${userName}

A common use of variables is for storing input for an input field:

type    id=login    ${userName}

Selenium variables can be used in either the first or second parameter and are interpreted by Selenium prior to any other operations performed by the command. A Selenium variable may also be used within a locator expression.

An equivalent store command exists for each verify and assert command. Here are a couple more commonly used store commands.

5.7.1 storeElementPresent

This corresponds to verifyElementPresent. It simply stores a boolean value, “true” or “false”, depending on whether the UI element is found.

5.7.2 storeText

StoreText corresponds to verifyText. It uses a locater to identify specific page text. The text, if found, is stored in the variable. StoreText can be used to extract text from the page being tested.

5.7.3 storeEval

This command takes a script as its first parameter. Embedding JavaScript within Selenese is covered in the next section. StoreEval allows the test to store the result of running the script in a variable.
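The ${...} substitution described above can be sketched in ordinary Python: the stored variables live in an associative array, and each ${name} inside a parameter is expanded before the command runs. Selenium actually keeps these values in a JavaScript array named storedVars; the dict below is only an analogy.

```python
import re

stored_vars = {"userName": "paul@mysite.org"}   # as set by the store command

def expand(parameter):
    # replace each ${name} with the value stored under that name
    return re.sub(r"\$\{(\w+)\}", lambda m: stored_vars[m.group(1)], parameter)

print(expand("Username is ${userName}"))
# Username is paul@mysite.org
```

Because expansion happens before the command executes, a variable can appear in either parameter, including inside a locator expression.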
5.8 JavaScript and Selenese Parameters

JavaScript can be used with two types of Selenese parameters: script and non-script (usually expressions).

5.8.1 JavaScript Usage with Script Parameters

Several Selenese commands specify a script parameter, including assertEval, verifyEval, storeEval, and waitForEval. These parameters require no special syntax. A Selenium-IDE user would simply place a snippet of JavaScript code into the appropriate field, normally the Target field (because a script parameter is normally the first or only parameter).

In most cases, you’ll want to access and/or manipulate a test case variable inside the JavaScript snippet used as a Selenese parameter. All variables created in your test case are stored in a JavaScript associative array. An associative array has string indexes rather than sequential numeric indexes. The associative array containing your test case’s variables is named storedVars. Whenever you wish to access or manipulate a variable within a JavaScript snippet, you must refer to it as storedVars['yourVariableName']. The example below manipulates a stored value with JavaScript string methods, in this case the JavaScript String object’s toUpperCase method and toLowerCase method:

store        Edith Wharton                        name
storeEval    storedVars['name'].toUpperCase()     uc
storeEval    storedVars['name'].toLowerCase()     lc

5.8.2 JavaScript Usage with Non-Script Parameters

JavaScript can also be used to help generate values for parameters, even when the parameter is not specified to be of type script. Below is an example in which the type command’s second parameter value is generated via JavaScript code using this special syntax:

store    league of nations    searchString
type     q                    javascript{storedVars['searchString'].toUpperCase()}

5.9 echo - The Selenese Print Command

Selenese has a simple command that allows you to print text to your test’s output. This is useful for providing informational progress notes in your test which display on the console as your test is running. These notes also can be used to provide context within your test result reports, which can be useful for finding where a defect exists on a page in the event your test finds a problem. Finally, echo statements can be used to print the contents of Selenium variables.

echo    Testing page footer now.
echo    Username is ${userName}
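The storedVars idea can be mimicked outside the browser. Here is a Python analogy of the Edith Wharton example above; in Selenese the expressions would be JavaScript evaluated by the browser, so this sketch only illustrates the data flow, not Selenium itself.

```python
stored_vars = {}   # stands in for the JavaScript storedVars associative array

def store(value, name):
    # the plain store command: put a constant value into a variable
    stored_vars[name] = value

def store_eval(expression_result, name):
    # storeEval: store the result of evaluating an expression;
    # here the expression is evaluated in Python rather than JavaScript
    stored_vars[name] = expression_result

store("Edith Wharton", "name")
store_eval(stored_vars["name"].upper(), "uc")   # toUpperCase analog
store_eval(stored_vars["name"].lower(), "lc")   # toLowerCase analog
print(stored_vars["uc"])   # EDITH WHARTON
print(stored_vars["lc"])   # edith wharton
```

String indexes rather than numeric ones are the essential point: any command later in the script can fetch a value by name.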
When JavaScript is used this way, special syntax is required: the JavaScript snippet must be enclosed inside curly braces and preceded by the label javascript, as in javascript{*yourCodeHere*}.

5.10 Alerts, Popups, and Multiple Windows

This section is not yet developed.

CHAPTER SIX

SELENIUM-RC

6.1 Introduction

Selenium-RC is the solution for tests that need more than simple browser actions and linear execution. Selenium-RC uses the full power of programming languages to create more complex tests, like reading and writing files, querying a database, and emailing test results. Selenium-IDE does not directly support:

• condition statements
• iteration
• logging and reporting of test results
• error handling

In the Adding Some Spice to Your Tests section, you’ll find examples that demonstrate the advantages of using a programming language for your tests.

6.2 How Selenium-RC Works

First, we will describe how the components of Selenium-RC operate and the role each plays in running your test scripts.

6.2.1 RC Components

Selenium-RC components are:

• The Selenium Server, which launches and kills browsers, interprets and runs the Selenese commands passed from the test program, and acts as an HTTP proxy, intercepting and verifying HTTP messages passed between the browser and the AUT.
• Client libraries, which provide the interface between each programming language and the Selenium-RC Server.

Here is a simplified architecture diagram:

[Architecture diagram: the test program and its client library on one side, the Selenium Server in the middle, and the browser running the AUT on the other.]

The diagram shows the client libraries communicate with the Server, passing each Selenium command for execution. Then the server passes the Selenium command to the browser using Selenium-Core JavaScript commands. The browser, using its JavaScript interpreter, executes the Selenium command. This runs the Selenese action or verification you specified in your test script.
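To make the "simple HTTP GET/POST requests" detail concrete, here is a sketch of how a client library might encode one Selenese command as a request URL. The /selenium-server/driver/ endpoint and the cmd plus numbered-argument parameter names reflect our understanding of the RC server's wire format and should be treated as an assumption, not a normative reference.

```python
from urllib.parse import urlencode

def command_url(host, port, command, *args):
    # hypothetical helper: the command name and its positional arguments
    # (numbered 1..n) become the query string of a plain HTTP request
    params = [("cmd", command)] + [(str(i + 1), a) for i, a in enumerate(args)]
    return "http://%s:%d/selenium-server/driver/?%s" % (host, port, urlencode(params))

print(command_url("localhost", 4444, "open", "/"))
# http://localhost:4444/selenium-server/driver/?cmd=open&1=%2F
```

This is why any language that can send HTTP requests can drive Selenium: the entire protocol between client library and server is ordinary URL-encoded text.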
This occurs when your test program opens the browser (using a client library API function). you can generate the Selenium-RC code.3. These folders have all the components you need for using Selenium-RC with the programming language of your choice. 6.3. Installation 51 . There is a different client library for each supported language. Your program can receive the result and store it into a program variable and reporting it as a success or failure. you simply write a program that runs a set of Selenium commands using a client library API.3 Installation After downloading the Selenium-RC zip file from the downloads page. The RC server bundles Selenium Core and automatically injects it into the browser. Once you’ve chosen a language to work with. • Set up a programming project using a language specific client driver. See the Selenium-IDE chapter for specifics on exporting RC code from Selenium-IDE. and reports back to your program the results of running those tests.1 Installing Selenium Server The Selenium-RC server is simply a Java jar file (selenium-server. The client library also receives the result of that command and passes it back to your program. a set of functions. And.2 Selenium Server Selenium Server receives Selenium commands from your test program. i.2. Selenium-Core is a JavaScript program..Selenium Documentation. etc. you’re ready to start using Selenium-RC.jar files to your project as references. You can either use JUnit. • Run Selenium server from the console. Release 1.py 52 Chapter 6. Netweaver. or you can write your own simple main() program. Go to the directory where Selenium-RC’s server is located and run the following from a command-line console.jar. NetBeans.3. or TestNg to run your test.jar. IntelliJ. see the Appendix sections Configuring Selenium-RC With Eclipse and Configuring Selenium-RC With Intellij. • Add the selenium-java-client-driver.2 Running Selenium Server Before starting any tests you must start the server. 
These concepts are explained later in this section. • Execute your test from the Java IDE or from the command-line. • Extract the file selenium-java-client-driver.jar This can be simplified by creating a batch or shell executable file (.py • Either write your Selenium test in Python or export a script from Selenium-IDE to a python file.4 Using the Python Client Driver • Download Selenium-RC from the SeleniumHQ downloads page • Extract the file selenium. You can check that you have Java correctly installed by running the following on a console: java -version If you get a version number (which needs to be 1.3.bat on Windows and . • Add to your project classpath the file selenium-java-client-driver.0 6. 6. • Add to your test’s path the file selenium.sh on Linux) containing the command above. Selenium-RC . For details on Java test project configuration. export a script to a Java file and include it in your Java. 6. project. java -jar selenium-server.5 or later). For the server to run you’ll need Java installed and the PATH environment variable correctly configured to run it from the console.Selenium Documentation. • From Selenium-IDE.) • Create a new project.3. The API is presented later in this chapter. Then make a shortcut to that executable on your desktop and simply double-click the icon to start the server.3 Using the Java Client Driver • Download Selenium-RC from the SeleniumHQ downloads page. • Open your desired Java IDE (Eclipse. or write your Selenium test in Java using the selenium-java-client API. 6. framework.dll and ThoughtWorks. From Selenese to a Program 53 .) • Open your desired . Release 1.Net). or export a script from Selenium-IDE to a C# file and copy this code into the class file you just created. from the NUnit GUI or from the command line For specific details on . ThoughtWorks. 
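Since every test run depends on the server being up, a quick reachability check before kicking off a suite can save confusing failures. The helper below is an illustrative convenience, not part of Selenium; 4444 is the server's default port.

```python
import socket

def server_is_listening(host="localhost", port=4444, timeout=2.0):
    # attempt a plain TCP connection; success means something is accepting
    # connections on that port (normally the Selenium Server)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(server_is_listening())   # True only if the Selenium Server is running
```

A wrapper script could call this first and print a helpful "start the server with java -jar selenium-server.jar" message instead of letting every test fail with a connection error.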
In this section.4 From Selenese to a Program The primary task for using Selenium-RC is to convert your Selenese into a programming language.UnitTests.dll.0 • Run Selenium server from the console • Execute your test from a console or your Python IDE For details on Python client driver configuration.NET client driver configuration. nunit.3.4.5 Using the . 6. MonoDevelop) • Create a class library (.dll) • Add references to the following DLLs: nmock.Net language (C#.1 Sample Test Script Let’s start with an example Selenese test script.4.core.dll • Write your Selenium test in a . • Write your own simple main() program or you can include NUnit in your project for running your test.dll. however NUnit is very useful as a test engine. Imagine recording the following test with Seleniumopen / type q selenium rc IDE. we provide several different language-specific examples.Selenium.Core. ThoughtWorks. 6. see the appendix .com 6.Selenium.Selenium Documentation. SharpDevelop.NET client driver configuration with Visual Studio.dll.dll.NET Client Driver • Download Selenium-RC from the SeleniumHQ downloads page • Extract the folder • Download and install NUnit ( Note: You can use NUnit as your test engine.google. If you’re not familiar yet with NUnit.Selenium. nunit.Net IDE (Visual Studio. • Run Selenium server from console • Run your test either from the IDE. IntegrationTests. VB. you can also write a simple main() function to run your tests. clickAndWait btnG assertTextPresent Results * for selenium rc Note: This example would work with the Google search page. These concepts are explained later in this chapter. see the appendix Python Client Driver Configuration. 2 Selenese as Programming Code Here is the test script exported (via Selenium-IDE) to each of the supported programming languages.Stop().IsTrue(selenium. System.WaitForPageToLoad( "30000" ). System. System. If you have at least basic knowledge of an object. 
        [SetUp]
        public void SetupTest()
        {
            selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://www.google.com/");
            selenium.Start();
            verificationErrors = new StringBuilder();
        }

        [TearDown]
        public void TeardownTest()
        {
            try
            {
                selenium.Stop();
            }
            catch (Exception)
            {
                // Ignore errors if unable to close the browser
            }
            Assert.AreEqual("", verificationErrors.ToString());
        }

        [Test]
        public void TheNewTest()
        {
            selenium.Open("/");
            selenium.Type("q", "selenium rc");
            selenium.Click("btnG");
            selenium.WaitForPageToLoad("30000");
            Assert.IsTrue(selenium.IsTextPresent("Results * for selenium rc"));
        }
    }
}

In Java:

package com.example.tests;

import com.thoughtworks.selenium.*;
import java.util.regex.Pattern;

public class NewTest extends SeleneseTestCase {
    public void setUp() throws Exception {
        setUp("http://www.google.com/", "*firefox");
    }

    public void testNew() throws Exception {
        selenium.open("/");
        selenium.type("q", "selenium rc");
        selenium.click("btnG");
        selenium.waitForPageToLoad("30000");
        assertTrue(selenium.isTextPresent("Results * for selenium rc"));
    }
}

In Perl:

use strict;

my $sel = Test::WWW::Selenium->new( host        => "localhost",
                                    port        => 4444,
                                    browser     => "*firefox",
                                    browser_url => "http://www.google.com/" );

$sel->open_ok("/");
$sel->type_ok("q", "selenium rc");
$sel->click_ok("btnG");
$sel->wait_for_page_to_load_ok("30000");
Selenium Documentation, Release 1.0
Chapter 6. Selenium-RC

in Perl:

    use strict;
    use warnings;
    use Test::WWW::Selenium;
    use Test::More "no_plan";
    use Test::Exception;

    my $sel = Test::WWW::Selenium->new( host => "localhost",
                                        port => 4444,
                                        browser => "*firefox",
                                        browser_url => "http://www.google.com/" );

    $sel->open_ok("/");
    $sel->type_ok("q", "selenium rc");
    $sel->click_ok("btnG");
    $sel->wait_for_page_to_load_ok("30000");
    $sel->is_text_present_ok("Results * for selenium rc");

in PHP:

    <?php
    require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

    class Example extends PHPUnit_Extensions_SeleniumTestCase
    {
      function setUp()
      {
        $this->setBrowser("*firefox");
        $this->setBrowserUrl("http://www.google.com/");
      }

      function testMyTestCase()
      {
        $this->open("/");
        $this->type("q", "selenium rc");
        $this->click("btnG");
        $this->waitForPageToLoad("30000");
        $this->assertTrue($this->isTextPresent("Results * for selenium rc"));
      }
    }
    ?>

in Python:

    from selenium import selenium
    import unittest, time, re

    class NewTest(unittest.TestCase):
        def setUp(self):
            self.verificationErrors = []
            self.selenium = selenium("localhost", 4444, "*firefox",
                    "http://www.google.com/")
            self.selenium.start()

        def test_new(self):
            sel = self.selenium
            sel.open("/")
            sel.type("q", "selenium rc")
            sel.click("btnG")
            sel.wait_for_page_to_load("30000")
            self.failUnless(sel.is_text_present("Results * for selenium rc"))

        def tearDown(self):
            self.selenium.stop()
            self.assertEqual([], self.verificationErrors)

in Ruby:

    require "selenium"
    require "test/unit"

    class NewTest < Test::Unit::TestCase
      def setup
        @verification_errors = []
        if $selenium
          @selenium = $selenium
        else
          @selenium = Selenium::SeleniumDriver.new("localhost", 4444, "*firefox",
              "http://www.google.com/", 10000)
          @selenium.start
        end
        @selenium.set_context("test_new")
      end

      def teardown
        @selenium.stop unless $selenium
        assert_equal [], @verification_errors
      end

      def test_new
        @selenium.open "/"
        @selenium.type "q", "selenium rc"
        @selenium.click "btnG"
        @selenium.wait_for_page_to_load "30000"
        assert @selenium.is_text_present("Results * for selenium rc")
      end
    end

In the next section we'll explain how to build a test program using the generated code.

6.5 Programming Your Test

Now we'll illustrate how to program your own tests using examples in each of the supported programming languages. There are essentially two tasks:

• Generate your script into a programming language from Selenium-IDE, optionally modifying the result.
• Write a very simple main program that executes the generated code.

Optionally, you can adopt a test engine platform like JUnit or TestNG for Java, or NUnit for .NET, if you are using one of those languages.

Here, we show language-specific examples. The language-specific APIs tend to differ from one to another, so you'll find a separate explanation for each.

• Java
• C#
• Python
• Perl, PHP
• Ruby

6.5.1 Java

For Java, people use either JUnit or TestNG as the test engine. Some development environments like Eclipse have direct support for these via plug-ins. This makes it even easier. Teaching JUnit or TestNG is beyond the scope of this document; however, materials may be found online and there are publications available. If you are already a "java-shop", chances are your developers will already have some experience with one of these test frameworks.

You will probably want to rename the test class from "NewTest" to something of your own choosing. Also, you will need to change the browser-open parameters in the statement:

    selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com/");

The Selenium-IDE generated code will look like this. This example has comments added manually for additional clarity.

    package com.example.tests;
    // We specify the package of our tests

    import com.thoughtworks.selenium.*;
    // This is the driver's import. You'll use this for instantiating a
    // browser and making it do what you need.

    import java.util.regex.Pattern;
    // Selenium-IDE adds the Pattern module because it's sometimes used for
    // regex validations. You can remove the module if it's not used in your
    // script.

    public class NewTest extends SeleneseTestCase {
        // We create our Selenium test case

        public void setUp() throws Exception {
            setUp("http://www.google.com/", "*firefox");
            // We instantiate and start the browser
        }

        public void testNew() throws Exception {
            selenium.open("/");
            selenium.type("q", "selenium rc");
            selenium.click("btnG");
            selenium.waitForPageToLoad("30000");
            assertTrue(selenium.isTextPresent("Results * for selenium rc"));
            // These are the real test steps
        }
    }

6.5.2 C#

The .NET Client Driver works with Microsoft.NET. It can be used with any .NET testing framework like NUnit or the Visual Studio 2005 Team System. Selenium-IDE assumes you will use NUnit as your testing framework. You can see this in the generated code below. It includes the using statement for NUnit along with the corresponding NUnit attributes identifying the role of each member function of the test class.

You will probably have to rename the test class from "NewTest" to something of your own choosing. Also, you will need to change the browser-open parameters in the statement:

    selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com/");

The generated code will look similar to this.

    using System;
    using System.Text;
    using System.Text.RegularExpressions;
    using System.Threading;
    using NUnit.Framework;
    using Selenium;

    namespace SeleniumTests
    {
        [TestFixture]
        public class NewTest
        {
            private ISelenium selenium;
            private StringBuilder verificationErrors;

            [SetUp]
            public void SetupTest()
            {
                selenium = new DefaultSelenium("localhost", 4444, "*iehta",
                    "http://www.google.com/");
                selenium.Start();
                verificationErrors = new StringBuilder();
            }

            [TearDown]
            public void TeardownTest()
            {
                try
                {
                    selenium.Stop();
                }
                catch (Exception)
                {
                    // Ignore errors if unable to close the browser
                }
                Assert.AreEqual("", verificationErrors.ToString());
            }

            [Test]
            public void TheNewTest()
            {
                // Open Google search engine.
                selenium.Open("http://www.google.com/");

                // Assert Title of page.
                Assert.AreEqual("Google", selenium.GetTitle());

                // Provide search term as "Selenium OpenQA"
                selenium.Type("q", "Selenium OpenQA");

                // Read the keyed search term and assert it.
                Assert.AreEqual("Selenium OpenQA", selenium.GetValue("q"));

                // Click on Search button.
                selenium.Click("btnG");

                // Wait for page to load.
                selenium.WaitForPageToLoad("5000");

                // Assert that "www.openqa.org" is available in search results.
                Assert.IsTrue(selenium.IsTextPresent("www.openqa.org"));

                // Assert that page title is - "Selenium OpenQA - Google Search"
                Assert.AreEqual("Selenium OpenQA - Google Search",
                    selenium.GetTitle());
            }
        }
    }

You can allow NUnit to manage the execution of your tests. Or alternatively, you can write a simple main() program that instantiates the test object and runs each of the three methods, SetupTest(), TheNewTest(), and TeardownTest(), in turn.

6.5.3 Python

Pyunit is the test framework to use for Python. To learn Pyunit refer to its official documentation <http://docs.python.org/library/unittest.html>_.

The basic test structure is:

    from selenium import selenium
    # This is the driver's import. You'll use this class for instantiating a
    # browser and making it do what you need.

    import unittest, time, re
    # These are the basic imports added by Selenium-IDE by default.
    # You can remove the modules if they are not used in your script.

    class NewTest(unittest.TestCase):
    # We create our unittest test case

        def setUp(self):
            self.verificationErrors = []
            # This is an empty array where we will store any verification errors
            # we find in our tests

            self.selenium = selenium("localhost", 4444, "*firefox",
                    "http://www.google.com/")
            self.selenium.start()
            # We instantiate and start the browser

        def test_new(self):
            # This is the test code. Here you should put the actions you need
            # the browser to do during your test.

            sel = self.selenium
            # We assign the browser to the variable "sel" (just to save us from
            # typing "self.selenium" each time we want to call the browser).

            sel.open("/")
            sel.type("q", "selenium rc")
            sel.click("btnG")
            sel.wait_for_page_to_load("30000")
            self.failUnless(sel.is_text_present("Results * for selenium rc"))
            # These are the real test steps

        def tearDown(self):
            self.selenium.stop()
            # We close the browser (I'd recommend you comment this line out
            # while you are creating and debugging your tests)

            self.assertEqual([], self.verificationErrors)
            # And make the test fail if any verification errors were found

6.5.4 Perl, PHP

The members of the documentation team have not used Selenium-RC with Perl or PHP. If you are using Selenium-RC with either of these two languages please contact the Documentation Team (see the chapter on contributing). We would love to include some examples from you and your experience, to support Perl and PHP users.

6.6 Learning the API

The Selenium-RC API uses naming conventions such that, assuming you understand Selenese, much of the interface will be self-explanatory. Here, however, we explain the most critical and possibly less obvious aspects of the API.
" string to type " ) In the background the browser will actually perform a type operation.type( " field-id " . url The base url of the application under test.7 Reporting Results Selenium-RC does not have its own mechanism for reporting results. essentially identical to a user typing input into the browser. Note that some of the client libraries require the browser to be started explicitly by calling its start() method. In some clients this is an optional parameter. The parameters required when creating the browser instance are: host Specifies the IP address of the computer where the server is located. This browser variable is then used to call methods from the browser. 4444. port Specifies the TCP/IP socket where the server is listening waiting for the client to establish a connection. but what if you simply want something quick that’s already done for you? Often an existing library or test framework will exist that can meet your needs faster than developing your own test reporting code. " *firefox " . This is a required parameter. it allows you to build your reporting customized to your needs using features of your chosen programming language.0 self.Selenium Documentation. like open or type or the verify commands.selenium = selenium( " localhost " . so in this case localhost is passed. Rather. " http:/ @selenium. ". this is the same machine as where the client is running.start() In Ruby: if $selenium @selenium = $selenium else @selenium = Selenium::SeleniumDriver. This is required by all the client libs and is integral information for starting up the browser-proxy-AUT communication. Usually. Release 1. This also is optional in some client drivers. i. These methods execute the Selenium commands.com/ " ) self. That’s great. to call the type method of the selenium object: selenium.6. 62 Chapter 6.selenium. 
Selenium-RC .start Each of these examples opens the browser and represents that browser by assigning a “browser instance” to a program variable.new( " localhost " . 6. browser The browser in which you want to run the tests.google. " *firefox " .e. 6. As you begin to use Selenium no doubt you will start putting in your own “print statements” for reporting progress. That may gradually lead to you developing your own reporting. The TestNG framework generates an HTML report which list details of tests. that’s beyond the scope of this user guide. Reporting Results 63 . colour-coded view of the test results. Release 1. From there most will examine any available libraries as that’s less time consuming than developing your own. We will simply introduce the framework features that relate to Selenium along with some techniques you can apply. we’ll direct you to some specific tools in some of the other languages supported by Selenium. 6. We won’t teach the frameworks themselves here. . 6.7.Selenium Documentation. possibly in parallel to using a library or test framework. include library code for reporting results. 6. Test Reports in Java • If Selenium Test cases are developed using JUnit then JUnit Report can be used to generate test reports. along with their primary function of providing a flexible test engine for executing your tests. Refer to JUnit Report for specifics. NUnit.7. JUnit and TestNG. learning curve you will naturally develop what works best for your own situation. • If Selenium Test cases are developed using TestNG then no external task is required to generate test reports.NET also has its own.7.3 What’s The Best Approach? Most people new to the testing frameworks will being with the framework’s built-in reporting features. ReportNG provides a simple. • Also. A TestNG-xslt Report looks like this. • ReportNG is a HTML reporting plug-in for the TestNG framework.1 Test Framework Reporting Tools Test frameworks are available for many programming languages. 
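If you do decide to roll your own reporting, the core of it is nothing more than collecting pass/fail records and rendering a summary. A minimal Python sketch (the class and output format are illustrative, not any framework's API):

```python
# Sketch of hand-rolled result reporting: record each check as it runs,
# then produce a plain-text summary at the end of the run.
class ResultCollector:
    def __init__(self):
        self.results = []

    def record(self, test_name, passed, message=""):
        # Store one (name, passed, message) record per executed check.
        self.results.append((test_name, passed, message))

    def summary(self):
        # Render a short report: overall counts, then one line per failure.
        failed = [r for r in self.results if not r[1]]
        lines = ["%d run, %d failed" % (len(self.results), len(failed))]
        for name, _, msg in failed:
            lines.append("FAIL %s: %s" % (name, msg))
        return "\n".join(lines)
```

The same structure could just as easily emit HTML or feed a library such as those described below.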
6.7.2 Test Report Libraries

Also available are third-party libraries specifically created for reporting test results in your chosen programming language. These often support a variety of formats such as HTML or PDF.

6.7.3 What's The Best Approach?

Most people new to the testing frameworks will begin with the framework's built-in reporting features. From there most will examine any available libraries, as that's less time consuming than developing your own. As you begin to use Selenium no doubt you will start putting in your own "print statements" for reporting progress. That may gradually lead to you developing your own reporting, possibly in parallel to using a library or test framework. Regardless, after the initial, but short, learning curve you will naturally develop what works best for your own situation.

6.7.4 Test Reporting Examples

To illustrate, we'll direct you to some specific tools in some of the other languages supported by Selenium. The ones listed here are commonly used and have been used extensively (and therefore recommended) by the authors of this guide.

Test Reports in Java

• If Selenium test cases are developed using JUnit then JUnit Report can be used to generate test reports. Refer to JUnit Report for specifics.

• If Selenium test cases are developed using TestNG then no external task is required to generate test reports. The TestNG framework generates an HTML report which lists details of the tests. See TestNG Report for more.

• Also, for a very nice summary report try using TestNG-xslt. A TestNG-xslt report looks like this. See TestNG-xslt for more.

• ReportNG is an HTML reporting plug-in for the TestNG framework. It is intended as a replacement for the default TestNG HTML report. ReportNG provides a simple, colour-coded view of the test results. See ReportNG for more.

Test Reports for Python

• When using the Python Client Driver, HTMLTestRunner can be used to generate a Test Report. See HTMLTestRunner.

Test Reports for Ruby

• If the RSpec framework is used for writing Selenium test cases in Ruby then its HTML report can be used to generate a test report. Refer to RSpec Report for more.

Logging the Selenese Commands

• Logging Selenium can be used to generate a report of all the Selenese commands in your test along with the success or failure of each. Logging Selenium extends the Java client driver to add this Selenese logging ability. Please refer to Logging Selenium.

Note: If you are interested in a language-independent log of what's going on, take a look at Selenium Server Logging.

6.8 Adding Some Spice to Your Tests

Adding programming logic to your tests works the same as for any program: program flow is controlled using condition statements and iteration. In addition you can report progress information using I/O.

You will find, as you transition from simple tests of the existence of page elements to tests of dynamic functionality involving multiple web-pages and varying data, that you will require programming logic for verifying expected results. The Selenium-IDE does not support iteration and standard condition statements. You can do some conditions by embedding javascript in Selenese parameters; however, iteration is impossible, and most conditions will be much easier in a programming language. In addition, you may need exception handling for error recovery. For these reasons and others, in this section we'll show some examples of how programming language constructs can be combined with Selenium to solve common testing problems.
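One common piece of such programming logic is "soft" verification: record a failed check instead of aborting the test, then fail once at the end. This is the same bookkeeping the generated tests perform with their verificationErrors lists. A minimal Python sketch (the helper name is ours, not part of the Selenium API):

```python
# Sketch of soft verification: a failed check is remembered in an error
# list rather than raising immediately, so later steps still run. At the
# end of the test, the accumulated list decides pass/fail.
def verify(condition, message, errors):
    if not condition:
        errors.append(message)
    return condition
```

A test would call `verify(...)` for each non-fatal check, and assert at teardown that the error list is empty.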
The examples in this section are written in Java, although the code is simple and can be easily adapted to the other supported languages. If you have some basic knowledge of an object-oriented programming language you shouldn't have difficulty understanding this section.

6.8.1 Iteration

Iteration is one of the most common things people need to do in their tests. For example, you may want to execute a search multiple times. Or, perhaps for verifying your test results you need to process a "result set" returned from a database.

Using the same Google search example we used earlier, let's check the Selenium search results. But multiple copies of the same code are not good programming practice because they're more work to maintain. By using a programming language, we can iterate over the search results for a more flexible and maintainable solution.

In C#:

    // Collection of String values.
    String[] arr = {"ide", "rc", "grid"};

    // Execute loop for each String in array 'arr'.
    foreach (String s in arr) {
        sel.Open("/");
        sel.Type("q", "selenium " + s);
        sel.Click("btnG");
        sel.WaitForPageToLoad("30000");
        Assert.IsTrue(sel.IsTextPresent("Results * for selenium " + s),
                      "Expected text: " + s + " is missing on page.");
    }

6.8.2 Condition Statements

To illustrate using conditions in tests we'll start with an example. A common problem encountered while running Selenium tests occurs when an expected element is not available on the page. For example, when running the following line:

    selenium.type("q", "selenium rc");

If element 'q' is not on the page then an exception is thrown:

    com.thoughtworks.selenium.SeleniumException: ERROR: Element q not found

This can cause your test to abort. For some tests that's what you want. But often that is not desirable, as your test script has many other subsequent tests to perform.

A better approach is to first validate whether the element is really present and then take alternatives when it is not. Let's look at this using Java:

    // If element is available on page then perform type operation.
    if (selenium.isElementPresent("q")) {
        selenium.type("q", "Selenium rc");
    } else {
        System.out.printf("Element: q is not available on page.%n");
    }

The advantage of this approach is that test execution continues even if some UI elements are not available on the page.

6.8.3 Executing JavaScript from Your Test

JavaScript comes in very handy for exercising an application in ways that are not directly supported by Selenium. The getEval method of the Selenium API can be used to execute JavaScript from Selenium-RC.

Consider an application having check boxes with no static identifiers. In this case one could evaluate JavaScript from Selenium-RC to get the ids of all check boxes and then exercise them.

    public static String[] getAllCheckboxIds() {
        String script = "var inputId = new Array();";   // Create array in java script.
        script += "var cnt = 0;";                       // Counter for check box ids.
        script += "var inputFields = new Array();";     // Create array in java script.
        script += "inputFields = window.document.getElementsByTagName('input');"; // Collect input elements.
        script += "for(var i=0; i<inputFields.length; i++) {";  // Loop through the input elements.
        script += "if(inputFields[i].id != null " +
                  "&& inputFields[i].id != 'undefined' " +
                  "&& inputFields[i].getAttribute('type') == 'checkbox') {"; // If input field is a check box with an id.
        script += "inputId[cnt] = inputFields[i].id;";  // Save check box id to inputId array.
        script += "cnt++;";                             // Increment the counter.
        script += "}";                                  // End of if.
        script += "}";                                  // End of for.
        script += "inputId.toString();";                // Convert array into string.
        String[] checkboxIds = selenium.getEval(script).split(","); // Split the string.
        return checkboxIds;
    }

To count the number of images on a page:

    selenium.getEval("window.document.images.length.toString()");

Remember to use the window object in DOM expressions, as by default Selenium refers to its own window, not the test window.
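Note that getEval hands the collected ids back as a single comma-joined string, which the client then splits. The client-side half of that round trip is plain string handling; a Python sketch of it, runnable without a browser (the helper name is ours):

```python
# Sketch: turn the comma-joined id string returned by an expression like
# "inputId.toString()" back into a list of ids, mirroring the
# getEval(script).split(",") step in the Java example.
def parse_id_list(raw):
    if not raw:
        # An empty result string means no matching elements were found.
        return []
    return raw.split(",")
```

Be aware this simple split assumes the ids themselves contain no commas.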
6.9 Server Options

When the server is launched, command line options can be used to change the default server behaviour. Recall, the server is started by running the following:

    $ java -jar selenium-server.jar

To see the list of options, run the server with the -h option:

    $ java -jar selenium-server.jar -h

You'll see a list of all the options you can use with the server and a brief description of each. The provided descriptions will not always be enough, so we've provided explanations for some of the more important options.

6.9.1 Proxy Configuration

If your AUT is behind an HTTP proxy which requires authentication then you should configure http.proxyHost, http.proxyPort, http.proxyUser and http.proxyPassword using the following command:

    $ java -jar selenium-server.jar -Dhttp.proxyHost=proxy.com -Dhttp.proxyPort=8080 -Dhttp.proxyUser=username -Dhttp.proxyPassword=password

6.9.2 Multi-Window Mode

If you are using Selenium 1.0 you can probably skip this section, since multiwindow mode is the default behavior. However, prior to version 1.0, Selenium by default ran the application under test in a sub frame as shown here. Some applications didn't run correctly in a sub frame, and needed to be loaded into the top frame of the window. The multi-window mode option allowed the AUT to run in a separate window rather than in the default frame, where it could then have the top frame it required.

For older versions of Selenium you must specify multiwindow mode explicitly with the following option:

    -multiwindow

In Selenium-RC 1.0, if you want to run your test within a single frame (i.e. using the standard for earlier Selenium versions) you can state this to the Selenium Server using the option:

    -singlewindow

6.9.3 Specifying the Firefox Profile

Firefox will not run two instances simultaneously unless you specify a separate profile for each instance. Selenium-RC 1.0 and later runs in a separate profile automatically, so if you are using Selenium 1.0 you can probably skip this section. However, for older versions you will need to explicitly specify the profile.

First, to create a separate Firefox profile, follow this procedure. Open the Windows Start menu, select "Run", then type and enter one of the following:

    firefox.exe -profilemanager

    firefox.exe -P

Create the new profile using the dialog. Then, when you run Selenium Server, tell it to use this new Firefox profile with the server command-line option -firefoxProfileTemplate, specifying the path to the profile:

    -firefoxProfileTemplate "path to the profile"

More information about Firefox profiles can be found in Mozilla's Knowledge Base.

6.9.4 Run Selenese Directly Within the Server Using -htmlSuite

You can run Selenese html files directly within the Selenium Server by passing the html file to the server's command line. For instance:

    java -jar selenium-server.jar -htmlSuite "*firefox" "http://www.google.com" "c:\absolute\path\to\my\HTMLSuite.html" "c:\absolute\path\to\my\results.html"

This will automatically launch your HTML suite, run all the tests and save a nice HTML report with the results. Note this requires you to pass in an HTML Selenese suite, not a single test. This command line is very long so be careful when you type it. Also be aware the -htmlSuite option is incompatible with -interactive. You cannot run both at the same time.

Note: When using this option, the server will start the tests and wait for a specified number of seconds for the test to complete; if the test doesn't complete within that amount of time, the command will exit with a non-zero exit code and no results file will be generated.

6.9.5 Selenium Server Logging

Server-Side Logs

When launching selenium server the -log option can be used to record valuable debugging information reported by the Selenium Server to a text file:

    java -jar selenium-server.jar -log selenium.log

This log file is more verbose than the standard console logs (it includes DEBUG level logging messages). The log file also includes the logger name and the ID number of the thread that logged the message. For example:

    20:44:25 DEBUG [12] org.openqa.selenium.server.SeleniumDriverResourceHandler - Browser 465828/:top frame1 posted START NEW
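Because every server log line follows this fixed shape — timestamp, level, thread id in brackets, logger name, then the message — it is easy to post-process. A Python sketch of a parser (the regex and field names are ours, not part of Selenium):

```python
import re

# Sketch: extract the fields of a Selenium Server log line such as
# "20:44:25 DEBUG [12] org.openqa...SeleniumDriverResourceHandler - ...".
LOG_LINE = re.compile(
    r"^(?P<time>\d{2}:\d{2}:\d{2}) (?P<level>[A-Z]+) "
    r"\[(?P<thread>\d+)\] (?P<logger>\S+) - (?P<message>.*)$")

def parse_log_line(line):
    m = LOG_LINE.match(line)
    # Returns a dict of the named fields, or None for continuation lines
    # (messages may span multiple lines).
    return m.groupdict() if m else None
```

A script built on this could, for instance, filter a large -log file down to WARN-level entries for a single thread.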
The message format is

    TIMESTAMP(HH:mm:ss) LEVEL [THREAD] LOGGER - MESSAGE

This message may be multiline.

Browser-Side Logs

JavaScript on the browser side (Selenium Core) also logs important messages; in many cases, these can be more useful to the end-user than the regular Selenium Server logs. To access browser-side logs, pass the -browserSideLog argument to the Selenium Server:

    java -jar selenium-server.jar -browserSideLog

-browserSideLog must be combined with the -log argument to log browser-side logs (as well as all other DEBUG level logging messages) to a file.

6.10 Specifying the Path to a Specific Browser

You can specify to Selenium-RC a path to a specific browser. This is useful if you have different versions of the same browser and you wish to use a specific one. Also, this is used to allow your tests to run against a browser not directly supported by Selenium-RC. When specifying the run mode, use the *custom specifier followed by the full path to the browser's executable:

    *custom <path to browser>

6.11 Selenium-RC Architecture

Note: This topic tries to explain the technical implementation behind Selenium-RC. It's not fundamental for a Selenium user to know this, but it could be useful for understanding some of the problems you may find in the future.

To understand in detail how Selenium-RC Server works and why it uses proxy injection and heightened privilege modes, you must first understand the same origin policy.

6.11.1 The Same Origin Policy

The main restriction that Selenium has faced is the Same Origin Policy. This security restriction is applied by every browser in the market and its objective is to ensure that a site's content will never be accessible by a script from another site. The Same Origin Policy dictates that any code loaded within the browser can only operate within that website's domain. It cannot perform functions on another website. So, for example, if the browser loads javascript code when it loads www.mysite.com, it cannot run that loaded code against www.mysite2.com–even if that's another of your sites. If this were possible, a script placed on any website you open would be able to read information on your bank account if you had the account page opened in another tab. This is called XSS (Cross-site Scripting).

To work within this policy, Selenium-Core (and its JavaScript commands that make all the magic happen) must be placed in the same origin as the Application Under Test (same URL).

Historically, Selenium-Core was limited by this problem since it was implemented in Javascript. Selenium-RC is not, however, restricted by the Same Origin Policy. Its use of the Selenium Server as a proxy avoids this problem. It, essentially, tells the browser that the browser is working on a single "spoofed" website that the Server provides.

6.11.2 Proxy Injection

The first method Selenium used to avoid the Same Origin Policy was Proxy Injection. In Proxy Injection Mode, the Selenium Server acts as a client-configured (1) HTTP proxy (2) that sits between the browser and the Application Under Test. It then masks the AUT under a fictional URL (embedding Selenium-Core and the set of tests and delivering them as if they were coming from the same origin).

(1) The proxy is a third person in the middle that passes the ball between the two parts.
(2) The browser is launched with a configuration profile that has set localhost:4444 as the HTTP proxy.

Here is an architectural diagram.

As a test suite starts in your favorite language, the following happens:

1. The client/driver establishes a connection with the selenium-RC server.
2. Selenium-RC server launches a browser (or reuses an old one) with a URL that injects Selenium-Core's javascript into the browser-loaded web page.
3. The client-driver passes a Selenese command to the server.
4. The Server interprets the command and then triggers the corresponding javascript execution to execute that command within the browser. Selenium-Core instructs the browser to act on that first instruction, typically opening a page of the AUT.
5. The browser receives the open request and asks for the website's content from the Selenium-RC server (set as the HTTP proxy for the browser to use).
6. Selenium-RC server communicates with the Web server asking for the page and, once it receives it, it sends the page to the browser masking the origin to look like the page comes from the same server as Selenium-Core (this allows Selenium-Core to comply with the Same Origin Policy).
7. The browser receives the web page and renders it in the frame/window reserved for it.

6.11.3 Heightened Privileges Browsers

The workflow of this method is very similar to Proxy Injection, but the main difference is that the browsers are launched in a special mode called Heightened Privileges, which allows websites to do things that are not commonly permitted (such as XSS, or filling file upload inputs — pretty useful stuff for Selenium). By using these browser modes, Selenium Core is able to directly open the AUT and read/interact with its content without having to pass the whole AUT through the Selenium-RC server.

Here is the architectural diagram.

As a test suite starts in your favorite language, the following happens:

1. The client/driver establishes a connection with the selenium-RC server.
2. Selenium-RC server launches a browser (or reuses an old one) with a URL that will load Selenium-Core in the web page.
3. Selenium-Core gets the first instruction from the client/driver (via another HTTP request made to the Selenium-RC Server).
4. Selenium-Core acts on that first instruction, typically opening a page of the AUT.
5. The browser receives the open request and asks the Web Server for the page. Once the browser receives the web page, it renders it in the frame/window reserved for it.
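The Same Origin Policy check that motivates both designs boils down to comparing the scheme, host, and port of two URLs. That comparison can be sketched in a few lines of Python (simplified: default ports such as 80 for http are not normalized here):

```python
from urllib.parse import urlsplit

# Sketch of the browser's same-origin comparison: two URLs share an
# origin only when scheme, host, and port all match.
def same_origin(url_a, url_b):
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)
```

This is why Selenium Server, acting as a proxy, can make every page appear to come from one origin: the browser only ever sees URLs whose scheme, host, and port match.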
6.12 Handling HTTPS and Security Popups

Many applications switch from using HTTP to HTTPS when they need to send encrypted information such as passwords or credit card information. This is common with many of today's web applications. Selenium-RC supports this.

To ensure the HTTPS site is genuine, the browser will need a security certificate. Otherwise, when the browser accesses the AUT using HTTPS, it will assume that application is not 'trusted'. When this occurs the browser displays security popups, and these popups cannot be closed using Selenium-RC.

When dealing with HTTPS in a Selenium-RC test, you must use a run mode that supports this and handles the security certificate for you. You specify the run mode when your test program initializes Selenium.

In Selenium-RC 1.0 beta 2 and later, use *firefox or *iexplore for the run mode. In earlier versions, including Selenium-RC 1.0 beta 1, use *chrome or *iehta for the run mode. Using these run modes, you will not need to install any special security certificates; Selenium-RC will handle it for you.

In version 1.0 the run modes *firefox or *iexplore are recommended. However, there are additional run modes of *iexploreproxy and *firefoxproxy. These are provided for backwards compatibility only, and should not be used unless required by legacy test programs. Their use will present limitations with security certificate handling and with the running of multiple windows if your application opens additional browser windows.

In earlier versions of Selenium-RC, *chrome or *iehta were the run modes that supported HTTPS and the handling of security popups. These were considered 'experimental' modes, although they became quite stable and many people used them. If you are using Selenium 1.0 you do not need, and should not use, these older run modes.

6.12.1 Security Certificates Explained

Normally, your browser will trust the application you are testing by installing a security certificate which you already own. You can check this in your browser's options or internet properties (if you don't know your AUT's security certificate, ask your system administrator). When Selenium loads your browser it injects code to intercept messages between the browser and the server. The browser now thinks untrusted software is trying to look like your application. It responds by alerting you with popup messages.

To get around this, Selenium-RC (again, when using a run mode that supports this) will install its own security certificate, temporarily, on your client machine in a place where the browser can access it. This tricks the browser into thinking it's accessing a site different from your AUT and effectively suppresses the popups.

Another method used with earlier versions of Selenium was to install the Cybervillians security certificate provided with your Selenium installation. Most users should no longer need to do this; however, if you are running Selenium-RC in proxy injection mode, you may need to explicitly install this security certificate.

6.13 Supporting Additional Browsers and Browser Configurations

The Selenium API supports running against multiple browsers in addition to Internet Explorer and Mozilla Firefox. See the SeleniumHQ.org website for supported browsers. In addition, when a browser is not directly supported, you may still run your Selenium tests against a browser of your choosing by using the "*custom" run-mode (i.e. in place of *firefox or *iexplore) when your test application starts the browser. With this, you pass in the path to the browser's executable within the API call as follows. This can also be done from the Server in interactive mode.

    cmd=getNewBrowserSession&1=*custom c:\Program Files\Mozilla Firefox\MyBrowser.exe&2=http://www.google.com
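As the example shows, interactive-mode commands are just query strings: the command name followed by numbered positional parameters. Composing one programmatically is plain URL encoding; a Python sketch (the helper name is ours, and whether the server needs the parameters escaped exactly this way is not covered here):

```python
from urllib.parse import urlencode

# Sketch: build a getNewBrowserSession command string for interactive
# mode. Parameter 1 is the browser run mode, parameter 2 the start URL.
def new_browser_session_cmd(run_mode, start_url):
    return "cmd=getNewBrowserSession&" + urlencode(
        [("1", run_mode), ("2", start_url)])
```

For example, passing a "*custom /usr/lib/firefox/firefox-bin" run mode produces a percent-encoded equivalent of the hand-typed command above.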
without using an automatic configuration. be sure you started the Selenium Server.exe&2=h 6. then there is a problem with the connectivity between the Selenium Client Library and the Selenium Server. Selenium-RC .. It should display this message or a similar one: "Unable to connect to remote server. you can force Selenium RC to launch the browser as-is.Selenium Documentation.. With this. Release 1. We present them along with their solutions here. in place of *firefox or *iexplore) when your test application starts the browser.NET and XP Service Pack 2) If you see a message like this.14 Troubleshooting Common Problems When getting started with Selenium-RC there’s a few potential problems that are commonly encountered. Unix users should avoid launching the browser using a shell script. it’s generally better to use the binary executable (e.. This can also be done from the Server in interactive mode.g. an exception will be thrown in your test program.org website for supported browsers. Be aware that Mozilla browsers can vary in how they start and stop. 6.14.. when a browser is not directly supported. you can launch Firefox with a custom configuration like this: cmd=getNewBrowserSession&1=*custom c: \P rogram Files \M ozilla Firefox \f irefox.e. firefox-bin) directly. you must manually configure the browser to use the Selenium Server as a proxy. 2 Unable to Load the Browser Ok. many people choose to run the tests this way. the most likely cause is your test program is not using the correct URL.0 When starting with Selenium-RC. most people begin by running thier test program (with a Selenium Client Library) and the Selenium Server on the same machine.14. Check to be sure the path is correct. but you already have a Firefox browser session running and. ipconfig(Unix)/ifconfig (Windows). Check the parameters you passed to Selenium when you program opens the browser. 6. but the browser doesn’t display the website you’re testing. 6. This can easily happen.14.14. 
To do this, use "localhost" as your connection parameter. We recommend beginning this way since it reduces the influence of potential networking problems while you're getting started. Assuming your operating system has typical networking and TCP/IP settings, you should have little difficulty. In truth, many people choose to run the tests this way. If, however, you do want to run Selenium Server on a remote machine, the connectivity should be fine assuming you have valid TCP/IP connectivity between the two machines. If you have difficulty connecting, you can use common networking tools like ping, telnet, ifconfig (Unix) / ipconfig (Windows), etc. to ensure you have a valid network connection. If unfamiliar with these, your system administrator can assist you.

6.14.2 Unable to Load the Browser

Ok, not a friendly error message, sorry, but if the Selenium Server cannot load the browser you will likely see this error:

(500) Internal Server Error

This could be caused by:

• Firefox (prior to Selenium 1.0) cannot start because the browser is already open and you did not specify a separate profile. See the section on Firefox profiles under Server Options.
• The run mode you're using doesn't match any browser on your machine. Check the parameters you passed to Selenium when your program opens the browser.
• You specified the path to the browser explicitly (using "*custom", see above) but the path is incorrect. Check to be sure the path is correct. Also check the forums to be sure there are no known issues with your browser and the "*custom" parameters.

6.14.3 Selenium Cannot Find the AUT

If your test program starts the browser successfully, but the browser doesn't display the website you're testing, the most likely cause is that your test program is not using the correct URL. This can easily happen: when you use Selenium-IDE to export your script, it inserts a dummy URL. You must manually change the URL to the correct one for your application to be tested.

6.14.4 Firefox Refused Shutdown While Preparing a Profile

This most often occurs when you run your Selenium-RC test program against Firefox, but you already have a Firefox browser session running and you didn't specify a separate profile when you started the Selenium Server. The error from the test program looks like this:

Error: java.lang.RuntimeException: Firefox refused shutdown while preparing a profile

Here's the complete error message from the server:

16:20:03.919 INFO - Preparing Firefox profile...
16:20:27.822 WARN - GET /selenium-server/driver/?cmd=getNewBrowserSession&1=*firefox&2=http%3a%2f%2fsage-webapp1.qa.com HTTP/1.1
java.lang.RuntimeException: Firefox refused shutdown while preparing a profile
...
Caused by: org.openqa.selenium.server.browserlaunchers.FirefoxCustomProfileLauncher$FileLockRemainedException: Lock file still present! C:\DOCUME~1\jsvec\LOCALS~1\Temp\customProfileDir203138\parent.lock
at org.openqa.selenium.server.browserlaunchers.FirefoxCustomProfileLauncher.waitForFullProfileToBeCreated(FirefoxCustomProfileLauncher.java:277)
...

To resolve this, see the section on Specifying a Separate Firefox Profile.

6.14.5 Versioning Problems

Make sure your version of Selenium supports the version of your browser. For example, Selenium-RC 0.92 does not support Firefox 3. At times you may be lucky (I was). But don't forget to check which browser versions are supported by the version of Selenium you are using. When in doubt, use the latest release version of Selenium with the most widely used version of your browser.

6.14.6 Error message: "(Unsupported major.minor version 49.0)" while starting server

This error says you're not using a correct version of Java. The Selenium Server requires Java 1.5 or higher. To double-check your Java version, run this from the command line:

java -version

You should see a message showing the Java version:

java version "1.5.0_07"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_07-b03)
Java HotSpot(TM) Client VM (build 1.5.0_07-b03, mixed mode)

If you see a lower version number, you may need to update the JRE, or you may simply need to add it to your PATH environment variable.
6.14.7 404 error when running the getNewBrowserSession command

If you're getting a 404 error while attempting to open a page on "http://www.google.com/selenium-server/", then it must be because the Selenium Server was not correctly configured as a proxy. The "selenium-server" directory doesn't exist on google.com; it only appears to exist when the proxy is properly configured. Proxy configuration highly depends on how the browser is launched with *firefox, *iexplore, *opera, or *custom.

• *iexplore: If the browser is launched using *iexplore, you could be having a problem with Internet Explorer's proxy settings. Selenium Server attempts to configure the global proxy settings in the Internet Options Control Panel. You must make sure that those are correctly configured when Selenium Server launches the browser. Try looking at your Internet Options control panel. Click on the "Connections" tab and click on "LAN Settings".
  – If you need to use a proxy to access the application you want to test, you'll need to start Selenium Server with "-Dhttp.proxyHost"; see the Proxy Configuration section for more details.
  – You may also try configuring your proxy manually and then launching the browser with *custom, or with the *iehta browser launcher.
• *custom: When using *custom you must configure the proxy correctly (manually), otherwise you'll get a 404 error. Double-check that you've configured your proxy settings correctly. One way to check whether you've configured the proxy correctly is to attempt to intentionally configure the browser incorrectly: try configuring the browser to use the wrong proxy server hostname, or the wrong port. If you had successfully configured the browser's proxy settings incorrectly, then the browser will be unable to connect to the Internet, which is one way to make sure that you are adjusting the relevant settings.
• For other browsers (*firefox, *opera) we automatically hard-code the proxy for you, and so there are no known issues with this functionality.

If you're encountering 404 errors and have followed this user guide carefully, post your results to the user forums for some help from the user community.

6.14.8 Permission Denied Error

The most common reason for this error is that your session is attempting to violate the same-origin policy by crossing domain boundaries (e.g., accessing a page from http://domain1 and then accessing a page from http://domain2) or switching protocols (moving from http://domainX to https://domainX). This error can also occur when JavaScript attempts to find UI objects which are not yet available (before the page has completely loaded), or are no longer available (after the page has started to be unloaded). This is most typically encountered with AJAX pages which are working with sections of a page or subframes that load and/or reload independently of the larger page. This error can be intermittent. Often it is impossible to reproduce the problem with a debugger because the trouble stems from race conditions which are not reproducible when the debugger's overhead is added to the system. Permission issues are covered in some detail in the tutorial. Read the sections about The Same Origin Policy and Proxy Injection carefully.

6.14.9 Handling Browser Popup Windows

There are several kinds of "popups" that you can get during a Selenium test. You may not be able to close these popups by running Selenium commands if they are initiated by the browser and not your AUT. Each type of popup needs to be addressed differently.

• HTTP basic authentication dialogs: These dialogs prompt for a username/password to login to the site. To login to a site that requires HTTP basic authentication, use a username and password in the URL, as described in RFC 1738, like this: open("http://username:password@example.com/blah/blah/blah").
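Building such an RFC 1738 URL by hand is error-prone when credentials contain reserved characters, so a small helper can be handy. This is a sketch; the function name is illustrative and not part of Selenium.

```python
from urllib.parse import quote

def with_basic_auth(url, username, password):
    """Embed user:password credentials into an http(s) URL, RFC 1738 style."""
    scheme, rest = url.split("://", 1)
    # Percent-encode the credentials so characters like '@' or ':' survive.
    user = quote(username, safe="")
    pwd = quote(password, safe="")
    return "%s://%s:%s@%s" % (scheme, user, pwd, rest)

print(with_basic_auth("http://example.com/blah", "user", "p@ss"))
# → http://user:p%40ss@example.com/blah
```

The resulting string can then be passed straight to Selenium's open() command as shown above.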
• SSL certificate warnings: Selenium RC automatically attempts to spoof SSL certificates when it is enabled as a proxy; see more on this in the section on HTTPS. If your browser is configured correctly, you should never see SSL certificate warnings, but you may need to configure your browser to trust our dangerous "CyberVillains" SSL certificate authority. Again, refer to the HTTPS section for how to do this.
• modal JavaScript alert/confirmation/prompt dialogs: Selenium tries to conceal those dialogs from you (by replacing window.alert, window.confirm and window.prompt) so they won't stop the execution of your page. If you're seeing an alert pop-up, it's probably because it fired during the page load process, which is usually too early for us to protect the page. Selenese contains commands for asserting or verifying alert and confirmation popups. See the sections on these topics in Chapter 4.

6.14.10 On Linux, why isn't my Firefox browser session closing?

On Unix/Linux you must invoke "firefox-bin" directly, so make sure that executable is on the path. If executing Firefox through a shell script, when it comes time to kill the browser Selenium RC will kill the shell script, leaving the browser running. You can specify the path to firefox-bin directly, like this:

cmd=getNewBrowserSession&1=*firefox /usr/local/firefox/firefox-bin&2=http://www.google.com

6.14.11 Firefox *chrome doesn't work with custom profile

Check the Firefox profile folder -> prefs.js -> user_pref("browser.startup.page", 0); Comment out this line like this: "//user_pref("browser.startup.page", 0);" and try again.

6.14.12 Is it ok to load a custom pop-up as the parent page is loading (i.e., before the parent page's javascript window.onload() function runs)?

No. Selenium relies on interceptors to determine window names as they are being loaded. These interceptors work best in catching new windows if the windows are loaded AFTER the onload() function. Selenium may not recognize windows loaded before the onload function.

6.14.13 Problems With Verify Commands

Note: This section is not yet developed.

If you export your tests from Selenium-IDE, you may find yourself getting empty verify strings from your tests (depending on the programming language used).

6.14.14 Safari and MultiWindow Mode

Note: This section is not yet developed.

6.14.15 Firefox on Linux

On Unix/Linux, versions of Selenium before 1.0 needed to invoke "firefox-bin" directly, so if you are using a previous version, make sure that the real executable is on the path. On most Linux distributions, the real firefox-bin is located on:

/usr/lib/firefox-x.x.x/

where x.x.x is the version number you currently have. So, to add that path to the user's path, you will have to add the following to your .bashrc file:

export PATH="$PATH:/usr/lib/firefox-x.x.x/"

If necessary, you can specify the path to firefox-bin directly in your test, like this:

"*firefox /usr/lib/firefox-x.x.x/firefox-bin"

6.14.16 IE and Style Attributes

If you are running your tests on Internet Explorer, you may not be able to locate elements using their style attribute. For example:

//td[@style="background-color:yellow"]

This would work perfectly in Firefox, Opera or Safari but not with IE. IE interprets the keys in @style as uppercase. So, even if the source code is in lowercase, you should use:

//td[@style="BACKGROUND-COLOR:yellow"]

This is a problem if your test is intended to work on multiple browsers, but you can easily code your test to detect the situation and try the alternative locator that only works in IE.

6.14.17 Where can I Ask Questions that Aren't Answered Here?

Try our user forums.

CHAPTER SEVEN
TEST DESIGN CONSIDERATIONS

NOTE: This chapter is currently being developed. We have some content here already though. We decided not to hold back on information just because a chapter was not ready.

7.1 Introducing Test Design

In this subsection we describe a few types of different tests you can do with Selenium. This may not be new to you, but we provide this as a framework for relating Selenium test automation to the decisions a quality assurance professional will make when deciding what tests to perform. These terms are by no means standard in the industry, although the concepts we present here are typical for web-applications.

7.2 What to Test?

What elements of your application will you test? Of course, that depends on aspects of your project: end-user expectations, time allowed for the project, priorities set by the project manager, and so on. Once the project boundaries are defined though, you the tester will of course make many decisions on what aspects of the application to test, the priority for each of those tests, and whether to automate those tests or not.

7.2.1 Content Tests

The simplest type of test for a web-application is to simply test for the existence of a static, non-changing, UI element on a particular page. For instance:

• Does each page have its expected page title? This can be used to verify your test found an expected page after following a link.
• Does the application's home page contain an image expected to be at the top of the page?
• Does each page of the website contain a footer area with links to the company contact page, privacy policy, and trademarks information?
• Does each page begin with heading text using the <h1> tag? And, does each page have the correct text within that header?
You may or may not need content tests. If your page content is not likely to be affected then it may be more efficient to test page content manually. If, however, your application will be undergoing platform changes, or files will likely be moved to different locations, content tests may prove valuable.

7.2.2 Link Tests

A frequent source of errors for web-sites is broken links and missing pages behind those broken links. Testing for these involves clicking each link and verifying the expected page behind that link loads correctly. (Should that go in this section or in a separate section?)

7.2.3 Function Tests

These would be tests of a specific function within your application, requiring some type of user input, and returning some type of results. Often a function test will involve multiple pages with a form-based input page containing a collection of input fields, Submit and Cancel operations, and one or more response pages. User input can be via text-input fields, checkboxes, drop-down lists, or any other browser-supported input.

Need to include a description of how to design this test and a simple example.

7.2.4 Dynamic Content

Dynamic content involves UI elements whose identifying properties change each time you open the page displaying them. That is, the characteristics used to locate the element vary with each different instance of the page that contains them. An example would be a result set of data returned to the user, say for example a list of documents. This is usually on a result page of some given function. An example will help.

Dynamic HTML of an object might look like this:

<input type="checkbox" value="true" id="addForm:_id74:_id75:0:_id79:0:checkBox" name="addForm:_id74:_id75:0:_id79:0:checkBox"/>

This is an HTML snippet for a check box. Its id and name (addForm:_id74:_id75:0:_id79:0:checkBox) are both the same, and both are dynamic (they will change the next time you open the application).

Suppose each data result, say for a list of documents, had a unique identifier for each specific document. So, for a particular search, the search results page returns a data set with one set of documents and their corresponding identifiers. Then, in a different search, the search results page returns a different data set where each document in the result set uses different identifiers. In this case

7.2.5 Ajax Tests

NOTE: INCLUDE A GOOD DEFINITION OF AJAX OFF THE INTERNET.

Ajax is a technology which supports dynamic real-time UI elements such as animation and RSS feeds. In AJAX-driven web applications, data is retrieved from the application server without refreshing the page.
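The content and link tests of sections 7.2.1 and 7.2.2 can be scripted against any Selenium client. Below is a hedged sketch: the `sel` object stands in for a Selenium instance (method names follow the Python RC client style), and `FakeSel` is a hypothetical stand-in so the sketch is self-contained.

```python
def check_links(sel, links_to_titles):
    """Click each link and verify the page behind it has the expected title."""
    broken = []
    for link_text, expected_title in links_to_titles.items():
        sel.click("link=" + link_text)
        sel.wait_for_page_to_load("30000")
        if sel.get_title() != expected_title:
            broken.append(link_text)
        sel.go_back()  # return to the page under test for the next link
    return broken

# Tiny stand-in client, just to make the logic runnable here.
class FakeSel:
    def __init__(self, titles):
        self.titles, self.current = titles, None
    def click(self, locator):
        self.current = locator[len("link="):]
    def wait_for_page_to_load(self, timeout):
        pass
    def go_back(self):
        pass
    def get_title(self):
        return self.titles.get(self.current, "404 Not Found")

sel = FakeSel({"Contact": "Contact Us", "Privacy": "Privacy Policy"})
print(check_links(sel, {"Contact": "Contact Us", "Privacy": "Privacy"}))
# → ['Privacy']
```

Against a real application, the same loop catches both broken links (a 404 title) and links wired to the wrong page.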
7.3 Verifying Expected Results: Assert vs. Verify? Element vs. Actual Content?

7.3.1 Assert vs. Verify: Which to Use?

7.3.2 When to verifyTextPresent, verifyElementPresent, or verifyText

7.4 Locating UI Elements

7.4.1 Locating Static Objects

This section has not been reviewed or edited.

Static HTML objects might look like this:

<a class="button" id="adminHomeForm" onclick="return oamSubmitForm('adminHomeForm','ad...

This is an HTML snippet for a button and its id is "adminHomeForm". This id remains constant within all instances of this page. That is, when this page is displayed, this UI element will always have this identifier. So, for your test script to click this button you just have to use the following Selenium command:

selenium.click("adminHomeForm");

7.4.2 Identifying Dynamic Objects

This section has not been reviewed or edited.

Dynamic objects are UI elements whose identifying properties change each time you open the page displaying them. Dynamic HTML of an object might look like this:

<input type="checkbox" value="true" id="addForm:_id74:_id75:0:_id79:0:checkBox" name="addForm:_id74:_id75:0:_id79:0:checkBox"/>

This is an HTML snippet for a check box. Its id and name (addForm:_id74:_id75:0:_id79:0:checkBox) are both the same, and both are dynamic (they will change the next time you open the application). In this case, normal object identification would look like:

selenium.click("addForm:_id74:_id75:0:_id79:0:checkBox");

Given the dynamic nature of the id, this approach would not work. The best way is to capture the id dynamically from the website itself. It can be done as:

// Collect all input ids on the page.
String[] checkboxIds = selenium.getAllFields();
for (String checkboxId : checkboxIds) {
    // If the collected id is not null...
    if (!GenericValidator.isBlankOrNull(checkboxId)) {
        // ...and the id contains "addForm", check the check box.
        if (checkboxId.indexOf("addForm") > -1) {
            selenium.check(checkboxId);
        }
    }
}
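The same dynamic-id capture can be sketched in Python. This is illustrative only: `get_all_fields` mirrors the Java client's getAllFields(), and `FakeSel` is a hypothetical stand-in for the Selenium client so the sketch runs on its own.

```python
def check_dynamic_checkboxes(sel, marker="addForm"):
    """Check every input whose server-generated id contains the marker text."""
    for field_id in sel.get_all_fields():   # analogous to getAllFields()
        if field_id and marker in field_id:  # skip blanks, match the marker
            sel.check(field_id)

# Tiny stand-in for the Selenium client, just to demonstrate the logic.
class FakeSel:
    def __init__(self, ids):
        self.ids, self.checked = ids, []
    def get_all_fields(self):
        return self.ids
    def check(self, field_id):
        self.checked.append(field_id)

sel = FakeSel(["addForm:_id74:0:checkBox", "loginForm:tbUsername", ""])
check_dynamic_checkboxes(sel)
print(sel.checked)  # → ['addForm:_id74:0:checkBox']
```

As with the Java version, the marker text ("addForm" here) must be unique to the fields you want to act on.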
This approach will work only if there is one field whose id has the text "addForm" appended to it.

Consider one more example of a dynamic object: a page with two links having the same name (one of which appears on the page) and the same HTML name. If href were used to click the link, it would always click the first element. Clicking on the second link can be achieved as follows:

// Flag for the second appearance of the link.
boolean isSecondInstanceLink = false;
// Id of the desired link.
String editInfo = null;
// Collect all links.
String[] links = selenium.getAllLinks();
// Loop through the collected links.
for (String linkID : links) {
    // If the retrieved link id is not null...
    if (!GenericValidator.isBlankOrNull(linkID)) {
        // ...find the inner HTML of the link.
        String editTermSectionInfo =
            selenium.getEval("window.document.getElementById('" + linkID + "').innerHTML");
        // If the retrieved link is the expected link...
        if (editTermSectionInfo.equalsIgnoreCase("expectedlink")) {
            // If it is the second appearance of the link, save the link id and break the loop.
            if (isSecondInstanceLink) {
                editInfo = linkID;
                break;
            }
            // Otherwise note that the first appearance (e.g. the Autumn term link) has been seen.
            isSecondInstanceLink = true;
        }
    }
}
// Click on the second instance of the link.
selenium.click(editInfo);

7.5 Location Strategy Tradeoffs

This section is not yet developed.

7.5.1 How can I avoid using complex xpath expressions in my test?

If the elements in HTML (button, table, label, etc.) have element IDs, then one can reliably retrieve all elements without ever resorting to xpath. These element IDs should be explicitly created by the application.
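Where element IDs exist, locators stay trivial. A small hedged Python helper (the function name is illustrative, not part of Selenium) that prefers an id locator and falls back to an xpath only when no id is available:

```python
def locator_for(element_id=None, xpath=None):
    """Prefer a stable id= locator; fall back to xpath only when necessary."""
    if element_id:
        return "id=" + element_id
    if xpath:
        return "xpath=" + xpath
    raise ValueError("need an element id or an xpath")

print(locator_for(element_id="adminHomeForm"))  # → id=adminHomeForm
print(locator_for(xpath="//td[@style='background-color:yellow']"))
```

Centralizing locator construction like this also makes it easy to audit how often a test suite still depends on fragile xpath expressions.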
But a non-descriptive element ID (e.g. id_147) tends to cause two problems. First, a non-specific element id makes it hard for automation testers to keep track of and determine which element ids are required for testing. Second, each time the application is deployed, different element ids could be generated. You might consider trying the UI-Element extension in this situation.

7.6 Testing Ajax Applications

7.6.1 Waiting for an AJAX Element

In AJAX-driven web applications, data is retrieved from the application server without refreshing the page, so using Selenium's waitForPageToLoad wouldn't work: the page is not actually reloaded to refresh the AJAX element. Pausing the test execution for a specified period of time is also not a good approach, as the web element might appear later or earlier than expected, leading to invalid test failures (reported failures that aren't actually failures). A better approach is to wait for a predefined period and then continue execution as soon as the element is found.

For instance, consider a page which brings up a link (link=ajaxLink) on click of a button on the page (without refreshing the page). This could be handled by Selenium using a for loop:

// Loop initialization.
for (int second = 0; ; second++) {
    // If the loop has reached 60 seconds then break the loop.
    if (second >= 60) break;
    // Search for the element "link=ajaxLink" and if available then break the loop.
    try {
        if (selenium.isElementPresent("link=ajaxLink")) break;
    } catch (Exception e) {}
    // Pause for 1 second.
    Thread.sleep(1000);
}

7.6.2 Locator Performance Considerations

7.7 UI Mapping

A UI map is a centralized location for an application's UI elements; the test script then uses the UI map for locating the elements to be tested. A UI map is a repository for all test script objects. This makes script maintenance easier and more efficient. UI maps have several advantages:

• Having a centralized location for UI objects instead of having them scattered throughout the script.
• Cryptic HTML identifiers and names can be given more human-readable names, increasing the readability of test scripts.

Consider the following example (in Java) of Selenium tests for a website:
Even the regular users of application would not be able to figure out as to what script does. selenium. selenium.waitForPageToLoad( "30000" ). selenium. selenium. selenium. "xxxxxxxx" ). // Click on Cancel button. "xxxxxxxx" ). selenium. A properties file contains key/value pairs. 7.createnewevent = adminHomeForm:_activitynew admin.0 The whole idea is to have a centralized location for objects and using comprehensible names for those objects. Values can be read from the properties file and used in Test Class to implement UI Map.cancel = addEditEventForm:_idcancel admin.readlines() source.events. Consider a property file prop.11 Organizing Your Test Suites This section has not been developed yet. In Python: # Collection of String values source = open( " input_file. Bitmap Comparison 89 .10 Organizing Your Test Scripts This section has not been developed yet. To achieve this.properties which has got definition of HTML object used above admin. • Handling Login/Logout State • Processing a Result Set 7.txt " .8.11. " r " ) values = source. where each key and value are strings. 7. 7.1 Data Driven Testing This section needs an introduction and it has not been completed yet.close() # Execute For loop for each String in the values array 7. properties files can be used in java.viewoldevents = adminHomeForm:_activityold Our objects still refer to html objects.8 Bitmap Comparison This section has not been developed yet. 7.username = loginForm:tbUsername admin.loginbutton = loginForm:btnLogin admin. but we have introduced a layer of abstraction between the test script and UI elements. Release 1.events.9 Solving Common Web-App Problems This section has not been developed yet. For more on Properties files follow this URL.events.Selenium Documentation. 90 Chapter 7. This is a very basic example of what you can do. Test Design Considerations . 
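One simple way to back such a UI map outside Java is a plain key=value file loaded into a dictionary. Below is a hedged Python sketch; the keys shown are the example identifiers used above, and the function name is illustrative.

```python
def load_ui_map(lines):
    """Parse key=value lines (Java properties style) into a dict."""
    ui_map = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        ui_map[key.strip()] = value.strip()
    return ui_map

admin = load_ui_map([
    "admin.username = loginForm:tbUsername",
    "admin.loginbutton = loginForm:btnLogin",
])
print(admin["admin.loginbutton"])  # → loginForm:btnLogin
```

The test script then looks up locators by readable name, exactly as the Java example does with its properties file.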
The whole idea is to have a centralized location for objects and to use comprehensible names for those objects. (Please beware that a UI map is not a replacement for comments!) To achieve this, properties files can be used in Java. A properties file contains key/value pairs, where each key and value are strings. Consider a properties file prop.properties which defines the HTML objects used above:

admin.username = loginForm:tbUsername
admin.loginbutton = loginForm:btnLogin
admin.events.createnewevent = adminHomeForm:_activitynew
admin.events.viewoldevents = adminHomeForm:_activityold
admin.events.cancel = addEditEventForm:_idcancel

Values can be read from the properties file and used in the Test Class to implement the UI map. Our objects still refer to HTML objects, but we have introduced a layer of abstraction between the test script and the UI elements. For more on properties files follow this URL.

7.8 Bitmap Comparison

This section has not been developed yet.

7.9 Solving Common Web-App Problems

This section has not been developed yet.

• Handling Login/Logout State
• Processing a Result Set

7.10 Organizing Your Test Scripts

This section has not been developed yet.

7.11 Organizing Your Test Suites

This section has not been developed yet.

7.11.1 Data Driven Testing

This section needs an introduction and it has not been completed yet.

In Python:

# Collection of String values
source = open("input_file.txt", "r")
values = source.readlines()
source.close()
# Execute the loop for each String in the values array
for search in values:
    sel.open("/")
    sel.type("q", search)
    sel.click("btnG")
    sel.waitForPageToLoad("30000")
    self.failUnless(sel.is_text_present("Results * for " + search))

Why would we want a separate file with data in it for our tests? One important method of testing concerns running the same test repeatedly with different data values. This is called Data Driven Testing and is a very common testing task. The Python script above opens a text file. This file contains a different search string on each line. The code saves these in an array of strings, and at last it iterates over the array, doing the search and assert on each string.

This is a very basic example of what you can do, but the idea is to show you things that can easily be done with either a programming or scripting language when they're difficult or even impossible to do using Selenium-IDE. Refer to the Selenium RC wiki for examples of reading data from a spreadsheet or using the data provider capabilities of TestNG with the Java client driver.

7.12 Handling Errors

Note: This section is not yet developed.

7.12.1 Error Reporting

This section has not been developed yet.

7.12.2 Recovering From Failure

This section has not been developed yet. A quick note though: recognize that your programming language's exception-handling support can be used for error handling and recovery. Test automation tools, Selenium included, generally handle this, as it's often a common reason for building test automation to support manual testing methods.

7.12.3 Database Validations

Since you can also do database queries from your favorite programming language, assuming you have database support functions, why not use them for some data validation/retrieval on the Application Under Test? Consider the example of a registration process, wherein the registered email address is to be retrieved from the database. A specific case of establishing a DB connection and retrieving data from the DB would be:

In Java:

// Load Microsoft SQL Server JDBC driver.
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
// Prepare connection url.
String url = "jdbc:sqlserver://192.168.1.180:1433;DatabaseName=TEST_DB";
// Get connection to DB.
public static Connection con = DriverManager.getConnection(url, "username", "password");
// Create statement object which would be used in writing DDL and DML
// SQL statements.
public static Statement stmt = con.createStatement();
// Send SQL SELECT statements to the database via the Statement.executeQuery
// method, which returns the requested information as rows of data in a
// ResultSet object.
ResultSet result = stmt.executeQuery("select top 1 email_address from user_register_table");
// Move the cursor to the first row and fetch the value of "email_address"
// from the "result" object.
result.next();
String emailaddress = result.getString("email_address");
// Use the fetched value to login to the application.
selenium.type("userid", emailaddress);

This is a very simple example of data retrieval from the DB in Java. A more complex test could be to validate that inactive users are not able to login to the application. This wouldn't take too much work from what you've already seen.

CHAPTER EIGHT
SELENIUM-GRID

Please refer to the Selenium Grid website. This section is not yet developed. If there is a member of the community who is experienced in Selenium-Grid, and would like to contribute, please contact the Documentation Team. We would love to have you contribute.
CHAPTER NINE
USER-EXTENSIONS

NOTE: This section is close to completion, but it has not been reviewed and edited.

9.1 Introduction

It can be quite simple to extend Selenium, adding your own actions, assertions and locator-strategies. This is done with JavaScript by adding methods to the Selenium object prototype, and the PageBot object prototype. On startup, Selenium will automatically look through methods on these prototypes, using name patterns to recognize which ones are actions, assertions and locators. The following examples try to give an indication of how Selenium can be extended with JavaScript.

9.2 Actions

All methods on the Selenium prototype beginning with "do" are added as actions. An action method can take up to two parameters, which will be passed the second and third column values in the test. For each action foo there is also an action fooAndWait registered.

Example: add a "typeRepeated" action to Selenium, which types the text twice into a text box.

Selenium.prototype.doTypeRepeated = function(locator, text) {
    // All locator-strategies are automatically handled by "findElement"
    var element = this.page().findElement(locator);
    // Create the text to type
    var valueToType = text + text;
    // Replace the element text with the new text
    this.page().replaceText(element, valueToType);
};

9.3 Accessors/Assertions

All getFoo and isFoo methods on the Selenium prototype are added as accessors (storeFoo). For each accessor there is an assertFoo, verifyFoo and waitForFoo registered. An assert method can take up to 2 parameters, which will be passed the second and third column values in the test. You can also define your own assertions literally as simple "assert" methods, which will also auto-generate "verify" and "waitFor" commands.

Example: add a valueRepeated assertion, that makes sure that the element value consists of the supplied text repeated.
Selenium.prototype.assertValueRepeated = function(locator, text) {
    // All locator-strategies are automatically handled by "findElement"
    var element = this.page().findElement(locator);
    // Create the text to verify
    var expectedValue = text + text;
    // Get the actual element value
    var actualValue = element.value;
    // Make sure the actual value matches the expected
    Assert.matches(expectedValue, actualValue);
};

The 2 commands that would be available in tests would be assertValueRepeated and verifyValueRepeated.

9.3.1 Automatic availability of storeFoo, assertFoo, assertNotFoo, verifyFoo, verifyNotFoo, waitForFoo and waitForNotFoo for every getFoo

All getFoo and isFoo methods on the Selenium prototype automatically result in the availability of storeFoo, assertFoo, assertNotFoo, verifyFoo, verifyNotFoo, waitForFoo, and waitForNotFoo commands. For example, if you add a getTextLength() method, the following commands will automatically be available: storeTextLength, assertTextLength, assertNotTextLength, verifyTextLength, verifyNotTextLength, waitForTextLength, and waitForNotTextLength.

Selenium.prototype.getTextLength = function(locator, text) {
    return this.getText(locator).length;
};

Also note that the assertValueRepeated method described above could have been implemented using isValueRepeated, with the added benefit of also automatically getting assertNotValueRepeated, storeValueRepeated, waitForValueRepeated and waitForNotValueRepeated.

9.4 Locator Strategies

All locateElementByFoo methods on the PageBot prototype are added as locator-strategies. A locator strategy takes 2 parameters: the first being the locator string (minus the prefix), and the second being the document in which to search.

Example: add a "valuerepeated=" locator, that finds the first element with a value attribute equal to the supplied value repeated.

PageBot.prototype.locateElementByValueRepeated = function(text, inDocument) {
    // Create the text to search for
    var expectedValue = text + text;
    // The "inDocument" is the document you are searching.
    // Loop through all elements, looking for ones that have
    // a value === our expected value
    var allElements = inDocument.getElementsByTagName("*");
    for (var i = 0; i < allElements.length; i++) {
        var testElement = allElements[i];
your user-extension should now be an uptions in the Commands dropdown. Using User-Extensions With Selenium-IDE 97 . 2. 6. Open Firefox and open Selenium-IDE. Below. ". }. 9. In Selenium Core Extensions click on Browse and find the user-extensions. 5. create a new command. js file. Place your user extension in the same directory as your Selenium Server.ca/" ).5. 9. While this name isn’t technically necessary. Click on OK. you must close and restart Selenium-IDE. If you are using client code generated by the Selenium-IDE you will need to make a couple small edits. } } return null. instantiate that HttpCommandProcessor object DefaultSelenium object. Options 4. 2.1 Example C# 1. This can be done in the test setup.5 Using User-Extensions With Selenium-IDE User-extensions are very easy to use with the selenium IDE. it’s good practice to keep things consistent.Selenium Documentation. 3. Click on Tools. Create your user extension and save it as user-extensions. 9. just below private StringBuilder verificationErrors. 9. 1. Next. 1.6 Using User-Extensions With Selenium RC If you Google “Selenium RC user-extension” ten times you will find ten different approaches to using this feature. is the official Selenium suggested approach. as you would the proc = new HttpCommandProcessor( "localhost" .0 if (testElement.value && testElement.value === expectedValue) { return testElement. Your user-extension will not yet be loaded. you will need to create an HttpCommandProcessor object with class scope (outside the SetupTest method.js.) HttpCommandProcessor proc. Release 1. Release 1.Selenium Documentation.0 1.Threading. 1. //selenium = new DefaultSelenium("localhost".DoCommand( "alertWrapper" . User-Extensions . Notice that the first letter of your function is lower case. regardless of the capitalization in your user-extension. Remember that user extensions designed for Selenium-IDE will only take two arguments. string[] inputParams = { "Hello World" }. 
verificationErrors = new StringBuilder(). Instantiate the DefaultSelenium object using the HttpCommandProcessor object you created. In this case there is only one string in the array because there is only one parameter for our user extension. Selenium automatically does this to keep common JavaScript naming conventions. [SetUp] public void SetupTest() { proc = new HttpCommandProcessor( "localhost" . Because JavaScript is case sensitive. } 98 Chapter 9.Text. System. "*iexpl selenium = new DefaultSelenium(proc). 1. execute your user-extension by calling it with the DoCommand() method of HttpCommandProcessor. This method takes two arguments: a string to identify the userextension method you want to use and string array to pass arguments. private HttpCommandProcessor proc. namespace SeleniumTests { [TestFixture] public class NewTest { private ISelenium selenium. your test will fail if you begin this command with a capital. Start the test server using the -userExtensions argument and pass in your user-extensinos. selenium = new DefaultSelenium(proc). Within your test code. NUnit.Framework.Text.RegularExpressions.js file. but a longer array will map each index to the corresponding user-extension parameter. inputParams is the array of arguments you want to pass to the JavaScript user-extension. proc.js using using using using using using System. System. System.jar -userExtensions user-extensions. java -jar selenium-server. 4444. "*iexpl selenium. Selenium. private StringBuilder verificationErrors. inputParams).Start(). 4444. DoCommand( "alertWrapper" .0 [TearDown] public void TeardownTest() { try { selenium. inputParams).ToString()).}.Stop(). } } } End Appendixes: 9. } [Test] public void TheNewTest() { selenium.Selenium Documentation.AreEqual( "" . } catch (Exception) { // Ignore errors if unable to close the browser } Assert.Open( "/" ). proc. Release 1. Using User-Extensions With Selenium RC 99 .6. string[] inputParams = { "Hello World" . verificationErrors. 
0 100 Chapter 9.Selenium Documentation. Release 1.. 104 Chapter 10. .NET client driver configuration .0 With This Visual Studio is ready for Selenium Test Cases. Release 1. It should not be too different for higher versions of Eclipse • Launch Eclipse. It is written primarily in Java and is used to develop applications in this language and. in other languages as well as C/C++.0. (Europa Release). Perl. • Select File > New > Other. PHP and more.Version: 3. Following lines describes configuration of Selenium-RC with Eclipse . 105 .1 Configuring Selenium-RC With Eclipse Eclipse is a multi-language software development platform comprising an IDE and a plug-in system to extend it. by means of the various plug-ins.3. Python. Cobol. Release 1.Selenium Documentation. Java Client Driver Configuration .0 • Java > Java Project > Next 106 Chapter 11. Release 1.Selenium Documentation.1. Select JDK in ‘Use a project Specific JRE’ option (JDK 1.0 • Provide Name to your project. Configuring Selenium-RC With Eclipse 107 .5 selected in this example) > click Next 11. (This described in detail in later part of document.) 108 Chapter 11.Selenium Documentation. Release 1. Project specific libraries can be added here. Java Client Driver Configuration .0 • Keep ‘JAVA Settings’ intact in next window. Release 1.0 • Click Finish > Click on Yes in Open Associated Perspective pop up window.Selenium Documentation.1. Configuring Selenium-RC With Eclipse 109 . 11. 110 Chapter 11.Selenium Documentation.0 This would create Project Google in Package Explorer/Navigator pane. Java Client Driver Configuration . Release 1. 1. Configuring Selenium-RC With Eclipse 111 . Release 1.Selenium Documentation.0 • Right click on src folder and click on New > Folder 11. 0 Name this folder as com and click on Finish button. Java Client Driver Configuration . 112 Chapter 11.Selenium Documentation. Release 1. • This should get com package insider src folder. 1.Selenium Documentation. Release 1. 
• Following the same steps, create a core folder inside com. The SelTestCase class can be kept inside the core package.
• Create one more package inside the src folder named testscripts. This is a place holder for test scripts. Please notice this is about the organization of the project and it entirely depends on the individual's choice / organization's standards. The test scripts package can further be segregated depending upon the project requirements.
• Create a folder called lib inside project Google: right click on the project name > New > Folder. This is a place holder for jar files for the project (i.e. Selenium client driver, Selenium server etc). This would create a lib folder in the project directory.
• Right click on the lib folder > Build Path > Configure Build Path.
• Under the Library tab click on Add External Jars to navigate to the directory where the jar files are saved. Select the jar files which are to be added and click on the Open button. After having added the jar files, click on the OK button.

11.2 Configuring Selenium-RC With IntelliJ

• Click Next and select the JDK to be used.
• Click Next and select Single Module Project.
• Click Next and select Java module.
• Click Next and provide the Module name and Module content root.
• Click Next and select the Source directory.
• At last click Finish. This will launch the Project pane.

Adding Libraries to the Project:

• Click on the Settings button in the Project tool bar.
• Click on Project Structure in the Settings pane.
• Select Module in Project Structure and browse to the Dependencies tab.
• Click on the Add button followed by a click on Module Library. (Multiple jars can be selected by holding down the control key.)
• Browse to the Selenium directory and select selenium-java-client-driver.jar and selenium-server.jar.
• Select both jar files in the project pane and click on the Apply button.

CHAPTER TWELVE PYTHON CLIENT DRIVER CONFIGURATION

• Download Selenium-RC from the SeleniumHQ downloads page
• Extract the file selenium.py
• Either write your Selenium test in Python or export a script from Selenium-IDE to a python file
• Add to your test's path the file selenium.py
• Run Selenium server from the console
• Execute your test from a console or your Python IDE

The following steps describe the basic installation procedure. After following this, the user can start using the desired IDE (even write tests in a text processor and run them from the command line!) without any extra work (at least on the Selenium side).

Installing Python

Note: This will cover python installation on Windows and Mac only, as in most linux distributions python is already pre-installed by default.

Windows
1. Download ActivePython's installer from ActiveState's official site (.../Products/activepython/index.mhtml)
2. Run the installer downloaded (ActivePython-x.x.x.x-win32-x86.msi)

Mac
The latest Mac OS X version (Leopard at this time) comes with Python pre-installed. To install an extra Python, get a universal binary at pythonmac.org/ (packages for Python 2.5). You will get a .dmg file that you can mount. It contains a .pkg file that you can launch.

Installing the Selenium driver client for python
1. Download the last version of Selenium Remote Control from the downloads page
2. Extract the content of the downloaded zip file
3. Copy the module with Selenium's driver for Python (selenium.py) into the folder C:/Python25/Lib (this will allow you to import it directly in any script you write). You will find the module in the extracted folder; it's located inside selenium-python-driver-client.

Congratulations, you're done! Now any python script that you create can import selenium and start interacting with the browsers.

CHAPTER THIRTEEN LOCATING TECHNIQUES

13.1 Useful XPATH patterns

13.1.1 text
Not yet written - locate elements based on the text content of the node.

13.1.2 starts-with
Many sites use dynamic values for element's id attributes, which can make them difficult to locate. One simple solution is to use XPath functions and base the location on what you do know about the element. For example, if your dynamic ids have the format <input id="text-12345" /> where 12345 is a dynamic number, you could use the following XPath: //input[starts-with(@id, 'text-')]

13.1.3 contains
If an element can be located by a value that could be surrounded by other text, the contains function can be used. To demonstrate, the element <span class="top heading bold"> can be located based on the 'heading' class without having to couple it with the 'top' and 'bold' classes using the following XPath: //span[contains(@class, 'heading')]. Incidentally, this would be much neater (and probably faster) using the CSS locator strategy css=span.heading

13.1.4 siblings
Not yet written - locate elements based on their siblings. Useful for forms and tables.

13.2 Starting to use CSS instead of XPATH

13.2.1 Locating elements based on class
In order to locate an element based on an associated class in XPath you must consider that the element could have multiple classes, defined in any order. However, with CSS locators this is much simpler (and faster).

• XPath: //div[contains(@class, 'article-heading')]
• CSS: css=div.article-heading
https://www.scribd.com/doc/49685070/Selenium-Documentation-2Epdf
Confused about Assembly naming with Namespace — Discussion in 'ASP .Net Building Controls' started by Elmo Watson, Sep 7, 2007.
http://www.thecodingforums.com/threads/confused-about-assembly-naming-with-namespace.758877/
This article describes how to implement "named format" functionality with C# and Linq Expressions.

Other Named Format Implementations

As you can guess, I am not the first one to write about this issue. For example, check out Phil Haack's blog post about this. That post deals a lot with the correct parsing of the format string and the correct interpretation of escaped and unescaped curly brackets. The emphasis in that post lies on correctness, not performance.

My Approach

Well, I like it fast and nasty 🙂 I needed a formatter that performs very fast, but does not need to support every twisted input string you can think of. So I settled on the single regular expression ([^{]|^){(\w+)}([^}]|$)|([^{]|^){(\w+)\:(.+)}([^}]|$). This is probably not the most elaborate way to parse a format string (in fact, regular expressions are usually not powerful enough to handle brackets and bracket nesting), but it worked for all my scenarios (single-line, not too long strings). Ok, let's dig into that.

The key to performance in this case is pre-compilation. The first call to the formatter will parse the input string, convert it into the regular string format form, construct Linq Expressions to call the regular String.Format method with the proper arguments, compile these expressions and cache them. Every subsequent call will just execute the precompiled expression.
Here's an example:

Input format string: "My name is {Name} and I am {Age} years old"
Input object: var obj = new { Name = "Santa", Age = 1700 }
Generated code: string.Format("My name is {0} and I am {1} years old", obj.Name, obj.Age)

Named Format With Linq Expressions

Without further ado, here's the code that does this:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using System.Text;
using System.Text.RegularExpressions;

namespace MunirHusseini
{
    public static class NamedFormat
    {
        private static readonly ConcurrentDictionary<string, object> PrecompiledExpressions =
            new ConcurrentDictionary<string, object>(StringComparer.OrdinalIgnoreCase);

        private static readonly Regex RegexFormatArgs =
            new Regex(@"([^{]|^){(\w+)}([^}]|$)|([^{]|^){(\w+)\:(.+)}([^}]|$)", RegexOptions.Compiled);

        public static string Format<T>(string pattern, T item)
        {
            // If we already have a compiled expression, just execute it.
            object o;
            if (PrecompiledExpressions.TryGetValue(pattern, out o))
            {
                return ((Func<T, string>)o)(item);
            }

            // Convert named format into regular format and return
            // a list of the named arguments in order of appearance.
            string replacedPattern;
            var arguments = ParsePattern(pattern, out replacedPattern);

            // We'll be using the String.Format method to actually perform the formatting.
            var formatMethod = typeof(string).GetMethod("Format", new[] { typeof(string), typeof(object[]) });

            // Now, construct code with Linq Expressions...

            // The constant that contains the format string:
            var patternExpression = Expression.Constant(replacedPattern, typeof(string));

            // The input object:
            var parameterExpression = Expression.Parameter(typeof(T));

            // An array containing a call to a property getter for each named argument:
            var argumentArrayElements = arguments.Select(argument =>
                Expression.Convert(Expression.PropertyOrField(parameterExpression, argument), typeof(object)));
            var argumentArrayExpressions = Expression.NewArrayInit(typeof(object), argumentArrayElements);

            // The actual call to String.Format:
            var formatCallExpression = Expression.Call(formatMethod, patternExpression, argumentArrayExpressions);

            // The lambda expression we will be compiling:
            var lambdaExpression = Expression.Lambda<Func<T, string>>(formatCallExpression, parameterExpression);

            // The lambda expression will look something like this:
            // input => string.Format("my format string", new[]{ input.Arg0, input.Arg1, ... });

            // Now we can compile the lambda expression
            var func = lambdaExpression.Compile();

            // Cache the pre-compiled expression
            PrecompiledExpressions.TryAdd(pattern, func);

            // Execute the compiled expression
            return func(item);
        }

        private static IEnumerable<string> ParsePattern(string pattern, out string replacedPattern)
        {
            // Just replace each named format item with a regular format item
            // and put all named format items in a list. Then return the
            // new format string and the list of the named items.
            var sb = new StringBuilder();
            var lastIndex = 0;
            var arguments = new List<string>();
            var lowerarguments = new List<string>();

            foreach (var @group in from Match m in RegexFormatArgs.Matches(pattern)
                                   select m.Groups[m.Groups[6].Success ? 5 : 2])
            {
                var key = @group.Value;
                var lkey = key.ToLowerInvariant();
                var index = lowerarguments.IndexOf(lkey);
                if (index < 0)
                {
                    index = lowerarguments.Count;
                    lowerarguments.Add(lkey);
                    arguments.Add(key);
                }

                sb.Append(pattern.Substring(lastIndex, @group.Index - lastIndex));
                sb.Append(index);
                lastIndex = @group.Index + @group.Length;
            }

            sb.Append(pattern.Substring(lastIndex));
            replacedPattern = sb.ToString();
            return arguments;
        }
    }
}

Fast, you say?

So how fast is this? This is as fast as a normal string.Format method! At least after the first call it is. I created a simple test to measure the time needed:

var input = new { Name = "Santa", Age = 1700 };

var sw = Stopwatch.StartNew();
var text = NamedFormat.Format("{Name} is {Age:0.0} years old.", input);
Console.WriteLine(sw.ElapsedMilliseconds);

sw = Stopwatch.StartNew();
for (var i = 0; i < 1000000; i++)
{
    text = NamedFormat.Format("{Name} is {Age:0.0} years old.", input);
}
Console.WriteLine(sw.ElapsedMilliseconds);
Console.WriteLine(text);

When I run this on my Surface Pro (Intel i5-3317U), I get measurements between 18ms and 25ms for the first call and 800ms to 1200ms for the next 1 million calls together. This equals 0.0008ms to 0.0012ms per call. Now that's cooking with gas 🙂

Comments

Thanks, I am working with an API and they provide queries in JSON strings with named fields like {id} etc. Your extension class will make it nice and easy to plug data right in without any messing about.

It's good to know that this is helpful to somebody. Thank you for the feedback 🙂

Hey man, it works great. Just one small issue when there is a field but no mapped property to match it. This line of code throws an exception:

var argumentArrayElements = arguments.Select(argument =>
    Expression.Convert(Expression.PropertyOrField(parameterExpression, argument), typeof(object)));

Can you suggest a way that, instead of an exception, it replaces them with an empty string? This would make this function work as a nice templating system.
This is how I did it: I had to disable caching though, as there is a possibility that the same template could be re-used with different fields. I am not sure yet how best to add the method parameters to the signature.

Good point, and good job 🙂 I guess you could use the name or hash code of the object's type as a prefix to the cache key. For example:

var cacheKey = item.GetType().GetHashCode() + pattern;
if (PrecompiledExpressions.TryGetValue(cacheKey, out o)) …

Thanks! The cache is working again 🙂 I must say it took me a while before I really understood how this code worked. I ran into an issue as the function would not work on dynamic objects due to the difference in property access. I had to do a fair bit of exploring MSDN and SO, as this was my first time using the Expression class and I had only ever used a Func maybe once before. It took me a fair bit of trial and error, but I think I addressed the issue. ExpandoObjects implement IDictionary. I thought about making an overload but felt that would be confusing, so I created a conditional statement that casts the Expando to IDictionary so the property can be accessed through the Item property in the same fashion. I am not sure what these casts are doing to the performance though, as from what I gather casting should be avoided when possible. I haven't thoroughly tested it, but it handled the templates I threw at it with various different object types OK. I updated the gist, would love to get your opinion if you have the time.

Guerrilla Coder, I think you did a great job with the code. I think that the conversion to IDictionary does not add any significant performance impact. Despite the name "Convert", I think that the Convert expression is treated like a conversion operator in C#: it tells the compiler what type to use but does not perform any conversion during runtime. I didn't find any proof for this on the internet, though. The dictionary lookup itself will probably add some measurable execution time, but that will still be very fast.
In fact, I measured the runtime of the code for an ExpandoObject and a regular object with 1 million iterations each. Both measurements yielded almost exactly the same timings (approx. 620ms for 1 million iterations on my machine). Link to the test:.

Regarding whether or not to add a method overload: IMHO, there are some points that speak for adding an overload instead of using if-blocks in the code. But in the end, your API must fit your needs, and if that is the case, then stick to whatever you prefer.

There's just one thing: when using dictionaries, you should change the calculation of the cache key to reflect the keys that are contained in the dictionary:

var prefix = item is IDictionary
    ? string.Join("_", ((IDictionary)item).Keys).GetHashCode()
    : item.GetType().GetHashCode();
var cacheKey = prefix + pattern;
https://softwareproduction.eu/2014/05/03/fast-named-formats-in-c
A component that tries to avoid downloading duplicate content

Project description

MaybeDont is a library that helps avoid downloading pages with duplicate content during crawling. It learns which URL components are important and which are not during crawling, and tries to predict if a page will be a duplicate based on its URL.

The idea is that if you have a crawler that just follows all links, it might download a lot of duplicate pages: for example, for a forum there might be pages like /view.php?topicId=10 and /view.php?topicId=10&start=0 - the only difference is the added start=0, and the content of these pages is likely duplicate. If we knew that adding start=0 does not change content, then we would avoid downloading the page /view.php?topicId=10&start=0 if we have already fetched /view.php?topicId=10, and thus save time and bandwidth.

Duplicate detector

maybedont.DupePredictor collects statistics about page URLs and contents, and is able to predict if a new URL will bring any new content.

First, initialize a DupePredictor:

from maybedont import DupePredictor
dp = DupePredictor(
    texts_sample=[page_1, page_2, page_3],
    jaccard_threshold=0.9)  # default value

texts_sample is a list of page contents. It can be omitted, but it is recommended to provide it: it is used to learn which parts of the page are common to a lot of the site's pages, and excludes these parts from duplicate comparison. This helps with pages where the content is small relative to the site chrome (footer, header, etc.): without removing chrome, all such pages would be considered duplicates, as only a tiny fraction of the content changes.

Next, we can update the DupePredictor model with downloaded pages:

dp.update_model(url_4, text_4)
dp.update_model(url_5, text_5)

After a while, DupePredictor will learn which arguments in URLs are important, and which can be safely ignored.
DupePredictor.get_dupe_prob returns the probability of a url being a duplicate of some content that has already been seen:

dp.get_dupe_prob(url_6)

Runtime overhead should not be too large: on a crawl with < 100k pages, the expected time to update the model is 1-5 ms, and below 1 ms to get the probability. All visited urls and hashes of content are stored in memory, along with some indexing structures.

Spider middleware

If you have a Scrapy spider, or are looking for inspiration for a spider middleware, check out maybedont.scrapy_middleware.AvoidDupContentMiddleware. First, it collects a queue of documents to learn which page elements are common on the site, in order to exclude them from content comparison. After that it builds its DupePredictor, updates it with crawled pages (only textual pages are taken into account), and starts dropping requests for duplicate content once it gets confident enough.

Not all requests for duplicates are dropped: with a small probability (currently 5%) requests are carried out anyway. This makes duplicate detection more robust against changes in site URL or content structure as the crawl progresses.

To enable the middleware, the following settings are required:

AVOID_DUP_CONTENT_ENABLED = True
DOWNLOADER_MIDDLEWARES['maybedont.scrapy_middleware.AvoidDupContentMiddleware'] = 200

The middleware is only applied to requests with avoid_dup_content in request.meta.

Optional settings:

- AVOID_DUP_CONTENT_THRESHOLD = 0.98 - minimal probability at which requests are skipped.
- AVOID_DUP_CONTENT_EXPLORATION = 0.05 - probability of still making a request that should be dropped.
- AVOID_DUP_CONTENT_INITIAL_QUEUE_LIMIT = 300 - number of pages that should be downloaded before DupePredictor is initialized.

How it works

Duplicate detection is based on MinHashLSH from the datasketch library. Text 4-shingles of words are used for hashing, not spanning line breaks in the extracted text.
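The shingling just described can be made concrete with a stdlib-only sketch. Note this computes the exact Jaccard similarity; the real library estimates it from MinHash signatures via datasketch's MinHashLSH instead. The 0.9 cut-off mirrors the default jaccard_threshold, and the page texts are invented:

```python
def shingles(text, n=4):
    # Word 4-shingles, as described above (the real code also avoids
    # spanning line breaks in the extracted text).
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    # Exact Jaccard similarity of two shingle sets.
    return len(a & b) / len(a | b) if (a or b) else 1.0

page_a = "topic ten reply one reply two reply three"
page_b = page_a + " and four"  # same page with a little extra content

sim_same = jaccard(shingles(page_a), shingles(page_a))  # identical pages -> 1.0
sim_near = jaccard(shingles(page_a), shingles(page_b))  # 5 shared of 7 shingles -> ~0.71

is_dupe = sim_near >= 0.9  # below the 0.9 threshold, so not flagged
```

Two near-identical pages share most of their shingles, so their similarity approaches 1.0; anything at or above the threshold is treated as duplicate content.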
Several hypotheses about duplicates are tested:

- All URLs with a given URL path are the same (have the same content), regardless of query parameters;
- All URLs which only differ in a given URL query parameter are the same (e.g. session tokens can be detected this way);
- All URLs which have a given path and only differ in a given URL query parameter are the same;
- All URLs which have a given path and query string and only differ in a single given query parameter are the same;
- URLs are the same if they have the same path and only differ in that some of them have a given param=value query argument added;
- URLs are the same if they have a given path and only differ in a given param=value query argument.

A Bernoulli distribution is fit for each hypothesis.
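The middleware settings listed earlier can be collected into a settings.py sketch. The values below are the defaults quoted in this README; note that a dict literal replaces any DOWNLOADER_MIDDLEWARES entries you already have, so merge accordingly:

```python
# settings.py -- assembled from the settings documented above.
AVOID_DUP_CONTENT_ENABLED = True

DOWNLOADER_MIDDLEWARES = {
    'maybedont.scrapy_middleware.AvoidDupContentMiddleware': 200,
}

# Optional knobs (defaults shown):
AVOID_DUP_CONTENT_THRESHOLD = 0.98           # minimal probability at which requests are skipped
AVOID_DUP_CONTENT_EXPLORATION = 0.05         # probability of still making a "duplicate" request
AVOID_DUP_CONTENT_INITIAL_QUEUE_LIMIT = 300  # pages downloaded before DupePredictor starts

# Remember to opt requests in from the spider, e.g.:
#   yield scrapy.Request(url, meta={'avoid_dup_content': True})
```

Requests without avoid_dup_content in their meta pass through the middleware untouched.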
https://pypi.org/project/MaybeDont/
Root causes of high deep sleep current [LoPy1, WiPy2 and SiPy], all the new modules **do not have deepsleep issues**

@bucknall My colleague and I have made a total of 3 orders (7 LoPy development boards in total), but the deep sleep shield didn't come with our orders. May I know where I can order the deep sleep shield? I've contacted the support team, but haven't got any reply from them.

@daniel said in Root causes of high deep sleep current: "@jcaron the reason for the higher current on the Pysense is because the sensors need to be disabled as part of the process of going to deepsleep. We will add that to the official library within the next few days."
- is this actually confirmed to be a purely software issue?
- has there been any progress with the libraries? I've seen a few updates pushed on Monday, but I don't think they're relevant to this issue?
The other thing I really would like to see is the ability to wake on pins (and the related control of the pins connected to the PIC on the Pysense, which from what I understand will be the only pins which can be used for wake up?) Thanks, Jacques.

@jcaron That is correct, my wording was slightly confusing! If you order any of our devices (WiPy 2.0, LoPy or SiPy) you will automatically be sent a deep sleep shield.

@bucknall From what I understood, you will automatically receive a Deep Sleep Shield with any new orders. I know I received shields with the boards I ordered the week before last.

@chrisi you will need to order a shield - the SiPy can also use the Pytrack/Pysense to sleep as well.

If I order another SiPy now, will the deep sleep feature be fixed or will I still need a shield?

Thanks jmarcelino. Does the below earlier comment of yours still apply?
By the way, I found that if I do:

from network import WLAN
wlan = WLAN(mode=WLAN.STA)

power saving is enabled, but if I use this instead it doesn't:

from network import WLAN
wlan = WLAN()
wlan.mode(WLAN.STA)

@jalperin This works to turn off WiFi:

import network
n = network.WLAN()
n.deinit()

Turning off Bluetooth would be similar but for network.Bluetooth() instead. However, removing Bluetooth doesn't change anything in power consumption at the moment - but also it won't be on or consuming anything unless you had previously initialised Bluetooth() in your code. So it's actually better to do nothing at all for the Bluetooth side.

Thank you, @jcaron. Very helpful. I have been executing that code, but followed by machine.deepsleep(). I gather that is not necessary. Correct? So, lastly, exactly how does one turn off wifi and bluetooth (I've read different methods in this forum)?

@jalperin With the Pysense board, you need to use the libraries that are available on github and add the following code:

from pysense import Pysense
py = Pysense()
py.setup_sleep(number_of_seconds)
py.go_to_sleep()

This will tell the Pysense to stop the LoPy (including Wi-Fi, BT, LoRa...). It's actually powered off. It will be woken up after the given number of seconds has elapsed. Work is in progress to allow wake up on other conditions (some input going high, for instance). At the moment, this will bring down power consumption to about 2.5 mA when running from a LiPo battery. Ultimately it should go down to a fraction of that once everything is correctly shut down.

Note that when the LoPy comes out of deep sleep, it's as if it just rebooted. There are still issues with LoRa keeping state, for instance. As the firmware turns on Wi-Fi when it starts, it's a good idea to stop it right away if you don't need it during the time the LoPy is awake. Not sure about Bluetooth (I stop it, but I'm not sure that's actually needed). Hope that helps.
Unfortunately, I remain where I started. I am unsure whether I am shutting down WiFi and Bluetooth (I read various posts), and I am unsure how to properly get both the Pysense and LoPy to deep sleep. Do I understand it correctly that I don't really need the deep sleep shield but I do need new libraries (that will show up in a few days and will hopefully contain examples)? If so, how does one shut off wifi and bluetooth? Thanks for your help.

@jcaron the reason for the higher current on the Pysense is because the sensors need to be disabled as part of the process of going to deepsleep. We will add that to the official library within the next few days. Cheers, Daniel

@jalperin from what I understood, the Pysense (and Pytrack) boards actually incorporate what is present on the deep sleep shields. However, there is still a lingering issue on the Pysense (not the Pytrack) which still uses more current than it should in deep sleep; still waiting for a solution for this.

I wish to run my LoPy on a Pysense with the lowest possible current draw from a LiPo battery. I assume I would plug the deep sleep shield in between them. Correct? Could someone please summarize in one place all the appropriate actions to turn off everything but wake periodically to take a lux reading and transmit it via LoRa? Thanks for your help.

@irvinvp Here is the answer -

athtest800: Hi! Does the PIC10F322 need programming? If yes, where can I find the code? Thank you!

crankshaft: @Fred - Is it possible to acknowledge receipt of the request for a shield when it is initially made and then notify us when it is dispatched? At the moment I have no idea whether my submission was successful, or when I am likely to receive the shields.

Hello, we have received all the surveys filled in up to now and everything is going OK. We are targeting mass production of the deep sleep shields in early July and will ship shortly afterwards.
Cheers, Daniel

I've filled the form at (mid April) and still nothing - no email confirmation, no order progress, and no DS shields.

@Fred said in Root causes of high deep sleep current: Dear All, A quick update on this topic to confirm we are currently testing the Pilot Run samples in Eindhoven. All going well, and we expect to have shields to ship in circa 2 to 3 weeks, e.g. 5/6 June. (...)
https://forum.pycom.io/topic/1022/root-causes-of-high-deep-sleep-current-lopy1-wipy2-and-sipy-all-the-new-modules-do-not-have-deepsleep-issues/79?lang=en-US
Server that executes RPC commands through HTTP.

Dependencies: EthernetInterface, mbed-rpc, mbed-rtos, mbed

Overview

This program is a small HTTP server that can execute RPC commands sent through HTTP and sends back a reply. This reply can be either a line of text or an HTML page.

HTTP Request

The server will only read the first line of the HTTP header. It does not check that the header is correct, and it does not attempt to read the body. Hence, the server is only interested in the type of the request and the path. Instead of requesting a file, the path contains the RPC command.

RPC command encoding

The RPC command must be encoded in a special way because no spaces are allowed in the path. Thus, the RPC command is encoded in this way:

    /<obj name or type>/<func name>?arg1_name=arg1_val&arg2_name=arg2_val

or, if the RPC command does not have any arguments:

    /<obj name or type>/<func name>

The names of the arguments do not appear in the final RPC command, so URLs that differ only in their argument names produce the same command. Also, the order of the arguments is preserved.

Request handlers

To process requests, the server relies on RequestHandler. Each RequestHandler is assigned to a request type. Each type of request is assigned to a certain role:
- PUT requests to create new objects
- DELETE requests to delete objects
- GET requests to call a function of an object

However, there is a RequestHandler that accepts only GET requests but can create/delete/call a function. This was necessary to provide an interactive web page that allows creation and deletion of objects.

Reply

The reply depends on the formatter. Currently, three formatters are available:
- The most basic one does not modify the RPC reply. Hence, if you consider sending requests from python scripts, this formatter is the most appropriate one.
- A simple HTML formatter will allow the user to view the RPC reply and a list of RPC objects currently alive from a browser.
- Finally, a more complex HTML formatter creates an entire web page where the user can create and delete objects as well as call functions on them.

Configure the server

The configuration of the server consists of choosing the formatter and adding one or more request handlers to the server. The main function initializes the server to produce HTML code and to receive data only through GET requests. If you want to use a simpler and different version of the server, you can replace the content of the main function (located in main.cpp) with this code:

main

    RPCType::instance().register_types();
    EthernetInterface eth;
    eth.init();
    eth.connect();
    printf("IP Address is %s\n", eth.getIPAddress());

    HTTPServer srv = create_simple_server();
    if(!srv.init(SERVER_PORT))
    {
        printf("Error while initializing the server\n");
        eth.disconnect();
        return -1;
    }
    srv.run();
    return 0;

However, this configuration will not work with the following examples.

Examples

I assume that the server is using the InteractiveHTMLFormatter (which should be the case if you did not make any changes).

Using a browser

Here is a quick guide on how to run this program:
- Compile this program and copy it to the mbed
- Open TeraTerm (install it if you don't have it), select serial and choose the port named "mbed Serial Port"
- Reset your mbed
- The IP address should appear in TeraTerm. In this example, I will use 10.2.200.116.
- Open your browser and go to.
- If everything is OK, you should see a webpage.

Now, let's switch on a LED. First, we need to create an object to control a LED: Then, let's write an RPC command to switch on the LED:

Using python

This program creates and switches on led2.
Sending RPC commands over HTTP with Python

    import httplib

    SERVER_ADDRESS = '10.2.200.38'
    h = httplib.HTTPConnection(SERVER_ADDRESS)

    h.request("GET", "/DigitalOut/new?arg=LED2&name=led2")
    r = h.getresponse()
    print r.read()

    h.request("GET", "/led2/write?arg=1")
    r = h.getresponse()
    print r.read()

    h.close()

Of course, you might have to change the server address in order to make it work.

HTTPServer.cpp@10:8b4c3d605bf0, 2013-07-18 (annotated)
- Committer: feb11
- Date: Thu Jul 18 10:10:14 2013 +0000
- Revision: 10:8b4c3d605bf0
- Parent: 9:a9bf63017854

Improved javascript code
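Going back to the "RPC command encoding" section above: the path-to-command transformation it describes (argument names dropped, order preserved) could be decoded along these lines — an illustrative sketch, not the parser the library actually uses:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Convert an encoded path like "/led2/write?arg=1&x=2" into the
// space-separated RPC command "/led2/write 1 2". Argument names are
// discarded, but argument order is preserved, as the page describes.
std::string decode_rpc_path(const std::string& path) {
    std::string::size_type q = path.find('?');
    if (q == std::string::npos) return path;          // no arguments
    std::string cmd = path.substr(0, q);
    std::istringstream args(path.substr(q + 1));
    std::string pair;
    while (std::getline(args, pair, '&')) {
        std::string::size_type eq = pair.find('=');
        cmd += ' ';
        cmd += (eq == std::string::npos) ? pair : pair.substr(eq + 1);
    }
    return cmd;
}
```

With this shape, "/users/named/x?UserName=bob" and "/users/named/x?u=bob" decode to the same command, which is exactly why the three example URLs mentioned above behave identically.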
https://os.mbed.com/users/feb11/code/HTTP-Server/annotate/8b4c3d605bf0/HTTPServer.cpp/
Ok newbie here. I got a question. I'm trying to write a program that reads a string from someone and gives an if-then-else result. Here's what I've done, can someone point out what I'm doing wrong? Of course this isn't where the code stops, but this is where my problems are coming in.

    #include <stdio.h>

    void main(void)
    {
        char name;   //input variable
        char f;      //input variable
        char lname;  //input variable
        char mname;  //input variable

        printf ("What is your First Name?");
        f=getchar();
        if (f=="fantim") || (f == "Fantim") || (f == "FANTIM"))
            printf ("What is your middle name?");
        scanf ("%d", &mname);
        if ((mname == "mi") || (mname == "Mi") || (mname =="MI"))
            printf ("What is your last name?");
        scanf ("%d", &lname);
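For what it's worth, the usual problems with code like this are: a single char can't hold a whole name, string equality needs a strcmp-style comparison rather than ==, and %d is the wrong scanf conversion for text. A minimal sketch of the comparison side (names_match and read_name are hypothetical helpers, not standard functions):

```c
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Case-insensitive comparison of two NUL-terminated strings.
   Returns 1 on a match, 0 otherwise. */
int names_match(const char *a, const char *b)
{
    for (; *a && *b; a++, b++) {
        if (tolower((unsigned char)*a) != tolower((unsigned char)*b))
            return 0;
    }
    return *a == '\0' && *b == '\0';
}

/* Read one line of input into buf and strip the trailing newline. */
int read_name(char *buf, int size)
{
    if (fgets(buf, size, stdin) == NULL)
        return 0;
    buf[strcspn(buf, "\n")] = '\0';
    return 1;
}

/* Intended use, sketched:
       char fname[64];
       printf("What is your First Name? ");
       if (read_name(fname, sizeof fname) && names_match(fname, "fantim"))
           printf("What is your middle name?\n");
*/
```

With this shape, one comparison covers fantim/Fantim/FANTIM, and the middle and last names can be read the same way instead of with scanf("%d", ...).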
http://cboard.cprogramming.com/c-programming/62435-entering-specific-data-printable-thread.html
This is an interactive blog post: you can modify and run the code directly from your browser. To see any of the output you have to run each of the cells.

In particle physics applications (like the Flavours of Physics competition on Kaggle) we often optimise the decision threshold of the classifier used to select events. Recently we discussed (once again) the question of how to optimise the decision threshold in an unbiased way. So I decided to build a small toy model to illustrate some points and make the discussion more concrete.

What happens if you optimise this parameter via cross-validation and use the classifier performance estimated on each held-out subset as an estimate of the true performance? If you studied up on ML, then you know the answer: it will most likely be an optimistic estimate, not an unbiased one. Below are some examples of optimising hyper-parameters on a dataset where the true performance is 0.5, i.e. there is no way to tell one class from the other. This is convenient because, knowing the true performance, we can evaluate whether or not our estimate is biased. After optimising some standard hyper-parameters, we will build two meta-estimators that help with finding the best decision threshold via the normal GridSearchCV interface.
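The intuition — taking the maximum over many noisy estimates of the same true value gives an upward-biased answer — can be demonstrated in a few lines (a standalone sketch, not a cell from the original notebook; the 56 mirrors the size of the 8 × 7 parameter grid used below):

```python
import numpy as np

rng = np.random.RandomState(0)

# 40 repetitions: each time, "evaluate" 56 useless models whose true
# accuracy is 0.5, measured on 60 samples, and keep the best score.
best_scores = [rng.binomial(60, 0.5, size=56).max() / 60 for _ in range(40)]

mean_best = np.mean(best_scores)
# mean_best lands well above the true accuracy of 0.5
```

No model here is better than coin flipping, yet the best observed score per repetition is reliably far above 0.5 — the same selection effect the grid searches below will show.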
To sweeten the deal, here's a gif of Benedict Cumberbatch pretending to be unbiased:

    %config InlineBackend.figure_format='retina'
    %matplotlib inline

    import numpy as np
    import scipy as sp
    import matplotlib.pyplot as plt

    from sklearn.base import BaseEstimator, TransformerMixin, ClassifierMixin, MetaEstimatorMixin
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.cross_validation import train_test_split
    from sklearn.grid_search import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.utils import check_random_state

    def data(N=1000, n_features=100, random_state=None):
        rng = check_random_state(random_state)
        gaussian_features = n_features//2
        return (np.hstack([rng.normal(size=(N, gaussian_features)),
                           rng.uniform(size=(N, n_features-gaussian_features))]),
                np.array(rng.uniform(size=N)>0.5, dtype=int))

    X, y = data(200, random_state=1)
    # set aside data for final (unbiased) performance evaluation
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)

    param_grid = {'max_depth': [1, 2, 5, 10, 20, 30, 40, 50],
                  'max_features': [4, 8, 16, 32, 64, 80, 100]}

    rf = RandomForestClassifier(random_state=94)
    grid = GridSearchCV(rf, param_grid=param_grid, n_jobs=6, verbose=0)
    _ = grid.fit(X_train, y_train)

The best parameters found and their score:

    print("Best score: %.4f"%grid.best_score_)
    print("Best params:", grid.best_params_)
    print("Score on a totally fresh dataset:", grid.score(X_test, y_test))

The best accuracy we found is around 0.62 with max_depth=1 and max_features=8. As we created the dataset without any informative features, we know that the true score of any classifier is 0.5. Therefore this is either a fluctuation (because we don't measure the score precisely enough) or the score from GridSearchCV is biased. You can also see that using a fresh, never-seen-before sample gives us an estimated accuracy of 0.56.
Bias or no bias?

To test whether the accuracy obtained from GridSearchCV is biased or just a fluke, let's repeatedly grid-search for the best parameters and look at the average score. For this we generate a brand new dataset each time. The joys of having an infinite stream of data!

    def grid_search(n, param_grid, clf=None):
        X, y = data(120, random_state=0+n)
        if clf is None:
            clf = RandomForestClassifier(random_state=94+n)

        grid = GridSearchCV(clf, param_grid=param_grid, n_jobs=6, verbose=0)
        grid.fit(X, y)
        return grid

    scores = [grid_search(n, param_grid).best_score_ for n in range(40)]

    plt.hist(scores, range=(0.45,0.65), bins=15)
    plt.xlabel("Best score per grid search")
    plt.ylabel("Count")
    print("Average score: %.4f+-%.4f" %(np.mean(scores), sp.stats.sem(scores)))

As you can see, all of the best scores we find are above 0.5 and the average score is close to 0.58, with a small uncertainty. Conclusion: the best score obtained during grid search is not an unbiased estimate of the true performance. Instead it is an optimistic estimate.

Threshold optimisation

Next, let's see what happens if we use a different hyper-parameter: the threshold applied to decide which class a sample falls in at prediction time. For this to work in the GridSearchCV framework we construct two meta-estimators. The first one is a transformer: it transforms the features of a sample into the output of a classifier. The second one is a very simple classifier: it assigns samples to one of two classes based on a threshold. Combining them in a pipeline, we can then use GridSearchCV to optimise the threshold as if it was any other hyper-parameter.
    class PredictionTransformer(BaseEstimator, TransformerMixin, MetaEstimatorMixin):
        def __init__(self, clf):
            """Replaces all features with `clf.predict_proba(X)`"""
            self.clf = clf

        def fit(self, X, y):
            self.clf.fit(X, y)
            return self

        def transform(self, X):
            return self.clf.predict_proba(X)

    class ThresholdClassifier(BaseEstimator, ClassifierMixin):
        def __init__(self, threshold=0.5):
            """Classify samples based on whether they are above or below `threshold`"""
            self.threshold = threshold

        def fit(self, X, y):
            self.classes_ = np.unique(y)
            return self

        def predict(self, X):
            # the implementation used here breaks ties differently
            # from the one used in RFs:
            #return self.classes_.take(np.argmax(X, axis=1), axis=0)
            return np.where(X[:, 0]>self.threshold, *self.classes_)

With these two wrappers we can use GridSearchCV to find the 'optimal' threshold. We use a different parameter grid that only varies the classifier threshold. You can experiment with optimising all three hyper-parameters in one go if you want to, by uncommenting the max_depth and max_features lines.
http://betatim.github.io/posts/unbiased-performance/
CC-MAIN-2018-30
refinedweb
1,017
50.43
Very helpful to get started with NLP in PyTorch( PS- only for beginners) Kaggle NLP Competition - Toxic Comment Classification Challenge It’s getting worse day by day. I think people are overfitting on LB. The same submission is used by many. I am sure that at least up to 100-th position there is exact correlation between local CV and public leaderboard. Hierarchical Attention Network in keras with Tensorflow backend My latest vanilla pytorch code scored 0.9322 using a BiLSTM network. Like others have mentioned, really need to pay attention at how test dataset are being processed. In any case, rather happy with the results since training loss was around 0.066, so not much off from the final outcome Using this as a reference: Prior to this was referring a lot to wgpubs code with little success. I think I managed to fix the randomized prediction but Kaggle score was just around 0.57. Might need more troubleshooting. My Bidirectional GRU model with fastText embeddings scored 0.9847 on public leaderboard - I trained few similar models with different pretrained word embeddings and got similar results. I also tried more complex architectures, but didn’t get any improvements. Any suggestions would be greatly appreciated. Focus more on preprocessing, try 10 fold validation and use pretrained embedding. With these 2 things you should be able to cross a single model score of .986+ Thanks for the hints. Were you able to cross 0.986+ with a single model? Yes. A Simple Bi-Directional GRU with good preprocessing and fast text embeddings can give you easily .986+ . I am struggling on what to do next. Trying averaging , ensembling but not much success. So preprocessing is the key. As I tried everything else and best score I got i 0.9852. Everything else (up to 0.9871) is blending based on train set OOF. For me blending/ensembling gives 3-5 point boost in the 4th decimal . I am also using OOF. Do you also use the public blend. When added with that , it gives me a light boost. 
I think the key here is to blend different models: blend RNN type networks + LR on TfIDF/Bayesian features + Tree Based predictions. What kind of pre-processing do you recommend? Any kernels we should be looking at. I have a GRU (pytorch, fastai, fastext) that scores me a 0.976. I’m doing very minimal pre-processing/cleanup and I’m unsure about how to know what to cleanup and what preprocessing should be done. Btw, my hyperparams: max vocab size = 30,000 embedding size = 300 GRU hidden activations = 80 max seq. len = 100 Thanks - wg @sermakarevich did you try using CapsNet yet? If tuned well I think it has good potential and at the very least may contribute well to a blend. No, I did not as I have no idea how it works. I am trying to figure out what guys are doing with preprocessing at the moment. Nothing worked for me so far so I assume it can give a boost up to 0.988 (difference between my current best model and @VishnuSubramanian best model). I was able to improve the accuracy of my model when I started looking at the output of the tokenizer. Remember if the tokenizer does not do a good job then most of the words would end up as unknown as they will not be available in the pretrained embedding. So look for all the words which are being thrown away and try the different tokenizers of the top voted kernels and see which works better. You would be surprised a simple tokenizer would do a good job. Spell checking can also give you a slight improvement. @jamesrequa you are absolutely right . I tried CapsuleNet yesterday and it gave me a pretty good result of .9865+ without any fine tuning. But it takes a lot of time to train. Hi all, this is baffling me. Any ideas why this error is being generated. I have read a couple of threads but still cant resolve the issue: If I submit the original sample_submission file - no issues but once I use the same files with the predictions I get this error. 
Even tried the concatenation method in one of the Kaggle posts but still cant resolve. Has anyone had this issue or any thoughts on how to resolve? @amritv Make sure you set index=False when you save your submission file. I think this is probably the cause for it @jamesrequa I am using index=False in the submission file, here is the code for the concatenation: import os import glob import pandas def concatenate(indir='Submissions/concat', outfile='Submissions/concat/concantenated.csv'): os.chdir(indir) dfileList=glob.glob('*.csv') dfList=[] colnames=['id','toxic','severe_toxic','obscene','threat','insult','identity_hate'] for filename in fileList: print(filename) df=pandas.read_csv(filename, header=None) dfList.append(df) concatDf=pandas.concat(dfList, axis=0) concatDf.columns=colnames concatDf.to_csv(outfile, index=None) I also tried the recommendation here but this didnt work either. This is mind boggling and Im sure a simple fix @amritv That’s strange that Kaggle isn’t accepting it. You def shouldn’t need to do any concatenations of different csv files just to make the submission file. I’m not sure if you tried this already, but here is some reference code for how you could generate a new submission file from scratch and then populate it directly with the prediction arrays for each class. This is assuming you are taking the predictions directly as they were generated from the model. test_ids = pd.read_csv('./input/sample_submission.csv').id.values columns = ['id','toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate'] submission = pd.DataFrame(index=range(0,len(test_ids)), columns=columns) submission[["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]] = test_preds submission.to_csv('submission.csv', index=False)
http://forums.fast.ai/t/kaggle-nlp-competition-toxic-comment-classification-challenge/9055?page=4
CC-MAIN-2018-13
refinedweb
975
66.84
For unit testing, sometimes it's useful to change verbosity level of output and Django gives you less or more information as desired. However, nothing seems to happen any different for v0, v1, v2, or v3. I think I'm always getting verbosity level 1. How do I change the verbosity level to something other than 1? Here are the details of what I tried (app called "survey"): Select manage.py Select test (hit shift enter) type: survey -v2 Pycharm accepts as valid input -v0, -v1, -v2, or -v3, as it should. But the output is always exactly the same (this example purposely has an error): /opt/bitnami/python/bin/python2.7 /home/bitnami/pycharm-2.6.3/helpers/pycharm/django_test_manage.py test survey -v3 /home/bitnami/PycharmProjects/marketr Testing started at 11:37 PM ... Creating test database for alias 'default'... Failure Traceback (most recent call last): File "/home/bitnami/PycharmProjects/marketr/survey/tests.py", line 19, in test_basic_addition self.assertEqual(1 + 1, 3) AssertionError: 2 != 3 Destroying test database for alias 'default'... Process finished with exit code 1 and here's the tests.py file: """ This file demonstrates writing tests using the unittest module. These will pass when you run "manage.py test". Replace this with more appropriate tests for your application. Example of Docstring test: 1 + 1 == 2 True """ from django.test import TestCase class SimpleTest(TestCase): def test_basic_addition(self): """ Tests that 1 + 1 always equals 2. """ self.assertEqual(1 + 1, 3)
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205811859-How-to-specify-verbosity-level-with-manage-py-test-
CC-MAIN-2016-44
refinedweb
244
60.92
When it is required to place even and odd elements in a list into two different lists, a method with two empty lists can be defined. The modulus operator can be used to determine if the number is even or odd. Below is the demonstration of the same − def split_list(my_list): even_list = [] odd_list = [] for i in my_list: if (i % 2 == 0): even_list.append(i) else: odd_list.append(i) print("The list of odd numbers are :", even_list) print("The list of even numbers are :", odd_list) my_list = [2, 5, 13, 17, 51, 62, 73, 84, 95] print("The list is ") print(my_list) split_list(my_list) The list is [2, 5, 13, 17, 51, 62, 73, 84, 95] The list of odd numbers are : [2, 62, 84] The list of even numbers are : [5, 13, 17, 51, 73, 95] A method named ‘split_list’ is defined, that takes a list as a parameter. Two empty lists are defined. The parameter list is iterated over, and the modulus operator is used to determine if the number is even or odd. If it is an even number, it is added to first list, otherwise it is added to the second list. This is displayed as output on the console. Outside the function, a list is defined, and the method is called by passing this list. The output is displayed on the console.
https://www.tutorialspoint.com/python-program-to-put-even-and-odd-elements-in-a-list-into-two-different-lists
CC-MAIN-2021-25
refinedweb
225
77.06
What is the proper way to cast a treenode object into an another object that inherits from a treenode object? I have the following code but get a cast error at runtime: public class PublicationWorkflowAction : DocumentWorkflowAction { public override void Execute() { Publication pub = (Publication) Node; } ... public partial class Publication : TreeNode ... protected TreeNode Node ... Exception type: System.InvalidCastException Currently using Kentico 8.2.11. Thanks in advance. What is the reason you want to cast it? If this is to fetch data, I'd use node.GetValue("fieldname"); There are many columns on the object and I also have some methods I'd like to encapsulate in the Publication. Overall just want to do this for cleaner and easier code maintenance. Is it possible or is node.GetValue the cleanest route? @Robert take a look at this post on creating recurring events. This uses a tree node and creates a strongly typed EventDocument. Keep in mind you have to go through the effort and create the class to define the EventDocument. Please, sign in to be able to submit a new answer.
https://devnet.kentico.com/questions/casting-objects
CC-MAIN-2018-09
refinedweb
180
61.22
Created on 2015-03-27 17:09 by RusiMody, last changed 2015-07-17 14:03 by r.david.murray. This issue is now closed. Start python3.4 Do help(something) which invokes the pager Ctrl-C A backtrace results and after that the terminal is in raw mode even after exiting python [python 3.4 under debian testing with xfce4] I can't reproduce this. Maybe it is a debian bug? This looks like a duplicate of issue 21398. I can reproduce it with Python 3.4.1 (compiled myself) on Ubuntu 12.04. >>> help(str) Ctrl-C :Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.4/_sitebuiltins.py", line 103, in __call__ return pydoc.help(*args, **kwds) File "/usr/local/lib/python3.4/pydoc.py", line 1817, in __call__ self.help(request) File "/usr/local/lib/python3.4/pydoc.py", line 1867, in help else: doc(request, 'Help on %s:', output=self._output) File "/usr/local/lib/python3.4/pydoc.py", line 1603, in doc pager(render_doc(thing, title, forceload)) File "/usr/local/lib/python3.4/pydoc.py", line 1411, in pager pager(text) File "/usr/local/lib/python3.4/pydoc.py", line 1431, in <lambda> return lambda text: pipepager(text, 'less') File "/usr/local/lib/python3.4/pydoc.py", line 1453, in pipepager pipe.close() File "/usr/local/lib/python3.4/os.py", line 957, in close returncode = self._proc.wait() File "/usr/local/lib/python3.4/subprocess.py", line 1565, in wait (pid, sts) = self._try_wait(0) File "/usr/local/lib/python3.4/subprocess.py", line 1513, in _try_wait (pid, sts) = _eintr_retry_call(os.waitpid, self.pid, wait_flags) File "/usr/local/lib/python3.4/subprocess.py", line 491, in _eintr_retry_call return func(*args) KeyboardInterrupt I can reproduce on Python 3 on Ubuntu 14.10. When I hit Ctrl+C I get: >>> help(range) ... | __hash__(self, /) | Return hash(self). 
| :Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/wolf/dev/py/py3k/Lib/_sitebuiltins.py", line 103, in __call__ return pydoc.help(*args, **kwds) File "/home/wolf/dev/py/py3k/Lib/pydoc.py", line 1833, in __call__ self.help(request) File "/home/wolf/dev/py/py3k/Lib/pydoc.py", line 1886, in help else: doc(request, 'Help on %s:', output=self._output) File "/home/wolf/dev/py/py3k/Lib/pydoc.py", line 1619, in doc pager(render_doc(thing, title, forceload)) File "/home/wolf/dev/py/py3k/Lib/pydoc.py", line 1409, in pager pager(text) File "/home/wolf/dev/py/py3k/Lib/pydoc.py", line 1431, in <lambda> return lambda text: pipepager(text, 'less') File "/home/wolf/dev/py/py3k/Lib/pydoc.py", line 1455, in pipepager pipe.write(text) File "/home/wolf/dev/py/py3k/Lib/subprocess.py", line 900, in __exit__ self.wait() File "/home/wolf/dev/py/py3k/Lib/subprocess.py", line 1552, in wait (pid, sts) = self._try_wait(0) File "/home/wolf/dev/py/py3k/Lib/subprocess.py", line 1502, in _try_wait (pid, sts) = os.waitpid(self.pid, wait_flags) KeyboardInterrupt >>> If I keep pressing Enter the rest of the help gets printed. Once the pager is done, pressing enter doesn't go on a new line and the prompts (>>>) are printed one after the other on the same line. The same happens on my shell prompt once I exit from the interpreter. `reset` fixes it. On Python 2 Ctrl+C does nothing. I do see the anomalous behavior inside python, but on my gentoo system when I exit python the terminal is fine. The attached patch fixes the issue for me. Patch WFM too. The print of KeyboardInterrupt whould be dropped. less itself does nothing if you press ctl-c, and the pager is used when pydoc is called from the shell command line, and printing KeyboardInterrupt there just looks wrong. Updated patch. I suspect you also need ignore signals while piping data to the child process. 
Similar to how the POSIX system() call ignores SIGINT and SIGQUIT soon after spawning the child, until after the child has exited. Try with a large help text on Linux, like import _pyio help(_pyio) Also, Python 2 still gets interrupted for me, it is just that it doesn’t seem to happen immediately if it is up to the pipe.close() call. SIGINT *is* keyboard interrupt. Since less at least seems to ignore SIGQUIT I suppose also ignoring that would be reasonable, since the user (should) by analogy expect that behavior while the pager is active. Note, however, that the signals we are ignoring are in the parent process, not the child as is the case for system. So one can argue that letting the python process die when SIGQUIT is received would also be reasonable, and arguably is the less surprising option. New changeset 77c04e949b4b by R David Murray in branch '3.4': #23792: Ignore KeyboardInterrupt when the pydoc pager is active. New changeset fe0c830b43bb by R David Murray in branch 'default': Merge: #23792: Ignore KeyboardInterrupt when the pydoc pager is active. I've committed the fix. If someone wants to argue in favor of also handling SIGQUIT, they can open a new issue. I don’t think SIGQUIT handling is a big problem. But even with the new change, it is still easy to screw up the terminal in many cases, so I wouldn’t say this is fixed yet. Steps for Python 3 in a small 80 × 25 terminal on Linux: * import _pyio; help(_pyio) * Hit Ctrl-C Steps for Python 2: * import _pyio; help(_pyio) * Hit Ctrl-C * Hit Space ten times to scroll down. Alternatively, hit Ctrl-C a second time. I am posting a quick patch which I think should fix this in Python 3 by deferring the traceback until after the child has finished. Another method is using the signal module like <>, but that’s probably too platform-specific for the pydoc module. Here is a version that keeps things clean by not diplaying the traceback. 
The ctl-c does have an effect, but not a visible one unless one pays careful attention :) I think your patch should be fine for all practical cases I can think of. New changeset 7a5f30babc72 by R David Murray in branch '3.4': #23792: also catch interrupt around pipe.write. New changeset 536c4f4acae1 by R David Murray in branch 'default': Merge: #23792: also catch interrupt around pipe.write. Yeah, someone could theoretically manage to hit ctl-c between the time the process is started and the call to pipe.write, or between it and the call to wait, but I don't think those very-low-probability events are worth worrying about. Changing the title in case anyone else is looking for this bug. This is not raw mode. It's just that echo is turned off. It is sufficient to type (invisibly, of course): stty echo to resume normal use of the terminal. Well, not exactly. While the title was inaccurate, the real problem was the management of the subprocess, not what mode the terminal was in.
https://bugs.python.org/issue23792
How to Create a Covariance Matrix in Python

Use the following steps to create a covariance matrix in Python.

Step 1: Create the dataset.

First, we'll create a dataset that contains the test scores of 10 different students for three subjects: math, science, and history.

    import numpy as np

    math = [84, 82, 81, 89, 73, 94, 92, 70, 88, 95]
    science = [85, 82, 72, 77, 75, 89, 95, 84, 77, 94]
    history = [97, 94, 93, 95, 88, 82, 78, 84, 69, 78]

    data = np.array([math, science, history])

Step 2: Create the covariance matrix.

Next, we'll create the covariance matrix for this dataset using the numpy function cov(), specifying that bias = True so that we calculate the population covariance matrix.

    np.cov(data, bias=True)

    array([[ 64.96,  33.2 , -24.44],
           [ 33.2 ,  56.4 , -24.1 ],
           [-24.44, -24.1 ,  75.56]])

Step 3: Interpret the covariance matrix.

The values along the diagonal of the matrix are the variances of each subject (e.g. the variance for math is 64.96). The other values are the covariances between pairs of subjects; for example, the covariance between math and history is negative (-24.44), which indicates that students who score high on math tend to score low on history. Conversely, students who score low on math tend to score high on history.

Step 4: Visualize the covariance matrix (optional).

You can visualize the covariance matrix by using the heatmap() function from the seaborn package:

    import seaborn as sns
    import matplotlib.pyplot as plt

    cov = np.cov(data, bias=True)
    labs = ['math', 'science', 'history']

    sns.heatmap(cov, annot=True, fmt='g', xticklabels=labs, yticklabels=labs)
    plt.show()

You can also change the colormap by specifying the cmap argument:

    sns.heatmap(cov, annot=True, fmt='g', xticklabels=labs, yticklabels=labs, cmap='YlGnBu')
    plt.show()

For more details on how to style this heatmap, refer to the seaborn documentation.
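As a sanity check on np.cov(data, bias=True), the population variance and covariance can be computed by hand in plain Python, using the same scores as above:

```python
math = [84, 82, 81, 89, 73, 94, 92, 70, 88, 95]
history = [97, 94, 93, 95, 88, 82, 78, 84, 69, 78]

def pop_cov(x, y):
    # population covariance: mean of the products of deviations from the mean
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

var_math = pop_cov(math, math)             # 64.96, the first diagonal entry
cov_math_history = pop_cov(math, history)  # -24.44, the corner entry
```

A variance is just the covariance of a variable with itself, which is why the same helper reproduces both the diagonal and off-diagonal entries of the matrix.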
https://www.statology.org/covariance-matrix-python/
- Click on Next
- Select PIC 16F877A and click Next
- Select the Hi-Tech C compiler as shown above and click Next
- Select the project path, give a file name and click Next
- Add required files to the project (not required here) and click Next
- Click Finish to complete project creation

The next step is writing the program. All the information about writing programs — including the commands, operations and the microcontroller registers — is available in the datasheet and the Hi-Tech C Toolsuite guide. But I will also try to make you understand the first steps, so that the datasheet and guide may prove useful. You can start making your own programs at the end of this tutorial.

Now let us view a program for the 16F877A where 8 LEDs are connected to PORT B (the 8 pins of port B, pins 33-40). This program blinks the LEDs, which means the LEDs remain on for a second and off for another second.

Hi-Tech C Code

    #include <htc.h>
    #define _XTAL_FREQ 8000000

    void main()
    {
        TRISB=0X00;
        PORTB=0X00;
        while(1)
        {
            PORTB=0XFF;
            __delay_ms(1000);
            PORTB=0X00;
            __delay_ms(1000);
        }
    }

This is the core style of writing a microcontroller program. As you can see, the style is completely similar to that of normal C programs, just with some additional keywords. Remember that all the syntax and the mathematical and logical operations supported by the stdio and conio libraries are accepted here in the htc library, with some additional ones also.

- In the first line, we start the program by including the library. This step is nothing new.
- Now we need to define the frequency of the oscillator used in the system. This is the style of defining the oscillator frequency. The value 8000000 means its frequency is 8MHz.
- TRISB is the data direction register of port B: a bit value of 0 makes the corresponding pin an output and 1 makes it an input, so TRISB=0x00 configures all eight port B pins as outputs.
- The main body of the program has to repeat over and over, but the initializations do not need to be repeated. That is why the main program is written inside the while loop.
- PORTB is also a register, like TRISB. This register passes the value out of port B. This means PORTB=0xff will give a high output from each of the eight lines of port B.
- This line is written to have a delay of 1000ms. So logic high will be present on the port B pins for one second.
- This line will make the output from port B go down to 0V, i.e. logic '0'. Note that until the value in PORTB (or any other port register) is changed, the output from the corresponding port will not change either. So the output will drop from 5V (logic 1) to 0V (logic 0) only when this line is executed.
- Again a delay of 1000ms.
- End

So you can now understand why the LEDs connected to port B will blink. See how easy it is to do anything using a microcontroller!
- This line is written to produce a delay of 1000ms, so logic high will be present on the port B pins for one second.
- This line brings the output from port B down to 0V, i.e. logic '0'. Note that until the value in PORTB (or any other port register) is changed, the output of the corresponding port will not change either. So the output drops from 5V (logic 1) to 0V (logic 0) only when this line is executed.
- Again a delay of 1000ms.
- End

So you can now understand why the LEDs connected to port B will blink: for one second, 5V (logic 1) comes from port B and for the next second, 0V (logic 0). See how easy it is to do anything using a microcontroller!

- Go to File >> New, write the program and save it as a C file (*.c).
- Add this C file to the Source group: right click on Source Files >> Add Files.
- Set the configuration bits using the Configuration Bits tool: click on Configure >> Configuration Bits.
- Build the project to generate the hex file: click on Project >> Build.

This was just a basic demonstration. In the same manner, there are registers for numerous operations in the microcontroller. You set the relevant register, provide a command, and the operation is carried out. The PIC 16F877A also has general purpose registers and RAM, so you can use plenty of variables and constants. It supports floating point arithmetic and ASCII values too. Programming in Hi-Tech C is just like programming in any other C compiler, with some additional commands.

Circuit Diagram

An 8MHz crystal is used to provide the required clock for the PIC 16F877A microcontroller, and 22pF capacitors are used to stabilize the oscillation of the crystal. The first pin of the microcontroller, MCLR (which stands for Master Clear), is the Reset pin; it is tied to Vdd since it is an active-low input. LEDs are connected to port B via 470Ω resistors to limit the current through them. You can simulate the circuit using Proteus. If you haven't yet started with Proteus, try this tutorial.
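Going back to the register values used in the program above: TRISB and PORTB are 8-bit registers, so a value like 0x00 or 0xFF is really a pattern of eight bits, one per pin. The following sketch is standard desktop C (not Hi-Tech C, so it will not run on the PIC as-is) showing the bit arithmetic you would use to change a single pin without disturbing the others; the helper names are our own, not part of the compiler.

```c
#include <assert.h>
#include <stdio.h>

typedef unsigned char reg8;  /* an 8-bit register image, like PORTB */

/* Drive a single pin high: OR in a mask with only that bit set. */
reg8 set_pin(reg8 reg, int pin)   { return (reg8)(reg | (1u << pin)); }

/* Drive a single pin low: AND with the inverted mask. */
reg8 clear_pin(reg8 reg, int pin) { return (reg8)(reg & ~(1u << pin)); }

/* Read back one bit of the register. */
int pin_is_high(reg8 reg, int pin) { return (reg >> pin) & 1u; }
```

For example, starting from PORTB = 0x00, set_pin(0x00, 0) yields 0x01 (only RB0 high), and clear_pin(0xFF, 7) yields 0x7F (only RB7 low). In Hi-Tech C the same effect is usually achieved by writing to the individual pin names such as RB0.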
Video

Here is also a small video which may help you.

Exporting the Hex File

The hex file generated above doesn't contain the configuration bits; we should use the Export option in MPLAB to export a hex file with configuration bits.

- Click on File >> Export
- Click OK
- Give the file name and click Save
- Now burn this exported hex file to the microcontroller

Download Here

You can download the Hi-Tech C files and Proteus files here…

Comments

You have got one thing wrong. To initialize pins as output, TRIS registers need to be set to 0, not one.

need a lil help, i have 16F876A, with the folowing code

start
    banksel TRISB
    banksel TRISC
    movlw 0XF0
    movwf TRISB
    movwf TRISC
    movlw 0XF0
    banksel PORTB
    banksel PORTC
main
    bcf PORTB,0
    bcf PORTC,3
    ; nop
    ; nop
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    bsf PORTB,0
    bsf PORTC,3
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    bcf PORTC,0
    bcf PORTC,1
    bcf PORTC,2
    bcf PORTC,4
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    call delay8b
    goto main
delay8b
    movlw 0xFF
    movwf TEMP1
d13
    movlw 0xFF
    movwf TEMP2
    decfsz TEMP1
    goto d12
    goto d1r ; RET
d12
    decfsz TEMP2
    goto d12
    goto d13
d1r
    return
    ; remaining code goes here
    goto main
    END ; directive 'end of program'

but I can't make my rgb to function, can any1 help me ?

Its working

We can use delay as the separate function

It works in hardware also

hi, plz help me with this. cofig as same it given in above exmple.

#include

#define _XTAL_FREQ 4000000

void main()
{
    TRISB = 0X00;
    PORTB = 0X00;
    while(1)
    {
        PORTB = 0XFF;
        _delay_ms(100);
        PORTB = 0X00;
        _delay_ms(100);
    }
}

You need a pic microcontroller programmer (a hardware unit) for that.

How to program the PIC microchip? where will i program it and where will i plug it? new student here.

What was the error ? Make sure that you choosen the correct compiler.

Sir, I am a student. I recently start to program of PIC16f877a. for blink led. I installed 1.
MPLAB IDE v8.92, 2. picc-9_82 win, 3. picclite-setupnew, 4. univesal toolsuite -1.37 on my computer. and I learn about c code, like decimal, binary, hexadecimal and octal language. I showed some example from web site. I try to built blinkled.c file. but the feedback will built failed. Pls anyone help me to built binkled.c program. wait for feedback. thanks you……

Try changing your microcontroller.

Help my Proteus 8 shuts down each time I run a simulation, any assistance pls.

Its a cracked version.

when i press program the target device to my microcontroller it shows this error. how do i resolve this???

Please use our forums ( ) for asking doubts outside above tutorial's scope.

Plz tell me Sir. I want correct shift c program file….. 16f628a

void main()
{
    CMCON = 0x07;  // To turn off comparators
    ADCON1 = 0x06; // Turne off adc
}

Hello please use our forums : for posting doubts not realated to above article.

sir i need to program code for rgb led for pic12f625

Please post your program. Use our forums : for asking doubts not related to above topic.

Try like this :

void main()
{
    TRISGPIO = 0X00;
    GPIO = 0X00;
    while(1)
    {
        GPIO = 0XFF;
        __delay_ms(1000);
        GPIO = 0X00;
        __delay_ms(1000);
    }
}

set the trisb=0xff

Error [800] C:Program FilesMicrochiptprogramf.c; 483. undefined symbol "entry__ms" what does it means in pic10f206 i m new sir can u help me to plz write the program of blinking using delay in c for pic12f206

Try re-pasting the above code..

hi when i compile this code this error occurs can you please help? mplab v8.33 i am using

Error [499] ; 0. undefined symbol: __delay_ms (test del.obj)

Please give me more details about the error. If it is not related to above topic, please use our forums ( ).

how to solve Error 499 undefined symbol?

Dear sir Plz guied me I need a simple program for LED1 to LED8.
task_1: when S1 unpressed

{LED=11111110;} (HOLD IT for 2sec)
{LED=11111101;} (HOLD IT for 2sec)
{LED=11111011;} (HOLD IT for 2sec)
{LED=11110111;} (HOLD IT for 2sec)
{LED=11101111;} (HOLD IT for 2sec)

Repeat this function every time till S1 unpressed.

task_2:

but when S1 pressed 1 time within 1sec {LED=11111110;} (HOLD IT for 20sec) & after 20 sec exit from task_2 & repeat task_1
but when S1 pressed 2 times within 1sec {LED=11111101;} (HOLD IT for 20sec) & after 20 sec exit from task_2 & repeat task_1
but when S1 pressed 3 times within 1sec {LED=11111011;} (HOLD IT for 20sec) & after 20 sec exit from task_2 & repeat task_1
but when S1 pressed 4 times within 1sec {LED=11101111;} (HOLD IT for 20sec) & after 20 sec exit from task_2 & repeat task_1

Is it in the above program ?

Read the datasheet of PIC 12F508 carefully.. there is no TRIS or PORT register; instead it has TRISGPIO and GPIO registers.. Try using GP0, GP1.. instead of RA0, RA1..

delay exceeds maximum limit of 197120 cycles. how to clear this error?

you must define that pin you are used..

0b indicates binary, 0x indicates hex.. Read this article :

Can you please give some detailed explanation for these words 0b00000000, 0X00, 0XFF ?? or post a tutorial on "use of hexadecimal, decimal, binary values in microcontroller programming".

You need to set configuration bits… You can set it using _CONFIG or using the Configuration Bits Tool..

_delay_ms(100) will provide a 100ms delay.

what is that command _CONFIG(0X3F39); doing?? what are configuration bits ? why do we need them? and can u tell me the declaration for _delay_ms(100);

Vry useful… bt im using mikroC……>> im 2 a beginnr @ microcontroller programming..

Hi Tech is no longer available.

Sorry.. there is no PORTC and PORTB too.. The pins are labeled as GP0 to GP5. please refer the datasheet.. There are two registers.. GPIO and TRISGPIO..
#include
#include

#define _XTAL_FREQ 4000000

__CONFIG(0x0FFD);

void main()
{
    TRISC = 0;
    TRISC1 = 0;
    RC0 = 0;
    RC1 = 0;
    while(1)
    {
        RC0 = 1;
        RC1 = 0;
        __delay_ms(500);
        RC0 = 0;
        RC1 = 1;
    }

ERRORS:
undefined identifier "TRISC"
undefined identifier "RC0"
undefined identifier "RC1"

There is no port named A.. only C and B

Instead what? please post the code..

Please refer the datasheet of PIC 12F508… there is no PORTA

I'm trying to port this code to pic 12F508. this is my code:

#include

#define _XTAL_FREQ 4000000

__CONFIG(0x0FFD);

void main()
{
    TRISA0 = 0;
    TRISA1 = 0;
    RA0 = 0;
    RA1 = 0;
    while(1)
    {
        RA0 = 1;
        RA1 = 0;
        __delay_ms(500);
        RA0 = 0;
        RA1 = 1;
    }

but mplab triggers this errors: undefined identifier "TRISA0", undefined identifier "TRISA1", undefined identifier "RA0", undefined identifier "RA1"

Thanks for the feedback… sorry for the late replay.. I was busy for 2 days.. I will update the above article with configuration bits and will write a new post on configuration bits..

yaahoooooooooooooo it's working.. changed to __CONFIG(0x3F39); //for 4mhz (xt oscillator)

when i add this line : __CONFIG(0xFFBA); the LED just light up (not blinking). programmed with pickit2.

still not working. all components, circuit, hex are ok. it works well in proteus. i can read, write, detect, verify the pic using my pickit and jdm programmers.

Nop….. I think so… What is your exact application??

is it possible to do multi-threading in pic? (Hi tech c way)

It should work… try changing the pin of microcontroller.. verify that the LED is working.. by connecting it to 5V instead of.. to pic.. If above solutions not working.. try replacing your microcontroller or programmer..

??????????????????????????/

changed power supply to 5v smps (5.08v). still not working. yes it works in proteus, not in circuit.

5V,…………

Try searching torrent for cracked versions..

yes, it works in proteus.

proteus simulation failed. error: mixed model pic16.dll failed to authorize. Missing or invalid customer key.
it’s a demo version of proteus. do you have the full version link?? downloading proteus. what is the minimum working voltage for 16f877a? Does it works in proteus?? Replaced crystal, added 100mf capasitor. but not working. no blink/ even not turning on led. my power supply is 12v adapter with 7805 regulator. output is 4.95v Verify the crystal’s frequency.. value of capacitors… It will work.. without capacitors………… If the value of capacitor and crystal are correct…. try replacing the crystal……. with a new one.. You should also use a good power supply… try adding a 100mf capacitor across the supply Also… increase the delay to 1000 ms… otherwise .. you might not be able to see .. blinking led..
In addition to the basic histogram, this demo shows a few optional features:

- Setting the number of data bins
- The normed flag, which normalizes bin heights so that the integral of the histogram is 1. The resulting histogram is an approximation of the probability density function.
- Setting the face color of the bars
- Setting the opacity (alpha value).

Selecting different bin counts and sizes can significantly affect the shape of a histogram. The Astropy docs have a great section on how to select these parameters:

import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt

np.random.seed(19680801)

# example data
mu = 100  # mean of distribution
sigma = 15  # standard deviation of distribution
x = mu + sigma * np.random.randn(437)

num_bins = 50

fig, ax = plt.subplots()

# the histogram of the data
n, bins, patches = ax.hist(x, num_bins, normed=1)

# add a 'best fit' line
y = mlab.normpdf(bins, mu, sigma)
ax.plot(bins, y, '--')
ax.set_xlabel('Smarts')
ax.set_ylabel('Probability density')
ax.set_title(r'Histogram of IQ: $\mu=100$, $\sigma=15$')

# Tweak spacing to prevent clipping of ylabel
fig.tight_layout()
plt.show()

Total running time of the script: ( 0 minutes 0.055 seconds)

Gallery generated by Sphinx-Gallery
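To experiment with bin selection without plotting anything, numpy's histogram function (which `hist` uses under the hood) accepts the same `bins` argument, including string rules such as `'auto'`. The dataset below is illustrative, generated the same way as in the demo:

```python
import numpy as np

rng = np.random.RandomState(19680801)
data = 100 + 15 * rng.randn(437)  # same kind of data as the demo above

# Fixed bin count, as in the demo.
counts_fixed, edges_fixed = np.histogram(data, bins=50)

# Let numpy choose the bin count with the 'auto' rule (the maximum of
# the Sturges and Freedman-Diaconis estimates).
counts_auto, edges_auto = np.histogram(data, bins='auto')

print(len(counts_fixed), 'fixed bins,', len(counts_auto), 'automatic bins')
```

Comparing the two bin counts on your own data is a quick way to see whether a hand-picked value like 50 over- or under-smooths the distribution.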
In this article we are going to learn how to build a simple "Universal JavaScript" application (a.k.a. "Isomorphic") using React, React Router and Express. Warm up your fingers and get your keyboard ready... it's going to be fun!

Table of Contents

- About the Author
- About Universal JavaScript
- What we are going to build
- Folder structure
- Project initialization
- The HTML boilerplate
- The data module
- React components
- The application entry point
- Setting up Webpack and Babel
- Playing with the single page app
- Routing and rendering on the server with Express
- Running the complete app
- Wrapping up

About the Author

Hi, I am Luciano and I am the co-author of Node.js Design Patterns Second Edition (Packt), a book that will take you on a journey across various ideas and components, and the challenges you would commonly encounter while designing and developing software using the Node.js platform. In this book you will discover the "Node.js way" of dealing with design and coding decisions. It also features an entire chapter dedicated to Universal JavaScript. If this is the first time you have read this term, keep reading, you are going to love this article!

About Universal JavaScript

One of the advantages of having Node.js as the runtime for the backend of a web application is that we have to deal with only JavaScript as a single language across the web stack. With this capability it is totally legitimate to want to share some code between the frontend and the backend, reducing the duplication between the browser and the server to the bare minimum.

The art of creating JavaScript code that is "environment agnostic" is today known as "Universal JavaScript", a term that (after a very long debate) seems to have won the war against the original name, "Isomorphic JavaScript".

The main concerns that we generally have to face when building a Universal JavaScript application are:

- Module sharing: how to use Node.js modules also in the browser.
- Universal rendering: how to render the views of the application from the server (during the initialization of the app) and then keep rendering the other views directly in the browser (avoiding a full page refresh) while the user keeps navigating across the different sections.
- Universal routing: how to recognize the view associated with the current route from both the server and the browser.
- Universal data retrieval: how to access data (typically through APIs) from both the server and the browser.

Universal JavaScript is still a pretty fresh field, and no framework or approach has yet emerged as a de facto standard with ready-made solutions for all these problems. There is, though, already a myriad of stable and well-known libraries and tools that can be combined to successfully build a Universal JavaScript web application.

In this article we are going to use React (with its companion library React Router) and Express to build a simple application focused on showcasing universal rendering and routing. We will also use Babel to take advantage of the lovely EcmaScript 2015 syntax and Webpack to build our code for the browser.

What we are going to build

I am a Judo fan, so the app we are going to build today is "Judo Heroes", a web app that showcases some of the most famous Judo athletes and their collections of medals from the Olympic Games and other prestigious international tournaments.

This app has essentially two views: an index page where you can select the athletes, and an athlete page that showcases their medals and some other details.

To understand better how it works you can have a look at the demo app and navigate across the views.

What's the matter with it anyway, you are probably asking yourself! Yes, it looks like a very simple app, with some data and a couple of views...
Well, there's something very peculiar happening behind the scenes that will hardly be noticed by a regular user but makes development super interesting: this app is using universal rendering and routing!

We can prove this using the developer tools of the browser. When we initially load a page in the browser (any page, not necessarily the home page), the server provides the full HTML code of the view and the browser only needs to download the linked resources (images, stylesheets and scripts). Then, from there, when we switch to another view, everything happens only in the browser: no HTML code is loaded from the server and only the new resources (3 new images in the following example) are loaded by the browser.

We can do another quick test (if you are still not convinced) from the command line using curl:

curl -sS ""

You will see the full HTML page (including the code rendered by React) being generated directly by the server.

I bet you are now convinced enough and eager to get your hands dirty, so let's start coding!

Folder structure

At the end of this tutorial our project structure will look like the following tree:

├── package.json
├── webpack.config.js
├── src
│   ├── app-client.js
│   ├── routes.js
│   ├── server.js
│   ├── components
│   │   ├── AppRoutes.js
│   │   ├── AthletePage.js
│   │   ├── AthletePreview.js
│   │   ├── AthletesMenu.js
│   │   ├── Flag.js
│   │   ├── IndexPage.js
│   │   ├── Layout.js
│   │   ├── Medal.js
│   │   └── NotFoundPage.js
│   ├── data
│   │   └── athletes.js
│   ├── static
│   │   ├── index.html
│   │   ├── css
│   │   ├── favicon.ico
│   │   ├── img
│   │   └── js
│   └── views
│       └── index.ejs

At the main level we have our package.json (to describe the project and define the dependencies) and webpack.config.js (the Webpack configuration file). All the rest of the code is stored inside the src folder, which contains the main files needed for routing (routes.js) and rendering (app-client.js and server.js).
It also contains 4 subfolders:

- components: contains all the React components
- data: contains our data "module"
- static: contains all the static files needed for our application (css, js, images, etc.) and an index.html that we will use initially to test our app
- views: contains the template that we will use to render the HTML content from the server

Project initialization

The only prerequisite here is to have Node.js (version 6 is preferred) and NPM installed on your machine.

Let's create a new folder called judo-heroes somewhere on the disk and point the terminal there, then launch:

npm init

This will bootstrap our Node.js project, allowing us to add all the needed dependencies. We will need to have babel, ejs, express, react and react-router installed. To do so you can run the following command:

npm install --save babel-cli@6.11.x babel-core@6.13.x \
babel-preset-es2015@6.13.x babel-preset-react@6.11.x ejs@2.5.x \
express@4.14.x react@15.3.x react-dom@15.3.x react-router@2.6.x

We will also need to install Webpack (with its Babel loader extension) and http-server as development dependencies:

npm install --save-dev webpack@1.13.x babel-loader@6.2.x http-server@0.9.x

The HTML boilerplate

From now on, I am assuming you have a basic knowledge of React, JSX and its component based approach. If not, you can read an excellent article on React components or have a look at all the other React related articles on Scotch.io.

Initially we will focus only on creating a functional "Single Page Application" (with only client side rendering). Later we will see how to improve it by adding universal rendering and routing.

So the first thing we need is an HTML boilerplate to "host" our app, which we will store in src/static/index.html:

<div id="main"></div>
<script src="/js/bundle.js"></script>
</body>
</html>

Nothing special here. Only two main things to underline:

- We are using a simple "hand-made" stylesheet that you might want to download and save under src/static/css/.
- We also reference a /js/bundle.js file that contains all our JavaScript frontend code. We will see later in the article how to generate it using Webpack and Babel, so you don't need to worry about it now.

The data module

In a real world application we would probably use an API to obtain the data necessary for our application. In this case we have a very small dataset with only 5 athletes and some related information, so we can keep things simple and embed the data in a JavaScript module. This way we can easily import the data into any other component or module synchronously, avoiding the added complexity and the pitfalls of managing asynchronous APIs in a Universal JavaScript project, which is not the goal of this article. Let's see what the module looks like:

// src/data/athletes.js
const athletes = [
  {
    'id': 'driulis-gonzalez',
    'name': 'Driulis González',
    'country': 'cu',
    'birth': '1973',
    'image': 'driulis-gonzalez.jpg',
    'cover': 'driulis-gonzalez-cover.jpg',
    'link': 'ález',
    'medals': [
      { 'year': '1992', 'type': 'B', 'city': 'Barcelona', 'event': 'Olympic Games', 'category': '-57kg' },
      { 'year': '1993', 'type': 'B', 'city': 'Hamilton', 'event': 'World Championships', 'category': '-57kg' },
      { 'year': '1995', 'type': 'G', 'city': 'Chiba', 'event': 'World Championships', 'category': '-57kg' },
      { 'year': '1995', 'type': 'G', 'city': 'Mar del Plata', 'event': 'Pan American Games', 'category': '-57kg' },
      { 'year': '1996', 'type': 'G', 'city': 'Atlanta', 'event': 'Olympic Games', 'category': '-57kg' },
      { 'year': '1997', 'type': 'S', 'city': 'Osaka', 'event': 'World Championships', 'category': '-57kg' },
      { 'year': '1999', 'type': 'G', 'city': 'Birmingham', 'event': 'World Championships', 'category': '-57kg' },
      { 'year': '2000', 'type': 'S', 'city': 'Sydney', 'event': 'Olympic Games', 'category': '-57kg' },
      { 'year': '2003', 'type': 'G', 'city': 'S Domingo', 'event': 'Pan American Games', 'category': '-63kg' },
      { 'year': '2003', 'type': 'S', 'city': 'Osaka', 'event':
'World Championships', 'category': '-63kg' },
      { 'year': '2004', 'type': 'B', 'city': 'Athens', 'event': 'Olympic Games', 'category': '-63kg' },
      { 'year': '2005', 'type': 'B', 'city': 'Cairo', 'event': 'World Championships', 'category': '-63kg' },
      { 'year': '2006', 'type': 'G', 'city': 'Cartagena', 'event': 'Central American and Caribbean Games', 'category': '-63kg' },
      { 'year': '2006', 'type': 'G', 'city': 'Cartagena', 'event': 'Central American and Caribbean Games', 'category': 'Team' },
      { 'year': '2007', 'type': 'G', 'city': 'Rio de Janeiro', 'event': 'Pan American Games', 'category': '-63kg' },
      { 'year': '2007', 'type': 'G', 'city': 'Rio de Janeiro', 'event': 'World Championships', 'category': '-63kg' },
    ],
  },
  {
    // ...
  }
];

export default athletes;

For brevity the file has been truncated here, and we are displaying just the data of one of the five athletes. If you want to see the full code, check it out in the official repository. You can download the file into src/data/athletes.js. Also notice that I am not showing 'use strict'; here, though it should be present in every JavaScript file we are going to create in the course of this tutorial.

As you can see, the file contains an array of objects where every object represents an athlete with some generic information like id, name and country, plus another array of objects representing the medals won by that athlete.

You might also want to grab all the image files from the repository and copy them under src/static/img/.

React components

We are going to organize the views of our application into several components:

- A set of small UI components used to build the views: AthletePreview, Flag, Medal and AthletesMenu.
- A Layout component that is used as a master component to define the generic appearance of the application (header, content and footer blocks).
- Two main components that represent the main sections: IndexPage and AthletePage.
- An extra "page" component that we will use as the 404 page: NotFoundPage
- The AppRoutes component that uses React Router to manage the routing between views.

Flag component

The first component that we are going to build allows us to display a nice flag and, optionally, the name of the country that it represents:

// src/components/Flag.js
import React from 'react';

const data = {
  'cu': { 'name': 'Cuba', 'icon': 'flag-cu.png' },
  'fr': { 'name': 'France', 'icon': 'flag-fr.png' },
  'jp': { 'name': 'Japan', 'icon': 'flag-jp.png' },
  'nl': { 'name': 'Netherlands', 'icon': 'flag-nl.png' },
  'uz': { 'name': 'Uzbekistan', 'icon': 'flag-uz.png' }
};

export default class Flag extends React.Component {
  render() {
    const name = data[this.props.code].name;
    const icon = data[this.props.code].icon;
    return (
      <span className="flag">
        <img className="icon" title={name} src={`/img/${icon}`}/>
        {this.props.showName && <span className="name"> {name}</span>}
      </span>
    );
  }
}

As you might have noticed, this component uses a small map of countries as its data source. Again, this makes sense only because we need a very small data set which, for the sake of this demo app, is not going to change. In a real application with a larger and more complex data set you might want to use an API or a different mechanism to connect the data to the component.

In this component it's also important to notice that we are using two different props, code and showName. The first one is mandatory and must be passed to the component to select which flag will be shown among the ones supported. The showName prop is instead optional: if set to a truthy value, the component will also display the name of the country just after the flag.

If you want to build a more refined reusable component for a real world app you might also want to add props validation and defaults, but we are going to skip this step here as it is not the goal of the app we want to build.
Medal component

The Medal component is similar to the Flag component. It receives some props that represent the data related to a medal: the type (G for gold, S for silver and B for bronze), the year when it was won, the name of the event, the city where the tournament was hosted and the category in which the athlete who won the medal competed.

// src/components/Medal.js
import React from 'react';

const typeMap = {
  'G': 'Gold',
  'S': 'Silver',
  'B': 'Bronze'
};

export default class Medal extends React.Component {
  render() {
    return (
      <li className="medal">
        <span className={`symbol symbol-${this.props.type}`} title={typeMap[this.props.type]}>{this.props.type}</span>
        <span className="year">{this.props.year}</span>
        <span className="city"> {this.props.city}</span>
        <span className="event"> ({this.props.event})</span>
        <span className="category"> {this.props.category}</span>
      </li>
    );
  }
}

As for the previous component, here we also use a small object to map the codes of the medal types to descriptive names.

Athletes Menu component

In this section we are going to build the menu that is displayed on top of every athlete page to allow the user to easily switch to another athlete without going back to the index:

// src/components/AthletesMenu.js
import React from 'react';
import { Link } from 'react-router';

export default class AthletesMenu extends React.Component {
  render() {
    return (
      <nav className="athletes-menu">
        {this.props.athletes.map(menuAthlete => {
          return <Link key={menuAthlete.id}
            to={`/athlete/${menuAthlete.id}`}
            activeClassName="active">
            {menuAthlete.name}
          </Link>;
        })}
      </nav>
    );
  }
}

The component is very simple, but there are some key points to underline:

- We are expecting the data to be passed into the component through an athletes prop. So from the outside, when we use the component in our layout, we will need to propagate the list of athletes available in the app directly into the component.
- We use the map method to iterate over all the athletes and generate a Link for every one of them.
- Link is a special component provided by React Router to create links between views.
- Finally, we use the activeClassName prop to apply the class active when the current route matches the path of the link.

Athlete Preview component

The AthletePreview component is used in the index to display the pictures and the names of the athletes. Let's see its code:

// src/components/AthletePreview.js
import React from 'react';
import { Link } from 'react-router';

export default class AthletePreview extends React.Component {
  render() {
    return (
      <Link to={`/athlete/${this.props.id}`}>
        <div className="athlete-preview">
          <img src={`img/${this.props.image}`}/>
          <h2 className="name">{this.props.name}</h2>
          <span className="medals-count"><img src="/img/medal.png"/> {this.props.medals.length}</span>
        </div>
      </Link>
    );
  }
}

The code is quite simple. We expect to receive a number of props that describe the attributes of the athlete we want to display, like id, image, name and medals. Note that again we are using the Link component to create a link to the athlete page.

Layout component

Now that we have built all our basic components, let's move on to creating those that give the visual structure to the application. The first one is the Layout component, whose only purpose is to provide a display template for the whole application, defining a header, a space for the main content and a footer:

// src/components/Layout.js
import React from 'react';
import { Link } from 'react-router';

export default class Layout extends React.Component {
  render() {
    return (
      <div className="app-container">
        <header>
          <Link to="/">
            <img className="logo" src="/img/logo-judo-heroes.png"/>
          </Link>
        </header>
        <div className="app-content">{this.props.children}</div>
        <footer>
          <p>
            This is a demo app to showcase universal rendering and routing with <strong>React</strong> and <strong>Express</strong>.
          </p>
        </footer>
      </div>
    );
  }
}

The component is pretty simple and we should be able to understand how it works just by looking at the code.
There is, though, a very interesting prop that we are using here: the children prop. This is a special property that React provides to every component and it allows components to be nested one inside another. We are going to see in the routing section how React Router makes sure to nest the components inside the Layout component.

Index Page component

This component constitutes the full index page and it contains some of the components we previously defined:

// src/components/IndexPage.js
import React from 'react';
import AthletePreview from './AthletePreview';
import athletes from '../data/athletes';

export default class IndexPage extends React.Component {
  render() {
    return (
      <div className="home">
        <div className="athletes-selector">
          {athletes.map(athleteData =>
            <AthletePreview key={athleteData.id} {...athleteData} />)}
        </div>
      </div>
    );
  }
}

Note that in this component we are using the AthletePreview component we created previously. Basically we are iterating over all the available athletes from our data module and creating an AthletePreview component for each of them. The AthletePreview component is data agnostic, so we need to pass all the information about the current athlete as props using the JSX spread operator ({...object}).
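If the JSX spread is new to you, here is a plain JavaScript sketch (outside of React, on a shortened sample record of our own) of what it does: every own enumerable property of the object is passed as an individual prop.

```javascript
// Sample athlete record (a shortened version of an entry in our data module).
const athleteData = { id: 'driulis-gonzalez', name: 'Driulis González', country: 'cu' };

// <AthletePreview {...athleteData}/> is roughly like building this props object:
// a new object that copies every own enumerable property of athleteData.
const props = { ...athleteData };

console.log(props.name);
```

The spread creates a new object, so the component receives its own copy of the properties rather than a reference shared with the data module entry.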
Athlete Page component

In a similar fashion we can define the AthletePage component:

// src/components/AthletePage.js
import React from 'react';
import { Link } from 'react-router';
import NotFoundPage from './NotFoundPage';
import AthletesMenu from './AthletesMenu';
import Medal from './Medal';
import Flag from './Flag';
import athletes from '../data/athletes';

export default class AthletePage extends React.Component {
  render() {
    const id = this.props.params.id;
    const athlete = athletes.filter((athlete) => athlete.id === id)[0];
    if (!athlete) {
      return <NotFoundPage/>;
    }

    const headerStyle = { backgroundImage: `url(/img/${athlete.cover})` };
    return (
      <div className="athlete-full">
        <AthletesMenu athletes={athletes}/>
        <div className="athlete">
          <header style={headerStyle}/>
          <div className="picture-container">
            <img src={`/img/${athlete.image}`}/>
            <h2 className="name">{athlete.name}</h2>
          </div>
          <section className="description">
            Olympic medalist from <strong><Flag code={athlete.country} showName="true"/></strong>,
            born in {athlete.birth} (Find out more on <a href={athlete.link}>Wikipedia</a>).
          </section>
          <section className="medals">
            <p>Winner of <strong>{athlete.medals.length}</strong> medals:</p>
            <ul>{ athlete.medals.map((medal, i) => <Medal key={i} {...medal}/>) }</ul>
          </section>
        </div>
        <div className="navigateBack">
          <Link to="/">« Back to the index</Link>
        </div>
      </div>
    );
  }
}

By now you should be able to understand most of the code shown here and how the other components are used to build this view.

What is important to underline is that this page component accepts from the outside only the id of the athlete, so we import the data module to be able to retrieve the related information. We do this at the beginning of the render method, using the filter function on the data set.

We are also considering the case where the received id does not exist in our data module; in this case we render NotFoundPage, a component that we are going to create in the next section.
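The lookup at the top of render can be tried out as plain JavaScript. Here we extract it into a function over a two-entry sample of the data (the function name and the second record are our own, for illustration):

```javascript
// Minimal sample of the athletes data module.
const athletes = [
  { id: 'driulis-gonzalez', name: 'Driulis González' },
  { id: 'another-athlete', name: 'Another Athlete' },
];

// filter returns an array of matches; taking element [0] yields either
// the athlete object or undefined, which is what triggers <NotFoundPage/>.
function findAthlete(id) {
  return athletes.filter((athlete) => athlete.id === id)[0];
}

console.log(findAthlete('driulis-gonzalez').name); // Driulis González
console.log(findAthlete('missing-id'));            // undefined
```

The undefined result for an unknown id is exactly what makes the `if (!athlete)` guard in the component render the 404 view.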
One last important detail is that here we are accessing the id with this.props.params.id (instead of simply this.props.id): params is a special object created by React Router when using a component from a Route, and it allows the router to propagate routing parameters into components. It will be easier to understand this concept when we see how to set up the routing part of the application.

Not Found Page component

Now let's see the NotFoundPage component, which acts as a template to generate the code of our 404 pages:

// src/components/NotFoundPage.js
import React from 'react';
import { Link } from 'react-router';

export default class NotFoundPage extends React.Component {
  render() {
    return (
      <div className="not-found">
        <h1>404</h1>
        <h2>Page not found!</h2>
        <p>
          <Link to="/">Go back to the main page</Link>
        </p>
      </div>
    );
  }
}

App Routes component

The last component we need to create is the AppRoutes component, the master component that renders all the other views using the React Router internally. This component will use the routes module, so let's have a quick look at it first:

// src/routes.js
import React from 'react'
import { Route, IndexRoute } from 'react-router'

import Layout from './components/Layout';
import IndexPage from './components/IndexPage';
import AthletePage from './components/AthletePage';
import NotFoundPage from './components/NotFoundPage';

const routes = (
  <Route path="/" component={Layout}>
    <IndexRoute component={IndexPage}/>
    <Route path="athlete/:id" component={AthletePage}/>
    <Route path="*" component={NotFoundPage}/>
  </Route>
);

export default routes;

In this file we are basically using the React Router Route component to map a set of routes to the page components we defined before. Note how the routes are nested inside a main Route component. Let's explain how this works:

- The root route maps the path / to the Layout component. This allows us to use our custom layout in every section of our application.
The components defined in the nested routes will be rendered inside the Layout component in place of the this.props.children property that we discussed before.
- The first child route is an IndexRoute, which is a special route used to define the component that will be rendered when we are viewing the index page of the parent route (/ in this case). We use our IndexPage component as index route.
- The path athlete/:id is mapped to the AthletePage. Note here that we are using a named parameter :id. So this route will match all the paths with the prefix /athlete/, and the remaining part will be associated with the id param and will be available inside the component in this.props.params.id.
- Finally the match-all route * maps every other path to the NotFoundPage component. This route must be defined as the last one.

Let's see now how to use these routes with the React Router inside our AppRoutes component:

// src/components/AppRoutes.js
import React from 'react';
import { Router, browserHistory } from 'react-router';

import routes from '../routes';

export default class AppRoutes extends React.Component {
  render() {
    return (
      <Router history={browserHistory} routes={routes} onUpdate={() => window.scrollTo(0, 0)}/>
    );
  }
}

Basically we only need to import the Router component and add it inside our render function. The router receives our routes mapping in the routes prop. We also configure the history prop to specify that we want to use the HTML5 browser history for the routing (as an alternative you could also use hashHistory). Finally we also added an onUpdate callback to reset the scrolling of the window to the top every time a link is clicked.
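To demystify the named parameter :id used in the routes above, here is a tiny, dependency-free sketch of what the router conceptually does when it matches a pattern against a URL (this is not React Router's real implementation, just the idea behind this.props.params):

```javascript
// Match a concrete path against a route pattern, extracting the
// segments marked with a leading colon as named parameters.
function matchRoute(pattern, path) {
  const patternParts = pattern.split('/');
  const pathParts = path.split('/');
  if (patternParts.length !== pathParts.length) {
    return null; // different number of segments: no match
  }
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].charAt(0) === ':') {
      params[patternParts[i].slice(1)] = pathParts[i]; // named parameter
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // static segment differs: no match
    }
  }
  return params;
}

console.log(matchRoute('athlete/:id', 'athlete/teddy-riner')); // { id: 'teddy-riner' }
console.log(matchRoute('athlete/:id', 'about'));               // null
```

React Router performs a much more sophisticated version of this matching and then injects the extracted object into the matched component as this.props.params.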
The application entry point

The last bit of code to complete our first version of the application is to define the JavaScript logic that initializes the whole app in the browser:

// src/app-client.js
import React from 'react';
import ReactDOM from 'react-dom';

import AppRoutes from './components/AppRoutes';

window.onload = () => {
  ReactDOM.render(<AppRoutes/>, document.getElementById('main'));
};

The only thing we do here is to import our master AppRoutes component and render it using the ReactDOM.render method. The React app will be living inside our #main DOM element.

Setting up Webpack and Babel

Before we are able to run our application we need to generate the bundle.js file containing all our React components with Webpack. This file will be executed by the browser, so Webpack will make sure to convert all the modules into code that can be executed in the most common browser environments. Webpack will convert ES2015 and React JSX syntax to equivalent ES5 syntax (using Babel), which can be executed by practically every browser. Furthermore, we can use Webpack to apply a number of optimizations to the resulting code, like combining all the script files into one file and minifying the resulting bundle.
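To give a rough idea of the kind of rewriting Babel performs (the real output differs in its details), JSX such as <div className="home"/> becomes a plain call to React.createElement('div', { className: 'home' }), and ES2015 constructs are rewritten into ES5 equivalents that behave the same way:

```javascript
// ES2015 source, as we write it:
const greet = (name) => `Hello, ${name}!`;

// A hand-written approximation of what Babel emits in ES5:
var greetES5 = function (name) {
  return 'Hello, ' + name + '!';
};

// Both versions behave identically:
console.log(greet('Ada'));    // 'Hello, Ada!'
console.log(greetES5('Ada')); // 'Hello, Ada!'
```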
Let's write our webpack configuration file:

// webpack.config.js
const webpack = require('webpack');
const path = require('path');

module.exports = {
  entry: path.join(__dirname, 'src', 'app-client.js'),
  output: {
    path: path.join(__dirname, 'src', 'static', 'js'),
    filename: 'bundle.js'
  },
  module: {
    loaders: [{
      test: path.join(__dirname, 'src'),
      loader: 'babel-loader',
      query: {
        cacheDirectory: 'babel_cache',
        presets: ['react', 'es2015']
      }
    }]
  },
  plugins: [
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV)
    }),
    new webpack.optimize.DedupePlugin(),
    new webpack.optimize.OccurenceOrderPlugin(),
    new webpack.optimize.UglifyJsPlugin({
      compress: { warnings: false },
      mangle: true,
      sourcemap: false,
      beautify: false,
      dead_code: true
    })
  ]
};

In the first part of the configuration file we define the entry point and the output file. The entry point is the main JavaScript file that initializes the application. Webpack will recursively resolve all the included/imported resources to determine which files will go into the final bundle.

The module.loaders section allows us to specify transformations on specific files. Here we want to use Babel with the react and es2015 presets to convert all the included JavaScript files to ES5 code.

In the final section we use plugins to declare and configure all the optimization plugins we want to use:
- DefinePlugin allows us to define the NODE_ENV variable as a global variable in the bundling process, as if it was defined in one of the scripts. Some modules (e.g. React) rely on it to enable or disable specific features for the current environment (production or development).
- DedupePlugin removes all the duplicated files (modules imported in more than one module).
- OccurenceOrderPlugin helps in reducing the file size of the resulting bundle.
- UglifyJsPlugin minifies and obfuscates the resulting bundle using UglifyJs.
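It may not be obvious why DefinePlugin counts as an optimization. Libraries such as React wrap their development-only checks in a guard on process.env.NODE_ENV; once DefinePlugin replaces that expression with the literal string 'production', the guard becomes if ('production' !== 'production'), which UglifyJs can then strip as dead code. A runnable sketch of the pattern (here we set the variable at runtime only to simulate what happens at build time):

```javascript
// Simulate the build-time substitution: in the real bundle, DefinePlugin
// replaces process.env.NODE_ENV with the literal 'production'.
process.env.NODE_ENV = 'production';

const devWarnings = [];

function warnInDev(message) {
  // Typical guard used by libraries to keep debug code out of production:
  if (process.env.NODE_ENV !== 'production') {
    devWarnings.push(message); // development-only work
  }
}

warnInDev('something looks wrong');
console.log(devWarnings.length); // 0: the branch never runs in production mode
```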
Now we are ready to generate our bundle file; you just need to run:

NODE_ENV=production node_modules/.bin/webpack -p

(if you are on Windows you can use PowerShell and run set NODE_ENV=production | node_modules/.bin/webpack -p. Thanks Miles Rausch for the suggestion)

The NODE_ENV environment variable and the -p option are used to generate the bundle in production mode, which will apply a number of additional optimizations, for example removing all the debug code from the React library. If everything went fine you will now have your bundle file in src/static/js/bundle.js.

Playing with the single page app

We are finally ready to play with the first version of our app! We don't have a Node.js web server yet, so for now we can just use the module http-server (previously installed as a development dependency) to run a simple static file web server:

node_modules/.bin/http-server src/static

And your app will be magically available on http://localhost:8080 (the default http-server port). Ok, now take some time to play with it, click on all the links and explore all the sections. Does everything seem to work all right? Well, almost! There's just a little caveat... If you refresh the page in a section different from the index you will get a 404 error from the server. There are a number of ways to address this problem. In our case it will be solved as soon as we implement our universal routing and rendering solution, so let's move on to the next section.

Routing and rendering on the server with Express

Ok, we are now ready to evolve our application to the next level and build the missing server-side part. In order to have server-side routing and rendering we will use Express with a relatively small server script that we will see in a moment.
The rendering part will use an ejs template as a replacement for our index.html file, which we will save in src/views/index.ejs:

...
<div id="main"><%- markup -%></div>
<script src="/js/bundle.js"></script>
</body>
</html>

The only difference with the original HTML file is that we are using the template variable <%- markup -%> inside the #main div in order to include the React markup into the server-generated HTML code.

Now we are ready to write our server application:

// src/server.js
import path from 'path';
import { Server } from 'http';
import Express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import { match, RouterContext } from 'react-router';
import routes from './routes';
import NotFoundPage from './components/NotFoundPage';

// initialize the server and configure support for ejs templates
const app = new Express();
const server = new Server(app);
app.set('view engine', 'ejs');
app.set('views', path.join(__dirname, 'views'));

// define the folder that will be used for static assets
app.use(Express.static(path.join(__dirname, 'static')));

// universal routing and rendering
app.get('*', (req, res) => {
  match(
    { routes, location: req.url },
    (err, redirectLocation, renderProps) => {
      // in case of error display the error message
      if (err) {
        return res.status(500).send(err.message);
      }
      // in case of redirect propagate the redirect to the browser
      if (redirectLocation) {
        return res.redirect(302, redirectLocation.pathname + redirectLocation.search);
      }
      // generate the React markup for the current route
      let markup;
      if (renderProps) {
        // if the current route matched we have renderProps
        markup = renderToString(<RouterContext {...renderProps}/>);
      } else {
        // otherwise we can render a 404 page
        markup = renderToString(<NotFoundPage/>);
        res.status(404);
      }
      // render the index template with the embedded React markup
      return res.render('index', { markup });
    }
  );
});

// start the server
const port = process.env.PORT || 3000;
const env = process.env.NODE_ENV || 'production';
server.listen(port, err => {
  if (err) {
    return console.error(err);
  }
  console.info(`Server running on http://localhost:${port} [${env}]`);
});

The code is commented, so it shouldn't be hard to get a general understanding of what is going on here. The important part of the code here is the Express route defined with app.get('*', (req, res) => {...}). This is an Express catch-all route that will intercept all the GET requests to every URL in the server. Inside this route, we take care of delegating the routing logic to the React Router match function.

ReactRouter.match accepts two parameters: the first one is a configuration object and the second is a callback function. The configuration object must have two keys:
- routes: used to pass the React Router routes configuration. Here, we are passing the exact same configuration that we used for the client-side rendering.
- location: This is used to specify the currently requested URL.

The callback function is called at the end of the matching. It will receive three arguments, error, redirectLocation, and renderProps, that we can use to determine what exactly the result of the match operation was. We can have four different cases that we need to handle:
- The first case is when we have an error during the routing resolution. To handle this case, we simply return a 500 internal server error response to the browser.
- The second case is when we match a route that is a redirect route. In this case, we need to create a server redirect message (302 redirect) to tell the browser to go to the new destination (this is not really happening in our application because we are not using redirect routes in our React Router configuration, but it's good to have it ready in case we decide to keep evolving our application).
- The third case is when we match a route and we have to render the associated component. In this case, the argument renderProps is an object that contains the data we need to use to render the component.
The component we are rendering is RouterContext (contained in the React Router module), which is responsible for rendering the full component tree using the values in renderProps.
- The last case is when the route is not matched, and here we can simply return a 404 not found error to the browser.

This is the core of our server-side routing mechanism, and we use the renderToString function (imported from react-dom/server) to render the HTML code that represents the component associated with the currently matched route. Finally, we inject the resulting HTML into the index.ejs template we defined before to obtain the full HTML page that we send to the browser.

Now we are ready to run our server.js script, but because it's using the JSX syntax we cannot simply run it with the node interpreter. We need to use babel-node, and the full command (from the root folder of our project) looks like this:

NODE_ENV=production node_modules/.bin/babel-node --presets react,es2015 src/server.js

Running the complete app

At this stage your app is available at http://localhost:3000 and, for the sake of this tutorial, it can be considered complete. Again feel free to check it out and try all the sections and links. You will notice that this time we can refresh every page and the server will be able to identify the current route and render the right page. Small advice: don't forget to check out the 404 page by entering a random non-existing URL!

Wrapping up

HOORAY! This completes our tutorial! I'm really happy to know you got to the end, but you can make me even happier if you post here in the comments some example Universal JavaScript apps that you built using this (or a similar) approach. If you want to know more about Universal JavaScript and improve your application even more (e.g. by adding Universal Data Retrieval using REST APIs) I definitely recommend reading the chapter Universal JavaScript for Web Applications in my book Node.js Design Patterns.

Until next time!
PS: Huge thanks to Mario Casciaro for reviewing this article and to Marwen Trabelsi for the support on improving the code! Also thanks to @CriticalMAS for finding a typo :P
https://scotch.io/tutorials/react-on-the-server-for-beginners-build-a-universal-react-and-node-app
Quick-Start Guide for uVisor on mbed OS

This guide will help you get started with uVisor on mbed OS by walking you through creating a sample application for the NXP FRDM-K64F board. The uVisor provides sandboxed environments and resource protection for applications built for ARM Cortex-M3 and Cortex-M4 devices. Here we will show you how to enable the uVisor and configure a secure box to get hold of some exclusive resources (memory, peripherals, interrupts). For more information on the uVisor design philosophy, please check out the uVisor introductory document.

Overview

To get a basic blinky application running on mbed OS with uVisor enabled, you will need the following:
- A platform and a toolchain supported by uVisor on mbed OS. You can verify this on the official list. Please note that uVisor might support some platforms internally, but not on mbed OS. Generally this means that the porting process has only been partially completed. If you want to port your platform to uVisor and enable it on mbed OS, please follow the uVisor Porting Guide for mbed OS.
- git. It will be used to download the mbed codebase.
- The mbed command-line tools, mbed-cli. You can run pip install mbed-cli to install them.

For the remainder of this guide we will assume the following:
- You are developing on a *nix machine, in the ~/code folder.
- You are building the app for the NXP FRDM-K64F target, with the GCC ARM Embedded toolchain.

The instructions provided can be easily generalized to the case of other targets on other host OSs.

Start with the blinky app

To create a new mbed application called uvisor-example just run the following commands:

$ cd ~/code
$ mbed new uvisor-example

The mbed-cli tools will automatically fetch the mbed codebase for you. By default, git will be used to track your code changes, so your application will be ready to be pushed to a git server, if you want to.
Once the import process is finished, create a source folder:

$ mkdir ~/code/uvisor-example/source

and place a new file main.cpp in it:

/* ~/code/uvisor-example/source/main.cpp */

#include "mbed.h"
#include "rtos.h"

DigitalOut led(LED1);

int main(void)
{
    while (true) {
        led = !led;
        Thread::wait(500);
    }
}

This simple application just blinks an LED from the main thread, which is created by default by the OS.

Checkpoint

Compile the application:

$ mbed compile -m K64F -t GCC_ARM

The resulting binary will be located at:

~/code/uvisor-example/.build/K64F/GCC_ARM/uvisor-example.bin

Drag-and-drop it onto the USB device mounted on your computer in order to flash the device. When the flashing process is completed, press the reset button on the device. You should see the device LED blinking.

In the next sections you will see:
- How to enable uVisor on the uvisor-example app.
- How to add a secure box to the uvisor-example app with exclusive access to a timer, to a push-button interrupt, and to static and dynamic memories.

Enable uVisor

To enable the uVisor on the app, just add the following lines at the beginning of the main.cpp file:

/* ~/code/uvisor-example/source/main.cpp */

#include "mbed.h"
#include "rtos.h"
#include "uvisor-lib/uvisor-lib.h"

/* Register privileged system hooks.
 * This is a system-wide configuration and it is independent from the app, but
 * for the moment it needs to be specified in the app. This will change in a
 * later version: The configuration will be provided by the OS. */
extern "C" void SVC_Handler(void);
extern "C" void PendSV_Handler(void);
extern "C" void SysTick_Handler(void);
extern "C" uint32_t rt_suspend(void);
UVISOR_SET_PRIV_SYS_HOOKS(SVC_Handler, PendSV_Handler, SysTick_Handler, rt_suspend);

/* Main box Access Control Lists (ACLs). */
/* Note: These are specific to the NXP FRDM-K64F board. See the section below
 * for more information.
*/ static const UvisorBoxAclItem g_main_box_acls[] = { /* For the LED */ {SIM, sizeof(*SIM), UVISOR_TACLDEF_PERIPH}, {PORTB, sizeof(*PORTB), UVISOR_TACLDEF_PERIPH}, /* For messages printed on the serial port. */ {OSC, sizeof(*OSC), UVISOR_TACLDEF_PERIPH}, {MCG, sizeof(*MCG), UVISOR_TACLDEF_PERIPH}, {UART0, sizeof(*UART0), UVISOR_TACLDEF_PERIPH}, }; /* Enable uVisor, using the ACLs we just created. */ UVISOR_SET_MODE_ACL(UVISOR_ENABLED, g_main_box_acls); /* Rest of the existing app code */ ... In the code above we specified 3 elements: - System-wide uVisor configurations: UVISOR_SET_PRIV_SYS_HOOKS. Application authors currently need to specify the privileged system hooks at the application level with this macro, but in the future the operating system will register the privileged system hooks on its own. - Main box Access Control Lists (ACLs). Since with uVisor enabled everything runs in unprivileged mode, we need to make sure that peripherals that are accessed by the OS and the main box are allowed. These peripherals are specified using a list like the one in the snippet above. For the purpose of this example we provide you the list of all the ACLs that we know you will need. For other platforms or other applications you need to determine those ACLs following a process that is described in a section below. - App-specific uVisor configurations: UVISOR_SET_MODE_ACL. This macro sets the uVisor mode (enabled) and associates the list of ACLs we just created with the main box. Before compiling, we need to override the original K64F target to enable the uVisor feature. 
To do so, add the file ~/code/uvisor-example/mbed_app.json with the following content:

{
  "target_overrides": {
    "K64F": {
      "target.features_add": ["UVISOR"],
      "target.extra_labels_add": ["UVISOR_SUPPORTED"]
    }
  },
  "macros": [
    "FEATURE_UVISOR",
    "TARGET_UVISOR_SUPPORTED"
  ]
}

The macros FEATURE_UVISOR and TARGET_UVISOR_SUPPORTED in the configuration file above are automatically defined for C and C++ files, but not for assembly files. Since the uVisor relies on those symbols in some assembly code, we need to define them manually.

Checkpoint

Compile the application again. This time the K64F target will include the new features and labels we provided in mbed_app.json:

$ mbed compile -m K64F -t GCC_ARM

The binary will be located at:

~/code/uvisor-example/.build/K64F/GCC_ARM/uvisor-example.bin

Re-flash the device and press the reset button. The device LED should be blinking as in the previous case.

If you enable uVisor in the blinky app as it was written above, you will not get any particular security feature. All code and resources share the same security context, which we call the main box. A lot happens under the hood, though. All the user code now runs in unprivileged mode, and the system services like the NVIC APIs or the OS SVCalls are routed through the uVisor.

Add a secure box

Now that uVisor is enabled, we can finally add a secure box. A secure box is a special compartment that is granted exclusive access to peripherals, memories and interrupts. Private resources are only accessible when the context of the secure box is active. The uVisor is the only one that can enable a secure box context, for example upon thread switching or interrupt handling. Code that belongs to a box is not obfuscated by uVisor, so it is still readable and executable from outside of the box. In addition, declaring an object in the same file that configures a secure box does not protect that object automatically.
Instead, we provide specific APIs to instruct the uVisor to protect a private resource. Here we will show how to use these APIs in the uvisor-example app. Configure the secure box For this example, we want to create a secure box called private_timer. The private_timer box will be configured to have exclusive access to the PIT timer and to the GPIO PORT C on the NXP FRDM-K64F board, which means that other boxes will be prevented from accessing these peripherals. Each secure box must have at least one thread, which we call the box’s main thread. In our private_timer box we will only use this thread throughout the whole program. The thread will constantly save the current timer value in a private buffer. In addition, we want to print the content of the buffered timer values whenever we press the SW2 button on the board. We want the box to have exclusive access to the following resources: - The timer and push-button peripherals (as specified by a peripheral ACL). Nobody else should be able to read the timer values. - The push-button interrupt (as specified by an IRQ ACL). We want the button IRQ to be re-routed to our box-specific ISR. - The buffer that holds the timer samples (as specified by a dynamic memory ACL). - The static memory that holds information about the timer buffer (as specified by a static memory ACL). Create a new source file, ~/code/uvisor-example/source/secure_box.cpp. We will configure the secure box inside this file. The secure box name for this example is private_timer. /* ~/code/uvisor-example/source/secure_box.cpp */ #include "mbed.h" #include "rtos.h" #include "uvisor-lib/uvisor-lib.h" /* Private static memory for the secure box */ typedef struct { uint32_t * buffer; int index; } PrivateTimerStaticMemory; /* ACLs list for the secure box: Timer (PIT). 
*/
static const UvisorBoxAclItem g_private_timer_acls[] = {
    {PIT,   sizeof(*PIT),   UVISOR_TACLDEF_PERIPH},
    {PORTC, sizeof(*PORTC), UVISOR_TACLDEF_PERIPH},
};

static void private_timer_main_thread(const void *);

/* Secure box configuration */
UVISOR_BOX_NAMESPACE(NULL);                   /* We won't specify a box namespace for this example. */
UVISOR_BOX_HEAPSIZE(4096);                    /* Heap size for the secure box */
UVISOR_BOX_MAIN(private_timer_main_thread,    /* Main thread for the secure box */
                osPriorityNormal,             /* Priority of the secure box's main thread */
                1024);                        /* Stack size for the secure box's main thread */
UVISOR_BOX_CONFIG(private_timer,              /* Name of the secure box */
                  g_private_timer_acls,       /* ACLs list for the secure box */
                  1024,                       /* Stack size for the secure box */
                  PrivateTimerStaticMemory);  /* Private static memory for the secure box. */

Create the secure box's main thread function

In general, you can decide what to do in your box's main thread. You can run it once and then kill it, use it to configure memories, peripherals, or to create other threads. In this app, the box's main thread is the only thread for the private_timer box, and it will run throughout the whole program.

The private_timer_main_thread function configures the PIT timer, allocates the dynamic buffer to hold the timer values and initializes its private static memory, PrivateTimerStaticMemory. A spinning loop is used to update the values in the buffer every time the thread is reactivated.

/* Number of timer samples we will use */
#define PRIVATE_TIMER_BUFFER_COUNT 256

/* For debug purposes: print the buffer values when the SW2 button is pressed. */
static void private_timer_button_on_press(void)
{
    for (int i = 0; i < PRIVATE_TIMER_BUFFER_COUNT; i++) {
        printf("buffer[%03d] = %lu\r\n", i, uvisor_ctx->buffer[i]);
    }
}

/* Main thread for the secure box */
static void private_timer_main_thread(const void *)
{
    /* Create the buffer and cache its pointer to the private static memory.
    */
    uvisor_ctx->buffer = (uint32_t *) malloc(PRIVATE_TIMER_BUFFER_COUNT * sizeof(uint32_t));
    if (uvisor_ctx->buffer == NULL) {
        mbed_die();
    }
    uvisor_ctx->index = 0;

    /* Setup the push-button callback. */
    InterruptIn button(SW2);
    button.mode(PullUp);
    button.fall(&private_timer_button_on_press);

    /* Setup and start the timer. */
    Timer timer;
    timer.start();

    while (1) {
        /* Store the timer value. */
        uvisor_ctx->buffer[uvisor_ctx->index] = timer.read_us();

        /* Update the index. Behave as a circular buffer. */
        if (uvisor_ctx->index < PRIVATE_TIMER_BUFFER_COUNT - 1) {
            uvisor_ctx->index++;
        } else {
            uvisor_ctx->index = 0;
        }
    }
}

A few things to note in the code above:
- If code is running in the context of private_timer, then any object instantiated inside that code will belong to the private_timer heap and stack. This means that in the example above the InterruptIn and Timer objects are private to the private_timer box. The same applies to the dynamically allocated buffer uvisor_ctx->buffer.
- The content of the private memory PrivateTimerStaticMemory can be accessed using the PrivateTimerStaticMemory * uvisor_ctx pointer, which is maintained by uVisor.
- The InterruptIn object triggers the registration of an interrupt slot. Since that code is run in the context of the private_timer box, the push-button IRQ belongs to that box. If you want to use the IRQ APIs directly, read the section below.
- Even if the private_timer_button_on_press function runs in the context of private_timer, we can still use the printf function, which accesses the UART0 peripheral, owned by the main box. This is because all ACLs declared in the main box are by default shared with all the other secure boxes. This also means that the messages we are printing on the serial port are not secure, because other boxes have access to that peripheral.

Warning: Instantiating an object in the secure_box.cpp global scope will automatically map it to the main box context, not the private_timer one.
If you want an object to be private to a box, you need to instantiate it inside the code that will run in the context of that box (like the InterruptIn and Timer objects), or alternatively statically initialize it in the box private static memory (like the buffer and index variables in PrivateTimerStaticMemory).

Checkpoint

Compile the application again:

$ mbed compile -m K64F -t GCC_ARM

Re-flash the device, and press the reset button. The device LED should be blinking as in the previous case.

If you don't see the LED blinking, it means that the application halted somewhere, probably because uVisor captured a fault. You can set up the uVisor debug messages to see if there is any problem. Follow the Debugging uVisor on mbed OS document for a step-by-step guide.

If the LED is blinking, it means that the app is running fine. If you now press the SW2 button on the NXP FRDM-K64F board, the private_timer_button_on_press function will be executed, printing the values in the timer buffer. You can observe these values by opening a serial port connection to the device, with a baud rate of 9600. When the print is completed, you should see the LED blinking again.

Expose public secure entry points to the secure box

Wrap-up

In this guide we showed you how to:
- Enable uVisor on an existing application.
- Add a secure box to your application.
- Protect static and dynamic memories in a secure box.
- Gain exclusive access to a peripheral and an IRQ in a secure box.
- (Coming soon) Expose public secure entry points to a secure box.

You can now modify the example or create a new one to protect your resources into a secure box. You might find the following resources useful:

If you found any bug or inconsistency in this guide, please raise an issue.

Appendix

This section contains additional information that you might find useful when setting up a secure box.

The NVIC APIs

The ARM CMSIS header files provide APIs to configure, enable and disable IRQs in the NVIC module.
These APIs are all prefixed with NVIC_ and can be found in the core_cm*.h files in your CMSIS module. In addition, the CMSIS headers also provide APIs to set and get an interrupt vector at runtime. This requires the interrupt vector table, which is usually located in flash, to be relocated to SRAM.

When the uVisor is enabled, all NVIC APIs are re-routed to the corresponding uVisor vIRQ APIs, which virtualize the interrupt module. The uVisor interrupt model has the following features:
- The uVisor owns the interrupt vector table.
- All ISRs are relocated to SRAM.
- Code in a box can only change the state of an IRQ (enable it, change its priority, etc.) if the box registered that IRQ with uVisor at runtime, using the NVIC_SetVector API.
- An IRQ that belongs to a box can only be modified when that box context is active.

Although this behaviour is different from the original NVIC one, it is backwards compatible. This means that legacy code (like a device HAL) will still work after uVisor is enabled. The general use case is the following:

#define MY_IRQ 42

/* Set the ISR for MY_IRQ at runtime.
 * Without uVisor: Relocate the interrupt vector table to SRAM and set my_isr as the ISR for MY_IRQ.
 * With uVisor: Register MY_IRQ for the current box with my_isr as ISR. */
NVIC_SetVector(MY_IRQ, &my_isr);

/* Change the IRQ state. */
NVIC_SetPriority(MY_IRQ, 3);
NVIC_EnableIRQ(MY_IRQ);

Note: In this model a call to NVIC_SetVector must always happen before an IRQ state is changed. In platforms that don't relocate the interrupt vector table such a call might be originally absent and must be added to work with uVisor.

For more information on the uVisor APIs, check out the uVisor API documentation document.

The main box ACLs

The code samples that we provide in this guide give you a ready-made list of ACLs for the main box. The list includes peripherals that we already know will be necessary to make the example app work, and it is specific to the NXP FRDM-K64F target.
This section shows how to discover the needed ACLs for the main box. You might need to follow these instructions in case you want to generate the ACLs list for a different target or a different app.

At the moment the uVisor does not provide a way to detect and list all the faulting ACLs for a given platform automatically. This is a planned feature that will be released in the future.

In order to generate the list of ACLs, use the code provided in the Enable uVisor section. In this case, though, start with an empty ACLs list:

static const UvisorBoxAclItem g_main_box_acls[] = {
};

You now need to compile your application using uVisor in debug mode. This operation requires some more advanced steps, which are described in detail in the Debugging uVisor on mbed OS document. The main idea is that you compile the application in debug mode:

$ mbed compile -m K64F -t GCC_ARM -o "debug-info"

and then use a GDB-compatible interface to flash the device, enable semihosting, and access the uVisor debug messages. Please read the Debugging uVisor on mbed OS document for the detailed instructions.

Once the uVisor debug messages are enabled, you will see your application fail. The failure is due to the first missing ACL being hit by the main box code. The message will look like:

***********************************************************
                        BUS FAULT
***********************************************************
...
* MEMORY MAP
  Address:           0x4004800C
  Region/Peripheral: SIM
    Base address:    0x40047000
    End address:     0x40048060
...

Now that you know which peripheral is causing the fault (the SIM peripheral, in this example), you can add its entry to the ACLs list:

static const UvisorBoxAclItem g_main_box_acls[] = {
    {SIM, sizeof(*SIM), UVISOR_TACLDEF_PERIPH},
};

Note: If the fault debug screen does not show the name of the peripheral, you need to look it up in the target device reference manual.
For readability, do not use the hard-coded addresses of your peripherals, but rather use the symbols provided by the target CMSIS module. Repeat the process multiple times until all ACLs have been added to the list. When no other ACL is needed any more, the system will run without hitting a uVisor fault.
https://docs.mbed.com/docs/mbed-os-api-reference/en/latest/APIs/security/uvisor/
Todd Lipcon commented on HDFS-1969: ----------------------------------- This issue seems to extend back a ways -- I just tried the same steps going from 0.18.3 to 0.20.2 and saw essentially the same behavior. I think what needs to happen is that, when you ask the NN to do a rollback, it will only succeed if that NN's LAYOUT_VERSION constant matches the version of the previous/ directory. Otherwise we should exit with an error message indicating that you should run "rollback" using the old version of the software instead of the new version. > Running rollback on new-version namenode destroys namespace > ----------------------------------------------------------- > > Key: HDFS-1969 > URL: > Project: Hadoop HDFS > Issue Type: Bug > Components: name-node > Affects Versions: 0.22.0 > Reporter: Todd Lipcon > Priority: Blocker > Fix For: 0.22
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201105.mbox/%3C919617072.29649.1305866571904.JavaMail.tomcat@hel.zones.apache.org%3E
Subject: [boost] [container_bptree] updates and performance measurements From: Vadim Stadnik (vadimstdk_at_[hidden]) Date: 2011-06-14 08:43:16
Hi all,
The code and documentation of the submitted library container_bptree have been updated. The new version can be downloaded using this link:
1. For those who do not know this library, I'd suggest before opening the documentation to spend some time answering the following warm-up question: Is it possible to sum up any N consecutive numbers in less than N-1 operations?
2. The library and tests compile and run correctly on two C++ compilers: MSVC++ 9/10 (tested with Visual Studio 2008/2010, including Express editions), and GCC 4.5.2 (tested with MinGW).
3. Lists of template parameters of class templates have been modified to simplify the generation of classes of STL variants of containers.
4. Performance measurements have been added to the documentation. They show that the submitted B+ trees are highly competitive with Red Black trees in basic operations.
5. I would like to thank Beman Dawes for the suggestion to adapt and apply map tests from his project [BTree]. Class templates of the namespace container_bptree can be used to generate three types of map containers with bidirectional and random access iterators. All of them successfully pass the adapted tests on Windows systems. At this stage it would be particularly valuable, in addition to my own tests, to run independent tests for associative containers with random access iterators. This is why I am wondering if there are Boost libraries which include classes of such associative containers and test code.
6. Since I am a newcomer to the Boost community I do not know how the current level of quality of the library container_bptree meets Boost standards. Anyone who tried the code and read the documentation is welcome to make comments and suggestions. I am quite happy to implement another list of improvements before the formal request for interest to this library.
Regards, Vadim Stadnik Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
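[Editorial note: the warm-up question in point 1 can be illustrated with prefix sums. This Python sketch is not code from the library; the submitted B+ trees achieve a comparable effect dynamically (supporting updates in logarithmic time), whereas plain prefix sums only work for static data.]

```python
def build_prefix_sums(values):
    """Precompute partial sums: prefix[i] = values[0] + ... + values[i-1]."""
    prefix = [0]
    for v in values:
        prefix.append(prefix[-1] + v)
    return prefix

def range_sum(prefix, i, j):
    """Sum of values[i:j] with a single subtraction, far fewer than j-i-1 additions."""
    return prefix[j] - prefix[i]

values = [3, 1, 4, 1, 5, 9, 2, 6]
prefix = build_prefix_sums(values)
print(range_sum(prefix, 2, 7))  # 4+1+5+9+2 = 21
```

After the one-time O(N) precomputation, the sum of any consecutive range costs O(1), which is the idea behind answering "yes" to the warm-up question.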
https://lists.boost.org/Archives/boost/2011/06/182681.php
I have a sent.py file, and in this file I have a function getProcessing(). When I call this function I want to call a function (createmrjob) from another Python file (processing.py). I'm trying to do this with the code below and it is working fine, but after I call getProcessing() when the user chooses option 2, a processing.pyc file is created. Is this normal? Is there a way to prevent this file from being created?
def getProcessing():
    from processing import createmrjob

def nav():
    print ""
    while True:
        response_options = {'1': Choice(msg="User Managment", callback=userManagment),
                            '2': Choice(msg="processing", callback=getProcessing)}
        result = make_choice(response_options)
        if result is None:
            print "Selected option not available."
        else:
            result.callback()
Yes, it is normal. The *.pyc file contains a compiled version of your module.
That is normal. Python tries to optimize by generating a .pyc version of the .py, which is pre-compiled. You can avoid this by environment variables, command-line parameters, or from inside the program. See this post.
It is normal. It is a byte code file, which is generated by the Python interpreter and is executed by Python's virtual machine.
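To illustrate the answers above, here is a minimal, self-contained sketch of the three ways to suppress bytecode generation (shown in Python 3 syntax, where the .pyc files land in a __pycache__ directory). The module name demo_mod is hypothetical, created on the fly just for this example:

```python
import os
import sys
import tempfile

# 1) From inside the program: set this before the import happens.
sys.dont_write_bytecode = True

# 2) Equivalent environment variable (read at interpreter startup):
#    PYTHONDONTWRITEBYTECODE=1 python sent.py
# 3) Equivalent command-line flag:
#    python -B sent.py

# Demonstrate that importing a module no longer leaves bytecode behind.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "demo_mod.py"), "w") as fh:
    fh.write("def createmrjob():\n    return 'job created'\n")

sys.path.insert(0, workdir)
import demo_mod  # would normally trigger bytecode (__pycache__) creation

print(demo_mod.createmrjob())
print(os.path.exists(os.path.join(workdir, "__pycache__")))  # False
```

All three mechanisms have the same effect; the environment variable and the -B flag are most useful when you cannot modify the source.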
http://www.dlxedu.com/askdetail/3/f359231b70aafe7d6111614ffd1b2a53.html
I've added a category to my blog where I'll be adding content regarding the Adelaide .NET User Group. The website for the user group is: and we meet every 2nd Wednesday of the month. In recent months we've had some interesting topics including a lively discussion around the System.IO namespace and also a great demo of the features in SP2. You can read about meetings on the website at this address: or, better still, subscribe to this category of my blog and I'll be sending out reminders on the Monday before the meeting :-) If you are not currently a member of the group, come along and participate as we ready ourselves to get the most out of Whidbey!?
http://weblogs.asp.net/dneimke/archive/2004/06/18/158966.aspx#7529049
/* Program name: Sweets.java
 * Author name: Place your name here
 * Description:
 */
import java.io.*;

public class Sweets {
    public static void main(String[] args) throws IOException {
        int sweets;
        System.out.println("Enter number of sweets: ");
        sweets = System.in.read();
        numberOfSweets();
    }

    public int numberOfSweets() {
        dozens = sweets / 12;
        System.out.print("You have " + dozens + " dozens!");
    }
}
Its main() method holds an integer variable named numberOfSweets to which you will assign a value.
Originally posted by ayukawa madoka: I did...
System.out.println( "Dozens: " + (dozens / 12) );
System.out.println( "Left over: " + (leftOver % 12) );
I already declared the variables but it won't work. When compiling it says
"Sweets.java:26: variable dozens might not have been initialized
System.out.println( "Dozens: " + (dozens / 12) );"
// ^ points to "dozens" (same goes for leftOver)
http://www.coderanch.com/t/395904/java/java/Programming-Exercises
Hey Everyone, Started Java for couple of months now and still struggling to put things together.Any assistant will be highly appreciated. I wanted to display an initial menu screen with the following set of options: 1. Display the current score for each possible response. 2. Vote 3. Quit the program. I wanted to display the user selects lets say the user selected option two, the program will display the question and the four possible responses eg Who will win the Champions League in 2010/11? " + "\n 1. Real Madrid" + "\n 2. Barcelona" + "\n 3. Chelsea" + "\n 4. Manchester United. The user will then enter a response and the response will be recorded in the program. this is what i have coded so far' import javax.swing.JOptionPane; import java.io.*; public class ChampionLeague { static public int menu(String message, String menuOptions[]){ int choice; int Quit = 0; String optionSelected = " "; String outputString = " "; //Show menu and get users choice choice = JOptionPane.showOptionDialog(null, "Display the current score for each possible response \n" + "2. Vote \n " + "3. Quit the program "); return choice; int vote = 4; String game [][] = new String [choice][choice]; game [0][0] = "Real Madrid "; game [0][1] = "250 "; // display current vote game [1][0] = "Barcelona "; game [1][1] = " 320 "; game [2][0] = "Chelsea "; game [2][1] = "140 "; game [3][0] = "Manchester United "; game [3][1] = "300 "; for(int i = 0; i < choice; i++){ String name = game[i][0]; String marks = game[i][1]; } JOptionPane.showMessageDialog(null, outputString ); String input = JOptionPane.showInputDialog(" Who will win the Champions League in 2010/11? " + "\n 1. Real Madrid" + "\n 2. Barcelona" + "\n 3. Chelsea" + "\n 4. 
Manchester United"); int x = Integer.parseInt(input); if (x == 1) outputString = outputString + "You entered Real Madrid"; else if (x == 2) outputString = outputString +("You Entered Barcelona"); else if (x == 3) outputString = outputString + ("You Entered Chelsea"); else if (x == 4) outputString = outputString + ("You Entered Manchester United"); else outputString = outputString + ("Invalid input\nEnter Valid Vote 1 - 4 "); optionSelected = Integer.toString(x); System.out.println(x); JOptionPane.showMessageDialog(null, outputString ); }//end of maim }// end of class
https://www.daniweb.com/programming/software-development/threads/353778/display-an-initial-menu-screen
Problem with delegates: repeated editing of same cell in QTableWidget has delay of ca 1 second - linuxbastler
Problem can be reproduced with the "Star Delegate Example".
- open "Star Delegate Example"
- in main.cpp replace the EditTriggers with all EditTriggers to start editing with a single mouse click:
// tableWidget.setEditTriggers(QAbstractItemView::DoubleClicked
//     | QAbstractItemView::SelectedClicked);
tableWidget.setEditTriggers(QAbstractItemView::AllEditTriggers);
- run the sample, click cell "Aqua" => editing starts immediately
- click cell "Tom Jones" => editing starts immediately
- click cell "Aqua" => editing starts immediately
- hit Escape-Key to leave cell "Aqua"
- click cell "Aqua" a second time => editing starts with a delay of ca 1 second
We do not like this delay when consecutively editing the same cell. Where does this delay come from? Can it be disabled? This problem happens on Linux and Windows and on Qt 5.6.2 and Qt 5.10.
- SGaist Lifetime Qt Champion Hi and welcome to devnet, What distribution/version of your OS are you using ?
Linux: Ubuntu 17.10 64 Bit and Ubuntu 14.04 32 Bit Windows: Windows 7 64 Bit
The problem is here basically it has to delay the editing because otherwise a double-click would be interpreted as 2 separate clicks. In release mode it's definitely not 1 sec but around 400ms
As per @VRonin's reply, the Qt code resolves to:
/*! \property QApplication::doubleClickInterval \brief the time limit in milliseconds that distinguishes a double click from two consecutive mouse clicks The default value on X11 is 400 milliseconds. On Windows and Mac OS, the operating system's value is used.
*/
void QApplication::setDoubleClickInterval(int ms)
{
    QGuiApplication::styleHints()->setMouseDoubleClickInterval(ms);
}
int QApplication::doubleClickInterval()
{
    return QGuiApplication::styleHints()->mouseDoubleClickInterval();
}
So presumably, if it really bothers you, you might make your own call to QApplication::setDoubleClickInterval() to set it to a lower value. You might be able to do this just for the duration of your particular usage case, and have it restored to default the rest of the time.
@VRonin One thing I do not see from your answer is the OP's:
- click cell "Aqua" => editing starts immediately
- hit Escape-Key to leave cell "Aqua"
- click cell "Aqua" a second time => editing starts with a delay of ca 1 second
I don't see why the delay applies in #7 but not in #5?
@JonB said in Problem with delegates: repeated editing of same cell in QTableWidget has delay of ca 1 second:
I don't see why the delay applies in #7 but not in #5?
Because the delay only applies if (trigger == SelectedClicked)
A possible solution for your problem is to subclass the view, reimplementing bool edit(const QModelIndex &index, EditTrigger trigger, QEvent *event) to change SelectedClicked to NoEditTrigger and call the base class.
@JonB VRonin is right, the delay only happens if a selected cell is clicked to edit again. This is the difference of #7 to #5.
@JonB the solution to use setDoubleClickInterval(int ms) works partially but not for everything in our program. E.g. with setDoubleClickInterval(0) the delay is gone, but Combo Boxes will open and close with one click (they then get a doubleclick?). So we tried the solution of @VRonin (subclass the view, reimplement edit). This works for us, but it is a little bit of software overhead to work around the problem.
@VRonin @JonB Seems there is some kind of "SingleClicked" missing in Qt to detect a single mouse click on a selected editable cell.
We don't clearly see what a single click on a selected cell has to do with double clicks? Thanks for the solutions!
@linuxbastler said in Problem with delegates: repeated editing of same cell in QTableWidget has delay of ca 1 second:
We don't clearly see what a single click on a selected cell has to do with double clicks?
Say you have tableWidget.setEditTriggers(QAbstractItemView::SelectedClicked); and you have something like
connect(&tableWidget, &QAbstractItemView::doubleClicked, []() -> void { qDebug("Double Clicked!"); });
Instead of step 7, double-click on "Aqua". When you release the mouse and no delay is set, the editing starts; then you have a further mousepress+mouserelease that puts the cursor at the mouse position. This is not the expected behaviour; the correct one would be to execute the slot. Basically the problem is that at the time of the first mouserelease there is no way of knowing if the user intends to double-click or not, so we have to wait and see.
@linuxbastler As @VRonin has explained above, every system does have to have some delay as a way of deciding whether a click is "standalone" or the start of a "double-click". And they have to do that by some delay after the first click to see if there is a second click coming. So they all have to do that; it's just a question of how long the delay is.
we do not use doubleclick in this QTableWidget because we are on a touch device. Would be nice to have some configuration / settings to configure doubleclick in this situation
If you don't use doubleclick then the "reimplement edit() in the view" solution is what you are after.
You can actually make it a template to make it so you just need to change 1 line in your existing code:
template <class T>
class NoDelayView : public T {
#ifdef Q_COMPILER_STATIC_ASSERT
    static_assert(std::is_base_of<QAbstractItemView, T>::value, "Template argument must be a QAbstractItemView");
#endif
    Q_DISABLE_COPY(NoDelayView);
public:
    explicit NoDelayView(QWidget* parent = Q_NULLPTR) : T(parent) {}
protected:
    bool edit(const QModelIndex &index, QAbstractItemView::EditTrigger trigger, QEvent *event) Q_DECL_OVERRIDE {
        return T::edit(index, trigger == QAbstractItemView::SelectedClicked ? QAbstractItemView::AllEditTriggers : trigger, event);
    }
};
@VRonin ok, we got the idea of subclassing the view and reimplementing bool edit(). This solution is working for us, thanks! The template solution seems to be elegant but we did not get it running with the "Star Delegate Example" in the first try.
@linuxbastler said in Problem with delegates: repeated editing of same cell in QTableWidget has delay of ca 1 second:
The template solution seems to be elegant but we did not get it running with the "Star Delegate Example" in the first try.
https://forum.qt.io/topic/87547/problem-with-delegates-repeated-editing-of-same-cell-in-qtablewidget-has-delay-of-ca-1-second
CC-MAIN-2018-09
refinedweb
1,053
50.87
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. def _current_total(self, cr, uid, ids, name, context=None, *args): res = {} for task in self.browse(cr, uid, ids,context): res[task.id] = {'amount': 0.0, 'completion' : 0.0, 'actual_amount' : 0.0,} task.actual_amount = task.completion * task.amount_vatin / 100 res[task.id] = {'actual_amount': task.actual_amount} return res you can refer at: Your Welcome :D About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/a-simple-example-of-fields-function-31280
CC-MAIN-2017-30
refinedweb
109
55
0 Hi. I'm trying to make a grade forecasting program for python 2.7. My problem is that I'm having difficulty trying to correlate the first "grade element" input with the first "grade element percentage" input. Here's what I have so far. ## Grade Forecast ## Chandiramani Grade Forecast import random print \ """ Welcome to Your Personalized Grade Forecast Tool. Instructions: 1 - View My Grade List 2 - Add a Grade Category 3 - Edit a Grade Category 4 - Delete a Grade Category 5 - Add a Grade 6 - Edit a Grade 7 - Delete a Grade 8 - Add an Extra Credit Grade 8 - Exit """ grade_total = 100 grade_elements = [] grade_elements_scores = [] user_grades = [] while (grade_total != 0): x = raw_input("Enter a Grade Category: ") x = x.upper() grade_elements.append(x) y = int(raw_input("Enter its Percentage: ")) grade_elements_scores.append(y) grade_total -= y print "You have",grade_total,"points left." if (grade_total < 0): grade_total += y print "Invalid input." print "The Course should total 100%." print "You will be able to add extra credit points later." **I don't know how to associate the two lists. Please help!
https://www.daniweb.com/programming/software-development/threads/331287/grade-forecaster
CC-MAIN-2017-09
refinedweb
175
57.98
Is there a one-liner that will zip/unzip files (*.zip) in PowerShell? - related: Unzip with PS in Server Core – Ruben Bartelink Aug 28 '13 at 16:20. - +1 Linked article has useful Tasks unlike the most upvoted answer – Ruben Bartelink Apr 10 '12 at 10:09 - 6-1 for suggesting a Google search. This is the top StackExchange result in a Google search for "unzip powershell" – machine yearning Aug 20 '15 at 9:59()) - 26 - 3The line $destination.Copyhere($zip_file.items())does the actual unziping. – Jonathan Allen Oct 28 '11 at 22:02 - 2You can parcel the above into a function, if you wanted: function unzip($filename) { if (!(test-path $filename)) { throw "$filename does not exist" } $shell = new-object -com shell.application $shell.namespace($pwd.path).copyhere($shell.namespace((join-path $pwd $filename)).items()) }– James Holwell Jan 29 '13 at 11:19 - 6 - 2This fails for me when the zip file contains just a folder (items is empty) – James Woolfenden Aug 19 '14 at 9:16 Now in .NET Framework 4.5, there is a ZipFile class that you can use like this: [System.Reflection.Assembly]::LoadWithPartialName('System.IO.Compression.FileSystem') [System.IO.Compression.ZipFile]::ExtractToDirectory($sourceFile, $targetFolder) - 1This would be great if there were a simple method to ExtractToDirectory and an option to overwrite all existing files. – James Dunne Apr 2 '13 at 0:43 - 1 - 3 - 2@JamesDunne - If you don't have other files you need to preserve, could use 'Remove-Item -Recurse $TargetFolder'. Otherwise, what you want can be done, but it would be non-trivial. You would need to open the zip for read, and then walk the zip, deleting any previous target object and unpacking the new one. Lucky for me, the easy solution works. ;) – Mike Sep 19 '13 at 2:44 - 3 You may wish to check out The PowerShell Community Extensions (PSCX) which has cmdlets specifically for this. - 1 - I've come across this because I actually want to automate the PSCX installation if I can for some coworkers. 
Trying it now to see what sort of issues I run into – jcolebrand Apr 27 '11 at 15:02.
https://serverfault.com/questions/18872/how-to-zip-unzip-files-in-powershell/201604
Nested OBject manipulations Project description nob: the Nested OBject manipulator JSON is a very popular format for nested data exchange, and Object Relational Mapping (ORM) is a popular method to help developers make sense of large JSON objects, by mapping objects to the data. In some cases however, the nesting can be very deep, and difficult to map with objects. This is where nob can be useful: it offers a simple set of tools to explore and edit any nested data (Python native dicts and lists). For more, checkout the home page. Usage Instantiation nob.Nob objects can be instantiated directly from a Python dictionary: t = Nob({ 'key1': 'val1', 'key2': { 'key3': 4, 'key4': {'key5': 'val2'}, 'key5': [3, 4, 5] }, 'key5': 'val3' }) To create a Nob from a JSON (or YAML) file, simply read it and feed the data to the constructor: import json with open('file.json') as fh: t2 = Nob(json.load(fh)) import yaml with open('file.yml') as fh: t3 = Nob(yaml.load(fh)) Similarly, to create a JSON (YAML) file from a tree, you can use: with open('file.json', 'w') as fh: json.dump(t2[:], fh) with open('file.yml', 'w') as fh: yaml.dump(t3[:], fh) Basic manipulation The variable t now holds a tree, i.e the reference to the actual data. However, for many practical cases it is useful to work with a subtree. nob offers a useful class NobView to this end. It handles identically for the most part as the main tree, but changes performed on a NobView affect the main Nob instance that it is linked to. In practice, any access to a key of t yields a NobView instance, e.g.: tv1 = t['/key1'] # NobView(/key1) tv2 = t['key1'] # NobView(/key1) tv3 = t.key1 # NobView(/key1) tv1 == tv2 == tv3 # True Note that a full path '/key1', as well as a simple key 'key1' are valid identifiers. Simple keys can also be called as attributes, using t.key1. 
To access the actual value that is stored in the nested object, simply use the [:] operator: tv1[:] >>> 'val1' t.key1[:] >>> 'val1' To assign a new value to this node, you can do it directly on the NobView instance: t.key1 = 'new' tv1[:] >>> 'new' t[:]['key1'] >>> 'new' Of course, because of how Python variables work, you cannot simply assign the value to tv1, as this would just overwrite it's contents: tv1 = 'new' tv1 >>> 'new' t[:]['key1'] >>> 'val1' If you find yourself with a NobView object that you would like to edit directly, you can use the .set method: tv1 = t.key1 tv1.set('new') t[:]['key1'] >>> 'new' Because nested objects can contain both dicts and lists, integers are sometimes needed as keys: t['/key2/key5/0'] >>> NobView(/key2/key5/0) t.key2.key5[0] >>> NobView(/key2/key5/0) t.key2.key5['0'] >>> NobView(/key2/key5/0) However, since Python does not support attributes starting with an integer, there is no attribute support for lists. Only key access (full path, integer index or its stringified counterpart) are supported. Smart key access In a simple nested dictionary, the access to 'key1' would be simply done with: nested_dict['key1'] If you are looking for e.g. key3, you would need to write: nested_dict['key2']['key3'] For deep nested objects however, this can be a chore, and become very difficult to read. nob helps you here by supplying a smart method for finding unique keys: t['key3'] >>> NobView(/key2/key3) t.key3 >>> NobView(/key2/key3) Note that attribute access t.key3 behaves like simple key access t['key3']. This has some implications when the key is not unique in the tree. Let's say e.g. we wish to access key5. Let's try using attribute access: t.key5 >>> KeyError: Identifier key5 yielded 3 results instead of 1 Oups! Because key5 is not unique (it appears 3 times in the tree), t.key5 is not specific, and nob wouldn't know which one to return. 
In this instance, we have several possibilities, depending on which key5 we are looking for: t.key4.key5 >>> NobView(/key2/key4/key5) t.key2['/key5'] >>> NobView(/key2/key5) t['/key5'] >>> NobView(/key5) There is a bit to unpack here: - The first key5is unique in the NobView t.key4(and key4is itself unique), so t.key4.key5finds it correctly. - The second is complex: key2is unique, but key5is still not unique to t.key2. There is not much advantage compared to a full path access t['/key2/key5']. - The last cannot be resolved using keys in its path, because there are none. The only solution is to use a full path. Other tree tools Paths: any Nob (or NobView) object can introspect itself to find all its valid paths: t.paths >>> [Path('/'), Path('/key1'), Path('/key2'), Path('/key2/key3'), Path('/key2/key4'), Path('/key2/key4/key5'), Path('/key2/key5'), Path('/key2/key5/0'), Path('/key2/key5/1'), Path('/key2/key5/2'), Path('/key5')] Find: in order to easily search in this path list, the .find method is available: t.find('key5') >>> [Path('/key2/key4/key5'), Path('/key2/key5'), Path('/key5')] The elements of these lists are not strings, but Path objects, as described below. Iterable: any tree or tree view is also iterable, yielding its children: [tv for tv in t.key2] >>> [NobView(/key2/key3), NobView(/key2/key4), NobView(/key2/key5)] Copy: to make an independant copy of a tree, use its .copy() method: t_cop = t.copy() t == t_cop >>> True t_cop.key1 = 'new_val' t == t_cop >>> False A new standalone tree can also be produced from any tree view: t_cop = t.key2.copy() t_cop == t.key2 >>> True t_cop.key3 = 5 t_cop == t.key2 >>> False Numpy specifics If you end up with numpy arrays in your tree, you are no longer JSON compatible. You can remediate this by using the np.ndarray.tolist() method, but this can lead to a very long JSON file. 
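The .paths and .find machinery above can be understood as a recursive walk over the nested data. The helper below is not from nob itself; it is a simplified, hypothetical re-implementation showing how unique-key resolution can work on plain dicts and lists:

```python
def find_paths(data, key, prefix=""):
    """Return every '/'-separated path in `data` whose last segment is `key`."""
    matches = []
    if isinstance(data, dict):
        items = data.items()
    elif isinstance(data, list):
        items = enumerate(data)
    else:
        return matches  # leaf value: nothing to descend into
    for k, v in items:
        path = "%s/%s" % (prefix, k)
        if str(k) == key:
            matches.append(path)
        matches.extend(find_paths(v, key, path))
    return matches

tree = {'key1': 'val1',
        'key2': {'key3': 4, 'key4': {'key5': 'val2'}, 'key5': [3, 4, 5]},
        'key5': 'val3'}

print(find_paths(tree, 'key3'))  # ['/key2/key3'] -> unique, safe to resolve
print(find_paths(tree, 'key5'))  # three matches, so an access like t.key5 must fail
```

With exactly one match, an accessor can resolve the short key to its full path; with several matches, raising an error (as nob does) is the only unambiguous behaviour.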
To help you with this, Nob offers the np_serialize method, which efficiently rewrites all numpy arrays as binary strings using the internal np.save function. You can even compress these using the standard zip algorithm by passing the compress=True argument. The result can be written directly to disc as a JSON or YAML file:
t.np_serialize()
# OR
t.np_serialize(compress=True)
with open('file.json', 'w') as fh:
    json.dump(t[:], fh)
# OR
with open('file.yml', 'w') as fh:
    yaml.dump(t[:], fh)
To read it back, use the opposite function np_deserialize:
with open('file.json') as fh:
    t = Nob(json.load(fh))
# OR
with open('file.yml') as fh:
    t = Nob(yaml.load(fh))
t.np_deserialize()
And that's it, your original Nob has been recreated.
Path
All paths are stored internally using the nob.Path class. Paths are full (w.r.t. their Nob or NobView), and are in essence a list of the keys constituting the nested address. They can however be viewed equivalently as a unix-type path string with / separators. Here are some examples:
p1 = Path(['key1'])
p1
>>> Path(/key1)
p2 = Path('/key1/key2')
p2
>>> Path(/key1/key2)
p1 / 'key3'
>>> Path(/key1/key3)
p2.parent
>>> Path(/key1)
p2.parent == p1
>>> True
'key2' in p2
>>> True
[k for k in p2]
>>> ['key1', 'key2']
p2[-1]
>>> 'key2'
len(p2)
>>> 2
These can be helpful to manipulate paths yourself, as any full access with a string to a Nob or NobView object also accepts a Path object. So say you are accessing the keys in list_of_keys at one position, but they also exist elsewhere in the tree. You could use e.g.:
root = Path('/path/to/root/of/keys')
[t[root / key] for key in list_of_keys]
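The Path semantics above (joining with /, parent, containment, length) map naturally onto a thin wrapper around a list of keys. The class below is a hypothetical sketch for illustration, not the real nob.Path:

```python
class SimplePath:
    """Minimal stand-in for nob.Path: a list of keys viewed as '/a/b/c'."""
    def __init__(self, keys):
        if isinstance(keys, str):
            keys = [k for k in keys.split('/') if k]
        self.keys = list(keys)

    def __truediv__(self, key):
        # p / 'child' appends one segment and returns a new path
        return SimplePath(self.keys + [str(key)])

    @property
    def parent(self):
        return SimplePath(self.keys[:-1])

    def __contains__(self, key):
        return key in self.keys

    def __len__(self):
        return len(self.keys)

    def __eq__(self, other):
        return self.keys == other.keys

    def __repr__(self):
        return "Path(/%s)" % "/".join(self.keys)

p2 = SimplePath('/key1/key2')
print(p2 / 'key3')   # Path(/key1/key2/key3)
print(p2.parent)     # Path(/key1)
print('key2' in p2)  # True
```

Treating a path as a list of segments is what makes operations like root / key in the snippet above cheap and unambiguous.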
https://pypi.org/project/nob/0.5.0/
CC-MAIN-2021-39
refinedweb
1,300
75.81
import torch model = torch.hub.load('pytorch/vision', 'deeplabv3_resnet101', pretrained=True) model.eval() All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N, 3, H, W), where N is the number of images, H and W are expected to be at least 224 pixels. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. The model returns an OrderedDict with two Tensors that are of the same height and width as the input Tensor, but with 21 classes. output['out'] contains the semantic masks, and output['aux'] contains the auxillary loss values per-pixel. In inference mode, output['aux'] is not useful. So, output['out'] is of shape (N, 21, H, W). More documentation can be found here. #)['out'][0] output_predictions = output.argmax(0) The output here is of shape (21, H, W), and at each location, there are unnormalized proababilities corresponding to the prediction of each class. To get the maximum prediction of each class, and then use it for a downstream task, you can do output_predictions = output.argmax(0). Here’s a small snippet that plots the predictions, with each color being assigned to each class (see the visualized image on the left). # of 21 classes in each color r = Image.fromarray(output_predictions.byte().cpu().numpy()).resize(input_image.size) r.putpalette(colors) import matplotlib.pyplot as plt plt.imshow(r) # plt.show() Model Description Deeplabv3-ResNet101 is contructed by a Deeplabv3 model with a ResNet-101 backbone. The pre-trained model has been trained on a subset of COCO train2017, on the 20 categories that are present in the Pascal VOC dataset. Their accuracies of the pre-trained models evaluated on COCO val2017 dataset are listed below.
https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/
CC-MAIN-2019-43
refinedweb
315
58.38
From our sponsor: Get personalized content recommendations to make your emails more engaging. Sign up for Mailchimp today. I recently redesigned my personal website to include a fun effect in the hero area, where the user’s cursor movement reveals alternative styling on the title and background, reminiscent of a spotlight. In this article we’ll walk through how the effect was created, using masks, CSS custom properties and much more. Duplicating the content We start with HTML to create two identical hero sections, with the title repeated. <div class="wrapper"> <div class="hero"> <h1 class="hero__heading">Welcome to my website</h1> </div> <div class="hero hero--secondary" aria- <p class="hero__heading">Welcome to my website</p> </div> </div> Duplicating content isn’t a great experience for someone accessing the website using a screenreader. In order to prevent screenreaders announcing it twice, we can use aria-hidden="true" on the second component. The second component is absolute-positioned with CSS to completely cover the first. Using pointer-events: none on the second component ensures that the text of the first will be selectable by users. Styling Now we can add some CSS to style the two components. I deliberately chose a bright, rich gradient for the “revealed” background, in contrast to the dark monochrome of the initial view. Somewhat counterintuitively, the component with the bright background is actually the one that will cover the other. In a moment we’ll add the mask, so that parts of it will be hidden — which is what gives the impression of it being underneath. Text effects There are a couple of different text effects at play in this component. The first applies to the bright text on the dark background. This uses -webkit-text-stroke, a non-standard CSS property that is nonetheless supported in all modern browsers. It allows us to outline our text, and works great with bold, chunky fonts like the one we’re using here. 
It requires a prefix in all browsers, and can be used as shorthand for -webkit-stroke-width and -webkit-stroke-color. In order to get the “glow” effect, we can set the text color to a transparent value and use the CSS drop-shadow filter with the same color value. (We’re using a CSS custom property for the color in this example): .heading { -webkit-text-stroke: 2px var(--primary); color: transparent; filter: drop-shadow(0 0 .35rem var(--primary)); } See the Pen Outlined text by Michelle Barker (@michellebarker) on CodePen.dark The text on the colored panel has a different effect applied. The intention was, for it to feel a little like an x-ray revealing the skeleton underneath. The text fill has a dotted pattern, which is created using a repeated radial gradient. To get this effect on the text, we in fact apply it to the background of the element, and use background-clip: text, which also requires a prefix in most browsers (at the time of writing). Again, we need to set the text color to a transparent value in order to see the result of the background-clip property: .hero--secondary .heading { background: radial-gradient(circle at center, white .11rem, transparent 0); background-size: .4rem .4rem; -webkit-background-clip: text; background-clip: text; color: transparent; } See the Pen Dotted text by Michelle Barker (@michellebarker) on CodePen.dark Creating the spotlight We have two choices when it comes to creating the spotlight effect with CSS: clip-path and mask-image. These can produce very similar effects, but with some important differences. Clipping We can think of clipping a shape with clip-path as a bit like cutting it out with scissors. This is ideal for shapes with clean lines. 
In this case, we could create a circle shape for our spotlight, using the circle() function:

.hero--secondary {
  --clip: circle(20% at 70%);
  -webkit-clip-path: var(--clip);
  clip-path: var(--clip);
}

(clip-path still needs to be prefixed in Safari, so I like to use a custom property for this.) clip-path can also take an ellipse, a polygon, an SVG path or a URL with an SVG path ID.

See the Pen Hero with clip-path by Michelle Barker (@michellebarker) on CodePen.

Masking

Unlike clip-path, the mask-image property is not limited to shapes with clean lines. We can use PNGs, SVGs or even GIFs to create a mask. We can even use gradients: the blacker parts of the image (or gradient) act as the mask, whereas the element will be hidden by the transparent parts. We can use a radial gradient to create a mask very similar to the clip-path circle:

.hero--secondary {
  --mask: radial-gradient(circle at 70%, black 25%, transparent 0);
  -webkit-mask-image: var(--mask);
  mask-image: var(--mask);
}

Another advantage is that there are additional mask properties that correspond to CSS background properties — so we can control the size and position of the mask, and whether or not it repeats, in much the same way, with mask-size, mask-position and mask-repeat respectively.

See the Pen Hero with mask by Michelle Barker (@michellebarker) on CodePen.

There's much more we could delve into with clipping and masking, but let's leave that for another day! I chose to use a mask instead of a clip-path for this project — hopefully the reason will become clear a little later on.

Tracking the cursor

Now we have our mask, it's a matter of tracking the position of the user's cursor, for which we'll need some Javascript. First we can set custom properties for the center co-ordinates of our gradient mask. We can use default values, to give the mask an initial position before the JS is executed. This will also ensure that non-mouse users see a static mask, rather than none at all.
.hero--secondary {
  --mask: radial-gradient(circle at var(--x, 70%) var(--y, 50%), black 25%, transparent 0);
}

In our JS, we can listen for the mousemove event, then update the custom properties for the x and y percentage position of the circle in accordance with the cursor position:

const hero = document.querySelector('[data-hero]')

window.addEventListener('mousemove', (e) => {
  const { clientX, clientY } = e
  const x = Math.round((clientX / window.innerWidth) * 100)
  const y = Math.round((clientY / window.innerHeight) * 100)

  hero.style.setProperty('--x', `${x}%`)
  hero.style.setProperty('--y', `${y}%`)
})

See the Pen Hero with cursor tracking by Michelle Barker (@michellebarker) on CodePen.

(For better performance, we might want to throttle or debounce that function, or use requestAnimationFrame, to prevent it repeating too frequently. If you're not sure which to use, this article has you covered.)

Adding animation

At the moment there is no easing on the movement of the spotlight — it immediately updates its position when the mouse is moved, so it feels a bit rigid. We could remedy that with a bit of animation. If we were using clip-path we could animate the path position with a transition:

.hero--secondary {
  --clip: circle(25% at 70%);
  -webkit-clip-path: var(--clip);
  clip-path: var(--clip);
  transition: clip-path 300ms 20ms;
}

Animating a mask requires a different route.

Animating with CSS Houdini

In CSS we can transition or animate custom property values using Houdini – a set of low-level APIs that give developers access to the browser's rendering engine. The upshot is we can animate properties (or, more accurately, values within properties, in this case) that aren't traditionally animatable. We first need to register the property, specifying the syntax, whether or not it inherits, and an initial value. The initial-value property is crucial, otherwise it will have no effect.
@property --x {
  syntax: '<percentage>';
  inherits: true;
  initial-value: 70%;
}

Then we can transition or animate the custom property just like any regular animatable CSS property. For our spotlight, we can transition the --x and --y values, with a slight delay, to make them feel more natural:

.hero--secondary {
  transition: --x 300ms 20ms ease-out, --y 300ms 20ms ease-out;
}

See the Pen Hero with cursor tracking (with Houdini animation) by Michelle Barker (@michellebarker) on CodePen.

Unfortunately, @property is only supported in Chromium browsers at the time of writing. If we want an improved animation in all browsers, we could instead reach for a JS library.

Animating with GSAP

I love using the Greensock (GSAP) JS animation library. It has an intuitive API, and contains plenty of easing options, all of which makes animating UI elements easy and fun! As I was already using it for other parts of the project, it was a simple decision to use it here to bring some life to the spotlight. Instead of using setProperty we can let GSAP take care of setting our custom properties, and configure the easing using the built-in options:

import gsap from 'gsap'

const hero = document.querySelector('[data-hero]')

window.addEventListener('mousemove', (e) => {
  const { clientX, clientY } = e
  const x = Math.round((clientX / window.innerWidth) * 100)
  const y = Math.round((clientY / window.innerHeight) * 100)

  gsap.to(hero, {
    '--x': `${x}%`,
    '--y': `${y}%`,
    duration: 0.3,
    ease: 'sine.out'
  })
})

See the Pen Hero with cursor tracking (GSAP) by Michelle Barker (@michellebarker) on CodePen.

Animating the mask with a timeline

The mask on my website's hero section is slightly more elaborate than a simple spotlight. We start with a single circle, then suddenly another circle "pops" out of the first, surrounding it.
To get an effect like this, we can once again turn to custom properties, and animate them on a GSAP timeline. Our radial gradient mask becomes a little more complex: We're creating a gradient of two concentric circles, but setting the initial values of the gradient stops to 0% (via the default values in our custom properties), so that their size can be animated with JS:

.hero {
  --mask: radial-gradient(
    circle at var(--x, 50%) var(--y, 50%),
    black var(--maskSize1, 0%),
    transparent 0,
    transparent var(--maskSize2, 0%),
    black var(--maskSize2, 0%),
    black var(--maskSize3, 0%),
    transparent 0
  );
}

Our mask will be invisible at this point, as the circle created with the gradient has a size of 0%. Now we can create a timeline with GSAP, so the central spot will spring to life, followed by the second circle. We're also adding a delay of one second before the timeline starts to play.

const tl = gsap.timeline({ delay: 1 })

tl
  .to(hero, {
    '--maskSize1': '20%',
    duration: 0.5,
    ease: 'back.out(2)'
  })
  .to(hero, {
    '--maskSize2': '28%',
    '--maskSize3': 'calc(28% + 0.1rem)',
    duration: 0.5,
    delay: 0.5,
    ease: 'back.out(2)'
  })

See the Pen Hero with cursor tracking (GSAP) by Michelle Barker (@michellebarker) on CodePen.

Using a timeline, our animations will execute one after the other. GSAP offers plenty of options for orchestrating the timing of animations with timelines, and I urge you to explore the documentation to get a taste of the possibilities. You won't be disappointed!

Smoothing the gradient

For some screen resolutions, a gradient with hard color stops can result in jagged edges.
To avoid this we can add some additional color stops with fractional percentage values:

.hero {
  --mask: radial-gradient(
    circle at var(--x, 50%) var(--y, 50%),
    black var(--maskSize1, 0%) 0,
    rgba(0, 0, 0, 0.1) calc(var(--maskSize1, 0%) + 0.1%),
    transparent 0,
    transparent var(--maskSize2, 0%),
    rgba(0, 0, 0, 0.1) calc(var(--maskSize2, 0%) + 0.1%),
    black var(--maskSize2, 0%),
    rgba(0, 0, 0, 0.1) calc(var(--maskSize3, 0%) - 0.1%),
    black var(--maskSize3, 0%),
    rgba(0, 0, 0, 0.1) calc(var(--maskSize3, 0%) + 0.1%),
    transparent 0
  );
}

This optional step results in a smoother-edged gradient. You can read more about this approach in this article by Mandy Michael.

A note on default values

While testing this approach, I initially used a default value of 0 for the custom properties. When creating the smoother gradient, it turned out that the browser didn't compute those zero values with calc, so the mask wouldn't be applied at all until the values were updated with JS. For this reason, I'm setting the defaults as 0% instead, which works just fine.

Creating the menu animation

There's one more finishing touch to the hero section, which is a bit of visual trickery: When the user clicks on the menu button, the spotlight expands to reveal the full-screen menu, seemingly underneath it. To create this effect, we need to give the menu an identical background to the one on our masked element.

:root {
  --gradientBg: linear-gradient(45deg, turquoise, darkorchid, deeppink, orange);
}

.hero--secondary {
  background: var(--gradientBg);
}

.menu {
  background: var(--gradientBg);
}

The menu is absolute-positioned, the same as the masked hero element, so that it completely overlays the hero section. Then we can use clip-path to clip the element to a circle 0% wide. The clip path is positioned to align with the menu button, at the top right of the viewport. We also need to add a transition, for when the menu is opened.
.menu {
  background: var(--gradientBg);
  clip-path: circle(0% at calc(100% - 2rem) 2rem);
  transition: clip-path 500ms;
}

When a user clicks the menu button, we'll use JS to apply a class of .is-open to the menu.

const menuButton = document.querySelector('[data-btn="menu"]')
const menu = document.querySelector('[data-menu]')

menuButton.addEventListener('click', () => {
  menu.classList.toggle('is-open')
})

(In a real project there's much more we would need to do to make our menu fully accessible, but that's beyond the scope of this article.)

Then we need to add a little more CSS to expand our clip-path so that it reveals the menu in its entirety:

.menu.is-open {
  clip-path: circle(200% at calc(100% - 2rem) 2rem);
}

See the Pen Hero with cursor tracking and menu by Michelle Barker (@michellebarker) on CodePen.

Text animation

In the final demo, we're also implementing a staggered animation on the heading, before animating the spotlight into view. This uses Splitting.js to split the text into <span> elements. As it assigns each character a custom property, it's great for CSS animations. The GSAP timeline, however, is a more convenient way to implement the staggered effect in this case, as it means we can let the timeline handle when to start the next animation after the text finishes animating. We'll add that to the beginning of our timeline:

// Set initial text styles (before animation)
gsap.set(".hero--primary .char", {
  opacity: 0,
  y: 25,
});

/* Timeline */
const tl = gsap.timeline({ delay: 1 });

tl
  .to(".hero--primary .char", {
    opacity: 1,
    y: 0,
    duration: 0.75,
    stagger: 0.1,
  })
  .to(hero, {
    "--maskSize1": "20%",
    duration: 0.5,
    ease: "back.out(2)",
  })
  .to(hero, {
    "--maskSize2": "28%",
    "--maskSize3": "calc(28% + 0.1rem)",
    duration: 0.5,
    delay: 0.3,
    ease: "back.out(2)",
  });

I hope this inspires you to play around with CSS masks and the fun effects that can be created!
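As a footnote to the cursor-tracking section above: the requestAnimationFrame option mentioned there can be sketched as a small wrapper that collapses a burst of mousemove events into at most one style update per animation frame. The makeFrameThrottle helper below is our own illustration (not part of the original demos), and the raf callback is injectable so the logic can be exercised outside a browser.

```javascript
// Collapse rapid calls into at most one invocation of `update` per frame.
// `raf` is the scheduling function (requestAnimationFrame in the browser).
function makeFrameThrottle(update, raf) {
  let queued = null;
  return (x, y) => {
    const firstInFrame = queued === null;
    queued = { x, y };
    if (firstInFrame) {
      raf(() => {
        update(queued.x, queued.y);
        queued = null;
      });
    }
  };
}

// Hypothetical browser usage, reusing the element and custom properties
// from the article:
// const onMove = makeFrameThrottle((x, y) => {
//   hero.style.setProperty('--x', `${x}%`);
//   hero.style.setProperty('--y', `${y}%`);
// }, requestAnimationFrame);
```

Only the latest coordinates seen during a frame are applied, which keeps the handler cheap no matter how fast the mouse moves.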
https://tympanus.net/codrops/2021/05/04/dynamic-css-masks-with-custom-properties-and-gsap/
Control.Monad.Trans.Class

Description

The class of monad transformers. A monad transformer makes a new monad out of an existing monad, such that computations of the old monad may be embedded in the new one. Given a monad transformation t :: M a -> N a, mapStateT constructs a monad transformation

mapStateT t :: StateT s M a -> StateT s N a

For these monad transformers, lift is a natural transformation in the category of monads, i.e. for any monad transformation t :: M a -> N a,

mapStateT t . lift = lift . t

Each of the monad transformers introduces relevant operations. In a sequence of monad transformers, most of these operations can be lifted through other transformers using lift or the corresponding map function.

The base monads Maybe, [] and IO are strict, but Identity is not:

>>> runIdentity (undefined >> return 2)
2

In a strict monad you know when each action is executed, but the monad is not necessarily strict in the return value, or in other components of the monad, such as a state. However you can use seq to create an action that is strict in the component you want evaluated.

Examples

Parsing

A parser can be built on a state monad whose state is the remaining input. Using the mtl package or similar, which contains methods get and put with types generalized over all suitable monads, the state operations need no explicit lifting.

Parsing and counting

To use the state operations of the item parser, we need to lift the StateT operations through the WriterT transformer. Using the monad classes of the mtl package or similar, this lifting is handled automatically by the instances of the classes, and you need only use the generalized methods get and put. We can also define a counting primitive using the Writer:

tick :: Parser ()
tick = tell (Sum 1)

Then the parser will keep track of how many ticks it executes.

Interpreter monad

This example is a cut-down version of the one in "Monad Transformers and Modular Interpreters", by Sheng Liang, Paul Hudak and Mark Jones in POPL'95. Suppose we want to define an interpreter that can do I/O and has exceptions, an environment and a modifiable store.
We can define a monad that supports all these things as a stack of monad transformers:

import Control.Monad.Trans.Class
import Control.Monad.Trans.State
import qualified Control.Monad.Trans.Reader as R
import qualified Control.Monad.Trans.Except as E
import Control.Monad.IO.Class

type InterpM = StateT Store (R.ReaderT Env (E.ExceptT Err IO))

for suitable types Store, Env and Err. Now we would like to be able to use the operations associated with each of those monad transformers on InterpM actions. Since the uppermost monad transformer of InterpM is StateT, its state operations can be used directly, while the operations of the inner monads must be lifted through it.
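The parsing examples above survive only in fragments; the following sketch is one plausible reconstruction (assuming a WriterT layer for counting over a StateT layer holding the remaining input, over the list monad, as the lifting discussion implies), not the page's verbatim code.

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, get, put, runStateT)
import Control.Monad.Trans.Writer (WriterT, tell, runWriterT)
import Data.Monoid (Sum (..))

-- WriterT is the outer transformer, so the state operations of the
-- inner StateT must be lifted through it.
type Parser = WriterT (Sum Int) (StateT [Char] [])

item :: Parser Char
item = do
  cs <- lift get
  case cs of
    []        -> lift (lift [])
    (c : cs') -> lift (put cs') >> return c

tick :: Parser ()
tick = tell (Sum 1)    -- no lift needed: tell belongs to the outer WriterT

main :: IO ()
main = print (runStateT (runWriterT (tick >> item)) "ab")
```

Running main parses one character while counting one tick, leaving "b" as the remaining input.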
https://hackage.haskell.org/package/transformers-0.5.5.0/docs/Control-Monad-Trans-Class.html
With Ionic 2 development in full force, I figured it would be a good idea to update one of my more popular blog articles. Previously I had written about using the Apache Cordova InAppBrowser to launch external URLs using Ionic Framework 1. This time I'm going to accomplish the same, but using Ionic 2 and Angular. Like with the previous tutorial we will be using the Apache Cordova InAppBrowser plugin. The only change is in our framework.

Let's walk through this by creating a new project. Using your Command Prompt (Windows) or Terminal (Mac and Linux), run the following commands:

ionic start ExampleProject blank --v2
cd ExampleProject
ionic platform add ios
ionic platform add android

A few things to note here. You cannot add and build for the iOS platform unless you are using a Mac computer. It is also very important that you are using the Ionic CLI that supports building Ionic Framework 2 projects.

With the project created, let's go ahead and add the Apache Cordova InAppBrowser plugin. From your Terminal or Command Prompt, run the following:

cordova plugin add cordova-plugin-inappbrowser

Now we're ready to start developing! To keep this simple we're only going to touch two files. Start by opening your project's app/pages/home/home.js file and changing the code to look like the following:

import {Platform, Page} from 'ionic-framework/ionic';

@Page({
  templateUrl: 'build/pages/home/home.html'
})
export class HomePage {

  static get parameters() {
    return [[Platform]];
  }

  constructor(platform) {
    this.platform = platform;
  }

  launch(url) {
    this.platform.ready().then(() => {
      cordova.InAppBrowser.open(url, "_system", "location=true");
    });
  }

}

We've made a few changes in the above code. The first change is that we are now including the Platform dependency. With it we can make sure Apache Cordova plugins are ready before trying to use them. This is demonstrated in the launch function.
In the launch function we are making use of the cordova.InAppBrowser.open function, which takes three parameters. The first parameter is the URL you wish to navigate to. The second is what browser to use. The third parameter is options for the plugin. In our example, the URL is passed in from the UI that calls the function. Let's take a look at that now.

Open your project's app/pages/home/home.html file and change it to look like the following:

<ion-navbar *navbar>
  <ion-title>
    Home
  </ion-title>
</ion-navbar>

<ion-content>
  <button (click)="launch('')">Launch URL</button>
</ion-content>

This UI view is simple. We have a button that calls the launch function of our HomePage class when clicked. If you're interested in knowing what browsers or options are available, they can all be seen in the plugin's README file found on GitHub.

Between Ionic Framework 2 and Ionic Framework 1, how you use the plugin isn't different. The difference comes in how Angular is used in Ionic 2. A video version of this article can be seen below.
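One aside on the third parameter: the plugin expects its options as a single comma-separated string of key=value pairs (such as "location=yes"). A tiny helper like the one below (our own convenience function, not part of the plugin) can build that string from a plain object:

```javascript
// Build an InAppBrowser options string, e.g. "location=no,toolbar=yes",
// from a plain object. The helper name and API are our own invention.
function buildBrowserOptions(options) {
  return Object.keys(options)
    .map((key) => `${key}=${options[key]}`)
    .join(',');
}

// Hypothetical usage inside launch():
// cordova.InAppBrowser.open(url, "_system", buildBrowserOptions({ location: 'yes' }));
```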
https://www.thepolyglotdeveloper.com/2016/01/launch-websites-with-ionic-2-using-the-inappbrowser/
Getting Motivated to make things better… I set out tonight to revamp my SSRS DBA portal and reports. Jason Strate (@StrateSQL) sent me over an index analysis query that he’s been working on and it gave me motivation to get the initial page to my portal redone. If you’re not on Twitter or connected to Jason’s blog, I’d recommend reading and following him. Part of my ‘old’ methods I used in my DBA portal on SSRS had been tied into the SQL Server instances scans I do weekly. If you haven’t read how I set that up, you can here. I still like that scan method and will probably be using it for some time to come. I’ve written many different variations to accomplish the task of scanning the network for instances and that one has always fit into my processes very well so it will be around for some time to come. For SSRS and typical DBA reporting though, the need to be dynamic in using one report to look at all of your instances is a pretty nice addition. My last method for doing that was to just query the tables that the scans would populate for me. The problem with this is I had to setup a data source to at least that DBA database and send the request to get the listing that the scan found in the previous execution. This worked very well but I wanted to go a different route and make this more portable and easier for someone to take to their own environment. Accomplishing the task… The only way to really accomplish a programmatic solution other than connecting to a data source and using SQLCLR or a T-SQL method is to write a custom assembly. Once written, you can bring the assembly into SSRS and use it the same way you would call a function from the code section in the report properties. In C# and VB.NET searching for SQL Server instances is actually very easy. It’s just a matter of using the SqlDataSourceEnumerator class and calling up the GetDataSources method. This can be directed into a DataTable and then manipulated pretty much as you need it. 
In the case of the DBA portal page I want to return the instances found on the network and populate a parameter in order to make the selection of which server to analyze at any given time. In order to get the data from the DataTable into the parameter I needed to get everything into an Array. This made it clean in populating the parameter that contains multiple values. To do this I used the Array.ConvertAll method. That also gave me a way to ensure conversion to string types, and it also gave me an easy way to handle default instances versus named instances.

Handling default and named instances requires added coding because of the way the table is returned from the GetDataSources method. GetDataSources returns a row for each instance found. Note the description for InstanceName: "Name of the server instance. Blank if the server is running as the default instance." The way this would have been more useful is if the InstanceName always showed what @@SERVERNAME does. Since named instances require the server name in the connection, there is the need to concatenate the two columns together when the InstanceName is found to have a value. To get that working we can do the following…

public static string DataRowToString(DataRow dr)
{
    if (Convert.ToString(dr[1].ToString()) != "")
    {
        return Convert.ToString(dr[0].ToString()) + "\\" + Convert.ToString(dr[1].ToString());
    }
    else
    {
        return Convert.ToString(dr[0].ToString());
    }
}

DataRowToString is called in the output parameter of Array.ConvertAll. The full code of the class library is below.
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;

namespace ClassLibrary1
{
    public class InstanceSearch
    {
        public static String[] InstanceFinder()
        {
            System.Data.Sql.SqlDataSourceEnumerator instance = System.Data.Sql.SqlDataSourceEnumerator.Instance;
            System.Data.DataTable dt = instance.GetDataSources();
            DataRow[] dr = new DataRow[dt.Rows.Count];
            dt.Rows.CopyTo(dr, 0);
            string[] strinstances = Array.ConvertAll(dr, new Converter<DataRow, String>(DataRowToString));
            return strinstances;
        }

        public static string DataRowToString(DataRow dr)
        {
            if (Convert.ToString(dr[1].ToString()) != "")
            {
                return Convert.ToString(dr[0].ToString()) + "\\" + Convert.ToString(dr[1].ToString());
            }
            else
            {
                return Convert.ToString(dr[0].ToString());
            }
        }
    }
}

To create this library yourself, all you need to do is follow the steps outlined:

- Open Visual Studio .NET and create a new project
- Select C# and then Class Library
- Rename the class that is created by default to InstanceSearch
- Copy/paste the code above into the class and save it
- Build the library once done
- Copy the dll file out of the debug or release folder where you created your project into the following directory (change the version number for VS versions higher than 8):

C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies

Now that you have your custom assembly ready, open another session of Visual Studio .NET and create a new report project. In your new report project add a new item by right clicking the project name and going through Add to New Item.

For now we can ignore the Data area of this report and any data sources that you might be used to initially setting up. Ensure you are in the layout and, in the menu strip, select Report and open Report Properties. In the report properties window, select the References tab. Click the browse button to add the assembly to the assembly listing by opening the browse dialog.
Navigate to the private assembly folder and double click the assembly we created earlier. Once the assembly is added to the development environment you are ready to call it. Add a new parameter to the report and make the following changes:

- Data Type = String
- Allow blank value
- Select Non-Queried under the "Available values" section
- For the Label and Value options, use an expression that calls ClassLibrary1.InstanceSearch.InstanceFinder()

Once this is done, you can run the report to start the search for all the instances on the network. The report will be slow on load. Don't be surprised or shocked by this; it is the same method used when you browse the network for servers from BIDS and SSMS. When the report renders you should see in the parameter the dropdown of all the servers available to query.

You can see how useful this can be to a DBA or even other groups in the IT department. The custom assembly doesn't have to be restricted to an instance search. It can look for other software installations or other pieces of Active Directory that may interest administrators. There are many options available to the dynamic nature of starting a report path off this way.

To extend this into databases and other objects, check out the blog where I show the SQLCLR method and go further into the building of the data source, so you can completely go dynamic and move from instance to instance when analyzing your servers.

I hope you have fun with it and get some use out of this method.

Comments:

As always nice work. 🙂

Thanks

nice very nice. Needs more pictures 🙂 Just kidding…nice write up….maybe one day I will use SSRS but for now I am strictly backend
http://blogs.lessthandot.com/index.php/datamgmt/dbprogramming/dynamics-server-connections-in-ssrs/
There has been a significant boom in distributed computing over the past few years. Various components communicate with each other over a network in spite of being deployed on different physical machines. So why exactly do we need these multiple physical machines? Let's have a look:

- When data outgrows the storage capacity of a single physical machine, it becomes necessary to partition it across a number of separate machines.
- File systems that manage the storage across a network of machines are called distributed filesystems. Since they are network-based, all the difficulty of network programming comes up, thus making distributed file systems more complex than regular disk file systems.
- So, one of the biggest challenges is making the file system tolerate node failures without data loss.

Hadoop comes with a distributed file system called HDFS, i.e., Hadoop Distributed File System.

Design of HDFS:

HDFS is a filesystem designed for storing very large files, with streaming data access patterns, running on clusters of commodity hardware. Now let us understand this statement in detail.

- "Very large" here means files that are hundreds of megabytes, gigabytes, or terabytes in size. There are Hadoop clusters running today that store petabytes of data.
- "Streaming data access" HDFS is built around the idea that the most efficient data processing pattern is a write-once, read-many-times pattern. A dataset is typically generated or copied from the source, then various analyses are performed on that dataset over time. Each analysis will involve a large proportion, if not all, of the dataset, so the time to read the whole dataset is more important than the latency in reading the first record.
- "Commodity hardware" Hadoop doesn't require expensive, highly reliable hardware to run on.
It's designed to run on clusters of commodity hardware (commonly available hardware available from multiple vendors) for which the chance of node failure across the cluster is high, at least for large clusters. HDFS is designed to carry on working without a noticeable interruption to the user in the face of such failure.

HDFS Concepts:

- Blocks: A disk has a block size, which is the minimum amount of data that it can read or write. Filesystems for a single disk build on this by dealing with data in blocks, which are an integral multiple of the disk block size. Filesystem blocks are typically a few kilobytes in size, while disk blocks are normally 512 bytes. This is generally transparent to the filesystem user who is simply reading or writing a file of whatever length. HDFS too has the concept of a block, but it is a much larger unit – 64 MB by default. Like in a filesystem for a single disk, files in HDFS are broken into block-sized chunks, which are stored as independent units. Unlike a filesystem for a single disk, a file in HDFS that is smaller than a single block does not occupy a full block's worth of underlying storage.

Why Is a Block in HDFS So Large? HDFS blocks are large compared to disk blocks in order to minimize the cost of seeks. By making a block large enough, the time to transfer the data from the disk can be made significantly longer than the time to seek to the start of the block, so transferring a large file made of multiple blocks operates at the disk transfer rate.

- Namenodes and Datanodes: A HDFS cluster has two types of node operating in a master-worker pattern: a namenode (the master) and a number of datanodes (workers). The namenode manages the filesystem namespace, while the datanodes store and retrieve blocks when they are told to by clients or the namenode. A client accesses the filesystem on behalf of the user by communicating with the namenode and datanodes. The client presents a POSIX-like filesystem interface, so the user code does not need to know about the namenode and datanode to function. Datanodes are the workhorses, but without the namenode the filesystem cannot be used, so it is important to make the namenode resilient to failure. The first way is to back up the files that make up the persistent state of the filesystem metadata. Hadoop can be configured so that the namenode writes its persistent state to multiple filesystems. These writes are synchronous and atomic. The usual configuration choice is to write to local disk as well as a remote NFS mount.

Checkpoint Node and Backup Node: The Checkpoint node periodically creates checkpoints of the namespace.
It stores the latest checkpoint in a directory that has the same structure as the namenode's directory. This permits the checkpointed image to always be available for reading by the namenode if necessary.

A Backup node provides the same checkpointing functionality as the Checkpoint node. In Hadoop, the Backup node keeps an in-memory, up-to-date copy of the file system namespace. It is always synchronized with the active NameNode state. The Backup node in the HDFS architecture does not need to download FsImage and edits files from the active NameNode to create a checkpoint, because it already has an up-to-date state of the namespace in memory. The Backup node checkpoint process is more efficient, as it only needs to save the namespace into the local FsImage file and reset edits. A NameNode supports one Backup node at a time.

References: Hadoop The Definitive Guide
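To make the block arithmetic from the Blocks section concrete, here is a short illustrative sketch (ours, not from the article). A file is split into fixed-size 64 MB blocks, and only the final, partial block occupies less than a full block of storage:

```python
import math

BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB, the default block size cited above

def block_count(file_size):
    """Number of HDFS blocks a file of file_size bytes is split into."""
    if file_size == 0:
        return 0
    return math.ceil(file_size / BLOCK_SIZE)

def last_block_bytes(file_size):
    """Bytes stored in the final block (it is not padded to a full 64 MB)."""
    if file_size == 0:
        return 0
    remainder = file_size % BLOCK_SIZE
    return remainder if remainder else BLOCK_SIZE

# A 200 MB file spans 4 blocks; the last block holds only 8 MB.
print(block_count(200 * 1024 * 1024))       # 4
print(last_block_bytes(200 * 1024 * 1024))  # 8388608
```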
https://blog.knoldus.com/hdfs-a-conceptual-view/
You can configure log4net anywhere - in the main service EXE, in the DLL which really does the job... The only condition is to configure the logging system before its first use. You can create an Init() method somewhere in Logutil.dll to configure the logging system and call it during application startup. If there is a problem configuring logging using the app.config file, try the second way, using a separate config file, to check if it works - if not, you probably have an error in code related to the config file location.

R

________________________________

From: Jeegnesh Sheth [mailto:jsheth@src-solutions.com]
Sent: Monday, October 06, 2008 2:23 PM
To: Log4NET User
Subject: RE: Log4net in a windows service

Radovan,

LogUtil.dll - This has some custom logging as well as a log4net dll reference, and uses a custom appender to log. This is where I perform the configuration.

Myservice.exe has the following dll's that it uses:

- Dosomething.dll → uses Logutil.dll and is included as a reference.
- Myservice.exe.app.config

From your information, I assume I should do this in LogUtil.dll, and in the default ctor of Logutil I should obtain the path to the myservice.exe.app.config file and set it.

log = LogManager.GetLogger(typeof(LogUtil)); in the ctor

Myservice.exe does not need to do any logging; only the DoSomething library needs to perform the logging. The error I get is a generic .NET framework error showing all the assemblies which caused the error, which may be due to versioning, but the logs do not indicate this. So I did a clean install and that error was no longer the case, but then it came down to just a system violation error each time I try to perform XmlConfigurator.Configure() in the default ctor of the Myservice.

Thanks

From: Radovan Raszka [mailto:raszka@hasam.cz]
Sent: Saturday, October 04, 2008 4:32 PM
To: Log4NET User
Subject: RE: Log4net in a windows service

I configure log4net in the main service class constructor. What kind of crash have you met - any exception (what?)?
public class IPservice : ServiceBase { ... public IPservice() { InitializeComponent(); this.CanPauseAndContinue = false; this.CanShutdown = true; this.CanHandleSessionChangeEvent = false; this.AutoLog = false; this.ServiceName = IPservice.SrvName; XmlConfigurator.Configure(); log = LogManager.GetLogger(typeof(IPservice)); } protected override void OnStart(string[] args){ ...} protected override void OnStop() {...} } ________________________________ From: Jeegnesh Sheth [mailto:jsheth@src-solutions.com] Sent: Friday, October 03, 2008 10:36 PM To: Log4NET User Subject: RE: Log4net in a windows service Radovan, Myservice.exe has the following: Default cstor A main entry point into process, named main Onstart and onstop methods If I place the XMlConfiguartor.configure() in any of the methods above, my applications keeps crashing. Is there something I am missing? I am trying to use your first solution. Many thanks From: Radovan Raszka [mailto:raszka@hasam.cz] Sent: Friday, October 03, 2008 3:58 PM To: Log4NET User Subject: RE: Log4net in a windows service Ok, there is also dependency on where log4net is set up. Examples in my last mail works, if log4net is configured from Myservice.exe (not from DLL), and config is stored in Myservice.exe.config (you add app.config to the project, but Visual studio copies this file to the output folder as <projectname>.exe.config, what is correct) If configuration is done from DLL, then logutil.DLL.config probably can not be used (at least it didn't work for me and I was told that application file is always searched as <processname>.exe.config), so save config into Myservice.exe.config or use second example. 
Radovan

________________________________
From: Jeegnesh Sheth [mailto:jsheth@src-solutions.com]
Sent: Friday, October 03, 2008 9:22 PM
To: Log4NET User
Subject: RE: Log4net in a windows service

Radovan,

This is how my application is set up:

LogUtil.dll - This has some custom logging as well as a log4net dll reference and uses a custom appender to log.

Myservice.exe has the following dll's that it uses:

- Logutil.dll
- Dosomething.dll
- Myservice.exe.app.config

Dosomething.dll instantiates logutil.dll to write the logs. I tried putting your XmlConfigurator.Configure in logutil.dll and that did not work. I then tried placing it in dosomething.dll, which did not work. Placing it in myservice.exe did not produce anything.

Thoughts/suggestions?

________________________________
From: Radovan Raszka [mailto:raszka@hasam.cz]
Sent: Friday, October 03, 2008 10:41 AM
To: Log4NET User
Subject: RE: Log4net in a windows service

There are 2 options:

1/ XmlConfigurator.Configure();

This configures log4net using the app.config file, which must be in this form:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <appSettings>
    ....
  </appSettings>
  <log4net>
    ....
  </log4net>
</configuration>

This is probably the best solution, as you have only one config file for both the application and log4net.

2/ Configure log4net using an extra XML file:

XmlConfigurator.Configure(new System.IO.FileInfo(GetAppPath() + "log4net.xml"))
.....
public string GetAppPath()
{
    string myPath = System.Reflection.Assembly.GetExecutingAssembly().Location;
    int i = myPath.LastIndexOf('\\');
    return myPath.Remove(i + 1);
}

Both solutions work well with a service, but when the log4net config is changed, you must restart your service.
http://mail-archives.apache.org/mod_mbox/logging-log4net-user/200810.mbox/%3C0794C17B67A72245B24284103E4959D2385D@main.hasam.cz%3E
README.md

# Telepathy for Delta Chat

## Who

Authored by Nick Thomas under the MIT License.

## What

Delta Chat is IM over email. Telepathy is a framework for abstracting over multiple IM protocols. This project glues the two together, allowing Telepathy clients to send/receive Delta messages.

Telepathy CMs should have a name that is not the same as their protocol; so this CM is hereby named "padfoot".

My first attempt was purple-plugin-delta. This has some licensing issues (linking libpurple with OpenSSL) that will be resolved with OpenSSL v3.0.0. At least until then, I've lost interest in it; my efforts are going into this version instead.

## When

When it's ready.

## Where

Here's where we're at right now:

- Connect to DBUS
- Advertise enough properties / interfaces to become visible in Empathy
- Connect to deltachat-core-rust
- Set up an account via autoconfiguration
- Appear as online in Empathy
- Disconnect!
- Set up an account manually
- [~] Contacts handling
- Text messages
- Multimedia messages
- [~] Setup messages
- Import/Export
- Group chats
- Geolocation messages

## Why

Mobile IM, mostly. Desktop IM, also. It's ideal for my pinephone, and lighter than the electron desktop client.

At this point, I don't know Rust, I don't know DBUS, I don't know Telepathy, and I don't know Deltachat particularly well either. So this also functions as a learning exercise!

## How

### Compiling

This project is written in Rust, so you'll need a rust compiler to build it. Rustup comes highly recommended. There is a rust-toolchain file that I try to keep synced with the version of rust that deltachat-core-rust uses.

Once you have a working rust compiler, just:

    $ cargo build --release

to get a `telepathy-padfoot` binary. Drop the release flag to make it build fast.

### Cross-compiling amd64 -> i386

If you need a 32-bit binary and you're on an amd64 system, this seems to work, as long as you have 32-bit versions of libdbus-1 and libssl installed.
On Debian, the full sequence looks like:

    $ dpkg --print-architecture
    amd64
    # dpkg --add-architecture i386
    $ dpkg --print-foreign-architectures
    i386
    # apt update
    # apt install libdbus-1-dev:i386 libssl-dev:i386
    $ rustup target install i686-unknown-linux-gnu
    $ PKG_CONFIG_ALLOW_CROSS=1 cargo build --target=i686-unknown-linux-gnu --release

This creates a 32-bit executable at target/i686-unknown-linux-gnu/release/telepathy-padfoot. I don't have a 32-bit machine to test this on, but happy to take fixes for it.

### Cross-compiling amd64 -> aarch64

This is a handy thing to do for linux phones, most of which use telepathy. Rust is quite heavy to compile - it's a pain even on a pinebook pro, which is the same architecture.

Setup on a Debian machine is quite simple:

    $ dpkg --print-architecture
    amd64
    # dpkg --add-architecture arm64
    $ dpkg --print-foreign-architectures
    arm64
    # apt update
    # apt install libdbus-1-dev:arm64 libssl-dev:arm64 gcc-aarch64-linux-gnu
    $ rustup target install aarch64-unknown-linux-gnu
    $ RUSTFLAGS="-C linker=aarch64-linux-gnu-gcc" PKG_CONFIG_ALLOW_CROSS=1 cargo build --target=aarch64-unknown-linux-gnu --release

We have to specify the linker because of this bug. Note that this doesn't create a static binary, so you'll need to match versions for the shared libraries that are on the phone. In theory we can create static binaries with musl, but openssl makes it hard. If you get it working, tell me how!

UBTouch uses an ancient version of OpenSSL: 1.0.2g. KDE Neon does much better with 1.1.1, so is easier to compile against.

An alternative approach to using multiarch (as above) is to use debootstrap (or a similar tool) to get a sysroot containing libraries of all the right versions. E.g. You can then add -C link-args=--sysroot=/path/to/sysroot to RUSTFLAGS to use those libraries. Ufff. I've not got this working yet either.

...I'm compiling it directly on the phone. Not ideal. Add swap.
Compiling directly on the phone, using KDE Neon, I can get Padfoot running at the same time as Spacebar, which is a Telepathy client. I can see that Padfoot is checked for protocols, but I don't see a way to start a connection with it yet. Next step for this is to get Spacebar built and running locally, for a better debugging experience.

postmarketOS is more difficult. It's an aarch64...musl target. Rustup doesn't support this, and the rustc included in the repositories is stable, not nightly, so compiling directly on the phone is very difficult. Cross-compile is likely the way to go here, in the end, but I need to get one of the two tries above working first. Spacebar is available, but Empathy is not. Phosh uses Chatty, which is based on libpurple, so doesn't work with Padfoot.

In the end, I tried Mobian. This is regular ordinary Debian Bullseye, plus a few Phosh packages. Installing Empathy and Padfoot together (Chatty is bundled but doesn't work), I have a working setup \o/ - although there are many warts, I can use Deltachat on Linux Mobile in at least one configuration. I'll probably keep Mobian for a while though, it's exactly what I want in a mobile phone. Yes, I am peculiar.

### Installing

There is a share/ directory in this project that contains a bunch of files. They should be placed into /usr/share, following the same layout. Then put the binary into /usr/lib/telepathy/telepathy-padfoot. I should probably put this into the makefile.

### Running

D-Bus activation is not enabled yet, since it gets in the way of disaster-driven development. Just run the telepathy-padfoot binary as the same user that your chat client will be running as. It registers to the DBUS session bus, and will be picked up next time your chat client scans (which may need a restart).
### Setup messages

If you send an autocrypt setup message while a padfoot connection is up, it will notice it and open a channel asking you to reply with a message like:

    IMEX: <id> nnnn-nnnn-nnnn-nnnn-nnnn-nnnn-nnnn-nnnn-nnnn

The ID is the delta-generated message ID, while the rest is the setup code. No whitespace! This bit is still extremely janky; it should instead be a magic channel type or action button of some kind. It is, however, functional.

Delta wants us to enable the "Send copy to self" option in settings. That's exposed as "Bcc self" in the advanced options in Empathy. Once enabled, messages you send via Padfoot will appear in other clients. However, messages you send from other clients don't appear in Padfoot yet, because it mostly ignores self messages as a dirty hack to avoid double-showing messages. Progress though.

### Autogenerated telepathy DBUS bindings

It makes use of the dbus-codegen-rust crate to convert the telepathy interface specs into the executable code in src/telepathy. This is checked in, but can be regenerated like so:

    $ git submodule init telepathy-spec
    $ git submodule update telepathy-spec
    $ cargo install dbus-codegen
    $ ./scripts/dbus-codegen

dbus-codegen-rust doesn't seem to handle namespaced attributes properly, so we modify the XML files in telepathy-spec... with sed. The tp:type attribute is renamed to tp:typehint. This will be fixed in the next release.
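As a language-agnostic illustration of how rigid that reply format is (the project itself is Rust; the regex and function name below are mine, not part of padfoot, and the numeric message ID is an assumption):

```python
import re

# "IMEX: <id> nnnn-nnnn-..." : a delta message ID followed by the
# nine 4-digit groups of the setup code, with no extra whitespace.
IMEX_RE = re.compile(r"^IMEX:\s+(\d+)\s+(\d{4}(?:-\d{4}){8})$")

def parse_imex_reply(text):
    """Return (message_id, setup_code), or None if the reply is malformed."""
    m = IMEX_RE.match(text.strip())
    if m is None:
        return None
    return int(m.group(1)), m.group(2)
```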
https://code.ur.gs/lupine/telepathy-padfoot
Glue Logic: A Sticky Business

In this context, glue logic is simply a piece of Python script that allows you to call your legacy C++ programs, with no need to change them. Suppose we have a bunch of such C++ programs that must be called from Python, with the major requirement that we cannot make any changes to the C++ code. In other words, C++ code exists and is in use by real users. We can call it—but not change it! With this requirement in mind, let's look at an example of the glue logic approach.

Calling C++ Programs from Python

Let's say we have a really simple C++ program (called CPlusPlus0xProject) that we want to call from Python. Listing 1 illustrates an excerpt from the C++ program.

Listing 1: A simple C++ program.

int main()
{
    cout << endl << "Now calling doSomeMemoryWork()" << endl;
    doSomeMemoryWork();
    cout << "Now calling getPersonDetails()" << endl;
    getPersonDetails();
    return 0;
}

How do we run this C++ program from Python? This goal is achieved with the Python script in Listing 2.

Listing 2: Python invocation of a C++ program.

import os, errno
import subprocess as sp

def invokeProgram():
    try:
        print('Just before Python program invocation')
        p = sp.Popen(['/home/stephen/CPlusPlus0xProject'], stdout = sp.PIPE)
        result = p.communicate()[0]
        print('CPlusPlus0xProject invoked' + result + '\n')
    except OSError as e:
        print('Exception handler error occurred: ' + str(e.errno) + '\n')

if __name__ == '__main__':
    invokeProgram()

In Listing 2, our simple C++ program is invoked from Python by using the subprocess module. The script imports a few standard modules, defines a function, and then has a main section. The C++ program can be anything you like; Listing 2 is essentially the same as running the C++ program directly from the console. Listing 3 shows the combined Python and C++ program output.
Listing 3: The Python and C++ program output.

Just before Python program invocation
CPlusPlus0xProject invoked
Now calling doSomeMemoryWork()
Function name: doSomeMemoryWork()
Before memory assignment
a = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
b = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
c = cccccccccccccccccccccccccccccc
After memory assignment
a = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
b = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
c = cccccccccccccccccccccccccccccc
Now calling getPersonDetails()
Person just created is: John Doe
Person name = John Doe
Now inside the Person class destructor

Why go to the trouble of using Python like this? Isn't it just as easy to run the C++ program from the console? Well, using Python like this gives you a great deal more power than is the case with the console. For one thing, you can take advantage of Python's exception-management facilities—this is the try-except block in Listing 2. If anything goes wrong inside the Python exception block, you'll see a message printed on the console. For example, let's change this line in Listing 2:

p = sp.Popen(['/home/stephen/CPlusPlus0xProject'], stdout = sp.PIPE)

to this:

p = sp.Popen(['./home/stephen/CPlusPlus0xProject'], stdout = sp.PIPE)

Running this erroneous Python code results in the following program output:

Just before Python program invocation
Exception handler error occurred: 2

Notice the exception that has been caught and reported. Now, you might be thinking, "This fellow is very pessimistic in outlook, worrying about exceptions from the word 'go.'" This is a good point, but exceptions are generally an unavoidable fact of life. Exception handling is perhaps not the most glamorous of areas to code, but working hard on your exception logic makes for a more solid end product. This in turn provides for an improved end-user experience. Clearly, the code being executed in these examples is very simple.
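As an aside, in more recent Python 3 code the same invoke-and-catch pattern is often written with subprocess.run. This is a sketch in the spirit of Listing 2, not the author's original code; the program path passed in is whatever legacy executable you want to wrap:

```python
import subprocess

def invoke_program(path):
    """Run an external program and return its stdout; raise on any failure."""
    try:
        # check=True turns a non-zero exit status into CalledProcessError,
        # so a failing legacy program cannot be silently ignored.
        result = subprocess.run([path], capture_output=True, text=True, check=True)
        return result.stdout
    except OSError as e:
        print('Could not start %s: %s' % (path, e))
        raise
    except subprocess.CalledProcessError as e:
        print('%s exited with status %d' % (path, e.returncode))
        raise
```

The caller still gets the output on success, but every failure mode (missing binary, non-zero exit) surfaces as an exception rather than vanishing.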
But as you scale up the size of both the Python code and the C++ programs, you'll begin to see the power of the glue logic approach. A large number of C++ programs can be invoked from a fairly simple Python script. Now let's take a closer look at Python exceptions.

Python Exceptions

Python has a powerful exception-management facility and is substantially documented online. Listing 4 shows a typical exception example.

Listing 4: Python I/O exception management.

try:
    fh = open('/home/stephen/myFile.txt', "w")
    fh.write("A test file.")
except IOError as e:
    print('Exception handler error occurred: ' + str(e.errno) + '\n')
else:
    print('Successfully updated file.')

The script in Listing 4 opens a file in write mode and then attempts to write to the file. If no errors occur, the output should be that associated with the else clause:

Successfully updated file.

If an IOError exception occurs, the except clause is executed. The Python exception code we've seen so far is very similar to the equivalent Java (or even C++) mechanism. The major merit of such exception handling is that errors can be handled close to where they occur. If the code in question can't handle the exception, at the very least it should document the exception in a logfile and then rethrow it. The latter scenario then gives higher-level code (that is, the calling code) a chance to handle the exception. Think of this as the rule: Exceptions are inconvenient, but they should never be ignored or swallowed by code without a really good reason. Given that Python has strong exception-support facilities, is it a full development language?
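The log-and-rethrow rule described above translates into a small idiom: catch, record, re-raise. Here is a sketch; the logger configuration and file name are illustrative, not from the article:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def read_config(path):
    """Read a config file; log failures here, but let callers decide what to do."""
    try:
        with open(path) as fh:
            return fh.read()
    except IOError as e:
        # Document the failure close to where it happened...
        log.error('Could not read %s: errno=%s', path, e.errno)
        # ...then rethrow so the calling code still sees it.
        raise
```

The bare `raise` re-raises the original exception with its traceback intact, so higher-level code gets both the log entry and the chance to recover.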
https://www.informit.com/articles/article.aspx?p=2175997&seqNum=2
Hi,

I have no idea. Does Nessus have any "verbose" mode to get more helpful error messages? Including the scap-security-guide list in this conversation because there might be people familiar with using SSG with Nessus.

Regards

On Mon, Apr 29, 2019 at 4:54 PM Riaz Ebrahim <mriazebrah...@gmail.com> wrote:
>
> Hi Jan Cerny,
>
> Thanks a lot for your response. Your answer was very useful to understand
> about SSG files. As per your advice I tried with
> scap-security-guide-0.1.43-oval-510.zip and the XML validation error was gone,
> but I am encountering a new error as below from Nessus:
>
> "ssg-rhel6-ds-1.zip : Default namespace not found in OVAL"
>
> Do you get any clue by seeing this error? Thanks in advance :)
>
> Thanks,
> Riaz
>
> On Mon, Apr 29, 2019 at 2:44 PM Jan Cerny <jce...@redhat.com> wrote:
>>
>> Hi,
>>
>> I will try to answer, but I don't use Nessus, so I'm not sure what is
>> the exact reason of this fail.
>>
>> In general, the SSG files are validated against SCAP XML schemas, so
>> they are valid SCAP content. However, the SCAP standard consists of
>> multiple separate specifications. Strictly speaking, the SSG datastream
>> doesn't conform to the SCAP 1.2 specification, because the datastream
>> contains OVAL checks conforming to OVAL version 5.11, which is a part
>> of SCAP 1.3. For SCAP 1.2 conformance it would need to use OVAL checks
>> in version 5.10 or older.
>>
>> According to this forum thread, it seems that Nessus doesn't support
>> OVAL 5.11 yet, but they say it's planned to be updated
>>
>> It could be a problem that Nessus expects datastreams that contain
>> OVAL 5.10 only. Try using the SSG datastreams that contain OVAL 5.10
>> only. They can be downloaded from
>>
>> I hope Nessus should be able to consume these files.
>>
>> The reason why we use 5.11 is that it contains new checks that allow
>> us to easily check system services using systemd
>> and other new things introduced in RHEL 7.
>> The aforementioned datastreams that contain OVAL 5.10 only
>> have limited abilities in comparison with those containing OVAL 5.11.
>>
>> Best Regards
>>
>> Jan Černý
>> Security Technologies | Red Hat, Inc.
>>
>> On Sat, Apr 27, 2019 at 6:34 AM Riaz Ebrahim <mriazebrah...@gmail.com> wrote:
>>>
>>> I need help on the openscap SSG project.
>>>
>>> I am currently exploring the SCAP Auditing feature from the Nessus console. I
>>> understood that Nessus supports SCAP Content (1.0 or 1.1 or 1.2) which can
>>> be downloaded from the NIST repository ()
>>> based on the target host version. This works great. However, when I use
>>> SCAP from OpenSCAP SSG (example "ssg-rhel6-ds.xml"), I am getting an error:
>>> "sg-rhel6-ds.zip : sg-rhel6-ds.xml failed XML Schema validation".
>>>
>>> I would like to know what the difference is between the openSSG scap data stream &
>>> the scap 1.2 content downloaded from the NIST repository, and how I can convert an openssg
>>> data stream (example - ssg-rhel6-ds.xml) to NIST scap 1.2 format.
>>>
>>> My objective - To use openscap SSG from Nessus. Nessus scap scanning
>>> expects SCAP 1.0, 1.1 or 1.2 content (in zip format).
>>>
>>> Thanks in advance!

--
Jan Černý
Security Technologies | Red Hat, Inc.

_______________________________________________
Open-scap-list mailing list
Open-scap-list@redhat.com
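For readers hitting the same OVAL-version mismatch: one quick way to see which OVAL schema versions a datastream actually declares is to scan its generator elements. This is a rough illustrative sketch (not part of OpenSCAP or Nessus; the function name is mine):

```python
import xml.etree.ElementTree as ET

def oval_schema_versions(path):
    """Collect the schema_version values declared in an SCAP/OVAL XML file."""
    versions = set()
    # iterparse streams the file, so this works on large datastreams too.
    for _, elem in ET.iterparse(path):
        # Match the element regardless of which OVAL namespace prefix is used.
        if elem.tag.endswith('}schema_version') or elem.tag == 'schema_version':
            versions.add(elem.text)
    return versions
```

If the returned set contains 5.11, the file is the SCAP 1.3-flavoured content; a 5.10-only set should be consumable by tools that stop at SCAP 1.2.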
https://www.mail-archive.com/open-scap-list@redhat.com/msg00868.html
I am developing an ASP.NET MVC 5 application with EF6. I have about 5000 entities and I'm querying them through LINQ to collect all students with their total tuition.

Models:

public class Student
{
    public string ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public virtual IList<Payment> Payments { get; set; }
}

public class Payment
{
    public string ID { get; set; }
    public double Tuition { get; set; }
    public int Month { get; set; } // Tuition added monthly
    public Student Student { get; set; }
}

Sample data:

student 1: {payments{1000}, payments{1500}, payments{3000}}
student 2: {payments{400}, payments{1700}, payments{4000}}
...
student n: {payments{5000}, payments{6500}, payments{7000}}

Desired result:

var result = {{student1, 5500}, {student2, 6100}, ...}

Answer:

You can use this LINQ query and let EF construct the query, so the SQL server does the work:

context.Set<Student>().Select(o => new { Student = o, TotalTuition = o.Payments.Sum(p => p.Tuition) });
https://codedump.io/share/j5iBsxRlPP2S/1/how-to-sum-tuition-of-each-student-what-query
I would like a command to save a file as "{ISO Date}.md" (i.e., today would be "2012-06-23.md") in a specific folder (always the same) without being prompted about anything. I can sort of see how one would go about this, but it's beyond my ken. Any ideas? Any help would be very appreciated.

EDIT: removed old code excerpt

That should give you a start, though you are going to want to handle cases of the file path already existing etc.

Actually, you don't even need that redundant writing of the buffer before saving, or even dumping to the file.

[code]import sublime_plugin
import datetime
import os


class IsoNotes(sublime_plugin.TextCommand):
    notes_path = r"C:\Users\nick\Desktop\notes"
    date_format = r"%Y-%m-%d"

    def run(self, edit):
        time = datetime.datetime.now().strftime(self.date_format)

        def suffixes():
            yield ''
            for i in xrange(2, 2**30):
                yield str(i)

        for suffix in suffixes():
            note_path = os.path.join(self.notes_path, time)
            note_path += '-' + suffix + '.md'
            if not os.path.exists(note_path):
                break

        self.view.retarget(note_path)
        self.view.run_command('save')[/code]

@castles Where did you get retarget() from? There seems to be quite a number of API methods that aren't in the docs.

Andy.

Trust grep and introspection, not the docs. I saw it in the file renaming plugin in the Default package iirc.

Awesome! Thank you @castles

I've been tinkering with this plugin as you've been posting updates. From my rather quick testing I prefer the 2nd version because it was somewhat more robust (after I added a check not to overwrite an existing file). The 3rd version has more potential but is more finicky. But I will likely be able to spin out a command to save backups (e.g., title-1, title-2, etc.). Alas, I have yet to sit down with a proper versioning system. I was also wondering about retarget()... Thanks again!

Alex

Appending to the days file perhaps?

I'm not sure I understand your question. I'm thinking of a different situation.
I work a lot with text files (plain text or markdown, not code). I typically call them "some-file-6.md". I change the file name by an ever increasing number just before I make an edit where I remove/restructure/lose information. It would be handy to have a command to do just that, but I think I will probably work out how to do it on my own.

If you have another few minutes, is there a simple fix for the first file being saved as "{date}.md" rather than "{date}-.md" (that is, without the dash)? I've had a quick look, but couldn't work that out.

I just re-read your question. Do you mean opening a new file, which, when saved, is then appended to the previous file? This might be handy if doable (and if it was what you meant) for editing the Monthly files. (Yes, I have monthly files, too. My system is a shambles.)

Edit: Nevermind on the fix for the dash. I worked it out.
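The "ever increasing number" rename described above is easy to script with the same os.path machinery the plugin already uses. A sketch of just the helper, not a full Sublime command (the function name and regex are mine):

```python
import os
import re

def next_version(path):
    """some-file-6.md -> some-file-7.md; notes.md -> notes-1.md.

    Skips over any candidate name that already exists on disk."""
    root, ext = os.path.splitext(path)
    # Split a trailing "-<number>" off the name, if there is one.
    m = re.match(r'^(.*?)-(\d+)$', root)
    base, n = (m.group(1), int(m.group(2))) if m else (root, 0)
    candidate = '%s-%d%s' % (base, n + 1, ext)
    while os.path.exists(candidate):
        n += 1
        candidate = '%s-%d%s' % (base, n + 1, ext)
    return candidate
```

Inside a TextCommand you could then call self.view.retarget(next_version(self.view.file_name())) followed by the save command, same as the IsoNotes example above.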
https://forum.sublimetext.com/t/save-file-as-iso-date/6031/5
Hierarchy Doesn't Scale

Sun, November 11, 2002

I was just chatting with a friend of mine and he said that he really wanted to write a namespace extension so that he could expose a hierarchy of data in the shell. Back when namespace extensions were introduced with Win95, I thought that everything could be integrated into the shell, making the shell the last application. Sometime in the last seven years, I've come to hate that idea.

As a hardcore computer geek, I've embraced the hierarchy organizational styles in three major applications:

- Email folders (I keep things filed in a multi-level hierarchy and use my Inbox as a to do list)
- The file system (I keep things filed in a multi-level hierarchy and use my Desktop as a to do list)
- Free-form outline program (I keep things filed in a multi-level hierarchy and use a file called todo.txt as a to do list)

As anal as I am about arranging and categorizing things into their various hierarchies (and as many places as I've spread my to do list to, apparently), the hierarchy only helps me about 50% of the time. I spend just as much time searching for things as I do going right to where it "should" be. The hierarchy used to be lots more helpful, but as the data I keep around grows over the years, it becomes less and less possible to remember where something really "belongs" and to find it there.

In fact, I've come to believe that a hierarchy is a terrible way to keep data organized at all. A hierarchy is really just a way to associate key words (called "folder" names) with hunks of data (called "files") and then only showing them in a very limited way. Searching is a possibility, but it either takes a long time (because file indexing is turned off) or it misses files (because file indexing is turned on — what's with that, anyway?). Searching creates an ad hoc logical folder, but there's no way in the shell to create a permanent logical folder with custom content.
The basic hierarchy structure is easy to understand, but things become much more powerful if I can keep one hunk of data in multiple locations. Some versions of the Windows file system support this, but the shell doesn't (although it can be simulated with shortcuts). Also, the same kind of "pivot table" capability that Excel provides, mixed with the much faster, more flexible searching capability of a database, is much closer to what I'm after. Hopefully Longhorn will provide something like this.

Also, being able to search all three of my hierarchical data sources at the same time would be pretty damn useful, but one thing at a time…
https://sellsbrothers.com/12570/
Implement Navigation

In this lesson, you use navigation controllers and segues to create the navigation flow of the FoodTracker app. At the end of the lesson, you'll have a complete navigation scheme and interaction flow for the app. When you're finished, your app will look something like this:

Learning Objectives

At the end of the lesson, you'll be able to:

- Embed an existing view controller within a navigation controller in a storyboard
- Create segues between view controllers
- Edit the attributes of a segue in a storyboard using the Attributes inspector
- Pass data between view controllers using the prepare(for:sender:) method
- Perform an unwind segue
- Use stack views to create robust, flexible layouts

Add a Segue to Navigate Forward

With data displaying as expected, it's time to provide a way to navigate from the initial meal list scene to the meal detail scene. Transitions between scenes are called segues.

Before creating a segue, you need to configure your scenes. First, you'll put your table view controller inside of a navigation controller. A navigation controller manages transitions backward and forward through a series of view controllers. The set of view controllers managed by a particular navigation controller is called its navigation stack. The first item added to the stack becomes the root view controller and is never popped off (removed from) the navigation stack.

To add a navigation controller to your meal list scene:

- Open your storyboard, Main.storyboard.
- Select the meal list scene by clicking on its scene dock.
- Choose Editor > Embed In > Navigation Controller.

Xcode adds a new navigation controller to your storyboard, sets the storyboard entry point to it, and assigns the meal list scene as its root view controller. On the canvas, the icon connecting the controllers is the root view controller relationship. The table view controller is the navigation controller's root view controller.
The storyboard entry point is set to the navigation controller because the navigation controller is now a container for the table view controller. You might notice that your table view has a bar on top of it now. This is a navigation bar. Every controller on the navigation stack gets a navigation bar, which can contain controls for backward and forward navigation. Next, you'll add a button to this navigation bar to transition to the meal detail scene.

Checkpoint: Run your app. Above your table view you should now see extra space. This is the navigation bar provided by the navigation controller. The navigation bar extends its background to the top of the status bar, so the status bar doesn't overlap with your content anymore.

Configure the Navigation Bar for the Scenes

Now, you'll add the meal list title and a button (to add additional meals) to the navigation bar. Navigation bars get their title from the view controller at the top of the navigation stack—they don't have a title themselves. Each view controller has a navigationItem property. This property defines the navigation bar's appearance for that view controller. In Interface Builder, you can configure a view controller's navigation item by editing the navigation bar in the view controller's scene.

To configure the navigation bar in the meal list:

- Double-click the navigation bar in the meal list scene. A cursor appears in a text field, letting you enter text.
- Type Your Meals and press Return. This sets the title for the table view controller's navigation item.
- Open the Object library. (Choose View > Utilities > Show Object Library.)
- In the Object library, find a Bar Button Item object.
- Drag a Bar Button Item object from the list to the far right of the navigation bar in the meal list scene. A button called Item appears where you dragged the bar button item.
- Select the bar button item and open the Attributes inspector.
- In the Attributes inspector, choose Add from the pop-up menu next to the System Item option.
The button changes to an Add button (+).

Checkpoint: Run your app. The navigation bar should now have a title and display an Add button (+). The button doesn't do anything yet. You'll fix that next. You want the Add button (+) to bring up the meal detail scene, so you'll do this by having the button trigger a segue (or transition) to that scene.

To configure the Add button in the meal detail scene:

- On the canvas, select the Add button (+).
- Control-drag from the button to the meal detail scene. A shortcut menu titled Action Segue appears in the location where the drag ended. The Action Segue menu allows you to choose the type of segue used to transition from the meal list scene to the meal detail scene when the user taps the Add button.
- Choose Show from the Action Segue menu. A show segue pushes the selected scene onto the top of the navigation stack, and the navigation controller presents that scene. When you select the show segue, Interface Builder sets up the show segue and alters the meal detail scene's appearance in the canvas—it is presented with a navigation bar in Interface Builder.

Checkpoint: Run your app. You can click the Add button and navigate to the meal detail scene from the meal list scene. Because you're using a navigation controller with a show segue, the backward navigation is handled for you, and a back button automatically appears in the meal detail scene. This means you can click the back button in the meal detail scene to get back to the meal list.

The push-style navigation you get by using the show segue is working just as it's supposed to—but it's not quite what you want when adding items. Push navigation is designed for a drill-down interface, where you're providing more information about whatever the user selected. Adding an item, on the other hand, is a modal operation—the user performs an action that's complete and self-contained, and then returns from that scene to the main navigation.
The appropriate method of presentation for this type of scene is a modal segue. Instead of deleting the existing segue and creating a new one, simply change the segue's style in the Attributes inspector. As is the case with most selectable elements in a storyboard, you can use the Attributes inspector to edit a segue's attributes.

To change the segue style:

- Select the segue from the meal list scene to the meal detail scene.
- In the Attributes inspector, choose Present Modally from the Kind field's pop-up menu.
- In the Attributes inspector, type AddItem in the Identifier field. Press Return. Later, you'll use this identifier to identify the segue.

A modal view controller doesn't get added to the navigation stack, so it doesn't get a navigation bar in Interface Builder. However, you want to keep the navigation bar to provide the user with visual continuity. To give the meal detail scene a navigation bar when presented modally, embed it in its own navigation controller.

To add a navigation controller to the meal detail scene:

- Select the meal detail scene by clicking on its scene dock.
- With the view controller selected, choose Editor > Embed In > Navigation Controller.

As before, Xcode adds a navigation controller and shows the navigation bar at the top of the meal detail scene. You may need to update the frames in the meal detail scene, if they don't update automatically. If you are getting warnings about misplaced views, select the view controller and press the Update Frames button in the bottom right corner of the canvas. This will correct the position of every view in the scene, based on their current constraints.

Next, configure the navigation bar to add a title to this scene as well as two buttons, Cancel and Save. Later, you'll link these buttons to actions.

To configure the navigation bar in the meal detail scene:

- Double-click the navigation bar in the meal detail scene. A cursor appears, letting you enter text.
- Type New Meal and press Return to save.
3. Drag a Bar Button Item object from the Object library to the far left of the navigation bar in the meal detail scene.
4. In the Attributes inspector, for System Item, select Cancel. The button text changes to Cancel.
5. Drag another Bar Button Item object from the Object library to the far right of the navigation bar in the meal detail scene.
6. In the Attributes inspector, for System Item, select Save. The button text changes to Save.

Checkpoint: Run your app. Click the Add button. You still see the meal detail scene, but there’s no longer a button to navigate back to the meal list—instead, you see the two buttons you added, Cancel and Save. Those buttons aren’t linked to any actions yet, so you can click them, but they don’t do anything. You’ll configure the buttons to save or cancel adding a new meal and to bring the user back to the meal list soon.

Store New Meals in the Meal List

The next step in creating the FoodTracker app’s functionality is implementing the ability for a user to add a new meal. Specifically, when a user enters a meal name, rating, and photo in the meal detail scene and taps the Save button, you want MealViewController to configure a Meal object with the appropriate information and pass it back to MealTableViewController to display in the meal list. Start by adding a Meal property to MealViewController.

To add a Meal property to MealViewController

Open MealViewController.swift.

Below the ratingControl outlet in MealViewController.swift, add the following property:

    /*
     This value is either passed by `MealTableViewController` in `prepare(for:sender:)`
     or constructed as part of adding a new meal.
     */
    var meal: Meal?

This declares a property on MealViewController that is an optional Meal, which means that at any point, it may be nil. You care about configuring and passing the Meal only if the Save button was tapped. To be able to determine when this happens, add the Save button as an outlet in MealViewController.swift.
To connect the Save button to the MealViewController code

1. Open your storyboard.
2. Click the Assistant button in the Xcode toolbar to open the assistant editor.
3. If you want more space to work, collapse the project navigator and utility area by clicking the Navigator and Utilities buttons in the Xcode toolbar.
4. In your storyboard, select the Save button.
5. Control-drag from the Save button on your canvas to the code display in the editor on the right, stopping the drag at the line just below your ratingControl property in MealViewController.swift.
6. In the dialog that appears, for Name, type saveButton. Leave the rest of the options as they are. Your dialog should look like this:
7. Click Connect.

You now have a way to identify the Save button.

Create an Unwind Segue

The task now is to pass the Meal object to MealTableViewController when a user taps the Save button and discard it when a user taps the Cancel button, switching from displaying the meal detail scene to displaying the meal list in either case. To accomplish this, you’ll use an unwind segue. An unwind segue moves backward through one or more segues to return the user to a scene managed by an existing view controller. While regular segues create a new instance of the destination view controller, unwind segues let you return to view controllers that already exist. Use unwind segues to implement navigation back to an existing view controller.

Whenever a segue gets triggered, it provides a place for you to add your own code that gets executed. This method is called prepare(for:sender:), and it gives you a chance to store data and do any necessary cleanup on the source view controller (the view controller that the segue is coming from). You’ll implement this method in MealViewController to do exactly that.

To implement the prepare(for:sender:) method on MealViewController

Return to the standard editor by clicking the Standard button.

Open MealViewController.swift.
At the top of the file, under import UIKit, add the following:

    import os.log

This imports the unified logging system. Like the print() function, the unified logging system lets you send messages to the console. However, the unified logging system gives you more control over when messages appear and how they are saved.

In MealViewController.swift, above the //MARK: Actions section, add the following:

    //MARK: Navigation

This is a comment to help you (and anybody else who reads your code) know that this method is related to the navigation flow of your app.

Below the comment, add this method skeleton:

    // This method lets you configure a view controller before it's presented.
    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    }

In the prepare(for:sender:) method, add a call to the superclass’s implementation:

    super.prepare(for: segue, sender: sender)

The UIViewController class’s implementation doesn’t do anything, but it’s a good habit to always call super.prepare(for:sender:) whenever you override prepare(for:sender:). That way you won’t forget it when you subclass a different class.

Below the call to super.prepare(for:sender:), add the following guard statement:

    // Configure the destination view controller only when the save button is pressed.
    guard let button = sender as? UIBarButtonItem, button === saveButton else {
        os_log("The save button was not pressed, cancelling", log: OSLog.default, type: .debug)
        return
    }

This code verifies that the sender is a button, and then uses the identity operator (===) to check that the objects referenced by the sender and the saveButton outlet are the same. If they are not, the else statement is executed. The app logs a debug message using the system’s standard logging mechanisms. Debug messages contain information that may be useful during debugging or when troubleshooting specific problems. They are intended for debugging environments, and do not appear in a shipping app. After logging the debug message, the method returns.
Below the else statement, add the following code:

    let name = nameTextField.text ?? ""
    let photo = photoImageView.image
    let rating = ratingControl.rating

This code creates constants from the current text field text, selected image, and rating in the scene. Notice the nil coalescing operator (??) in the name line. The nil coalescing operator is used to return the value of an optional if the optional has a value, or return a default value otherwise. Here, the operator unwraps the optional String returned by nameTextField.text (which is optional because there may or may not be text in the text field), and returns that value if it’s a valid string. But if it’s nil, the operator returns the empty string ("") instead.

Add the following code:

    // Set the meal to be passed to MealTableViewController after the unwind segue.
    meal = Meal(name: name, photo: photo, rating: rating)

This code configures the meal property with the appropriate values before the segue executes.

Your prepare(for:sender:) method should look like this:

    // This method lets you configure a view controller before it's presented.
    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        super.prepare(for: segue, sender: sender)
        
        // Configure the destination view controller only when the save button is pressed.
        guard let button = sender as? UIBarButtonItem, button === saveButton else {
            os_log("The save button was not pressed, cancelling", log: OSLog.default, type: .debug)
            return
        }
        
        let name = nameTextField.text ?? ""
        let photo = photoImageView.image
        let rating = ratingControl.rating
        
        // Set the meal to be passed to MealTableViewController after the unwind segue.
        meal = Meal(name: name, photo: photo, rating: rating)
    }

The next step in creating the unwind segue is to add an action method to the destination view controller (the view controller that the segue is going to). This method must be marked with the IBAction attribute and take a segue (UIStoryboardSegue) as a parameter. Because you want to unwind back to the meal list scene, you need to add an action method with this format to MealTableViewController.swift.
In this method, you’ll write the logic to add the new meal (that’s passed from MealViewController, the source view controller) to the meal list data and add a new row to the table view in the meal list scene.

To add an action method to MealTableViewController

Open MealTableViewController.swift.

Before the //MARK: Private Methods section, add the following line:

    //MARK: Actions

Below the //MARK: Actions comment, add the following:

    @IBAction func unwindToMealList(sender: UIStoryboardSegue) {
    }

In the unwindToMealList(_:) action method, add the following if statement:

    if let sourceViewController = sender.source as? MealViewController, let meal = sourceViewController.meal {
    }

There’s a lot happening in the condition for this if statement. This code uses the optional type cast operator (as?) to try to downcast the segue’s source view controller to a MealViewController instance. You need to downcast because sender.source is of type UIViewController, but you need to work with a MealViewController. The operator returns an optional value, which will be nil if the downcast wasn’t possible. If the downcast succeeds, the code assigns the MealViewController instance to the local constant sourceViewController, and checks to see if the meal property on sourceViewController is nil. If the meal property is non-nil, the code assigns the value of that property to the local constant meal and executes the if statement. If either the downcast fails or the meal property on sourceViewController is nil, the condition evaluates to false and the if statement doesn’t get executed.

In the if statement, add the following code:

    // Add a new meal.
    let newIndexPath = IndexPath(row: meals.count, section: 0)

This code computes the location in the table view where the new table view cell representing the new meal will be inserted, and stores it in a local constant called newIndexPath.
In the if statement, below the previous line of code, add the following code:

    meals.append(meal)

This adds the new meal to the existing list of meals in the data model.

In the if statement, below the previous line of code, add the following code:

    tableView.insertRows(at: [newIndexPath], with: .automatic)

This animates the addition of a new row to the table view for the cell that contains information about the new meal. The .automatic animation option uses the best animation based on the table’s current state, and the insertion point’s location.

You’ll finish a more advanced implementation of this method in a little while, but for now, the unwindToMealList(_:) action method should look like this:

    @IBAction func unwindToMealList(sender: UIStoryboardSegue) {
        if let sourceViewController = sender.source as? MealViewController, let meal = sourceViewController.meal {
            
            // Add a new meal.
            let newIndexPath = IndexPath(row: meals.count, section: 0)
            
            meals.append(meal)
            tableView.insertRows(at: [newIndexPath], with: .automatic)
        }
    }

Now you need to create the actual unwind segue to trigger this action method.

To link the Save button to the unwindToMealList action method

1. Open your storyboard.
2. On the canvas, Control-drag from the Save button to the Exit item at the top of the meal detail scene. A menu appears in the location where the drag ended. It shows all the available unwind action methods.
3. Choose unwindToMealListWithSender: from the shortcut menu.

Now, when users tap the Save button, they navigate back to the meal list scene, during which process the unwindToMealList(sender:) action method is called.

Checkpoint: Run your app. Now when you click the Add button (+), create a new meal, and click Save, you should see the new meal in your meal list.

Disable Saving When the User Doesn't Enter an Item Name

What happens if a user tries to save a meal with no name?
Because the meal property on MealViewController is an optional and you set your initializer up to fail if there’s no name, the Meal object doesn’t get created and added to the meal list—which is what you expect to happen. But you can take this a step further and keep users from accidentally trying to add meals without a name by disabling the Save button while they’re typing a meal name, and checking that they’ve specified a valid name before letting them dismiss the keyboard.

To disable the Save button when there’s no item name

In MealViewController.swift, find the //MARK: UITextFieldDelegate section. You can jump to it quickly using the functions menu, which appears if you click the name of the file at the top of the editor area.

In this section, add another UITextFieldDelegate method:

    func textFieldDidBeginEditing(_ textField: UITextField) {
        // Disable the Save button while editing.
        saveButton.isEnabled = false
    }

The textFieldDidBeginEditing method gets called when an editing session begins, or when the keyboard gets displayed. This code disables the Save button while the user is editing the text field.

Scroll to the bottom of the class. Before the last closing curly brace (}), add the following line:

    //MARK: Private Methods

Below the //MARK: Private Methods comment, add the following method:

    private func updateSaveButtonState() {
        // Disable the Save button if the text field is empty.
        let text = nameTextField.text ?? ""
        saveButton.isEnabled = !text.isEmpty
    }

This is a helper method to disable the Save button if the text field is empty.

Go back to the //MARK: UITextFieldDelegate section and find the textFieldDidEndEditing(_:) method:

    func textFieldDidEndEditing(_ textField: UITextField) {
    }

The implementation should be empty at this point. Add these lines of code:

    updateSaveButtonState()
    navigationItem.title = textField.text

The first line calls updateSaveButtonState() to check if the text field has text in it, which enables the Save button if it does.
The second line sets the title of the scene to that text.

Find the viewDidLoad() method.

    override func viewDidLoad() {
        super.viewDidLoad()
        
        // Handle the text field’s user input through delegate callbacks.
        nameTextField.delegate = self
    }

Add a call to updateSaveButtonState() in the implementation to make sure the Save button is disabled until a user enters a valid name:

    // Enable the Save button only if the text field has a valid Meal name.
    updateSaveButtonState()

Your viewDidLoad() method should look like this:

    override func viewDidLoad() {
        super.viewDidLoad()
        
        // Handle the text field’s user input through delegate callbacks.
        nameTextField.delegate = self
        
        // Enable the Save button only if the text field has a valid Meal name.
        updateSaveButtonState()
    }

And your textFieldDidEndEditing(_:) method should look like this:

    func textFieldDidEndEditing(_ textField: UITextField) {
        updateSaveButtonState()
        navigationItem.title = textField.text
    }

Checkpoint: Run your app. Now when you click the Add button (+), the Save button is disabled until you enter a valid (nonempty) meal name and dismiss the keyboard.

Cancel a New Meal Addition

A user might decide to cancel the addition of a new meal, and return to the meal list without saving anything. For this, you’ll implement the behavior of the Cancel button.

To create and implement a cancel action method

1. Open your storyboard.
2. Click the Assistant button in the Xcode toolbar to open the assistant editor.
3. In your storyboard, select the Cancel button.
4. Control-drag from the Cancel button on your canvas to the code display in the editor on the right, stopping the drag at the line just below the //MARK: Navigation comment in MealViewController.swift.
5. In the dialog that appears, for Connection, select Action. For Name, type cancel. For Type, select UIBarButtonItem. Leave the rest of the options as they are. Your dialog should look like this:
6. Click Connect. Xcode adds the necessary code to MealViewController.swift to set up the action.
    @IBAction func cancel(_ sender: UIBarButtonItem) {
    }

In the cancel(_:) action method, add the following line of code:

    dismiss(animated: true, completion: nil)

The dismiss(animated:completion:) method dismisses the modal scene and animates the transition back to the previous scene (in this case, the meal list). The app does not store any data when the meal detail scene is dismissed, and neither the prepare(for:sender:) method nor the unwind action method is called.

Your cancel(_:) action method should look like this:

    @IBAction func cancel(_ sender: UIBarButtonItem) {
        dismiss(animated: true, completion: nil)
    }

Checkpoint: Run your app. Now when you click the Add button (+) and click Cancel instead of Save, you should navigate back to the meal list without adding a new meal.

Wrapping Up

In this lesson, you learned how to push scenes onto the navigation stack, and how to present views modally. You learned how to navigate back to a previous scene using segue unwinding, how to pass data across segues, and how to dismiss modal views. At this point, the app displays an initial list of sample meals, and lets you add new meals to the list. In the next lesson, you’ll add the ability to edit and delete meals.

Implement Edit and Delete Behavior

Copyright © 2017 Apple Inc. All rights reserved. Terms of Use | Privacy Policy | Updated: 2016-12-08
Thanks for the response. When I said 8 digits maximum, what I was talking about was the accuracy of the machine (Arduino). Or rather, what is the epsilon of the machine. 64-bit computers are accurate to 15 decimal places, stuff like that. (how many bit is Arduino?) Since I've never done anything like this before... Where can I get an LCD screen (7 or 8, or more, digits)?

From the datasheet: Text strings are limited to 80 characters and must be terminated with ASCII[255].

    /*
     * Hello World!
     *
     * This is the Hello World! for Arduino.
     * outputs to DX160.
     */
    void setup()                     // run once, when the sketch starts
    {
      Serial.begin(57600);           // set up Serial library at 57600 bps
      Serial.print(186, BYTE);       // Clears the screen (DX160)
      delay(1000);                   // waits a second
      Serial.print("Hello world!");  // prints Hello world!
      delay(10);                     // waits...
      Serial.print(255, BYTE);       // sends the end-of-string command - gives the "OK" for the DX160 to print.
    }

    void loop() { }

The important part is to keep output function limited, since it takes cpu cycles to update the screen, and for the most accurate measurement you want to be "listening" for the signal as much as possible.

    #include <avr/io.h>
    #include <avr/interrupt.h>

    uint16_t timer_value = 0;
    uint8_t new_pulse = 0;

    ISR(TIMER1_OVF_vect) {
      // handle timer overflow here if necessary
    }

    ISR(TIMER1_CAPT_vect) {
      timer_value = TCNT1;  // get counter value for main loop
      TCNT1 = 0;            // reset counter
      new_pulse = 1;        // set flag for main loop
    }

    int main(void) {
      TCCR1B = (1<<CS10)|(1<<CS12)|  // Timer clock = system clock / 1024
               (1<<ICES1)|           // rising edge detect
               (1<<ICNC1);           // noise canceller
      TIFR = 1<<ICF1;                // Clear any ICF1 pending interrupts
      TIMSK = (1<<TICIE1)|           // Enable Timer1 Capture
              (1<<TOIE1);            // and overflow interrupts
      sei();
      while(1) {
        if (new_pulse == 1) {
          new_pulse = 0;
          // timer_value = the count from the timer ie. time between pulses
          do_something_with_value(timer_value);
        }
      }
    }
I had the opportunity to present a session on Speech Recognition for Windows Phone while at That Conference in Wisconsin Dells in early August. The session was well received and there was a lot of excitement among those that attended, so I promised them that I would provide an online walk through of the session, along with the sample code that was used for demo purposes. The slides are available on my Slideshare account, and the demo project is available in Git.

Why?

Before digging into the "how-to", I wanted to touch on why it would be ideal to integrate speech into your mobile apps, regardless of platform. When posed with this question, the three main reasons that my fellow developers mentioned were:

- Safety (ex: prevent distracted driving),
- Accessibility (ex: to accommodate those with visual impairments or reading disabilities), and
- Ease of use

And as one very bright young lady mentioned during my presentation, speech makes your apps seem really, really cool! I wholeheartedly agree with that! Developing an app that users can "talk" to will keep them engaged because your app is easy to use. And with the popularity of Cortana on Windows Phone devices, users are now expecting to have voice commands integrated into their apps. Period. No ifs, ands, or buts about it!

How?

In Windows Phone apps, speech recognition is extremely easy to implement, and to prove it, I am going to walk through the steps on how you can incorporate speech into your app quickly and easily! There are 3 ways to incorporate speech in a Windows Phone 8.1 app:

- Voice Commands
- Text-To-Speech
- Speech Recognition

Today, we will dig into voice commands - what they are and how you can create an app that leverages voice commands with some configuration and very little code on your part.

Voice Commands

Voice commands enable a user to launch your app, and/or navigate directly to a page within your app using a word or phrase.
In Windows Phone 8.1 (XAML based or Silverlight), this requires the following 4 steps:

- Add the Microphone capability
- Add a Voice Command Definition file
- Install the Voice Command Sets
- Handle navigation accordingly

Step 1 - Add the Microphone capability

In order to use device features or user resources within your app, you must include the appropriate capabilities. Failing to do so will result in an exception being thrown at run-time when the app attempts to access them. Voice commands require the use of the device’s microphone. To add this capability in your app, simply double-click on the application manifest file, Package.appxmanifest, in the Solution Explorer. Click on the Capabilities tab, and ensure that Microphone is checked within the list, as shown below.

To learn more about Windows Runtime capabilities, check out Microsoft’s Windows Dev Center article, App capability declarations (Windows Runtime apps).

Step 2 - Add the Voice Command Definition file

The next step is to define the types of commands your app will recognize. This is accomplished using a Voice Command Definition (VCD) file. The VCD file is simply an XML file. Voice commands allow you to launch your app by speaking the app name followed by a pre-defined command (ex: "Dear Diary, add a new entry"). By default, all apps installed on a Windows Phone device recognize a single voice command whether the app is configured to recognize voice commands or not. You can launch an app on a Windows Phone 8.1 device by telling Cortana to "open" or "start" an app by name (ex: "open Facebook"). To view which apps support voice commands, you can simply ask Cortana, "What can I say?", and she will provide a list of apps and sample commands that are available.

Now let’s talk about how you can get Cortana to recognize voice commands for your Windows Phone app! In your Windows Phone project, add a new item and select the Voice Command Definition template from the list, as shown below.
This will add a VCD file in your project, pre-populated with sample commands. Take the time to review the format of this file, because it gives great examples on the formatting you need to follow when configuring your own commands. The VCD file allows you to define which commands your app will recognize in a specific language, supporting up to 15 languages in a single VCD file. In each CommandSet, you can provide a user-friendly name for your app. This is a way to provide a speech-friendly word or phrase that the user can easily say out loud for those apps that have strange characters in the name. You will also provide example text that will be displayed beneath your app’s name when someone asks Cortana the special "What can I say?" phrase.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Root element - required. Contains 1 to 15 CommandSet elements.-->
    <VoiceCommands xmlns="">
      <!-- Required (at least 1). -->
      <!-- Contains the language elements for a specific language that the app will recognize. -->
      <CommandSet xml:lang="en-US">
        <!-- Optional. Specifies a user friendly name, or nickname, for the app. -->
        <CommandPrefix>Diary</CommandPrefix>
        <!-- Required. Text that displays in the "What can I say?" screen. -->
        <Example> new entry </Example>
        <!-- Required (1 to 100). The app action that users initiate through speech. -->
        <Command Name="ViewEntry">
          ...
        </Command>
        ...
      </CommandSet>
    </VoiceCommands>

Next, you will then include the commands you want to define that will launch your app. A single Command will also include example text for the user, as well as phrases that Cortana will listen for, and the feedback that will be displayed on-screen and read aloud by Cortana when the command is recognized. You can also dynamically populate a common phrase with a list of words or phrases, using a PhraseList.

    <Command Name="ViewEntry">
      <!-- Required. Help text for the user -->
      <Example>view yesterday's entry</Example>
      <!-- Required (1 to 10).
           A word or phrase that the app will recognize -->
      <ListenFor>view entry from {timeOfEntry}</ListenFor>
      <ListenFor>view {timeOfEntry} entry</ListenFor>
      <!-- Required (only 1). The response that the device will display or read aloud when the command is recognized -->
      <Feedback>Searching for your diary entry...</Feedback>
      <!-- Required (only 1). Target is optional and used in WP Silverlight apps. -->
      <Navigate />
    </Command>

    <PhraseList Label="timeOfEntry">
      <Item>yesterday</Item>
      <Item>last week</Item>
      <Item>first</Item>
      <Item>last</Item>
    </PhraseList>

Finally, you can use a PhraseTopic to allow the user to dictate whatever message they so desire to your application.

    <Command Name="EagerEntry">
      <Example>Dear Diary, my day started off great</Example>
      <ListenFor>{dictatedVoiceCommandText}</ListenFor>
      <Feedback>Hold on a second, I want to get this down...</Feedback>
      <Navigate />
    </Command>

    <PhraseTopic Label="dictatedVoiceCommandText" Scenario="Dictation">
      <Subject>Diary Entry</Subject>
    </PhraseTopic>

For more information on the anatomy of a Voice Command Definition file, refer to the MSDN article, Voice Command Definition Elements and Attributes.

Step 3 - Install the Command Sets on the Device

Once a Voice Command Definition file has been created and configured for your app, you will need to ensure that the voice command sets are installed on the device. The Speech API for Windows Phone simplifies this for you, by providing a helper class called VoiceCommandManager. This class contains methods that allow you to install command sets from a VCD file, and retrieve installed command sets. When the app is launched, you will need to load your VCD file into your app as a StorageFile, and pass it into the InstallCommandSetsFromStorageFileAsync method on the VoiceCommandManager object. Note that only the command sets for the language that the device is currently set to will be installed.
If the user changes the language settings on his device, it is important that this method is called on application launch to install command sets corresponding with the current language setting.

    using Windows.Media.SpeechRecognition;
    using Windows.Storage;
    ...
    private async Task<bool> InitializeVoiceCommands()
    {
        bool commandSetsInstalled = true;
        try
        {
            Uri vcdUri = new Uri("ms-appx:///MyVoiceCommands.xml", UriKind.Absolute);

            // load the VCD file from local storage
            StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(vcdUri);

            // register the voice command definitions
            await VoiceCommandManager.InstallCommandSetsFromStorageFileAsync(file);
        }
        catch (Exception ex)
        {
            // voice command file not found or language not supported or file is
            // invalid format (missing stuff), or capabilities not selected, etc etc
            commandSetsInstalled = false;
        }

        return commandSetsInstalled;
    }
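One note on the PhraseList defined earlier: the items baked into the VCD are static, but Windows Phone 8.1 also lets an app refresh a phrase list at runtime so it can track your actual data. This is a hedged sketch, not from the original walkthrough; it assumes the CommandSet element in your VCD was given Name="englishCommands", since InstalledCommandSets is keyed by that attribute (the sample VCD above does not set one).

```csharp
// Assumption: the VCD declares <CommandSet xml:lang="en-US" Name="englishCommands">.
VoiceCommandSet commandSet;
if (VoiceCommandManager.InstalledCommandSets.TryGetValue("englishCommands", out commandSet))
{
    // Replace the {timeOfEntry} items defined in the VCD with fresh values.
    await commandSet.UpdatePhraseListAsync("timeOfEntry",
        new string[] { "yesterday", "last week", "first", "last", "today" });
}
```

Updating a phrase list this way avoids reinstalling the entire command set from the storage file.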
Default view, search page, add new entry, etc). protected override void OnActivated(IActivatedEventArgs args) { if (args.Kind == ActivationKind.VoiceCommand) { Frame rootFrame = Window.Current.Content as Frame; VoiceCommandActivatedEventArgs vcArgs = (VoiceCommandActivatedEventArgs)args; //check for the command name that launched the app string voiceCommandName = vcArgs.Result.RulePath.FirstOrDefault(); switch (voiceCommandName) { case "ViewEntry": rootFrame.Navigate(typeof(ViewDiaryEntry), vcArgs.Result.Text); break; case "AddEntry": case "EagerEntry": rootFrame.Navigate(typeof(AddDiaryEntry), vcArgs.Result.Text); break; } } } As you can see, most of the work in this post included configuring your voice command definitions. With that in place, and a few lines of code, you’ve empowered your users with the ability to launch and/or deep-link into your app using voice commands! To Be Continued… In this post, we covered how you can enable your users to launch your app, and navigate to a specific page, using simple words or phrases. In the next post, we will discuss how to incorporate text-to-speech within your app to enable valuable information to be read aloud to the user. In the meantime, I challenge you to experiment with the following while incorporating Voice Commands within your Windows Phone app: a) Create a simple set of Voice Commands in 2 languages. b) Test out your voice commands for each of the supported languages. c) Test out your voice commands while the device is set to a language that is not supported. What did you observe? d) Remove the Navigate element from one or more of the commands in your VCD file and run. What happens? e) Remove the Microphone capability from the package manifest and run. What did you notice? If you’re going to take me up on my challenge, take a moment to check out and register for the new Developer Movement, You can submit your app as part of a challenge, earn points, and get rewards. 
Hello, I can't manage to have my app showed when I ask Cortana: "What can I say?". Apparently, everything goes well: microphone capability, vcd file, install command set… I'm using Windows 10, Visual Studio 2015 (VSO synchronized), Emulator 8.1 WVGA 4 inch 512 MB. VCD is set with xml:lang="en-us"; the emulator too; my laptop language setting too. Cortana is activated (sign in to Microsoft account done). What is missing in here? It's been a few days since I started troubleshooting… Thank you in advance for your help!
Princess Finder using React, ml5.js, and Teachable Machine Learning

Published on Dec 28, 2020

It's celebration time 🎉. We are just done with the fabulous Christmas 🎅 and waiting for the new year bell to ring. Hashnode's Christmas Hackathon is also going strong, with many enthusiasts building something cool and writing about it. There shouldn't be any excuse to stay away from it. So, I thought of building something cool (at least, my 7-year-old daughter thinks it is 😍) while learning a bit of Machine Learning.

So what is it about?

I've borrowed all the Disney Princess dolls from my daughter to build a Machine Learning model such that an application can recognize them with confidence using a webcam. I have given it a name too. The app is called Princess Finder. In this article, we will learn the technologies behind it along with the possibilities of extending it.

The Princess Finder

The Princess Finder app is built using,

- The Teachable Machine: How about an easy and fast way to create machine learning models that you can directly use in your app or site? The Teachable Machine allows you to train a computer with images, sounds, and poses. We have created a model using the Disney princesses so that we can perform an Image Classification by using it in our app.
- ml5.js: It is machine learning for the web using your web browser. It uses the web browser's built-in graphics processing unit (GPU) to perform fast calculations. We can use APIs like imageClassifier(model), classify, etc. to perform the image classification.
- React: It is a JavaScript library for building user interfaces. We can use ml5.js in a React application just by installing and importing it as a dependency.

Here is a quick demo with lots of excitement,

A Few Terminologies

If you are a newbie to Machine Learning, you may find some of the terminologies a bit overwhelming. It is better to know the meaning of them at a high level to understand the usages.
You can read more about these and other Machine Learning terminologies from here.

Our Princess Finder app uses supervised machine learning, where we have trained the model with lots of example pictures of the princesses. Each of the examples also contains a label to identify a particular princess by name.

Teachable Machine

We can create the ML models with a few simple steps using the Teachable Machine user interfaces. To get started, browse to this link. You can select either an image, sound, or pose project. In our case, it will be an image project.

Next, we need to define the classifications by selecting the examples (the images and labels). We can either use a webcam to take the snaps or can upload the images.

We start the training once the examples are loaded. This is going to create a model for us.

After the training is complete, you can test the model with the live data. Once satisfied, you can export the model to use it in an application.

Finally, we can download the model to use it in our app. You can optionally upload the model to the cloud to consume it using a URL. You can also save the project to Google Drive.

If you are interested to use or extend the model I have created, you can download and import it into the Teachable Machine interface.

User Interface using ml5.js and React

So we have a model now. We will use the ml5.js library to import the model and classify the images using the live stream. I have used React as I am most familiar with it. You can use any UI library, framework, or vanilla JavaScript for the same. I have used create-react-app to get the skeleton of my app up and running within a minute.

Install the ml5.js dependency,

    # Or, npm install ml5
    yarn add ml5

Unzip the model under the public folder of the project. We can create a folder called model under public and extract the files there.

Use the ml5.js library to load the model. We will use the imageClassifier method to pass the model file.
This method call returns a classifier object that we will use to classify the live images in a while. Also note, once the model is loaded successfully, we initialize the webcam device so that we can collect the images from the live stream.

    useEffect(() => {
      classifier = ml5.imageClassifier("./model/model.json", () => {
        navigator.mediaDevices
          .getUserMedia({ video: true, audio: false })
          .then((stream) => {
            videoRef.current.srcObject = stream;
            videoRef.current.play();
            setLoaded(true);
          });
      });
    }, []);

We also need to define a video component in the render function,

    <video ref={videoRef} style={{ transform: "scale(-1, 1)" }} />

Next, we call the classify() method on the classifier to get the results. The results is an array of all the labels with the confidence factor of a match.

    classifier.classify(videoRef.current, (error, results) => {
      if (error) {
        console.error(error);
        return;
      }
      setResult(results);
    });

We should call the classify method at a specified interval. You can use a React hook called useInterval for the same. The results array may look like this,

Please find the complete code of the App.js file from here.

That's all, you can now use this result array to provide any UI representation you would like to. In our case, we have used this results array in two React components,

- List out the Princesses and highlight the one with the maximum match: <Princess data={result} />
- Show a Gauge chart to indicate the matching confidence: <Chart data={result[0]} />

The Princess component loops through the results array and renders them, using some CSS styles to highlight one.

    import React from "react";

    const Princess = (props) => {
      const mostMatched = props.data[0];
      const allLabels = props.data.map((elem) => elem.label);
      const sortedLabels = allLabels.sort((a, b) => a.localeCompare(b));
      return (
        <>
          <ul className="princes">
            {sortedLabels.map((label) => (
              <li key={label}>
                <span>
                  <img
                    className={`img ${
                      label === mostMatched.label ? "selected" : null
                    }`}
                    src={
                      label === "No Dolls"
                        ? "./images/No.png"
                        : `./images/${label}.png`
                    }
                    alt={label}
                  />
                  <p className="name">{label}</p>
                </span>
              </li>
            ))}
          </ul>
        </>
      );
    };

    export default Princess;

The Chart component is like this,

    import React from "react";
    import GaugeChart from "react-gauge-chart";

    const Chart = (props) => {
      const data = props.data;
      const label = data.label;
      const confidence = parseFloat(data.confidence.toFixed(2));
      return (
        <div>
          <h3>Classification Confidence: {label}</h3>
          <GaugeChart
            id="gauge-chart3"
            nrOfLevels={3}
            colors={["#FF5F6D", "#FFC371", "rgb(26 202 26)"]}
            arcWidth={0.3}
            percent={confidence}
          />
        </div>
      );
    };

    export default Chart;

That's all about it. Please find the entire source code from the GitHub Repository. Feel free to give the project a star (⭐) if you liked the work.

Before We End...

Hope you find the article insightful. Please 👍 like/share so that it reaches others as well. Let's connect. Feel free to DM or follow me on Twitter (@tapasadhikary). Have fun and wish you a very happy 2021 ahead.

Loved it as usual. This was a stellar Tapas Adhikary.

Brilliant project with simple explanation. Thank you. Seasons greeting.

Full Stack Developer

It's a brilliant project. I really loved the way you explained the project with simplicity. Awesome !!!! Happy New Year Tapas Adhikary
[20-Jan-2011 10:55:59] <nyeates> btw: dev chat from last week about avalon info, I just uploaded it at
[20-Jan-2011 10:56:52] <rmatte> so who are our guest devs for today's session?
[20-Jan-2011 10:57:00] <Sam-I-Am> morning folks
[20-Jan-2011 10:57:21] <ddreggors> odd no it does not
[20-Jan-2011 10:57:21] <davetoo> feh: 127.0.0.1 sendto error Host localhost and localhost.localdomain are both using ip 127.0.0.1
[20-Jan-2011 10:57:23] <Sam-I-Am> rmatte: you're our guest dev
[20-Jan-2011 10:57:33] <rmatte> that's news to me lol
[20-Jan-2011 10:57:35] <ddreggors> but it is actually running and serving pages
[20-Jan-2011 10:57:49] <Simon4> ddreggors: I think I know what it is
[20-Jan-2011 10:57:52] * Simon4 suddenly clicks
[20-Jan-2011 10:58:18] <Simon4> run netstat -lntp |grep 80 and pastie the output? (on the server that's running httpd)
[20-Jan-2011 10:58:19] <rmatte> davetoo: dumbest error ever lol
[20-Jan-2011 10:58:42] <nyeates> Devs will be incoming in a min, ill announce
[20-Jan-2011 10:58:57] <rmatte> INCOMING!
[20-Jan-2011 10:58:59] * rmatte ducks
[20-Jan-2011 10:59:24] * Simon4 bets ipv6 is enabled and httpd is listening on ipv6:*:80, not ipv4 so it doesn't turn up under the ipv4 listening socket MIB
[20-Jan-2011 10:59:26] <davetoo> rmatte: yeah, that's what you get with a bone-stock RHEL install
[20-Jan-2011 10:59:26] <ddreggors> tcp 0 0 :::80 :::* LISTEN -
[20-Jan-2011 10:59:29] <Simon4> yup
[20-Jan-2011 10:59:34] <davetoo> oh
[20-Jan-2011 10:59:37] <rmatte> Simon4: ah, yeh that makes sense
[20-Jan-2011 10:59:41] <davetoo> Simon4: w00t!
[20-Jan-2011 10:59:50] * Simon4 has hit his head against this for the best part of half a day before
[20-Jan-2011 10:59:59] <davetoo> me too
[20-Jan-2011 11:00:00] <kells> G'day!
[20-Jan-2011 11:00:01] <rmatte> haha
[20-Jan-2011 11:00:08] <Sam-I-Am> howdy
[20-Jan-2011 11:00:12] <davetoo> I think I created a trac ticket for it
[20-Jan-2011 11:00:27] <davetoo> tcp6 causes trouble
[20-Jan-2011 11:00:47] <Simon4> ddreggors: the solution is to disable ipv6 fully on the server
[20-Jan-2011 11:00:51] <Simon4> not just leave it unconfigured
[20-Jan-2011 11:01:00] <davetoo> what if he can't?
[20-Jan-2011 11:01:07] <davetoo> What if it's actually in use?
[20-Jan-2011 11:01:18] <rmatte> ipv6 in use? blasphemy
[20-Jan-2011 11:01:23] <ddreggors> exactly
[20-Jan-2011 11:01:25] <Sam-I-Am> whats this ipv6 thing?
[20-Jan-2011 11:01:27] <Sam-I-Am> lol
[20-Jan-2011 11:01:29] <ddreggors> no cant unconfigure
[20-Jan-2011 11:01:35] <Sam-I-Am> <- network has run full v6 since 2003
[20-Jan-2011 11:01:42] <ddreggors> cant disable sorry
[20-Jan-2011 11:01:46] <davetoo> I have used it in production several times
[20-Jan-2011 11:01:49] <Simon4> is there an ipv6 listening port mib?
[20-Jan-2011 11:01:51] <rmatte> Sam-I-Am: you are the borg
[20-Jan-2011 11:02:05] <Sam-I-Am> rmatte: resistance is futile
[20-Jan-2011 11:02:09] <Sam-I-Am> as is remembering ip addresses
[20-Jan-2011 11:02:11] <nyeates> Hola everyone! We have magnificent John Causey, the awesome Kells Kerney, and the wonderful Chris Parlette who is onlooking between support cases
[20-Jan-2011 11:02:11] <rmatte> so I'm told
[20-Jan-2011 11:02:13] <rmatte> lol
[20-Jan-2011 11:02:29] * Sam-I-Am has a support case open about lots of devices in a group
[20-Jan-2011 11:02:40] <davetoo> so ...
[20-Jan-2011 11:02:44] <rmatte> Sam-I-Am: which should probably apply to any organizer
[20-Jan-2011 11:02:44] <davetoo> 3.1.x?
[20-Jan-2011 11:02:50] <Sam-I-Am> rmatte: yeah, i think so
[20-Jan-2011 11:03:12] <Sam-I-Am> the reason for me to go to 3.x will be enhanced reporting, and access control to said reports.
[20-Jan-2011 11:03:17] <davetoo> 3.1.0 in a week? Month?
[20-Jan-2011 11:03:24] <pmcguire> \nick ptmcg
[20-Jan-2011 11:03:32] <davetoo> d'oh
[20-Jan-2011 11:03:33] pmcguire is now known as ptmcg
[20-Jan-2011 11:03:37] <rmatte> wrong slash muahaha
[20-Jan-2011 11:03:38] <ptmcg> oof!
[20-Jan-2011 11:03:39] <kells> Looking at 3.1.0 around Feb-ish
[20-Jan-2011 11:03:43] <Sam-I-Am> must be a dos user
[20-Jan-2011 11:03:47] <davetoo> kells: thanks
[20-Jan-2011 11:03:48] <rmatte> yeh, seriously
[20-Jan-2011 11:03:49] <rmatte>
[20-Jan-2011 11:03:53] <ddreggors> httpd OSProcess false down alerts
[20-Jan-2011 11:03:58] <Sam-I-Am> kells: whats the big fixes in 3.1?
[20-Jan-2011 11:04:00] <ptmcg> you guys are harsh!
[20-Jan-2011 11:04:00] <ddreggors> any clue there?
[20-Jan-2011 11:04:04] <rmatte> hehe
[20-Jan-2011 11:04:17] <nyeates> So the latest info on 3.1.0 is that it is coming soon....we are doing internal beta testing for it and edging towards a release....no date yet though
[20-Jan-2011 11:04:25] <kells> The new reporting engine etc is (atm) an add-on feature for Enterprise only.
[20-Jan-2011 11:04:38] <Sam-I-Am> kells: does it cost more for current ent customers?
[20-Jan-2011 11:04:55] <rmatte> Sam-I-Am: was just about to ask that
[20-Jan-2011 11:05:00] <kells> Reporting is an extra cost item, and does not come bundled.
[20-Jan-2011 11:05:01] <nyeates> there were 60 some bug fixes in 3.1 a week ago, though they may have dealt it down to less fixes...despite, 3.1 will be a lot of improvement and bug kills for the community and enterprise
[20-Jan-2011 11:05:13] <Sam-I-Am> well, that should be fun to justify
[20-Jan-2011 11:05:22] <kells> The first release will be for a select group of customers to sanity check, and then a GA release.
[20-Jan-2011 11:05:22] <davetoo> nyeates: thanks
[20-Jan-2011 11:05:37] <rmatte> kells: any idea of pricing?
[20-Jan-2011 11:05:39] <kells> I have no idea about pricing -- you'd have to talk to a sales guy in a couple of weeks as I don't think it's been nailed down yet.
[20-Jan-2011 11:05:45] <davetoo> kells: When you say "reporting is an extra-cost item",
[20-Jan-2011 11:06:09] <davetoo> we're not losing any of the existing "functionality"?
[20-Jan-2011 11:06:13] <davetoo> such as it is
[20-Jan-2011 11:06:20] <zenChild> Anyone have issues running commands against a device from 'Components' -> 'IP Services' -> 'Administration' in 3.0.3? I'm getting the error screen when trying to run anything from there.
[20-Jan-2011 11:06:25] <jcausey> @sam-I-am: if you're looking for specifics, core side tickets that will be in 3.1 are listed at
[20-Jan-2011 11:06:29] <nyeates> existing functionality in reporting is staying
[20-Jan-2011 11:06:35] <kells> No loss of functionality for existing reports. If they worked before, they should continue to work.
[20-Jan-2011 11:07:03] <Sam-I-Am> jcausey: thanks
[20-Jan-2011 11:07:16] <kokey> hmmm
[20-Jan-2011 11:07:27] <kokey> does this look funny to anyone at first glance? :
[20-Jan-2011 11:07:28] <kokey> CDEF:percent=ifHCOutOctets,1000,/,100,*
[20-Jan-2011 11:07:28] <kokey> AREA:percent#00cc0099:utillah
[20-Jan-2011 11:07:29] <kokey> DEF:ifHCOutOctets=rrdPath/ifHCOutOctets_ifHCOutOctets.rrd:ds0:AVERAGE
[20-Jan-2011 11:10:36] * Sam-I-Am sees line noise. or perl.
[20-Jan-2011 11:10:48] <kokey> haha
[20-Jan-2011 11:11:45] <rmatte> kokey: no it doesn't
[20-Jan-2011 11:11:55] <davetoo> So on this brand new 3.0.3 RPM install,
[20-Jan-2011 11:11:57] <Simon4> nothing to laugh about there
[20-Jan-2011 11:11:58] <Sam-I-Am> seems fine at a glance
[20-Jan-2011 11:12:05] <davetoo> when I go to Infrastructure->Processes,
[20-Jan-2011 11:12:07] <davetoo> I see nothing.
[20-Jan-2011 11:12:14] <davetoo> No processes are loaded.
[20-Jan-2011 11:12:26] <Simon4> davetoo: just wait longer?
[20-Jan-2011 11:12:31] <rmatte> davetoo: click on the search box and hit enter
[20-Jan-2011 11:12:39] <nyeates> Sam-I-Am: Some "big" bug fixes might be: shift-select events is possible, zenaws ec2 monitoring scales properly, IE 8 event viewing fixes, and a possible fix for being able to copy a template properly
[20-Jan-2011 11:12:57] <kokey> ok so rrdPath is a cool thing to use?
[20-Jan-2011 11:12:57] <Simon4> template copy should be a definite fix, not a possible one
[20-Jan-2011 11:13:12] <davetoo> Shouldn't I see a set of default processes to monitor or not monitor?
[20-Jan-2011 11:13:21] <Simon4> kokey: where did you get rrdPath from ?
[20-Jan-2011 11:13:21] <Sam-I-Am> kokey: i tend to do this
[20-Jan-2011 11:13:21] <Sam-I-Am> DEF:Inbound-raw=${here/fullRRDPath}/ifInOctets_ifInOctets.rrd:ds0:MAX
[20-Jan-2011 11:13:27] <nyeates> Simon4: depends on if we can fit it in...otherwise it will come in the next version, or as a patch
[20-Jan-2011 11:14:00] <Sam-I-Am> nyeates: cool... the template copy thing is a big one. i spent way too much time in zmi doing that when i was testing 3
[20-Jan-2011 11:14:10] <rmatte> davetoo: actually that makes perfect sense, it was like that in 2.5 as well
[20-Jan-2011 11:14:10] <kokey> Simon4: don't know if i add a non-custom graph point that's what it fills in when i view the graph commands
[20-Jan-2011 11:14:18] <rmatte> davetoo: you have to add the processes you want to monitor by hand
[20-Jan-2011 11:14:32] <Sam-I-Am> kokey: you'll want the 'here' thing.
[20-Jan-2011 11:14:34] <davetoo> no more defaults?
[20-Jan-2011 11:14:41] <rmatte> davetoo: the only things that pre-exist in Zenoss are windows services (as it picks them up), and IP Services
[20-Jan-2011 11:14:55] <rmatte> davetoo: I never remember there being defaults for processes, I had to add all of my own
[20-Jan-2011 11:15:11] <Jane_Curry> Allelujia to being able to shift-select a bunch of events!
[20-Jan-2011 11:15:22] <davetoo> maybe I somehow carried them forward from my 0.99 or 1.x installs
[20-Jan-2011 11:15:26] <rmatte> davetoo: I entered all of the exes, and linux processes that I wanted to monitor by hand
[20-Jan-2011 11:15:33] <davetoo> in a zenpack or something
[20-Jan-2011 11:15:45] <davetoo> interesting
[20-Jan-2011 11:15:47] <rmatte> davetoo: If you upgrade the ones you already have set carry forward
[20-Jan-2011 11:15:50] <kokey> if I put in the 'here' it does this: DEF:ifHCOutOctets-raw=__render_with_namespace__:ds0:AVERAGE
[20-Jan-2011 11:15:51] <rmatte> so that's a safe assumption
[20-Jan-2011 11:16:21] <Sam-I-Am> kokey: thats what it should do methinks
[20-Jan-2011 11:17:49] <kokey> now i just wish really that i could get the RRD errors when it tries to render it
[20-Jan-2011 11:17:59] <nyeates> Jane_Curry: shift-select: we implemented it to the best of our abilities....I think that it works well within certain # of events that are being shift-selected....otherwise if you scroll out of the cached # of events, I think it messes up...so be aware of the possible limitation
[20-Jan-2011 11:18:08] <kokey> then it could tell me stuff like missing DEF or file or whatever
[20-Jan-2011 11:18:46] <kokey> got it, in the event.log!
[20-Jan-2011 11:19:29] <Sam-I-Am> kokey: yep
[20-Jan-2011 11:19:38] <Sam-I-Am> they're still somewhat vague depending on what happened
[20-Jan-2011 11:19:47] <kokey> error: invalid rpn expression in: ifHCOutOctets-raw,1000,/,100,*
[20-Jan-2011 11:20:03] <davetoo> zenpatch should work for rpm installs, yes?
[20-Jan-2011 11:20:07] <davetoo> (within reason)
[20-Jan-2011 11:20:20] <kokey> hmmmm so something about CDEF:percent=ifHCOutOctets-raw,1000,/,100,* is fishy
[20-Jan-2011 11:20:28] <kells> davetoo: yes, zenpatch works for RPM installs
[20-Jan-2011 11:20:32] <davetoo> I need the fixes to zenctl for HA-linux
[20-Jan-2011 11:20:51] <nyeates> kokey: that rpn...can u write the arithmetic version of what u are trying to calc?
[20-Jan-2011 11:21:13] <davetoo> you can debug the RPM in 'dc'
[20-Jan-2011 11:21:21] <davetoo> RPN, that is
[20-Jan-2011 11:21:25] <kells> However, zenpatch essentially only works with directories under $ZENHOME/Products, so things like $ZENHOME/bin aren't patchable.
[20-Jan-2011 11:21:39] <phonegi> jane: a little late here. did I miss much?
[20-Jan-2011 11:21:40] <davetoo> hmm
[20-Jan-2011 11:21:41] <kokey> nyeates: well what i want to do is ( ifHCOutOctets-raw/1000 ) * 100
[20-Jan-2011 11:21:53] <kokey> nyeates: but what i want to do later is replace the 1000 with here.speed
[20-Jan-2011 11:22:14] <davetoo> kells: right, that patch is certainly going to be against .../inst/bin/zenctl
[20-Jan-2011 11:22:28] <kokey> nyeates: i just put a number in there for now so i only debug one thing at a time
[20-Jan-2011 11:22:43] <Jane_Curry> <phonegi> not yet - other good threads running so not jumped in yet...
[20-Jan-2011 11:23:52] <nyeates> kokey: put a number in for that first variable 'ifHCOutOctets-raw'
[20-Jan-2011 11:23:56] <nyeates> see if that is even allwoed
[20-Jan-2011 11:24:34] <nyeates> kokey: where is this expression going in? custom graph definition?
[20-Jan-2011 11:24:50] <kokey> nyeates: hehe... error: rpn expressions without DEF or CDEF variables are not supported
[20-Jan-2011 11:25:16] <kokey> nyeates: yeah i built a custom one, but it's the v3 interface so i put in the DEF, CDEF, and AREA in as custom graph points
[20-Jan-2011 11:26:19] <nyeates> anyone know custom graph defs good? Im not sure I can be of further help
[20-Jan-2011 11:26:31] <Sam-I-Am> i do, but apparently my stuff in v2 doesnt work well in v3
[20-Jan-2011 11:26:34] <kokey> it's ok i think i can probably fiddle this one to function
[20-Jan-2011 11:26:39] <Sam-I-Am> i was helping earlier...
[20-Jan-2011 11:26:46] <themactech> I have a question on component templates
[20-Jan-2011 11:26:50] <kokey> yeah if i just have a blanket definition it actually breaks the whole monitoring template
[20-Jan-2011 11:26:57] <themactech> I posted this in forums but got no answer
[20-Jan-2011 11:27:02] <kokey> i think i'm close now
[20-Jan-2011 11:27:06] <themactech> can components have sub-components?
[20-Jan-2011 11:27:08] <kokey> now that i have found the log
[20-Jan-2011 11:28:22] <phonegi> themactech: structurally, yes. I have created a device that has a relation that contains a component that contains another relation; however in v3 these all show up under "Components"
[20-Jan-2011 11:29:16] <nyeates> Jane_Curry: I got your message that you have an initial version of "Creating Zenoss ZenPacks" doc... including that it has a section on new 3.0 GUI....that is great news! I really want to check it out
[20-Jan-2011 11:30:17] <Jane_Curry> Get it from
[20-Jan-2011 11:30:42] <phonegi> nyeates, jane: I've got a lot of research that I'd like to contribute, just haven't found the time to write it up. Jane, definitely going to take a look.
[20-Jan-2011 11:30:48] <Jane_Curry> I have a link to it in th users forum and a bunch of questions at
[20-Jan-2011 11:31:10] <nyeates> I would advise anyone else involved in zenpack development to check out Janes docs. They have always been top notch.
[20-Jan-2011 11:31:30] <phonegi> nyeates: Second that!!!
[20-Jan-2011 11:31:47] <Jane_Curry> Blush
[20-Jan-2011 11:32:32] <Jane_Curry> The doc is now big - over 100 pages - but the new Zenoss 3 section is pages 75-97 if folk want a quick skim
[20-Jan-2011 11:32:49] <davetoo> do I have it correctly that ZCA does some of the UI generation automagically?
[20-Jan-2011 11:32:49] <rmatte> Jane_Curry: I most certainly will
[20-Jan-2011 11:32:50] <phonegi> Jane, I've got some info regarding interfaces that I can post
[20-Jan-2011 11:34:21] * Simon4 downloads for a read
[20-Jan-2011 11:34:53] <phonegi> davetoo: I believe ZCA adds/filters content when a page is loaded. ZCML allows us to control that content.
[20-Jan-2011 11:35:23] <themactech> Jane's doc is all i have to work with right now
[20-Jan-2011 11:35:29] <themactech> trying to figure out components
[20-Jan-2011 11:35:49] <phonegi> I'm just learning how to use the ZCML to manipulate ExtJS content.
[20-Jan-2011 11:36:01] <davetoo> pydev doesn't like zcml
[20-Jan-2011 11:36:16] <nyeates> themactech: official docs (in addition to dev docs and APIs) are at
[20-Jan-2011 11:37:30] <nyeates> they may not help you with what u are asking...but its worth a look
[20-Jan-2011 11:37:35] <themactech> I have all the official docs, but the almost never cover what I need to know
[20-Jan-2011 11:38:00] <themactech> Making custom components and component templates is my top priority now
[20-Jan-2011 11:38:28] <themactech> I was working on getting all event data from remote locations to a central NOC via port 25 (emails) and I finally got that to work
[20-Jan-2011 11:38:39] <themactech> now i tackle components and putting them in zenpacks
[20-Jan-2011 11:38:56] <themactech> Also, I still want to know how to add fields to the zenoss database
[20-Jan-2011 11:39:12] <themactech> I've asked this before and got no answer, just in case someone is feeling inspired today
[20-Jan-2011 11:39:25] <phonegi> themactech: Good starting point is Egor's deviceAdvDetail zenpack.
[20-Jan-2011 11:39:31] <themactech> I want to add a warranty expiration date and warranty status field to the database
[20-Jan-2011 11:39:53] <themactech> Egor knows this stuff inside out but you have to reverse engineer all his stuff because he doesn't document any of it
[20-Jan-2011 11:39:31] <themactech> I want to add a warranty expiration date and warranty status field to the database [20-Jan-2011 11:39:53] <themactech> Egor knows this stuff inside out but you have to reverse engineer all his stuff because he doesn't document any of it [20-Jan-2011 11:40:38] <themactech> I have gone through his zenpacks but the Bridge MIB zenpack with Jane's guide is my best bet so far [20-Jan-2011 11:40:42] <phonegi> themactech: yep. That's how we learn most of it. Trying to finish my stuff which I document heavily [20-Jan-2011 11:40:57] <davetoo> I have a request: [20-Jan-2011 11:41:03] <themactech> I have documented everything I do but it is more specific to my stuff [20-Jan-2011 11:41:07] <davetoo> PLEASE package ipython [20-Jan-2011 11:41:22] <davetoo> though it would require adding libreadline-dev as a dependency [20-Jan-2011 11:41:41] <Simon4> Jane_Curry: Just read quickly through the v3 specific stuff, that's really excellent! [20-Jan-2011 11:45:05] <rmatte> nice, I just figured out what to change to fix the stupid looking paths when "Overriding" templates lol [20-Jan-2011 11:46:19] <Jane_Curry> Bye Simon....:) [20-Jan-2011 11:46:24] <jcausey> @davetoo -- most of us devs use it while working on the product already; i'll get an enhancement ticket in the system and see if we can get it bundled in Avalon. [20-Jan-2011 11:47:01] <Simon4> Jane_Curry: that doc's going to save me a bunch of pain and reverse engineering in the future [20-Jan-2011 11:47:22] * Simon4 had been putting off "learning" the v3 ui stuff as it was all a bit terrifying [20-Jan-2011 11:48:12] <nyeates> ok, so who wants to beta test 3.1? I can take like 6 people [20-Jan-2011 11:48:18] <Jane_Curry> I still think its terrifying - the more you look the deeper it gets! [20-Jan-2011 11:48:19] <Simon4> me [20-Jan-2011 11:48:27] <Simon4> (at nyeates) [20-Jan-2011 11:48:34] <Jane_Curry> nyeates - timescale? 
[20-Jan-2011 11:48:43] <nyeates> now...well, soon
[20-Jan-2011 11:48:59] <nyeates> when I get the bits hosted somewhere
[20-Jan-2011 11:49:12] <nyeates> the version already exists
[20-Jan-2011 11:49:15] <rmatte> nyeates: I'll do it
[20-Jan-2011 11:49:26] <Jane_Curry> <themactech> - grab the new doc - it has LOTS more on device components with Z 3
[20-Jan-2011 11:49:53] <Simon4> themactech: what Jane said, the doc will really help you
[20-Jan-2011 11:50:05] <themactech> I just downloaded it and sent it to the printer
[20-Jan-2011 11:50:16] <themactech> It's my new bible
[20-Jan-2011 11:50:21] <Jane_Curry> nyeates: Can't do it next week - probably could starting the week after
[20-Jan-2011 11:50:32] <themactech> that says a lot considering I'm an atheist
[20-Jan-2011 11:50:36] * Simon4 is off work for a month next week, so skiing/beta testing/zenpack coding will be on the agenda
[20-Jan-2011 11:50:59] <Jane_Curry> Now lets keep religion out of this! We have enough trouble with ZCML
[20-Jan-2011 11:51:03] <nyeates> where you going skiing? in UK?
[20-Jan-2011 11:51:07] <Simon4> all hail our ZCML overlords
[20-Jan-2011 11:51:14] <Simon4> nyeates: Jackson Hole, Wyoming
[20-Jan-2011 11:51:29] * nyeates jealous :-)
[20-Jan-2011 11:51:36] <themactech> So does anyone have any ideas on how to add extra fields in the database, and I am not talking about zproperties since those can't be accessed by a modeler
[20-Jan-2011 11:52:07] <nyeates> in which database
[20-Jan-2011 11:52:10] <themactech> I would want to add fields in the same category as the comments or rack space info
[20-Jan-2011 11:52:14] <Simon4> themactech: as in "add extra attributes to the device object" ?
[20-Jan-2011 11:52:19] <Jane_Curry> Any devs there provide answers to the Zenoss 3 GUI questions posted at [20-Jan-2011 11:52:55] <themactech> I would want these fields added to the base /device branch since I would want to apply them to all devices [20-Jan-2011 11:53:05] <themactech> warranty management is a big issue for our clients [20-Jan-2011 11:53:14] <Jane_Curry> BTW - most of the initial deep digging in that new Zenoss 3 section comes from phonegi so thank him [20-Jan-2011 11:53:17] <Simon4> themactech: you can monkeypatch them in [20-Jan-2011 11:53:22] <Simon4> as part of your zenpack [20-Jan-2011 11:53:24] <themactech> for most vendors, if you let a warranty lapse you cannot re-establish coverage [20-Jan-2011 11:53:49] <themactech> can you point me to any doc on monkeypatching those in? [20-Jan-2011 11:55:21] <nyeates> themactech: look around on the community (and outside zenoss.org) for docs on this....it seems like i recall *something* already existing...I cannot recall if it was about events or devices though, and where I saw it [20-Jan-2011 11:56:04] <nyeates> any more takers on beta 3.1 testing? [20-Jan-2011 11:56:19] <nyeates> ive got simon and rmatte [20-Jan-2011 11:56:44] <Jane_Curry> Me if it can run over the next 3 weeks [20-Jan-2011 11:57:17] <rmatte> nyeates: do you have it available in stack installer form though? [20-Jan-2011 11:58:01] <Jane_Curry> Nota Bene!! Anyone who has pulled todays version of the doc, it is not updated beyond the Zenoss 3 section - it is very DRAFT [20-Jan-2011 11:58:17] <jplouis> Jane: The interfaces and infos are related/linked [20-Jan-2011 11:58:21] <Sam-I-Am> id love to beta, but i'm trying to roll out 2.5 here :/ [20-Jan-2011 11:58:25] <Jane_Curry> I will post at when the next draft is complete [20-Jan-2011 11:59:00] <jplouis> Jane: they are used to describe an object in the domain so that the UI can generate the pages and or forms for the object [20-Jan-2011 11:59:01] <nyeates> rmatte: I do not think so. 
I think it comes in RPM format
[20-Jan-2011 11:59:40] <Jane_Curry> jplouis: I know they are related / linked - I'm looking for a human description of what these things are and how/why they are linked
[20-Jan-2011 11:59:41] <rmatte> nyeates: ah, then I'm out
[20-Jan-2011 12:00:05] <rmatte> nyeates: can't properly test the upgrade against my existing lab box without a stack install
[20-Jan-2011 12:00:17] <Jane_Curry> Sorry - I also need a stack install
[20-Jan-2011 12:00:27] <nyeates> heh ok :-) ill let you know if it changes
[20-Jan-2011 12:00:29] * Simon4 is happy with rpm
[20-Jan-2011 12:00:32] <jplouis> Jane: configure.zcml binds or associates the info object with the domain object
[20-Jan-2011 12:00:58] <rmatte> k
[20-Jan-2011 12:01:08] <jplouis> Jane: The interface describes the attributes that are available and contains type information about the attributes
[20-Jan-2011 12:02:24] <nyeates> Sam-I-Am: what version are you on now? 2.4?
[20-Jan-2011 12:02:26] <jplouis> Jane: for example in the ZenJMX zenpack the is an info and interface for the ZenJMX datasource. With out the interface the UI wouldn't be able to generate the form to modify/create the zenjmx datasources
[20-Jan-2011 12:02:45] <Sam-I-Am> nyeates: current install is 2.4.1, but i'm rolling out 2.5.2 now
[20-Jan-2011 12:02:53] <Sam-I-Am> nyeates: its a new install, so no cruft from 2.4
[20-Jan-2011 12:03:04] <Sam-I-Am> once this is rolled out and working, i will start testing 3.1
[20-Jan-2011 12:03:16] <jplouis> Jane: The same is also true for components
[20-Jan-2011 12:03:18] <Sam-I-Am> 3.0 just didnt seem as polished when i did my testing
[20-Jan-2011 12:03:19] <nyeates> if your doing new install, why not go to 3.x.....already started i guess?
[20-Jan-2011 12:03:19] <Jane_Curry> jplouis: but the interfaces.py file can be a a "barebones" effectively dummy with no attributes in it (sez Chap 14 of the Dev Guide)
[20-Jan-2011 12:04:00] <Sam-I-Am> nyeates: i think i tested with 3.0.1, which was... kinda meh
[20-Jan-2011 12:04:11] <Jane_Curry> jplouis: it is the info.py that actually has the attributes with the ProxyProperty in it
[20-Jan-2011 12:04:44] <Simon4> we still have massive performance issues with the 3.x series so are still on 2.5 for our main install
[20-Jan-2011 12:04:56] <jplouis> Jane: yes, the interface is more of a definition with zope schema information in it
[20-Jan-2011 12:05:08] <Jane_Curry> jplouis: See my confusion??
[20-Jan-2011 12:05:12] * Simon4 has a secondary 3.0.3 install though which can wear some 3.1 goodness
[20-Jan-2011 12:05:20] <Simon4> and I can test it against a backup of our epic install
[20-Jan-2011 12:05:29] <jplouis> the schema information helps generate the form field with the appropriate types, boolean, ints, strings etc...
[20-Jan-2011 12:05:55] <jplouis> along with the appropriate types it allows the UI to do basic validation based on the types
[20-Jan-2011 12:06:35] <nyeates> Simon4: we will definetely use your valuable input
[20-Jan-2011 12:06:49] <Jane_Curry> jplouis: and If I have a "barebones" interfaces.py then the real schema work is done is my JavaScript resources file???????
[20-Jan-2011 12:07:27] themactech_ is now known as themactech
[20-Jan-2011 12:07:38] <jplouis> can you give me an example of barebones?
[20-Jan-2011 12:07:39] <nyeates> stack 3.1 beta is posible...i will get to each of you in a min
[20-Jan-2011 12:09:05] <rmatte> nyeates: I'm working on patching the stupid looking path names for template overriding (and the root path missing), I'll submit a patch for it via a trac ticket when I'm done
[20-Jan-2011 12:10:19] <rmatte> I think I finally got the paths displaying properly... the root path will be interesting
[20-Jan-2011 12:11:14] <phonegi> jane: maybe this helps a little regarding interfaces:
[20-Jan-2011 12:11:22] <nyeates> if there is a ticket for this, submit the changes in comments....or create a new tick if not existing
[20-Jan-2011 12:11:32] <nyeates> ^ to rmatte
[20-Jan-2011 12:12:06] <rmatte> will do
[20-Jan-2011 12:12:16] <rmatte> I don't know if there's a ticket yet, I'll have to hunt
[20-Jan-2011 12:12:18] <rmatte>
[20-Jan-2011 12:12:31] <Jane_Curry> jplouis: page 85 of new doc at
[20-Jan-2011 12:12:52] <Jane_Curry> largely taken from Chap 14 of Dev Guide
[20-Jan-2011 12:13:11] <Jane_Curry> stack beta is good
[20-Jan-2011 12:14:25] <phonegi> jane: p85 - for field is defined like a python import. To make the adapter "for" the class A defined in file F.py, it would be for ".F.A"
[20-Jan-2011 12:16:05] <jplouis> Jane: The interface should generally not be empty, regardless of what the dev guid says. Any properties you are exposing in the info object should be "defined" in the corresponding interface. Nothing enforces it but it is good practice and like I mentioned earlier the schema information in the interface allows zenoss to glean information about the properties and display the correctly
[20-Jan-2011 12:16:42] <Jane_Curry> phonegi: thanks for interfaces ref - looks good. Now need to find a similar common sense answer to info.py
[20-Jan-2011 12:17:37] <phonegi> Jane: info.py abstracts info that will be displayed to the user from data saved as part of the component
[20-Jan-2011 12:17:47] <jplouis> Jane: sorry if I'm being obtuse, I really want to answer your question
[20-Jan-2011 12:20:29] <jplouis> The infos are zope adapters that proxy domain objects. And as phonegi said, the info objects are what is used to display data to the users
[20-Jan-2011 12:21:17] <nyeates> Thanks all for the dev chat....this concludes it! See you all in 2 weeks. Same bat time, same bat channel.
[20-Jan-2011 12:21:24] <jplouis> We use infos for domain objects all over the place from, components to devices to datasources
[20-Jan-2011 12:21:38] <phonegi> Jane: It allows you to write code for display that does not need to be part of the class definition
[20-Jan-2011 12:22:04] <Jane_Curry> phonegi: thanks - penny drops about "for" field!
[20-Jan-2011 12:22:46] <Jane_Curry> jplouis: Sorry - not accusing you of being obtuse at all. I am slow on these things....
[20-Jan-2011 12:23:13] <jplouis> I just know we devs can be obtuse and too technical
[20-Jan-2011 12:24:15] <jplouis> got to go, I'll look at your questions on the forum
[20-Jan-2011 12:25:42] <phonegi> Thanks all! Jane: will be in touch.
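The "info proxies a domain object" idea discussed in the chat above can be sketched in plain Python. This is only an illustration of the pattern, not Zenoss code: the class names (ProxyProperty, DeviceInfo, Device) are invented stand-ins for Zenoss's actual ProxyProperty and info adapters.

```python
class ProxyProperty:
    """Descriptor that forwards an attribute read to the wrapped domain object."""
    def __init__(self, name):
        self._name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj._object, self._name)


class Device:
    """Stand-in for a domain object that stores the real data."""
    def __init__(self, name, manageIp):
        self.name = name
        self.manageIp = manageIp


class DeviceInfo:
    """An 'info' adapter: exposes display attributes without being part of
    the domain class definition, as phonegi describes above."""
    name = ProxyProperty("name")
    ip = ProxyProperty("manageIp")

    def __init__(self, domain_object):
        self._object = domain_object


info = DeviceInfo(Device("router1", "10.0.0.1"))
print(info.name, info.ip)
```

In Zenoss itself the binding of info to domain object is declared in configure.zcml and the attribute types come from zope.schema fields on the interface; the descriptor above only mimics the proxying half of that arrangement.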
http://community.zenoss.org/docs/DOC-10281
CC-MAIN-2014-35
refinedweb
5,697
62.41
Connect Using the SAP HANA .NET Core Interface

You will learn
- How to install the .NET Core SDK
- How to create and debug a .NET Core application that queries an SAP HANA database

Prerequisites
- You have completed the first 3 tutorials in this mission.

.NET Core is a free and open-source software framework for Microsoft Windows, Linux and Mac operating systems and is the successor to the .NET Framework. The first step is to check whether you have the .NET Core SDK installed and what version it is. Enter the following command:

dotnet --version

If the dotnet command is not recognized, the .NET Core SDK has not been installed. If the SDK is installed, the command returns the currently installed version, such as 3.1.202. If the .NET Core SDK is not installed, download it from Download .NET and run the installer on Microsoft Windows or Mac.

Note: Select the 'Build Apps: Download .NET Core SDK' option.

On Linux, follow the instructions for the appropriate Linux version, such as openSUSE 15 Package Manager - Install .NET Core. In order for the shell to recognize that the .NET Core SDK has been installed, and for any dotnet commands in future steps to be recognized, a new shell window needs to be opened. For further details on supported versions, see SAP Note 2939501 - SAP HANA Client Supported Platforms for 2.5 and later.

Create a new console app with the commands below:

cd %HOMEPATH%/HANAClientsTutorial
dotnet new console -o dotNET

export HDBDOTNETCORE=/home/dan/sap/hdbclient/dotnetcore
cd $HOME/HANAClientsTutorial
dotnet new console -o dotNET

On Linux or Mac, modify the HDBDOTNETCORE variable to point to the location of the libadonetHDB.so or libadonetHDB.dylib file.

Open the dotNET.csproj file:

cd dotNET
notepad dotNET.csproj

cd dotNET
pico dotNET.csproj

Add the following below the PropertyGroup section to indicate where to load the SAP HANA Client .NET Core driver from. Modify the HintPath section with the information about where the dll is located on your machine.
<ItemGroup>
  <Reference Include="Sap.Data.Hana.Core.v2.1">
    <HintPath>C:\SAP\hdbclient\dotnetcore\v2.1\Sap.Data.Hana.Core.v2.1.dll</HintPath>
  </Reference>
</ItemGroup>

<ItemGroup>
  <Reference Include="Sap.Data.Hana.Core.v2.1">
    <HintPath>/home/dan/sap/hdbclient/dotnetcore/v2.1/Sap.Data.Hana.Core.v2.1.dll</HintPath>
  </Reference>
</ItemGroup>

Once the dotNET.csproj file has been updated, save and close the file. The SAP HANA Client interface for .NET Core is compatible with the 2.1 and 3.x releases of .NET Core.

Open an editor to edit the file Program.cs:

notepad Program.cs

pico Program.cs

Replace the entire contents of Program.cs with the code below:

using System;
using Sap.Data.Hana;

namespace dotNETQuery
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                // User1UserKey retrieved from hdbuserstore contains server:port, UID and PWD
                // encrypt must be true when connecting to HANA Cloud
                // If hdbuserstore is not used to retrieve the connection information, the format would be
                // "Server=10.7.168.11:39015;UID=User1;PWD=Password1;encrypt=true;sslValidateCertificate=false"
                using (var conn = new HanaConnection("key=User1UserKey;encrypt=true;sslValidateCertificate=false"))
                {
                    conn.Open();
                    Console.WriteLine("Connected");
                    var query = "SELECT TITLE, FIRSTNAME, NAME FROM HOTEL.CUSTOMER";
                    using (var cmd = new HanaCommand(query, conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var sbRow = new System.Text.StringBuilder();
                            for (var i = 0; i < reader.FieldCount; i++)
                            {
                                sbRow.Append(reader[i].ToString().PadRight(20));
                            }
                            Console.WriteLine(sbRow.ToString());
                        }
                    }
                    conn.Close();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error - " + ex.Message);
                Console.WriteLine(ex.ToString());
            }
        }
    }
}

Save and close the Program.cs file after replacing the code. Note that the address, port, UID and PWD will be retrieved from the hdbuserstore. The above app makes use of some of the SAP HANA client .NET Core driver methods, such as HanaConnection. Connection details for this class can be found at Microsoft ADO.NET Connection Properties.
Run the app (make sure you are in the directory where Program.cs is saved):

dotnet run

If you have not already done so, download Visual Studio Code. If you have not already done so, in Visual Studio Code, choose File | Add Folder to Workspace, and then add the HANAClientsTutorial folder.

Open the file Program.cs. Visual Studio Code will recognize the .cs file extension and will suggest installing the C# for Visual Studio Code extension. Click Install.

Place a breakpoint. Select Run | Start Debugging. Choose the .NET Core environment. Notice that the debug view becomes active and that the RUN option is .NET Core Launch. Notice that the program stops running at the breakpoint that was set. Observe the variable values in the leftmost pane. Step through the code.

For further information on debugging .NET Core apps, consult Tutorial: Debug a .NET Core console application using Visual Studio Code and Instructions for setting up the .NET Core debugger.

Congratulations! You have now created and debugged a .NET Core application that connects to and queries an SAP HANA database.
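The tutorial above is C#-based, but the same query can be issued from Python with SAP's hdbcli driver (pip install hdbcli), which may help when testing the connection outside the .NET toolchain. The host, port, and credentials below are placeholders to substitute, and format_row simply mirrors the C# PadRight(20) loop.

```python
def format_row(values, width=20):
    """Pad each column value to a fixed width, like the C# PadRight(20) loop."""
    return "".join(str(v).ljust(width) for v in values)


def query_customers():
    from hdbcli import dbapi  # SAP's Python driver: pip install hdbcli
    # Placeholder connection details -- substitute your own endpoint.
    # encrypt=True is required when connecting to HANA Cloud.
    conn = dbapi.connect(address="<host>", port=443, user="User1",
                         password="Password1", encrypt=True,
                         sslValidateCertificate=False)
    cur = conn.cursor()
    cur.execute("SELECT TITLE, FIRSTNAME, NAME FROM HOTEL.CUSTOMER")
    for row in cur.fetchall():
        print(format_row(row))
    conn.close()


# Call query_customers() after filling in real connection details.
print(format_row(("Mr", "Peter", "Brown")))
```

Unlike the C# version, this sketch hard-codes the endpoint rather than reading it from hdbuserstore; keeping credentials out of source is still preferable where possible.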
https://developers.sap.com/tutorials/hana-clients-dot-net-core.html
CodePlexProject Hosting for Open Source Software It appears that Prism does not provide guidance for a logging API. What are people using for logging? I would like to choose an API which allows plugging in different logger implementations. Thanks. Naresh Hi Naresh, You can find examples of the use of logging in Prism in the StockTrader Reference Implementation, as well as some QuickStarts like the Modularity Quickstart. Also, you can find this article in which there is information about logging in Prism. Prism uses the Facade pattern for logging, so there is an ILoggerFacade interface available in the Prism Library. I hope you find this helpful. Guido Leandro Maliandi Thanks Guido. I had completely missed the ILoggerFacade in my Prism reading. BTW, your "this article" link is pointing to the Microsoft.Practices.Prism.Logging namespace. Which article were you trying to show me? I intended to show the Microsoft.Practices.Prism.Logging namespace itself, so that you could take a look at the logging infrastructure provided by the Prism Library. Sorry for the confusing wording.
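The facade idea behind Prism's ILoggerFacade (a C# interface) can be sketched in Python purely as an illustration of the pattern the thread describes; the names below are invented for the sketch, not Prism APIs. Application code logs against one tiny interface, and any backend implementing it can be plugged in.

```python
class LoggerFacade:
    """Minimal logging facade: the only logging surface application code sees."""
    def log(self, message, category, priority):
        raise NotImplementedError


class ConsoleLogger(LoggerFacade):
    """One pluggable backend; a file or network logger could replace it."""
    def __init__(self):
        self.lines = []

    def log(self, message, category, priority):
        line = f"[{category}/{priority}] {message}"
        self.lines.append(line)
        print(line)


def do_work(logger: LoggerFacade):
    # Application code depends only on the facade, never on a concrete logger.
    logger.log("module loaded", "Info", "Low")


logger = ConsoleLogger()
do_work(logger)
```

Swapping implementations then means constructing a different LoggerFacade subclass and passing it in, which is the "plugging in different logger implementations" the original poster asked about.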
http://compositewpf.codeplex.com/discussions/238350
sorry it didnt come out properly My name Guru, Could you tell me your code the one for my bank account? didn't think so. otherwise, what code are you requesting in this thread? if it's free java code you're after, you might first want to check the forum rules, before you get the feeling that people are rude towards you this is my code am not sure whats wrong with it its not printing the name,how much,the hour in/out,the minute in/out and not letting me do anything just says this and its supposed t obe in java java.lang.NoClassDefFoundError: PublicGarage Exception in thread "main" ----jGRASP wedge2: exit code for process is 1. ----jGRASP: operation complete. import java.util.Scanner; public class PublicGarage { public static void main(String [] main) { char repeat; //To hold Y,y,N,v,C,c,V,v double hour = 0.0; //To hold hours between 0-23 double minute = 0.0; //To hold minutes between 0-59 double cost = 0.0; String customerName; //To hold input Scanner keyboard=new Scanner(System.in); System.out.printf("Type of vehicle? 
C for Car, V for Van"); choice=keyboard.nextcharAt(0); System.out.printf("Enter arrival hour(0-23)"); scanf("%2d", hour_in); System.out.printf("\nEnter arrival minute(0-59)"); scanf("%d", minute_in); System.out.printf("\nEnter exit hour(0-23)"); scanf("%2d", hour_out); System.out.printf("\nEnter exit minute(0-59)"); scanf("%d", minute_out); System.out.println("************"); customerName=FullName("NameGarage"); System.out.println(customerName); System.out.println("************"); System.out.println("Your charge is" + cost); double totalTime = ( hour * 60.0 ) + minute; if( type.equals("van")) { if( totalTime >= 514 ) // more than 8.57 hours cost = 30.0; else if( totalTime < 514 && totalTime > 0) cost = 3.50 * totalTime; // 3.50 per hour } if( type.equals("car")) { if( totalTime >= 600) // more than 10 hours cost = 20.0; else if( totalTime < 600 && totalTime > 0) cost = 2.00 * totalTime; // 2.00 per hour } System.out.println("Enter y for a new customer or N to stop"); string=keyboard.nextLine(); choice=keyboard.charAt(0); while(choice=='Y' || choice=='y') } } Edited 3 Years Ago by mike_2000_17: Fixed formatting Is that code in a file called "PublicGarage.java"? This is just a class path issue or a file naming issue. Try compiling your program with "-classpath" keyword. e.g. javac -classpath <classpath> PublicGarage.java ...
https://www.daniweb.com/programming/software-development/threads/113447/hi-am-new-to-java-and-am-stuck-with-program-am-not-sure-what-is-wrong
Reflex is ROOT's (or actually CINT's) reflection database. It knows everything about all classes, types, and members in the system. We are currently in the process of a major update of Reflex, including changes in the API and its internal structures. Bear with us - you will get proper documentation once this upgrade is finished. Until then this is what you need to know:

Reflex is a container - it does not know anything by itself. You can fill it using genreflex, a python script wrapping GCCXML; rootcint -reflex; or regular rootcint and Cintex, by calling Cintex::Enable(). In the near future, CINT will use Reflex directly; any dictionary loaded by CINT will automatically populate Reflex.

The main user interface to the Reflex data consists of Reflex::Scope, Reflex::Type, and Reflex::Member. Note that conversions exist, e.g. Reflex::Member is both a scope and a type, thus a Reflex::Scope representing Reflex::Member can be converted to a Reflex::Type. Just check the list of member functions for these types - they are easy to understand.

using namespace Reflex; // save typing
Scope s = Scope::ByName("Reflex::Scope"); // the scope containing the member we want to query
Member m = s.FunctionMemberByName("SubTypeByName"); // get the function member
Type mt = m.TypeOf(); // we need its type
Type arg = mt.FunctionParameterAt(0); // the type of the first function parameter
cout << arg.Name(SCOPED | QUALIFIED) << endl; // print the scoped and qualified name
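For comparison only — this is not Reflex itself — the same kind of reflective query (look up a member function, then inspect its first parameter) can be written against Python's stdlib inspect module; the toy Scope class below just stands in for a class being introspected.

```python
import inspect


class Scope:
    """Toy stand-in mirroring the Reflex::Scope example above."""
    def SubTypeByName(self, name: str):
        return None


member = getattr(Scope, "SubTypeByName")      # look up the function member
sig = inspect.signature(member)               # its "type": the call signature
params = [p for p in sig.parameters.values() if p.name != "self"]
first = params[0]                             # the first real parameter
print(first.name, first.annotation.__name__)  # parameter name and its type
```

The parallel is loose — Python attaches reflection data to live objects, whereas Reflex is a separate database populated from dictionaries — but the query shape (scope → member → type → parameter) is the same.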
http://root.cern.ch/root/htmldoc/CINT_REFLEX_Index.html
PyScons 1.0.87

An extension to Scons which enables dependency tracking on python script imports.

=======
PyScons
=======

PyScons is a tool which works with Scons_. It is installed into a new environment with either of the two commands::

    from pyscons import PYTOOL
    env = Environment(tools = ['default', PYTOOL()])

or::

    from pyscons import PYTOOL
    env = Environment()
    PYTOOL()(env)

This does three things:

1. Installs a builder: PyScript.
2. Installs a builder: PyScons.
3. Installs a new scanner for python scripts.

PyScript
--------

This Builder runs python scripts and modules.

First, it will automatically find the ".py" files referred to when running a module as a script with the '-m' option. For example the following code will run a module as a script and add the appropriate files to the dependencies::

    env.PyScript("out", ["-m timeit", "myscript.py"], "python $SOURCES > $TARGET")

Second, it defaults the command string to "python $SOURCES" and, using the "capture" keyword argument, can automatically append the appropriate strings to capture the output or error (or both) to the targets::

    env.PyScript("out", ["-m timeit", "myscript.py"], capture="output")

or to capture both the output and error::

    env.PyScript(["out","err"], ["-m timeit", "myscript.py"], capture="both")

Just like Command, multiple steps can be used to create a file::

    env.PyScript("out", ["step1.py", "step2.py"],
        ["python ${SOURCES[0]} > temp", "python ${SOURCES[1]} > $TARGET", Delete("temp")])

PyScons (experimental)
----------------------

This Builder enables running a python script as if it is a scons script. This is distinct from SConscript which functions like an include. Instead, PyScons spawns a new scons process. Spawning a new process allows for temporary files to be automatically deleted without triggering a rebuild.
To use this builder, create a .py file with, for example, the following code in a file (my_scons.py)::

    from pyscons import PySconsSetup
    targets, sources, basename = PySconsSetup()
    temp = basename + ".temp"

    PyScript(temp, ["step1.py"] + sources, capture="out")
    PyScript(targets, ["step2.py", temp], capture="out")

Now, this file can be called from a SConstruct file like so::

    PyScons(targets, sources, "my_scons.py", options = "-j4")

The string in the options keyword is NOT added to the command signature. Options that do affect the output should be added to the sig_options keyword, and these will be added to the signature::

    PyScons(targets, sources, "my_scons.py", options = "-j4", sig_options = "--critical_opt")

The temp file will be generated if it is required to generate targets, but will be immediately deleted. This is useful for builders which generate large intermediate files which should be deleted without triggering a rebuild.

This can be better than passing a list to the Command function for a few special cases:

1. PyScons enables parallel execution of a multistep submodule (if you pass the -j option to the spawned scons)

2. PyScons creates a workflow environment (like Pipeline Pilot) in scons which enables complex tasks to be packaged in scons files for use in other scons files.

3. PyScons can turn intermediate file deletion on and off with a single flag::

    PyScons(targets, sources, "my_scons.py", clean = True)  # intermediate file deleted
    PyScons(targets, sources, "my_scons.py", clean = False) # intermediate file retained

4. PyScons ignores the "options" parameter when constructing the command's signature, enabling you to change parameters (e.g. the -j number of procs) without triggering a rebuild.

Unfortunately, dependency tracking does not propagate up from the spawned scons. In this example, "step1.py" and "step2.py" will not be tracked and changes to them will not trigger a rebuild.
There is a trick around this, add the following two lines to "my_scons.py":: ### step1.py #DEPENDS step2.py These two comments illustrate the two ways of explicetely including the dependency on the two scripts used on the scons file. To help distinguish files which are to be run in this ways (being called by PyScons), they may be given the extensions ".scons" or ".pyscons" as well. In this example, this would amount to renaming "my_scons.py" to "my_scons.scons" PyScanner --------- This scanner uses the modulefinder module to find all import dependencies in all sources with a 'py' extension. It can take two options in its constructor: 1. filter_path: a callable object (or None) which takes a path as input and returns true if this file should be included as a dependency by scons, or false if it should be ignored. By default, this variable becomes a function which ensures that no system python modules or modules from site-packages are tracked. To track all files, use "lambda p: True". 2. recursive: with a true (default) or false, it enables or disables recursive dependency tracking. For example to track all files (including system imports) in a nonrecursive scanner, use the following install code in your SConstruct:: from pyscons import PYTOOL env = Environment(tools = ['default', PYTOOL(recursive = False, filter_path = lambda p: True)]) Known Issues ------------ Relative imports do not work. This seems to be a bug in the modulefinder package that I do not know how to fix. Author ------ S. Joshua Swamidass (homepage_) .. _Scons: - Author: S. 
Joshua Swamidass - Categories - Development Status :: 5 - Production/Stable - Intended Audience :: Developers - Intended Audience :: Science/Research - License :: Free for non-commercial use - License :: OSI Approved :: MIT License - Natural Language :: English - Programming Language :: Python - Topic :: Software Development :: Build Tools - Topic :: System :: Installation/Setup - Package Index Owner: sswamida - DOAP record: PyScons-1.0.87.xml
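As a small addendum to the PyScanner notes above: a concrete filter_path callable (replacing the default site-packages filter) might restrict dependency tracking to one project tree. The factory name and root path here are illustrative, not part of PyScons.

```python
import os


def make_project_filter(root):
    """Return a filter_path callable that tracks only files under `root`."""
    root = os.path.abspath(root)

    def filter_path(path):
        # True -> scons records this file as a dependency; False -> ignored.
        return os.path.abspath(path).startswith(root + os.sep)

    return filter_path


# In a SConstruct this would be wired up roughly as:
# env = Environment(tools=['default', PYTOOL(filter_path=make_project_filter('.'))])
track = make_project_filter("/proj")
print(track("/proj/src/app.py"), track("/usr/lib/python3/os.py"))
```

This keeps system imports out of the dependency graph, like the default behavior, but with an explicit, testable boundary.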
https://pypi.python.org/pypi/PyScons/1.0.87
Type: Posts; User: aupres I try to invoke oracle pro*c in C++/CLI project. This is my sample code. #include "stdafx.h" using namespace System; using namespace System::IO; using namespace System::Diagnostics; void... Pls, check my codes. I added app.config file into project. codes are <configuration> <connectionStrings> <add name ="SQLConnection" providerName="Oracle.DataAccess.Client" ... I developed Web Services program in WCF with VS 2008 C++/CLI on server side and deployed it. But I have no idea how to generate wsdl file. My template is Visual C++ CLR console application. So i... Hi! I am Java programmer in south korea. I want to know how to link IIS 6.0 and Tomcat 5.0. I found reference site in "Apache " site, but the html file is written in GERMAN!!! Please kindly inform...
http://forums.codeguru.com/search.php?s=c77e1238b127011f819807ea7cc879a1&searchid=6151521
This section contains the following topics: Overview of Containers in a CDB Overview of Services in a CDB Overview of Commonality in a CDB Overview of Database Files in a CDB A container is a collection of schemas, objects, and related structures in a multitenant container database (CDB) that appears logically to an application as a separate database. This section contains the following topics: Data Dictionary Architecture in a CDB Cross-Container Operations The root container, also called the root, is a collection of schemas, schema objects, and nonschema objects to which all PDBs belong. Every CDB has one and only one root container, named CDB$ROOT, which stores the system metadata required to manage PDBs. All PDBs belong to the root. The root does not store user data. Thus, you must not add user data to the root or modify system-supplied schemas in the root. However, you can create common users and roles for database administration (see "Common Users in a CDB"). A common user with the necessary privileges can switch between PDBs. See Also: Oracle Database Administrator's Guide A PDB is a user-created set of schemas, objects, and related structures that appears logically to an application as a separate database. Every PDB is owned by SYS, which is a common user in the CDB (see "Common Users in a CDB"), regardless of which user created the PDB. This section contains the following topics: Scope for Names and Privileges in PDBs Database Links Between PDBs You can use PDBs to achieve the following goals: Store data specific to a particular application For example, a sales application can have its own dedicated PDB, and a human resources application can have its own dedicated PDB. Move data into a different CDB A database is "pluggable" because you can package it as a self-contained unit, and then move it into another CDB. Isolate grants within PDBs A local or common user with appropriate privileges can grant EXECUTE privileges on a package to PUBLIC within an individual PDB.
PDBs must be uniquely named within a CDB, and follow the same naming rules as service names. Moreover, because a PDB has a service with its own name, a PDB name must be unique across all CDBs whose services are exposed through a specific listener. The first character of a user-created PDB name must be alphanumeric, with remaining characters either alphanumeric or an underscore ( _). Because service names are case-insensitive, PDB names are case-insensitive, and are in upper case even if specified using delimited identifiers. See Also: Oracle Database Net Services Reference for the rules for service names PDBs have separate namespaces, which has implications for the following structures: Schemas A schema contained in one PDB may have the same name as a schema in a different PDB. These two schemas may represent distinct local users, distinguished by the PDB in which the user name is resolved at connect time, or a common user (see "Overview of Common and Local Users in a CDB"). Objects An object must be uniquely named within a PDB, not across all containers in the CDB. This is true both of schema objects and nonschema objects. Identically named database objects and other dictionary objects contained in different PDBs are distinct from one another. An Oracle Database directory is an example of a nonschema object. In a CDB, common user SYS owns directories. Because each PDB has its own SYS schema, directories belong to a PDB by being created in the SYS schema of the PDB. During name resolution, the database consults only the data dictionary of the container to which the user is connected. This behavior applies to object names, the PUBLIC schema, and schema names. In a CDB, all database objects reside in a schema, which in turn resides in a container. Because PDBs appear to users as non-CDBs, schemas must be uniquely named within a container but not across containers. For example, the rep schema can exist in both salespdb and hrpdb. 
The two schemas are independent (see Figure 18-7 for an example). A user connected to one PDB must use database links to access objects in a different PDB. This behavior is directly analogous to a user in a non-CDB accessing objects in a different non-CDB. See Also: Oracle Database Administrator's Guide to learn how to access objects in other PDBs using database links From the user and application perspective, the data dictionary in each container in a CDB is separate, as it would be in a non-CDB. For example, the DBA_OBJECTS view in each PDB can show a different number of rows. This dictionary separation enables Oracle Database to manage the PDBs separately from each other and from the root. In a newly created non-CDB that does not yet contain user data, the data dictionary contains only system metadata. For example, the TAB$ table contains rows that describe only Oracle-supplied tables, for example, TRIGGER$ and SERVICE$. The following graphic depicts three underlying data dictionary tables, with the red bars indicating rows describing the system. Figure 18-1 Unmixed Data Dictionary Metadata in a Non-CDB If users create their own schemas and tables in this non-CDB, then the data dictionary now contains some rows that describe Oracle-supplied entities, and other rows that describe user-created entities. For example, the TAB$ dictionary table now has a row describing employees and a row describing departments. Figure 18-2 Mixed Data Dictionary Metadata in a Non-CDB In a CDB, the data dictionary metadata is split between the root and the PDBs. In the following figure, the employees and departments tables reside in a PDB. The data dictionary for this user data also resides in the PDB. Thus, the TAB$ table in the PDB has a row for the employees table and a row for the departments table. Figure 18-3 Data Dictionary Architecture in a CDB The preceding graphic shows that the data dictionary in the PDB contains pointers to the data dictionary in the root. 
Internally, Oracle-supplied objects such as data dictionary table definitions and PL/SQL packages are represented only in the root. This architecture achieves two main goals within the CDB: Reduction of duplication For example, instead of storing the source code for the DBMS_ADVISOR PL/SQL package in every PDB, the CDB stores it only in CDB$ROOT, which saves disk space. Ease of database upgrade If the definition of a data dictionary table existed in every PDB, and if the definition were to change in a new release, then each PDB would need to be upgraded separately to capture the change. Storing the table definition only once in the root eliminates this problem. The CDB uses an internal mechanism to separate data dictionary information. Specifically, Oracle Database uses the following automatically managed pointers: Metadata links Oracle Database stores metadata about dictionary objects only in the root. For example, the column definitions for the OBJ$ dictionary table, which underlies the DBA_OBJECTS data dictionary view, exist only in the root. As depicted in Figure 18-3, the OBJ$ table in each PDB uses an internal mechanism called a metadata link to point to the definition of OBJ$ stored in the root. The data corresponding to a metadata link resides in its PDB, not in the root. For example, if you create table mytable in hrpdb and add rows to it, then the rows are stored in the PDB files. The data dictionary views in the PDB and in the root contain different rows. For example, a new row describing mytable exists in the OBJ$ table in hrpdb, but not in the OBJ$ table in the root. Thus, a query of DBA_OBJECTS in the root and DBA_OBJECTS in hrpdb shows different result sets. Object links In some cases, Oracle Database stores the data (not metadata) for an object only once in the root. For example, AWR data resides in the root.
Each PDB uses an internal mechanism called an object link to point to the AWR data in the root, thereby making views such as DBA_HIST_ACTIVE_SESS_HISTORY and DBA_HIST_BASELINE accessible in each separate container. Oracle Database automatically creates and manages object and metadata links. Users cannot add, modify, or remove these links. A container data object is a table or view containing data pertaining to multiple containers, and possibly to the CDB as a whole. All container data objects have a CON_ID column. The following table shows the meaning of the values for this column. Table 18-1 Container ID Values In a CDB, for every DBA_ view, a corresponding CDB_ view exists. The owner of a CDB_ view is the owner of the corresponding DBA_ view. The following figure shows the relationship among the different categories of dictionary views. Figure 18-4 Dictionary View Relationships in a CDB When the current container is a PDB, a user can view data dictionary information for the current PDB only. To an application connected to a PDB, the data dictionary appears as it would for a non-CDB. When the current container is the root, however, a common user can query CDB_ views to see metadata for the root and for PDBs for which this user is privileged. The following table shows a scenario involving queries of CDB_ views. Each row describes an action that occurs after the action in the preceding row. Table 18-2 Querying CDB_ Views The data dictionary that stores the metadata for the CDB as a whole is stored only in the system tablespaces. The data dictionary that stores the metadata for a specific PDB is stored in the self-contained tablespaces dedicated to this PDB. The PDB tablespaces contain both the data and metadata for an application back end. Thus, each set of data dictionary tables is stored in its own dedicated set of tablespaces. For a given session, the current container is the one in which the session is running. The current container can be the root (for common users only) or a PDB. Each session has exactly one current container at any point in time.
Because the data dictionary in each container is separate, Oracle Database uses the data dictionary in the current container for name resolution and privilege authorization. See Also: Oracle Database Administrator's Guide to learn more about the current container A cross-container operation is a DDL statement that affects any of the following: The CDB itself Multiple containers Multiple entities such as common users or common roles that are represented in multiple containers A container different from the one to which the user issuing the DDL statement is currently connected Only a common user connected to the root can perform cross-container operations (see "Common Users in a CDB"). Examples include user SYSTEM granting a privilege commonly to another common user (see "Roles and Privileges Granted Commonly in a CDB"), and an ALTER DATABASE . . . RECOVER statement that applies to the entire CDB. See Also: Oracle Database Administrator's Guide Clients must connect to PDBs using services. A connection using a service name starts a new session in a PDB. A foreground process, and therefore a session, at every moment of its lifetime, has a uniquely defined current container. The following figure shows two clients connecting to PDBs using two different listeners. Figure 18-5 Services in a CDB When you create a PDB, the database automatically creates and starts a service inside the CDB. The service has a property, shown in the DBA_SERVICES.PDB column, that identifies the PDB as the initial current container for the service. The service has the same name as the PDB. The PDB name must be a valid service name, and must be unique within the CDB. For example, in Figure 18-5 the PDB named hrpdb has a default service named hrpdb. The default service must not be dropped. You can create additional services for each PDB. Each additional service denotes its PDB as the initial current container. In Figure 18-5, nondefault services exist for erppdb and hrpdb.
Create, maintain, and drop additional services using the same techniques that you use in a non-CDB. Note: When two or more CDBs on the same computer system use the same listener, and two or more PDBs have the same service name in these CDBs, a connection that specifies this service name connects randomly to one of the PDBs with the service name. To avoid incorrect connections, ensure that all service names for PDBs are unique on the computer system, or configure a separate listener for each CDB on the computer system. See Also: Oracle Database Administrator's Guide to learn how to manage services associated with PDBs A CDB administrator with the appropriate privileges can connect to any container in the CDB. The administrator can use either of the following techniques: Use the ALTER SESSION SET CONTAINER statement, which is useful for both connection pooling and advanced CDB administration, to switch between containers. For example, a CDB administrator can connect to the root in one session, and then in the same session switch to a PDB. In this case, the user requires the SET CONTAINER system privilege in the container. Connect directly to a PDB. In this case, the user requires the CREATE SESSION privilege in the container. Table 18-3 describes a scenario involving the CDB in Figure 18-5. Each row describes an action that occurs after the action in the preceding row. Common user SYSTEM queries the name of the current container and the names of PDBs in the CDB. Table 18-3 Services in a CDB See Also: Oracle Database Administrator's Guide to learn how to connect to PDBs In a CDB, the basic principle of commonality is that a common phenomenon is the same in every existing and future container. In a CDB, "common" means "common to all containers." In contrast, a local phenomenon is restricted to exactly one existing container. A corollary to the principle of commonality is that only a common user can alter the existence of common phenomena. 
More precisely, only a common user connected to the root can create, destroy, or modify CDB-wide attributes of a common user or role. This section contains the following topics: Overview of Common and Local Users in a CDB Overview of Common and Local Roles in a CDB Overview of Privilege and Role Grants in a CDB Overview of Common Audit Configurations Every user that owns objects that define the database is common. User-created users are either local or common. Figure 18-6 shows the possible user types in a CDB. Figure 18-6 Users in a CDB A common user is a database user that has the same identity in the root and in every existing and future PDB. Every common user can connect to and perform operations within the root, and within any PDB in which it has privileges. Every common user is either Oracle-supplied or user-created. Examples of Oracle-supplied common users are SYS and SYSTEM. Figure 18-7 shows sample users and schemas in two PDBs: hrpdb and salespdb. SYS and c##dba are common users who have schemas in CDB$ROOT, hrpdb, and salespdb. Local users hr and rep exist in hrpdb. Local users hr and rep also exist in salespdb. Figure 18-7 Users and Schemas in a CDB Common users have the following characteristics: A common user can log in to any container (including CDB$ROOT) in which it has the CREATE SESSION privilege. A common user need not have the same privileges in every container. For example, the c##dba user may have the privilege to create a session in hrpdb and in the root, but not to create a session in salespdb. Because a common user with the appropriate privileges can switch between containers, a common user in the root can administer PDBs. The name of every user-created common user must begin with the characters c## or C##. (Oracle-supplied common user names do not have this restriction.) No local user name may begin with the characters c## or C##. The names of common users must contain only ASCII or EBCDIC characters. 
Every common user is uniquely named across all containers. A common user resides in the root, but must be able to connect to every PDB with the same identity. The schemas for a common user can differ in each container. For example, if c##dba is a common user that has privileges on multiple containers, then the c##dba schema in each of these containers may contain different objects.

See Also: Oracle Database Security Guide to learn about common and local accounts

A local user is a database user that is not common and can operate only within a single PDB. Local users have the following characteristics:

- A local user is specific to a particular PDB and may own a schema in this PDB. In Figure 18-7, local user hr on hrpdb owns the hr schema. On salespdb, local user rep owns the rep schema, and local user hr owns the hr schema.
- A local user can administer a PDB, including opening and closing it. A common user with SYSDBA privileges can grant SYSDBA privileges to a local user. In this case, the privileged user remains local.
- A local user in one PDB cannot log in to another PDB or to the CDB root. For example, when local user hr connects to hrpdb, hr cannot access objects in the sh schema that reside in the salespdb database without using a database link. In the same way, when local user sh connects to the salespdb PDB, sh cannot access objects in the hr schema that resides in hrpdb without using a database link.

Figure 18-7 shows that a local user and schema named rep exist on hrpdb. A completely independent local user and schema named rep exist on the salespdb PDB.

Table 18-4, Local Users in a CDB, describes a scenario involving the CDB in Figure 18-7. Each row describes an action that occurs after the action in the preceding row; in the scenario, common user SYSTEM creates local users in two PDBs.

See Also: Oracle Database Security Guide to learn about local user accounts

Every Oracle-supplied role is common.
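Before moving on to roles, here is a sketch of creating one user of each kind (the user names and the password placeholder are illustrative):

```sql
-- In the root: a common user, represented in every existing and future
-- container. The user-created name must begin with c##.
ALTER SESSION SET CONTAINER = CDB$ROOT;
CREATE USER c##dba IDENTIFIED BY password CONTAINER=ALL;

-- In one PDB: a local user, specific to that PDB only.
ALTER SESSION SET CONTAINER = hrpdb;
CREATE USER rep IDENTIFIED BY password CONTAINER=CURRENT;
```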
In Oracle-supplied scripts, every privilege or role granted to Oracle-supplied users and roles is granted commonly, with one exception: system privileges are granted locally to the common role PUBLIC (see "Grants to PUBLIC in a CDB"). User-created roles are either local or common. A common role is a database role that exists in the root and in every existing and future PDB. Common roles are useful for cross-container operations (see "Cross-Container Operations"), ensuring that a common user has a role in every container. Every common role is either user-created or Oracle-supplied. All Oracle-supplied roles are common, such as DBA and PUBLIC. User-created common roles must have names starting with C## or c##, and must contain only ASCII or EBCDIC characters. For example, a CDB administrator might create common user c##dba, and then grant the DBA role commonly to this user, so that c##dba has the DBA role in any existing and future PDB. A user can only perform common operations on a common role, for example, granting privileges commonly to the role, when the following criteria are met: The user is a common user whose current container is root. The user has the SET CONTAINER privilege granted commonly, which means that the privilege applies in all containers. The user has privilege controlling the ability to perform the specified operation, and this privilege has been granted commonly (see "Roles and Privileges Granted Commonly in a CDB"). For example, to create a common role, a common user must have the CREATE ROLE and the SET CONTAINER privileges granted commonly. In the CREATE ROLE statement, the CONTAINER=ALL clause specifies that the role is common. See Also: Oracle Database Security Guide to learn how to manage common roles Oracle Database SQL Language Reference to learn about the CREATE ROLE statement A local role exists only in a single PDB, just as a role in a non-CDB exists only in the non-CDB. 
A local role can only contain roles and privileges that apply within the container in which the role exists. PDBs in the same CDB may contain local roles with the same name. For example, the user-created role pdbadmin may exist in both hrpdb and salespdb. These roles are completely independent of each other, just as they would be in separate non-CDBs.

See Also: Oracle Database Security Guide to learn how to manage local roles

Just as in a non-CDB, users in a CDB can grant roles and privileges. A key difference in a CDB is the distinction between roles and privileges that are locally granted and commonly granted.

A privilege or role granted locally is exercisable only in the container in which it was granted. A privilege or role granted commonly is exercisable in every existing and future container.

Users and roles may be common or local. However, a privilege is in itself neither common nor local. If a user grants a privilege locally, using the CONTAINER=CURRENT clause, then the grantee has a privilege exercisable only in the current container. If a user grants a privilege commonly, using the CONTAINER=ALL clause, then the grantee has a privilege exercisable in any existing and future container.

In a CDB, every act of granting, whether local or common, occurs within a specific container. The basic principles of granting are as follows:

- Both common and local phenomena may grant and be granted locally.
- Only common phenomena may grant or be granted commonly.

Local users, roles, and privileges are by definition restricted to a particular container. Thus, local users may not grant roles and privileges commonly, and local roles and privileges may not be granted commonly.

The following figure illustrates these principles. In the top section of the diagram, a common user commonly grants a role or privilege to a common user or role. Consequently, the grant recipient has the privilege or role (p/r box) in all containers.
In the bottom section of the diagram, local users (L boxes) and common users (C boxes) make local grants to one another. Consequently, each user receives a grant of a privilege or role (p/r box) that is restricted to the container in which the grant occurred. The local grants have no applicability to common or local users and roles in other containers.

Figure 18-8 Common and Local Grants

The following sections describe the implications of the preceding principles.

Roles and privileges may be granted locally to users and roles regardless of whether the grantees, grantors, or roles being granted are local or common. The following table explains the valid possibilities for locally granted roles and privileges.

Table 18-5 Local Grants

Footnote 1: Privileges in this role are available to the grantee only in the container in which the role was granted, regardless of whether the privileges were granted to the role locally or commonly.

Footnote 2: Privileges in this role are available to the grantee only in the container in which the role was granted and created.

A role or privilege is granted locally when the following criteria are met:

- The grantor is a local or common user.
- The grantee is a local or common user or role.
- The grant applies to only one container.

By default, the GRANT statement includes the CONTAINER=CURRENT clause, which indicates that the privilege or role is being granted locally.

A user or role may be locally granted a privilege (CONTAINER=CURRENT). For example, a READ ANY TABLE privilege granted locally to a local or common user in hrpdb applies only to this user in this PDB. Analogously, the READ ANY TABLE privilege granted to user hr in a non-CDB has no bearing on the privileges of an hr user that exists in a separate non-CDB.

A user or role may be locally granted a role (CONTAINER=CURRENT). As shown in Table 18-5, a common role may receive a privilege granted locally. For example, the common role c##dba may be granted the READ ANY TABLE privilege locally in hrpdb.
If the c##dba common role is granted locally, then privileges in the role apply only in the container in which the role is granted. In this example, a common user who has the c##dba role does not, because of a privilege granted locally to this role in hrpdb, have the right to exercise this privilege in any PDB other than hrpdb.

See Also: Oracle Database Security Guide to learn how to grant roles and privileges in a CDB

Privileges and common roles may be granted commonly. User accounts or roles may be granted roles and privileges commonly only if the grantees and grantors are both common. If a role is being granted commonly, then the role itself must be common. The following table explains the possibilities for common grants.

Table 18-6 Common Grants

Footnote 3: Privileges that were granted commonly to a common role are available to the grantee across all containers. In addition, any privilege granted locally to a common role is available to the grantee only in the container in which that privilege was granted to the common role.

See Also: Oracle Database Security Guide to learn more about common grants

A role or privilege is granted commonly when the following criteria are met:

- The grantor is a common user.
- The grantee is a common user or common role.
- The grant applies to all containers.

The GRANT statement includes a CONTAINER=ALL clause specifying that the privilege or role is being granted commonly. If a role is being granted, then it must be common, and if an object privilege is being granted, then the object on which the privilege is granted must be common.

A common user or role may be commonly granted a privilege (CONTAINER=ALL). The privilege is granted to this common user or role in all existing and future containers. For example, a SELECT ANY TABLE privilege granted commonly to common user c##dba applies to this user in all containers.

A user or role may receive a common role granted commonly.
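The local and common grants discussed above can be sketched as follows, connected to the root as a suitably privileged common user (role and user names are illustrative):

```sql
-- A common role must itself be created commonly in the root.
CREATE ROLE c##admin CONTAINER=ALL;

-- Common grant: grantor and grantee are common, and the grant applies
-- in every existing and future container.
GRANT SELECT ANY TABLE TO c##dba CONTAINER=ALL;

-- Local grant: applies only in the container in which it is issued.
-- Here, c##dba can exercise READ ANY TABLE in hrpdb only.
ALTER SESSION SET CONTAINER = hrpdb;
GRANT READ ANY TABLE TO c##dba CONTAINER=CURRENT;
```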
As mentioned in a footnote on Table 18-6, a common role may receive a privilege granted locally. Thus, a common user can be granted a common role, and this role may contain locally granted privileges. For example, the common role c##admin may be granted the SELECT ANY TABLE privilege that is local to hrpdb. Locally granted privileges in a common role apply only in the container in which the privilege was granted. Thus, the common user with the c##admin role does not have the right to exercise an hrpdb-contained privilege in salespdb or any PDB other than hrpdb.

See Also: Oracle Database Security Guide to learn how to grant roles and privileges in a CDB

In a CDB, PUBLIC is a common role. In a PDB, privileges granted locally to PUBLIC enable all local and common user accounts to exercise these privileges in this PDB only.

Every privilege and role granted to Oracle-supplied users and roles is granted commonly, except for system privileges granted to PUBLIC, which are granted locally. This exception exists because you may want to revoke some grants included by default in Oracle Database, such as EXECUTE on the SYS.UTL_FILE package.

Assume that local user account hr exists in hrpdb. This user locally grants the SELECT privilege on hr.employees to PUBLIC. Common and local users in hrpdb may exercise the privilege granted to PUBLIC. User accounts in salespdb or any other PDB do not have the privilege to query hr.employees in hrpdb.

Privileges granted commonly to PUBLIC enable all local users to exercise the granted privilege in their respective PDBs and enable all common users to exercise this privilege in the PDBs to which they have access. Oracle recommends that users do not commonly grant privileges and roles to PUBLIC.

See Also: Oracle Database Security Guide to learn how the PUBLIC role works in a multitenant environment

In this scenario, SYSTEM creates common user c##dba and tries to give this user privileges to query a table in the hr schema in hrpdb.
The scenario shows how the CONTAINER clause affects grants of roles and privileges. Table 18-7, Granting Roles and Privileges in a CDB, walks through the statements; its first column shows operations in CDB$ROOT, and its second column shows operations in hrpdb.

See Also: Oracle Database Security Guide to learn how to manage common and local roles

For both mixed mode and unified auditing, a common audit configuration is visible and enforced across all PDBs. Audit configurations are either local or common. The scoping rules that apply to other local or common phenomena, such as users and roles, all apply to audit configurations.

Note: Audit initialization parameters exist at the CDB level and not in each PDB.

PDBs support the following auditing options:

- Object auditing. Object auditing refers to audit configurations for specific objects. Only common objects can be part of the common audit configuration. A local audit configuration cannot contain common objects.
- Audit policies. Audit policies can be local or common:
  - Local audit policies. A local audit policy applies to a single PDB. You can enforce local audit policies for local and common users in this PDB only. Attempts to enforce local audit policies across all containers result in an error. In all cases, enforcing a local audit policy is part of the local auditing framework.
  - Common audit policies. A common audit policy applies to all containers. This policy can only contain actions, system privileges, common roles, and common objects. You can apply a common audit policy only to common users. Attempts to enforce a common audit policy for a local user across all containers result in an error.

A common audit configuration is stored in the SYS schema of the root. A local audit configuration is stored in the SYS schema of the PDB to which it applies. Audit trails are stored in the SYS or AUDSYS schemas of the relevant PDBs.
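With unified auditing, the two kinds of policy can be sketched as follows (the policy names are illustrative, and the exact action list is an assumption):

```sql
-- Local audit policy, created and enforced inside a single PDB:
ALTER SESSION SET CONTAINER = hrpdb;
CREATE AUDIT POLICY hr_sel_pol ACTIONS SELECT ON hr.employees;
AUDIT POLICY hr_sel_pol;

-- Common audit policy, created in the root (CONTAINER=ALL) and
-- applied to a common user:
ALTER SESSION SET CONTAINER = CDB$ROOT;
CREATE AUDIT POLICY logon_pol ACTIONS LOGON CONTAINER=ALL;
AUDIT POLICY logon_pol BY c##dba;
```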
Operating system and XML audit trails for PDBs are stored in subdirectories of the directory specified by the AUDIT_FILE_DEST initialization parameter. See Also: Oracle Database Security Guide to learn about common audit configurations A CDB has the same structure as a non-CDB, except that each PDB and application root has its own set of tablespaces, including its own SYSTEM, SYSAUX, and undo tablespaces. A CDB contains the following files: One control file One online redo log One set of undo data files In a single-instance CDB, only one active undo tablespace exists. For an Oracle RAC CDB, one active undo tablespace exists for every instance. Only a common user who has the appropriate privileges and whose current container is the root can create an undo tablespace. All undo tablespaces are visible in the data dictionaries and related views of all containers. SYSTEM and SYSAUX tablespaces for every container The primary physical difference between CDBs and non-CDBs is the data files in SYSTEM and SYSAUX. A non-CDB has only one SYSTEM tablespace and one SYSAUX tablespace. In contrast, the CDB root and each PDB has its own SYSTEM and SYSAUX tablespaces. Each container also has its own set of dictionary tables describing the objects that reside in the container. Zero or more user-created tablespaces In a typical use case, each PDB has its own set of non-system tablespaces. These tablespaces contain the data for user-defined schemas and objects in the PDB. Within a PDB, you manage permanent and temporary tablespaces in the same way that you manage them in a non-CDB. You can also limit the amount of storage used by the data files for a PDB by using the STORAGE clause in a CREATE PLUGGABLE DATABASE or ALTER PLUGGABLE DATABASE statement. The storage of the data dictionary within the PDB enables it to be portable. You can unplug a PDB from a CDB, and plug it in to a different CDB. 
A set of temp files for every container One default temporary tablespace exists for the CDB root, and one for every PDB. The following figure shows aspects of the physical storage architecture of a CDB with two PDBs: hrpdb and salespdb. Figure 18-9 Architecture of a CDB See Also: "Data Dictionary Architecture in a CDB" Oracle Database Administrator’s Guide to learn about the state of a CDB after creation
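For example, the STORAGE clause mentioned above can cap the total data file usage of a PDB (the 2G limit is illustrative):

```sql
ALTER SESSION SET CONTAINER = hrpdb;
ALTER PLUGGABLE DATABASE STORAGE (MAXSIZE 2G);
```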
https://docs.oracle.com/database/121/CNCPT/cdblogic.htm
If I were subscribed to a single stock, let's say SPY, with a daily resolution and we are only halfway through the day, what would the expected data["SPY"].Close value be? The price up to that current point? Could this close price differ from the actual price at the end of the day?

I am asking because I am trying to create a rolling window of historical close prices, but my code may not be valid if I am not retrieving the most accurate close price.

def Initialize(self):
    self.SetStartDate(2010, 10, 11)
    self.SetCash(10000)
    self.SetBrokerageModel(BrokerageName.OandaBrokerage)
    self.AddForex("EURUSD", Resolution.Minute)
    self.Close = RollingWindow[float](100)

def OnData(self, data):
    if 8 <= self.Time.hour <= 12:
        self.Close.Add(data["EURUSD"].Close)

Is the above code not capturing the most accurate closing price? If so, can someone give me some advice? I was thinking of using a RollingWindow of QuoteBars instead, but I believe I'd run into the same problem.

Many thanks and Happy New Year,
Marc
https://www.quantconnect.com/forum/discussion/7125/ondata-close-price-question-accurately-retrieving-the-current-days-close/p1
Balanced Brackets Algorithm in Java
Last modified: April 30, 2020

1. Overview

Balanced Brackets, also known as Balanced Parentheses, is a common programming problem. In this tutorial, we will validate whether the brackets in a given string are balanced or not. Strings of this type are part of what's known as the Dyck language.

2. Problem Statement

A bracket is considered to be any of the following characters – “(“, “)”, “[“, “]”, “{“, “}”. A set of brackets is considered to be a matched pair if an opening bracket, “(“, “[“, or “{“, occurs to the left of the corresponding closing bracket, “)”, “]”, or “}”, respectively. However, a string containing bracket pairs is not balanced if the set of brackets it encloses is not matched. Similarly, a string containing non-bracket characters like a-z, A-Z, 0-9 or other special characters like #, $, @ is also considered to be unbalanced.

For example, if the input is “{[(])}”, the pair of square brackets, “[]”, encloses a single unbalanced opening round bracket, “(“. Similarly, the pair of round brackets, “()”, encloses a single unbalanced closing square bracket, “]”. Thus, the input string “{[(])}” is unbalanced.

Therefore, a string containing bracket characters is said to be balanced if:

- A matching opening bracket occurs to the left of each corresponding closing bracket
- Brackets enclosed within balanced brackets are also balanced
- It does not contain any non-bracket characters

There are a couple of special cases to keep in mind: null is considered to be unbalanced, while the empty string is considered to be balanced.

To further illustrate our definition of balanced brackets, let's see some examples of balanced brackets:

()
[()]
{[()]}
([{{[(())]}}])

And a few that are not balanced:

abc[](){}
{{[]()}}}}
{[(])}

3. Solution Approaches

There are different ways to solve this problem. In this tutorial, we will look at two approaches:

- Using methods of the String class
- Using Deque implementation

4. Basic Setup and Validations

Let's first create a method that will return true if the input is balanced and false if the input is unbalanced:

public boolean isBalanced(String str)

Let's consider the basic validations for the input string:

- If a null input is passed, then it's not balanced.
- For a string to be balanced, the pairs of opening and closing brackets should match. Therefore, it would be safe to say that an input string whose length is odd will not be balanced, as it will contain at least one non-matched bracket.
- As per the problem statement, the balanced behavior should be checked between brackets. Therefore, any input string containing non-bracket characters is an unbalanced string.

Given these rules, we can implement the validations:

if (null == str || ((str.length() % 2) != 0)) {
    return false;
} else {
    char[] ch = str.toCharArray();
    for (char c : ch) {
        if (!(c == '{' || c == '[' || c == '(' || c == '}' || c == ']' || c == ')')) {
            return false;
        }
    }
}

5. Using String.replaceAll Method

In this approach, we'll loop through the input string, removing occurrences of “()”, “[]”, and “{}” from the string using String.replaceAll. We continue this process until no further occurrences are found in the input string.

Once the process is complete, if the length of our string is zero, then all matching pairs of brackets have been removed and the input string is balanced. If, however, the length is not zero, then some unmatched opening or closing brackets are still present in the string. Therefore, the input string is unbalanced.

Let's see the complete implementation:

while (str.contains("()") || str.contains("[]") || str.contains("{}")) {
    str = str.replaceAll("\\(\\)", "")
        .replaceAll("\\[\\]", "")
        .replaceAll("\\{\\}", "");
}
return (str.length() == 0);

6. Using Deque

Deque is a form of the Queue that provides add, retrieve and peek operations at both ends of the queue. We will leverage the Last-In-First-Out (LIFO) order feature of this data structure to check for the balance in the input string.

First, let's construct our Deque:

Deque<Character> deque = new LinkedList<>();

Note that we have used a LinkedList here because it provides an implementation for the Deque interface.

Now that our deque is constructed, we will loop through each character of the input string one by one. If the character is an opening bracket, then we will add it as the first element in the Deque:

if (ch == '{' || ch == '[' || ch == '(') {
    deque.addFirst(ch);
}

But, if the character is a closing bracket, then we will perform some checks on the LinkedList.

First, we check whether the LinkedList is empty or not. An empty list means that the closing bracket is unmatched. Therefore, the input string is unbalanced. So we return false.

However, if the LinkedList is not empty, then we peek at its last-in character using the peekFirst method. If it can be paired with the closing bracket, then we remove this top-most character from the list using the removeFirst method and move on to the next iteration of the loop:

if (!deque.isEmpty()
    && ((deque.peekFirst() == '{' && ch == '}')
    || (deque.peekFirst() == '[' && ch == ']')
    || (deque.peekFirst() == '(' && ch == ')'))) {
    deque.removeFirst();
} else {
    return false;
}

By the end of the loop, every closing bracket has been matched. If the deque is also empty, then every opening bracket has been matched as well, so we return deque.isEmpty() rather than a plain true — otherwise an input such as "((" would incorrectly be reported as balanced.

Below is a complete implementation of the Deque-based approach:

Deque<Character> deque = new LinkedList<>();
for (char ch : str.toCharArray()) {
    if (ch == '{' || ch == '[' || ch == '(') {
        deque.addFirst(ch);
    } else {
        if (!deque.isEmpty()
            && ((deque.peekFirst() == '{' && ch == '}')
            || (deque.peekFirst() == '[' && ch == ']')
            || (deque.peekFirst() == '(' && ch == ')'))) {
            deque.removeFirst();
        } else {
            return false;
        }
    }
}
return deque.isEmpty();

7. Conclusion

In this tutorial, we discussed the problem statement of Balanced Brackets and solved it using two different approaches.

As always, the code is available over on GitHub.
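To exercise the algorithm end to end, here is a compact, self-contained variant that combines the validations with the Deque check (the BalancedBrackets class name and the main method are our own, not taken from the article's GitHub code):

```java
import java.util.Deque;
import java.util.LinkedList;

public class BalancedBrackets {

    public static boolean isBalanced(String str) {
        // null and odd-length inputs can never be balanced
        if (str == null || str.length() % 2 != 0) {
            return false;
        }
        Deque<Character> deque = new LinkedList<>();
        for (char ch : str.toCharArray()) {
            if (ch == '{' || ch == '[' || ch == '(') {
                deque.addFirst(ch);              // remember the opening bracket
            } else if (ch == '}' || ch == ']' || ch == ')') {
                if (deque.isEmpty()) {
                    return false;                // closing bracket with no opener
                }
                char open = deque.removeFirst();
                if ((ch == '}' && open != '{')
                        || (ch == ']' && open != '[')
                        || (ch == ')' && open != '(')) {
                    return false;                // mismatched pair
                }
            } else {
                return false;                    // non-bracket character
            }
        }
        return deque.isEmpty();                  // leftover openers mean unbalanced
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("{[()]}"));   // true
        System.out.println(isBalanced("{[(])}"));   // false
        System.out.println(isBalanced("abc[]"));    // false
    }
}
```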
https://www.baeldung.com/java-balanced-brackets-algorithm
MQ_GETSETATTR(2)           Linux Programmer's Manual          MQ_GETSETATTR(2)

NAME
       mq_getsetattr - get/set message queue attributes

SYNOPSIS
       #include <sys/types.h>
       #include <mqueue.h>

       int mq_getsetattr(mqd_t mqdes, struct mq_attr *newattr,
                         struct mq_attr *oldattr);

       Note: There is no glibc wrapper for this system call; see NOTES.

DESCRIPTION
       Do not use this system call.

       This is the low-level system call used to implement mq_getattr(3)
       and mq_setattr(3).  For an explanation of how this system call
       operates, see the description of mq_setattr(3).

CONFORMING TO
       This interface is nonstandard; avoid its use.

NOTES
       Glibc does not provide a wrapper for this system call; call it
       using syscall(2).  (Actually, never call it unless you are writing
       a C library!)

SEE ALSO
       mq_getattr(3), mq_overview(7)

COLOPHON
       This page is part of release 4.16 of the Linux man-pages project.
       A description of the project, information about reporting bugs,
       and the latest version of this page, can be found at.

Linux                            2017-09-15                   MQ_GETSETATTR(2)

Pages that refer to this page: syscalls(2), mq_getattr(3), mq_overview(7)
http://www.man7.org/linux/man-pages/man2/mq_getsetattr.2.html
/* Data structures associated with breakpoints in GDB.
   Copyright */

#if !defined (BREAKPOINT_H)
#define BREAKPOINT_H 1

#include "frame.h"
#include "value.h"
#include "gdb-events.h"

struct value;
struct block;

/* This is the maximum number of bytes a breakpoint instruction can
   take.  Feel free to increase it.  It's just used in a few places to
   size arrays that should be independent of the target
   architecture.  */

#define BREAKPOINT_MAX 16

/* Type of breakpoint.  */
/* FIXME In the future, we should fold all other breakpoint-like
   things into here.  This includes:

   * single-step (for machines where we have to simulate single
   stepping) (probably, though perhaps it is better for it to look as
   much as possible like a single-step to wait_for_inferior).  */

enum bptype
  {
    bp_none = 0,                /* Eventpoint has been deleted. */
    bp_breakpoint,              /* Normal breakpoint */
    /* APPLE LOCAL begin subroutine inlining */
    bp_inlined_breakpoint,      /* Breakpoint at an inlined subroutine */
    /* APPLE LOCAL end subroutine inlining */
    bp_hardware_breakpoint,     /* Hardware assisted breakpoint */
    bp_until,                   /* used by until command */
    bp_finish,                  /* used by finish command */
    bp_watchpoint,              /* Watchpoint */
    bp_hardware_watchpoint,     /* Hardware assisted watchpoint */
    bp_read_watchpoint,         /* read watchpoint, (hardware assisted) */
    bp_access_watchpoint,       /* access watchpoint, (hardware assisted) */
    bp_longjmp,                 /* secret breakpoint to find longjmp() */
    bp_longjmp_resume,          /* secret breakpoint to escape longjmp() */

    /* Used by wait_for_inferior for stepping over subroutine calls, for
       stepping over signal handlers, and for skipping prologues.  */
    bp_step_resume,

    /* Used by wait_for_inferior for stepping over signal handlers.  */
    bp_through_sigtramp,

    /* Used to detect when a watchpoint expression has gone out of scope.
       These breakpoints are usually not visible to the user.

       This breakpoint has some interesting properties:

       1) There's always a 1:1 mapping between watchpoints
       on local variables and watchpoint_scope breakpoints.
       2) It automatically deletes itself and the watchpoint it's
       associated with when hit.

       3) It can never be disabled.  */
    bp_watchpoint_scope,

    /* The breakpoint at the end of a call dummy.  */
    /* FIXME: What if the function we are calling longjmp()s out of the
       call, or the user gets out with the "return" command?  We currently
       have no way of cleaning up the breakpoint in these (obscure)
       situations.  (Probably can solve this by noticing longjmp, "return",
       etc., it's similar to noticing when a watchpoint on a local variable
       goes out of scope (with hardware support for watchpoints)).  */
    bp_call_dummy,

    /* Some dynamic linkers (HP, maybe Solaris) can arrange for special
       code in the inferior to run when significant events occur in the
       dynamic linker (for example a library is loaded or unloaded).

       By placing a breakpoint in this magic code GDB will get control
       when these significant events occur.  GDB can then re-examine the
       dynamic linker's data structures to discover any newly loaded
       dynamic libraries.  */
    bp_shlib_event,

    /* Some multi-threaded systems can arrange for a location in the
       inferior to be executed when certain thread-related events occur
       (such as thread creation or thread death).

       By placing a breakpoint at one of these locations, GDB will get
       control when these events occur.  GDB can then update its thread
       lists etc.  */
    bp_thread_event,

    /* On the same principle, an overlay manager can arrange to call a
       magic location in the inferior whenever there is an interesting
       change in overlay status.  GDB can update its overlay tables and
       fiddle with breakpoints in overlays when this breakpoint is
       hit.  */
    bp_overlay_event,

    /* These breakpoints are used to implement the "catch load" command
       on platforms whose dynamic linkers support such functionality.  */
    bp_catch_load,

    /* These breakpoints are used to implement the "catch unload" command
       on platforms whose dynamic linkers support such functionality.
       */
    bp_catch_unload,

    /* These are not really breakpoints, but are catchpoints that
       implement the "catch fork", "catch vfork" and "catch exec" commands
       on platforms whose kernel supports such functionality.  (I.e.,
       kernels which can raise an event when a fork or exec occurs, as
       opposed to the debugger setting breakpoints on functions named
       "fork" or "exec".)  */
    bp_catch_fork,
    bp_catch_vfork,
    bp_catch_exec,

    /* These are catchpoints to implement "catch catch" and "catch throw"
       commands for C++ exception handling.  */
    bp_catch_catch,
    bp_catch_throw
    /* APPLE LOCAL begin gnu_v3 */
    /* These are gnu_v3_catch & throw catchpoints.  We would like to
       print that these are catchpoints, not ordinary breakpoints, but
       gdb has to manage them like ordinary breakpoints.  */
    , bp_gnu_v3_catch_catch
    , bp_gnu_v3_catch_throw
    /* APPLE LOCAL end gnu_v3 */
  };

/* States of enablement of breakpoint. */

enum enable_state
  {
    bp_disabled,         /* The eventpoint is inactive, and cannot
                            trigger. */
    bp_enabled,          /* The eventpoint is active, and can trigger. */
    bp_shlib_disabled,   /* The eventpoint's address is in an unloaded
                            solib.  The eventpoint will be automatically
                            enabled and reset when that solib is
                            loaded. */
    bp_call_disabled,    /* The eventpoint has been disabled while a call
                            into the inferior is "in flight", because
                            some eventpoints interfere with the
                            implementation of a call on some targets.
                            The eventpoint will be automatically enabled
                            and reset when the call "lands" (either
                            completes, or stops at another
                            eventpoint). */
    bp_hand_call_disabled,  /* APPLE LOCAL.  The eventpoint has been
                               disabled while various mi commands update
                               variables, so that the hand calls for
                               updating the variables will not trigger
                               the breakpoints.  */
    bp_permanent         /* There is a breakpoint instruction hard-wired
                            into the target's code.  Don't try to write
                            another breakpoint instruction on top of it,
                            or restore its value.  Step over it using the
                            architecture's SKIP_INSN macro.  */
  };

/* Disposition of breakpoint.
Ie: what to do after hitting it. */ enum bpdisp { disp_del, /* Delete it */ disp_del_at_next_stop, /* Delete at next stop, whether hit or not */ disp_disable, /* Disable it */ disp_donttouch /* Leave it alone */ }; enum target_hw_bp_type { hw_write = 0, /* Common HW watchpoint */ hw_read = 1, /* Read HW watchpoint */ hw_access = 2, /* Access HW watchpoint */ hw_execute = 3 /* Execute HW breakpoint */ }; /* GDB maintains two types of information about each breakpoint (or watchpoint, or other related event). The first type corresponds to struct breakpoint; this is a relatively high-level structure which contains the source location(s), stopping conditions, user commands to execute when the breakpoint is hit, and so forth. The second type of information corresponds to struct bp_location. Each breakpoint has one or (eventually) more locations associated with it, which represent target-specific and machine-specific mechanisms for stopping the program. For instance, a watchpoint expression may require multiple hardware watchpoints in order to catch all changes in the value of the expression being watched. */ enum bp_loc_type { bp_loc_software_breakpoint, bp_loc_hardware_breakpoint, bp_loc_hardware_watchpoint, bp_loc_other /* Miscellaneous... */ }; struct bp_location { /* Chain pointer to the next breakpoint location. */ struct bp_location *next; /* Type of this breakpoint location. */ enum bp_loc_type loc_type; /* Each breakpoint location must belong to exactly one higher-level breakpoint. This and the DUPLICATE flag are more straightforward than reference counting. */ struct breakpoint *owner; /* Nonzero if this breakpoint is now inserted. */ char inserted; /* Nonzero if this is not the first breakpoint in the list for the given address. */ char duplicate; /* If we someday support real thread-specific breakpoints, then the breakpoint location will need a thread identifier. */ /* Data for specific breakpoint types. 
These could be a union, but simplicity is more important than memory usage for breakpoints. */ /* Note that zero is a perfectly valid code address on some platforms (for example, the mn10200 (OBSOLETE) and mn10300 simulators). NULL is not a special value for this field. Valid for all types except bp_loc_other. */ CORE_ADDR address; /* For any breakpoint type with an address, this is the BFD section associated with the address. Used primarily for overlay debugging. */ asection *section; /* "Real" contents of byte where breakpoint has been inserted. Valid only when breakpoints are in the program. Under the complete control of the target insert_breakpoint and remove_breakpoint routines. No other code should assume anything about the value(s) here. Valid only for bp_loc_software_breakpoint. */ gdb_byte shadow_contents[BREAKPOINT_MAX]; /* Address at which breakpoint was requested, either by the user or by GDB for internal breakpoints. This will usually be the same as ``address'' (above) except for cases in which ADJUST_BREAKPOINT_ADDRESS has computed a different address at which to place the breakpoint in order to comply with a processor's architectual constraints. */ CORE_ADDR requested_address; }; /* This structure is a collection of function pointers that, if available, will be called instead of the performing the default action for this bptype. */ struct breakpoint_ops { /* The normal print routine for this breakpoint, called when we hit it. */ enum print_stop_action (*print_it) (struct breakpoint *); /* Display information about this breakpoint, for "info breakpoints". */ void (*print_one) (struct breakpoint *, CORE_ADDR *); /* Display information about this breakpoint after setting it (roughly speaking; this is called from "mention"). */ void (*print_mention) (struct breakpoint *); }; /* APPLE LOCAL begin bp_set_state */ /* The set states for bp_set_state. */ enum bp_set_state { bp_state_unset, /* Breakpoint hasn't been set yet. 
*/ bp_state_set, /* Breakpoint is all ready to be inserted into the target. */ bp_state_waiting_load /* We were able to find the breakpoint in an objfile, but that objfile wasn't loaded into the target yet. We need this extra state because breakpoint resetting can happen between restarting the target and loading the objfile, at which point we can't read program text and so can't do things like move the breakpoint over the prologue. So we want to make sure we try the breakpoint again when the target's text is loaded into memory. */ }; /* APPLE LOCAL end bp_set_state */ /* Note that the ->silent field is not currently used by any commands (though the code is in there if it was to be, and set_raw_breakpoint does set it to 0). I implemented it because I thought it would be useful for a hack I had to put in; I'm going to leave it in because I can see how there might be times when it would indeed be useful */ /* This is for a breakpoint or a watchpoint. */ struct breakpoint { struct breakpoint *next; /* Type of breakpoint. */ enum bptype type; /* Zero means disabled; remember the info but don't break here. */ enum enable_state enable_state; /* What to do with this breakpoint after we hit it. */ enum bpdisp disposition; /* Number assigned to distinguish breakpoints. */ int number; /* Location(s) associated with this high-level breakpoint. */ struct bp_location *loc; /* Line number of this address. */ int line_number; /* Source file name of this address. */ char *source_file; /* Non-zero means a silent breakpoint (don't print frame info if we stop here). */ unsigned char silent; /* Number of stops at this breakpoint that should be continued automatically before really stopping. */ int ignore_count; /* Chain of command lines to execute when this breakpoint is hit. */ struct command_line *commands; /* Stack depth (address of frame). If nonzero, break only if fp equals this. */ struct frame_id frame_id; /* Conditional. Break only if this expression's value is nonzero. 
*/ struct expression *cond; /* String we used to set the breakpoint (malloc'd). */ char *addr_string; /* Language we used to set the breakpoint. */ enum language language; /* Input radix we used to set the breakpoint. */ int input_radix; /* String form of the breakpoint condition (malloc'd), or NULL if there is no condition. */ char *cond_string; /* String form of exp (malloc'd), or NULL if none. */ char *exp_string; /* The expression we are watching, or NULL if not a watchpoint. */ struct expression *exp; /* The largest block within which it is valid, or NULL if it is valid anywhere (e.g. consists just of global symbols). */ struct block *exp_valid_block; /* Value of the watchpoint the last time we checked it. */ struct value *val; /* Holds the value chain for a hardware watchpoint expression. */ struct value *val_chain; /* Holds the address of the related watchpoint_scope breakpoint when using watchpoints on local variables (might the concept of a related breakpoint be useful elsewhere, if not just call it the watchpoint_scope breakpoint or something like that. FIXME). */ struct breakpoint *related_breakpoint; /* Holds the frame address which identifies the frame this watchpoint should be evaluated in, or `null' if the watchpoint should be evaluated on the outermost frame. */ struct frame_id watchpoint_frame; /* Thread number for thread-specific breakpoint, or -1 if don't care */ int thread; /* Count of the number of times this breakpoint was taken, dumped with the info, but not used for anything else. Useful for seeing how many times you hit a break prior to the program aborting, so you can back up to just before the abort. */ int hit_count; /* Filename of a dynamically-linked library (dll), used for bp_catch_load and bp_catch_unload (malloc'd), or NULL if any library is significant. */ char *dll_pathname; /* Filename of a dll whose state change (e.g., load or unload) triggered this catchpoint. 
This field is only valid immediately after this catchpoint has triggered. */ char *triggered_dll_pathname; /* Process id of a child process whose forking triggered this catchpoint. This field is only valid immediately after this catchpoint has triggered. */ int forked_inferior_pid; /* Filename of a program whose exec triggered this catchpoint. This field is only valid immediately after this catchpoint has triggered. */ char *exec_pathname; /* Methods associated with this breakpoint. */ struct breakpoint_ops *ops; /* Was breakpoint issued from a tty? Saved for the use of pending breakpoints. */ int from_tty; /* Flag value for pending breakpoint. first bit : 0 non-temporary, 1 temporary. second bit : 0 normal breakpoint, 1 hardware breakpoint. */ int flag; /* Is breakpoint pending on shlib loads? */ int pending; /* APPLE LOCAL begin breakpoints */ /* Record the shared library name that this breakpoint is to be set for. If NULL, then don't bother with this. N.B. this is not the sharedlibrary it is actually set in, and will be null unless the breakpoint's creator specifically limited the breakpoint to a particular shlib. */ char *requested_shlib; /* This is the objfile that the breakpoint is currently set in. Need this for "tell_breakpoint_objfile_changed" since you may have many objfiles overlapping the same address range... */ struct objfile *bp_objfile; /* APPLE LOCAL begin radar 5273932 */ /* Objfile name of where the bp was set. Used to save the name of the objfile if the objfile pointer needs to be re-set to NULL. */ char *bp_objfile_name; /* APPLE LOCAL end radar 5273932 */ /* Used for save-breakpoints. */ int original_flags; /* Has this breakpoint been successfully set yet? */ enum bp_set_state bp_set_state; /* APPLE LOCAL end breakpoints */ }; /* The following stuff is an abstract data type "bpstat" ("breakpoint status"). This provides the ability to determine whether we have stopped at a breakpoint, and what we should do about it. 
*/ typedef struct bpstats *bpstat; /* Interface: */ /* Clear a bpstat so that it says we are not at any breakpoint. Also free any storage that is part of a bpstat. */ extern void bpstat_clear (bpstat *); /* Return a copy of a bpstat. Like "bs1 = bs2" but all storage that is part of the bpstat is copied as well. */ extern bpstat bpstat_copy (bpstat); extern bpstat bpstat_stop_status (CORE_ADDR pc, ptid_t ptid, int stopped_by_watchpoint); /* This bpstat_what stuff tells wait_for_inferior what to do with a breakpoint (a challenging task). */ enum bpstat_what_main_action { /* Perform various other tests; that is, this bpstat does not say to perform any action (e.g. failed watchpoint and nothing else). */ BPSTAT_WHAT_KEEP_CHECKING, /* Rather than distinguish between noisy and silent stops here, it might be cleaner to have bpstat_print make that decision (also taking into account stop_print_frame and source_only). But the implications are a bit scary (interaction with auto-displays, etc.), so I won't try it. */ /* Stop silently. */ BPSTAT_WHAT_STOP_SILENT, /* Stop and print. */ BPSTAT_WHAT_STOP_NOISY, /* Remove breakpoints, single step once, then put them back in and go back to what we were doing. It's possible that this should be removed from the main_action and put into a separate field, to more cleanly handle BPSTAT_WHAT_CLEAR_LONGJMP_RESUME_SINGLE. */ BPSTAT_WHAT_SINGLE, /* Set longjmp_resume breakpoint, remove all other breakpoints, and continue. The "remove all other breakpoints" part is required if we are also stepping over another breakpoint as well as doing the longjmp handling. */ BPSTAT_WHAT_SET_LONGJMP_RESUME, /* Clear longjmp_resume breakpoint, then handle as BPSTAT_WHAT_KEEP_CHECKING. */ BPSTAT_WHAT_CLEAR_LONGJMP_RESUME, /* Clear longjmp_resume breakpoint, then handle as BPSTAT_WHAT_SINGLE. */ BPSTAT_WHAT_CLEAR_LONGJMP_RESUME_SINGLE, /* Clear step resume breakpoint, and keep checking. 
*/ BPSTAT_WHAT_STEP_RESUME, /* Clear through_sigtramp breakpoint, muck with trap_expected, and keep checking. */ BPSTAT_WHAT_THROUGH_SIGTRAMP, /* Check the dynamic linker's data structures for new libraries, then keep checking. */ BPSTAT_WHAT_CHECK_SHLIBS, /* Check the dynamic linker's data structures for new libraries, then resume out of the dynamic linker's callback, stop and print. */ BPSTAT_WHAT_CHECK_SHLIBS_RESUME_FROM_HOOK, /* This is just used to keep track of how many enums there are. */ BPSTAT_WHAT_LAST }; struct bpstat_what { enum bpstat_what_main_action main_action; /* Did we hit a call dummy breakpoint? This only goes with a main_action of BPSTAT_WHAT_STOP_SILENT or BPSTAT_WHAT_STOP_NOISY (the concept of continuing from a call dummy without popping the frame is not a useful one). */ int call_dummy; }; /* The possible return values for print_bpstat, print_it_normal, print_it_done, print_it_noop. */ enum print_stop_action { PRINT_UNKNOWN = -1, PRINT_SRC_AND_LOC, PRINT_SRC_ONLY, PRINT_NOTHING }; /* Tell what to do about this bpstat. */ struct bpstat_what bpstat_what (bpstat); /* Find the bpstat associated with a breakpoint. NULL otherwise. */ bpstat bpstat_find_breakpoint (bpstat, struct breakpoint *); /* Find a step_resume breakpoint associated with this bpstat. (If there are multiple step_resume bp's on the list, this function will arbitrarily pick one.) It is an error to use this function if BPSTAT doesn't contain a step_resume breakpoint. See wait_for_inferior's use of this function. */ extern struct breakpoint *bpstat_find_step_resume_breakpoint (bpstat); /* Nonzero if a signal that we got in wait() was due to circumstances explained by the BS. */ /* Currently that is true if we have hit a breakpoint, or if there is a watchpoint enabled. */ #define bpstat_explains_signal(bs) ((bs) != NULL) /* Nonzero if we should step constantly (e.g. watchpoints on machines without hardware support). 
This isn't related to a specific bpstat, just to things like whether watchpoints are set. */ extern int bpstat_should_step (void); /* Nonzero if there are enabled hardware watchpoints. */ extern int bpstat_have_active_hw_watchpoints (void); /* Print a message indicating what happened. Returns nonzero to say that only the source line should be printed after this (zero return means print the frame as well as the source line). */ extern enum print_stop_action bpstat_print (bpstat); /* Return the breakpoint number of the first breakpoint we are stopped at. *BSP upon return is a bpstat which points to the remaining breakpoints stopped at (but which is not guaranteed to be good for anything but further calls to bpstat_num). Return 0 if passed a bpstat which does not indicate any breakpoints. */ extern int bpstat_num (bpstat *); /* Perform actions associated with having stopped at *BSP. Actually, we just use this for breakpoint commands. Perhaps other actions will go here later, but this is executed at a late time (from the command loop). */ extern void bpstat_do_actions (bpstat *); /* Modify BS so that the actions will not be performed. */ extern void bpstat_clear_actions (bpstat); /* Given a bpstat that records zero or more triggered eventpoints, this function returns another bpstat which contains only the catchpoints on that first list, if any. */ extern void bpstat_get_triggered_catchpoints (bpstat, bpstat *); /* Implementation: */ /* Values used to tell the printing routine how to behave for this bpstat. */ enum bp_print_how { /* This is used when we want to do a normal printing of the reason for stopping. The output will depend on the type of eventpoint we are dealing with. This is the default value, most commonly used. */ print_it_normal, /* This is used when nothing should be printed for this bpstat entry. */ print_it_noop, /* This is used when everything which needs to be printed has already been printed. But we still want to print the frame. 
*/ print_it_done }; struct bpstats { /* Linked list because there can be two breakpoints at the same place, and a bpstat reflects the fact that both have been hit. */ bpstat next; /* Breakpoint that we are at. */ struct breakpoint *breakpoint_at; /* Commands left to be done. */ struct command_line *commands; /* Old value associated with a watchpoint. */ struct value *old_val; /* Nonzero if this breakpoint tells us to print the frame. */ char print; /* Nonzero if this breakpoint tells us to stop. */ char stop; /* Tell bpstat_print and print_bp_stop_message how to print stuff associated with this element of the bpstat chain. */ enum bp_print_how print_it; }; enum inf_context { inf_starting, inf_running, inf_exited }; /* The possible return values for breakpoint_here_p. We guarantee that zero always means "no breakpoint here". */ enum breakpoint_here { no_breakpoint_here = 0, ordinary_breakpoint_here, permanent_breakpoint_here }; /* Prototypes for breakpoint-related functions. */ /* APPLE LOCAL declare set_breakpoint_count */ extern void set_breakpoint_count (int); extern enum breakpoint_here breakpoint_here_p (CORE_ADDR); extern int breakpoint_inserted_here_p (CORE_ADDR); extern int software_breakpoint_inserted_here_p (CORE_ADDR); /* APPLE LOCAL begin breakpoint MI */ extern struct breakpoint *find_breakpoint (int); extern void breakpoint_print_commands (struct ui_out *, struct breakpoint *); extern void breakpoint_add_commands (struct breakpoint *, struct command_line *); /* APPLE LOCAL end breakpoint MI */ extern int breakpoint_thread_match (CORE_ADDR, ptid_t); extern void until_break_command (char *, int, int); /* APPLE LOCAL breakpoints */ extern void breakpoint_update (void); /* APPLE LOCAL breakpoints */ extern void breakpoint_re_set (struct objfile *); extern void breakpoint_re_set_thread (struct breakpoint *); extern int ep_is_exception_catchpoint (struct breakpoint *); extern struct breakpoint *set_momentary_breakpoint (struct symtab_and_line, struct 
frame_id, enum bptype); extern void set_ignore_count (int, int, int); extern void set_default_breakpoint (int, CORE_ADDR, struct symtab *, int); extern void mark_breakpoints_out (void); extern void breakpoint_init_inferior (enum inf_context); extern struct cleanup *make_cleanup_delete_breakpoint (struct breakpoint *); extern struct cleanup *make_exec_cleanup_delete_breakpoint (struct breakpoint *); extern void delete_breakpoint (struct breakpoint *); extern void breakpoint_auto_delete (bpstat); extern void breakpoint_clear_ignore_counts (void); extern void break_command (char *, int); /* APPLE LOCAL: for rbreak_command's setting of breakpoints */ /* APPLE LOCAL radar 6366048 search both minsyms & syms for bps. */ extern void rbr_break_command (char *, int, int); extern void hbreak_command_wrapper (char *, int); extern void thbreak_command_wrapper (char *, int); extern void rbreak_command_wrapper (char *, int); /* APPLE LOCAL: Added by_location argument. */ extern void watch_command_wrapper (char *, int, int); extern void awatch_command_wrapper (char *, int, int); extern void rwatch_command_wrapper (char *, int, int); /* END APPLE LOCAL */ extern void tbreak_command (char *, int); extern int insert_breakpoints (void); extern int remove_breakpoints (void); /* This function can be used to physically insert eventpoints from the specified traced inferior process, without modifying the breakpoint package's state. This can be useful for those targets which support following the processes of a fork() or vfork() system call, when both of the resulting two processes are to be followed. */ extern int reattach_breakpoints (int); /* This function can be used to update the breakpoint package's state after an exec() system call has been executed. This function causes the following: - All eventpoints are marked "not inserted". - All eventpoints with a symbolic address are reset such that the symbolic address must be reevaluated before the eventpoints can be reinserted. 
- The solib breakpoints are explicitly removed from the breakpoint list. - A step-resume breakpoint, if any, is explicitly removed from the breakpoint list. - All eventpoints without a symbolic address are removed from the breakpoint list. */ extern void update_breakpoints_after_exec (void); /* This function can be used to physically remove hardware breakpoints and watchpoints from the specified traced inferior process, without modifying the breakpoint package's state. This can be useful for those targets which support following the processes of a fork() or vfork() system call, when one of the resulting two processes is to be detached and allowed to run free. It is an error to use this function on the process whose id is inferior_ptid. */ extern int detach_breakpoints (int); extern void enable_longjmp_breakpoint (void); extern void disable_longjmp_breakpoint (void); extern void enable_overlay_breakpoints (void); extern void disable_overlay_breakpoints (void); extern void set_longjmp_resume_breakpoint (CORE_ADDR, struct frame_id); /* These functions respectively disable or reenable all currently enabled watchpoints. When disabled, the watchpoints are marked call_disabled. When reenabled, they are marked enabled. The intended client of these functions is call_function_by_hand. The inferior must be stopped, and all breakpoints removed, when these functions are used. The need for these functions is that on some targets (e.g., HP-UX), gdb is unable to unwind through the dummy frame that is pushed as part of the implementation of a call command. Watchpoints can cause the inferior to stop in places where this frame is visible, and that can cause execution control to become very confused. Note that if a user sets breakpoints in an interactively called function, the call_disabled watchpoints will have been reenabled when the first such breakpoint is reached. 
However, on targets that are unable to unwind through the call dummy frame, watches of stack-based storage may then be deleted, because gdb will believe that their watched storage is out of scope. (Sigh.) */ extern void disable_watchpoints_before_interactive_call_start (void); extern void enable_watchpoints_after_interactive_call_stop (void); extern void clear_breakpoint_hit_counts (void); extern int get_number (char **); extern int get_number_or_range (char **); /* The following are for displays, which aren't really breakpoints, but here is as good a place as any for them. */ extern void disable_current_display (void); extern void do_displays (void); extern void disable_display (int); extern void clear_displays (void); extern void disable_breakpoint (struct breakpoint *); extern void enable_breakpoint (struct breakpoint *); extern void make_breakpoint_permanent (struct breakpoint *); extern struct breakpoint *create_solib_event_breakpoint (CORE_ADDR); extern struct breakpoint *create_thread_event_breakpoint (CORE_ADDR); extern void remove_solib_event_breakpoints (void); extern void remove_thread_event_breakpoints (void); /* APPLE LOCAL: ObjC hand-call fail point breakpoint. */ extern struct breakpoint *create_objc_hook_breakpoint (char *hookname); /* APPLE LOCAL breakpoints */ extern void disable_breakpoints_in_shlibs (int silent); /* APPLE LOCAL breakpoints */ extern void re_enable_breakpoints_in_shlibs (int silent); extern void create_solib_load_event_breakpoint (char *, int, char *, char *); extern void create_solib_unload_event_breakpoint (char *, int, char *, char *); extern void create_fork_event_catchpoint (int, char *); extern void create_vfork_event_catchpoint (int, char *); extern void create_exec_event_catchpoint (int, char *); /* This function returns TRUE if ep is a catchpoint. 
*/ extern int ep_is_catchpoint (struct breakpoint *); /* This function returns TRUE if ep is a catchpoint of a shared library (aka dynamically-linked library) event, such as a library load or unload. */ extern int ep_is_shlib_catchpoint (struct breakpoint *); extern struct breakpoint *set_breakpoint_sal (struct symtab_and_line); /* Enable breakpoints and delete when hit. Called with ARG == NULL deletes all breakpoints. */ extern void delete_command (char *arg, int from_tty); /* Pull all H/W watchpoints from the target. Return non-zero if the remove fails. */ extern int remove_hw_watchpoints (void); /* APPLE LOCAL begin breakpoints */ extern struct breakpoint *find_finish_breakpoint (void); extern int exception_catchpoints_enabled (enum exception_event_kind ex_event); extern void disable_exception_catch (enum exception_event_kind ex_event); void gnu_v3_update_exception_catchpoints (enum exception_event_kind ex_event, int tempflag, char *cond_string); int handle_gnu_v3_exceptions (enum exception_event_kind ex_event); void tell_breakpoints_objfile_changed (struct objfile *objfile); void tell_breakpoints_objfile_removed (struct objfile *objfile); /* APPLE LOCAL end breakpoints */ /* Indicator of whether exception catchpoints should be nuked between runs of a program. */ extern int deprecated_exception_catchpoints_are_fragile; /* Indicator of when exception catchpoints set-up should be reinitialized -- e.g. when program is re-run. */ extern int deprecated_exception_support_initialized; /* APPLE LOCAL begin radar 6366048 search both minsyms & syms for bps. */ extern void remove_duplicate_sals (struct symtabs_and_lines *, struct symtabs_and_lines, char **); /* APPLE LOCAL end radar 6366048 search both minsyms & syms for bps. */ extern void breakpoints_relocate (struct objfile *, struct section_offsets *); /* APPLE LOCAL Disable breakpoints while updating data formatters. 
*/ extern struct cleanup * make_cleanup_enable_disable_bpts_during_varobj_operation (void); #endif /* !defined (BREAKPOINT_H) */
http://opensource.apple.com/source/gdb/gdb-1344/src/gdb/breakpoint.h
Other Aliases: Ns_CsDestroy, Ns_CsEnter, Ns_CsInit SYNOPSIS #include "ns.h" void Ns_CsDestroy(Ns_Cs *csPtr) void Ns_CsEnter(Ns_Cs *csPtr) void Ns_CsInit(Ns_Cs *csPtr) void Ns_CsLeave(Ns_Cs *csPtr) DESCRIPTION Critical section locks are used to prevent more than one thread from executing a specific section of code at one time. They are implemented as "objects", which simply means that memory is allocated to hold the lock state. They can also be called "synchronization objects". While a thread is executing a critical section of code, all other threads that want to execute that same section of code must wait until the lock surrounding that critical section has been released. This is crucial to prevent race conditions, which could put the server into an unknown state. For example, if a section of code frees a pointer and then decrements a counter that stores how many pointers exist, it is possible that the counter value and the actual number of pointers may be different. If another section of the server relies on this counter and reads it when the pointer has been freed, but the counter has not yet been decremented, it could crash the server or put it into an unknown state. Critical section locks should be used sparingly, as they will adversely impact the performance of the server or module. They essentially cause the section of code they enclose to behave in a single-threaded manner. If a critical section executes slowly or blocks, other threads that must execute that section of code will begin to block as well until the critical section lock is released. You will normally want to wrap sections of code that are used to both read and write values, create and destroy pointers and structures, or otherwise look at or modify data in the system. Use the same named lock for both read and write operations on the same data. Threads that are waiting for a critical section lock to be released do not have to poll the lock.
The critical section lock functions use thread condition functions to signal when a lock is released. - Ns_CsDestroy(csPtr) Destroy a critical section object. Note that you would almost never need to call this function as synchronization objects are typically created at startup and exist until the server exits. The underlying objects in the critical section are destroyed and the critical section memory returned to the heap. - Ns_CsEnter(csPtr) Lock a critical section object, initializing it first if needed. If the critical section is in use by another thread, the calling thread will block until it is no longer so. Note that critical sections are recursive and must be exited the same number of times as they were entered. - Ns_CsInit(csPtr) Initialize a critical section object. Memory will be allocated to hold the object's state. - Ns_CsLeave(csPtr) Unlock a critical section once. A count of threads waiting to enter the critical section is kept, and a condition is signaled if this is the final unlock of the critical section so that other threads may enter the critical section. KEYWORDS
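As noted above, these critical sections are recursive: the holding thread may call Ns_CsEnter repeatedly and must call Ns_CsLeave the same number of times before the lock is actually released. The sketch below is not the ns.h API — it is a hypothetical, single-threaded TypeScript analogue (the CriticalSection class is invented for illustration) of the enter/leave bookkeeping the description implies:

```typescript
// Hypothetical analogue, NOT the ns.h API: models the recursive
// enter/leave counting that Ns_CsEnter/Ns_CsLeave perform.
class CriticalSection {
  private depth = 0;

  enter(): void {
    // Re-entering from the holder just bumps the count.
    this.depth++;
  }

  leave(): void {
    if (this.depth === 0) {
      throw new Error("leave without a matching enter");
    }
    // Waiters would be signaled once depth reaches 0.
    this.depth--;
  }

  held(): boolean {
    return this.depth > 0;
  }
}

const cs = new CriticalSection();
cs.enter();
cs.enter();                  // recursive: entered twice...
cs.leave();
const stillHeld = cs.held(); // ...so one leave is not enough
cs.leave();
const released = !cs.held(); // the final leave releases the lock
```

In the real API the release would also signal waiting threads via a condition variable rather than having them poll, as the paragraph above describes.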
http://manpages.org/ns_csleave/3
Recoil is a slick new React library written by some people at Facebook that work on a tool called "Comparison View." It came about because of ergonomics and performance issues with context and useState. It's a very clever library, and almost everyone will find a use for it - check out this explainer video if you want to learn more. At first I was really taken aback by the talk of graph theory and the wondrous magic that Recoil performs, but after a while I started to see that maybe it's not that special after all. Here's my shot at implementing something similar! Before I get started, please note that the way I've implemented my Recoil clone is completely different to how the actual Recoil is implemented. Don't assume anything about Recoil from this. Recoil is built around the concept of "atoms". Atoms are small atomic pieces of state that you can subscribe to and update in your components. To begin, I'm going to create a class called Atom that is going to wrap some value T. I've added helper methods update and snapshot to allow you to get and set the value. class Atom<T> { constructor(private value: T) {} update(value: T) { this.value = value; } snapshot(): T { return this.value; } } To listen for changes to the state, you need to use the observer pattern. This is commonly seen in libraries like RxJS, but in this case I'm going to write a simple synchronous version from scratch. To know who is listening to the state I use a Set of callbacks. A Set (or Hash Set) is a data structure that only contains unique items. In JavaScript it can easily be turned into an array and has helpful methods for quickly adding and removing items. Adding a listener is done through the subscribe method. The subscribe method returns Disconnecter - an interface containing a method that will stop a listener from listening. This is called when a React component is unmounted and you no longer want to listen for changes. Next, a method called emit is added. 
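A quick sketch of how this first version of Atom is used — the counter atom here is just an illustration, not something from Recoil itself:

```typescript
// First version of Atom, restated so the snippet stands alone.
class Atom<T> {
  constructor(private value: T) {}
  update(value: T) {
    this.value = value;
  }
  snapshot(): T {
    return this.value;
  }
}

// A hypothetical counter atom.
const counter = new Atom(0);
const before = counter.snapshot(); // reads the initial value, 0
counter.update(before + 1);
const after = counter.snapshot();  // reads the updated value, 1
```

At this point it is just a box around a value; the interesting part is adding subscriptions so components can react to updates.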
This method loops through each of the listeners and gives them the current value in the state. Finally, I update the update method to emit the new values whenever the state is set. type Disconnecter = { disconnect: () => void }; class Atom<T> { private listeners = new Set<(value: T) => void>(); constructor(private value: T) {} update(value: T) { this.value = value; this.emit(); } snapshot(): T { return this.value; } emit() { for (const listener of this.listeners) { listener(this.snapshot()); } } subscribe(callback: (value: T) => void): Disconnecter { this.listeners.add(callback); return { disconnect: () => { this.listeners.delete(callback); }, }; } } Phew! It's time to wire the atom up into our React components. To do this, I've created a hook called useCoiledValue. (sound familiar?) This hook returns the current state of an atom, and listens and re-renders whenever the value changes. Whenever the hook is unmounted, it disconnects the listener. One thing that's a little weird here is the updateState hook. By performing a set state with a new object reference ({}), React will re-render the component. This is a little bit of a hack, but it's an easy way to make sure the component re-renders.
To make implementing them a bit easier, I'll move most of the logic out of Atom into a base class called Stateful. (Note that value is declared protected here so that subclasses such as the upcoming Selector can assign to it.)

```typescript
class Stateful<T> {
  private listeners = new Set<(value: T) => void>();

  constructor(protected value: T) {}

  protected _update(value: T) {
    this.value = value;
    this.emit();
  }

  snapshot(): T {
    return this.value;
  }

  private emit() {
    for (const listener of this.listeners) {
      listener(this.snapshot());
    }
  }

  subscribe(callback: (value: T) => void): Disconnecter {
    this.listeners.add(callback);
    return {
      disconnect: () => {
        this.listeners.delete(callback);
      },
    };
  }
}

class Atom<T> extends Stateful<T> {
  update(value: T) {
    super._update(value);
  }
}
```

Moving on! A selector is Recoil's version of "computed values" or "reducers". In their own words:

> A selector represents a piece of derived state. You can think of derived state as the output of passing state to a pure function that modifies the given state in some way.

The API for selectors in Recoil is quite simple, you create an object with a method called get and whatever that method returns is the value of your state. Inside the get method you can subscribe to other pieces of state, and whenever they update so too will your selector.

In our case, I'm going to rename the get method to be called generator. I'm calling it this because it's essentially a factory function that's supposed to generate the next value of the state, based on whatever is piped into it.

In code, we can capture this generate method with the following type signature.

```typescript
type SelectorGenerator<T> = (context: GeneratorContext) => T;
```

For those unfamiliar with Typescript, it's a function that takes a context object (GeneratorContext) as a parameter and returns some value T. This return value is what becomes the internal state of the selector.

What does the GeneratorContext object do? Well that's how selectors use other pieces of state when generating their own internal state. From now on I'll refer to these pieces of state as "dependencies".
```typescript
interface GeneratorContext {
  get: <V>(dependency: Stateful<V>) => V;
}
```

Whenever someone calls the get method on the GeneratorContext, it adds a piece of state as a dependency. This means that whenever a dependency updates, so too will the selector. Here's what creating a selector's generate function might look like:

```typescript
function generate(context) {
  // Register the NameAtom as a dependency
  // and get its value
  const name = context.get(NameAtom);
  // Do the same for AgeAtom
  const age = context.get(AgeAtom);

  // Return a new value using the previous atoms
  // E.g. "Bob is 20 years old"
  return `${name} is ${age} years old.`;
}
```

With the generate function out of the way, let's create the Selector class. This class should accept the generate function as a constructor parameter and use a getDep method on the class to return the value of the Atom dependencies.

You might notice in the constructor that I've written super(undefined as any). This is because super must be the very first line in a derived class's constructor. If it helps, in this case you can think of undefined as uninitialised memory.

```typescript
export class Selector<T> extends Stateful<T> {
  private getDep<V>(dep: Stateful<V>): V {
    return dep.snapshot();
  }

  constructor(private readonly generate: SelectorGenerator<T>) {
    super(undefined as any);
    const context = { get: (dep) => this.getDep(dep) };
    this.value = generate(context);
  }
}
```

This selector is only good for generating state once. In order to react to changes in the dependencies, we need to subscribe to them. To do this, let's update the getDep method to subscribe to the dependencies and call the updateSelector method. To make sure the selector only updates once per change, let's keep track of the deps using a Set. The updateSelector method is very similar to the constructor in the previous example. It creates the GeneratorContext, runs the generate method and then uses the _update method from the Stateful base class.
```typescript
export class Selector<T> extends Stateful<T> {
  private registeredDeps = new Set<Stateful<any>>();

  private getDep<V>(dep: Stateful<V>): V {
    if (!this.registeredDeps.has(dep)) {
      dep.subscribe(() => this.updateSelector());
      this.registeredDeps.add(dep);
    }
    return dep.snapshot();
  }

  private updateSelector() {
    const context = { get: (dep) => this.getDep(dep) };
    this._update(this.generate(context));
  }

  constructor(private readonly generate: SelectorGenerator<T>) {
    super(undefined as any);
    const context = { get: (dep) => this.getDep(dep) };
    this.value = generate(context);
  }
}
```

Almost done! Recoil has some helper functions for creating atoms and selectors. Since most JavaScript devs consider classes evil, they'll help mask our atrocities.

One for creating an atom...

```typescript
export function atom<V>(value: { key: string; default: V }): Atom<V> {
  return new Atom(value.default);
}
```

And one for creating a selector...

```typescript
export function selector<V>(value: {
  key: string;
  get: SelectorGenerator<V>;
}): Selector<V> {
  return new Selector(value.get);
}
```

Oh, remember that useCoiledValue hook from before? Let's update that to accept selectors too:

```typescript
export function useCoiledValue<T>(value: Stateful<T>): T {
  const [, updateState] = useState({});

  useEffect(() => {
    const { disconnect } = value.subscribe(() => updateState({}));
    return () => disconnect();
  }, [value]);

  return value.snapshot();
}
```

That's it! We've done it! 🎉 Give yourself a pat on your back!

Finished? For the sake of brevity (and in order to use that clickbaity "100 lines" title) I decided to omit comments, tests and examples. If you want a more thorough explanation (or want to play with examples), all that stuff is up in my "recoil-clone" Github repository. There's also an example site live so you can test it out.

I once read that all good software should be simple enough that anyone could rewrite it if they needed to.
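Before layering React on top, the Atom/Selector pair can be exercised on its own. The sketch below is deliberately self-contained: it re-states trimmed-down versions of the classes above (no React, no hooks), so the dependency-tracking behaviour can be run and inspected directly.

```typescript
type Listener<T> = (value: T) => void;

class Stateful<T> {
  private listeners = new Set<Listener<T>>();
  constructor(protected value: T) {}
  protected _update(value: T) {
    this.value = value;
    for (const listener of this.listeners) listener(value);
  }
  snapshot(): T {
    return this.value;
  }
  subscribe(callback: Listener<T>) {
    this.listeners.add(callback);
    return { disconnect: () => this.listeners.delete(callback) };
  }
}

class Atom<T> extends Stateful<T> {
  update(value: T) {
    this._update(value);
  }
}

class Selector<T> extends Stateful<T> {
  private deps = new Set<Stateful<any>>();
  constructor(
    private readonly generate: (ctx: { get: <V>(dep: Stateful<V>) => V }) => T
  ) {
    super(undefined as any);
    this._update(this.generate({ get: <V>(dep: Stateful<V>): V => this.getDep(dep) }));
  }
  private getDep<V>(dep: Stateful<V>): V {
    if (!this.deps.has(dep)) {
      // First read registers the dependency: regenerate on every change.
      dep.subscribe(() =>
        this._update(this.generate({ get: <V2>(d: Stateful<V2>): V2 => this.getDep(d) }))
      );
      this.deps.add(dep);
    }
    return dep.snapshot();
  }
}

const nameAtom = new Atom("Bob");
const ageAtom = new Atom(20);
const greeting = new Selector(
  (ctx) => `${ctx.get(nameAtom)} is ${ctx.get(ageAtom)} years old.`
);

console.log(greeting.snapshot()); // Bob is 20 years old.
ageAtom.update(21);
console.log(greeting.snapshot()); // Bob is 21 years old.
```

Updating either atom causes the selector to regenerate, which is exactly the behaviour the React hooks piggyback on.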
Recoil has a lot of features that I haven't implemented here, but it's exciting to see such a simple and intuitive design that can reasonably be implemented by hand. Before you decide to roll my bootleg Recoil in production though, make sure you look into the following things:

- useMutableSource. If you're on a recent version of React you should use this instead of setState in useCoiledValue.
- The key field on each atom and selector, which is used as metadata for a feature called "app-wide observation". I included it despite not using it to keep the API familiar.

Other than that, hopefully I've shown you that you don't always have to look to a library when deciding on state management solutions. More often than not you can engineer something that perfectly fits your solution - that's how Recoil was born after all.

After writing this post I was shown the jotai library. It has a very similar feature set to my clone and supports async!
https://bennetthardwick.com/recoil-from-scratch
CC-MAIN-2021-31
refinedweb
1,733
57.67
Have you ever been at the start of a project where there is no agreement about how to authenticate your users? Maybe you are in discussions about using ASP.NET Identity with Owin authentication middleware, maybe the Membership provider, maybe ADFS or Identity Server. There are many options, and sometimes because of that (and because security is hard) this is done at the end of projects, frequently under time pressure, which is never good. Turns out that the way you do access control in your ASP.NET MVC app is pretty much independent of how you decide to authenticate and authorize your users. You just have to create an IPrincipal and an IIdentity and set it up in HttpContext (see this for a simple example). Even though you don't need to choose the authentication mechanism to have authenticated users, you do have to pick the implementation of IPrincipal and IIdentity that you want to use. You should pick ClaimsPrincipal and ClaimsIdentity. The reason for this is that if you embrace Claims Based Authentication you will be assigning claims to your users. If you don't know about claims (and you are used to using roles in your applications), this is the best explanation I've seen. Oh, and another reason: there's a claim type (a claim has among other things a type and a value) that you can use that will behave exactly as a Role (AuthorizeAttribute will treat it just as you would expect, e.g. [Authorize(Roles = "Admin")]). That claim type is… you've guessed it, Role (System.Security.Claims.ClaimTypes.Role). So if you add claims with that type to your ClaimsIdentity, it would be just as if the user has the roles that are those claims' values. So if you want to have users who you can configure without actually using an authentication mechanism you can:

- Read from a configuration file (web.config would be the easiest place) the properties that you want your user to have (e.g.
its name, roles, etc.)
- Handle the Authenticate event in Global.asax (this will work even if you are using Owin/Katana and hosting in IIS, or you can create a custom Owin middleware that will work just as well as doing it in Global.asax)
- In the handler, create a ClaimsPrincipal with a ClaimsIdentity and set the principal to HttpContext.Current.User

If you decide to store the user information as app settings entries in your web.config, it would look like this:

```xml
<configuration>
  <appSettings>
    <add key="test-security" value="true"/>
    <add key="username" value="John Doe"/>
    <add key="roles" value="Manager Admin"/>
  </appSettings>
</configuration>
```

And your Global.asax would look like this:

```csharp
public class MvcApplication : System.Web.HttpApplication
{
    public MvcApplication()
    {
        AuthenticateRequest += OnAuthenticateRequest;
    }

    private void OnAuthenticateRequest(object sender, System.EventArgs e)
    {
        if (ConfigurationManager.AppSettings["test-security"] != "true")
            return;

        // build the claims from the web.config values above
        var claims = new List<Claim>
        {
            new Claim(ClaimTypes.Name, ConfigurationManager.AppSettings["username"])
        };
        claims.AddRange(ConfigurationManager.AppSettings["roles"]
            .Split(' ')
            .Select(role => new Claim(ClaimTypes.Role, role)));

        var identity = new ClaimsIdentity(claims, "test-security");
        HttpContext.Current.User = new ClaimsPrincipal(identity);
    }
}
```

This will "login" a user with username "John Doe" and the roles "Admin" and "Manager". You can find this example on github here.

NOTE: You have to set an authenticationType (in the example above it's "test-security") when you create the ClaimsIdentity or else the property IIdentity.IsAuthenticated will return false.

Owin

If you are using Owin you don't need Global.asax. You can just use one of the IAppBuilder's extension methods that allows you to add an owin middleware as a lambda expression.
Everything is the same as above, but instead of Global.asax add this as your first middleware in your owin startup class (usually Startup.cs):

```csharp
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        RouteConfig.RegisterRoutes(RouteTable.Routes);

        app.Use((owinContext, next) =>
        {
            if (ConfigurationManager.AppSettings["test-security"] != "true")
                return next.Invoke();

            var claims = new List<Claim>
            {
                new Claim(ClaimTypes.Name, ConfigurationManager.AppSettings["username"])
            };
            claims.AddRange(ConfigurationManager.AppSettings["roles"]
                .Split(' ')
                .Select(role => new Claim(ClaimTypes.Role, role)));

            var principal = new ClaimsPrincipal(new ClaimsIdentity(claims, "test-security"));

            // setting HttpContext.Current.User would work as well if you are hosting in IIS,
            // but if you are using owin, might as well use the owin context
            // to set the principal
            owinContext.Authentication.User = principal;
            return next.Invoke();
        });

        //rest of your owin startup configuration
    }
}
```

You can find this example on github here.
https://www.blinkingcaret.com/2016/01/13/security-without-authentication-mechanism-asp-net-mvc/
CC-MAIN-2019-09
refinedweb
655
55.54
But i want to make buttons in my gui and run the corresponding exe/command what i would like to know if possible, is how to make these exe/commands to open in the same cmd window. (no in different instances of cmd) The Unexpected Result in the title is because i made this script to open my 2 exe/commands and made me reboot my pc... WARNING do not run this script (it opens very fast lots of cmd windows) #include <ButtonConstants.au3> #include <GUIConstantsEx.au3> #include <WindowsConstants.au3> Local $Button_1, $Button_2, $msg #Region ### START Koda GUI section ### Form= $Form1 = GUICreate("Form1", 338, 221, 299, 132) $Button1 = GUICtrlCreateButton("ipconfig", 48, 128, 75, 25) $Button2 = GUICtrlCreateButton("netstat", 216, 128, 75, 25) GUISetState(@SW_SHOW) #EndRegion ### END Koda GUI section ### asssdd() Func asssdd() While 1 $msg = GUIGetMsg() Select Case $msg = $GUI_EVENT_CLOSE ExitLoop Case $msg = $Button_1 Run('ipco.exe') ; Will Run ipco.exe that exist in the same folder as the script Case $msg = $Button_2 Run('netstat.exe') ; Will Run netstat.exe that exist in the same folder as the script EndSelect WEnd EndFunc
http://www.autoitscript.com/forum/topic/139564-unexpected-resultsend-scripts-in-same-cmd/
CC-MAIN-2013-20
refinedweb
181
54.73
While developing a largeish project (split in several files and folders) in Python with IPython, I run into the trouble of cached imported modules. The problem is that an `import module` instruction keeps returning the cached copy of module.py rather than re-reading it. To force a fresh import I tried:

```python
import sys
try:
    del sys.modules['module']
except AttributeError:
    pass
import module

obj = module.my_class()
```

and, for submodules:

```python
import os
for mod in ['module.submod1', 'module.submod2']:
    try:
        del sys.module[mod]
    except AttributeError:
        pass
# sometimes this works, sometimes not. WHY?
```

Quitting and restarting the interpreter is the best solution. Any sort of live reloading or no-caching strategy will not work seamlessly because objects from no-longer-existing modules can exist and because modules sometimes store state and because even if your use case really does allow hot reloading it's too complicated to think about to be worth it.
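If a module really must be picked up fresh without restarting, the supported stdlib route is importlib.reload, which re-executes an already-imported module in place - with the caveat the answer gives: objects created from the old module keep pointing at the old code. A small self-contained sketch (it writes a throwaway module called demo_mod to a temp directory purely for demonstration):

```python
import importlib
import os
import sys
import tempfile

# Write a tiny module to disk so there is something to import and reload.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo_mod.py"), "w") as f:
    f.write("VALUE = 1\n")
sys.path.insert(0, tmpdir)

import demo_mod
print(demo_mod.VALUE)  # 1

# Change the module on disk, then reload: the cached entry in
# sys.modules is re-executed rather than silently reused.
with open(os.path.join(tmpdir, "demo_mod.py"), "w") as f:
    f.write("VALUE = 2\nCHANGED = True\n")
importlib.reload(demo_mod)
print(demo_mod.VALUE)  # 2
```

In IPython specifically, the %autoreload extension automates this same reload dance, subject to the same limitations around stale object references.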
https://codedump.io/share/TpEak9Giqpeu/1/prevent-python-from-caching-the-imported-modules
CC-MAIN-2017-17
refinedweb
132
57.98
This chapter presents general rules for names and parameters used in TimesTen SQL statements. It includes the following topics:
- Basic names
- Owner names
- Compound identifiers
- Namespaces
- Dynamic parameters
- Duplicate parameter names
- Inferring data type from parameters

Basic names, or simple names, identify columns, tables, views and indexes. Basic names must follow these rules:
- The maximum length of a basic name is 30 characters.
- A name can consist of any combination of letters (A to Z, a to z), decimal digits (0 to 9), $, #, @, or underscore (_).
- For identifiers, the first character must be a letter (A-Z a-z) and not a digit or special character. However, for parameter names, the first character can be a letter (A-Z a-z), a decimal digit (0 to 9), or special characters $, #, @, or underscore (_).
- TimesTen changes lowercase letters (a to z) to the corresponding uppercase letters (A to Z). Thus names are not case-sensitive.
- If you enclose a name in quotation marks, you can use any combination of characters even if they are not in the set of legal characters. When the name is enclosed in quotes, the first character in the name can be any character, including one or more spaces.
- If a column, table, or index is initially defined with a name enclosed in quotation marks and the name does not conform to the rule noted in the second bullet, then that name must always be enclosed in quotation marks whenever it is subsequently referenced.
- Unicode characters are not allowed in names.

The owner name is the user name of the account that created the table. Tables and indexes defined by TimesTen itself have the owner SYS or TTREP. User objects cannot be created with owner names SYS or TTREP. TimesTen converts all owner and table names to upper case. Owners of tables in TimesTen are determined by the user ID settings or login names. For cache groups, Oracle database table owner names must always match TimesTen table owner names.
Owner names may be specified by the user during table creation, in addition to being automatically determined if they are left unspecified. See "CREATE TABLE". When creating owner names, follow the same rules as those for creating basic names. See "Basic names".

Basic names and user names are simple names. In some cases, simple names are combined and form a compound identifier, which consists of an owner name combined with one or more basic names, with periods (.) between them. In most cases, you can abbreviate a compound identifier by omitting one of its parts. If you do not use a fully qualified name, a default value is automatically used for the missing part. For example, if you omit the owner name (and the period) when you refer to tables you own, TimesTen generates the owner name by using your login name.

A complete compound identifier, including all of its parts, is called a fully qualified name. Different owners can have tables and indexes with the same name. The fully qualified name of these objects must be unique. The following are compound identifiers:
- Column identifier: [[Owner.]TableName.]ColumnName
- Index identifier: [Owner.]IndexName
- Table identifier: [Owner.]TableName
- Row identifier: [[Owner.]TableName.]rowid

In SQL syntax, object names that share the same namespace must each be unique. This is so that when a name is referenced in any SQL syntax, the exact object can be found. If the object name provided is not qualified with the name (namespace) of the user that owns it, then the search order for the object is as follows:
1. Search for any match from all object names within the current user namespace. If there is a match, the object name is resolved.
2. If no match is found in the user namespace, search for any match from the PUBLIC namespace, which contains objects such as public synonyms. Public synonyms are pre-defined for SYS and TTREP objects. If there is a match, the object name is resolved.
3. Otherwise, the object does not exist.
Any tables, views, materialized views, sequences, private synonyms, PL/SQL packages, functions, procedures, and cache groups owned by the same user share one namespace and so the names for each of these objects must be unique within that namespace. Indexes are created in their own namespace. For example, because tables and views are in the same namespace, a table and a view owned by the same user cannot have the same name. However, tables and indexes are in different namespaces, so a table and an index owned by the same user can have the same name. Tables that are owned by separate users can have the same name, since they exist in separate user namespaces.

Dynamic parameters pass information between an application program and TimesTen. TimesTen uses dynamic parameters as placeholders in SQL commands and at runtime replaces the parameters with actual values. A dynamic parameter name must be preceded by a colon (:) when used in a SQL command and must conform to the TimesTen rules for basic names. However, unlike identifiers, parameter names can start with any of the following characters:
- Uppercase letters: A to Z
- Lowercase letters: a to z
- Digits: 0 to 9
- Special characters: # $ @ _

Note: Instead of using a :DynamicParameter sequence, the application can use a ? for each dynamic parameter.

Enhanced ":" style parameter markers have this form: :parameter [INDICATOR] :indicator. The :indicator is considered to be a component of the :parameter. It is not counted as a distinct parameter. Do not specify '?' for this style of parameter marker.

Consider this SQL statement: SELECT * FROM t1 WHERE c1=:a AND c2=:a AND c3=:b AND c4=:a;

Traditionally in TimesTen, multiple instances of the same parameter name in a SQL statement are considered to be multiple occurrences of the same parameter. When assigning parameter numbers to parameters, TimesTen assigns parameter numbers only to the first occurrence of each parameter name.
The second and subsequent occurrences of a given name do not get their own parameter numbers. In this case, a TimesTen application binds a value for every unique parameter in a SQL statement. It cannot bind different values for different occurrences of the same parameter name nor can it leave any parameters or parameter occurrences unbound.

In Oracle Database, multiple instances of the same parameter name in a SQL statement are considered to be different parameters. When assigning parameter numbers, Oracle Database assigns a number to each parameter occurrence without regard to name duplication. An Oracle database application, at a minimum, binds a value for the first occurrence of each parameter name. For the subsequent occurrences of a given parameter, the application can either leave the parameter occurrence unbound or it can bind a different value for the occurrence.

The following table shows the parameter numbers that TimesTen and Oracle Database assign to each parameter occurrence in the query above (derived from the numbering rules just described):

Occurrence   TimesTen parameter number   Oracle Database parameter number
c1 = :a      1                           1
c2 = :a      1 (same as the first :a)    2
c3 = :b      2                           3
c4 = :a      1 (same as the first :a)    4

The total number of parameter numbers for TimesTen in this example is 2. The total number of parameters for Oracle Database in this example is 4. The parameter bindings provided by an application produce different results for the traditional TimesTen behavior and the Oracle Database behavior. You can use the DuplicateBindMode general connection attribute to determine whether applications use traditional TimesTen parameter binding for duplicate occurrences of a parameter in a SQL statement or Oracle-style parameter binding. Oracle-style parameter binding is the default.

Consider this statement: SELECT :a FROM dual;

TimesTen cannot infer the data type of parameter a from the query. TimesTen returns this error:

2778: Cannot infer type of parameter from its use
The command failed.

Use the CAST function to declare the data type for parameters: SELECT CAST (:a AS NUMBER) FROM dual;
http://docs.oracle.com/cd/E11882_01/timesten.112/e21642/names.htm
CC-MAIN-2015-22
refinedweb
1,270
54.02
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. write() got multiple values for keyword argument 'context' I'm overriding the write() method in validation of odoo hr_attendance My code is def write(self, cr, uid, ids, context=None): for att in self.browse(cr, uid, ids, context=context): prev_att_ids = self.search(cr, uid, [('employee_id', '=', att.employee_id.id), ('name', '<', att.name), ('action', 'in', ('sign_in', 'sign_out'))], limit=1, order='name DESC') next_add_ids = self.search(cr, uid, [('employee_id', '=', att.employee_id.id), ('name', '>', att.name), ('action', 'in', ('sign_in', 'sign_out'))], limit=1, order='name ASC') prev_atts = self.browse(cr, uid, prev_att_ids, context=context) next_atts = self.browse(cr, uid, next_add_ids, context=context) if prev_atts and prev_atts[0].action == att.action: return self.write(cr, uid, ids, {'state': True}) if next_atts and next_atts[0].action == att.action: # next exists and is same action return self.write(cr, uid, ids, {'state': True}) if (not prev_atts) and (not next_atts) and att.action != 'sign_in': # first attendance must be sign_in return self.write(cr, uid, ids, {'state': True}) else: return self.write(cr, uid, ids, {'state': False}) return True This is my code. While writing a record in attendance it should check the three "if condition" and want to change the value of the field "state" either "true or false" according to the "if conditions". But the problem is, in attendance module I have created a record and when I press save it show this error " ValidateError Error while validating constraint write() got multiple values for keyword argument 'context' " How can I solve this issue? Help me Thanks. 
Regards, Uppili Arivukkannu

Thanks for your reply, Emipro Technologies. I have changed the code but it shows this error:

ValidateError
Error while validating constraint
maximum recursion depth exceeded

I have tried this code:

```python
import resource, sys
resource.setrlimit(resource.RLIMIT_STACK, (2**29, -1))
sys.setrecursionlimit(10**6)
```

But it automatically stops the odoo server.

Please do not call the write() method from the definition of the write() method. That is the cause of the infinite recursion. Please remove """self.write(cr, uid, ids, {'state': True})""" from your code.

Hello, you have defined the write method wrongly:

```python
def write(self, cr, uid, ids, context=None):
```

You have not written the "vals" argument. So vals is passed into context, and the default context is set as well. That is why you got this error. You have to pass the following arguments:

```python
def write(self, cr, uid, ids, vals, context={}):
```

It will resolve your issue.
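The error in the accepted answer can be reproduced outside Odoo. The framework calls write(cr, uid, ids, vals, context=...) with vals passed positionally; if the override's signature omits vals, the positional vals lands in the context slot, and the keyword context then collides with it. A standalone illustration (the class and argument values here are hypothetical stand-ins, not Odoo's actual API):

```python
class Model:
    # framework-style signature: callers invoke write(cr, uid, ids, vals, context=ctx)
    def write(self, cr, uid, ids, vals, context=None):
        return True

class Broken(Model):
    # signature is missing `vals`, so the positional vals fills `context`,
    # and the explicit context= keyword then collides with it
    def write(self, cr, uid, ids, context=None):
        return True

caller_args = ("cr", 1, [1], {"state": True})

ok = Model().write(*caller_args, context={})      # fine
try:
    Broken().write(*caller_args, context={})      # TypeError: multiple values
    failed = False
except TypeError:
    failed = True

print(ok, failed)  # True True
```

The follow-up recursion error has the same shape as the first bug: calling self.write() inside write() re-enters the override forever. The usual fix is to modify vals (e.g. vals['state'] = True) and call the parent class's write exactly once.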
https://www.odoo.com/forum/help-1/question/write-got-multiple-values-for-keyword-argument-context-103277
CC-MAIN-2017-34
refinedweb
451
60.21
Welcome to the Parallax Discussion Forums, sign-up to participate. public class HelloWorld { public static void main() { System.out.println("Hello World!"); } } AwesomeCronk wrote: » Thanks! I've got Rev. B on mine. Let me know what driver version you are using, please. AwesomeCronk wrote: » @Publison I called tech support a few hours ago. The guy who answered asked me to send an email, so that he could forward it to one of the senior engineers. That's definitely progress. Parallax makes a variety of carrier boards for BASIC Stamps. The Javelin Stamp can be powered and programmed using any of these carrier boards. You can also make your own connections for supply voltage and serial cables. See the Hardware Setup section in Chapter 2. You can't use it with a Homework board but it will work on a Board of Education. There are a few steps to configure the IDE in order for it to work on a Board of Education. When the you open the Javelin IDE go to the Projects tab - Global Options - Debugger Tab - ...Button - Value - COM # Field, then select the Add button, then press ok. In the serial port option make sure your Javelin is selected then select Ok and do an Project Identify. @cgracey? @Publison? Going to take another 2 days. Tried on WIN 10 and WIN 8. Can't go on with any help. Time to call tech support. I called tech support a few hours ago. The guy who answered asked me to send an email, so that he could forward it to one of the senior engineers. That's definitely progress. Probably Jeff Martin. He has been there for years. Maybe Chip may step in. I emailed a couple of engineers that have been here since the days of the Javelin and these are their comments below. Jeff: There's a situation that may be involved here... 
the software for the computer and firmware on the Javelin Stamp were designed in a way that was unfortunately affected by CPU speed; the problem is, with more modern computers, the two have difficulty communicating with each other, resulting in the error message he saw. If I remember right, it had something to do with the size of the user code in the Javelin, or whether or not the Javelin had ever been programmed in the first place. I think we had some kind of recovery tool, but I think it required a slower computer to be effective. I'll see if I can find that. Andy: "[Error IDE-0054 ] Unable to find a Javelin on any port" is like "...no BASIC Stamp Found..." It means that it is not receiving a response from the Javelin Stamp. Are you getting any rx and tx (red, blue) lights flickering? Also try to get details about the demo board (and or USB/serial converter) he's using, and compare the programming schematic in this doc with the one for the demo board he has: Side notes: His program is missing import stamp.core.* The software he would have to use is ancient. I'm not sure that it would successfully communicate with USB/serial drivers. Unfortunately the software developer we contracted allowed his source code to get nuked, so we have no way of getting updated software. Me:(Miguel Tech Support) I also notice that the usb to serial converter in the picture is newer and different than the original converter that came with the development board. By the time the new usb converter was designed the Basic Stamp PDB was already discontinued. There were cases where the new converters were not working with older boards. Best Regards Miguel Rodriguez Parallax Tech Support This is the email that I received earlier. I am using a Javelin Stamp, Rev. B, purchased from the forums. Arrived in original packaging with a never-used claim. As to PC speed, I have no idea. Andy: No Rx or Tx lights. My Javelin is on a Professional Development Board from Chris S. 
My code was taken from the Javelin Stamp Manual, found on page 35 of the attached pdf. Miguel: I have an older USB-Serial adapter, which yielded the same communications issue. I called support a (long)while ago, and I was directed to the FTDI site for the most recent drivers, as it did not work with either JS1 or BS2. I ended up getting the most recent one for use with BS2s. the new one works with BS2s, but nothing works with JS1s. My theory at the time still holds. I may need to find a copy of the drivers released with the JSIDE in 2009. I do have a product disk from then that holds a copy of the JSIDE. it may hold such drivers. This is my response. I think it is Stamp compatible. Try it on a regular Stamp board. If the Javelin stamp is no good they come up on Ebay fairly often for not much money. I do not think the USB IC on the board has anything to do with it running. Does your Javelin stamp work? Or do you have a board with a USB problem? Or connecting to Editor problem? Edit. " My JSIDE terminal " What is that? The USB IC on my BOE is the programming link(built-USB/Serial interpreter) from Stamp editor on PC to the BS2. Built for the BS2 family, there is no reason that the Javelin could be programmed on it. @Ken Gracey, is(sorry, was) there any sort of Javelin tester board, to determine the functionality of a Javelin? My board is one hundred percent functional. JSIDE-Javelin Stamp Integrated Development Environment. The terminal is the USB port on which the Javelin is being programmed. You can use your Stamp board to test your Javelin module. That is from Javelin manual.Here's link for one I looked in. I myself would just put it on there. If you want reassurance, call or email support before you do it. The Javelin was Parallax's new microcontroller before the Propeller. When you get it going it should be fun! Edit Sent a message to Support for you. "The manual is 5 years outdated " That's right. The Javelin is not a current device with Parallax. 
As you can see on the website they no longer sell them and have not made anymore. Support never got back to me today. Alll day! Will phone them in the morning for you. Awesome Cronk Let's just ask the forum. Anybody know if it's okay to run a Javelin on a BOE USB? I would just put it on there. If you want confirmation before doing that then we will wait. Awesome Cronk Here you go. Support did get back to me yesterday. I just missed it another email account of ours. Let me know how that works. [Error IDE-0022] Error reading from the serial port [timeout].
http://forums.parallax.com/discussion/comment/1453214/
CC-MAIN-2020-05
refinedweb
1,161
76.72
import "golang.org/x/exp/sumdb/internal/tlog"

Package tlog implements a tamper-evident log used in the Go module go.sum database server. This package is part of a DRAFT of what the go.sum database server will look like. Do not assume the details here are final! This package follows the design of Certificate Transparency (RFC 6962) and its proofs are compatible with that system. See TestCertificateTransparency.

HashSize is the size of a Hash in bytes.

CheckRecord verifies that p is a valid proof that the tree of size t with hash th has an n'th record with hash h.

CheckTree verifies that p is a valid proof that the tree of size t with hash th contains as a prefix the tree of size n with hash h.

ParseRecord parses a record description at the start of text, stopping immediately after the terminating blank line. It returns the record id, the record text, and the remainder of text.

func ReadTileData(t Tile, r HashReader) ([]byte, error)
ReadTileData reads the hashes for tile t from r and returns the corresponding tile data.

SplitStoredHashIndex is the inverse of StoredHashIndex. That is, SplitStoredHashIndex(StoredHashIndex(level, n)) == level, n.

StoredHashCount returns the number of stored hashes that are expected for a tree with n records.

StoredHashIndex maps the tree coordinates (level, n) to a dense linear ordering that can be used for hash storage. Hash storage implementations that store hashes in sequential storage can use this function to compute where to read or write a given hash.

A Hash is a hash identifying a log record or tree root.

HashFromTile returns the hash at the given storage index, provided that t == TileForIndex(t.H, index) or a wider version, and data is t's tile data (of length at least t.W*HashSize).

NodeHash returns the hash for an interior tree node with the given left and right hashes.

ParseHash parses the base64-encoded string form of a hash.

RecordHash returns the content hash for the given record data.
StoredHashesForRecordHash is like StoredHashes but takes as its second argument RecordHash(data) instead of data itself.

MarshalJSON marshals the hash as a JSON string containing the base64-encoded hash.

String returns a base64 representation of the hash for printing.

UnmarshalJSON unmarshals a hash from a JSON string containing the base64-encoded hash.

A HashReaderFunc is a function implementing HashReader.

func (f HashReaderFunc) ReadHashes(indexes []int64) ([]Hash, error)

A RecordProof is a verifiable proof that a particular log root contains a particular record. RFC 6962 calls this a “Merkle audit path.”

func ProveRecord(t, n int64, r HashReader) (RecordProof, error)

ProveRecord returns the proof that the tree of size t contains the record with index n.

ParseTilePath parses a tile coordinate path.

TileForIndex returns the tile of height h ≥ 1 and least width storing the given hash storage index.

Path returns a tile coordinate path describing t.

A Tree is a tree description, to be signed by a go.sum database server.

ParseTree parses a tree root description.

A TreeProof is a verifiable proof that a particular log tree contains as a prefix all records present in an earlier tree. RFC 6962 calls this a “Merkle consistency proof.”

func ProveTree(t, n int64, h HashReader) (TreeProof, error)

ProveTree returns the proof that the tree of size t contains as a prefix all the records from the tree of smaller size n.

Package tlog imports 9 packages and is imported by 1 package. Updated 2019-11-29.
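The RecordHash and NodeHash functions above are the whole hashing vocabulary of such a log. As a rough, self-contained sketch (not this package's code; it only borrows the RFC 6962 0x00/0x01 domain-separation prefixes that the package follows), here is how a tree root can be computed from record data:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// recordHash hashes a leaf with the RFC 6962 0x00 prefix.
func recordHash(data []byte) [32]byte {
	return sha256.Sum256(append([]byte{0x00}, data...))
}

// nodeHash hashes an interior node with the RFC 6962 0x01 prefix.
func nodeHash(left, right [32]byte) [32]byte {
	b := append([]byte{0x01}, left[:]...)
	b = append(b, right[:]...)
	return sha256.Sum256(b)
}

// treeHash computes the root over records (assumes at least one record),
// splitting at the largest power of two smaller than n, as RFC 6962 does.
func treeHash(records [][]byte) [32]byte {
	n := len(records)
	if n == 1 {
		return recordHash(records[0])
	}
	k := 1
	for k*2 < n {
		k *= 2
	}
	return nodeHash(treeHash(records[:k]), treeHash(records[k:]))
}

func main() {
	records := [][]byte{[]byte("go.mod v1"), []byte("go.mod v2"), []byte("go.mod v3")}
	root := treeHash(records)
	fmt.Printf("root: %x\n", root[:8])
	// Appending a record changes the root.
	root2 := treeHash(append(records, []byte("go.mod v4")))
	fmt.Println("changed:", root != root2)
}
```

Changing or appending any record changes the root, which is what makes the log tamper-evident.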
https://godoc.org/golang.org/x/exp/sumdb/internal/tlog
CC-MAIN-2019-51
refinedweb
591
66.33
Testing your mobile game or app before releasing it to the app stores is a crucial task: not only to find bugs, but also to test the app's user experience, balancing settings or just its long-term fun factor. Because of this we are happy to introduce a new plugin for beta testing your apps: the Felgo Qt 5 HockeyApp Plugin.

This post answers the following questions:

- How beta testing works
- How does HockeyApp support beta testing?
- How can I integrate HockeyApp into my Qt 5 app?
- How to get started with HockeyApp

How Beta Testing Works

It might be easy to provide test versions to your colleagues within your company or team. But it is definitely hard to set up a deployment process that provides external testers with new versions. Especially when:

- You are a single developer also responsible for QA. Your testers are then mostly scattered around the globe and you might only know them via Twitter or your support channels.
- You or your company is developing a game or app for a customer who wants to test pre-release versions on a regular basis.

For those use cases both Apple and Google have solutions in place. Apple recently acquired TestFlight and offers it within iTunes Connect for beta testing. Google provides alpha and beta stages for your uploaded Google Play Store apps. Both, however, come with serious drawbacks:

Apple requires you to submit your beta apps into their review process. This means you lose a lot of time between uploading your app and getting it onto the devices of your testers. Moreover, Apple might reject it even before it reaches your testers' devices. For Play Store beta distribution you need to invite your testers into a Google Group, which requires them to create a Google ID and then to wait some time until the app hits the Google Play Store.

So instead of relying on platform-dependent services, why not use a service that specializes in beta app distribution? And more, wouldn't it be great if there were such a service for Qt-based apps?
Say Hello to our new Felgo Qt 5 HockeyApp Plugin!

HockeyApp is a distribution service for mobile beta apps. In comparison to Apple or Google it's not bound to a specific platform. Instead it provides the functionality across iOS, Android and Windows Phone 8, a great match with the Felgo cross-platform philosophy. Thanks to our new plugin it's really easy to integrate HockeyApp's functionality into your Qt or Felgo app or game, both for iOS and Android.

How Does HockeyApp Support Beta Testing?

As soon as you have a new app version ready for testing you can upload your build to HockeyApp and optionally notify your testers via email about the new version. The next time your testers open your app on their devices, an alert notifies them about the new version. You can even show release notes if you want. Your testers can then update your app over-the-air without downloading any extra files to their computers. When you open HockeyApp's dashboard you also get some insights into how many of your users are already on the latest version, who they are, and which devices they are using for tests.

How Can I Integrate HockeyApp into my Qt 5 App?

If you've ever added one of our other Qt 5 plugins to your app you already know how to do it:

- Download the HockeyApp plugin. You can either purchase it as a standalone plugin for one app, or go for a Felgo subscription and use it for an unlimited amount of apps and games. There is also a test project available on GitHub which you can use for testing.
- After installation you can add it to your code like this:

import VPlayPlugins.hockeyapp 1.0

GameWindow {
    HockeyApp {
        appId: "<HOCKEYAPP-APP-ID>"
    }
}

The appId lets the plugin know which app it should use for updates. You can create a new app for beta distribution after registering with HockeyApp.
For every new release of your project you can then follow these 3 steps:

Increase The App Version Number

HockeyApp identifies app updates by the version number of your uploaded builds. For a new update you therefore need to increase the number. On iOS, you can set a new version number in your project's Info.plist file:

<key>CFBundleVersion</key>
<string>18</string>
<key>CFBundleShortVersionString</key>
<string>0.18</string>

The CFBundleShortVersionString variable is the version name that is displayed to your users, whereas CFBundleVersion is used internally by HockeyApp to identify a new version.

On Android, set the version number in your project's AndroidManifest.xml file:

<manifest …
  android:versionName="0.18"
  android:versionCode="18"
  … >

Again, the android:versionName variable is the version name that is displayed to your users, whereas android:versionCode is used internally by HockeyApp to identify a new version.

If you're using Felgo these files are already in place in your project's android & ios subfolders. If you're starting from scratch, you first have to copy the autogenerated files from a previous build and add the following lines to your Qt project file:

# iOS
ios {
  # Path to custom Info.plist file
  QMAKE_INFO_PLIST = ios/Project-Info.plist
}

# Android
android {
  # Path to directory containing custom AndroidManifest.xml file
  ANDROID_PACKAGE_SOURCE_DIR = $$PWD/android
}

If you're developing a Felgo game, also make sure to increase the versioncode and versionname properties in your project's config.json file. This also means that you need a new license key from Felgo.

Create a Release Build

After adapting your project files you can create your beta app binaries. Change the build target in Qt Creator to "iphoneos-clang" or "Android for armeabi (-v7a)" and select the "Release" build type. When using Felgo also make sure to set the "stage" property to "publish" within your project's config.json file.
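The version-number step above is easy to get wrong by hand. A minimal shell sketch that bumps the Android fields in one go (the manifest path and version values are assumptions; it writes a demo manifest first so the snippet runs as-is):

```shell
# Bump android:versionCode and android:versionName in place.
MANIFEST=AndroidManifest.xml
CODE=19
NAME=0.19

# Demo manifest so the snippet is runnable on its own;
# point MANIFEST at your real file instead.
printf '<manifest android:versionName="0.18"\n  android:versionCode="18">\n</manifest>\n' > "$MANIFEST"

sed -i.bak \
  -e "s/android:versionCode=\"[0-9]*\"/android:versionCode=\"$CODE\"/" \
  -e "s/android:versionName=\"[^\"]*\"/android:versionName=\"$NAME\"/" \
  "$MANIFEST"

grep android:version "$MANIFEST"
```

The same idea extends to Info.plist, though the plist layout (value on the line after the key) makes a plist-aware tool a safer choice there.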
This is also a good time to add all your QML files to a resource file and disable the deployment folder, otherwise your source files are shipped as plain text files to your testers. Felgo makes changing from QML files with DEPLOYMENTFOLDERS to resources very easy: you do not need to change any paths in your QML logic, as Felgo does this automatically for you. For more details on what steps are involved in changing, and why it is beneficial to only switch to resources at publishing, see here.

Build And Upload The Binaries for HockeyApp

As the last step, you can now build the binaries. Depending on your target platform follow these steps:

iOS

- Clean your project, select "Run qmake" from Qt Creator's "Build" menu and open the resulting Xcode project from the shadow build directory
- Select your distribution provisioning profile for the "release" build type (you probably have to create a new one first)
- Run the "Archive" option from the "Build" menu from within Xcode
- Upload the resulting build to HockeyApp. You can either upload a build manually on the HockeyApp dashboard or, a lot easier, use the HockeyApp Desktop app. Just make sure to open the app before creating your build; as soon as the "Archive" build step succeeds, HockeyApp asks you if it should upload the newly created build.

Android

- Add signing settings to your project within Qt Creator's Projects settings:
- Clean your project, select "Run qmake" from Qt Creator's "Build" menu and finally build the app
- Upload the resulting APK file to HockeyApp with the help of the HockeyApp desktop app or on the dashboard. The APK file can be found at "<shadowbuild-dir>/android-build/bin/QtApp-release.apk".

Congrats! The next time your testers open your beta app they get notified about the latest uploaded version and can install it over the air.

That's all you need to know to get started with Felgo's HockeyApp Plugin! You can now begin by installing the plugin as described here.
The plugin is included in your Felgo Indie subscription plan & above for your Felgo games or as a standalone Qt 5 plugin for your Qt apps. It is available both for iOS & Android apps. For an easy start you can download our sample project from GitHub, which already comes pre-configured for HockeyApp. Just replace the app identifiers in AndroidManifest.xml & Project-Info.plist, add your own HockeyApp app id and you’re ready to go! For further information have a look at our official documentation and ask your questions in our support forums.
https://felgo.com/ios-development/beta-distribution-for-mobile-qt-apps-felgo-games-with-hockeyapp-felgo
CC-MAIN-2019-39
refinedweb
1,425
60.55
MyClassLoader - Java ClassLoader - Part 2

By vaibhavc on Jan 02, 2008

MyClassLoader will take one more entry to complete. Before writing our own custom ClassLoader, we have to devote some time to the methods of ClassLoader. Some of them need special attention, while others we can ignore. Before starting with the methods, we can look at some types of ClassLoaders available in the JDK (OpenJDK) itself: AppletClassLoader, RMIClassLoader, SecureClassLoader and URLClassLoader are some of them. Remember, all custom ClassLoaders need to extend ClassLoader, except one :-). Any guesses?

Bootstrap Class Loader - Yes, this is the one responsible for loading the runtime classes (rt.jar, the very famous jar file in /jre/lib :-) ). It has a native implementation and hence varies across JVMs. So, when we write java MyProgram, the Bootstrap ClassLoader comes into action first.

Alright, back to methods: we can see the whole list of methods of ClassLoader here, but we will look at those of interest:

- loadClass -> the entry point for a ClassLoader. In JDK 1.1 or earlier, this is the only method we needed to override, but with JDK 1.2 the dynamics changed. Will discuss that later.
- defineClass -> As I mentioned in the last blog, this is one of the complex methods: it takes raw data and turns it into a Class object. No need to worry, it is defined as final (so we can't change it... who would want to change it).
- findSystemClass -> looks for the class file on the local disk and, if found, calls defineClass to convert the raw data into a Class object.

In JDK 1.2, a new delegation model came into the picture: if a ClassLoader isn't able to find a class, it asks its parent ClassLoader to do it. JDK 1.2 also came up with a new method called findClass, which contains the specialized loading code and helps you when you are messed up with lots of ClassLoaders. So, from JDK 1.2 onwards, we just need to override findClass and everything will work fine; if not, it will throw a ClassNotFoundException.
There are a lot of other methods, like getParent and getSystemClassLoader, but we can write our custom ClassLoader without touching them. So, the top-level skeleton looks like:

public class MyClassLoader extends ClassLoader {

    public MyClassLoader() {
        // getClassLoader() returns the ClassLoader that loaded this class;
        // we pass it up as the parent
        super(MyClassLoader.class.getClassLoader());
    }
}

// lots more after this
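From the skeleton, a small but complete loader is only a findClass away. A hedged sketch (the class directory is an illustrative assumption): parent-first delegation handles java.lang.String for us, while our findClass reads raw bytes from disk and hands them to defineClass:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MyClassLoader extends ClassLoader {

    private final Path classDir;

    public MyClassLoader(Path classDir) {
        // parent-first delegation: pass up the loader that loaded this class
        super(MyClassLoader.class.getClassLoader());
        this.classDir = classDir;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // only called after the parent failed to find the class
        Path file = classDir.resolve(name.replace('.', '/') + ".class");
        try {
            byte[] raw = Files.readAllBytes(file);
            // defineClass turns the raw bytes into a Class object
            return defineClass(name, raw, 0, raw.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }

    public static void main(String[] args) throws Exception {
        MyClassLoader loader = new MyClassLoader(Paths.get("."));
        // java.lang.String is found by the parent (ultimately the
        // bootstrap loader), so our findClass is never called for it.
        Class<?> c = loader.loadClass("java.lang.String");
        System.out.println(c + " loaded by parent delegation");
    }
}
```

Pointing classDir at a directory of compiled .class files lets this loader pick up classes that are not on the classpath; anything it cannot find ends in the promised ClassNotFoundException.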
https://blogs.oracle.com/vaibhav/entry/myclassloader_java_classloader_part_2
CC-MAIN-2014-10
refinedweb
385
66.64
Overcoming Positional Parameter Parsing in Java

This primer into positional parameter parsing using a custom class is a good reminder of how to keep your programs flexible.

During the initial days of learning any programming language, it is natural to hard code all the inputs needed by a program. But, as we learn more about a programming language, and also learn to create more flexible programs, there is a need to develop a 'generic' program. For the context of this article, I have defined a 'generic' application to be one that accepts command-line parameters. I will cover three methods of handling parameters passed to an application.

Method 1: Positional Parameters

This method is the first step for handling parameters. The main method of Java accepts an array of strings, and it is only natural to access command line parameters using the array indices. Thus, if a program accepts two parameters, it would use the arguments as given below:

. . .
public static void main(String[] args) {
    . . .
    String inFilePath = args[0];
    String outFilePath = args[1];
    . . .
    open file in read mode using inFilePath
    open file in write mode using outFilePath
    . . .
}
. . .

The biggest problem of this method is that the arguments parsed by the application are positional in nature. If, by any chance, the user makes the mistake of specifying the parameters incorrectly, the whole operation can end in disaster. For example, in the above example, if the paths are interchanged, we will end up overwriting an existing file! Surely not what we want from the application.

Method 2: Checking for Parameters

Instead of using positional parameters, can we not use a more robust and flexible method? Can we not use named arguments? In fact, we can — and the answer is simple. Here is an example:

. . .
public static void main(String[] args) {
    . . .
    String inFilePath = null;
    String outFilePath = null;

    for ( int i = 0; i < args.length; i++ ) {
        if ( args[i].equals("-i") ) {
            i++;
            inFilePath = args[i];
        } else if ( args[i].equals("-o") ) {
            i++;
            outFilePath = args[i];
        } else {
            // nothing to do, the unrecognized argument will be skipped
        }
    }

    open file in read mode using inFilePath
    open file in write mode using outFilePath
    . . .
}
. . .

To invoke the application, we use the following command:

java -jar copy.jar CopyFile -i infile.txt -o outFile.txt

Method 3: Custom Class

While the code shown in method two is simple, it is tedious to check each parameter in an explicit manner. Can we not create another mechanism? Indeed we can. This does the very same thing, but uses a custom class for the purpose. The class for command line parsing is as below:

import java.util.ArrayList;

@SuppressWarnings("unchecked")
public class CommandOptions {

    protected ArrayList arguments;

    public CommandOptions(String[] args) {
        parse(args);
    }

    public void parse(String[] args) {
        arguments = new ArrayList();
        for ( int i = 0; i < args.length; i++ ) {
            arguments.add(args[i]);
        }
    }

    public int size() {
        return arguments.size();
    }

    public boolean hasOption(String option) {
        boolean hasValue = false;
        String str;
        for ( int i = 0; i < arguments.size(); i++ ) {
            str = (String)arguments.get(i);
            if ( true == str.equalsIgnoreCase(option) ) {
                hasValue = true;
                break;
            }
        }
        return hasValue;
    }

    public String valueOf(String option) {
        String value = null;
        String str;
        for ( int i = 0; i < arguments.size(); i++ ) {
            str = (String)arguments.get(i);
            if ( true == str.equalsIgnoreCase(option) ) {
                value = (String)arguments.get(i+1);
                break;
            }
        }
        return value;
    }
}
. . .
public static void main(String[] args) {
    CommandOptions cmd = new CommandOptions(args);

    String inFilePath = null;
    String outFilePath = null;

    if ( cmd.hasOption("-i") ) {
        inFilePath = cmd.valueOf("-i");
    }
    if ( cmd.hasOption("-o") ) {
        outFilePath = cmd.valueOf("-o");
    }

    open file in read mode using inFilePath
    open file in write mode using outFilePath
    . . .
}
. . .

To invoke the application, we use the following command:

java -jar copy.jar CopyFile -i infile.txt -o outFile.txt

Conclusion

The way I have presented the methods may leave you feeling that these are a gradual and evolutionary progression. In fact, after living with Method 1 for quite some time, I created the custom class described in Method 3 — and also published it as 'public domain' code on code.google.com. Unfortunately, the site closed down. It was only recently that I created Method 2 for an application where I did not want to go through the elaborate mechanism of a custom class.
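As a follow-up sketch (not the article's original class): the same idea with generics and a bounds check, so that an option passed as the last argument no longer risks an IndexOutOfBoundsException inside valueOf:

```java
import java.util.ArrayList;
import java.util.List;

public class SafeCommandOptions {

    private final List<String> arguments = new ArrayList<>();

    public SafeCommandOptions(String[] args) {
        for (String a : args) {
            arguments.add(a);
        }
    }

    public boolean hasOption(String option) {
        for (String a : arguments) {
            if (a.equalsIgnoreCase(option)) return true;
        }
        return false;
    }

    public String valueOf(String option) {
        // stop one short of the end so get(i + 1) is always in bounds
        for (int i = 0; i < arguments.size() - 1; i++) {
            if (arguments.get(i).equalsIgnoreCase(option)) {
                return arguments.get(i + 1);
            }
        }
        return null; // option absent, or present with no trailing value
    }

    public static void main(String[] args) {
        SafeCommandOptions cmd =
            new SafeCommandOptions(new String[] {"-i", "in.txt", "-o", "out.txt"});
        System.out.println("in=" + cmd.valueOf("-i") + " out=" + cmd.valueOf("-o"));
    }
}
```

The generics remove the need for the casts and the @SuppressWarnings annotation, and a caller can now safely treat a null return as "no value supplied".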
https://dzone.com/articles/overcoming-positional-parameter-parsing-in-java
CC-MAIN-2019-30
refinedweb
734
51.14
13.1. Simulating a discrete-time Markov chain

Discrete-time Markov chains are stochastic processes that undergo transitions from one state to another in a state space. Transitions occur at every time step. Markov chains are characterized by their lack of memory in that the probability to undergo a transition from the current state to the next depends only on the current state, not the previous ones. These models are widely used in scientific and engineering applications. Continuous-time Markov processes also exist and we will cover particular instances later in this chapter. Markov chains are relatively easy to study mathematically and to simulate numerically. In this recipe, we will simulate a simple Markov chain modeling the evolution of a population.

How to do it...

1. Let's import NumPy and matplotlib:

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

2. We consider a population that cannot comprise more than \(N=100\) individuals, and define the birth and death rates:

N = 100  # maximum population size
a = .5 / N  # birth rate
b = .5 / N  # death rate

3. We simulate a Markov chain on the finite space \(\{0, 1, \ldots, N\}\). Each state represents a population size. The x vector will contain the population size at each time step. We set the initial state to \(x_0=25\) (that is, there are 25 individuals in the population at initialization time):

nsteps = 1000
x = np.zeros(nsteps)
x[0] = 25

4. Now we simulate our chain. At each time step \(t\), there is a new birth with probability \(ax_t\), and independently, there is a new death with probability \(bx_t\). These probabilities are proportional to the size of the population at that time. If the population size reaches 0 or N, the evolution stops:

for i in range(nsteps - 1):
    if 0 < x[i] < N:
        # Is there a birth?
        birth = np.random.rand() <= a * x[i]
        # Is there a death?
        death = np.random.rand() <= b * x[i]
        # We update the population size.
        x[i + 1] = x[i] + 1 * birth - 1 * death
    else:
        # The evolution stops once we reach 0 or N.
        x[i + 1] = x[i]

5. Let's look at the evolution of the population size:

fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(x, lw=2)

We see that, at every time step, the population size can stay stable, increase, or decrease by 1.

6.
Now, we will simulate many independent trials of this Markov chain. We could run the previous simulation with a loop, but it would be very slow (two nested for loops). Instead, we vectorize the simulation by considering all independent trials at once. There is a single loop over time. At every time step, we update all trials simultaneously with vectorized operations on vectors. The x vector now contains the population size of all trials, at a particular time. At initialization time, the population sizes are set to random numbers between 0 and N:

ntrials = 100
x = np.random.randint(size=ntrials, low=0, high=N)

7. We define a function that performs the simulation. At every time step, we find the trials that undergo births and deaths by generating random vectors, and we update the population sizes with vector operations:

def simulate(x, nsteps):
    """Run the simulation."""
    for _ in range(nsteps):
        # Which trials to update?
        upd = (0 < x) & (x < N)
        # In which trials do births occur?
        birth = 1 * (np.random.rand(ntrials) <= a * x)
        # In which trials do deaths occur?
        death = 1 * (np.random.rand(ntrials) <= b * x)
        # We update the population size for all trials.
        x[upd] = x[upd] + birth[upd] - death[upd]

8. Now, let's look at the histograms of the population size at different times. These histograms represent the probability distribution of the Markov chain, estimated with independent trials (the Monte Carlo method):

bins = np.linspace(0, N, 25)
nsteps_list = [10, 1000, 10000]
fig, axes = plt.subplots(1, len(nsteps_list), figsize=(12, 3), sharey=True)
for i, nsteps in enumerate(nsteps_list):
    ax = axes[i]
    simulate(x, nsteps)
    ax.hist(x, bins=bins)
    ax.set_xlabel("Population size")
    if i == 0:
        ax.set_ylabel("Histogram")
    ax.set_title(f"{nsteps} time steps")

Whereas, initially, the population sizes look uniformly distributed between 0 and \(N\), they appear to converge to 0 or \(N\) after a sufficiently long time. This is because the states 0 and \(N\) are absorbing; once reached, the chain cannot leave these states. Furthermore, these states can be reached from any other state.

How it works...

Mathematically, a discrete-time Markov chain on a space \(E\) is a sequence of random variables \(X_1, X_2, \ldots\) that satisfy the Markov property:

\(P(X_{n+1} \mid X_1, X_2, \ldots, X_n) = P(X_{n+1} \mid X_n)\)

A (stationary) Markov chain is characterized by the probability of transitions \(P(X_j \mid X_i)\).
These values form a matrix called the transition matrix. This matrix is the adjacency matrix of a directed graph called the state diagram. Every node is a state, and the node \(i\) is connected to the node \(j\) if the chain has a non-zero probability of transition between these nodes.

There's more...

Simulating a single Markov chain in Python is not particularly efficient because we need a for loop. However, simulating many independent chains following the same process can be made efficient with vectorization and parallelization (all tasks are independent, thus the problem is embarrassingly parallel). This is useful when we are interested in statistical properties of the chain (example of the Monte Carlo method).

There is a vast literature on Markov chains. Many theoretical results can be established with linear algebra and probability theory. Many generalizations of discrete-time Markov chains exist. Markov chains can be defined on infinite state spaces, or with a continuous time. Also, the Markov property is important in a broad class of stochastic processes. Here are a few references:

- Markov chains on Wikipedia
- Absorbing Markov chains on Wikipedia
- Monte-Carlo methods on Wikipedia

See also

- Simulating a Brownian motion
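For this birth-and-death chain the transition matrix can be written down explicitly. A short sketch using the same N, a, and b as the recipe (an upward move is a birth without a death, a downward move a death without a birth, matching the simulation where the two events can also cancel out):

```python
import numpy as np

N = 100
a = .5 / N  # birth rate
b = .5 / N  # death rate

# P[i, j] = probability of going from population i to population j
P = np.zeros((N + 1, N + 1))
P[0, 0] = 1.0  # absorbing state
P[N, N] = 1.0  # absorbing state
for i in range(1, N):
    pb, pd = a * i, b * i                     # birth / death probabilities
    P[i, i + 1] = pb * (1 - pd)               # birth only
    P[i, i - 1] = pd * (1 - pb)               # death only
    P[i, i] = 1 - P[i, i + 1] - P[i, i - 1]   # no net change

# every row is a probability distribution over the next state
assert np.allclose(P.sum(axis=1), 1.0)
```

With P in hand, quantities such as the distribution after n steps (a row vector times the matrix power P**n) or absorption probabilities follow from linear algebra alone, with no simulation.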
https://ipython-books.github.io/131-simulating-a-discrete-time-markov-chain/
CC-MAIN-2019-09
refinedweb
859
57.57
You can position any components anywhere on the screen using Flexbox. You can arrange them vertically, horizontally, center them, distribute them evenly and much more.

Table of contents

What is Flexbox?

The CSS3 Flexible Box, or Flexbox, is a layout mode providing for the arrangement of elements on a page such that the elements behave predictably when the page layout must accommodate different screen sizes and different display devices.

Let's Get Started

Let's start by creating a new app. Open the Terminal app and run these commands to initialize a new project and run it in the emulator.

react-native init LearnignFlexbox;
cd LearnignFlexbox;
react-native run-ios;

Once your app is up and running, press ⌘D and select Enable Hot Reloading. This will save you some time having to reload the app manually every time you make a change.

Open the index.ios.js file and replace its content with the following code.

import React, { Component } from 'react';
import {
  AppRegistry,
  StyleSheet,
  Text,
  View
} from 'react-native';

class LearnignFlexbox extends Component {
  render() {
    return (
      <View style={styles.container}>
        <View style={styles.navBar}>
          <Text style={styles.navBarButton}>Back</Text>
          <Text style={styles.navBarHeader}>Awesome App</Text>
          <Text style={styles.navBarButton}>More</Text>
        </View>
        <View style={styles.content}>
          <Text style={styles.text}>
            Welcome to Awesome App!
          </Text>
        </View>
      </View>
    );
  }
}

const styles = StyleSheet.create({
});

AppRegistry.registerComponent('LearnignFlexbox', () => LearnignFlexbox);

As you can see, we have a View component with the container style and two more Views inside it, navBar and content. navBar has 3 Text components, with either the navBarButton style for the buttons on the left and right or the navBarHeader style for the header. And, lastly, the content View has a Text inside.

Ok, let's take a look at our progress so far. Doesn't look that great, huh? That's because we haven't defined any of those styles mentioned yet.
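Throughout this tutorial, flexible children split the free space left by fixed-size siblings in proportion to their flex values. That arithmetic can be illustrated in a few lines of plain JavaScript (just the math, not React Native code):

```javascript
// Distribute `total` pixels among children along the primary axis:
// fixed sizes are taken first, the remainder is split in proportion
// to each child's flex value.
function distribute(total, children) {
  const fixed = children
    .filter(c => !c.flex)
    .reduce((sum, c) => sum + c.size, 0);
  const flexSum = children
    .filter(c => c.flex)
    .reduce((sum, c) => sum + c.flex, 0);
  const free = total - fixed;
  return children.map(c =>
    c.flex ? (free * c.flex) / flexSum : c.size
  );
}

// A nav bar like the one below: two 64px buttons and a flex: 1 header
// on a 320px-wide screen.
console.log(distribute(320, [
  { size: 64 },
  { flex: 1 },
  { size: 64 },
])); // -> [ 64, 192, 64 ]
```

The same rule explains the tab bar later on: give one of five buttons flex: 2 and it receives twice the width of its flex: 1 siblings.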
By default all of the components are vertically stacked on top of each other.

Defining Styles

Let's start adding our styles inside the styles definition we added earlier.

const styles = StyleSheet.create({
});

Main Flexible Container

First, let's add the container style

container: {
    flex: 1,
},

flex: 1 means that container is flexible and will take up all of the space on the screen. Nothing has changed on the screen, because container has a white background.

Understanding Flexible Containers

To illustrate how that works let's take a look at a quick example here. Let's change the container background and try setting flex to either 0 or 1.

container: {
    flex: 0,
    backgroundColor: '#374046'
},

container: {
    flex: 1,
    backgroundColor: '#374046'
},

The container on the left has flex set to 0 and it takes as much space as all of its children take. In contrast, the container on the right has flex set to 1 and takes all of the available space on the screen.

Let's add a nav bar at the top of the screen. Add the navBar style

navBar: {
    flexDirection: 'row',
    paddingTop: 30,
    height: 64,
    backgroundColor: '#1EAAF1'
},

We set the total height to 64 and paddingTop to 30 to push it down from the status bar. And we set flexDirection to row, which means that all of the navBar children components will be stacked horizontally instead of vertically. By default flexDirection is column and children are stacked vertically.

Let's see where we are at. Looks better, but we want to have the buttons aligned on the sides and the header centered. So, let's add styles to do that.

navBarButton: {
    color: '#FFFFFF',
    textAlign: 'center',
    width: 64
},
navBarHeader: {
    flex: 1,
    color: '#FFFFFF',
    fontWeight: 'bold',
    textAlign: 'center'
},

We set width to 64 for the buttons, and flex to 1 for the header, which means that it will take all of the space available between the buttons. Looks much better.

Center Content Vertically and Horizontally

Next, we want to center the welcome text on the screen. Let's add some styles for the content and text containers.
content: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    backgroundColor: '#374046'
},
text: {
    color: '#EEEEEE'
},

We set flex to 1 to take all available screen space, justifyContent to center to center children components vertically, and alignItems to center to center them horizontally.

When you switch flexDirection to row mode, justifyContent and alignItems work the opposite way: justifyContent becomes responsible for aligning children components horizontally and alignItems vertically.

Let's check out how our app is looking so far. Looks pretty good.

Tab Bar

Now, let's add a tab bar at the bottom. Just add a new View after the <View style={styles.content}> closing </View> tag.

<View style={styles.content}>
  <Text style={styles.text}>
    Welcome to Awesome App!
  </Text>
</View>
// Add the following
<View style={styles.tabBar}>
  <View style={[styles.tabBarButton, styles.button1]} />
  <View style={[styles.tabBarButton, styles.button2]} />
  <View style={[styles.tabBarButton, styles.button3]} />
  <View style={[styles.tabBarButton, styles.button4]} />
  <View style={[styles.tabBarButton, styles.button5]} />
</View>

We added a View with the tabBar style and 5 children View components for the buttons. Each button uses two styles: the first is tabBarButton, which is the same for all buttons, and then unique styles button1 through button5 for each button. When you want to use more than one style for a component you can use an array [] of styles, style={[styles.tabBarButton, styles.button1]}, with as many comma separated styles as you wish.

And let's define the styles

tabBar: {
    height: 50
},
tabBarButton: {
    flex: 1
},
button1: {
    backgroundColor: '#8BC051'
},
button2: {
    backgroundColor: '#CCD948'
},
button3: {
    backgroundColor: '#FDE84D'
},
button4: {
    backgroundColor: '#FCBF2E'
},
button5: {
    backgroundColor: '#FC9626'
}

We defined a fixed height for tabBar, and set flex to 1 for tabBarButton, which means that each button will take an equal share of the available space. And we also defined a unique color for each button.
Let’s see how does that look. Oops, all of our buttons got stacked vertically, which is default behavior, but not what we wanted. Let’s set flexDirection to row in order to stack buttons horizontally. tabBar: { flexDirection: 'row', height: 50 }, Ok, now it perfect. Understanding Flex When we set flex to 1 for each button, that means each button will take as much space is available and all of then will take the same amount of space. If we wanted 3rd button to be as twice as big as other buttons we could set its flex value to 2. button3: { flex: 2, backgroundColor: '#FDE84D' }, And then it will look like Understanding Aligning Containers Let’s change tabBarButton to have fixed weight and height instead of being flexible. tabBarButton: { width: 30, height: 30 }, Primary and Secondary Axis In flexDirection default column mode the primary axis is column, the secondary is row, and vice versa in row mode. Justify Content Adding justifyContent to a component’s style sets the distribution of children along the primary axis. Available options are flex-start (default) — Children distributed at the start. tabBar: { flexDirection: 'row', justifyContent: 'flex-start' } center — Children are centered. tabBar: { flexDirection: 'row', justifyContent: 'center' } flex-end — Children distributed at the end. tabBar: { flexDirection: 'row', justifyContent: 'flex-end' }, space-around — Children distributed with equal space around them. tabBar: { flexDirection: 'row', justifyContent: 'space-around' } space-between — Children distributed evenly. tabBar: { flexDirection: 'row', justifyContent: 'space-between' } Align Items Adding alignItems to a component’s style sets the alignment of children along the secondary axis. Available options are flex-start — Children aligned at the start. tabBar: { flexDirection: 'row', justifyContent: 'space-between', alignItems: 'flex-start' } center — Children aligned at the center. 
tabBar: { flexDirection: 'row', justifyContent: 'space-between', alignItems: 'center' } flex-end — Children aligned at the end. tabBar: { flexDirection: 'row', justifyContent: 'space-between', alignItems: 'flex-end' } stretch (default) — Children stretched to fill up all space. This options doesn’t work for children with fixed dimension along the secondary axis. Conclusion Flexbox is very powerful tool for creating any kind of responsive layouts for your apps. Keep in mind that default flex direction is column and do not forget to change it to row when you need to arrange children horizontally. I hope you enjoyed the tutorial. Subscribe to find out about new tutorials and learn how to build amazing apps!
https://rationalappdev.com/layout-react-native-components-with-flexbox/
CC-MAIN-2020-40
refinedweb
1,300
57.87
ValueError: I/O operation on closed file in Python 3 Dung Do Tien Jul 12 2022 33 Hello everyone. I have created a small app with Python 3.10 and I have a CSV file that needs to be read and displayed in some format. It is simple like this: import csv data = open("data-coin.csv", "r") read_file = csv.reader(data) data.close() for p in read_file: print(f'Coin name: {p[0]}, Position: {p[1]}, Value: {p[2]} MeV') But it throws an exception ValueError: I/O operation on closed file. Below is the detail of that exception. Traceback (most recent call last): File "main.py", line 5, in <module> for p in read_file: ValueError: I/O operation on closed file. ** Process exited - Return Code: 1 ** Press Enter to exit terminal I installed Python version 3.10 and window 11. Thanks for any suggestions. Have 1 answer(s) found. - J0 Jakub Szumiato Jul 12 2022 This error threw because you read data after the file is closed. Please don't close the stream file before reading the data of it. Here is a solution for you: import csv particles = open("data-coin.csv", "r") read_file = csv.reader(particles) for p in read_file: print(f'Coin name: {p[0]}, Position: {p[1]}, Value: {p[2]} MeV') particles.close() #Output Coin name: BTC, Position: 1, Value: 22000 MeV Coin name: ETH, Position: 2, Value: 1100 MeV Coin name: USDT, Position: 3, Value: 1 MeV Coin name: DOT, Position: 4, Value: 7.5 MeV Coin name: MATIC, Position: 5, Value: 0.56 MeV Coin name: NEAR, Position: 6, Value: 3.45 MeV Coin name: NRM, Position: 7, Value: 17 MeV Coin name: AMP, Position: 8, Value: 0.5 MeV Coin name: USDC, Position: 9, Value: 1 MeV ** Process exited - Return Code: 0 ** Press Enter to exit terminal Hope this is helpful for you. If this answer is useful for you, please BUY ME A COFFEE!!! I need your help to maintain blog. * Type maximum 2000 characters. * All comments have to wait approved before display. * Please polite comment and respect questions and answers of others.
https://quizdeveloper.com/faq/valueerror-io-operation-on-closed-file-in-python-3-aid3493
CC-MAIN-2022-33
refinedweb
353
68.36
"Nathan J. Mehl" wrote:
> In the immortal words of Alan Eldridge (alane@xxxxxxxxxxxx):
> > WARNING. HEAVY SARCASM AHEAD. It's late, and these are things that bit me
> > and generally annoyed the crap out of me when I was getting 7.1 up and
> > running. These things are *not* the fault of the SGI dudes! It's the kernel
> > interface from hell... yes, it's (shield your kid's eyes, folks, you don't
> > want them to see this) DEVFS.
>
> Eh. Devfs itself is fine. Coming from a solaris background, it's
> nice to see one of the free unices catch up and join the mid-90s.

Right with you on this one.

> Of course, it would be even nicer if devfs' namespace in some way
> corresponded to the bios' view of the system bus (a la OpenBoot), but
> we can't really hold Richard Gooch responsible for lousy design
> decisions made by IBM ~15 years ago...

It always amazes me how sometimes the worst designs are the most popular; guess money really does talk, or in this case what costs less talks.

> I suspect that a lot of the pain will Go Away once (well, if) one of
> the "big three" distributions (redhat/suse/debian) take the leap and
> enable it for their next release. What devfs/devfsd need more than
> anything else now is a solid run through an organized QA cycle. In
> that respect I'm grateful to SGI for sneaking it into the XFS 1.0
> release -- the archives of this list will provide good starting
> material for whoever wants to make their distribution work with it.
> (Hint hint, redhat lurkers. :)

This is right on! While devfs isn't necessary for XFS, it will be crucial in the future for scalability and the management of large disk farms. I made the decision for XFS 1.0 to enable devfs by default knowing very well that many apps would have issues. But given that none of the distributions are working to clean up apps to be devfs friendly, this seemed like a good test bed to smoke out some of the problems.
In retrospect it was a bit of a headache to deal with, since every devfs question came our way rather than to the devfs lists. What is apparent at this time is that most people do not have large disk farms and as such can work quite comfortably in the current but horribly broken device naming scheme. The current plan (and I think Eric already did this) is to enable devfs but not have it mount on /dev by default. This will allow for mounting either on /dev or some other mount point such as /devfs, but only as an option. Hopefully one of the big distros will bite the bullet and start fixing the apps that are not devfs friendly.

-- 
Russell Cattelan
cattelan@xxxxxxxxxxx
http://oss.sgi.com/archives/xfs/2001-06/msg03256.html
CC-MAIN-2017-04
refinedweb
478
80.21
I have successfully embedded the Perl interpreter, and it is working great. The next step is to provide the ability for the user to make calls into my C program to get certain information, take certain actions, etc. I have written an extension using SWIG, and built that into my program as well. After constructing my perl interpreter, with the appropriate xs_init function that adds my DynTrans as a static XSUB, I immediately use:

eval_pv("use DynTrans;", FALSE);

So, the problem: In order for this to work, I have to have DynTrans.pm in the directory where I run my application. I want to remove this requirement; I want the entire application to be completely self-contained. I have gone so far as to modify my code like this:

perl_setup_module();
eval_pv("use DynTrans;", FALSE);
perl_cleanup_module();

So, the question: Is there a way that I can make XSUBs available to my perl interpreter without having to have the .pm file around at all? I tried building the entire .pm file into my application with a series of eval_pv() for every line, but of course that didn't work. What I would really like is a programmatic interface into whatever the "use DynTrans;" perl stuff does. I have read and re-read perlembed, perlxs, perlguts, perlapi, perlcall, several perl books, forums, SuperSearch, etc. and I cannot find a way to do this. Can anyone save me from the dreaded .pm file? Thanks in advance.

What happens if you put the whole text of the module in a single string and pass that to eval_pv()? I haven't tried, but I don't see any reason why that shouldn't work.

It seems that Perl isn't executing the .pm file directly as Perl code, at least not in the main interpreter namespace, but is rather pulling it in as part of the module loading/initialization process. I.E.
the eval_pv() method that I tried would be akin to writing a perl script and beginning it with (this is the .pm file that SWIG generated):

package DynTrans;
require Exporter;
@ISA = qw(Exporter);
package DynTrans;
boot_DynTrans();
package DynTrans;
@EXPORT = qw(
  GetTableName
  GetAction
);
. . .
User script here ...

Does this make sense?

Thanks for the suggestions, though.

I would post this to the Perl-Xs list. List is low traffic, but you will get some good suggestions.
http://www.perlmonks.org/?node_id=428832
CC-MAIN-2014-52
refinedweb
409
73.58
Difference between revisions of "ACCESS-ESM 1.5"

Latest revision as of 20:10, 30 September 2021

- historical
- ssp585
- last millennium
- mid holocene
- last interglacial
- AMIP run (non-coupled atmosphere only)

Model Spinup

If starting from scratch the model will take some time to spin up. It's recommended that you branch off of piControl, which has already been spun up, rather than starting a run from scratch (except for SSP runs, which will branch from historical). The script 'warm-start.sh' in the Payu experiment directory will set up restart files based on an existing run.

Using Payu

1. Load the Conda environment to access Payu

module use /g/data/hh5/public/modules
module load conda/analysis3

2. Download the Payu experiment configuration

git clone

3. Edit and run the warm-start script if needed

./warm-start.sh

4. Run the model

payu run

Each 'payu run' submission will run the model for one model year. You can run multiple years with e.g. 'payu run -n 25' - Payu will automatically resubmit the run after each year has finished. We recommend inspecting the model output after running for a year or so to make sure it's behaving as desired, especially if it's a non-standard configuration.

Experiment Details

AMIP

Science Contacts: Mustapha Adamu, Shayne McGregor

An uncoupled, atmosphere only version of ACCESS-ESM is provided at this Github repository.

CSIRO KSH Scripts

CSIRO is running the ACCESS-ESM model using ksh shell scripts. If you need to know, you can look here.

Fixing Ice Restart Dates (Payu)

In configuration file 'ice/input_ice.nml' make a note of the value of 'init_date' - this is the date that the model thinks it started at (ignore the similarly named 'inidate' value here). Make a copy of the restart directory that you want to re-run the model from (e.g.
'cp -r archive/restart1995 archive/restart11995'). We'll need to work out some numbers to put into the restart directory, which can be done in Python:

import cftime

init_date = cftime.datetime(1850,1,1)       # Initial date as set in ice/input_ice.nml
dt = 3600                                   # Model timestep as set in ice/input_ice.nml
this_run_start = cftime.datetime(1996,1,1)  # Date we want to start at
prev_run_start = cftime.datetime(1995,1,1)  # Date the previous run started at

runtime0 = (prev_run_start - init_date).total_seconds()     # Time between init_date and start of previous run
runtime = (this_run_start - prev_run_start).total_seconds() # Model runtime of previous run
time = (this_run_start - init_date).total_seconds()         # Time between init_date and start of this run
istep0 = int(time // dt)  # Model steps between init_date and start of this run (integer division: the namelist expects a whole number)

print(f"{runtime0=}, {runtime=}, {time=}, {istep0=}")
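As a cross-check of the arithmetic, the same numbers can be reproduced with the standard library's datetime (an illustration only: it assumes the run uses a standard Gregorian calendar, so use cftime for other model calendars). For the 1850 init date and a 1996 start, this reproduces the --istep0 1279800 and --time 4607280000 values used in the cicedumpdatemodify.py example:

```python
# Cross-check of the timestep arithmetic using the standard library's
# datetime (assumes a standard Gregorian calendar; use cftime otherwise).
from datetime import datetime

init_date = datetime(1850, 1, 1)
this_run_start = datetime(1996, 1, 1)
prev_run_start = datetime(1995, 1, 1)
dt = 3600  # model timestep in seconds, as in ice/input_ice.nml

runtime0 = int((prev_run_start - init_date).total_seconds())
runtime = int((this_run_start - prev_run_start).total_seconds())
time = int((this_run_start - init_date).total_seconds())
istep0 = time // dt

print(runtime0, runtime, time, istep0)
```

1995 is not a leap year, so runtime comes out as the "normal year" value of 31536000 seconds mentioned in the checking step.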
'runtime' should be set to the number of seconds this run will cover, 31536000 for a normal year or 31622400 for a leap year. Next, check work/ice/RESTART/ice.restart_file contains the name of the new ice restart file, e.g. 'iced.99990101' Clean up the work directory with 'payu sweep', then submit the run with 'payu run'. After the model has run for a little, check the output of 'grep istep work/ice/ice_diag.d', which should look something like istep0 = 52584 istep1: 52584 idate: 18611231 sec: 0 Restart read at istep= 52584 189302400.000000 istep1: 52584 idate: 18560101 sec: 0 istep1: 52608 idate: 18560102 sec: 0 istep1: 52632 idate: 18560103 sec: 0 istep1: 52656 idate: 18560104 sec: 0 istep1: 52680 idate: 18560105 sec: 0 istep1: 52704 idate: 18560106 sec: 0 istep1: 52728 idate: 18560107 sec: 0 ... After the 'Restart read' line the values of idate should be dates starting from the target start time, incrementing by one day for each line. If idate isn't incrementing something has gone wrong and the run should be stopped. After at least a month has run check the output in work/ice/HISTORY to make sure that the time and time_bounds values are correct.
http://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS-ESM_1.5&diff=cur&oldid=1278
CC-MAIN-2022-27
refinedweb
817
64.61
In this article, we discuss several use scenarios for inline assembly, also called inline asm. For beginners, we introduce basic syntax, operand referencing, constraints, and common pitfalls that new users need to be aware of. For intermediate users, we discuss the clobbers list, as well as branching topics that facilitate the use of branch instructions within inline asm stanzas in their C/C++ code. Lastly, we discuss memory clobbers and the volatile attribute for advanced users who use inline asm to optimize their code. We conclude with an example of multithreaded locking with inline asm.

Basic inline asm

In the asm block shown in Listing 1, the addc instruction is used to add two variables, op1 and op2. In any asm block, assembly instructions appear first, followed by the inputs and outputs, which are separated by a colon. The assembly instructions can consist of one or more quoted strings. The first colon separates the output operands; the second colon separates the input operands. If there are clobbered registers, they are inserted after the third colon. If there are no clobbered registers for the asm block, the third colon can be omitted, as Listing 2 shows.

Listing 1. Opcodes, inputs, outputs, and clobbers

int res=0;
int op1=20;
int op2=30;
asm ( " addc. %0,%1,%2 \n"
      : "=r"(res)
      : "b"(op1), "r"(op2)
      : "r0" );

Listing 2. No clobbered registers for the asm block, so third colon omitted

asm ( " addc. %0,%1,%2 \n"
      : "=r"(res)
      : "b"(op1), "r"(op2) );

Note: The clobbers list is discussed later in this section.

Each instruction "expects" inputs and outputs to be passed in a certain format. In the previous example, the addc. instruction expects its operands to be passed through registers, hence op1 and op2 are passed into the asm block with the "b" and "r" constraints. For a complete listing of all legal asm constraints for the IBM XL C and C++ compiler, see the compiler language reference.
Register constraints on variable declarations

In some programs, you will want to tie variables to certain hardware registers. This is done at the variable declaration. The following example ties the variable res to GPR0 throughout the life of the program:

int register res asm("r0")=0;

When the variable type is not matched with the type of target hardware register, you will receive a compilation error notice. After a variable is tied to a specific register, it is not possible to use another register to hold the same variable. For example, the following code will cause a compilation error: the variable res is associated at declaration time with GPR0, but in the asm block, the user attempts to use any register but GPR0 to pass in res.

Listing 3. Compilation error when conflicting constraints are used on a variable

int register res asm("r0")=0;
asm ( " addc. %0,%1,%2 \n"
      : "=b"(res)
      : "b"(op1), "r"(op2)
      : "r0" );

In the example in Listing 4, there is no output operand for the stw instruction, hence the outputs section of the asm is empty. None of the registers is modified, so they are all input operands, and the target address is passed in with the input operands. However, something is modified: the addressed memory location. But that location is not explicitly mentioned in the instruction, so the output of the instruction is implicit rather than explicit.

Listing 4. Instructions with no output operands

int res [] = {0,0};
int a=45;
int *pointer = &res[0];
asm ( " stw %0,%1(%2) \n"
      : : "r"(a), "i"(sizeof(int)), "r"(pointer));

Listing 5. Instructions with preserved operands

int res [] = {0,0};
int a=45;
asm ( " stw %0,%1(%2) \n"
      : "+r"(res[0])
      : "r"(a), "i"(sizeof(int)), "r"(pointer));

In Listing 5, if you want to preserve the initial value of a result variable that is not necessarily modified by the asm block, then you need to use the + (plus sign) constraint to preserve the initial value of that variable, as is shown with res[0].
Target memory addresses in inline asm

If an instruction specifies two of its arguments in a form similar to D(RA), where D is a literal value and RA is a general register, then this is taken to mean that D+RA is an effective address. In this case, the appropriate constraints are "m" or "o". Both "m" and "o" refer to memory arguments. Constraint "o" is described as an offsettable memory location. But in the IBM POWER architecture, nearly all memory references require an offset, so "m" and "o" are equivalent. In this case, you can use a single constraint to refer to two operands in the instruction. Listing 6 is an example.

Listing 6. A single constraint to refer to two operands in the instruction

int res [] = {0,0};
int a=45;
asm ( " stb %1,%0 \n"
      : "=m"(res[1])
      : "r"(a));

The form of the instruction stb (from the assembly language reference) is: stb RS,D(RA). Although the stb instruction technically takes three operands (a source register, an address register, and an immediate displacement), the asm description of it uses only two constraints. The "=m" constraint is used to notify the compiler that the memory address of res is to be used for the result of the store instruction. The "=m" indicates that the operand is a modified memory location. You do not need to know the address of the target location beforehand, because that task is left to the compiler. This allows the compiler to choose the right register (r1 for an automatic variable, for instance) and apply the right displacement automatically. This is necessary, because it would generally be impossible for an asm programmer to know what address register and what displacement to use. In other instances, you can also override this behavior by manually calculating the target address, as in the following example.

Listing 7.
Manually calculating the target address

int res [] = {0,0};
int a=45;
asm ( " stb %0,%1(%2) \n"
      : : "r"(a), "i"(sizeof(int)), "r"(&res));

In this code, the specification %1(%2) represents a base address and an offset, where %2 represents the base address of res and %1 represents the offset, sizeof(int). As a result, the store is performed at the effective address res + sizeof(int).

Note: For some instructions, GPR0 cannot be used as a base address. Specifying GPR0 tells the assembler not to use a base register at all. To ensure that the compiler does not choose r0 for an operand, you can use the constraint "b" rather than "r".

Addressing modes for POWER and PowerPC instructions

The IBM POWER architecture type is RISC. Instructions typically operate either with three register arguments (two registers for source arguments, one register to hold a result) or with two registers and an immediate value (one register and one immediate value for the source arguments, and one register to hold the result). There are exceptions to this pattern, but mostly it is true. Among the instructions that take two registers and an immediate value, there are two special subclasses: load instructions and store instructions. These instructions use the immediate value as an offset to the value in the source register to form an "effective address." The offset value is typically an offset onto the stack (r1 is the stack pointer), or it is an offset to the TOC (Table of Contents -- r2 is the TOC pointer). The TOC is used to promote the construction of position-independent code, which enables efficient dynamic loading of shared libraries on these machines. When using inline asm, you do not have to use specific registers nor manually construct effective addresses. The argument constraints are used to direct the compiler to choose registers or construct effective addresses appropriate to the requirements of the instructions.
Thus, if a general register is required by the instruction, you could use either the "r" or "b" constraint. The "b" constraint is of interest, because many instructions treat the designation of 0 specially -- a designation of register 0 does not mean that r0 is used, but instead a literal value of 0. For these instructions, it is wise to use "b" to denote the input operands to prevent the compiler from choosing r0. If the compiler chooses r0, and the instruction takes that to mean a literal 0, the instruction would produce incorrect results.

Listing 8. r0 and its special meaning in the stbx instruction

char res[8]={'a','b','c','d','e','f','g','h'};
char a='y';
int index=7;
asm (" stbx %0,%1,%2 \n"
     : : "r"(a), "r"(index), "r"(res) );

Here, the expected result string is abcdefgy, but if the compiler chose r0 for %1, then the result would incorrectly be ybcdefgh. To prevent this from happening, use "b", as Listing 9 shows.
If, instead, res was tied to r1, then the original intended behavior would have been obtained: res=res+4 Clobbers list Basic clobbers list In cases when registers that are not directly tied to the inputs/outputs are used within the asm block, the user must specify such registers within the clobbers list. The clobbers list is used to notify the compiler that the registers contained within the list can potentially have their values altered. Hence, they should not be used to hold other data other than for the instructions that they are used for. In the example in Listing 11, registers 8 and 7 are added to the clobbers list because they are used in the instructions but are not explicitly tied to any of the input/output operands. Also, condition register field zero is added to the clobbers list for the same reason. Although it is not present in the input/output operands, the mfocrf instruction reads that bit from the condition register and moves the value in register 8. Listing 11. Clobbers list example asm (" addc. %0,%2,%3 \n" " mfocrf 8,0x1 \n" " andi. 7,8,0xF \n" " stw 7,%1 \n" : "=r"(res),"=m"(c_bit) : : "b"(a), "r"(b) : "r0","r7","r8","cr0" ); clobbers list If, instead, the mfocrf instruction read from condition register field 1 (cr1), then that field would need to be added to clobbers list instead. Also, the period [full stop] at the end of the addc. and andi. instructions means their results are compared to zero, and the result of the comparison is stored in condition register field 0. When clobbered registers are omitted from the clobbers list, the results from the asm operations might not be correct. This is because such clobbered registers might be reused to hold intermediate values for other operations. Unless the compiler detects that those registers are clobbered, the intermediate data can be used to perform the programmer's instructions, with inaccurate results. Also, the user's asm instructions may clobber values used by the compiler. 
Exceptions to the clobbers list Nearly all registers can be clobbered, except for those listed in Table 1. Table 1. Registers that cannot be clobbered Memory clobbers Memory clobber implies a fence, and it also impacts how the compiler treats potential data aliases. A memory clobber says that the asm block modifies memory that is not otherwise mentioned in the asm instructions. So, for example, a correct use of memory clobbers would be when using an instruction that clears a cache line. The compiler will assume that virtually any data may be aliased with the memory changed by that instruction. As a result, all required data used after the asm block will be reloaded from memory after the asm completes. This is much more expensive than the simple fence implied by the "volatile" attribute (discussed later). Remember, because the memory clobber says anything might be aliased, everything that is used needs to be reloaded after the asm, regardless of whether it had anything to do with the asm. A memory clobber can be added to the clobbers list by simply using the "memory" word instead of a register name. Branching Basic branching Branching can be tricky with inline asm, this is because you need to know the address of the instruction to which to branch before compile time. Although this is not possible, you can use labels. Using labels, the branch-to address can be designated with a unique identifier that can be used as a target branch address. Within a single source file, labels cannot be repeated within an inline asm block, nor within neighboring asm blocks within the same source. In a given program, each label is unique. There is an exception to this rule, however, and this is if you use relative branching (more on this later). With relative branching, more than one label with the same identifier can be found within the same program and within the same asm block. Note: Labels cannot be used in asm to define macros because of possible namespace clashes. 
In the example in Listing 12, the branch occurs when the LT bit, bit 0, of the condition register is set. If is it not set, then the branch is not taken. Listing 12. Example of branch taken when LT bit of CR0 is set (0x80000000) asm ( " addic. %0,%2,%4 \n" " bc 0xC,0,here \n" " there: add %1,%2,%3 \n" " here: mul %0,%2,%3 \n" : "=r"(res),"=r"(res2) : "r" (a),"r"(b),"r"(c) : "cr0" ); Likewise, a branch would occur if the GT bit (bit 1) of the condition register is set, as in the code in Listing 13. Listing 13. Example of branch taken when GT bit of CR0 is set (0x40000000) asm ( " addic. %0,%2,%4 \n" " bc 0xC,1,here \n" " there: add %1,%2,%3 \n" " here: mul %0,%2,%3 \n" : "=r"(res),"=r"(res2) : "r" (a),"r"(b),"r"(c) : "cr0" ); With inline asm, it is perfectly legal to branch within the same asm block; however, it is not possible to branch between different asm blocks, even if they are contained within the same source. Relative branching As discussed earlier, relative branching allows you to reuse the name of a label more than once within the same program. It is predominantly used, however, to dictate the position of the target address relative to the branch instruction. These are examples of the relative branch codes that can be used: - F -forward - B -backward Note: That they must be suffixed to numeric labels to be syntactically correct. In this example (Listing 14), notice that the target address is referenced as "Hereb". In this case, we use the label of the target address appended with a suffix that dictates where this label is located relative to the branch instruction itself. The label "Here" is located before the branch instruction, hence the use of the "b" suffix in "Hereb." Listing 14. Needs caption asm ( " 10: lwarx %0,0,%2 \n" " cmpwi %0,0 \n" " bne- 20f \n" " ori %0,%0,1 \n" " stwcx. 
%0,0,%2 \n" " bne- 10b \n" " sync \n" " ori %1,%1,1 \n" " 20: \n" :) The condition register The condition register is used to capture information on results of certain instructions. For non-floating point instructions with period (.) suffixes that set the CR, the result of the operation is compared to zero. - If the result is greater than zero, then bit 1 of the CR field is set (0x4). - If it is less than zero, then bit 0 is set (0x8). - If the result is equal to zero, then bit 2 is set (0x2). For all compare instructions, the two values are compared, and any CR field can be set (not just CR0). Table 2 lists the bits and their corresponding meanings (there are eight such sets of 4 bits in the condition register, called "cr0, cr1, cr2 … cr7"). Table 2. Bits of a CR field and the meanings of different settings Note: For floating point instructions with a period suffix, CR1 is set to the upper 4 bits of the FPSCR. Blocking the Volatile attribute Making an inline asm block "volatile" as in this example, ensures that, as it optimizes, the compiler does not move any instructions above or below the block of asm statements. asm volatile(" addic. %0,%1,%2\n" : "=r"(res): "=r"(a),"r"(a)) This can be particularly important in cases when the code is accessing shared memory. This will be illustrated in the next section on multithreaded locking.\ Multithreaded locking One of the most common uses of inline asm is in writing short segments of instructions to manage multithreaded locks. Because of the loose memory model on the POWER architecture, constructing such locks requires careful use of a pair of instructions: - One instruction that loads the lock word and creates a "reservation" - Another that updates the lock word if the reservation hasn't been lost in the interim Note: If the reservation has been lost, a loop can be used to retry repeatedly. 
Listing 15 shows a basic inline function that attempts to acquire a lock (there are several problems with this code, which we discuss after these examples). Listing 15. Example of Acquire lock function coded in asm; } Listing 16 is an example of how this inline function could be used. Listing 16. Example of how the acquireLock function can be used if (acquireLock(lockWord)){ //begin to use the shared region temp = x + 1; . . . } Because the function is inline, the resulting code won't have an actual call in it. Instead, it will precede the use of the shared region x with the instructions to acquire the lock. The first problem to notice with this code is the lack of a synchronization instruction. One of the key performance enhancements enabled by the loose memory model of the POWER architecture is the ability of the machine to reorder loads and stores to make more efficient use of internal pipelines. However, there are times when the programmer needs to curtail this reordering to some degree to properly access shared storage. In the case of a lock, you would not want a load of data from the shared region ("x" in the case above) to be reordered so that it occurs before the lock on the region is acquired. For this reason, a synchronization instruction should be inserted to tell the machine to limit reordering in this case. The sync instruction is often used for this purpose, but there are others available, as described in the POWER ISA (see Resources). In the code example in Listing 17, we inserted sync instruction to prevent reordering of loads of "x" (this is called an "import barrier"): Listing 17. Sync example " sync \n" //import barrier "; } In that asm block, the sync will prevent any subsequent loads from occurring until after it is known which way the preceding branch went. That way the variable x will not be loaded unless the branch was not taken and the acquireLock returns true. So, are we set now? Unfortunately not. 
We still have to worry what the compiler might do. Modern optimizing compilers can be very aggressive in moving code around -- and even removing it completely -- if it appears that the changes might make the program run faster without changing the semantics of the code. However, compilers typically aren't aware of the complexities involved with accessing shared memory. For example, a compiler might move the statement temp = x + 1; to a place higher in the program if it determines that the result would be scheduled more efficiently (and it assumes that the "if" is usually taken). Of course, that would be disastrous from the viewpoint of accessing shared data. To prevent the movement of any loads (or any instructions at all) from below the inline asm to a location above it, you can use the keyword "volatile" (also known as the volatile attribute) to modify the asm block, as Listing 18 shows. Listing 18. Volatile keyword example inline bool acquireLock(int *lock){ bool returnvalue = false; int lockval; asm volatile ( "0: lwarx %0,0,%2 \n" //load lock and reserve . . . "1: \n" //didn't get lock, return false : "+r" (lockval), "+r" (returnvalue) : "r"(lock) //parameter lock is an address : "cr0" ); //cmpwi, stwcx both clobber cr0 return returnvalue; } When you do this, an internal fence is placed before and after the asm block that prevents instructions from being moved past it. And remember that this asm block is inlined, so it will prevent the access to x from being moved above the asm-implemented lock. Memory clobbers in multithreaded locking The discussion of multithreaded locking would not be complete without a mention of memory clobbers. The keyword memory is often added to the clobber list in such situations, although it is not always clear why it would be needed. The use of memory in the clobbers list means that memory is altered unpredictably by the asm block. However, memory modifications in the locking example given are quite predictable. 
Although the variable lock is a pointer (that points to a lock location), that isn't any more unpredictable that the expression "*lock" in a C program. In that case, a well-behaved compiler would likely associate the expression "*lock" with all variables of the appropriate type, and so would correctly reload any affected variables after the pointer was used for modifying data. Nonetheless, the use of memory clobbers appears to be a pervasive practice, which is probably driven by an abundance of caution when dealing with shared regions. Programmers should be aware, though, of the performance penalties involved and of alternative approaches. When an inline asm includes "memory" in the clobbers list, it means that any variable in the program might have been modified by the asm, so it must be reloaded before it is used. This requirement can pretty much put a sledgehammer to optimization efforts by the compiler. A potentially lighter-weight approach would be to make the shared region volatile (in addition to the asm block itself). Making a variable volatile means its value must be reloaded before it is used in any given expression. If the shared region in question is a data structure, such as a list or queue, this will ensure that the updated structure is reloaded after the lock is acquired. However, all of the non-shared data accesses can enjoy the full complement of compiler optimizations. Tip: If the shared data structure is accessed by a pointer (say *p), be sure to declare the pointer so that you ndicate that it's the object pointed to that is volatile, not the pointer itself. For example, this declares that the list pointed to by p is volatile: volatile list *p Acknowledgments Thank you Ian McIntosh, Christopher Lapkowski, Jim McInnes, and Jae Broadhurst. You've each played an important role in publishing this article. Resources Learn - For alternatives to the sync instruction, see the IBM Power ISA (Instruction Set Architecture) PDF. 
- Get answers and get involved in the C/C++ community in Rational Cafés.
- Join the Rational software forums to ask questions and participate in discussions.
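To make the tradeoff discussed earlier concrete — a full "memory" clobber versus a volatile-qualified pointee — here is a small C sketch. All names below are invented for illustration and are not taken from the article; the barrier is the standard GCC empty-asm idiom.

```c
#include <assert.h>

/* Shared data that some other thread could also touch. */
static int shared_counter = 0;

/* A full "memory" clobber: an empty asm that tells GCC any memory may
   have changed, so every cached variable must be reloaded afterwards.
   This is the heavyweight, sledgehammer option. */
static inline void compiler_barrier(void) {
    __asm__ __volatile__("" ::: "memory");
}

int bump(void) {
    shared_counter = shared_counter + 1;
    compiler_barrier();  /* no load/store may be cached across this point */
    return shared_counter;
}

/* The lighter-weight alternative: access the shared object through a
   pointer whose *pointee* is volatile (note the qualifier placement),
   so only these accesses are reloaded and the rest of the code keeps
   full compiler optimization. */
static volatile int *shared_view = &shared_counter;

int read_shared(void) {
    return *shared_view;  /* reloaded from memory on every use */
}
```

In a real lock-protected region, compiler_barrier() would sit inside the lock acquire/release (as the asm volatile block does in Listing 18), while the volatile-pointee approach would be applied only to the shared structure itself.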
http://www.ibm.com/developerworks/rational/library/inline-assembly-C-Cpp-guide/index.html
Extracting Replies to a Tweet (Python)

Yesterday on Twitter, Patrick O'Shaughnessy asked people to name a product or service that they would be willing to pay more for than they currently do. As of this morning there are about 250 replies–some are silly, but some are genuine. Patrick then asked if anyone could extract and parse the responses. I volunteered to help put the usernames and text of the responses into a CSV. The Python script that I wrote for this is short enough to embed here (also available as a gist):

import csv
import tweepy

# get credentials at developer.twitter.com
auth = tweepy.OAuthHandler('API Key', 'API Secret')
auth.set_access_token('Access Token', 'Access Token Secret')
api = tweepy.API(auth)

# update these for whatever tweet you want to process replies to
name = 'patrick_oshag'
tweet_id = '1101551802930077696'

replies = []
for tweet in tweepy.Cursor(api.search, q='to:'+name, result_type='recent', timeout=999999).items(1000):
    if hasattr(tweet, 'in_reply_to_status_id_str'):
        if (tweet.in_reply_to_status_id_str == tweet_id):
            replies.append(tweet)

with open('replies_clean.csv', 'wb') as f:
    csv_writer = csv.DictWriter(f, fieldnames=('user', 'text'))
    csv_writer.writeheader()
    for tweet in replies:
        row = {'user': tweet.user.screen_name, 'text': tweet.text.encode('ascii', 'ignore').replace('\n', ' ')}
        csv_writer.writerow(row)

This is a quick-and-dirty script that you can update for whichever tweet you are interested in. For this particular conversation, the data is available in a Google Sheet here.
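Note that the script above is Python 2 (it opens the CSV in 'wb' mode). If you want the CSV-writing step in Python 3, a sketch might look like this; write_replies and the sample dicts are invented for illustration, standing in for the tweepy Status objects:

```python
import csv

def write_replies(path, replies):
    # Python 3: text mode with newline='' instead of Python 2's binary 'wb'.
    with open(path, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=('user', 'text'))
        writer.writeheader()
        for tweet in replies:
            writer.writerow({
                'user': tweet['user'],
                # flatten embedded newlines, as the original script does
                'text': tweet['text'].replace('\n', ' '),
            })

write_replies('replies_clean.csv', [
    {'user': 'someone', 'text': 'an example\nreply'},
])
```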
https://mattdickenson.com/2019/03/02/extract-replies-to-tweet/
weird CopyOnWriteArraySet error

From: "Daisy" <jeffrdrew@gmail.com>
Newsgroups: comp.lang.java.programmer
Date: 28 Oct 2006 12:19:52 -0700
Message-ID: <1162063192.164639.159530@k70g2000cwa.googlegroups.com>

I'm getting a java.util.NoSuchElementException at high loads using java.util.concurrent.CopyOnWriteArraySet. I have one guess why and would like to hear if it makes sense. Could this error occur because two threads call the same instance? For example, thread A executes i.hasNext() which returns true. Then thread B runs and happens to start executing at i.next(). When thread A runs again, it will execute the i.next() but B has already advanced the iterator to the end of the list. I'm using CopyOnWriteArraySet to avoid synchronizing. Do I have to sync to avoid this issue? Thanks for the opinions!

public class DistributionSet extends CopyOnWriteArraySet {
    private Iterator i;

    public void enqueue(Object message) {
        for (i = this.iterator(); i.hasNext(); ) {
            // .next() call is throwing an error - why? either:
            // copy hasn't completed or .hasNext() has a different count,
            // or some listener was removed
            EventListener listener = (EventListener) i.next();
            listener.eventObserved(message);
        }
    }

    public boolean add(EventListener consumer) {
        super.add(consumer);
        return true;
    }
}
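The shared-iterator guess above is exactly the problem: the field `i` is reassigned and advanced by every concurrent caller. The race goes away if the iterator is a local variable, so each call to enqueue() walks its own immutable snapshot of the set. A minimal sketch (the EventListener interface here is a stand-in for the poster's type):

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArraySet;

// Stand-in for the poster's listener type.
interface EventListener {
    void eventObserved(Object message);
}

// Same class, but the Iterator is local to enqueue(), so concurrent
// callers can no longer advance each other's iteration. No extra
// synchronization is needed: each iterator sees a snapshot.
class DistributionSet extends CopyOnWriteArraySet<EventListener> {
    public void enqueue(Object message) {
        for (Iterator<EventListener> it = iterator(); it.hasNext(); ) {
            it.next().eventObserved(message);
        }
    }
}
```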
http://preciseinfo.org/Convert/Articles_Java/Set_Code/Java-Set-Code-061028221952.html
SVG 2 DOM

Contents

- 1 Background
- 2 SVG 2.0 DOM
- 3 Ideas
  - 3.1 Array getters and setters
  - 3.2 Global ECMAScript constructors
  - 3.3 Simple SVG API
  - 3.4 valueOf
  - 3.5 Getter and setter for groups of attributes
  - 3.6 Get rid of the need to explicitly specify a new element's namespace
  - 3.7 Constructors for Elements
  - 3.8 innerHTML type facilities
  - 3.9 Construct DOM trees from JSON
  - 3.10 HTML canvas API
  - 3.11 Simpler SVGAnimatedLength access
- 4 Other APIs

Background

It has been recognized for a long time that there are serious issues with the existing SVG DOM interfaces. The SVG WG will be taking a hard look at these issues to see what can be done to resolve or improve the situation in SVG 2.0. This page documents the known issues and acts as a collection point for DOM ideas and proposals for SVG 2.0.

SVG 1.1 DOM Issues

Performance issues

One of the things that's more attractive about Canvas than SVG is that immediate-mode graphics typically render much quicker than retained-mode graphics. It is evident that there exists a requirement for speed as well as interactivity. Where the SVG DOM seems to fall short is when SVG needs to be built and changed dynamically. As per the previous topic, issues with implementing the SVG 1.1 DOM API have been raised by browser vendors. One such example is, in SVG:

var c = document.createElementNS("http://www.w3.org/2000/svg", "circle");
c.cx.baseVal.value = 20;
c.cy.baseVal.value = 20;
c.r.baseVal.value = 20;
c.style.fill = "lime";
svgContainer.appendChild(c);

In the above SVG example, setting the three circle attributes via the SVG API requires 9 round trips (6 getters and 3 setters) from JavaScript to browser and back. Additionally, it requires creation of at least 6 temporary objects. However, for Canvas:

ctx.fillStyle = "lime";
ctx.beginPath();
ctx.arc(20, 20, 20, 0, Math.PI*2, true);
ctx.closePath();
ctx.fill();

The above Canvas example does appear to be less verbose.
Both APIs (or a similar Canvas one) should be implemented such that they construct the same DOM to test the speed difference in changing DOM object construction. One suggestion is to have an interface that allows the following:

c.cx = 20;
c.cy = 20;
c.r = 20;

Notes:

- There are hacks that exist to get around the verbosity problem.
- The SVG Working Group has a proposal to simplify these SVG DOM accessors in SVG 2.0.

Summary of links

Making Improvements

After discussing DOM improvements in an SVG telephone conference the SVG Working Group received feedback from the community on how the SVG 1.1 DOM could be improved in SVG 2.0.

- birtles 2012-03-26: An API to get the points as an array of floats would be really useful (and is a fairly common feature for such APIs). The operation would basically be similar to normalizedPathSegList except straight line segments and close path segments would also be converted to cubic beziers. The return value would be an array of subpaths, each being an array of floats. A similar API to create / update a path from such an array would also be needed. This allows complex path operations to be performed on arbitrary paths efficiently (rather than testing the type of each segment).
- Whatever simpler API is devised, it should be serialisable (i.e. convertible into a DOM upon request). Both E4X and Web DOM might be worth studying.
- Performance measurements of the DOM as well as other parts of SVG (SMIL, use, non-native text layout) should be taken to determine the overhead of each. However, it is expected that:
  - a large portion of the overhead is bookkeeping and DOM management,
  - SMIL can be slow, and this is largely due to the bookkeeping it has to do,
  - complex text layout can be slower than aligned text, but not by much.

General Applicability

Obviously, any new methods that improve SVG could be used for non-graphical content as well, including other languages like HTML, MathML, etc.
If other Working Groups are interested in the ideas here, it may be advisable to develop this as part of a joint task force with the WebApps WG.

Summary of links

SVG 2.0 DOM

The SVG Working Group is looking at improving the DOM for SVG 2.0. Suggestions on how to carry out the improvements are:

- Leave the majority of the SVG 1.1 DOM intact. Changes would include rearranging some stuff, clarifying loose wording, and dropping unimplemented features.
- Complete rewrite: develop a new DOM for SVG 2.0.

Clearly there are advantages and disadvantages of each approach. Given the above information, there are a number of questions:

- What is the best way to improve the DOM API for SVG 2.0?
- What features should be kept?
- What features should be dropped?
- Should any new features be added? (note: use cases and requirements for suggesting new features is a must)

Most likely, the best solution would be somewhere between adherence to legacy and a complete rewrite.

Ideas

Here are a collection of random ideas. Split them out into a separate page (linked to from here) if they start to become more concrete proposals.

Array getters and setters

The SVG*List interfaces should all support the normal array syntax used in other DOM specs, that is: list.length, list[i]. (This has already been implemented by Opera.)

Global ECMAScript constructors

See the Global ECMAScript Constructors proposal.

Simple SVG API

Many of the ideas on this page have been expanded and implemented as a simple prototype in the Simple SVG API proposal.

valueOf

Another ECMAScript-only idea. See Issue 2044. This would work by specifying a valueOf function on relevant prototype objects such as SVGAnimatedLength, or by specifying a value for these objects' DefaultValue internal property (the internal DefaultValue method in ECMA-262 by default calls valueOf on the object).
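The coercions this relies on can be sketched on a plain stand-in object (FakeAnimatedLength below is invented for illustration; it is not a real or proposed SVG DOM interface):

```javascript
// Giving the prototype a valueOf lets numeric operators unwrap baseVal.value.
function FakeAnimatedLength(value) {
  this.baseVal = { value: value };
}
FakeAnimatedLength.prototype.valueOf = function () {
  return this.baseVal.value;
};

var x = new FakeAnimatedLength(10);
var y = new FakeAnimatedLength(10);

console.log(x * 2);    // 20 -- ToNumber() invokes valueOf
console.log(x == y);   // false -- == on two objects compares identity
console.log(x == +y);  // true -- forcing one side to a number first works
```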
The latter method would not work for SVGAnimatedBoolean objects though, since the ECMAScript ToBoolean() operator doesn't invoke DefaultValue like ToNumber() does. Also, unary ! won't invoke valueOf on an object (it just evaluates to true for all objects). Even more troublesome would be == and !=. If you had rect.x and rect.y both be SVGAnimatedLengths whose values were 10, then rect.x == rect.y will evaluate to false, since == will check for object identity (jwatt: is that correct? isn't that what === is for? Hmm, integrate in once we've figured out what that means for us). You'd need to do rect.x == +rect.y or something instead. The advantage of this idea is that it could work without breaking the existing baseVal stuff though, and that's important as there's likely too much content out there that relies on that now. (jwatt: is that true? are there no circumstances under which the existing DOM stuff could be broken?)

jwatt: heycam, it would be great if you could expand on the details here of how this would work, and how and where it would break down.

Getter and setter for groups of attributes

In ECMAScript, it would be very nice to be able to do:

element.setAttributes( { x:10, y:20, width:"100%", height:"2em", fill:"lime" } );

Likewise, it would be nice to be able to do something similar with getters. For more, see Simple SVG API.

Get rid of the need to explicitly specify a new element's namespace

It would be good to have a createElement method on the Element interface that would create an element in the same namespace as the element on which it's called (and would not require you to get a Document object). For more, see Simple SVG API.

Constructors for Elements

One option for making the SVG DOM faster and more convenient for authors would be element constructors, either on a per-element basis, or a generic constructor.
These can already be simulated in JavaScript libraries as convenience methods, but the chief benefit, making fewer DOM calls and increasing the speed of script, would be lost.

Note that all of the following constructors assume an implicit namespace mechanism (as proposed above), where the element created takes on the same namespace as the element it is called on. For explicitly creating an element in a namespace other than that of the parent document or element (for example, creating an HTML p element inside an SVG context), there could be namespace-aware versions of all of these methods.

For namespaced attributes, the traditional namespace-aware setAttributeNS() could be used, or some syntax could be devised for the constructor. The proposal below suggests that namespaced attributes should pass an array as the value, with the first item being the attribute namespace, and the second item the attribute value. Note also that for all object parameters, the attribute value names must be quoted if they contain a dash ("-"); other attribute value names may optionally be quoted.

Element-Specific Constructors

Having well-defined and specific constructors for individual elements might be better for languages such as Java, and might appeal more to people just learning SVG, since it could give more detailed error messages, but would increase implementation size and time.
var r = document.createRectangle( string id, length x, length y, length width, length height, length rx, length ry, object style );
document.documentElement.appendChild( r );

var c = document.createCircle( string id, length cx, length cy, length r, object style );
document.documentElement.appendChild( c );

var p = document.createPath( string id, string d, object style );
document.documentElement.appendChild( p );

var lg = document.createLinearGradient( string id, length x1, length y1, length x2, length y2 );
lg.appendChild( document.createColorStop( string id, length offset, string stop-color, length stop-opacity ) );
document.documentElement.appendChild( lg );

Generic Constructors

A generic constructor would offer less specific functionality, but would be more extensible, and easier to remember. It would require authors to use more explicit object notation than the element-specific constructors, but would also allow them to omit unwanted values, which would default to the lacuna value.

Note: This proposal is expanded in more detail and with more features, with a simple prototype, at Simple SVG API.

Insertion Constructors

Both proposals above assume that the element will be returned as an object, which is then appended to the DOM in the usual way. However, creating the element in the same single step as it is constructed would be even easier for authors in most instances. This would require that the constructor method is available on the Element object, so that the newly minted element is appended to the DOM in the right location. A pointer to the new element would still be returned by the method, but would not need to be added separately.

Note: This proposal is expanded in more detail and with more features, with a simple prototype, at Simple SVG API.

Insertion Constructors for Specific Elements

A less general form of the above would be methods that insert particular graphical elements as children of a given element.
element.drawRect(10, 20, '100%', '2em', { fill: 'cornflowerblue' });

element.drawCircle('50%', 100, 30, { stroke: 'red', 'stroke-width': 5, fill: 'none' });

element.drawPath("M150,150 L200,100 H250 V170 Q350,90 375,150 T400,150 C500,100 575,300 560,150 S650,160 550,300 Z M500,200 A25,35 -80 1,1 450,220 Z", { stroke: "blue", "stroke-width": "1", fill: "yellow", "fill-rule": "evenodd" });

Specific Commands for Paths

Canvas has specific methods for its path commands, which would often be easier than composing the string for a path element's @d attribute. The Canvas commands are roughly equivalent to SVG, so it should be simple to add these to the path element interface:

- ctx.beginPath() // would use var el = element.drawPath() instead
- el.moveTo(x, y)
- el.lineTo(x, y)
- el.arc(x, y, radius, startAngle, endAngle, anticlockwise) // this is different than SVG's path arc command, could include both
- el.quadraticCurveTo(x1, y1, x2, y2) // the first pair of points are the control point, the last pair of points is the end point
- el.bezierCurveTo(x1, y1, x2, y2, x3, y3) // the first two pairs of points are the control points, the last pair of points is the end point

Common Graphical API

These last two approaches are roughly similar to the Canvas API, but any of these approaches could be specified to return either a DOM or draw directly to a buffer and return a reference to that, depending on whether it was called in a document/vector context ("document", "<svg>").

Note: This proposal is expanded in more detail and with more features, with a simple prototype, at Simple SVG API.

innerHTML type facilities

It would be good if XML had an innerHTML-like feature. Writing markup in an ECMAScript string can be a bit of a pain, but it can also simplify things a lot, and to get better than that for the types of scenario in which it's useful probably means E4X-like capabilities.
Calling this facility innerHTML would make it seem like the markup should be parsed as being in the XHTML namespace, so maybe innerMarkup or insertCode would be a better name, or, for symmetry with textContent, maybe markupContent.

Construct DOM trees from JSON

One proposal for making the creation of SVG content using ECMAScript easier is to have a method that will create an SVG DOM tree from JSON. This does not seem very promising however. See JSON Constructors for more.

Simpler SVGAnimatedLength access

One idea floated during the Auckland 2011 F2F was to give SVGAnimatedLength and SVGLength objects properties that expose the length's value in particular units, like CSS OM is going to. You would be able to do:

myCircle.cx.px = 100;
myCircle.cy.em *= 2;

These would be shorthands for manipulating the SVGAnimatedLength's base value. We would provide these properties on the actual SVGLength objects, too:

var animatedCircumference = 2 * Math.PI * myCircle.animVal.r.px;

This could be extended to other SVGAnimatedBlah interfaces.

Other APIs

E4X

E4X has a lot of attractive features, but it has proved to be a very poorly designed specification, and there is no bridge between E4X and the DOM in implementations. (See Brendan and Jeff's comments.) While you can create XML using E4X, there is no way to insert that XML into an existing DOM tree. (Why is that??) Maybe E4X can be used for ideas though.

HTML canvas wrapper APIs

A number of APIs have been built on top of the canvas API (which is quite low level) that could be investigated (Processing.js etc). (These also have animation APIs that might be worth looking at too.)

"simpler DOM" efforts

Script Libraries

We should look at successful script libraries like jQuery, RaphaelJS, dojo.gfx, prototype, YUI and others for inspiration on functionality and syntax. We should investigate author feedback on the benefits and downsides to those languages, and if possible, talk to the folks behind them for lessons learned.
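Returning to the "Simpler SVGAnimatedLength access" idea above: the proposed unit properties can be sketched in plain ECMAScript with an accessor property on a stand-in object (UnitLength is invented for illustration; a real implementation would live on SVGLength/SVGAnimatedLength):

```javascript
function UnitLength(px) {
  this.baseVal = { value: px };
}
// The .px accessor reads and writes the underlying base value, so
// compound assignments like *= work as expected.
Object.defineProperty(UnitLength.prototype, 'px', {
  get: function () { return this.baseVal.value; },
  set: function (v) { this.baseVal.value = v; }
});

var cx = new UnitLength(50);
cx.px = 100;         // like myCircle.cx.px = 100;
cx.px *= 2;          // reads then writes the base value
console.log(cx.px);  // 200
```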
http://www.w3.org/Graphics/SVG/WG/wiki/SVG_2_DOM
I am using Aerospike server version 5.5.0.7 and run it over k8s. I am using Java client version 4.1.2.

I am trying to write to the Aerospike cache with a TTL. When I tried to write to the Aerospike server without a TTL configured on the client side it worked, but when I added an expiration to the write policy (120 seconds) and changed the namespace configuration on the server to be as follows:

namespace test {
    replication-factor 2
    memory-size 1G
    nsup-period 600
    default-ttl 120
    ..
}

I sometimes managed to write, but most of the time it failed with the 22 error. What am I doing wrong?
https://discuss.aerospike.com/t/aerospike-error-22/8563
lackluster and in and of itself is quite low-level. Finagle defines a service as a function that transforms a request into a response, and composes services with filters that manipulate requests/responses themselves. It's a clean abstraction, given that this is basically what all web service frameworks do.

Thrift

Finagle by itself isn't super opinionated. It gives you building blocks to build services (service discovery, circuit breaking, monitoring/metrics, varying protocols, etc) but doesn't give you much else. Our first set of services built on Finagle used Thrift over HTTP. Thrift, similar to protobuf, is an intermediate declarative language that creates RPC style services. For example:

namespace java tutorial
namespace py tutorial

typedef i32 int // We can use typedef to get pretty names for the types we are using

service MultiplicationService {
    int multiply(1:int n1, 2:int n2),
}

Will create an RPC service called MultiplicationService that takes 2 parameters. Our implementation at Curalate hosted Thrift over HTTP (serializing Thrift as JSON) since all our services are web based behind ELB's in AWS. We have a lot of services at Curalate that use Thrift, but we've found a few shortcomings:

Model Reuse

Thrift forces you to use primitives when defining service contracts, which makes it difficult to share lightweight models (with potentially useful utilities) to consumers. We've ended up doing a lot of mapping between generated Thrift types and shared model types. Curalate's backend services are all written in Scala, so we don't have the same issues that a company like Facebook (who invented Thrift) may have with varying languages needing easy access to RPC.

Requiring a client

Many times you want to be able to interact with a service without needing access to a client. Needing a client has made developers get used to cloning service repositories, building the entire service, then entering a Scala REPL in order to interact with a service.
As our service surface area expands, it's not always feasible to expect one developer to build another developer's service (conflicting Java versions, missing SBT/Maven dependencies or settings, etc). The client requirement has led to services taking heavyweight dependencies on other services and leaking dependencies. While Thrift doesn't force you to do this, this has been a side effect of it taking extra love and care to generate a Thrift client properly, either by distributing Thrift files in a jar or otherwise.

Over the wire inspection

With Thrift-over-HTTP, inspecting requests is difficult. This is due to the fact that these services use Thrift serialization, which, unlike JSON, isn't human-readable. Because Thrift over HTTP is all POSTs to /, tracing access and investigating ELB logs becomes a jumbled mess of trying to correlate times and IP's to other parts of our logging infrastructure. The POST issue is frustrating, because it's impossible for us to do any semantic smart caching, such as being able to insert caches at the serving layer for retrieval calls. In a pure HTTP world, we could insert a cache for heavily used GETs given a GET is idempotent.

RPC API design

Regardless of Thrift, RPC encourages poorly unified API's with lots of specific endpoints that don't always jive. We have many services that have method topologies that are poorly composable. A well designed API, and cluster of API's, should gently guide you to getting the data you need. In an ideal world, if you get an ID in a payload response for a data object, there should be an endpoint to get more information about that ID. However, in the RPC world we end up with a batch call here, a specific RPC call there, sometimes requiring stitching several calls to get data that should have been a simple domain level call.

Internal vs External service writing

We have a lot of public REST API's and they are written using the Lift framework (some of our oldest code).
Developers moving from internal to external services have to shift paradigms and move from writing REST with JSON to RPC with Thrift. Overall Thrift is a great piece of technology, but after using it for a year we found that it's not necessarily for us. All of these things have prompted a shift to writing REST style services.

Finatra

Finatra is an HTTP API framework built on top of Finagle. Because it's still Finagle, we haven't lost any of our operational knowledge of the underlying framework, but instead we can now write lightweight HTTP API's with JSON. With Finatra, all our new services have Swagger automatically enabled so API exploration is simple. And since it's just plain JSON, it's now possible to use Postman to debug and inspect APIs (as well as viewing requests in Charles or other proxies). With REST we can still distribute lightweight clients, or more importantly, if there are dependency conflicts a service consumer can very quickly roll an HTTP client to a service. Our ELB logs now make sense and our new API's are unified in their verbiage (GET vs POST vs PUT vs DELETE), and if we want to write RPC for a particular service we still can. There are a few other things we like about Finatra.
http://onoffswitch.net/thrift-finatra/
CC-MAIN-2018-17
refinedweb
971
57.3
go / gofrontend / 2c390ba951e83b547f6387cc9e19436c085b3775 / . / libgo / go / cmd / cgo / doc.go blob: ca18c45d9d965d4d6383af8889a35ef2afd2ccf8 [ file ] [ log ] [ blame ] // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. /* Cgo enables the creation of Go packages that call C code. Using cgo with the go command, , .S or .sx. Go references to C. C references to Go GoInt. Passing pointers. Special cases 3. The EGLDisplay and EGLConfig types from the EGL API.. The EGLDisplay case was introduced in Go 1.12. Use the egl rewrite to auto-update code from Go 1.11 and earlier: go tool fix -r egl <pkg> The EGLConfig case was introduced in Go 1.15. Use the eglconf rewrite to auto-update code from Go 1.14 and earlier: go tool fix -r eglconf <pkg> Using cgo directly */ package main /* Implementation details. Cgo provides a way for Go programs to call C code linked into the same address space. This comment explains the operation of cgo. Cgo reads a set of Go source files and looks for statements saying import "C". If the import has a doc comment, that comment is taken as literal C code to be used as a preamble to any C code generated by cgo. A typical preamble #includes necessary definitions: // #include <stdio.h> import "C" For more details about the usage of cgo, see the documentation comment at the top of this file. Understanding C Cgo scans the Go source files that import "C" for uses of that package, such as C.puts. It collects all such identifiers. The next step is to determine each kind of name. In C.xxx the xxx might refer to a type, a function, a constant, or a global variable. Cgo must decide which. The obvious thing for cgo to do is to process the preamble, expanding #includes and processing the corresponding C code. 
That would require a full C parser and type checker that was also aware of any extensions known to the system compiler (for example, all the GNU C extensions) as well as the system-specific header locations and system-specific pre-#defined macros. This is certainly possible to do, but it is an enormous amount of work. Cgo takes a different approach. It determines the meaning of C identifiers not by parsing C code but by feeding carefully constructed programs into the system C compiler and interpreting the generated error messages, debug information, and object files. In practice, parsing these is significantly less work and more robust than parsing C source. Cgo first invokes gcc -E -dM on the preamble, in order to find out about simple #defines for constants and the like. These are recorded for later use. Next, cgo needs to identify the kinds for each identifier. For the identifiers C.foo, cgo generates this C program: <preamble> #line 1 "not-declared" void __cgo_f_1_1(void) { __typeof__(foo) *__cgo_undefined__1; } #line 1 "not-type" void __cgo_f_1_2(void) { foo *__cgo_undefined__2; } #line 1 "not-int-const" void __cgo_f_1_3(void) { enum { __cgo_undefined__3 = (foo)*1 }; } #line 1 "not-num-const" void __cgo_f_1_4(void) { static const double __cgo_undefined__4 = (foo); } #line 1 "not-str-lit" void __cgo_f_1_5(void) { static const char __cgo_undefined__5[] = (foo); } This program will not compile, but cgo can use the presence or absence of an error message on a given line to deduce the information it needs. The program is syntactically valid regardless of whether each name is a type or an ordinary identifier, so there will be no syntax errors that might stop parsing early. An error on not-declared:1 indicates that foo is undeclared. An error on not-type:1 indicates that foo is not a type (if declared at all, it is an identifier). An error on not-int-const:1 indicates that foo is not an integer constant. 
An error on not-num-const:1 indicates that foo is not a number constant.
An error on not-str-lit:1 indicates that foo is not a string literal.
An error on not-signed-int-const:1 indicates that foo is not a signed integer constant.

The line number specifies the name involved. In the example, 1 is foo.

Next, cgo must learn the details of each type, variable, function, or constant. It can do this by reading object files. If cgo has decided that t1 is a type, v2 and v3 are variables or functions, and i4, i5 are integer constants, u6 is an unsigned integer constant, and f7 and f8 are float constants, and s9 and s10 are string constants, it generates:

	<preamble>
	__typeof__(t1) *__cgo__1;
	__typeof__(v2) *__cgo__2;
	__typeof__(v3) *__cgo__3;
	__typeof__(i4) *__cgo__4;
	enum { __cgo_enum__4 = i4 };
	__typeof__(i5) *__cgo__5;
	enum { __cgo_enum__5 = i5 };
	__typeof__(u6) *__cgo__6;
	enum { __cgo_enum__6 = u6 };
	__typeof__(f7) *__cgo__7;
	__typeof__(f8) *__cgo__8;
	__typeof__(s9) *__cgo__9;
	__typeof__(s10) *__cgo__10;

	long long __cgodebug_ints[] = {
		0, // t1
		0, // v2
		0, // v3
		i4,
		i5,
		u6,
		0, // f7
		0, // f8
		0, // s9
		0, // s10
		1
	};

	double __cgodebug_floats[] = {
		0, // t1
		0, // v2
		0, // v3
		0, // i4
		0, // i5
		0, // u6
		f7,
		f8,
		0, // s9
		0, // s10
		1
	};

	const char __cgodebug_str__9[] = s9;
	const unsigned long long __cgodebug_strlen__9 = sizeof(s9)-1;
	const char __cgodebug_str__10[] = s10;
	const unsigned long long __cgodebug_strlen__10 = sizeof(s10)-1;

and again invokes the system C compiler, to produce an object file containing debug information. Cgo parses the DWARF debug information for __cgo__N to learn the type of each identifier. (The types also distinguish functions from global variables.) Cgo reads the constant values from the __cgodebug_* from the object file's data segment.

At this point cgo knows the meaning of each C.xxx well enough to start the translation process.
Translating Go

Given the input Go files x.go and y.go, cgo generates these source files:

	x.cgo1.go       # for gc (cmd/compile)
	y.cgo1.go       # for gc
	_cgo_gotypes.go # for gc
	_cgo_import.go  # for gc (if -dynout _cgo_import.go)
	x.cgo2.c        # for gcc
	y.cgo2.c        # for gcc
	_cgo_defun.c    # for gcc (if -gccgo)
	_cgo_export.c   # for gcc
	_cgo_export.h   # for gcc
	_cgo_main.c     # for gcc
	_cgo_flags      # for alternative build tools

The file x.cgo1.go is a copy of x.go with the import "C" removed and references to C.xxx replaced with names like _Cfunc_xxx or _Ctype_xxx. The definitions of those identifiers, written as Go functions, types, or variables, are provided in _cgo_gotypes.go.

Here is a _cgo_gotypes.go containing definitions for needed C types:

	type _Ctype_char int8
	type _Ctype_int int32
	type _Ctype_void [0]byte

The _cgo_gotypes.go file also contains the definitions of the functions. They all have similar bodies that invoke runtime·cgocall to make a switch from the Go runtime world to the system C (GCC-based) world.

For example, here is the definition of _Cfunc_puts:

	//go:cgo_import_static _cgo_be59f0f25121_Cfunc_puts
	//go:linkname __cgofn__cgo_be59f0f25121_Cfunc_puts _cgo_be59f0f25121_Cfunc_puts
	var __cgofn__cgo_be59f0f25121_Cfunc_puts byte
	var _cgo_be59f0f25121_Cfunc_puts = unsafe.Pointer(&__cgofn__cgo_be59f0f25121_Cfunc_puts)

	func _Cfunc_puts(p0 *_Ctype_char) (r1 _Ctype_int) {
		_cgo_runtime_cgocall(_cgo_be59f0f25121_Cfunc_puts, uintptr(unsafe.Pointer(&p0)))
		return
	}

The hexadecimal number is a hash of cgo's input, chosen to be deterministic yet unlikely to collide with other uses.
The actual function _cgo_be59f0f25121_Cfunc_puts is implemented in a C source file compiled by gcc, the file x.cgo2.c:

	void _cgo_be59f0f25121_Cfunc_puts(void *v) {
		struct {
			char* p0;
			int r;
			char __pad12[4];
		} __attribute__((__packed__, __gcc_struct__)) *a = v;
		a->r = puts((void*)a->p0);
	}

It extracts the arguments from the pointer to _Cfunc_puts's argument frame, invokes the system C function (in this case, puts), stores the result in the frame, and returns.

Linking

Once the _cgo_export.c and *.cgo2.c files have been compiled with gcc, they need to be linked into the final binary, along with the libraries they might depend on (in the case of puts, stdio). cmd/link has been extended to understand basic ELF files, but it does not understand ELF in the full complexity that modern C libraries embrace, so it cannot in general generate direct references to the system libraries.

Instead, the build process generates an object file using dynamic linkage to the desired libraries. The main function is provided by _cgo_main.c:

	int main() { return 0; }
	void crosscall2(void(*fn)(void*, int, uintptr_t), void *a, int c, uintptr_t ctxt) { }
	uintptr_t _cgo_wait_runtime_init_done(void) { return 0; }
	void _cgo_release_context(uintptr_t ctxt) { }
	char* _cgo_topofstack(void) { return (char*)0; }
	void _cgo_allocate(void *a, int c) { }
	void _cgo_panic(void *a, int c) { }
	void _cgo_reginit(void) { }

The extra functions here are stubs to satisfy the references in the C code generated for gcc. The build process links this stub, along with _cgo_export.c and *.cgo2.c, into a dynamic executable and then lets cgo examine the executable.
Cgo records the list of shared library references and resolved names and writes them into a new file _cgo_import.go, which looks like:

	//go:cgo_dynamic_linker "/lib64/ld-linux-x86-64.so.2"
	//go:cgo_import_dynamic puts puts#GLIBC_2.2.5 "libc.so.6"
	//go:cgo_import_dynamic __libc_start_main __libc_start_main#GLIBC_2.2.5 "libc.so.6"
	//go:cgo_import_dynamic stdout stdout#GLIBC_2.2.5 "libc.so.6"
	//go:cgo_import_dynamic fflush fflush#GLIBC_2.2.5 "libc.so.6"
	//go:cgo_import_dynamic _ _ "libpthread.so.0"
	//go:cgo_import_dynamic _ _ "libc.so.6"

In the end, the compiled Go package, which will eventually be presented to cmd/link as part of a larger program, contains:

	_go_.o # gc-compiled object for _cgo_gotypes.go, _cgo_import.go, *.cgo1.go
	_all.o # gcc-compiled object for _cgo_export.c, *.cgo2.c

The final program will be a dynamic executable, so that cmd/link can avoid needing to process arbitrary .o files. It only needs to process the .o files generated from C files that cgo writes, and those are much more limited in the ELF or other features that they use.

In essence, the _cgo_import.o file includes the extra linking directives that cmd/link is not sophisticated enough to derive from _all.o on its own. Similarly, the _all.o uses dynamic references to real system object code because cmd/link is not sophisticated enough to process the real code.

The main benefits of this system are that cmd/link remains relatively simple (it does not need to implement a complete ELF and Mach-O linker) and that gcc is not needed after the package is compiled. For example, package net uses cgo for access to name resolution functions provided by libc. Although gcc is needed to compile package net, gcc is not needed to link programs that import package net.

Runtime

When using cgo, Go must not assume that it owns all details of the process. In particular it needs to coordinate with C in the use of threads and thread-local storage.
The runtime package declares a few variables:

	var (
		iscgo             bool
		_cgo_init         unsafe.Pointer
		_cgo_thread_start unsafe.Pointer
	)

Any package using cgo imports "runtime/cgo", which provides initializations for these variables. It sets iscgo to true, _cgo_init to a gcc-compiled function that can be called early during program startup, and _cgo_thread_start to a gcc-compiled function that can be used to create a new thread, in place of the runtime's usual direct system calls.

Internal and External Linking

The text above describes "internal" linking, in which cmd/link parses and links host object files (ELF, Mach-O, PE, and so on) into the final executable itself. Keeping cmd/link simple means we cannot possibly implement the full semantics of the host linker, so the kinds of objects that can be linked directly into the binary are limited (other code can only be used as a dynamic library). On the other hand, when using internal linking, cmd/link can generate Go binaries by itself.

In order to allow linking arbitrary object files without requiring dynamic libraries, cgo supports an "external" linking mode too. In external linking mode, cmd/link does not process any host object files. Instead, it collects all the Go code and writes a single go.o object file containing it. Then it invokes the host linker (usually gcc) to combine the go.o object file and any supporting non-Go code into a final executable. External linking avoids the dynamic library requirement but introduces a requirement that the host linker be present to create such a binary.

Most builds both compile source code and invoke the linker to create a binary. When cgo is involved, the compile step already requires gcc, so it is not problematic for the link step to require gcc too. An important exception is builds using a pre-compiled copy of the standard library.
In particular, package net uses cgo on most systems, and we want to preserve the ability to compile pure Go code that imports net without requiring gcc to be present at link time. (In this case, the dynamic library requirement is less significant, because the only library involved is libc.so, which can usually be assumed present.)

This conflict between functionality and the gcc requirement means we must support both internal and external linking, depending on the circumstances: if net is the only cgo-using package, then internal linking is probably fine, but if other packages are involved, so that there are dependencies on libraries beyond libc, external linking is likely to work better. The compilation of a package records the relevant information to support both linking modes, leaving the decision to be made when linking the final binary.

Linking Directives

In either linking mode, package-specific directives must be passed through to cmd/link. These are communicated by writing //go: directives in a Go source file compiled by gc. The directives are copied into the .o object file and then processed by the linker.

The directives are:

	//go:cgo_import_dynamic <local> [<remote> ["<library>"]]

	In internal linking mode, allow an unresolved reference to
	<local>, assuming it will be resolved by a dynamic library
	symbol. The optional <remote> specifies the symbol's name and
	possibly version in the dynamic library, and the optional
	"<library>" names the specific library where the symbol should
	be found.

	On AIX, the library pattern is slightly different. It must be
	"lib.a/obj.o" with obj.o the member of this library exporting
	this symbol.

	In the <remote>, # or @ can be used to introduce a symbol
	version.

	Examples:
	//go:cgo_import_dynamic puts
	//go:cgo_import_dynamic puts puts#GLIBC_2.2.5
	//go:cgo_import_dynamic puts puts#GLIBC_2.2.5 "libc.so.6"

	A side effect of the cgo_import_dynamic directive with a
	library is to make the final binary depend on that dynamic
	library.
	To get the dependency without importing any specific symbols,
	use _ for local and remote.

	Example:
	//go:cgo_import_dynamic _ _ "libc.so.6"

	For compatibility with current versions of SWIG,
	#pragma dynimport is an alias for //go:cgo_import_dynamic.

	//go:cgo_dynamic_linker "<path>"

	In internal linking mode, use "<path>" as the dynamic linker
	in the final binary. This directive is only needed from one
	package when constructing a binary; by convention it is
	supplied by runtime/cgo.

	Example:
	//go:cgo_dynamic_linker "/lib/ld-linux.so.2"

	//go:cgo_export_dynamic <local> <remote>

	In internal linking mode, put the Go symbol named <local> into
	the program's exported symbol table as <remote>, so that C
	code can refer to it by that name. This mechanism makes it
	possible for C code to call back into Go or to share Go's data.

	For compatibility with current versions of SWIG,
	#pragma dynexport is an alias for //go:cgo_export_dynamic.

	//go:cgo_import_static <local>

	In external linking mode, allow unresolved references to
	<local> in the go.o object file prepared for the host linker,
	under the assumption that <local> will be supplied by the
	other object files that will be linked with go.o.

	Example:
	//go:cgo_import_static puts_wrapper

	//go:cgo_export_static <local> <remote>

	In external linking mode, put the Go symbol named <local> into
	the program's exported symbol table as <remote>, so that C
	code can refer to it by that name. This mechanism makes it
	possible for C code to call back into Go or to share Go's data.

	//go:cgo_ldflag "<arg>"

	In external linking mode, invoke the host linker (usually gcc)
	with "<arg>" as a command-line argument following the .o files.
	Note that the arguments are for "gcc", not "ld".

	Example:
	//go:cgo_ldflag "-lpthread"
	//go:cgo_ldflag "-L/usr/local/sqlite3/lib"

A package compiled with cgo will include directives for both internal and external linking; the linker will select the appropriate subset for the chosen linking mode.
Example

As a simple example, consider a package that uses cgo to call C.sin. The following code will be generated by cgo:

	// compiled by gc

	//go:cgo_ldflag "-lm"

	type _Ctype_double float64

	//go:cgo_import_static _cgo_gcc_Cfunc_sin
	//go:linkname __cgo_gcc_Cfunc_sin _cgo_gcc_Cfunc_sin
	var __cgo_gcc_Cfunc_sin byte
	var _cgo_gcc_Cfunc_sin = unsafe.Pointer(&__cgo_gcc_Cfunc_sin)

	func _Cfunc_sin(p0 _Ctype_double) (r1 _Ctype_double) {
		_cgo_runtime_cgocall(_cgo_gcc_Cfunc_sin, uintptr(unsafe.Pointer(&p0)))
		return
	}

	// compiled by gcc, into foo.cgo2.o

	void
	_cgo_gcc_Cfunc_sin(void *v)
	{
		struct {
			double p0;
			double r;
		} __attribute__((__packed__)) *a = v;
		a->r = sin(a->p0);
	}

What happens at link time depends on whether the final binary is linked using the internal or external mode. If other packages are compiled in "external only" mode, then the final link will be an external one. Otherwise the link will be an internal one. The linking directives are used according to the kind of final link used.

In internal mode, cmd/link itself processes all the host object files, in particular foo.cgo2.o. To do so, it uses the cgo_import_dynamic and cgo_dynamic_linker directives to learn that the otherwise undefined reference to sin in foo.cgo2.o should be rewritten to refer to the symbol sin with version GLIBC_2.2.5 from the dynamic library "libm.so.6", and the binary should request "/lib/ld-linux.so.2" as its runtime dynamic linker.

In external mode, cmd/link does not process any host object files, in particular foo.cgo2.o. It links together the gc-generated object files, along with any other Go code, into a go.o file. While doing that, cmd/link will discover that there is no definition for _cgo_gcc_Cfunc_sin, referred to by the gc-compiled source file. This is okay, because cmd/link also processes the cgo_import_static directive and knows that _cgo_gcc_Cfunc_sin is expected to be supplied by a host object file, so cmd/link does not treat the missing symbol as an error when creating go.o.
Indeed, the definition for _cgo_gcc_Cfunc_sin will be provided to the host linker by foo.cgo2.o, which in turn will need the symbol 'sin'. cmd/link also processes the cgo_ldflag directives, so that it knows that the eventual host link command must include the -lm argument, so that the host linker will be able to find 'sin' in the math library.

cmd/link Command Line Interface

The go command and any other Go-aware build systems invoke cmd/link to link a collection of packages into a single binary. By default, cmd/link will present the same interface it does today:

	cmd/link main.a

produces a file named a.out, even if cmd/link does so by invoking the host linker in external linking mode.

By default, cmd/link will decide the linking mode as follows: if the only packages using cgo are those on a list of known standard library packages (net, os/user, runtime/cgo), cmd/link will use internal linking mode. Otherwise, there are non-standard cgo packages involved, and cmd/link will use external linking mode. The first rule means that a build of the godoc binary, which uses net but no other cgo, can be done without needing gcc available. The second rule means that a build of a cgo-wrapped library like sqlite3 can generate a standalone executable instead of needing to refer to a dynamic library. The specific choice can be overridden using a command line flag: cmd/link -linkmode=internal or cmd/link -linkmode=external.

In an external link, cmd/link will create a temporary directory, write any host object files found in package archives to that directory (renamed to avoid conflicts), write the go.o file to that directory, and invoke the host linker. The default value for the host linker is $CC, split into fields, or else "gcc". The specific host linker command line can be overridden using command line flags: cmd/link -extld=clang -extldflags='-ggdb -O3'.
If any package in a build includes a .cc or other file compiled by the C++ compiler, the go tool will use the -extld option to set the host linker to the C++ compiler.

These defaults mean that Go-aware build systems can ignore the linking changes and keep running plain 'cmd/link' and get reasonable results, but they can also control the linking details if desired.
*/
https://go.googlesource.com/gofrontend/+/2c390ba951e83b547f6387cc9e19436c085b3775/libgo/go/cmd/cgo/doc.go
In this tutorial you will learn how to write to a file line by line.

To write to a file line by line in Java, you can use the newLine() method of the BufferedWriter class; newLine() starts a new line in the output. Here is a simple example that demonstrates how to write to a file line by line. The program appends each string, followed by a line break, to a text file named "fileByLine.txt".

WriteToFileByLine.java

	import java.io.*;

	class WriteToFileByLine {
		public static void main(String args[]) {
			String pLine = "Previous Line text";
			String nLine = "New Line text";
			WriteToFileByLine wtfbl = new WriteToFileByLine();
			wtfbl.writeToFileByLine(pLine);
			wtfbl.writeToFileByLine(nLine);
		}

		public void writeToFileByLine(String str) {
			try {
				File file = new File("fileByLine.txt");
				// true = append mode, so each call adds to the end of the file
				FileWriter fw = new FileWriter(file, true);
				BufferedWriter bw = new BufferedWriter(fw);
				bw.write(str);
				bw.newLine();
				bw.close();
			} catch (Exception e) {
				System.out.println(e);
			}
		}
	}

How to execute this example: compile the program with

	javac WriteToFileByLine.java

and then run it with

	java WriteToFileByLine

Output: when you execute this example, a text file named "fileByLine.txt" is created in the working directory, containing the text written by the program, one string per line.
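To check the result programmatically, you can read the file back line by line with the BufferedReader class. The following self-contained sketch first writes the two lines itself (so it does not depend on the program above having been run) and then prints them back:

```java
import java.io.*;

class ReadFileByLine {
    public static void main(String[] args) throws IOException {
        // Write two lines first so the example stands on its own
        BufferedWriter bw = new BufferedWriter(new FileWriter("fileByLine.txt"));
        bw.write("Previous Line text");
        bw.newLine();
        bw.write("New Line text");
        bw.newLine();
        bw.close();

        // Read the file back one line at a time
        BufferedReader br = new BufferedReader(new FileReader("fileByLine.txt"));
        String line;
        while ((line = br.readLine()) != null) {
            System.out.println(line);
        }
        br.close();
    }
}
```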
http://roseindia.net/java/examples/io/writeToFileByLine.shtml
# Queries in PostgreSQL. Index scan

![image](https://habrastorage.org/r/w1560/webt/yt/-n/2a/yt-n2a_ykg5kkudcqt1i2hdjpr8.png)

In previous articles we discussed [query execution stages](https://postgrespro.com/blog/pgsql/5969262) and [statistics](https://postgrespro.com/blog/pgsql/5969296). Last time, I started on data access methods, namely [Sequential scan](https://postgrespro.com/blog/pgsql/5969403). Today we will cover Index Scan. This article requires a basic understanding of the index method interface. If words like "operator class" and "access method properties" don't ring a bell, check out my [article on indexes](https://postgrespro.com/blog/pgsql/4161264) from a while back for a refresher.

Plain Index Scan
----------------

Indexes return row version IDs (tuple IDs, or TIDs for short), which can be handled in one of two ways. The first one is *Index scan*. Most (but not all) index methods have the INDEX SCAN property and support this approach. The operation is represented in the plan with an Index Scan node:

```
EXPLAIN SELECT * FROM bookings
WHERE book_ref = '9AC0C6' AND total_amount = 48500.00;
```

```
                  QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Index Scan using bookings_pkey on bookings
   (cost=0.43..8.45 rows=1 width=21)
   Index Cond: (book_ref = '9AC0C6'::bpchar)
   Filter: (total_amount = 48500.00)
(4 rows)
```

Here, the access method returns TIDs one at a time. The indexing mechanism receives an ID, reads the relevant table page, fetches the tuple, checks its visibility and, if all is well, returns the required fields. The whole process repeats until the access method runs out of IDs that match the query.

The Index Cond line in the plan shows only the conditions that can be filtered using just the index. Additional conditions that can be checked only against the table are listed under the Filter line.
In other words, index and table scan operations aren't separated into two different nodes, but both execute as a part of the Index Scan node. There is, however, a special node called Tid Scan. It reads tuples from the table using known IDs:

```
EXPLAIN SELECT * FROM bookings
WHERE ctid = '(0,1)'::tid;
```

```
                       QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Tid Scan on bookings  (cost=0.00..4.01 rows=1 width=21)
   TID Cond: (ctid = '(0,1)'::tid)
(2 rows)
```

### Cost estimation

When calculating an Index scan cost, the system estimates the index access cost and the table pages fetch cost, and then adds them up.

The index access cost depends entirely on the access method of choice. If a B-tree is used, the bulk of the cost comes from fetching and processing of index pages. The number of pages and rows fetched can be deduced from the total amount of data and the selectivity of the query conditions. Index page access patterns are random by nature (because pages that are logical neighbors aren't usually neighbors on disk too), and therefore the cost of accessing a single page is estimated as *random\_page\_cost*. The costs of descending from the tree root to the leaf page and additional expression calculation costs are also included here.

The CPU component of the table part of the cost includes the processing time of every row (*cpu\_tuple\_cost*) multiplied by the number of rows. The I/O component of the cost depends on the index access selectivity and the *correlation* between the order of the tuples on disk and the order in which the access method returns the tuple IDs.

**High correlation is good.** In the ideal case, when the tuple order on disk perfectly matches the logical order of the IDs in the index, Index Scan will move from page to page *sequentially*, reading each page *only once*.
![](https://habrastorage.org/r/w1560/getpro/habr/post_images/add/a9c/f83/adda9cf832611d146b5a3be65b52775a.png)

This correlation data is a collected statistic:

```
SELECT attname, correlation
FROM pg_stats WHERE tablename = 'bookings'
ORDER BY abs(correlation) DESC;
```

```
   attname    | correlation
−−−−−−−−−−−−−−+−−−−−−−−−−−−−−
 book_ref     | 1
 total_amount | 0.0026738467
 book_date    | 8.02188e−05
(3 rows)
```

An absolute correlation value close to 1 (as for `book_ref`) indicates a high correlation; a value closer to zero indicates a more chaotic distribution.

Here's an example of an Index scan over a large number of rows:

```
EXPLAIN SELECT * FROM bookings
WHERE book_ref < '100000';
```

```
                  QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Index Scan using bookings_pkey on bookings
   (cost=0.43..4638.91 rows=132999 width=21)
   Index Cond: (book_ref < '100000'::bpchar)
(3 rows)
```

The total cost is about 4639. The selectivity is estimated at:

```
SELECT round(132999::numeric/reltuples::numeric, 4)
FROM pg_class WHERE relname = 'bookings';
```

```
 round
−−−−−−−−
 0.0630
(1 row)
```

This is about 1/16, which is to be expected, considering that `book_ref` values range from 000000 to FFFFFF.

For B-trees, the index part of the cost is just the page fetch cost for all the required pages. Index records that match any filters supported by a B-tree are always ordered and stored in logically connected leaves, therefore the number of index pages to be read is estimated as the index size multiplied by the selectivity. However, the pages aren't stored on disk in order, so the scanning pattern is *random*.

The cost estimate includes the cost of processing each index record (at *cpu\_index\_tuple\_cost*) and the filter costs for each row (at *cpu\_operator\_cost* per operator, of which we have just one in this case).

The table I/O cost is the sequential read cost for all the required pages.
At perfect correlation, the tuples on disk are ordered, so the number of pages fetched will equal the table size multiplied by the selectivity. The cost of processing each scanned tuple (*cpu\_tuple\_cost*) is then added on top.

```
WITH costs(idx_cost, tbl_cost) AS (
  SELECT
  ( SELECT round(
      current_setting('random_page_cost')::real * pages +
      current_setting('cpu_index_tuple_cost')::real * tuples +
      current_setting('cpu_operator_cost')::real * tuples
    )
    FROM (
      SELECT relpages * 0.0630 AS pages, reltuples * 0.0630 AS tuples
      FROM pg_class WHERE relname = 'bookings_pkey'
    ) c
  ),
  ( SELECT round(
      current_setting('seq_page_cost')::real * pages +
      current_setting('cpu_tuple_cost')::real * tuples
    )
    FROM (
      SELECT relpages * 0.0630 AS pages, reltuples * 0.0630 AS tuples
      FROM pg_class WHERE relname = 'bookings'
    ) c
  )
)
SELECT idx_cost, tbl_cost, idx_cost + tbl_cost AS total
FROM costs;
```

```
 idx_cost | tbl_cost | total
−−−−−−−−−−+−−−−−−−−−−+−−−−−−−
     2457 |     2177 |  4634
(1 row)
```

This formula illustrates the logic behind the cost calculation, and the result is close enough to the planner's estimate. Getting the exact result is possible if you are willing to account for several minor details, but that is beyond the scope of this article.

**Low correlation is bad.** The whole picture changes when the correlation is low. Let's create an index for the `book_date` column, which has a near-zero correlation, and run a query that selects a similar number of rows as in the previous example.
Index access is so expensive (56957 against 4639 in the good case) that the planner only selects it if told to do so explicitly:

```
CREATE INDEX ON bookings(book_date);
SET enable_seqscan = off;
SET enable_bitmapscan = off;
EXPLAIN SELECT * FROM bookings
WHERE book_date < '2016-08-23 12:00:00+03';
```

```
                             QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Index Scan using bookings_book_date_idx on bookings
   (cost=0.43..56957.48 rows=132403 width=21)
   Index Cond: (book_date < '2016−08−23 12:00:00+03'::timestamp w...
(3 rows)
```

A lower correlation means that the order in which the access method returns tuple IDs will have little to do with the actual positions of the tuples on disk, so every next tuple will be in a different page. This makes the Index Scan node jump from page to page, and the number of page fetches in the worst-case scenario will equal the number of tuples.

![](https://habrastorage.org/r/w1560/getpro/habr/post_images/00a/86d/81a/00a86d81ab5a7d31401806924895cf54.png)

This doesn't mean, however, that simply substituting *seq\_page\_cost* with *random\_page\_cost* and `relpages` with `reltuples` will give us the correct cost. In fact, that would give us an estimate of 535787, which is much higher than the planner's estimate.

The key here is caching. Frequently accessed pages are stored in the buffer cache (and in the OS cache too), so the larger the cache size, the higher the chance of finding the requested page in it and avoiding a disk access. The cache size for planning purposes is defined by the *effective\_cache\_size* parameter (4GB by default). Lower parameter values increase the expected number of pages to be fetched.

I won't put the exact formula here (you can dig it up from the `backend/optimizer/path/costsize.c` file, under the `index_pages_fetched` function).
What I will put here is a graph illustrating how the number of pages fetched relates to the table size (with a selectivity of 1/2 and 10 rows per page):

![](https://habrastorage.org/r/w1560/getpro/habr/post_images/30f/2ef/2a4/30f2ef2a459e851f1969bb37bc78900e.png)

The dotted lines show where the number of fetches equals half the number of pages (the best possible result at perfect correlation) and half the number of rows (the worst possible result at zero correlation and no cache).

The *effective\_cache\_size* parameter should, in theory, match the total available cache size (including both the PostgreSQL buffer cache and the system cache), but the parameter is only used in cost estimation and does not actually reserve any cache memory, so you can change it around as you see fit, regardless of the actual cache values. If you tune the parameter way down to the minimum, you will get a cost estimate very close to the no-cache worst case mentioned before:

```
SET effective_cache_size = '8kB';
EXPLAIN SELECT * FROM bookings
WHERE book_date < '2016-08-23 12:00:00+03';
```

```
                             QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Index Scan using bookings_book_date_idx on bookings
   (cost=0.43..532745.48 rows=132403 width=21)
   Index Cond: (book_date < '2016−08−23 12:00:00+03'::timestamp w...
(3 rows)
```

```
RESET effective_cache_size;
RESET enable_seqscan;
RESET enable_bitmapscan;
```

When calculating the table I/O cost, the planner takes into account both the worst-case and best-case costs, and then picks an intermediate value based on the actual correlation.

Index Scan is efficient when only a fraction of a table's tuples is scanned. The more the tuple locations on disk correlate with the order in which the access method returns the IDs, the more tuples can be retrieved efficiently using this access method. Conversely, at low correlation the access method becomes inefficient even for a fairly small number of retrieved tuples.
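For the curious, the shape of that graph can be reproduced with a sketch of the Mackert–Lohman approximation that `index_pages_fetched` is based on. This is a paraphrase in Python, not the actual source; consult `costsize.c` for the authoritative version (which, among other details, also discounts the cache size by the index size):

```python
import math

def index_pages_fetched(tuples_fetched, table_pages, cache_pages):
    """Roughly estimate how many distinct table pages `tuples_fetched`
    random tuple accesses will touch, given a table of `table_pages`
    and an effective cache of `cache_pages` (Mackert-Lohman formula)."""
    T = max(float(table_pages), 1.0)
    b = max(float(cache_pages), 1.0)
    t = float(tuples_fetched)
    if T <= b:
        # The whole table fits in cache: fetches saturate at the table size.
        pages = (2.0 * T * t) / (2.0 * T + t)
        return min(math.ceil(pages), T)
    # Table larger than cache: past the threshold `lim`, every extra
    # tuple falling on an uncached page costs a fresh page fetch.
    lim = (2.0 * T * b) / (2.0 * T - b)
    if t <= lim:
        pages = (2.0 * T * t) / (2.0 * T + t)
    else:
        pages = b + (t - lim) * (T - b) / T
    return math.ceil(pages)

# A tiny cache pushes the estimate toward one fetch per tuple,
# the no-cache worst case shown by the dotted line in the graph.
print(index_pages_fetched(1000, 10000, 1))  # → 1000
```

Shrinking `cache_pages` toward 1 has the same effect as setting effective_cache_size to '8kB' above: the estimate approaches the per-tuple worst case.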
Index Only Scan
---------------

When an index contains all the data required to process a query, we call it a *covering* index for the query. Using a covering index, the access method can retrieve data directly from the index without a single table scan. This is called an *Index Only Scan* and can be used by any access method with the RETURNABLE property. The operation is represented in the plan with an Index Only Scan node:

```
EXPLAIN SELECT book_ref FROM bookings
WHERE book_ref < '100000';
```

```
                    QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Index Only Scan using bookings_pkey on bookings
   (cost=0.43..3791.91 rows=132999 width=7)
   Index Cond: (book_ref < '100000'::bpchar)
(3 rows)
```

The name suggests that the Index Only Scan node never queries the table, but in some cases it does. PostgreSQL indexes don't store visibility information, so the access method returns the data from *every* row version that matches the query, regardless of their visibility to the current transaction. Tuple visibility is verified afterwards by the indexing mechanism.

But if the method has to scan the table for visibility anyway, how is it different from plain Index Scan? The trick is a feature called *the visibility map*. Each heap relation has a visibility map to keep track of which pages contain only the tuples that are known to be visible to all active transactions. Whenever an index method returns a row ID that points to a page which is flagged in the visibility map, you can tell for certain that all the data there is visible to your transaction.

The Index Only Scan cost is determined by the number of table pages flagged in the visibility map.
This is a collected statistic:

```
SELECT relpages, relallvisible
FROM pg_class WHERE relname = 'bookings';
```

```
 relpages | relallvisible
−−−−−−−−−−+−−−−−−−−−−−−−−−
    13447 |         13446
(1 row)
```

The only difference between the Index Scan and the Index Only Scan cost estimates is that the latter multiplies the I/O cost of the former by the fraction of the pages not present in the visibility map. (The processing cost remains the same.) In this example every row version on every page is visible to every transaction, so the I/O cost is essentially multiplied by zero:

```
WITH costs(idx_cost, tbl_cost) AS (
  SELECT
  ( SELECT round(
      current_setting('random_page_cost')::real * pages +
      current_setting('cpu_index_tuple_cost')::real * tuples +
      current_setting('cpu_operator_cost')::real * tuples
    )
    FROM (
      SELECT relpages * 0.0630 AS pages, reltuples * 0.0630 AS tuples
      FROM pg_class WHERE relname = 'bookings_pkey'
    ) c
  ) AS idx_cost,
  ( SELECT round(
      (1 - frac_visible) * -- the fraction of the pages not in the visibility map
      current_setting('seq_page_cost')::real * pages +
      current_setting('cpu_tuple_cost')::real * tuples
    )
    FROM (
      SELECT relpages * 0.0630 AS pages, reltuples * 0.0630 AS tuples,
             relallvisible::real/relpages::real AS frac_visible
      FROM pg_class WHERE relname = 'bookings'
    ) c
  ) AS tbl_cost
)
SELECT idx_cost, tbl_cost, idx_cost + tbl_cost AS total FROM costs;
```

```
 idx_cost | tbl_cost | total
−−−−−−−−−−+−−−−−−−−−−+−−−−−−−
     2457 |     1330 |  3787
(1 row)
```

Any changes still present in the heap that haven't been vacuumed increase the total plan cost (and make the plan less desirable for the optimizer).
You can check the actual number of necessary heap fetches by using the `EXPLAIN ANALYZE` command:

```
CREATE TEMP TABLE bookings_tmp WITH (autovacuum_enabled = off)
AS SELECT * FROM bookings
ORDER BY book_ref;
ALTER TABLE bookings_tmp ADD PRIMARY KEY(book_ref);
ANALYZE bookings_tmp;
EXPLAIN (analyze, timing off, summary off)
SELECT book_ref FROM bookings_tmp
WHERE book_ref < '100000';
```

```
                          QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Index Only Scan using bookings_tmp_pkey on bookings_tmp
   (cost=0.43..4712.86 rows=135110 width=7) (actual rows=132109 l...
   Index Cond: (book_ref < '100000'::bpchar)
   Heap Fetches: 132109
(4 rows)
```

Because vacuuming is disabled, the executor has to check the visibility of every row (Heap Fetches). After vacuuming, however, there is no need for that:

```
VACUUM bookings_tmp;
EXPLAIN (analyze, timing off, summary off)
SELECT book_ref FROM bookings_tmp
WHERE book_ref < '100000';
```

```
                          QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Index Only Scan using bookings_tmp_pkey on bookings_tmp
   (cost=0.43..3848.86 rows=135110 width=7) (actual rows=132109 l...
   Index Cond: (book_ref < '100000'::bpchar)
   Heap Fetches: 0
(4 rows)
```

Indexes with an INCLUDE clause
------------------------------

You may want an index to include a specific column (or several) that your queries frequently use, but your index may not be extendable:

* For a unique index, adding an extra column may make the original columns no longer unique.
* The new column's data type may lack the appropriate operator class for the index access method.

If this is the case, you can (in PostgreSQL 11 and higher) supplement an index with columns which are not a part of the search key. While lacking the search functionality, these payload columns can make the index covering for the queries that want these columns.

People commonly refer to these specific types of indexes as *covering*, which is technically incorrect.
Any index is covering for a query as long as its set of columns *covers* the set of columns required to execute the query. Whether the index uses any fields added with the INCLUDE clause or not is irrelevant. Furthermore, the same index can be covering for one query but not for another.

This is an example that replaces an automatically generated primary key index with another one with an extra column:

```
CREATE UNIQUE INDEX ON bookings(book_ref) INCLUDE (book_date);
BEGIN;
ALTER TABLE bookings
DROP CONSTRAINT bookings_pkey CASCADE;
```

```
NOTICE: drop cascades to constraint tickets_book_ref_fkey on table tickets
ALTER TABLE
```

```
ALTER TABLE bookings ADD CONSTRAINT bookings_pkey
PRIMARY KEY USING INDEX bookings_book_ref_book_date_idx; -- new index
```

```
NOTICE: ALTER TABLE / ADD CONSTRAINT USING INDEX will rename index "bookings_book_ref_book_date_idx" to "bookings_pkey"
ALTER TABLE
```

```
ALTER TABLE tickets ADD FOREIGN KEY (book_ref)
REFERENCES bookings(book_ref);
COMMIT;
EXPLAIN SELECT book_ref, book_date FROM bookings
WHERE book_ref < '100000';
```

```
                          QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Index Only Scan using bookings_pkey on bookings  (cost=0.43..437...
   Index Cond: (book_ref < '100000'::bpchar)
(2 rows)
```

Bitmap Scan
-----------

While Index Scan is effective at high correlation, it falls short when the correlation drops to the point where the scanning pattern is more random than sequential and the number of page fetches increases. One solution here is to collect all the tuple IDs beforehand, sort them by page number, and then use them to scan the table. This is how *Bitmap Scan*, the second basic index scan method, works. It is available to any access method with the BITMAP SCAN property.
Consider the following plan:

```
CREATE INDEX ON bookings(total_amount);
EXPLAIN SELECT * FROM bookings WHERE total_amount = 48500.00;
```

```
                          QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Bitmap Heap Scan on bookings  (cost=54.63..7040.42 rows=2865 wid...
   Recheck Cond: (total_amount = 48500.00)
   −> Bitmap Index Scan on bookings_total_amount_idx
      (cost=0.00..53.92 rows=2865 width=0)
      Index Cond: (total_amount = 48500.00)
(5 rows)
```

The Bitmap Scan operation is represented by two nodes: Bitmap Index Scan and Bitmap Heap Scan. The Bitmap Index Scan calls the access method, which generates the *bitmap* of all the tuple IDs. A bitmap consists of multiple chunks, each corresponding to a single page in the table. The number of tuples on a page is limited due to the significant header size of each tuple (up to 256 tuples per standard page). As each page corresponds to a bitmap chunk, the chunks are created large enough to accommodate this maximum number of tuples (32 bytes per standard page).

The Bitmap Heap Scan node scans the bitmap chunk by chunk, goes to the corresponding pages, and fetches all the tuples there which are marked in the bitmap. Thus, the pages are fetched in ascending order, and each page is only fetched once.

The actual scan order is not sequential, because the pages are probably not located one after another on disk. The operating system's prefetching is useless here, so Bitmap Heap Scan employs its own prefetching functionality (and it's the only node to do so) by asynchronously reading *effective\_io\_concurrency* pages ahead. This functionality largely depends on whether or not your system supports the `posix_fadvise` function. If it does, you can set the parameter (on the tablespace level) in accordance with your hardware capabilities.

### Map precision

When a query requests tuples from a large number of pages, the bitmap of these pages can occupy a significant amount of local memory.
The allowed bitmap size is limited by the *work\_mem* parameter. If a bitmap reaches the maximum allowed size, some of its chunks start to get upscaled: each upscaled chunk's bit is matched to a page instead of a tuple, and the chunk then covers a range of pages instead of just one. This keeps the bitmap size manageable while sacrificing some accuracy.

You can check your bitmap accuracy with the `EXPLAIN ANALYZE` command:

```
EXPLAIN (analyze, costs off, timing off)
SELECT * FROM bookings WHERE total_amount > 150000.00;
```

```
                          QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Bitmap Heap Scan on bookings (actual rows=242691 loops=1)
   Recheck Cond: (total_amount > 150000.00)
   Heap Blocks: exact=13447
   −> Bitmap Index Scan on bookings_total_amount_idx (actual rows...
       Index Cond: (total_amount > 150000.00)
(5 rows)
```

In this case the bitmap was small enough to accommodate all the tuple data without upscaling. This is what we call an *exact* bitmap.

If we lower the *work\_mem* value, some bitmap chunks will be upscaled. This will make the bitmap *lossy*:

```
SET work_mem = '512kB';

EXPLAIN (analyze, costs off, timing off)
SELECT * FROM bookings WHERE total_amount > 150000.00;
```

```
                          QUERY PLAN
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
 Bitmap Heap Scan on bookings (actual rows=242691 loops=1)
   Recheck Cond: (total_amount > 150000.00)
   Rows Removed by Index Recheck: 1145721
   Heap Blocks: exact=5178 lossy=8269
   −> Bitmap Index Scan on bookings_total_amount_idx (actual rows...
       Index Cond: (total_amount > 150000.00)
(6 rows)
```

When fetching a table page using an upscaled bitmap chunk, the executor has to recheck its tuples against the query conditions. This step is always represented by the Recheck Cond line in the plan, whether the checking actually takes place or not. The number of rows filtered out is displayed under Rows Removed by Index Recheck.
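The effect of a lossy chunk on execution can be mimicked with a short Python sketch. This is a hedged model only: the "memory limit" here is faked as a cap on the number of exact chunks rather than real *work\_mem* accounting, but the recheck behavior mirrors what the plan above reports.

```python
# Hypothetical model: a bitmap maps page -> set of exact offsets, or True
# for a lossy (upscaled) chunk that only says "this page has matches".

def make_lossy(bitmap, max_exact_pages):
    """Keep at most max_exact_pages exact chunks; upscale the rest."""
    result = {}
    for i, page in enumerate(sorted(bitmap)):
        result[page] = bitmap[page] if i < max_exact_pages else True
    return result

def scan(bitmap, read_page, predicate):
    rows, removed_by_recheck = [], 0
    for page in sorted(bitmap):
        tuples = read_page(page)
        if bitmap[page] is True:           # lossy: recheck every tuple
            for t in tuples:
                if predicate(t):
                    rows.append(t)
                else:
                    removed_by_recheck += 1
        else:                              # exact: stored offsets are trusted
            rows.extend(tuples[off] for off in sorted(bitmap[page]))
    return rows, removed_by_recheck
```

The counter plays the role of Rows Removed by Index Recheck: it only grows for tuples sitting on pages covered by an upscaled chunk.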
On large data sets, even a bitmap where each chunk was upscaled may still exceed the *work\_mem* size. If this is the case, the *work\_mem* limit is simply ignored, and no additional upscaling or buffering is done. ### Bitmap operations A query may include multiple fields in its filter conditions. These fields may each have a separate index. Bitmap Scan allows us to take advantage of multiple indexes at once. Each index gets a row version bitmap built for it, and the bitmaps are then ANDed and ORed together. Example: ``` EXPLAIN (costs off) SELECT * FROM bookings WHERE book_date < '2016-08-28' AND total_amount > 250000; ``` ``` −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Bitmap Heap Scan on bookings Recheck Cond: ((total_amount > '250000'::numeric) AND (book_da... −> BitmapAnd −> Bitmap Index Scan on bookings_total_amount_idx Index Cond: (total_amount > '250000'::numeric) −> Bitmap Index Scan on bookings_book_date_idx Index Cond: (book_date < '2016−08−28 00:00:00+03'::tim... (7 rows) ``` In this example the BitmapAnd node ANDs two bitmaps together. When exact bitmaps are ANDed and ORed together, they remain exact (unless the resulting bitmap exceeds the *work\_mem* limit). Any upscaled chunks in the original bitmaps remain upscaled in the resulting bitmap. ### Cost estimation Consider this Bitmap Scan example: ``` EXPLAIN SELECT * FROM bookings WHERE total_amount = 28000.00; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Bitmap Heap Scan on bookings (cost=599.48..14444.96 rows=31878 ... 
   Recheck Cond: (total_amount = 28000.00)
   −> Bitmap Index Scan on bookings_total_amount_idx
      (cost=0.00..591.51 rows=31878 width=0)
        Index Cond: (total_amount = 28000.00)
(5 rows)
```

The planner estimates the selectivity here at approximately:

```
SELECT round(31878::numeric/reltuples::numeric, 4)
FROM pg_class WHERE relname = 'bookings';
```

```
 round
−−−−−−−−
 0.0151
(1 row)
```

The total Bitmap Index Scan cost is calculated the same way as the plain Index Scan cost, except that it includes no table access component:

```
SELECT round(
  current_setting('random_page_cost')::real * pages +
  current_setting('cpu_index_tuple_cost')::real * tuples +
  current_setting('cpu_operator_cost')::real * tuples
)
FROM (
  SELECT relpages * 0.0151 AS pages, reltuples * 0.0151 AS tuples
  FROM pg_class WHERE relname = 'bookings_total_amount_idx'
) c;
```

```
 round
−−−−−−−
   589
(1 row)
```

When bitmaps are ANDed and ORed together, their index scan costs are added up, plus a (tiny) cost of the logical operation itself.

The Bitmap Heap Scan I/O cost calculation differs significantly from the Index Scan one at perfect correlation. Bitmap Scan fetches table pages in ascending order and without repeat scans, but the matching tuples are no longer located neatly next to each other, so there is no quick and easy sequential scan all the way through. Therefore, the probable number of pages to be fetched increases.

![](https://habrastorage.org/r/w1560/getpro/habr/post_images/30f/5bb/9d6/30f5bb9d677ee029cce4e7a140605f20.png)

This is illustrated by the following formula:

![](https://habrastorage.org/getpro/habr/post_images/36b/7bc/a2d/36b7bca2d574a8388c95cbb4e7924cad.svg)

The fetch cost of a single page is estimated somewhere between *seq\_page\_cost* and *random\_page\_cost*, depending on the fraction of the total number of pages to be fetched.
``` WITH t AS ( SELECT relpages, least( (2 * relpages * reltuples * 0.0151) / (2 * relpages + reltuples * 0.0151), relpages ) AS pages_fetched, round(reltuples * 0.0151) AS tuples_fetched, current_setting('random_page_cost')::real AS rnd_cost, current_setting('seq_page_cost')::real AS seq_cost FROM pg_class WHERE relname = 'bookings' ) SELECT pages_fetched, rnd_cost - (rnd_cost - seq_cost) * sqrt(pages_fetched / relpages) AS cost_per_page, tuples_fetched FROM t; ``` ``` pages_fetched | cost_per_page | tuples_fetched −−−−−−−−−−−−−−−+−−−−−−−−−−−−−−−+−−−−−−−−−−−−−−−− 13447 | 1 | 31878 (1 row) ``` As usual, there is a processing cost for each scanned tuple to add to the total I/O cost. With an exact bitmap, the number of tuples fetched equals the number of table rows multiplied by the selectivity. When a bitmap is lossy, all the tuples on the lossy pages have to be rechecked: ![](https://habrastorage.org/r/w1560/getpro/habr/post_images/0b9/24b/0ff/0b924b0ffa79c8a472fee4d144228b02.png) This is why (in PostgreSQL 11 and higher) the estimate takes into account the probable fraction of lossy pages (which is calculated from the total number of rows and the *work\_mem* limit). The estimate also includes the total query conditions recheck cost (even for exact bitmaps). The Bitmap Heap Scan startup cost is calculated slightly differently from the Bitmap Index Scan total cost: there is the bitmap operations cost to account for. 
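The page-count interpolation used in that query is compact enough to restate as a hedged Python sketch. It simply mirrors the SQL above; `4.0` and `1.0` are the stock *random\_page\_cost* and *seq\_page\_cost* defaults, and a single worker with an exact bitmap is assumed.

```python
from math import sqrt

def bitmap_heap_io_estimate(relpages, reltuples, selectivity,
                            seq_page_cost=1.0, random_page_cost=4.0):
    """Estimate pages fetched and per-page fetch cost for a Bitmap Heap
    Scan (sketch: single worker, exact bitmap)."""
    tuples_fetched = reltuples * selectivity
    # Cap the estimate at the table size: no page is fetched twice.
    pages_fetched = min(
        2 * relpages * tuples_fetched / (2 * relpages + tuples_fetched),
        relpages,
    )
    # Interpolate between sequential and random page cost.
    cost_per_page = random_page_cost - (
        (random_page_cost - seq_page_cost) * sqrt(pages_fetched / relpages)
    )
    return pages_fetched, cost_per_page
```

With the bookings numbers above (13,447 pages, selectivity 0.0151) the uncapped estimate exceeds the table size, so every page is assumed fetched once and the per-page cost collapses to *seq\_page\_cost*.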
In this example, the bitmap is exact and the estimate is calculated as follows: ``` WITH t AS ( SELECT 1 AS cost_per_page, 13447 AS pages_fetched, 31878 AS tuples_fetched ), costs(startup_cost, run_cost) AS ( SELECT ( SELECT round( 589 /* child node cost */ + 0.1 * current_setting('cpu_operator_cost')::real * reltuples * 0.0151 ) FROM pg_class WHERE relname = 'bookings_total_amount_idx' ), ( SELECT round( cost_per_page * pages_fetched + current_setting('cpu_tuple_cost')::real * tuples_fetched + current_setting('cpu_operator_cost')::real * tuples_fetched ) FROM t ) ) SELECT startup_cost, run_cost, startup_cost + run_cost AS total_cost FROM costs; ``` ``` startup_cost | run_cost | total_cost −−−−−−−−−−−−−−+−−−−−−−−−−+−−−−−−−−−−−− 597 | 13845 | 14442 (1 row) ``` Parallel Index Scans -------------------- All the index scans (plain, Index Only and Bitmap) can run in parallel. The parallel scan costs are calculated the same way sequential scan costs are, but (as is the case with parallel sequential scans) the distribution of processing resources between parallel processes brings the total cost down. The I/O operations are synchronized between processes and are performed sequentially, so no changes here. Let me demonstrate several examples of parallel plans without getting into cost estimation. Parallel Index Scan: ``` EXPLAIN SELECT sum(total_amount) FROM bookings WHERE book_ref < '400000'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Finalize Aggregate (cost=19192.81..19192.82 rows=1 width=32) −> Gather (cost=19192.59..19192.80 rows=2 width=32) Workers Planned: 2 −> Partial Aggregate (cost=18192.59..18192.60 rows=1 widt... 
−> Parallel Index Scan using bookings_pkey on bookings (cost=0.43..17642.82 rows=219907 width=6) Index Cond: (book_ref < '400000'::bpchar) (7 rows) ``` Parallel Index Only Scan: ``` EXPLAIN SELECT sum(total_amount) FROM bookings WHERE total_amount < 50000.00; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Finalize Aggregate (cost=23370.60..23370.61 rows=1 width=32) −> Gather (cost=23370.38..23370.59 rows=2 width=32) Workers Planned: 2 −> Partial Aggregate (cost=22370.38..22370.39 rows=1 widt... −> Parallel Index Only Scan using bookings_total_amoun... (cost=0.43..21387.27 rows=393244 width=6) Index Cond: (total_amount < 50000.00) (7 rows) ``` During a Bitmap Index Scan, the building of the bitmap in the Bitmap Index Scan node is always performed by a single process. When the bitmap is done, the scanning is performed in parallel in the Parallel Bitmap Heap Scan node: ``` EXPLAIN SELECT sum(total_amount) FROM bookings WHERE book_date < '2016-10-01'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Finalize Aggregate (cost=21492.21..21492.22 rows=1 width=32) −> Gather (cost=21491.99..21492.20 rows=2 width=32) Workers Planned: 2 −> Partial Aggregate (cost=20491.99..20492.00 rows=1 widt... −> Parallel Bitmap Heap Scan on bookings (cost=4891.17..20133.01 rows=143588 width=6) Recheck Cond: (book_date < '2016−10−01 00:00:00+03... −> Bitmap Index Scan on bookings_book_date_idx (cost=0.00..4805.01 rows=344611 width=0) Index Cond: (book_date < '2016−10−01 00:00:00+... (10 rows) ``` Comparison of access methods ---------------------------- This graph displays the dependency between the cost of various access methods and the selectivity of conditions: ![](https://habrastorage.org/r/w1560/getpro/habr/post_images/94e/2f0/b40/94e2f0b409138da34e223e92370f385a.png) The graph is qualitative, the exact numbers will obviously depend on dataset and parameter values. 
Sequential Scan is selectivity-agnostic and usually outperforms the others above a certain selectivity threshold.

The Index Scan cost is highly dependent on the correlation between the physical order of the tuples on disk and the order in which the access method returns the IDs. Index Scan at perfect correlation outperforms the others even at higher selectivities, but at a low correlation (which is usually the case) the cost is high and quickly overtakes the Sequential Scan cost. All told, Index Scan reigns supreme in one very important case: whenever an index (usually a unique index) is used to fetch a single row.

Index Only Scan can outperform the others (when applicable), including even Sequential Scan during full table scans. Its performance, however, depends highly on the visibility map, and degrades down to the Index Scan level in the worst-case scenario.

The Bitmap Scan cost, while dependent on the memory limit allocated for the bitmap, is still more reliable than the Index Scan cost. This access method performs worse than Index Scan at perfect correlation, but outperforms it by far when the correlation is low.

Each access method shines in specific situations; none is always worse or always better than the others. The planner always goes the distance to select the most efficient access method for every specific case. For its estimations to be accurate, up-to-date statistics are paramount.

To be continued.
As well as creating the compiled .ps file we also have to find a way to make it available to the running Silverlight application. The .ps file has to be downloaded either as part of the Silverlight application DLL or within the XAP file. Alternatively you could make the file available on the server and arrange to download it separately on demand. In most cases it's simpler to include it in the DLL or the XAP file, since it is generally small.

To do this, first include the .ps file in the project by right-clicking on it and selecting "Include In Project". Next select the file and, in the Properties window, set its Build Action to Resource, which will ensure it is included in the DLL, or to Content, which will ensure it is included in the XAP file. In the rest of the example it is assumed that the Build Action is set to Resource and the .ps file is included in the DLL.

Now that we have the HLSL pixel shader compiled correctly and stored as shader1.ps in the shader directory we can write the C# to make use of it. First we need to add:

    using System.Windows.Media.Effects;

Then we create a PixelShader object and point its UriSource at the compiled shader:

    PixelShader ps = new PixelShader();
    ps.UriSource = new Uri(@"/MyShader;component/shader/shader1.ps",
                           UriKind.Relative);

Notice that the .ps file is loaded from the local Silverlight DLL. In this case the name of the project is MyShader but you should change this to whatever the project is called. We will put the compiled .ps file to work in the next article.

If you would like to know when the next article on Silverlight HLSL is published register with iProgrammer or follow us on Twitter.
Mico Siahaan wrote:
> When I tried to run, I got this message:
>
> ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 3404 and this is thread id 3848

This is a bug in SQLObject, related to SQLite having a different kind of thread safety check than other adapters. SQLObject never concurrently uses a connection from more than one thread, but SQLite doesn't like you to ever reuse a connection in a new thread. I've asked if anyone wants to make a test for this (any short repeatable script will do!), but no one ever does. So, probably out of stubbornness on my part as much as anything, the bug has gone unfixed.

Probably the fix is to make a method sqlobject.sqlite.sqliteconnection.SQLiteConnection.getConnection, which recreates a connection each time it is called. It depends on the overhead involved in making a SQLite connection, but I suspect the overhead is very small (?). If not, then it should keep a pool of connections, keyed by thread id, or using threadlocal storage.

TurboGears and some other frameworks have a workaround for this bug, but the workaround causes some other problems.

There's also an issue about using in-memory databases with SQLite, if you aren't persisting the connection. I don't know if it's possible at all in a multi-threaded environment, because of that check (unless maybe you created a worker thread and queries were queued on that thread -- which would end up serializing all queries and transactions, but at least would be slightly functional). Really SQLite needs a way of naming an in-memory database, so you can refer to it instead of always creating a new one on demand. At least that's my impression of what :memory: does.

-- Ian Bicking / ianb@...
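The per-thread pooling Ian describes can be sketched with `threading.local`. This is a hedged illustration of the workaround only, not SQLObject's actual code; the class and method names are made up for the example.

```python
import sqlite3
import threading

class PerThreadConnections:
    """Hand each thread its own SQLite connection, since a SQLite
    connection must only be used on the thread that created it."""

    def __init__(self, db_path):
        self.db_path = db_path
        self._local = threading.local()   # one attribute slot per thread

    def get(self):
        conn = getattr(self._local, "conn", None)
        if conn is None:
            conn = sqlite3.connect(self.db_path)
            self._local.conn = conn       # cached for this thread only
        return conn
```

Note the in-memory caveat from the thread still applies: connecting to `:memory:` per thread creates a fresh, empty database each time, so this pattern only helps with on-disk databases.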
Kevin Dangoor <dangoor@...> writes:
> Can someone in the know (like, say, Ian :) take a look at this and see
> if it's accurate and what it may be missing?

Kevin, I'm no expert, but as a hint on the style, you should try using the same notation for explaining "cache = False". You used "cache=False" everywhere, except the connection string where you used "cache=0". I believe that these are the same, so they should be written the same way. This avoids that feeling of "did he mention that before? is it the same as '?cache=0' that he mentioned before?"

Be seeing you,
-- Jorge Godoy <godoy@...>

Can someone in the know (like, say, Ian :) take a look at this and see if it's accurate and what it may be missing?

Thanks,
Kevin
-- Kevin Dangoor
Author of the Zesty News RSS newsreader
company:

Switching the auto commit mode did not fix the transaction problem for SQL Server inserts.

My connection is: 'mssql://user:password@...:1433/Database?autoCommit=1';

Any other suggestions?

Here's the tail end of the stack trace again:

File "c:\python24\lib\site-packages\SQLObject-0.8dev-py2.4.egg\sqlobject\dbconnection.py", line 219, in _runWithConnection
    self.releaseConnection(conn)
File "c:\python24\lib\site-packages\SQLObject-0.8dev-py2.4.egg\sqlobject\dbconnection.py", line 261, in releaseConnection
    conn.commit()
File "C:\Python24\lib\site-packages\pymssql.py", line 244, in commit
    self.__cnx.query("commit tran")
error: SQL Server message 3902, state 1, severity 16:
The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.

-Tim

Date: Wed, 23 Nov 2005 01:20:33 -0800 (GMT-08:00)
From: Xun Cheng <chengx@...>
Subject: [SQLObject] Re: MSSQL with Pymssql: Commit without Begin Transaction

>> I am unable to perform insert operations when using pymssql and the
>> new mssql module in 0.8dev. I am able to perform select operations
>> successfully.
>> The problem on insertion appears to be that the module is not issuing
>> a "begin" transaction before it tries to "commit".

>I'm using sybase and I hit a similar issue. The workaround I used
>is to switch to autocommit mode. You can use a dburl string like
>the following: "sybase://user:passwd@.../dbname?autoCommit=1".
>I hope it works for mssql too.

>xun
Regards, Rajeev J Sebastian Dinamis Corporation 3266 Yonge Street, Suite 1419 Toronto, ON Canada M4N 3P6 +1 416-410-3326 Brian Beck <exogen@...> writes: > Take a look at the code for TurboGears (), which uses > CherryPy + SQLObject (with SQLite) and handles this issue quite nicely. Not just SQLite, but every other RDBMS supported by SQLObject. I'm using it with PostgreSQL, there are people using it with MySQL and so on. -- Jorge Godoy <godoy@...> Mico Siahaan wrote: > Dear all, > > I tried to make simple app using CherryPy + SQLObject. Take a look at the code for TurboGears (), which uses CherryPy + SQLObject (with SQLite) and handles this issue quite nicely. -- Brian Beck Adventurer of the First Order > ProgrammingError: SQLite objects created in a thread can only be used in = that sa > me thread.The object was created in thread id 3404 and this is thread id = 3848 > > Can you suggest me what was wrong and how to fix this? see Erik Stephens wrote: > I just stumbled across _init in the docs. Sub-classing SQLObject, > overriding _init to define an attribute if self.sqlmeta.idName is > defined, seems to get me what I want: > > def _init(self, *lArgs, **dArgs): > apply(SQLObject._init, (self,) + lArgs, dArgs) > if hasattr(self.sqlmeta, 'idName'): > setattr(self, self.sqlmeta.idName, lArgs[0]) You can do that, you could also do: def _get_person_id(self): return self.id However, irregardless of any aliases, obj.id always has to be the primary key -- SQLObject uses that a lot internally. -- Ian Bicking | ianb@... |
http://sourceforge.net/p/sqlobject/mailman/sqlobject-discuss/?viewmonth=200511&viewday=28
CC-MAIN-2014-35
refinedweb
1,243
66.74
Hello folks, I am working on an assignment for school and I would greatly appreciate some help with my code. I would like to retrieve some data from my header via a function in my main. However, when I call the function in my main the text shows up, but not the data. If I attempt to retrieve the data straight in the main without the use of a function, I get the right results; it is only when I use a function that I have the problem. Some help would be greatly appreciated. Here is my header:

Code:
#ifndef HEADER_H
#define HEADER_H
#include <iostream>
#include <string>
#include <cstring>
using namespace std;

class Account
{
public:
    char first_name[20];
    char last_name[20];
    float sin_number;
    char account_type[10];
    int number_of_transactions;

    void set_data(char fname[20], char lname[20], float sinnum, int acctype, int numtrans)
    {
        strcpy(first_name, fname);
        strcpy(last_name, lname);
        sin_number = sinnum;
        if (acctype = 1)
        {
            strcpy(account_type, "Chequing");
        }
        if (acctype = 2)
        {
            strcpy(account_type, "Savings");
        }
        number_of_transactions = numtrans;
    }

    void get_data()
    {
        cout << "the first name is : " << first_name << endl;
        cout << "the last name is : " << last_name << endl;
        cout << "the sin number is : " << sin_number << endl;
        cout << "the account type is : " << account_type << endl;
        cout << "the number of transactions is : " << number_of_transactions << endl;
    }
};
#endif

Thank you very much for your help. I am very new to this.
And here is my main:

Code:
#include <iostream>
#include "Account.h"
using namespace std;

class functions : public Account
{
public:
    void PrintStatement(void)
    {
        Account acc;
        acc.get_data();
    }
};

int main()
{
    Account acc;
    functions func;
    char fname[20];
    char lname[20];
    float sinnum;
    int acctype;
    int numtrans;
    cout << "Please enter the client's first name" << endl;
    cin >> fname;
    cout << "Please enter the client's last name" << endl;
    cin >> lname;
    cout << "Please enter the client's sin number" << endl;
    cin >> sinnum;
    cout << "Please enter the 1: for chequing or 2: for savings" << endl;
    cin >> acctype;
    if (acctype != 1 && acctype != 2)
    {
        cout << "Invalid account type, Please enter the 1: for chequing or 2: for savings" << endl;
        cin >> acctype;
    }
    cout << "Please enter the client's transaction number" << endl;
    cin >> numtrans;

    acc.set_data(fname, lname, sinnum, acctype, numtrans);
    acc.get_data();        // it works here
    func.PrintStatement(); // here it doesn't work
    return (0);
}

Thank you very much for your help. I am very new to this.
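For what it's worth, the empty output happens because `PrintStatement` default-constructs a brand-new `Account` inside the function and prints *that*, never the object `set_data` was called on. (A second bug lurks in the header: `if (acctype = 1)` assigns instead of comparing; `==` is intended.) Here is a minimal hedged sketch of the fix, trimmed to one field to keep it short:

```cpp
#include <cassert>
#include <cstring>
#include <iostream>

// Cut-down Account: the point is which object owns the data, not the
// full field list from the original header.
class Account {
public:
    char first_name[20];

    void set_data(const char* fname) {
        std::strcpy(first_name, fname);
    }

    void get_data() const {
        std::cout << "the first name is : " << first_name << std::endl;
    }
};

// Fix: take the already-filled Account by reference instead of
// default-constructing a fresh, uninitialized one inside the function.
void PrintStatement(const Account& acc) {
    acc.get_data();
}
```

With this shape, calling `PrintStatement(acc);` after `acc.set_data(...)` prints the stored values, because the function reads from the same object.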
We currently only support strings and integers. We should also support floats, dates and arrays. Arrays can contain any key types, including other arrays. The order goes: Arrays Strings Dates floats/integers Note that the spec doesn't make any difference between integers and floats. I've been working on it quite intensively, here's what I did: - changed all key columns in the schema to the BLOB type - implemented schema upgrade - rewrote Key.h to work with plain buffers (strings, integers and arrays supported) - fixed all callers of BindToStatement() to pass additional argument (auto-increment flag) - forked some string code into StringUtils.h/cpp So it seems the approach is correct, all automatic tests passed on the try server. There's still a lot of work, but it looks promising (to have it in FF 11). The work is dependent on Ben's compressed structures clones and files in indexedDB Created attachment 577263 [details] [diff] [review] initial patch (not for review) Support for floats has been added. I'm now going to rebase using the patch in bug 701772. I added also support for dates. So the patch should now cover all types. I'll attach it in a minute. Created attachment 582039 [details] [diff] [review] patch for feedback the patch depends on files in idb and autoincrement patch Comment on attachment 582039 [details] [diff] [review] patch for feedback Review of attachment 582039 [details] [diff] [review]: ----------------------------------------------------------------- ::: dom/indexedDB/IDBObjectStore.cpp @@ +1908,3 @@ > mKey.ToInteger() >= mObjectStore->Info()->nextAutoIncrementId) { > // XXX Once we support floats, we should use floor(mKey.ToFloat()) here > autoIncrementNum = mKey.ToInteger(); See the XXX comment here. 
This should be: else if (mKey.IsFloat() && mKey.ToFloat() >= (float)...->nextAutoIncrementId) { autoIncrementNum = floor(mKey.ToFloat()); ::: dom/indexedDB/Key.cpp @@ +51,5 @@ > + return NS_OK; > + } > + > + if (JSVAL_IS_INT(aVal)) { > + *aType = AppendFromInteger(JSVAL_TO_INT(aVal)); Why not simply use AppendFromFloat((jsdouble)JSVAL_TO_INT(aVal))? @@ +90,5 @@ > + > + *aType = 4; > + return NS_OK; > + } else if (JS_ObjectIsDate(aCx, obj)) { > + printf("date!\n"); Remove the printf ::: dom/indexedDB/Key.h @@ +176,5 @@ > + > + char type; > + nsresult rv = AppendFromJSVal(aCx, aVal, &type); > +// XXXvarga cleanup this mess! > +// NS_ENSURE_SUCCESS(rv, rv); What's going on here? @@ +205,4 @@ > return NS_OK; > } > > PRInt64 ToInteger() const I don't think we need this function any more @@ +212,3 @@ > } > > + void ToString(nsString& aString) const Or this. @@ +270,5 @@ > + > + static > + PRUint64 DoFlipBits(PRUint64 u) > + { > + PRUint64 mask = -PRInt64(u >> 63) | PR_UINT64(0x8000000000000000); It's actually not safe to right-shift a signed value. The C++ spec doesn't define if sign extension will happen or not but here you are relying on sign extension not happening. Additionally, this won't do the right thing for -0. I think it will compare less than 0 whereas it should compare equal to it. I think simply doing return u & PRUINT64(0x8000000000000000) ? -u : u | PRUINT64(0x8000000000000000); will work. But it's possible that there's a way to do it without the branch. @@ +281,5 @@ > + PRUint64 mask = ((u >> 63) - 1) | PR_UINT64(0x8000000000000000); > + return u ^ mask; > + } > + > + void PrintBuffer() At least put this in #ifdef DEBUG. @@ +299,5 @@ > + > + AppendUCS2toUTF8LT(aString, mBuffer); > + if (mBuffer.Last() != '\0') { > + mBuffer.Append('\0'); > + } I don't understand this branch? Why shouldn't we always end with an extra \0? 
@@ +301,5 @@ > + if (mBuffer.Last() != '\0') { > + mBuffer.Append('\0'); > + } > + > + return 3; It seems silly to have a function that always returns the same value. Why not make AppendFromJSVal take a bool* instead and make that function explicitly set it to true in all places except when parsing void/null. (FWIW, we should make AppendFromJSVal not deal with void/null, that should be the job of the callers or a separate explicit Key::FromJSValSupportNullAndVoid function (but with a better name)) @@ +309,5 @@ > + { > + mBuffer.Append('\2'); > + > + PRUint64 number = *reinterpret_cast<PRUint64*>(&aFloat); > + number = SwapBytes(DoFlipBits(number)); I don't think calling SwapBytes here is correct for both low and high endiannness. @@ +321,5 @@ > + { > + mBuffer.Append('\1'); > + > + PRUint64 number = *reinterpret_cast<PRUint64*>(&aFloat); > + number = SwapBytes(DoFlipBits(number)); Same here. Also, I don't think this will do the right thing for negative zero. Positive and negative zero should compare as the same value which I don't think will happen here. @@ +386,5 @@ > + > + return result; > + } > + > + PRInt64 ToInteger(PRUint32* aPos) const I don't think we need this function any more. ::: dom/indexedDB/KeyUtils.cpp @@ +39,5 @@ > +#include "nsAString.h" > +#include "KeyUtils.h" > + > +void > +AppendUCS2toUTF8LT( const nsAString& aSource, nsACString& aDest ) Make these two functions live as static helper functions on Key instead. That way we only need one .cpp file. And it'll be clear that they are only there for key encoding/decoding. ::: dom/indexedDB/test/test_put_get_values.html @@ +22,5 @@ > +// let testString = { key: String.fromCharCode(0xF3, 0xF4, 0xF6), value: "testString" }; > +// let testInt = { key: 1, value: 1002 }; > +// let testInt = { key: ["abc\0"], value: 1002 }; > + let testInt = { key: new Date(1), value: 1002 }; > +// let testInt = { key: ["abc"], value: 1002 }; The changes to this file look in general wonky Err.. 
nevermind about the shift safetyness. You are right-shifting a unsigned value which is totally safe. Created attachment 582647 [details] [diff] [review] Fixes on top of Jans patch This mostly changes the encoding/decoding code. It also fixes most (but not all) of my review comments. The two remaining comments is that we need to fix swapbytes to only swap on little-endian systems, and the test-changes in test_put_get_values.html still need to be looked at. I'm happy to fix both these things tomorrow or monday. Created attachment 582648 [details] [diff] [review] Jan's plus my changes. This is the total set of changes made by the two patches together. For review ease if needed. test_keys.html is missing in your patches actually, the endianness is properly handled in nsIStreamBufferAccess.idl Created attachment 582764 [details] [diff] [review] On top of my patch This is on top of my patch (which is on top of Jan's patch). Includes fixes to the endian-issue and the test issue so this should be ready-to-go. Created attachment 582765 [details] [diff] [review] Total changes Comment on attachment 582765 [details] [diff] [review] Total changes >--- a/dom/indexedDB/IDBObjectStore.cpp >+++ b/dom/indexedDB/IDBObjectStore.cpp >@@ -1903,34 +1903,34 @@ AddHelper::DoDatabaseWork(mozIStorageCon > // XXX Once we support floats, we should use floor(mKey.ToFloat()) here >- autoIncrementNum = mKey.ToInteger(); >+ autoIncrementNum = floor(mKey.ToFloat()); Remove comment. >--- /dev/null >+++ b/dom/indexedDB/Key.cpp >@@ -0,0 +1,378 @@ >+/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ >+/* vim: set ts=2 et sw=2 Indexed Database. >+ * >+ * The Initial Developer of the Original Code is The Mozilla Foundation. 
>+ * Portions created by the Initial Developer are Copyright (C) 2010 >+ * >+ * Contributor(s): >+ * Ben Turner <bent.mozilla "jsdate.h" >+#include "jsnum.h" >+#include "Key.h" >+ >+USING_INDEXEDDB_NAMESPACE >+ >+/* >+ ["foo"] 7 s s s 0 0 >+ [["foo"]] 11 s s s 0 0 0 >+ [[["foo"]]] 12 3 s s s 0 0 0 0 >+ [[[]]] 12 0 0 0 >+ [[]] 8 0 >+ [[], "foo"] 8 3 s s s 0 0 >+ [] 4 >+*/ Clarify what this comment actually means, please. >+ >+const int MaxArrayCollapse = 3; >+ >+nsresult >+Key::EncodeJSVal(JSContext* aCx, const jsval aVal, PRUint8 aTypeOffset) >+{ >+ PR_STATIC_ASSERT(eMaxType * MaxArrayCollapse < 256); >+ >+ if (JSVAL_IS_STRING(aVal)) { >+ jsval tempRoot = JSVAL_VOID; >+ EncodeString(xpc_qsAString(aCx, aVal, &tempRoot), aTypeOffset); Note that I fixed this in bug 709977. >+ return NS_OK; >+ } >+ >+ if (JSVAL_IS_INT(aVal)) { >+ EncodeNumber((PRFloat64)JSVAL_TO_INT(aVal), eFloat + aTypeOffset); s/PRFloat64/double/ >+ if (!JSVAL_IS_PRIMITIVE(aVal)) { >+ JSObject* obj = JSVAL_TO_OBJECT(aVal); >+ if (JS_IsArrayObject(aCx, obj)) { >+ return NS_OK; >+ } else if (JS_ObjectIsDate(aCx, obj)) { No else after a return. >+// static >+nsresult >+Key::DecodeJSVal(const unsigned char*& aPos, const unsigned char* aEnd, '*&'? >+ JSContext* aCx, PRUint8 aTypeOffset, jsval* aVal) >+{ >+ while (aPos < aEnd && *aPos - aTypeOffset != eTerminator) { >+ if (!JS_SetElement(aCx, array, index++, &val)) { Incrementing here is ugly, IMO. >+ } else if (*aPos - aTypeOffset == eDate) { >+ jsdouble msec = static_cast<jsdouble>(DecodeNumber(aPos, aEnd)); Just use double. This would look nicer if you returned at the end of each branch. >+Key::EncodeString(const nsAString& aString, PRUint8 aTypeOffset) >+{ >+ *(buffer++) = eString + aTypeOffset; This is also '*buffer++', but I appreciate the parens. >+ // Encode string >+ for (const PRUnichar* iter = start; iter < end; ++iter) { >+ } >+ else if (*iter <= TWO_BYTE_LIMIT) { Cuddle else-if. 
>+ *(buffer++) = (char)(c >> 16); >+ *(buffer++) = (char)(c >> 8); >+ *(buffer++) = (char)c; static_cast >+Key::DecodeString(const unsigned char*& aPos, const unsigned char* aEnd, >+ nsString& aString) >+{ Same here. >+Key::EncodeNumber(PRFloat64 aFloat, PRUint8 aType) double here too. >+PRFloat64 And here >--- a/dom/indexedDB/Key.h >+++ b/dom/indexedDB/Key.h >+ const unsigned char* BufferStart() const >+ { >+ return (const unsigned char*)mBuffer.BeginReading(); static_cast >---. > Clarify what this comment actually means, please. Yeah, the intent was always to write a longer comment. Jan asked me to attach before I spent the time doing that though. > >+ return NS_OK; > >+ } > >+ > >+ if (JSVAL_IS_INT(aVal)) { > >+ EncodeNumber((PRFloat64)JSVAL_TO_INT(aVal), eFloat + aTypeOffset); > > s/PRFloat64/double/ Is that guaranteed to be 64-bit? Please provide pointers. > >+// static > >+nsresult > >+Key::DecodeJSVal(const unsigned char*& aPos, const unsigned char* aEnd, > > '*&'? Yes. It's an in-out parameter. > >+ } else if (*aPos - aTypeOffset == eDate) { > >+ jsdouble msec = static_cast<jsdouble>(DecodeNumber(aPos, aEnd)); > > Just use double. > > This would look nicer if you returned at the end of each branch. Why? > >+Key::EncodeString(const nsAString& aString, PRUint8 aTypeOffset) > >+{ > >+ *(buffer++) = eString + aTypeOffset; > > This is also '*buffer++', but I appreciate the parens. Yeah, if precedence is ambiguous enough that I feel the need to look it up, I'd rather not rely on it and force others to look it up too. > >+ // Encode string > >+ for (const PRUnichar* iter = start; iter < end; ++iter) { > >+ } > >+ else if (*iter <= TWO_BYTE_LIMIT) { > > Cuddle else-if. I made things consistent with the other style instead. > >+ *(buffer++) = (char)(c >> 16); > >+ *(buffer++) = (char)(c >> 8); > >+ *(buffer++) = (char)c; > > static_cast I don't actually think that makes things more readable in bit-twiddling operations like this. > >---.
It's not an exported header, so there's no other option right now. I'll file a follow-up. If you're touching non-installed JS headers, you shouldn't land. Period. You can easily add some of the double bit twiddling into mfbt or nsMathUtils. We need this for the js_DateGetMsecSinceEpoch function. It does more than bit-twiddling unfortunately. The right fix is simply to move it into a public JS-API per discussion with jorendorff jsdate.h is still exported: Oooh, neato, I for some reason assumed it wasn't. Things compile fine with that -I removed. Ah, my mistake, we needed this for the JSDOUBLE_IS_NaN macro in jsnum.h. That is simple bit-twiddling which we can do elsewhere. Suggestions for where? Created attachment 582787 [details] [diff] [review] Latest interdiff. Goes on top of the others Created attachment 582788 [details] [diff] [review] Total changes I'm pretty sure we rely on the size of 'double' in a lot of places. How about mfbt/Double.h for the bit twiddling? Comment on attachment 582788 [details] [diff] [review] Total changes this looks really good, there are only some nits and most of them have already been fixed +#include "jsdate.h" +#include "nsContentUtils.h" +#include "Key.h" +#include "nsJSUtils.h" +#include "nsIStreamBufferAccess.h" +#include "xpcprivate.h" +#include "XPCQuickStubs.h" the rules for headers are: 1. IndexedDatabase.h 2. Interfaces 3. Other moz headers 4. Other indexeddb class headers 5. Forward declarations of other classes. and alphabetize them >+ if (JSVAL_IS_DOUBLE(aVal) && !DOUBLE_IS_NaN(JSVAL_TO_DOUBLE(aVal))) { This doesn't compile on Windows Jonas says, he already fixed it The Double.h can be done in a followup IMO >+static inline >+PRUint64 DoFlipBits(PRUint64 u) >+{ >+ return u & PR_UINT64(0x8000000000000000) ? >+ -u : >+ (u | PR_UINT64(0x8000000000000000)); >+} >+ >+static inline >+PRUint64 UndoFlipBits(PRUint64 u) >+{ >+ return u & PR_UINT64(0x8000000000000000) ?
>+ (u & ~PR_UINT64(0x8000000000000000)) : >+ -u; >+} >+ these operations can be done directly in EncodeNumber/DecodeNumber (per discussion on IRC) >+protected: >+ no new line needed >+ const unsigned char* BufferStart() const >- is(event.target.result, key, "correct returned key in " + test); >+ is(JSON.stringify(event.target.result), JSON.stringify(key), >+ "correct returned key in " + test); there's a function called isDeeply(), but as we discussed JSON.stringify() provides better debugging info r=janv Created attachment 582898 [details] [diff] [review] Total patch. Fixes review comments This is the whole shebang with Jan's review comments fixed Comment on attachment 582898 [details] [diff] [review] Total patch. Fixes review comments Review of attachment 582898 [details] [diff] [review]: ----------------------------------------------------------------- ::: dom/indexedDB/Key.cpp @@ +19,5 @@ > + * Portions created by the Initial Developer are Copyright (C) 2010 > + * > + * Contributor(s): > + * Ben Turner <bent.mozilla@gmail.com> Hey, I had nothing to do with this! @@ +158,5 @@ > + > + if (!JSVAL_IS_PRIMITIVE(aVal)) { > + JSObject* obj = JSVAL_TO_OBJECT(aVal); > + if (JS_IsArrayObject(aCx, obj)) { > + aTypeOffset += eMaxType; I really think you mean eArray here, right? @@ +179,5 @@ > + if (!JS_GetElement(aCx, obj, index, &val)) { > + return NS_ERROR_DOM_INDEXEDDB_UNKNOWN_ERR; > + } > + > + nsresult rv = EncodeJSVal(aCx, val, aTypeOffset); Hm... This is recursive and based on unsanitized input so someone could craft a key that will crash us. You need some recursion protection. @@ +221,5 @@ > + > + jsuint index = 0; > + while (aPos < aEnd && *aPos - aTypeOffset != eTerminator) { > + jsval val; > + nsresult rv = DecodeJSVal(aPos, aEnd, aCx, aTypeOffset, &val); Here too. ::: dom/indexedDB/Key.h @@ +72,3 @@ > "Don't compare unset keys!"); > > + return mBuffer.Equals(aOther.mBuffer); Just out of curiosity why didn't you do |Compare(a, b) == 0| to match all the others?
@@ +134,5 @@ > + { > + NS_ASSERTION(IsFloat(), "Why'd you call this?"); > + const unsigned char* pos = BufferStart(); > + double res = DecodeNumber(pos, BufferEnd()); > + NS_ASSERTION(pos >= BufferEnd(), "Should consume whole buffer"); Should be == right? @@ +184,5 @@ > + const unsigned char* pos = BufferStart(); > + nsresult rv = DecodeJSVal(pos, BufferEnd(), aCx, 0, aVal); > + NS_ENSURE_SUCCESS(rv, rv); > + > + NS_ASSERTION(pos >= BufferEnd(), == again @@ +237,3 @@ > } > > +protected: Er? We don't expect any subclasses, do we? I think we can leave this private. @@ +252,5 @@ > + eFloat = 1, > + eDate = 2, > + eString = 3, > + eArray = 4, > + eMaxType = eArray So I don't quite understand why you use eMaxType all over the place. Seems to me you really want eArray, and you just want to make sure that eArray is always greater than the other basic types. ::: dom/indexedDB/OpenDatabaseHelper.cpp @@ +1110,5 @@ > + if (type == mozIStorageStatement::VALUE_TYPE_INTEGER) { > + PRInt64 intKey; > + aArguments->GetInt64(0, &intKey); > + key.SetFromInteger(intKey); > + } else { No cuddling! And assert that type == VALUE_TYPE_TEXT? @@ +1111,5 @@ > + PRInt64 intKey; > + aArguments->GetInt64(0, &intKey); > + key.SetFromInteger(intKey); > + } else { > + nsAutoString stringKey; nsString here. @@ +1146,5 @@ > + "id INTEGER PRIMARY KEY, " > + "object_store_id, " > + "key_value, " > + "data, " > + "file_ids " Shit, how'd I miss this earlier? Since you're doing this, can you reorder the column so that |file_ids| comes before |data|? @@ +1259,5 @@ > + )); > + NS_ENSURE_SUCCESS(rv, rv); > + > + rv = aConnection->ExecuteSimpleSQL(NS_LITERAL_CSTRING( > + "INSERT OR IGNORE INTO index_data " At this point we don't need the OR IGNORE right? @@ . (In reply to ben turner [:bent] from comment #27) > @@ . triggers, indexes, etc drop with the table automatically Checked in! It missed the Firefox 11 release for a few hours. 
(In reply to Scoobidiver from comment #30) > It missed the Firefox 11 release for a few hours. are you sure ? It made 11. *** Bug 574801 has been marked as a duplicate of this bug. *** Added a sentence in: and a note to:
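As an aside, the sign-flip double encoding reviewed in this thread (DoFlipBits/UndoFlipBits) makes IEEE 754 doubles sort in numeric order when the key bytes are compared as unsigned integers. The sketch below is an illustrative Python translation of that idea, not Mozilla's shipped C++:

```python
import struct

SIGN = 1 << 63
MASK = (1 << 64) - 1

def flip_bits(x):
    """Map a double to a uint64 whose unsigned ordering matches numeric ordering.
    Negative doubles are negated in two's complement; non-negative doubles get
    the sign bit set, mirroring DoFlipBits in the patch."""
    (u,) = struct.unpack(">Q", struct.pack(">d", x))
    return (-u) & MASK if u & SIGN else (u | SIGN)

def unflip_bits(enc):
    """Inverse transform, mirroring UndoFlipBits."""
    u = (enc & ~SIGN) if enc & SIGN else (-enc) & MASK
    return struct.unpack(">d", struct.pack(">Q", u))[0]
```

Sorting keys by the flipped value then yields numeric order, which is what lets the stored key comparison stay a plain byte compare.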
https://bugzilla.mozilla.org/show_bug.cgi?id=692614
# Citymobil — a manual for improving availability amid business growth for startups. Part 3 ![](https://habrastorage.org/r/w1560/getpro/habr/post_images/182/22e/47e/18222e47e6ac4fcbf10398e11dde2e32.png) This is the next article of the series describing how we’re increasing our service availability in Citymobil (you can read the previous parts [here](https://habr.com/ru/company/mailru/blog/449034/) and [here](https://habr.com/ru/company/mailru/blog/449310/)). In further parts, I’ll talk about the accidents and outages in detail. But first let me highlight something I should’ve talked about in the first article but didn’t. I found out about it from my readers’ feedback. This article gives me a chance to fix this annoying shortcoming. 1. Prologue =========== One reader asked me a very fair question: «What’s so complicating about backend of the ride-hailing service?» That’s a good question. Last summer, I asked myself that very question before starting to work at Citymobil. I was thinking: «that’s just a taxi service with its three-button app». How hard could that be? It turned to be a very high-tech product. To clarify a bit what I’m talking about and what a huge technological thing it is, I’m going to tell you about a few product directions at Citymobil: * Pricing. Our pricing team deals with the problem of the best ride price at every point and at every moment of time. The price is determined by supply and demand balance prediction based on statistics and some other data. It’s all done by a complicated and constantly developing service based on machine learning. Also the pricing team deals with implementation of various payment methods, extra charges upon completing of a trip, chargebacks, billing, interaction with partners and drivers. * Orders dispatching. Which car completes the client’s order? For example, an option of choosing the closest vehicle isn’t the best one in terms of maximization of a number of trips. 
A better option is to match cars and clients so as to maximize the number of trips, considering the probability of this specific client cancelling the order under these specific circumstances (because the wait is too long) and the probability of this specific driver cancelling or sabotaging the order (e.g. because the distance is too big or the price is too small).
* Geo. Everything about address search and suggestions, pickup points, adjustments of estimated time of arrival (our map supply partners don't always provide us with accurate ETA information that accounts for traffic), improving the accuracy of direct and reverse geocoding, and improving the accuracy of the car arrival point. There's lots of data, lots of analytics, lots of machine-learning-based services.
* Antifraud. The difference in trip cost for a passenger and a driver (for instance, in short trips) creates an economic incentive for intruders trying to steal our money. Dealing with fraud is somewhat similar to dealing with mail spam — both precision and recall are very important. We need to block the maximum number of frauds (recall), but at the same time we can't take good users for frauds (precision).
* Driver incentives. This team oversees the development of everything that can increase the usage of our platform by drivers and the drivers' loyalty through different kinds of incentives. For example: complete X trips and get extra Y money, or buy a shift for Z and drive without commission.
* Driver app backend. List of orders, demand map (it shows a driver where to go to maximize her profits), status changes, the system of communication with the drivers and lots of other stuff.
* Client app backend (this is probably the most obvious part and what people usually call the «taxi backend»): order placement, information on order status, providing the movement of little cars on the map, tips backend, etc.

This is just the tip of the iceberg. There's much more functionality.
There’s a huge underwater part of the iceberg behind what seems to be a pretty simple interface. And now let’s go back to accidents. Six months of accidents history logging resulted in the following classification: * bad release: 500 internal server errors; * bad release: database overload; * unfortunate manual system operation interaction; * Easter eggs; * external reasons; * bad release: broken functionality. Below I’ll go in detail about the conclusions we’ve drawn regarding our most common accident types. 2. Bad release: 500 internal server errors ========================================== Our backend is mostly written in PHP — a weakly typed interpreted language. We’d release a code that crashed due to the error in class or function name. And that’s just one example when 500 error occurs. It can also be caused by logical error in the code; wrong branch was released; folder with the code was deleted by mistake; temporary artifacts needed for testing were left in the code; tables structure wasn’t altered according to the code; necessary cron scripts weren’t restarted or stopped. We were gradually addressing this issue in stages. The trips lost due to a bad release are obviously proportional to its in-production time. Therefore, we should do our best and make sure to minimize the bad release in-production time. Any change in the development process that reduce an average time of bad release operating time even by 1 second is good for business and must be implemented. Bad release and, in fact, any accident in production has two states that we named «a passive stage» and «an active stage». During the passive stage we aren’t aware of an accident yet. The active stage means that we already know. An accident starts in the passive stage; in time it goes into the active stage — that’s when we find out about it and start to address it: first we diagnose it and then — fix it. To reduce duration of any outage, we need to reduce duration of active and passive stages. 
The same goes for a bad release, since it's considered a kind of outage. We started analyzing our history of troubleshooting outages. Bad releases that we experienced when we just started to analyze the accidents caused an average of 20-25-minute downtimes (complete or partial). The passive stage would usually take 15 minutes, and the active one — 10 minutes.

During the passive stage we'd receive user complaints that were processed by our call center; after some specific threshold the call center would complain in a Slack chat. Sometimes one of our colleagues would complain about not being able to get a taxi. A colleague's complaint would signal a serious problem. After a bad release entered the active stage, we began the problem diagnostics, analyzing recent releases, various graphs and logs in order to find the cause of the accident. Upon determining the cause, we'd roll back if the bad release was the latest one, or we'd perform a new deployment with the offending commit reverted.

This is the bad release handling process we set out to improve.

Passive stage: 15 minutes.
Active stage: 10 minutes.

3. Passive stage reduction
==========================

First of all, we noticed that if a bad release was accompanied by 500 errors, we could tell that a problem had occurred even without users' complaints. Luckily, all 500 errors were logged in New Relic (one of the monitoring systems we use), and all we had to do was add SMS and IVR notifications when a specific number of 500 errors was exceeded. The threshold was continuously lowered as time went on.

The process in times of an accident would look like this:

1. An engineer deploys a release.
2. The release leads to an accident (massive amount of 500s).
3. A text message is received.
4. Engineers and devops start looking into it.
Sometimes not right away but in 2-3 minutes: the text message could be delayed, phone sounds might be off; and of course, the habit of immediate reaction upon receiving this text can't be formed overnight.
5. The accident's active stage begins and lasts the same 10 minutes as before.

As a result, the active stage of the «Bad release: 500 internal server errors» type of accident would begin 3 minutes after a release. Therefore, the passive stage was reduced from 15 minutes to 3.

Result:

Passive stage: 3 minutes.
Active stage: 10 minutes.

4. Further reduction of a passive stage
=======================================

Even though the passive stage had been reduced to 3 minutes, it still bothered us more than the active one: during the active stage we were at least doing something to fix the problem, while during the passive stage the service was totally or partially down and we were absolutely clueless.

To further reduce the passive stage, we decided to sacrifice 3 minutes of our engineers' time after each release. The idea was very simple: we'd deploy code and for three minutes afterwards look for 500 errors in New Relic, Sentry and Kibana. As soon as we saw an issue there, we'd assume it to be code-related and begin troubleshooting. We chose this three-minute period based on statistics: sometimes the issues appeared in the graphs within 1-2 minutes, but never later than 3 minutes.

This rule was added to the do's and don'ts. At first, it wasn't always followed, but over time our engineers got used to this rule like they did to basic hygiene: brushing one's teeth in the morning takes some time too, but it's still necessary.

As a result, the passive stage was reduced to 1 minute (the graphs still lagged sometimes). It also reduced the active stage as a nice bonus, because now an engineer would face the problem prepared and be ready to roll her code back right away.
Even though it didn’t always help, since the problem could’ve been caused by a release deployed simultaneously by somebody else. That said, the active stage in average reduced to five minutes. Result: Passive stage: 1 minutes. Active stage: 5 minutes. 5. Further reduction of an active stage ======================================= We got more or less satisfied with 1-minute passive stage and started thinking about how to further reduce an active stage. First of all we focused our attention on the history of outages (it happens to be a cornerstone in a building of our availability!) and found out that in most cases we don’t roll a release back right away since we don’t know which version we should go for: there are many parallel releases. To solve this problem we introduced the following rule (and wrote it down into the do’s and dont’s): right before a release one should notify everyone in a Slack chat about what you’re about to deploy and why; in case of an accident one should write: «Accident, don’t deploy!» We also started notifying those who don’t read the chat about the releases via SMS. This simple rule drastically lowered number of releases during an ongoing accident, decreases the duration of troubleshooting, and reduced the active stage from 5 minutes to 3. Result: Passive stage: 1 minutes. Active stage: 3 minutes. 6. Even bigger reduction of an active stage =========================================== Despite the fact that we posted warnings in the chat regarding all the releases and accidents, race conditions still sometimes occurred — someone posted about a release and another engineer was deploying at that very moment; or an accident occurred, we wrote about it in the chat but someone had just deployed her code. Such circumstances prolonged troubleshooting. In order to solve this issue, we implemented automatic ban on parallel releases. 
It was a very simple idea: for 5 minutes after every release, the CI/CD system forbids another deployment for anyone but the latest release author (so that she can roll back or deploy a hotfix if needed) and several well-experienced developers (in case of emergency). More than that, the CI/CD system prevents deployments during accidents (that is, from the moment the notification about the accident's beginning arrives until the notification about its ending).

So, our process started looking like this: an engineer deploys a release, monitors the graphs for three minutes, and after that no one can deploy anything for another two minutes. If a problem occurs, the engineer rolls the release back. This rule drastically simplified troubleshooting, and the total duration of the active and passive stages was reduced from 3+1=4 minutes to 1+1=2 minutes. But even a two-minute accident was too much. That's why we kept working on our process optimization.

Result:

Passive stage: 1 minute.
Active stage: 1 minute.

7. Automatic accident determination and rollback
================================================

We'd been thinking for a while about how to reduce the duration of accidents caused by bad releases. We even tried forcing ourselves to watch `tail -f error_log | grep 500`. But in the end, we opted for a drastic automatic solution.

In a nutshell, it's an automatic rollback. We set up a separate web server that received 10 times less load via the balancer than the rest of our web servers. Every release would be automatically deployed by the CI/CD system onto this separate server (we called it *preprod*, but despite its name it would receive real load from real users). Then a script would perform `tail -f error_log | grep 500`. If within one minute there was no 500 error, CI/CD would deploy the new release in production onto the other web servers. If there were errors, the system rolled it all back.
At the balancer level, all requests that resulted in 500 errors on preprod would be re-sent to one of the production web servers. This measure reduced the impact of 500-error releases to zero. That said, just in case of bugs in the automatic controls, we didn't abolish our three-minute graph watch rule.

That's all about bad releases and 500 errors. Let's move on to the next type of accidents.

Result:

Passive stage: 0 minutes.
Active stage: 0 minutes.

---

In further parts, I'm going to talk about other types of outages in Citymobil's experience and go into detail about every outage type; I'll also tell you about the conclusions we made about the outages, how we modified the development process, and what automation we introduced. Stay tuned!
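The preprod canary check from section 7 boils down to a tiny decision rule: watch the canary server's error log for a minute, then promote or roll back. Here is a hypothetical sketch of that rule; the log format and the verdict names are my assumptions, not Citymobil's actual CI/CD code:

```python
def http_500_count(log_lines):
    """Count access-log entries whose status field (second column here) is 500."""
    count = 0
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "500":
            count += 1
    return count

def canary_verdict(watch_window_lines, allowed_500s=0):
    """'promote' the release to the whole fleet if the watch window on the
    preprod server stayed clean, otherwise 'rollback'."""
    if http_500_count(watch_window_lines) <= allowed_500s:
        return "promote"
    return "rollback"

# One minute of (fake) preprod traffic: a single 500 is enough to roll back.
window = ["/api/order 200", "/api/order 500", "/api/eta 200"]
print(canary_verdict(window))  # -> rollback
```

The real system additionally re-sent the failed preprod requests to healthy production servers, so the canary's errors were invisible to users.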
https://habr.com/ru/post/449708/
I'm trying to create a simple threaded application whereby I have a method which does some long processing and a widget that displays a loading bar and cancel button. My problem is that no matter how I implement the threading it doesn't actually thread - the UI is locked up once the thread kicks in. I've read every tutorial and post about this and I'm now resorting to asking the community to try and solve my problem as I'm at a loss!

Initially I tried subclassing QThread until the internet said this was wrong. I then attempted the moveToThread approach but it made zero difference.

Initialization code:

PythonThread class (apparently QThreads are bugged in PyQt and don't start unless you do this):

    class PythonThread(QtCore.QThread):
        def __init__(self, parent=None):
            QtCore.QThread.__init__(self, parent)

        def start(self):
            QtCore.QThread.start(self)

        def run(self):
            QtCore.QThread.run(self)

LoadThread class:

    class LoadThread(QtCore.QObject):
        results = QtCore.Signal(tuple)

        def __init__(self, arg):
            # Init QObject
            super(QtCore.QObject, self).__init__()
            # Store the argument
            self.arg = arg

        def load(self):
            #
            # Some heavy lifting is done
            #
            loaded = True
            errors = []
            # Emits the results
            self.results.emit((loaded, errors))

Any help is greatly appreciated! Thanks. Ben.

The problem was with the SQL library I was using (a custom in-house solution) which turned out not to be thread-safe and thus performed blocking queries. If you are having a similar problem, first try removing the SQL calls and seeing if it still blocks. If that solves the blocking issue, try reintroducing your queries using raw SQL via MySQLdb (or the equivalent for the type of DB you're using). This will diagnose whether or not the problem is with your choice of SQL library.
However, a QThread's start() function executes its run() method in the thread after the thread is initialized so a subclass of QThread should be created and its run method should run LoadThread.load, the function you want to execute. Don't inherit from PythonThread, there's no need for that. The QThread subclass's start() method should be used to start the thread. PS: Since in this case the subclass of QThread's run() method only calls LoadThread.load(), the run() method could be simply set to LoadThread.load: class MyThread(QtCore.QThread): run = LoadThread.load # x = y in the class block sets the class's x variable to y An example: import time from PyQt4 import QtCore, QtGui import sys application = QtGui.QApplication(sys.argv) class LoadThread (QtCore.QObject): results = QtCore.pyqtSignal(tuple) def __init__ (self, arg): # Init QObject super(QtCore.QObject, self).__init__() # Store the argument self.arg = arg def load(self): # # Some heavy lifting is done # time.sleep(5) loaded = True errors = [] # Emits the results self.results.emit((loaded, errors)) l = LoadThread("test") class MyThread(QtCore.QThread): run = l.load thread = MyThread() button = QtGui.QPushButton("Do 5 virtual push-ups") button.clicked.connect(thread.start) button.show() l.results.connect(lambda:button.setText("Phew! Push ups done")) application.exec_()
http://www.dlxedu.com/askdetail/3/0723cd2549b0c8577b706b6439883805.html
consume a synchronous RESTful service. In the example, the target URL is set dynamically by using variables.

Scenario

We would like to determine geographic coordinates such as latitude and longitude based on a given address. This conversion is called geocoding. Here, we use Google's Geocoding API in the following format:<given_address>&sensor=false

The given address is set dynamically, the output format is JSON, and setting the sensor parameter to false indicates that my application does not use a sensor to determine the location. The address data type of my outbound interface simply contains street, city, country, region, and zip code. The particular address elements are used to dynamically set the address in the URL of the RESTful service call.

In the SAP Process Integration Designer perspective of the NetWeaver Developer Studio (NWDS), I have defined an Integration Flow with a SOAP sender channel and a REST receiver adapter, i.e., we expose the RESTful service as a SOAP web service. The format of the incoming request message is XML. The response from the Geocoding API is JSON, which is converted to XML and passed back to the SOAP sender. In the following I will focus on the configuration of the receiver adapter of type REST.

Configuring the REST receiver channel

On the Integration Flow, double-click on the receiver channel and switch to the REST URL tab below the Adapter-Specific settings. Enter the URL Pattern as follows, using variables for street, city, country, and the sensor:{street_par}+{city_par}+{country_par}&sensor={boolean}

The address variables street_par, city_par, and country_par are replaced by the respective values from the request XML message. For each address part, I use an XPath expression to parse and read the respective value from the XML message. The boolean variable is replaced by the static value false.

Switch to the REST Operation tab. Here, I have set the HTTP operation of my RESTful service to GET.
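With the operation set to GET, the channel's variable replacement effectively builds a plain URL and fetches it. As a rough Python illustration of how the placeholders resolve: the host shown below is assumed to be Google's standard geocode endpoint (the pattern in the text omits it), and a plain dict stands in for the XPath lookups against the request XML:

```python
# Illustrative only: mimics the REST channel's {placeholder} substitution.
URL_PATTERN = ("http://maps.googleapis.com/maps/api/geocode/json"
               "?address={street_par}+{city_par}+{country_par}&sensor={boolean}")

def resolve_pattern(pattern, variables):
    """Replace each {name} placeholder with its value, like Pattern Variable Replacement."""
    url = pattern
    for name, value in variables.items():
        url = url.replace("{" + name + "}", value)
    return url

# Values the adapter would pull from the request XML via XPath (sample data):
fields = {"street_par": "Dietmar-Hopp-Allee+16",
          "city_par": "Walldorf",
          "country_par": "Germany",
          "boolean": "false"}
print(resolve_pattern(URL_PATTERN, fields))
```

The resolved URL is then requested with an HTTP GET, and the JSON response is what the Data Format settings below convert to XML.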
Finally, we need to define the format of the messages of the RESTful service. Switch to the Data Format tab. Though I have maintained the format of the request message, the settings for the input message are actually superfluous since we do not provide any payload anyway. All information of the request is provided in the URL, as seen above. However, entering the request format in the channel doesn't harm. The format of the response is expected to be JSON, so I choose JSON as the data format. Furthermore, I need to convert the JSON to XML and hence select the Convert to XML check box.

Running the scenario

For testing the scenario, I used soapUI. In the figure below you see that I have entered an address as a request in XML format. The response contains the geographic coordinates of the given address.

I hope this blog was helpful to understand the consumption of RESTful services using the SAP PI REST adapter. If you would like to learn more, check out the other blogs in the series, accessible from the main blog PI REST Adapter – Blog Overview.

Hi Ivo, can you please clarify my query? In the Advantco REST adapter, if we want to provide the username and password for the URL, we have "Enable Custom Request HTTP Headers". But in the PO REST adapter, where do we configure these details? I have tried using "Pattern Variable Replacement", but when executing, these details are not considered and it displays a "401 unauthorised" error. Please let me know if you have any inputs for my query. Thanks, Leela

Hi Leela, let me reply to you on behalf of Ivo since I guess that he went on xmas break already. Support for custom request HTTP headers in the REST receiver channel is planned to be shipped with the next SP, i.e., 7.4 SP10, around March 2015; see also a rough roadmap in my blog. Alex

Hi Alex, I want to add two lines into the HTTP headers as you see below. I understand from what you said that there will be no possibility of it until March 2015. Am I right?
Thanks.

Hi, right, custom HTTP headers support is planned for 7.31 SP15, which is planned to be shipped in March this year. The sample that you have shown in your screenshot should be supported. Alex

Hi Alex, I have a requirement to set Cross-Origin HTTP headers in the REST adapter. These have to be returned to the calling web app. Is this supported too with the coming SP? Thanks, Jan

CORS support together with anonymous access will be shipped with NW731 SP17. Regards, ivo

Hi, I developed a scenario based on this blog, but I am getting the below error: This is the response JSON format: Based on it I had created the Response Data type. Please help me out in creating the Data type for the response, if the structure is wrong.

You can just create a response MD with one field (simple type, string). Then there is no need to maintain the OP, and all of the response payload will be presented.

Found a solution for your response issue Nani G? We are facing the same issue.

Hello Nani G, have you found a solution for your problem? I get the same error while processing the response message. Alexander Bundschuh Ivo Kulms

Are there any possibilities to view more processing steps? We cannot view the payload of the response message. Best regards

Hi Volha, the response is displayed as a separate entry in the list of messages in the message monitor, with the sender of the response equal to the receiver of the request message and the receiver equal to the sender of the request message. Alex

Hello Volha, did you find the solution for your problem? We are also facing the same. Any help please

Hi Nani, your challenge is that Google's result comes without any namespaces inside the XML. (You can see it in the soapUI screenshot above, or send the WS request directly via a browser.) The DT of your result message is manually created, so it uses namespaces by default. In consequence your mapping does not find the root element (as it expects it with NS) => error "cannot create target element". Solution: use an external definition for your result message. We managed it that way.
"Dirty" way: use XSLT to add the missing NS. BR Björn

Hi all, could you provide all the design objects? I have some doubts about the response objects (mappings, types…) Thanks in advance

Hello, I am trying to use the above scenario and it fails when the REST Adapter makes the call to the Google API. The error we get is: HTTP GET call to not successful. HTTP/1.0 400 Bad Request. The reason for the error is that the REST Adapter seems to turn the "&" into "& amp ;" [without spaces] in the HTTP URL. Is this a bug in the SP2 version that we are on? Regards, Bhavesh

My bad, I had made some mistakes in the way I had passed the actual request. Missed adding a comma between the Address values. Thanks!

Hi Ivo, I am planning to fetch documents from the portal, is it possible with the REST Adapter?

I never tried this myself and I am not familiar with the REST API, so I cannot really say. Have you tried it yourself and run into issues? Regards, Ivo

Thanks for the post, very useful. I developed an interface following this post and it works fine! One issue I have in a new interface: in the REST URL I need to construct a URL pattern that has more than 10 variables. When I use the "pattern variable replacement" I have only 10 entries… How can I add more variables…? Thanks, Regards, Martin

Hi Martin, when you enable the last variable, a table will become visible where you can enter more variables. Please refer to the documentation for the specific values to put in there for the various settings. Regards, ivo

Ivo, thanks for the answer. I see the table at the end of the ten variables but I have a problem with filling it in. I followed the help documentation at Configuring the Receiver REST Adapter – Advanced Adapter Engine – SAP Library, but unsuccessfully.
My URL is like this: http:…./SAP_WebDrone/ConsultasCAE?CUIT={CUIT}&CAI={CAI}&DD={DD}&MM={MM}&AA={AA}&T_COMP={T_COMP}&PV_TA={PV_TA}&PV_NOR={PV_NOR}&DOC_T={DOC_T}&DOC_NRO={DOC_NRO}&IMPORTE={IMPORTE}

For the first ten variables I put, for example (first variable), under Pattern variable replacement:
– Value source: "XPath Expression"
– Pattern element name: CUIT
– XPath expression: //CUIT
and it works OK.

For the 11th variable I fill the table Additional Pattern Elements:
– Variable: IMPORTE
– Type: XPath
– Expression: //IMPORTE (but error); I also tried /ConsultasAFIP/IMPORTE (but error).

The error is: "Returning to application. Exception: com.sap.engine.interfaces.messaging.api.exception.MessagingException: com.sap.aii.adapter.rest.ejb.receiver.PlaceholderMissingException: URL placeholder IMPORTE is not configured, or has an empty value". It is as if it does not recognise the variable. Any suggestions?
Thanks, Regards, Martin

I am afraid all you can do is sit and wait 😀 We have recently fixed this issue and the fix will be shipped as part of the (weekly) patch cycle (refer to SAP Note 2186319 – it is not released yet).
Sorry for the inconvenience, Ivo

OK, thanks 😥 hehe
Regards, Martin

Hi everyone, thank you so much, all the blogs are very interesting and helpful. Currently I am facing some issues with a receiver REST adapter in SAP PI 7.4 SP10 using OAuth authentication with SAML Bearer Assertion. The web service I want to consume is a web service of the Salesforce company. I have already been reading documentation about Salesforce web services as well as the REST adapter, but I have not been able to get it working. Do you have any sample scenario for that?

Hi Ivo Kulms, can you please tell me how to create the inbound interface for the receiver REST adapter? Do we get any XSD or WSDL for the REST API? Or do we create the inbound interface with the same structure that we have used on the SOAP sender side?
Thanks, Indrajit

Hi, thank you so much for the post. I am trying to use the above scenario and I have problems with GET parameters. I configured a receiver REST channel with this URL pattern: http://<domain_name>/api/transactions_logs?start_date={start_date_par}&end_date={end_date_par}&paginate={boolean} and I configured all the elements start_date_par, end_date_par and boolean. It seems like the PI call only uses the URL http://<domain_name>/api/transactions_logs without any parameter. When I test the URL from the Chrome REST client, the target application gets a perfect hit. I tried hardcoding all the variables, putting in the URL pattern the value http://<domain_name>/api/transactions_logs?start_date=2015-09-10T09:00:00&end_date=2015-09-10T09:00:00&paginate=false, with the same result. Can you help me? Can you tell me if there is any monitor where I can check the URL and message used by PI? Any help will be appreciated.

Hi, could you advise on "REST receiver adapter: extracting synchronous response header data"? I tried to find this on SCN, but found no path forward to proceed.
Regards, Ashu

Hi Ivo/Alex, I am working on a SOAP (XI) to REST synchronous scenario and am facing an issue with JSON to XML conversion while executing it. PI is sending the request to the REST URL in JSON format. For multiple records the PI channel generates JSON in the correct format, but when we send a request with a single record the PI channel does not generate the "[" (square bracket) for an array field (Items) as required by the JSON format.

Below is the required JSON format:
{ "A": "abc", "B": "def", "C": TRUE, "Items": [ { "T": "123ASD", "c": false } ] }

Below is the JSON generated by PI:
{ "A": "abc", "B": "def", "C": TRUE, "Items": { "T": "123ASD", "c": false } }

Any suggestion on this issue?
Thanks & Regards, Nida Fatima

Hi Nida, this will be supported with the next SP shipment, 7.31 SP17 / 7.4 SP13, planned to be shipped in November.
Alex

Hi, I am facing the following challenge while configuring the receiver REST adapter.
The API I am trying to call requires a Content-MD5 checksum in the HTTP header. The Content-MD5 is not part of the URL, so it is not possible to set the parameter via adapter-specific attributes.
Thanks & Regards, Volha

How did you create the data type of the response of Google? I am missing this step.

Hi Umberto, I haven't done any mapping of the response, so there is no need to define a data type; otherwise you simply have to create the data type in the ESR based on a sample XML response.
Alex

Hi Alexander, I know the steps of how you created the scenario, but how did you manage not to use a mapping for the response?

Hi Umberto, this post is nearly a year old. If you have a question, please post a discussion in the SAP Process Orchestration forum, and add a link to the original blog.
Rgds, Jocelyn, an SCN Moderator

I am getting the same issue, can you please help me with this?
com.sap.engine.interfaces.messaging.api.exception.MessagingException: com.sap.engine.interfaces.messaging.api.exception.MessagingException: com.sap.aii.adapter.rest.ejb.common.exception.HttpCallException: HTTP GET call to Avenue+New York+USA&sensor=false not successful. HTTP/1.0 400 Bad Request at com.sap.aii.adapter.soap.web.SOAPHandler.processSOAPtoXMB(SOAPHandler.java:772) at com.sap.aii.adapter.soap.web.MessageServlet.doPost(MessageServlet.java:530)
This is my channel configuration. Thanks a lot for your help.

Hello Team, thank you very much for this wonderful blog. I have a query from my side: I am getting the error given below:
Avenue+New York+USA&sensor=false not successful. HTTP/1.0 400 Bad Request
The URL works successfully in a web browser, but here I get the reply HTTP/1.0 400 Bad Request. I am using the below XML structure to send the request:
<?xml version="1.0" encoding="UTF-8"?>
<ns0:address xmlns:ns0="XYZ">
<street/>
<city/>
<country/>
<region/>
<zip/>
</ns0:address>
Please suggest if any of the above details are wrong, and is it necessary to upgrade the patch level?
Regards, Avinash
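On the Content-MD5 requirement raised a few comments above: the header value is conventionally the Base64-encoded MD5 digest of the request body (RFC 1864). How to inject such a header into the REST receiver channel is a separate question, but the computation itself is trivial — a minimal Python sketch for illustration (the sample body is made up):

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    """Base64-encoded MD5 digest of the request body, per RFC 1864."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

# Hypothetical request body, just to show the shape of the header value:
body = b'{"A": "abc"}'
print("Content-MD5:", content_md5(body))
```

The equivalent logic could live in a Java mapping or adapter module if the channel itself cannot set the header.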
Hi Avinash, please check the space between "5th" and "Avenue". For convenience, web browsers replace the invalid space automatically.
Best Regards, Ivo

Hi Kulms, I have tried it but still no luck, I am getting the same error.
Regards, Avinash

Hello Avinash, please see this, I had the same issue: "Issue Consuming synchronous RESTful service".

I have configured a scenario with a REST receiver for consuming a REST service. My URL looks like this: …{para1}&para2={para2}. The para1 and para2 are in my request payload. The problem is that these parameters are optional; it is possible that they have no value. And the REST service allows calls like this: …&para2=&value2. But I always get an error when I try to call the service via the REST receiver with an empty parameter: "MP: exception caught with cause com.sap.aii.adapter.rest.ejb.receiver.PlaceholderMissingException: URL placeholder para1 is not configured, or has an empty value". In my real scenario I have more than 2 parameters and one or more of them are always empty. Is there a solution for this problem? Otherwise I can't use this for my purpose, or I have to find a workaround (combine all parameters in one).

Hmmm… I'm not quite there yet, but I will require optional parameters as well in my use case. Have you resolved this issue?

Hi Alex, I was trying to achieve the same thing as Gil, but in the end figured out that it's impossible to leave any parameter value empty. Even when I combined all parameters into one in my Java mapping program, the "&" symbol was encoded by PO, and the result was an error.

Dear All, I am new to SAP PI, but I developed this scenario. During testing from SOAP UI I am getting a connection timeout error with message "". The following error message appears in message monitoring.

Hello, regarding the first image: do you know where I can get the latest version of NWDS with the SAP PI perspective inside? I have been working with SAP PI 7.3 for the last few years. Is the scenario configuration the same, differing only by the adapter type in the ES Builder?
Kindly regards, Arthur Silva
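Several problems in this thread (the 400 Bad Request on "5th Avenue", the "&" getting re-encoded) come down to URL encoding: a browser silently percent-encodes spaces and other unsafe characters before sending, while a raw HTTP client such as the adapter sends the request line exactly as built. A short Python sketch of what correct encoding looks like (the base URL is the Google geocode endpoint used in the blog scenario, shown here only for illustration):

```python
from urllib.parse import urlencode

# A space inside "5th Avenue" makes a raw GET request line invalid
# (HTTP/1.0 400 Bad Request); encoding the query fixes it.
params = {"address": "5th Avenue,New York,USA", "sensor": "false"}
query = urlencode(params)
print(query)  # address=5th+Avenue%2CNew+York%2CUSA&sensor=false

url = "http://maps.googleapis.com/maps/api/geocode/xml?" + query
assert " " not in url  # every unsafe character is now encoded
```

Note that the "&" joining the parameters must stay a literal "&" in the final URL — only the parameter values get encoded. Re-encoding it as "&amp;" (an XML entity, not a URL escape) is exactly the bug discussed above.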
https://blogs.sap.com/2014/12/18/pi-rest-adapter-consuming-synchronous-restful-service/