text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Custom Image Filter851713 Mar 30, 2011 8:16 PM Hello, This content has been marked as final. Show 13 replies 1. Re: Custom Image Filter851329 Mar 31, 2011 3:32 PM (in response to 851713)Olek,1 person found this helpful You are on the right track. The shift operators will let you get at the alpha and color values in a four-byte integer. Unlike a language like C, which would let you treat an array of integers as an array of bytes (or even records) if you wanted to, Java tends to keep you committed to treating integers like integers. In your image array, this means you will usually pack 8-bit alpha, red, green, and blue values, each in the range from 0 to 255, into single 32-bit values. The shift operators are how you move the eight bits of one value into the right place in the 32-bit integer. So, if you have four values, a, r, g, and b, you need to pack them into an integer like this: So, your alpha value is the highest eight bits in the integer, your red value is next highest eight bits, green is the next highest, and blue is the lowest eight bits. To move any eight-bit value into the right place, you shift it by that number of bits. To combine the results of multiple shifts, you use the logical OR operation: +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ |31|30|29|28|27|26|25|24|23|22|21|20|19|18|17|16|15|14|13|12|11|10| 9| 8| 7| 6| 5| 4| 3| 2| 1| 0| +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ | a | r | g | b | +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ Notice I have used integers, not bytes, for the a, r, g, and b variables. That's because there is no unsigned byte data type in Java, and Java would sign-extend bytes with values "greater" than or equal to 0x80 (since those are actually negative numbers when stored in bytes). Thus: int a, r, g, b; ... int i = (a << 24) | (r << 16) | (g << 8) | b; results in i being 0xFFFFFF80. In my example above, with all the bit-shifts and ORs, if b were 0x80, Java's sign-extension would treat it as 0xFFFFFF80, and that would again have all the bits to the left of the blue value set to 1. byte b = 0x80; int i = b; Now, to get the a, r, g, and b values back out of their packed format, you shift again, but the other direction: This time note that I have used the AND operator after shifting, to mask off the bits I don't want. a = (i >> 24) & 0xFF; r = (i >> 16) & 0xFF; g = (i >> 8) & 0xFF; b = i & 0xFF; Now, this may all seem clunky and awkward, especially if you are used to a language like C that allows pointers that can point to the same memory locations, but treat them as different data types. Indeed, I'd concede that it is clunky and awkward, but Java2D seems to work best with packed integers (even though its Image class does support other formats; but they play heck with your code in other ways). It is somewhat comforting, perhaps, to know that, if you dig into the routines in Java2D that provide access to individual colors, you find out that all this shifting and ORing and ANDing is exactly how the wizards at Sun do it, so it must be right. ;) To round this out, here's an SSCCE you can try: For more on the "bitwise" operators (as <<, >>, &, and | are sometimes called), see Chapter 3 of "Core Java," Volume I, 8'th edition. public class S { public static void main(String[] args) { int a, r, g, b; int i; a = 0xF0; r = 0xE0; g = 0xC0; b = 0x80; i = (a << 24) | (r << 16) | (g << 8) | b; System.out.printf("0x%08x\n", i); a = (i >> 24) & 0xFF; r = (i >> 16) & 0xFF; g = (i >> 8) & 0xFF; b = i & 0xFF; System.out.printf("0x%02x, 0x%02x, 0x%02x, 0x%02x\n", a, r, g, b); } } 2. Re: Custom Image Filtermorgalr Mar 31, 2011 4:28 PM (in response to 851713)Using the shift is prefered due to speed, but you can also do "AND" and normal division to do the same thing:1 person found this helpful in an ARGB BufferedImage a getPixel will return 1 byte for the Alpha, 1 byte for the Red, one byte for the Green, and one byte for the blue. To set the Alpha to 0, you do: To get Blue value: int myPixel = MyARGB.getRBB(myX, myY); myPixel = myPixel & 0x00FFFFFF; //hex value for binary mask to zero alpha and preserver RGB values MyARGB.setRGB(myX, myY, myPixel); To get Green value: int myPixel = MyARGB.getRBB(myX, myY); int myBlue = myPixel & 0x000000FF; //mask off only Blue--least significant byte; To get Red value: int myPixel = MyARGB.getRBB(myX, myY); int myGreen = myPixel & 0x0000FF00; //mask off only Green--notice mask change; myGreen = myBlue/0x00000100; //devide by 256 (2^8) to shift the Green value right 1 byte--same as shifting 8 times Even if you don't use this, it may help you see what is happening in the shift operator. int myPixel = MyARGB.getRBB(myX, myY); int myRed = myPixel & 0x00FF0000; //mask off only Red--notice mask change; myRed = myRed/0x00010000; //devide by 256*256 (65536 or 2^16) to shift the Red value right 2 bytes--same as shifting 16 times You can also retrieve color codes this way: int myPixel = MyARGB.getRBB(myX, myY); Color myColor = new Color(myPixel); int myRED = myColor.getRed(); int myGreen = myColor.getGreen(); int myBlue = myColor.getBlue(); 3. Re: Custom Image Filter851713 Mar 31, 2011 5:08 PM (in response to morgalr)Hi again, Thank you very much Stevens Miller for your selfmade tutorial. Didn't expect this here ;) I hope that now I know the basics to write this filter based on the packing/unpacking using the shif operators. Thank you too morgalr! I have one more question: Can you describe the behavior of those hex-styled values like : 0x00FFFFFF If you use 00 you ignore this part - ok If you use FF you get the "complete" value of this part? What happend if you use intermediate values? best wishes, Olek 4. Re: Custom Image Filter851329 Mar 31, 2011 5:17 PM (in response to morgalr)morgalr, those get/set methods are probably the best way to become familiar with image work under Java/Java2D, though you're also right about the speed. My rough tests indicate that "/ 256" takes about twelve times as long as ">> 8" takes. (Which surprises me, since 25 years ago, Unix experts were bragging to me that the stock C compiler recognized "/ 256" and compiled it into a shift, for optimization. I'd have expected javac to do it too, but apparently not. Indeed, maybe those experts were wrong.) Once the packing concepts are mastered, I'd expect the overhead in calling the get/set methods wouldn't make it worth it to use them. In fact, IIRC, it's those methods I looked at when I found that even the Java2D source uses shifts and masks, so you're ultimately just calling routines to do the same thing one line of code would do. But I think it's a good idea to show them to anyone who is new to working with raster image data. They are much easier to understand than the bit-twiddling is. 5. Re: Custom Image Filter851713 Mar 31, 2011 6:07 PM (in response to 851713)Re, Slowly I get it I guess ;) FF == this color 100% opaque or color value == 255 - isn't it? 00 == this color 0% opaque or clolor value == 0 I very often use CSS or Adobe Photoshop where I met those 0xXXXXXX values but as most people( again I guess ), are more familiar with RGB(A) then hex-colors. Greetings, Olek Edited by: 848710 on Mar 31, 2011 11:06 AM 6. Re: Custom Image Filtermorgalr Mar 31, 2011 6:41 PM (in response to 851713)Olek,1 person found this helpful The idea of binary masking just goes back to binary "AND" and "OR" bit manipulation: 1 and anything is anything: 1 and 1 is 1, 1 and 0 is 0 (general: 1 & X = X where X can be 0 or 1). 0 and anything is 0: 0 and 1 is 0, 0 and 0 is 0 (general: 0 & X = 0 where X can be 0 or 1). So when you devise a mask, you have to think in binary: 8 bits, xxxxxxxx, and you can easliy represent that in Hexadecimal format or Hex. Hex is a base 16 number system and as such can represent number for each place holder from 0 to 15. This gives a problem with our decimal based system, 0 to 9, so we need something to represent 10, 11, 12, 13, 14, and 15 when using Hex. This problem was solved by extending the digits by adding the first 6 letters of the alphabet--A to F. You end up with a number system that has valid digits of 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. Each digit of a Hex number can be represented in 1/2 byte or 4 bits xxxx. If you desire to "set" a bit, you use OR and a 1 value in that bit location. if you want to "reset" (clear) a bit you use a 0 in that bit location. if you want to leave it alone you use a 1 with an AND operation //not covered here. if you want to leave it alone you use a 0 with an OR operation //not covered here. for a simple example, lets look at your problem with an ARGB number: Let's say you get 428,1541,137 as your pixel value. I don't know about you, but I cannot invision what part of that is assigned to each part of the ARGB image pixel, so if we convert it to hex it becomes 0xFF332211 where the "0x" denotes that you are using a Hex based number. The FF or bit pattern 11111111 (decimal 255) controls our Alpha characteristics. 33 or bit pattern 00110011 (decimal 51) controls Red. 22 or bit pattern 00100010 (decimal 34) controls Green. And 11 bit pattern 00010001 (decimal 17) controls Blue. To get our original number back we have to multiply the various numbers by thier weights according to placement--just like in decimal, but using Hex. 255*256^3 (alpha) + 51 * 256^2 (Red) + 34 * 256^1 (Green) + 17 * 256^0 (Blue). I'll leave you to do the math. When you do a shift in Java or other language, you move the bit patter over to the left or right a known number of bits, this is what happens if you do integer division by 2. Notice to shift 8 places would be divide by 2 a total of 8 times or 2^8 or decimal 256 or the 5 bit bit patterm 10000 or hex 0x0100. To clear or reset the bits in the alpha component you need to use zeros to clear with an AND operation (0 and anything is 0), but you want to leave the other bits intact so you have to use a 1 with the AND operation (1 and anything is your original "anything"). So you get: 0xFF332211 & 0x00FFFFFF which will give you 0x00332211 //all alpha set to 0 and intact color values. Even simpler example 3 AND 5 = ? look at the bit patterns 00000011 (3) 00000101 (5) When you and those you get 00000001 (decimal 1). 0 AND 0 is 0 0 AND 0 is 0 0 AND 0 is 0 0 AND 0 is 0 0 AND 0 is 0 0 AND 1 is 0 1 AND 0 is 0 1 AND 1 is 1 I hope this helps you in your visualization. 7. Re: Custom Image Filter851329 Mar 31, 2011 6:48 PM (in response to 851713)Olek,1 person found this helpful The hex values in my sample program serve two different purposes. In this part they are the actual values for alpha, red, green, and blue. I always use hex for that because it is easy to figure out how to add another 25% here (0x40) or another 12.5% there (0x20). I chose those four values because they were distinct, easy to recognize after being packed and then unpacked, and all were guaranteed to sign-extend if treated as bytes. You are quite correct that hex values like these get used in other settings, like CSS, to specify colors. Because each pair of hex digits represents a distinct byte, it's a very simple way to specify a color in a single hex number. In my example, that number would just be 0xE0C080 (or, if you included the alpha channel, 0xF0E0C080). a = 0xF0; r = 0xE0; g = 0xC0; b = 0x80; Now, when it is necessary to strip off (people often say "mask") the bits you don't want from a packed four-byte integer, the hex numbers are serving a different purpose. When you use the AND operator (the "&" character), it actually operates at the level of the individual binary digits (the "bits") to produce a result. Suppose we want to get just the green value from the packed integer in my example. The integer is 0xF0E0C080. In binary, that's this: To get the green part, you need to shift these bits eight places to the right, with ">> 8." That gives you this result: 11110000111000001100000010000000 A number of interesting features appear in that result. First, notice that the eight bits on the right end of the original number are gone. They get lost as a result of the shift operation. The right-most 24 bits of the result are exactly the same as the original left-most 24 bits of the number before the shift operation. That's literally what "shift" does: it moves the bits. But, most interestingly of all (perhaps) are the left-most eight bits of the result. Notice they are all "1" bits. That's because the right-shift operator (the ">>") sign-extends its result. The effect is that a negative number produces a negative result. In Java, this is always how the right-shift operator works. In other languages, including C, it's not always specified. Some C compilers will sign-extend right-shifted negative numbers, and some won't. It's up to the compiler writers to decide. In Java, it's not up to the compiler writers. The ">>" always sign-extends. Whether or not right-shifts should extend signs has been known to keep some programmers arguing for hours and hours. These types of debates are called "religious" arguments, which usually means they are not worth having. As I am a Unitarian Universalist, and my church has taken no position on sign-extension in right-shifts, I don't care one way or the other. To avoid needing to know, however, I always just use a mask to get only the bits I want, which I will explain in a second. (But, you should know, Java includes an unsigned right-shift operator, ">>>," so you have final say in your own code as to whether there will be sign-extension or not.) All of this comes from the fact that Java, C, and most languages represent signed integers with "two's-complement" arithmetic, which is a kind of numerical magic you don't want to have learn much about if you don't have to. 11111111111100001110000011000000 Now, we wanted the green byte, right? It started out packed into our four-byte integer as the ninth, tenth, eleventh, twelfth, thirteenth, fourteenth, fifteenth, and sixteenth bits, starting with the first bit as the one on the right. By shifting our integer eight bits to the right, our green byte is now in the first, second, third, fourth, fifth, sixth, seventh, and eighth bits. (Beware that most people number bits the same way that you use an array index, by calling the first one "bit zero," the second one "bit one," and so on). In my code, I masked off the bits I didn't want, and kept the bits I did want, with one of those hex values: 0xFF. The operation was this: To understand what that does, you need again to see it in binary. The "i >> 8" part results in the 32 bits I wrote out above. The "0xFF" part is another 32-bit number, the first 24 of which are all "0," and the last eight of which are all "1" (Java conveniently treats hard-coded constants like integers, so 0xFF is treated like a 32-bit number, the same as if we had written "0x000000FF"). So, in binary, i >> 8 and 0xFF look like this: g = (i >> 8) & 0xFF; Finally, the AND operator works on the matching bits in the two values above. That is, the left-most bit of each is matched with the left-most bit of the other, the second left-most bit is matched with the other's second left-most bit, and so on. For each matched pair of bits, if one of the bits is "1" and the other bit is "1," the matching bit in the result is "1." In any other case, the result is "0." So, "(i >> 8) & 0xFF" results in this: 11111111111100001110000011000000 00000000000000000000000011111111 In hex, that's "0xC0," which is our green value, which is what we wanted. 00000000000000000000000011000000 So, in some cases, my hex values were providing actual numbers, to set the values of alpha and the color channels. In other cases, they are serving as masks, to get rid of the bits I don't want and leave behind only the ones I do want, after shifting, to unpack those alpha and color values when I have them in packed form. Hope that helps. 8. Re: Custom Image Filter851713 Mar 31, 2011 7:14 PM (in response to 851329)Re, Thanks again both for your outstanding tutorials. I am familiar with bitwise logic since I am a college student ( bioinformatics ). So shame on me ;) Ok taking into account, that this special color-management was not part of basic lectures. Now I wrote the filter. It works but not that nice as I suggested. The scenario is the following : I got an image where I paint some stuff on it. Next I use a bright pass filter ( from the book filthy rich clients ) on this image. After this each pixel of the resultig pixel is full opaque. What I want is that I overlay this image over an other. With my current filter the result looks not very smooth. The trick is maybe how to setup the alpha value of thos pixels who are fairly dark. Here is my code The results are fairly different for some thresholds. private void filterImage( int[] pixels, int width, int height ) { // color values for alpha, red, green and blue int a = 0; int r = 0; int g = 0; int b = 0; int index = 0; for ( int y = 0; y < height; y++ ) { for ( int x = 0; x < width; x++ ) { int pixel = pixels[index]; // unpack the pixel's components a = ( pixel >> 24 ) & 0xFF; r = ( pixel >> 16 ) & 0xFF; g = ( pixel >> 8 ) & 0xFF; b = pixel & 0xFF; int maxVal = Math.max( Math.max( r, g), b ); // check if the max value of r/g/b is below the threshold if ( maxVal < threshold ) { // set the alpha value to the max value a = Integer.parseInt( Integer.toHexString( maxVal ), 16 ); // pack the four color values into one pixel int translucentPixel = ( a << 24 ) | ( r << 16 ) | ( g << 8 ) | b; // and set the new value pixels[index] = translucentPixel; } //System.out.println( "Red : " + r + " Green " + g + " Blue " + b ); index++; } } } Again, thanks a lot, Olek 9. Re: Custom Image Filter851329 Mar 31, 2011 7:37 PM (in response to 851713)Hmmm... In your original post here, you said you wanted to set the alpha value to zero if the RGB values were below a threshold. Here, though, you're setting alpha to the maximum value of the RGB channels. By the way, this line1 person found this helpful is the same as this line a = Integer.parseInt( Integer.toHexString( maxVal ), 16 ); But, if you really want to set alpha to zero, use this, of course: a = maxVal; Or, more simply, just don't shift anything into those bits when you pack your ARGB integer, like this: a = 0;: int translucentPixel = ( r << 16 ) | ( g << 8 ) | b;? 10. Re: Custom Image Filter851713 Mar 31, 2011 8:23 PM (in response to 851329)Hi agian, Stevens Miller wrote:Thank you - I am still not familiar with hex-values. Hmmm... In your original post here, you said you wanted to set the alpha value to zero if the RGB values were below a threshold. Here, though, you're setting alpha to the maximum value of the RGB channels. By the way, this line is the same as this line a = Integer.parseInt( Integer.toHexString( maxVal ), 16 ); a = maxVal; But, if you really want to set alpha to zero, use this, of course:True, but I realized, that this looks not smooth, so I try to play with the alpha value. a = 0; Or, more simply, just don't shift anything into those bits when you pack your ARGB integer, like this:Ok, thats nice too. int translucentPixel = ( r << 16 ) | ( g << 8 ) | b;:Yes of course.? What I got after the bright pass filter is an image with, lets say some text on it. The text is soft glowing around the edges of the text. Therefore I first paint the brightpassed image with the text, then I draw on top of this image the same text without the filter. Now it looks nice. But I want to add some nice looking background. Here is the point where I want to change the opacity of those pixel which are nearly black, but maybe this is not enough. Maybe I got to change for all pixel of the merged image ( bright pass filtered + normal image ) the alpha value to get a nice - smooth - looking result if I paint the merged image on top of an other image. Thanks again for your helpfull explainations. Best wishes, Olek Edited by: Olek_ on 31.03.2011 13:22 11. Re: Custom Image Filter851329 Mar 31, 2011 8:51 PM (in response to 851713)Olek, Okay, thanks to your last description, I think I know what you're trying to accomplish, graphically speaking. And I think you're running into a conceptual problem that affects a lot of people when they try to composite two or more images. It may be that you're confusing the glowing effect you're seeing around your text with a mask (not the same kind of mask I wrote about in Java). I'd suggest you try my scaled alpha trick first and see if that gets you closer, but I think you might want to hold off writing more code for now and play with a good, versatile imaging tool like the Gnu Image Manipulator Program. That's something of an Adobe Photoshop work-alike, but it's free. (Of course, if you have or want to buy Photoshop, that's a good one too). You can get it at. I'd suggest working with that for a while and getting fully familiar with image compositing. I've done a lot of this and I can tell you it's not obvious at all, at first, how you get the blending you're looking for. Once you get it figured out, it will seem very straightforward, but it takes one of those "a-ha!" moments that go with things you study for a long time before they're clear, like infinite series with finite sums and special relativity. Just keep at it and you'll get it. But, seriously, I recommend working more with the graphic side for now, and return to writing code when you can define precisely for yourself what your code has to do. Right now, I think you're laboring to master two sets of concepts (one being graphics, the other being Java programming) when you could be working on just one of them. Best of luck! 12. Re: Custom Image Filtermorgalr Mar 31, 2011 9:57 PM (in response to 851329)Olek, Generally speaking what you want to do now is mask your image. Choose everything that is not your Text and Effect and turn it transparent. You can then write that over another ARGB image. You'll get your background image with your text laying over the top and no other interaction between the 2 images. 13. Re: Custom Image Filter851713 Apr 1, 2011 1:37 PM (in response to 851329)Hello Mr. Steven Millers, Thanks again for your great help! your approach using Looks much better and nearly in the way I want to. a = (255 * maxVal) / threshold; I know GIMP and I use it on my Linux-distribution too. What I not want is to reproduce the functionality of Photoshop or GIMP. I am far beyond this scope. I only want to create some nice looking gui using everything Java2D can provide. The reason I want to compute this images by Java instead of paint some nice images in Photoshop/GIMP is, that I can't scale such "finished" images. But if I create them by myself I can. On the other hand it is a nice and exciting work for me. I use the SwingX-library too, which provides nice BlendComposites also described in the great book "Filthy rich clients" by Chet Hase and Romain Guy. If you got more suggestions let me hear! Maybe I can use AlphaComposite too for those kinds of problem? @morgalr : Yes it is a mask-thing. Now it works nice for the moment. Thank you too. Greetings, Olek
https://community.oracle.com/message/9489621
CC-MAIN-2016-44
refinedweb
4,345
76.15
Powerful Hacks With ESXi vim-cmd Command, Together With Shell Commands If you have read my previous article on the vim-cmd, you may have realized how handy it is, especially when it comes to manage virtual machines. There is however a pretty challenging problem to use it – for most commands for a virtual machine, it requires vmid which is an integer that uniquely identifies the virtual machine in the context of an ESXi server. It’s like primary key in SQL database to locate a record (virtual machine instance) in a table (virtual machine type). For people who are familiar with vSphere APIs, the vmid is the same as the value of ManagedObjectReference value of a virtual machine in ESXi. Because most administrators who use commands are not necessarily familiar with vSphere API, it doesn’t help much. Because we mostly refer a virtual machine with its name or even IP address, it’s not easy to get a virtual machine’s vmid. What we can do is to list all the virtual machines with the following command and search for a virtual machine’s name and write the vmid down: From the list, I can search and find the vmid for my virtual machine so that I can take other actions on it, for example, powering it on/off. Since ESXi shell support other Linux commands, it’s natural to consider grep a virtual machine, just to save some eye search as follows: With the above command, I don’t need to scroll down pages, which is a big plus, but not enough if you want to further automate it. Then, we can use cut command to get the vmid immediately: (Don’t try to get other items, say f 2 for the name. If you wan to get it, use sed to consolidate multiple spaces to one first). With the vmid in hand, we can now run almost all the command in the vmsvc namespace. Don’t try to pipeline directly as the vim-cmd command does not take standard input as its parameter. Here is where xargs command comes to help. The following commmand pass the vmid to the next vim-cmd command to suspend that virtual machine. Another interesting usage is to get a virtual machine’s IP address with the vim-cmd command. If you don’t have VMware Tools installed, there is no way for you to do it via the command. If you run the related command, you will get something like the following: But if you have a virtual machine with VMware Tools installed, you may have the ipAddress set. Then, you can get the IP as follows: With the above command, you get the IP address of a named virtual machine in one line. When you have the IP address of a virtual machine, you can do a lot of fun stuff, for example, ssh to it. The point I want to make here is that vim-cmd is powerful, and even more so if combined with other shell commands in ESXi. What I just showed is limited use case. You can also list all the virtual machines on a particular datastore, get all the vmdk files used by a particular virtual machine, and so on. If you have interesting use cases and/or scripts, please feel free to share them in the comments. Update: Got a tweet from William who mentioned his writing on vimsh which is very similar to vim-cmd. You may find many useful samples there too. you mean xargs -n1 vim-cmd vmsvc/power.suspend I think the other one could bail out, which may (or may not) be undesirable. Hi Bish, the -n1 option makes it cleaner. Thanks for the tip! Steve Hi Steve.. Is there a way to get the snapshot ID based on the Snapshot name for this output |-ROOT –Snapshot Name : 6.5.4.01 –Snapshot Id : 19 –Snapshot Desciption : –Snapshot Created On : 2/9/2016 4:12:44 –Snapshot State : powered off –|-CHILD —-Snapshot Name : 6.5.4.02 —-Snapshot Id : 20 —-Snapshot Desciption : Testing —-Snapshot Created On : 2/9/2016 23:48:29 —-Snapshot State : powered off Thanks Rajeev I think you use grep and pipe the out to cut for all the snapshot IDs. Good luck. Steve Best post ever, seriously. Who needs vCenter? 😉 Thanks Fred, right on! You can also try our vSearch product as an alternative: -Steve Thank you, I’ve just been looking for information about this subject for a while and yours is the greatest I’ve came upon till now. However, what in regards to the conclusion? Are you certain concerning the source? Good article! We are linking to this particularly great post on our site. Keep up the good writing. I read this article fully about the resemblance of hottest and earlier technologies, it’s remarkable article. This fo course all goes for a BIG BALL OF CHALK when you have a VM with a NOT/ANNOTATION .. do you have a work-around for this as it’s quite difficult to get a script working properly when there’s a not/annotation especially when using column and row values
http://www.doublecloud.org/2013/12/powerful-hacks-with-esxi-vim-cmd-command-together-with-shell-commands/
CC-MAIN-2020-24
refinedweb
867
69.41
class in UnityEngine.SocialPlatforms.GameCenter / Implemented in:UnityEngine.GameCenterModule Implements interfaces:ISocialPlatform iOS GameCenter implementation for network services. An application bundle ID must be registered on iTunes Connect before it can access GameCenter. This ID must be properly set in the iOS player properties in Unity. When debugging you can use the GameCenter sandbox (a text displaying this is shown when logging on). You must log on in the application to get into sandbox mode, logging on in the GameCenter application will always use the production version. When using the GameCenterPlatform class in C# you need to include the UnityEngine.SocialPlatforms.GameCenter namespace. Some things to be aware of when using the generic API: Authenticate() If the user is not logged in, a standard GameKit UI is shown where they can log on or create a new user. It is recommended this is done as early as possible. Achievement descriptions and Leaderboards The achivements descriptions and leaderboard configurations can be configured in the iTunes Connect portal. Achievements get unique identifiers and the leaderboards use category names as identifiers. GameCenter Sandbox Development applications use the GameCenter Sandbox. This is a seperate GameCenter than the live one, nothing is shared between them. It is recommended that you create a seperate user for testing with the GameCenter Sandbox, you should not use your real Apple ID for this. You using the sandbox and you will be logged on to the real one. If the application has not been submitted to Apple already then this will probably result in an error. To fix this all that needs to be done is to delete the app and redeploy with Xcode. To make another Apple ID a friend of a sandbox user it needs to be a sandbox user as well. If you start getting errors when accessing GameCenter stating that the application is not recognized you'll need to delete the application completely and re-deploy. Make sure you are not logged on when starting the newly installed application again.
https://docs.unity3d.com/ScriptReference/SocialPlatforms.GameCenter.GameCenterPlatform.html
CC-MAIN-2022-27
refinedweb
335
55.74
mean ok it help to distingwish between two elements with the same name but why do we have a URL attached to NAMESPACE? It would be very helpful if some one could provide more details. Thanks that is an unique identifier that tells us exactly what namespace this is As the for of the URI, often a URL syntax is choosen, because the owner of a domain can assume noone else will pick his domain as part of a URI It gives you more garantuee that your namespace is unique <foo:bar xmlns:foo="foo-namespace"/ is valid as well, here I have chosen "foo-namespace" as the URI for my namespace, note that the namespace binding is important (the URI), not the prefix. A prefix has no meaning in XML <foo:bar xmlns:foo="foo-namespace"/ <xsl:bar xmlns:xsl="foo-namespace"/ <xsd:bar xmlns:xsd="foo-namespace"/ <html:bar xmlns:html="foo-namespace" <bar xmlns="foo-namespace"/> all mean the same element in the same namespace (XML wise they are exactly the same) Note that I have used some confusing prefixes, just to show you that prefix only has a meaning in combination with a binding You are speaking to a crowd... you raise your right hand when you are speaking English, and raise your left hand when you're speaking French. This signaling lets the audience know when you are switching bewteen languages. If the crowd is mostly English speaking, you might be able to perform a shortcut and only use your left hand when speaking French (since it is assumed that the rest of the conversation is in English). That's what namespaces are all about... they allow you to tell the XML parser when you're switching between a markup language that it knows about to another markup language that it might not know about. And just like the silly example above, you can have a default language and only use the namespaces when the "foriegn" language appears. One of the main purposes for namespaces is to help the XML parser to validate the document (like performing a spelling and grammer check). As you can imagine, the XML parser really needs to know the language in order to valide the spelling/grammer (since the rules for English are lot different that French). As for you the coder... namespaces are not that exciting. They are not something that you'd normally use to accomplish a task (unless you're writing the spelling/grammer check logic). But, you still would have to use namespaces to keep your document from failing validation.
https://www.experts-exchange.com/questions/27512912/XML-name-space.html
CC-MAIN-2018-30
refinedweb
434
62.72
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4) #include <rtl.h> BOOL icmp_ping ( U8* remip, /* IP address of the remote host. */ void (*cbfunc)(U8 event) ); /* Function to call when the ping session ends. */ The icmp_ping function starts the ICMP ping process on the TCPnet system. This causes the ICMP client to send the ICMP Echo request to remote ip address and waits for an ICMP Echo reply. The argument remip points to an array of 4 bytes containing the dotted-decimal notation of the IP address of the remote host to ping to. The argument cbfunc points to a function that the ICMP client running on TCPnet calls when the ping session ends. The cbfunc is an event callback function that uses the event argument of the cbfunc function to signal one of the following ICMP events: The icmp_ping function is in the RL-TCPnet library. The prototype is defined in rtl.h. The icmp_ping function returns __TRUE if the ICMP client has been successfully started. Otherwise it returns __FALSE. static void ping_cback (U8 event); void ping_host (void) { U8 hostip[4] = {192,168,0,100}; if (icmp_ping (&hostip[0], ping_cback) == __TRUE) { printf("Ping started.\n"); } else { printf("Ping not started, ICMP not ready or bad parameters.\n"); } } static void ping_cback (U8 event) { switch (event) { case ICMP_EVT_SUCCESS: printf ("Remote host responded to ping.\n"); break; case ICMP_EVT_TIMEOUT: /* Timeout, try again. */ printf ("Ping timeout, no.
https://www.keil.com/support/man/docs/rlarm/rlarm_icmp_ping.htm
CC-MAIN-2020-34
refinedweb
237
74.49
Closed Bug 746703 Opened 10 years ago Closed 10 years ago ICS: Menu Button is displayed twice Categories (Firefox for Android Graveyard :: General, defect) Tracking (firefox14 verified, firefox15 verified, blocking-fennec1.0 +) Firefox 15 People (Reporter: padamczyk, Assigned: mbrubeck) References Details Attachments (3 files, 4 obsolete files) Using a HTC EVO X (ICS) it appears that device does not recognized that we have a menu button within the app, so it draws another one. See screenshot. *The HTC has physical buttons, unlike the Nexus Galaxy. We should discuss if this is bad/common enough to block. tracking-fennec: --- → ? Would be nice to know what the device market size is too. So this is will happen all the HTC ONE devices, they will likely be fairly popular as the last HTC device was Sprint's #1 selling phone so I hear. More background: It appears all apps that have 2.3 support show this issue (facebook, google docs, flickr). The newer 4.0 apps don't seem to exhibit this issue such as Chrome Beta, ICS native apps or Instagram. I suspect this can be fixed by changing targetSdkVersion to 14 in our AndroidManifest.xml. Assignee: nobody → mbrubeck Hardware: x86 → All Could someone with an affected device please verify that the bug is fixed in this test build? Attachment #616300 - Flags: review?(blassey.bugs) explains, .)" notes some other effects of setting targetSdkVersion to 14 or higher: * Hardware acceleration is enabled by default. * JNI uses indirect references to avoid GC-related bugs. * Apps that don't specify a theme will use the device's default theme. Just loaded your apk. FIXED!!!!! Any possible side effects from targeting SDK 14? (In reply to Aaron Train [:aaronmt] from comment #9) > Any possible side effects from targeting SDK 14? See comment 7 above for known side effects. I don't think any of these *should* cause any user-visible changes, but we'd definitely need to watch for regressions from this patch, and be prepared to back it out if there are bad regressions that couldn't be fixed in time for the first release. tracking-fennec: ? → --- blocking-fennec1.0: --- → ? blocking-fennec1.0: ? → + Status: NEW → ASSIGNED Target Milestone: --- → Firefox 14 Status: ASSIGNED → RESOLVED Closed: 10 years ago Resolution: --- → FIXED Backed out for causing bug 747681: Status: RESOLVED → REOPENED Resolution: FIXED → --- Aaron points out in bug 747681 that this change hardware-accelerated our window by default. We probably don't handle that too well. More info: I am rather against this landing as-is again. Too many moving parts! It should be the first thing to land in a regular train model, IMO. (In reply to Joe Drew (:JOEDREW!) from comment #13) > Backed out for causing bug 747681: > Backout merged to m-c in time for 14: I think we should consider *not* blocking on this because the only available fix is high-risk. (See comment 10 and comment 14 for details.) If we can fix this in Firefox 15 instead, it could still be released before buttony ICS phones are too widespread. blocking-fennec1.0: + → ? let's try bumping the SDK level to 14 and turning off accelerated surfaces. Either way, we need this fixed sooner rather than later so it can be well tested. blocking-fennec1.0: ? → + There's a test build at Can someone with an affected phone verify that this fixes the menu button, while not causing crashes on rotate (bug 747681)? Attachment #616300 - Attachment is obsolete: true Attachment #617975 - Flags: review?(blassey.bugs) Forgot hg qref. Attachment #617975 - Attachment is obsolete: true Attachment #617975 - Flags: review?(blassey.bugs) Attachment #617989 - Flags: review?(blassey.bugs) The build above in comment #18 specifying hardwareAccelerated with false is not valid overrode with the forced declaration of targetSDK 14 as it reintroduces bug 747681. Comment on attachment 617989 [details] [diff] [review] patch v2 (In reply to Aaron Train [:aaronmt] from comment #20) > The build above in comment #18 specifying hardwareAccelerated with false is > not valid overrode with the forced declaration of targetSDK 14 as it > reintroduces bug 747681. The targetSdk attribute is not supposed to override an explicit hardwareAccelerated="false" attribute. According to the docs, "If necessary, you can manually disable hardware acceleration with the hardwareAccelerated attribute for individual <activity> elements or the <application> element." - I think my patch was wrong. Preparing a new build and patch. Attachment #617989 - Attachment is obsolete: true Attachment #617989 - Flags: review?(blassey.bugs) The previous patch was missing an "android:" namespace prefix. Oops! New test build at Comment on attachment 618711 [details] [diff] [review] patch v3 Bug 747681 is apparently not caused by hardware acceleration. If I just add hardwareAccelerated="true" to the manifest, there are no problems. So it is caused by some other targetSdkVersion="14" change -- maybe the JNI changes. The activity was restarting because of the new "screenSize" configuration change introduced in SDK version 13 and later. This fixes the regression. Attachment #618711 - Attachment is obsolete: true Attachment #618786 - Flags: review?(blassey.bugs) Built with this patch. Rotation seems to be fixed. Though I'm crashing when using CTP Flash. Build at (In reply to Kevin Brosnan [:kbrosnan] from comment #25) > Built with this patch. Rotation seems to be fixed. Though I'm crashing when > using CTP Flash. Build at > The suspicion was that the flash issues were caused by hardware accel. Can we try the current patch plus disabling hardware accel? Here's an additional patch that disables HW acceleration, and a build with both patches: Attachment #619009 - Flags: review?(blassey.bugs) Target Milestone: Firefox 14 → --- qawanted: assigning to kevin to take a look. fennec-746703-e.apk is still crashing on the Galaxy Nexus when playing Flash. My Android 2.3 phone does not crash when playing Flash. I/GeckoApp(15116): Got message: DOMContentLoaded I/GeckoTab(15116): Updated title: Unbelievably Delayed Reaction - YouTube for tab with id: 1 D/GeckoFavicons(15116): Creating LoadFaviconTask with URL = and favicon URL = D/GeckoFavicons(15116): Calling loadFavicon() with URL = and favicon URL = (4) I/GeckoApp(15116): URI -, title - Unbelievably Delayed Reaction - YouTube D/GeckoFavicons(15116): Favicon URL is now: D/GeckoFavicons(15116): Calling getFaviconUrlForPageUrl() for D/GeckoFavicons(15116): Downloading favicon for URL = with favicon URL = I/Gecko (15116): Compositor: Composite took 50 ms. D/GeckoFavicons(15116): Downloaded favicon successfully for URL = D/GeckoFavicons(15116): Saving favicon on browser database for URL = I/GeckoAppShell(15116): createSurface E/dalvikvm(15116): JNI ERROR (app bug): accessed stale local reference 0x2590007d (index 31 in a table of size 30) E/dalvikvm(15116): VM aborting F/libc (15116): Fatal signal 11 (SIGSEGV) at 0xdeadd00d (code=1) D/GeckoFavicons(15116): Saving favicon URL for URL = D/GeckoFavicons(15116): Calling setFaviconUrlForPageUrl() for D/GeckoFavicons(15116): LoadFaviconTask finished for URL = (4) I/GeckoApp(15116): Favicon successfully loaded for URL = I/GeckoApp(15116): Favicon is for current URL = I/GeckoTab(15116): Updated favicon for tab with id: 1 I/ActivityManager( 192): Process org.mozilla.fennec_mbrubeck (pid 15116) has died. W/ActivityManager( 192): Force removing ActivityRecord{41b668d8 org.mozilla.fennec_mbrubeck/.App}: app died, no saved state I/WindowManager( 192): WIN DEATH: Window{41aa0e90 SurfaceView paused=false} I/WindowManager( 192): WIN DEATH: Window{41909ca0 org.mozilla.fennec_mbrubeck/org.mozilla.fennec_mbrubeck.App paused=false} D/dalvikvm( 373): GC_CONCURRENT freed 2897K, 29% free 17013K/23687K, paused 2ms+3ms I/DEBUG (12519): debuggerd committing suicide to free the zombie! D/Zygote ( 118): Process 15116 exited cleanly (11) W/InputManagerService( 192): Got RemoteException sending setActive(false) notification to pid 15116 uid 10033 I/DEBUG (15190): debuggerd: Apr 13 2012 21:29:36 D/dalvikvm( 373): GC_FOR_ALLOC freed 311K, 27% free 17426K/23687K, paused 30ms D/dalvikvm( 373): GC_FOR_ALLOC freed 1359K, 28% free 17245K/23687K, paused 24ms I/dalvikvm-heap( 373): Grow heap (frag case) to 18.469MB for 1644560-byte allocation It looks like the flash crash is due to the JNI changes mentioned in comment 18. This may be a latent bug in our code that we can fix. (I hope it's not a bug in the Flash plugin's code...) Status: REOPENED → ASSIGNED that jni bug is fuxed in bug 749750 Do we want to do one more build to test this or are we all set? Now that I'm back home I was able to test this locally and verified that bug 749750 fixes the Flash crash. Pushed to the targetSdkVersion change to inbound: I'll wait until this has made it into Nightly before requesting Aurora approval. There's also a new test build here if anyone wants to start testing early: status-firefox14: --- → affected Target Milestone: --- → Firefox 15 Comment on attachment 619009 [details] [diff] [review] part 2: disable hardware acceleration? Not checking in part 2 (disable hardware acceleration) because it did not end up having any effect on the regressions we found. Attachment #619009 - Flags: checkin- Status: ASSIGNED → RESOLVED Closed: 10 years ago → 10 years ago Resolution: --- → FIXED Comment on attachment 618786 [details] [diff] [review] patch v4 [Approval Request Comment] User impact if declined: Incorrect main UI layout on many ICS devices. Testing completed (on m-c, etc.): Landed on m-c 5/1 Risk to taking this patch (and alternatives if risky): Android-only blocker fix. This patch is somewhat risky since it has various effects on how Android configures our app. (See discussion above for details.) Two regressions were already found and fixed. However, the only alternative as far as I know is to leave this bug unfixed. String changes made by this patch: None. Attachment #618786 - Flags: approval-mozilla-aurora? Comment on attachment 618786 [details] [diff] [review] patch v4 [Triage Comment] While a bit risky, this is mobile only and blocks Fennec 1.0. Approved for Aurora 14. Attachment #618786 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+ There is a single Menu Button on the latest Nightly and Aurora builds. Closing bug as verified fixed on: Firefox 15.0a1 (2012-05-29) Firefox 14.0a2 (2012-05-29) Device: Galaxy Nexus OS: Android 4.0.2 status-firefox15: --- → verified Status: RESOLVED → VERIFIED Product: Firefox for Android → Firefox for Android Graveyard
https://bugzilla.mozilla.org/show_bug.cgi?id=746703
CC-MAIN-2022-33
refinedweb
1,667
55.54
11 July 2011 06:08 [Source: ICIS news] SINGAPORE (ICIS)--Japan’s Mitsui Chemicals will provide the manufacturing technology for Indian producer ONGC Petro-Additions Ltd’s (OPaL) new 340,000 tonne/year high density polyethylene (HDPE) unit at Dahej in India, the company said on Monday. OPaL will begin commercial operations at the HDPE unit in 2014, Mitsui Chemicals said in a statement. The engineering, procurement, construction and commissioning work of the HDPE plant will be managed by ?xml:namespace> Financial details of the project were not disclosed. OPaL is a joint venture that was incorporated in 2006 by Indian energy major, Oil and Natural Gas Commission (ONGC), which holds a 26% stake, and Gujarat State Petroleum Corp (GSPC), which has a 5% share. Other shareholders of the company include GAIL India, which holds a 19% stake. For more on high.
http://www.icis.com/Articles/2011/07/11/9476328/japans-mitsui-chemicals-provides-hdpe-technology-to-opal.html
CC-MAIN-2013-20
refinedweb
142
52.09
#include <previewimageprovider.h> Defines a result image. Instances that have been returned (c.f. TakeOutput) are expected to be immutable to allow safe shared ownership of the pixel data. The image size in width and height. Its product has to match the size of @_samplesFloat32Linear, i.e. _imageSize.x * _imageSize.y = _samplesFloat32Linear->GetCount(). The pixel data of the result preview image in linear color space. Pixels are defined as (r, g, b, a) values in row-major order. It is expected to give away ownership to the underlying data, avoiding any read or write access to it after return. Defines whether this preview result is final for the current state of the subject within the the node graph. A final state can then be used for cache revival later on and may be waited for by blocking preview request calls, e.g. coming from the HW Renderer or from viewport with an enabled "Animate Preview" setting for the respective node material.
https://developers.maxon.net/docs/Cinema4DCPPSDK/html/structmaxon_1_1nodes_1_1_preview_image_provider_output_image.html
CC-MAIN-2022-27
refinedweb
161
57.37
Can a namespace or expression be used in the queue contact email field? Currently, we have been creating an exchange distribution list with the service team members and using this email address in the queue contact email field for notifications. This requires us to mange members for both an exchange DL and the newScale service team. It would be much easier to just manage one. I would like to know if it is possible to use namespace or an expression to have the email addresses of the service team members for the service team associated with a queue populate the queue email contact field. Thanks in advance. Brian Hi Brian, Unfortunately, the email field for a queue will accept only actual email addresses. Your current method -- maintaining a distribution list in your mail application -- is the standard method for emailing all users who work in a particular queue. You can use a list of addresses in the email contact field, separted by commas. So, just as you would with a distribution list, changes in the service team would require changes in two places (with the Dist List change the Service Team OU and DL, with my option change the ST OU and remove the address from the email contact list for that queue). The advantage I see in my solution is that it is fully contained within the application, not relying on updating the distribution list, which is in a different application, and I don't have rights to in our environment, therefore someone else needs to make that change. Neither solution is ideal. There should email contacts set up by default for everyone in a service team. In my humble opinion, having to create that queue manually, then create the email contact manually, and mantain that list manually is a flaw in the application.
https://supportforums.cisco.com/t5/intelligent-automation/email-contact-for-queues/td-p/1827260
CC-MAIN-2018-13
refinedweb
305
57.91
Is the future of JavaScript ECMAScript 4? - | - - - - - - Read later Reading List The discussion on the future of ECMAScript has been quite lively lately. Brendan Eich kicked off a flurry of posts about ECMAScript 4 and if that is the right path. ECMAScript 4 is the next version of the standard which is implemented as JavaScript and JScript. With the publication of an overview of ECMAScript 4 Eich, the creator of JavaScript, has pushed forward the question of how will we get JavaScript to ECMAScript 4. While work on ECMAScript 4 is progressing, there are many who are unhappy with the specification, arguing that it is too much, too fast, and fails to address some of the critical issues of the language today. After publishing the overview, Eich beat up on Microsoft for their lack of participation in the debate. That sparked a response from the JScript team at Microsoft, who are consolidating a list of all known divergences of JScript from the specification, as well as the generally accepted behavior of JavaScript. Microsoft believes that ECMAScript 4 is too big of a jump and Chris Wilson, Platform Architect for IE detailed his personal thoughts as well. Douglas Crockford, a well respected JavaScript expert at Yahoo!, has reservations as well:. Ajaxian has compiled several posts on the subject and even Dave Thomas has written about version 4:. Keep up with the future of JavaScript here Unfairly pigeonholed by Paul Fremantle If this was a new language like Ruby or Groovy, these changes would probably be seen as too little! JavaScript spearheaded many of the dynamic features that are now considered cool in the latest batch of languages, but has long been seen as just a toy language only good enough for hacking around in browsers. Holding its development back would be unfair and pointless --- its perfectly easy to define which version of the language a .js is targeted against. Can we just get a VM already? by Dan Tines Is EMCA Script 4 Necessary Right Now? by Peter Wagener My question is this: do we need ES 4 right now? Can't we wait until the browsers and supportive technologies catch up to ES 3 before barreling ahead with a whole lot of new language features that would disrupt the momentum that JavaScript has (finally) got going for it? It seems to me that a lot of the technologies surrounding JavaScript need some soak time -- not another dose of frills around the language basics. The Dojo Toolkit just shipped their 1.0 release last week; Prototype and the libraries built on top of it are mature and stable; Google Gears holds a lot of promise for moving the browsers towards a full-featured development platform; The browsers themselves are (ever so slowly) moving closer to actually meeting the EMCA 3 specification. The smart thing would be to see how these technologies are adopted by developers in the real world first, and use those observations to help form the next version of the language. I am all for a lot of the proposed changes (namespaces and standardized OO syntax in particular), but the current 'extra' technologies already provide what I need right now, without having to wait for browser support. I figure the ES4 will be 'approved' at some point next year, and no one will care. The browsers will continue their march towards ES3. Developers will be curious about ES4, but mostly for academic reasons. JavaScript on big programs by Diego Fernandez research.sun.com/techrep/2007/abstract-168.html Sheds some light in the current limitations of JavaScript... I think that is really on topic with the JavaScript/ECMAScript debate. It is the time! by Kaveh Shahbazian 2 - the power of JavaScript lies in it's functional aspects(Closures, First Class Functions) and very handy features like prototype-based inheritance(It is as practical as duck-typing in Ruby). So you are going to replace good features with bulky-stupid-coding-fashion of C#, Java and other pain-in-the-neck things. Now people are just starting to what JavaScript is! And I tell you it is a real ACCEPTABLE-LISP! Tell me; where you CAN define a beautiful DSL like JQuery? 3 - Please forget about EcmaScript 4; And try to make more useful things like FireFox! Cheers!
https://www.infoq.com/news/2007/11/ecmascript-4
CC-MAIN-2017-43
refinedweb
718
69.82
by Michael S. Kaplan, published on 2006/04/14 16:05 -04:00, original URI: Starting in the 2.0 version of the .NET Framework, Microsoft has had the UmAlQuraCalendar Class, which is a more secular version of the HijriCalendar class in that it is based on a table-based algorithm, and does not support the "advance date" functionality I have talked about in the past. While still somewhat controversial with religious authorities who object to a calendar that is not based on the moon sighting (just as they object to technology to help with the sighting in some cases), it is to others a reasonable compromise for the need to have a more predictable calendar in the secular world. Which is not to say there are not some limitations. Try the following code: using System; using System.Globalization; public class Test { public static void Main() { DateTime dtLater = new DateTime(2030, 1, 1); UmAlQuraCalendar uaqc = new UmAlQuraCalendar(); DateTime dtMuchLater = uaqc.AddMonths(dtLater, 1); } } The clever among you might object that this code does nothing with the date it has calculated, so what is its's point? Well, try it out and compile it. Once you run the small application, here is what it will say: Unhandled Exception: System.ArgumentOutOfRangeException: Specified time is not supported in this calendar. It should be between 04/30/1900 00:00:00 (Gregorian date) and 05/13/2029 23:59:59 (Gregorian date), inclusive. Parameter name: time at System.Globalization.UmAlQuraCalendar.CheckTicksRange(Int64 ticks) at System.Globalization.UmAlQuraCalendar.GetDatePart(DateTime time, Int32 part) at System.Globalization.UmAlQuraCalendar.AddMonths(DateTime time, Int32 months) at Test.Main() Hmmm... that seems like a problem, doesn't it? Well, it is working as designed, per the docs: Remarks:. And one of the downsides of a table-based calendar is that until tables exist there is no especially good answer on what to do other than throw an exception (the moral equivalent of walking off the end of the table). But certainly such a problem can have an impact on being able to use the calendar in everyday secular matters, couldn't it? Obviously something better will have to happen here for people to be able make appropriate use of it as a calendar in many different scenarios.... This post brought to you by "ޓ" (U+0793, a.k.a. THAANA LETTER TAVIYANI) # Maurits [MSFT] on 14 Apr 2006 8:05 PM: # Michael S. Kaplan on 15 Apr 2006 9:35 AM: # Michael S. Kaplan on 15 Apr 2006 9:38 AM: # Maurits [MSFT] on 15 Apr 2006 1:32 PM: referenced by 2011/06/15 Does your code avoid the [government sanctioned] Y1.45K bug? 2011/02/26 Why I don't like the JapaneseCalendar class #1: Respecting (or at least admitting) the history.... 2010/02/12 Evil Date Parsing lives! Viva Evil Date Parsing!, explained 2008/12/16 Grody to the Max[Date]! 2006/09/29 .NET is too busy being consistent with Windows to be consistent with itself.... 2006/09/18 More on changes to the Hijri calendar? go to newer or older post, or back to index or month or day
http://archives.miloush.net/michkap/archive/2006/04/14/576627.html
CC-MAIN-2017-30
refinedweb
525
57.06
C Tutorial: Learn C in 20 Minutes By Huw Collingbourne for Udemy Looking for more than a beginner’s guide? Check out Huw’s full C course. Note: This post is part of our “Getting Started” series of free text tutorials on some of our most popular course topics. To jump to a specific section, click the table of contents below: A Quick Introduction to C What You Need to Program C Your first C project in CodeLite Your first C project in NetBeans Understanding a Simple C Program Elements of the C Language And finally… A Quick Introduction to C Even though it is now quite old, having been developed in the late 1960s, C is still one of the most widely used programming languages. The reason for C’s continued popularity is simple: it is a very flexible and powerful language that can be used not only to create ‘high-level’ applications such as word processors or games but also ‘low-level’ programs that interface with the computer’s hardware and memory. This makes it possible for an experienced C programmer to write very fast and efficient programs. The downside of all this power is that C is a complex and, potentially, error-prone language. Many modern languages such as Java and C# provide built-in ‘safety nets’ to make sure that simple operations such as reading data into memory cannot easily crash your programs. However, C lets you do pretty much whatever you want to do. When you do something as simple as reading text entered by a user, it is quite possible that you will accidentally overwrite some piece of memory and cause your entire program to crash. So, in short, C is not a language for the faint-hearted. If you are a complete beginner to programming, C will provide you with many difficult problems that you would never encounter when using languages such as Java, C#, Pascal or Python. But knowledge of C will also give you a huge advantage over people who only know those more modern languages. A good C programmer simply has a deeper understanding of how computer programs actually work. And that is one reason why good C programmers are able to command high salaries. The C language also forms the basis for languages such as C++ and Objective-C, which add an object-oriented ‘layer’ over C itself. In order to program those languages, you first need to understand C. What You Need to Program C Before you can write C programs, you need to install a text editor that supports C syntax. You also need a C compiler and an associated tool called a linker to translate your source code into machine code that can be run by your computer’s hardware. There are numerous free editors and Integrated Development Environments (IDEs) available for C. Two of the easiest to use are CodeLite and NetBeans. Both of these are available on multiple operating systems (Windows, OS X or Linux). They provide powerful C editors with an integrated C compiler and debugger. CodeLite can be downloaded from here. You can download NetBeans from here. For use with C, you must be sure to install either the C/C++ edition or the full (‘All’) release of NetBeans. If you are completely new to C and you are not already a NetBeans user, I would recommend starting with CodeLite, as it is a bit easier to install and it assumes that you will be programming in C or C++, whereas NetBeans supports a larger number of programming languages. Your First C Project in CodeLite If you install CodeLite, these are the steps you need to take to create, compile and run your first project that simply displays “hello world”. - Select: File | New | New Project - In the dialog (under Console) select: Simple executable (gcc) - Click Next. - Give the project a name, such as HelloWorld. - Browse to choose a directory for this project (for example, C:\Test). - Click Next. Click Finish. You will see your new project in CodeLite’s WorkSpace panel. Click this to open up the src folder and load the file main.c into the editor. You will see that CodeLite has automatically written this small program: #include <stdio.h> int main(int argc, char **argv) { printf("hello world\n"); return 0; } To run this, just click CTRL+F9 or select Build and Run Project from the Build menu. This pops up a Terminal window that displays “hello world”. Your First C Project in NetBeans If you’ve decided to use NetBeans, these are the steps you need to take in order to create, compile and run a new project. - Select File | New Project. - Select the C/C++ category in the left pane of the New Project dialog. - Select C/C++ Application in the right pane. - Click Next. - Enter a project name, such as HelloWorld. - Browse to choose a directory for this project (for example, C:\Test). - Make sure Create Main File is checked. - Click Finish. You will see your new project shown in the NetBeans Projects panel. Click to open up the Source Files folder and load the file main.c into the editor. You will see that, just like CodeLite, NetBeans has automatically written a small ‘hello world’ program. To compile and run this, select Run Project from the Run menu (or press F6). The output from this program is shown in the ‘Run’ tab of the docked Output window. If all goes well, this should display ‘hello world’. Understanding a Simple C Program Before going any further, let’s take a closer look at the ‘Hello world’ program. Although this is very short and simple, it illustrates many of the fundamental features of the C programming language. #include This is the first line of the program: #include <stdio.h> Here #include is an instruction to the C compiler. It tells it to include the file whose name is specified between angle brackets. Here that file is called stdio.h and it forms a part of C’s standard code library. We need to include this file since it defines the printf() function used in our program. The compiler would generate a warning if we forgot to include stdio.h. This will be explained more fully in the section on header files later on. The main function Now comes this bit of code: int main(int argc, char **argv) This is the ‘main function’: it is the first bit of code that runs when the program is started. In C the main function must be given the name main. The function name is followed by a pair of parentheses. The parentheses may contain two named arguments, arcg and argv, separated by a comma. Blocks of code, such as the code inside the main function, are enclosed by a pair of curly brackets. Note that each statement in C must be terminated with a semicolon: justify;">printf("hello world\n"); return 0; argc and argv The arguments argc and argv in the main() function may optionally be initialized with values passed to the program when it is run; argc is declared to be of the type int (an integer), while argv is declared to be char ** which is an ‘argument vector’ or a pointer to an array of character-string arguments. If you are new to C, that probably sounds like gobbledygook. Don’t worry. For now, just think of argv as a list of strings. I’ll have more to say about pointers later on. To pass values to the program, you can just run the program at the command prompt and put any arguments after the name of the program itself, with spaces between each item. For example, if I wanted to pass the arguments “hello” and “world” to the program HelloWorld.exe (on Windows) or HelloWorld.app (on OS X), I would enter this at the command prompt or Terminal: HelloWorld hello world My program ‘receives’ those two bits of data, and it stores them in the second argument, argv. The first argument, argc is an automatically calculated value that represents the total number of the arguments stored in argv. The current version of the HelloWorld program ignores any arguments passed to main(). But try rewriting the code to match this: int main(int argc, char **argv) { int i; for (i = 0; i < argc; i ) { printf("Hello World! argc=%d arg %d is %s\n", argc, i, argv[i]); } return 0; } Case-sensitivity Be careful when you write C code. Bear in mind that the language is case-sensitive, which means that identifiers must always use the same mix of uppercase and lowercase characters. If an argument is called argc in one place, it is no good calling it argC or Argc somewhere else. It must always be referred to using lowercase-only characters: argc. The block of code that starts with the keyword for in the preceding example is a loop that causes the code that follows it, between the curly brackets, to execute for a certain number of times. Here the code executes for the number of times indicated by the value of the argc argument. The printf function provides a useful way of embedding bits of data into a string using special place-markers: %s prints a string; %d prints an integer. You must be sure to put the same number and type of arguments after the string itself as the place-markers in the string. Here the string has two integer place-markers and one string place-marker: \"Hello World! argc=%d arg %d is %s\\n\" The printf function substitutes the values of argc, i, argv[i], at the points marked by %d, %d and %s in the string. At each turn through the for loop, the string at the index i in the argv array is printed. The “ \n” at the end of the string adds a new line to it. When you pass the program the two arguments: hello and world, this is the output which is displayed (notice that argument 0 is the name of the program itself): Hello World! argc=3 arg 0 is HelloWorld Hello World! argc=3 arg 1 is hello Hello World! argc=3 arg 2 is world Return values When you declare an argument, it must always be preceded by its data type. That is why argc is preceded by the keyword int. The same is true for functions. If a function returns a value, the function must be preceded by the data type of the return value. The main function here returns an integer: return 0; That is why the main function has been declared using the type int: int main(int argc, char **argv) In this case, the return value 0 just indicates that no problems were detected. We don’t need to use this return value in this simple program. However, you will use the values returned by other functions, as we’ll see presently. Elements of the C Language Now that we’ve created, run and modified a simple C program, let’s examine in greater detail some of the core features of the C language. Variables Variables are identifiers to which values can be assigned. Each variable must be declared with a data type such as int (integer) or double (floating point number), and it can only be assigned values of the declared type. Here are examples of some variables: int total; double tax; You assign values using the assignment operator = like this: total = 100; tax = 200.5; Functions Functions are named blocks of code. We’ve already seen the main function. The C standard library contains many useful functions such as printf() which we use by ‘calling’ the function by name. In a complete program, you would create numerous different functions. For example, you could write this function called add() that takes two int arguments, num1 and num2 and returns an int value: int add( int num1, int num2 ) { num1 = num1 + num2; return num1; } If you were to write this add() function above the main function in your program, you could call it by name, pass it two integer arguments and display the result. To do that, rewrite the main function as follows: int main(int argc, char **argv){ printf("Result = %d\n", add(2,5)); return 0; } When a function doesn’t return a value, it must be declared using the keyword void like this: void sayhello() { printf( "Hello\n" ); } Mathematical operators We’ve already used the addition operator + in the code sample above. Other mathematical operators include the multiplication *, division / and subtraction – operators. There are also ‘compound assignment operators’ that perform a calculation prior to assigning the result to a variable. This table shows some examples of common compound assignment operators along with the non-compound equivalent. When you want to increment or decrement by 1 (add 1 to, or subtract 1 from) the value of a variable, you may also use the ++ and -- operators. Here is an example of the increment (++) operator: int a; a = 10; a++; // a is now 11 This is an example of the decrement (–) operator: int a; a = 10; a--; // a is now 9 Tests and Comparisons C can perform tests using the if statement. The test expression must be placed between parentheses, and it should be capable of evaluating to true or false. If true, the statement following the test executes. Optionally, an else clause may follow the if clause, and this will execute if the test evaluates to false. Here is an example: if (income > 1000000) { bonus = 5000; } else { bonus = 500; } The > operator compares two values to test if the value on the left is greater than the value on the right. If you want to test if the value to the left is lower than the value on the right, use the < operator. To test if the two values are equal, use the == operator. Be careful that you don’t confuse the equality comparison operator == with the assignment = operator, which assigns the value on the right to a variable on the left. Here are some more comparison operators: == // equals != // not equals > // greater than < // less than <= // less than or equal to >= // greater than or equal to Switch statements Switch statements let you perform multiple tests as an alternative to writing numerous if and else tests. A switch statement starts with the keyword switch followed by an integer test value. This is followed by one or more case statements, each of which specifies a value to be compared with the test value. If a case value matches the test value, the code following the case value and a colon (e.g., case 10:) executes. Optionally, you may specify a default which will execute if no match is made by any of the case tests. The following code shows a switch statement that tries to match an integer variable asciivalue which represents the numeric ASCII code of a character. It assigns a descriptive string to the variable chartype when asciivalue is 0, 7 or 8. In all other cases, it assigns the string “Some Other Character” to chartype. switch( asciivalue ) { case 0: chartype = "NULL"; break; case 7: chartype = "Bell"; break; case 8: chartype = "BackSpace"; break; default: chartype = "Some Other Character"; break; } break In a switch statement, all case values will continue to be tested even after a match is made unless the break keyword is encountered. That is why break is placed after each case test in the example above to ensure that the switch block is exited once a match is made. for loops Loops provide a way of running a block of code repeatedly. If you want to run code a fixed number of times, you could use a for loop. int i; for( i = 0; i < 50; i++) { printf(\"Loop #%d\\n\", i); } The code that controls this for loop is divided into three parts. The first part initializes an int variable, i, with the value 0. The second section contains a test ( i < 50). The loop executes as long as this test remains true. Here the test states that the value of the variable i must be less than 50. Finally, we have to be sure to increment the value of i at each turn through the loop. That happens in the third part of the loop control statement ( i++) which adds 1 to i. When you run the code shown above, 50 strings are displayed, the first being “Loop #0”, while the last is “Loop #49”. while loops If you don’t know in advance how many times the code inside a loop needs to be run, you could use a while loop. For example, if you want to keep prompting the user to enter something at the keyboard, you may not know in advance how many times you will need to show a prompt before the user enters valid data. Here’s a simple example that repeatedly displays “Enter y or n” until the user enters either the character ‘y’ or the character ‘n’. The character entered is assigned to the char variable c at each turn through the loop, and it is tested using the ‘not equals’ operator != and the ‘logical and’ operator && at each turn through the loop – so the while test here can be expressed as “while c is not ‘y’ and c is not ‘n’, run this code”: char c; c = \'x\'; while( (c != \'y\') && (c != \'n\') ) { printf(\"\\nEnter y or n: \"); c = getchar(); getchar(); } Logical Operators In the code example above, I used the && operator to mean ‘and’. This lets me chain together conditions and test that all of them are true in order for the entire test to evaluate to true. If I wanted to test that any one of a set of conditions were true, I could use the ‘logical or’ operator || as in the example below, which executes some code if x is either less than 1000 or x is greater than 5000: if (x < 1000) || (x > 5000) { // run some code here } You can document your source code by adding comments that are ignored by the C compiler. Traditional C comments are placed between the delimiters /* and */ and may span many lines: /* * This is a traditional * multi-line C comment * */ Most modern C compilers also support single-line comments that begin with the // characters and end at the end of the line: This is a single-line comment Arrays An array is a linear collection of elements of a specified data type. Each element has an index. The first element is at index 0. The last element is at the index given by the length of the array (that is, the number of elements) minus 1. An array can be declared with its type followed by its capacity between square brackets. This is a declaration of an array containing 5 integers: int intarray[5]; You can index into an array using square brackets to obtain an existing element or assign a new value at a specified index. Here I assign 100 to index 2 of my array: intarray[2] = 100; An array can be initialized when it is declared by placing empty square brackets after the variable name and then assigning a comma-delimited list of values between curly brackets, as in this example: int intarray[] = {1,2,3,4,5}; Strings There is no string data type in C. What we call a string in C is really an array of characters terminated by a null ‘ \0‘ character. When you initialize a string variable, as shown below, C automatically adds a null at the end: char mystr[] = "Hello world"; Constants Sometimes you may want have an identifier whose value (unlike that of a variable) is never changed. In such a case, you should define a constant. The traditional way of defining a constant in C is to use the directive #define followed by an identifier and a value to be substituted for that identifier, like this: #define PI 3.141593 In fact, this is not a true constant since its value can be changed, as in the code below (though your compiler may show a warning message): #define PI 3.14159 #define PI 55.5 Modern C compilers provide an alternative way of defining true constants using the keyword const, whose values cannot be changed, like this: const double PI = 3.14159; Preprocessor directives In the example above, #define is a pre-processor directive. The C compiler includes a tool called the preprocessor that deals with special directives preceded by a hash # character. Probably the most commonly used preprocessor directives are #define to define constants and #include to include header files. Header files A file with the extension ‘.h’ is called a ‘header file’. A header file typically contains definitions of functions and constants. The header file does not contain the implementation of functions – only their declarations. The implementations are generally contained in source code files that end with the extension ‘.c’. The declarations in a header file enable the compiler to perform various checks. For example, it can check that all the data types used by the function-calls in your program – that is, the types of the data returned from functions or passed to functions as arguments – are correct. When you include a header file, its contents are inserted into your code just as though you had cut and pasted them. The include file name is enclosed in angle brackets when it is a standard C library file: #include <string.h> The file name is put between double-quotes characters when it is in your working directory: #include \"mystring.h\" Pointers and addresses Each piece of data in your computer’s memory is stored at some memory location or ‘address’. You can display that address using the ‘address-of’ operator & placed before a variable name. Let’s imagine I have a string variable: char str1[] = \"Hello\"; This is how I would display the addresses of str1: printf(\"%d\\n\", &str1); This would print a number which is the address in memory of the str1 array of characters: 2686746 In fact, an array variable such as str1 is itself equivalent to the address of the array, so even if I didn’t use the & operator, str1 would still give me the array address. If I were to enter this: printf(\"%d\\n\", str1); …the same number (that is, the address of the array of characters) would be displayed: 2686746 But let’s suppose I have another string declared like this: char *str2 = \"Goodbye\"; Here the asterisk or ‘star’ * before the variable name indicates that str2 is not the array itself but is, in fact, a pointer to a memory location. Here it points to the address at the start of the memory holding the characters “Goodbye”. But unlike str1 (which was declared not as a pointer but as an array), str2 is not the address of the array of characters. It is a pointer or ‘reference to’ that address. So if I display the address of str2 using the & operator, I get a different address (the address of the str2 pointer variable) than if I display the address to which str2 points (the address of the array of characters). We can see this in the example below: printf(\"%d\\n\", &str2); printf(\"%d\\n\", str2); This will display two different numbers, such as: 2686740 4206628 Pointers are one of the most difficult-to-understand features of the C language. The fact of the matter is that most modern object-oriented languages make very little, if any, explicit use of pointers. Even so, that doesn’t mean that languages such as Java, C# and Ruby do not, in fact, use pointers. Pointers are used all the time whenever any computer program runs. Many languages manipulate them ‘behind the scenes’ – for example, by placing an object at a memory location and automatically creating a variable that ‘points to’ the object. The programmer uses the object without ever having to know anything about the address in memory where that object is stored. The downside of pointers is that they are not only difficult to use, but can also be dangerous. To take a simple example, a pointer lets you poke bits of data into memory that might already contain some other data. That can cause memory corruption, which could crash your program. While the avoidance of explicit pointer manipulation makes programming simpler and less hazardous, it also makes it difficult for the programmer to interact directly with the computer’s hardware and memory. Sometimes, for maximum speed and efficiency, or when creating low-level programs such as device drivers and operating systems, direct memory-addressing using pointers is invaluable. That is one reason why, despite competition from more modern and user-friendly computer languages, C continues to be one of the most widely used and important of all programming languages. It also explains why really good C programmers (who are capable of writing robust C programs that don’t crash) are so highly valued. And Finally… This tutorial has been a very quick overview of C. It certainly won’t explain everything you need to know in order to write good C programs, but I hope it has given you an understanding of the core features of the C language and helped you to understand why C is one of the most enduring of programming languages. Recommended Articles Linked List in C: Learning it the Ideal Way Begin C Programming with C Operators The C Switch Statement: A Basic Introduction C getline : Reading from a Stream Common C++ Interview Questions Top courses in C# C# students also learn Empower your team. Lead the industry. Get a subscription to a library of online courses and digital learning tools for your organization with Udemy for Business.
https://blog.udemy.com/c-tutorial-learn-c-in-20-minutes/
CC-MAIN-2021-49
refinedweb
4,317
67.49
WARNING: this project is largely outdated, and some of the modules are no longer supported by modern distributions of Python. For a more modern, cleaner, and more complete GUI-based viewer of realtime audio data (and the FFT frequency data), check out my Python Real-time Audio Frequency Monitor project.(). How did I do it? Easy. First, I made the GUI with QtDesigner (which comes with Python x,y). I saved the GUI as a .ui file. I then used the pyuic4 command to generate a python script from the .ui file. In reality, I use a little helper script I wrote designed to build .py files from .ui files and start a little “ui.py” file which imports all of the ui classes. It’s overkill for this, but I’ll put it in the ZIP anyway. Here’s what the GUI looks like in QtDesigner: After that, I tie everything together in a little script which updates the plot in real time. It takes inputs from button click events and tells a clock (QTimer) how often to update/replot the data. Replotting it involves just rolling it with numpy.roll(). Check it out: import ui_plot #this was generated by pyuic4 command import sys import numpy from PyQt4 import QtCore, QtGui import PyQt4.Qwt5 as Qwt numPoints=1000 xs=numpy.arange(numPoints) ys=numpy.sin(3.14159*xs*10/numPoints) #this is our data def plotSomething(): global ys ys=numpy.roll(ys,-1) c.setData(xs, ys) uiplot.qwtPlot.replot() if __name__ == "__main__": app = QtGui.QApplication(sys.argv) win_plot = ui_plot.QtGui.QMainWindow() uiplot = ui_plot.Ui_win_plot() uiplot.setupUi(win_plot) # tell buttons what to do when clicked uiplot.btnA.clicked.connect(plotSomething) uiplot.btnB.clicked.connect(lambda: uiplot.timer.setInterval(100.0)) uiplot.btnC.clicked.connect(lambda: uiplot.timer.setInterval(10.0)) uiplot.btnD.clicked.connect(lambda: uiplot.timer.setInterval(1.0)) # set up the QwtPlot (pay attention!) c=Qwt.QwtPlotCurve() #make a curve c.attach(uiplot.qwtPlot) #attach it to the qwtPlot object uiplot.timer = QtCore.QTimer() #start a timer (to call replot events) uiplot.timer.start(100.0) #set the interval (in ms) win_plot.connect(uiplot.timer, QtCore.SIGNAL('timeout()'), plotSomething) # show the main window win_plot.show() sys.exit(app.exec_()) I’ll put all the files in a ZIP to help out anyone interested in giving this a shot. Clicking different buttons updates the graph at different speeds. If you make something cool with this concept, let me know! I’d love to see it. DOWNLOAD PROJECT: realtime_python_graph.zip 14 thoughts on “Realtime Data Plotting in Python” Hi Scott, I live in Gainesville too (although I’m not a student), and I’m currently learning python. Are there any good resources / meetup groups for programmers that I should look into? Thanks. Hey Jon, glad to hear from a fellow pythoner! I still haven’t been, but I hear a lot about the Gainesville Hackerspace – – they meet frequently, and I hear it’s a cool way to meet people and see neat things. It’s on my to-do list to go one day soon. Hi Scott, very cool post, very useful. Many packages I have seen (Pyopencv, pyODE) seem to recommand pygame for fast plotting, but it is a pain to display mathematical stuff in a pygame window (everything must be converted into pixels). On my computer, I manage to display with Matplotlib in quasi-real time the stream of my webcam using imshow(), but certainly on less powerfull machines your solution will be preferable. One of the other advantages is that the speed of PyQwt is far greater than matplotlib, so if you’re looking for high-famerate, low CPU load plotting, it’s something to look into! Qwt is not your only option and in fact probably not the best one. The fact it is quite fast is more-or-less its only advantage. For more flexible real-time data display, I recommend Traits + Chaco + Enaml (all by enthought, at). I have recently built a waveform analysis application with these (w/ real-time display) and can really recommend them. Note, Enaml (the new GUI design declarative language and library) can also embed matplotlib figures but matplotlib seems to throttle its update rate so it’s difficult to get very responsive plots. Enaml uses either wxPython or PyQt as it’s GUI backend so it can easily integrate with existing PyQt stuff. I wrote a chaco example to mirror your Qwt example. It will happily process events at ~ 50/second while the rest of the GUI remains fully responsive. I’m trying to find somewhere to post it… I also use Traits and highly recommend it for its ease-of-use and flexibility. I’ve written a fully interactive tool for spectral analysis that has sped up work which used to take weeks into minutes. There are definitely a few GUI modules available now for Python, and they’re getting better and better! Also @bc: Are you sure it was matplotlib throttling the updates? Creating the figure/axes is what takes the longest amount of time, but if you just update an existing plot with: scat = ax.scatter(…);scat.set_offsets(x, y), or similarly use set_data for plots, then you cut out a lot of the refresh lag than if you were re-building the figure and axes canvas every time. Some time back, I benchmarked the drawing speed of matplotlib and found it surprisingly fast (esp for large datasets where it uses decimation to speed up rendering). However, when I tried a quick test with MPL + Enaml today, I found a rather slow update rate compared with Chaco. I didn’t investigate this further but assumed there is something in MPL which is limiting the redrawing rate (to avoid saturating CPU etc.). I.e. I think the slow update is by design, knowing that the core drawing calls can be much faster. In my test, I was not re-creating the figure or axes; just calling set_data(…) on the line artist. Thanks man. Big help. 🙂 Although, If you manage to integrate Chako with Qt Designer, Please do put it up. I can’t seem to find any resources in that direction. Hello !! I would like to know how can i open a python script to plot the data in the realtime. the data as follows: EXemple; 0 0.217413641411 0 0.001 0.202640969807 0 0.002 0.13284039654 0 0.003 0.111942324101 0 0.004 0.0806826346525 0 . . This data is being sent from a C file as an output. In python, i would like read each line and update it in the graph realtime. can you help me as how do i go about this Dear Scott, I’m part of a research group trying to make a python program for us research. We are from Rio de Janeiro (Brazil) and we are trying to run your code but this is a common problem when we try: { Traceback (most recent call last): File “realtimePlot.py”, line 1, in import ui_plot File “/home/moreiras/Documentos/ProjetoIC/qwtplot-example-python/ui_plot.py”, line 10, in from PyQt4 import QtCore, QtGui RuntimeError: the sip module implements API v10.0 to v10.1 but the PyQt4.QtCore module requires API v9.2 } If possible, please help us. Regards, Felipe Moreira. Hey man I want to plot SIN on LCD in real time not showing immedaitly,can u help me? Scott; I’m an old HAM radio hardware guy, but I Think I can make a 0-30 MHz Spectrum analyzer with this if I can get it to work. I installed Python 3.4 but the PyQwt apparently wont work with it. I keep getting this: Traceback (most recent call last): File “C:UsersJerryDesktopSWHrealTimeAudio.py”, line 1, in import ui_plot File “C:UsersJerryDesktopSWHui_plot.py”, line 56, in from PyQt4 import Qwt5 ImportError: cannot import name ‘Qwt5’ I assume I have a too new version of python ?? What version should I use ?? As you can tell programming is not my thing. Thanks. This is an absolutely fascinating project! I’m psyched to get it working but I get the same error message as Jerry: “Cannot import name ‘Qwt5′” Any help would be appreciated. Peter You must log in to post a comment.
https://www.swharden.com/wp/2013-05-08-realtime-data-plotting-in-python/
CC-MAIN-2017-39
refinedweb
1,384
66.33
[PATCH v3] virtio: new feature to detect IOMMU device quirk From: Michael S. Tsirkin Date: Mon Jul 18 2016 - 22:38:38 EST ]. Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx> --- wanted to use per-device dma ops but that does not seem to be ready. So let's put this in virtio code for now, and move when it becomes possible. include/uapi/linux/virtio_config.h | 4 +++- drivers/virtio/virtio_ring.c | 15 ++++++++++++++- 2 files changed, 17 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h index 4cb65bb..3f6195e 100644 --- a/include/uapi/linux/virtio_config.h +++ b/include/uapi/linux/virtio_config.h @@ -49,7 +49,7 @@ * transport being used (eg. virtio_ring), the rest are per-device feature * bits. */ #define VIRTIO_TRANSPORT_F_START 28 -#define VIRTIO_TRANSPORT_F_END 33 +#define VIRTIO_TRANSPORT_F_END 34 #ifndef VIRTIO_CONFIG_NO_LEGACY /* Do we get callbacks when the ring is completely used, even if we've @@ -63,4 +63,6 @@ /* v1.0 compliant. */ #define VIRTIO_F_VERSION_1 32 +/* Do not bypass the IOMMU (if configured) */ +#define VIRTIO_F_IOMMU_PLATFORM 33 #endif /* _UAPI_LINUX_VIRTIO_CONFIG_H */ diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index ca6bfdd..2a0c8bf 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -117,7 +117,10 @@ struct vring_virtqueue { #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq) /* - * The interaction between virtio and a possible IOMMU is a mess. + * Modern virtio devices might set feature bits to specify whether + * they use the platform IOMMU. If there, just use the DMA API. + * + * If not there, the interaction between virtio and DMA API is messy. * * On most systems with virtio, physical addresses match bus addresses, * and it doesn't particularly matter whether we use the DMA API. @@ -137,6 +140,10 @@ struct vring_virtqueue { static bool vring_use_dma_api(struct virtio_device *vdev) { + if (virtio_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM)) + return true; + + /* Otherwise, we are left to guess. */ /* * In theory, it's possible to have a buggy QEMU-supposed * emulated Q35 IOMMU and Xen enabled at the same time. On @@ -1099,6 +1106,12 @@ void vring_transport_features(struct virtio_device *vdev) break; case VIRTIO_F_VERSION_1: break; + case VIRTIO_F_IOMMU_PASSTHROUGH: + break; + case VIRTIO_F_IOMMU_PLATFORM: + /* Ignore passthrough hint for now, obey kernel config. */ + __virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PASSTHROUGH); + break; default: /* We don't understand this bit. */ __virtio_clear_bit(vdev, i); -- MST ]
http://lkml.iu.edu/hypermail/linux/kernel/1607.2/01293.html
CC-MAIN-2020-29
refinedweb
367
51.24
Directory Listing Use the subprocess module within pycad's Triangle. Use the subprocess module with argument list to run gmsh from pycad to avoid having to deal with spaces in path names. Changes to example09 for memory footprint. One change too help wording in layer_cake function. example09b removed from testing. Not for impending release. Fix some epydoc errors Fixed issue 564 removed minimize.py since it hasn't worked in who knows how long typo fixed Switched the build_dir keyword param to variant_dir. Should fix issue 525. new darcy flux Updates to layer cake and cookbook as well as examples. Joels suggested changes for naming conventions. The function is now located in esys.pycad.extras as layer_cake. function LayerCake fixed is now under esys.pycad.layer_cake Adding layer cake builder to pycad gmsh now supports the Mesh.SubdivisionAlgorithm and Mesh.Algorithm3D option. The Mesh.Algorithm, Mesh.Smoothing, Mesh.Optimization options are now set in the .geo file rather than in the command line.. a test for unique PropertySet name added. a namespace problem in pycad fixed. Change __url__ to launchpad site some more work on tests on MPI proper error handling for ppn2mpeg execution added. size_t may be 64 bits which is incompatible to MPI_INT. This problem is fixed by inserting a cast in Mesh_read.c. Moreover a fix has been added making sure that gmsh and triangle are executed on one processor only. Fixing some warnings from epydoc. Disabled c++ signatures in python docstrings. Removed references to Bruce in epydoc and the users guide. I? This now passes on windows. The main issue was that __del__ was being called when there was no open script or mesh file??? Could not fathom why, but added an extra test in __del__ to make sure the files were actually open before closing & unlinking. It may be better to deal with the opening, writing, closing, os.system call, and final unlinking of the script file in getMeshHandler(). That way all manipulations are local, and only the mesh file need be handled in a remote piece of code like __del__. Not tested on Altix or linux yet.... get to that tonight.)
https://svn.geocomp.uq.edu.au/escript/trunk/pycad/?view=log&sortdir=down&pathrev=3465
CC-MAIN-2021-10
refinedweb
357
69.99
Many native applications today are mixed with .NET extensions, either for business logic or user interface. In this article I will talk about the case when you want to display a modal managed window in an MFC application, with the problems that might appear and the solutions for them. I will go through both windows forms and WPF. However, I will not use a mixed-mode DLL, but expose .NET components to COM and use that from C++. Creating Class Libraries I will assume the following projects are created: Windows Forms - Create a new Windows Form Control Library called WinFormsLib Add a simple windows form called SampleForm; the form should look like this: Windows Presentation Foundation - Create a new WPF User Control Library called WpfControlLib Add a simple WPF window called SampleWpfWindow; the window show look like this: What we want to do is opening both of these managed windows as modal dialogs from an MFC application. To expose them to the native world, I will create a .NET class library project registered for COM interop. That would allow us to use the .NET code without building a mixed-mode DLL in C++/CLI. The details for registering for COM interop are beyond the scope of this article. In a few words one needs to do the following: - Mark each interface exposed to COM with attributes GuidAttribute and InterfaceTypeAttribute - Mark each interface method exposed to COM with attribute DispIdAttribute - Mark each class exposed to COM with attributes GuidAttribute, ClassInterfaceAttribute and ProgIdAttribute - Set the ComVisibleAttribute to true in AssemblyInfo.cs, or use the ComVisibleAttribute with true for every type you want to expose to COM - From the Project Properties > Build, check the Register for COM interop option Creating a Class Library Consumed from MFC The next step is to create a project called DotNetDemoLib, and add references to the two projects created earlier, WinFormsLib and WpfControlLib. I will add an interface called IWindows to this project, with two methods. Since this needs to be exposed to COM it must be decorated with the attributes mentioned earlier. The interface looks like this: namespace DotNetDemoLib { [Guid("B3E12DC9-817F-4cba-BE4D-035D2D5A5EC9")] [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)] public interface IWindows { [DispId(1)] void StartWpfWindow(); [DispId(2)] void StartForm(); } } Method StartWpfWindow would open the window from the WpfControlLib project, and StartForm the windows form from the WinFormsLib project. The implementation for this interface would like this: namespace DotNetDemoLib { [Guid("262E0AF6-2546-4284-8426-4C62B3341584")] [ClassInterface(ClassInterfaceType.None)] [ProgId("DotNetDemoLib.ManagedWindows")] public class ManagedWindows : IWindows { SampleWpfWindow m_window; SampleForm m_form; #region IWindows Members public void StartWpfWindow() { m_window = new SampleWpfWindow(); m_window.ShowDialog(); } public void StartForm() { m_form = new SampleForm(); m_form.ShowDialog(); } #endregion } } After building this project, with the "Register to COM interop" option, a type library file (.tlb) is generated. This file must be imported in the C++ project in order to be able to use these COM components. An MPF Demo Project To demonstrate the usage of the IWindows interface, we need an MFC project. For the sake of simplicity this can be a dialog base application. Let's call it MfcDemoApp. The dialog should have two buttons, one for opening the Windows Forms window and one for opening the WPF window. In order to be able to use the IWindows interface we need to import the TLB file (of course the actual #import directive must use the correct path to the file). #import "DotNetDemoLib.tlb" using namespace DotNetDemoLib; With that done, the handlers for the button click messages can look like this: void CMfcDemoAppDlg::OnBnClickedButtonOpenWpf() { IWindowsPtr pWindows(__uuidof(ManagedWindows)); try { pWindows->StartWpfWindow(); } catch(_com_error& ce) { AfxMessageBox((LPCTSTR)ce.ErrorMessage()); } } void CMfcDemoAppDlg::OnBnClickedButtonOpenWinforms() { IWindowsPtr pWindows(__uuidof(ManagedWindows)); try { pWindows->StartForm(); } catch(_com_error& ce) { AfxMessageBox((LPCTSTR)ce.ErrorMessage()); } } Problems and Solutions Windows Forms After putting all this together we can run the MFC application and open the two windows. Let's start with the Windows Form window. The image below shows how it looks. You can notice that the MFC dialog is not accessible when the form is opened, so indeed we have the behavior of a modal dialog. But you can also notice the taskbar shows two applications, MfcDemoApp and SampleForm, even though we have only one. This is because, by default, the form is set to appear in taskbar, because the window has the WS_EX_APPWINDOW window style set. This style forces a top-level window into the taskbar when the window is visible. To prevent the form from showing in the taskbar we can set the ShowInTaskbar property to false. public void StartForm(long parentWindow) { m_form = new SampleForm(); m_form.ShowInTaskbar = false; m_form.ShowDialog(); } Windows Presentation Foundation Let's try opening the WPF window. The window shows up correctly, but there are two problems: - There are again two applications in the taskbar The WPF window does not behave like a modal dialog, because you can select from the taskbar any of the two and have it shown separately, even though you cannot access the parent, MFC dialog. In this image only the parent, MFC dialog, is visible, but you can see the WPF window in the taskbar, so you can click on it and bring it to foreground. In this image only the chid, WPF window, is visible, but you can see the MFC dialog in the taskbar. This is because the WPF window does not have the owner set. To solve these problems we have to do the following: - Set the ShowInTaskbar property of the Window to false, just like for the windows form. - Set the owner for the WPF window. The first task is pretty trivial. The second is a little bit more complicated. For that we need to use a helper class called WindowInteropHelper that assists interoperability between WPF and Win32. This class has two properties: - Handle, that only returns the handle of the WPF window used to create the object; - Owner, that gets or sets the handle of the owner window. We need to use this later property, but we must pass the handle to the owner window from C++. You might be tempted to add an IntPtr argument to the StartWpfWindow(), but the COM doesn't seem to be able to marshal that. When the method is called it triggers a COM exception saying that the method does not exist. The workaround is to pass an integer that can be used to construct an IntPtr in C#. The interface method should look like this: [DispId(1)] void StartWpfWindow(long parentWindow); The implementation changes to: public void StartWpfWindow(long parentWindow) { m_window = new SampleWpfWindow(); WindowInteropHelper wnd = new WindowInteropHelper(m_window); wnd.Owner = new IntPtr(parentWindow); m_window.ShowInTaskbar = false; m_window.ShowDialog(); } And the usage from VC++ also changes to: void CMfcDemoAppDlg::OnBnClickedButtonOpenWpf() { IWindowsPtr pWindows(__uuidof(ManagedWindows)); try { long handle = reinterpret_cast<long>(m_hWnd); pWindows->StartWpfWindow(handle); } catch(_com_error& ce) { AfxMessageBox((LPCTSTR)ce.ErrorMessage()); } } Now, if you run the application again you can see that the WPF window acts like a true modal window. Back to Windows Forms What if you want to set the parent for a Windows Form window? You'd need to override the CreateParams property that provides the required parameters for creating a control. Its type is CreateParams (same name). One of the members of this type is property Parent, which you can set with a handle passed from C++. However, since we cannot change the code for the SampleForm class, we can create a new form, derived from SampleForm that overrides CreateParams and sets the parent. This class can look like this: namespace DotNetDemoLib { class WrapperForm : SampleForm { IntPtr m_hwndParent; public WrapperForm(IntPtr parent) { m_hwndParent = parent; } protected override CreateParams CreateParams { get { CreateParams cp = base.CreateParams; cp.Parent = m_hwndParent; return cp; } } } } The interface method should change to: [DispId(2)] void StartForm(long parentWindow); The implementation for the interface method becomes: public void StartForm(long parentWindow) { m_form = new WrapperForm(new IntPtr(parentWindow)); m_form.ShowInTaskbar = false; m_form.ShowDialog(); } And the usage from VC++ changes to: void CMfcDemoAppDlg::OnBnClickedButtonOpenWinforms() { IWindowsPtr pWindows(__uuidof(ManagedWindows)); try { long handle = reinterpret_cast<long>(m_hWnd); pWindows->StartForm(handle); } catch(_com_error& ce) { AfxMessageBox((LPCTSTR)ce.ErrorMessage()); } } Conclusions Displaying managed modal windows from native code is pretty easy, but you should set the owner of the managed window and if you don't want the window to show up in task bar then also set the ShowInTaskbar (a property of both System.Windows.Form and System.Windows.Window) to false.
https://mobile.codeguru.com/csharp/.net/net_wpf/article.php/c16387/Opening-Modal-Managed-Windows-from-MFC.htm
CC-MAIN-2018-51
refinedweb
1,391
53.41
This is an interactive blog post, you can modify and run the code directly from your browser. To see any of the output you have to run each of the cells. When building an ensemble of trees (a Random Forest or via gradient boosting) one question keeps coming up: how many weak learners should I add to my ensemble? This post shows you how to keep growing your ensemble until the test error reaches a minimum. This means you do not end up wasting time waiting for your ensemble to build 1000 trees if you only need 200. First a few imports and generating a toy dataset: %config InlineBackend.figure_format='retina' %matplotlib inline import numpy as np import matplotlib.pyplot as plt from sklearn.cross_validation import train_test_split from sklearn.datasets import make_classification from sklearn.ensemble import GradientBoostingClassifier from sklearn.metrics import roc_auc_score X, y = make_classification(n_samples=4000, n_features=40, n_informative=10, n_clusters_per_class=3, random_state=2) # Split data into a development and evaluation set X_dev,X_eval, y_dev,y_eval = train_test_split(X, y, train_size=0.5, random_state=3) # Split development set into a train and test set X_train,X_test, y_train,y_test = train_test_split(X_dev, y_dev, test_size=0.33, random_state=4) How many trees?¶ To get started we build an ensemble of 400 trees using gradient boosting. Without knowing how many trees are needed to reach the minimum test error we can just hope that 400 is enough. One downside is that we might be waiting a long time to fit a model with this many trees and then learn that we only needed a small fraction of them (say 200). opts = dict(max_depth=2, learning_rate=0.1, n_estimators=400) clf = GradientBoostingClassifier(**opts) _ = clf.fit(X_train, y_train) Next we plot the validation curve for our fitted classifier and check with the test set at which number of n_estimators we reach the minimum test error. def validation_curve(clf): test_score = np.empty(len(clf.estimators_)) train_score = np.empty(len(clf.estimators_)) for i, pred in enumerate(clf.staged_predict_proba(X_test)): test_score[i] = 1-roc_auc_score(y_test, pred[:,1]) for i, pred in enumerate(clf.staged_predict_proba(X_train)): train_score[i] = 1-roc_auc_score(y_train, pred[:,1]) best_iter = np.argmin(test_score) test_line = plt.plot(test_score, label='test') colour = test_line[-1].get_color() plt.plot(train_score, '--', color=colour, label='train') plt.xlabel("Number of boosting iterations") plt.ylabel("1 - area under ROC") plt.legend(loc='best') plt.axvline(x=best_iter, color=colour) validation_curve(clf) As suspected we reach the minimum after approximately 200 iterations. After this the score $1-\textrm{AUC}(\textrm{ROC})$ does not improve any further. In this case the performance does not degrade noticably with more iterations but it does take more time to build a model with 400 trees instead of 200. Now that we know the answer let's see if we can construct a meta estimator that would have stopped at roughly 200 trees instead of fitting all 400. Early stopping¶ The obvious solution is to measure the performance of our ensemble as we go along and stop adding trees once we think we have reached the minimum. The EarlyStopping meta estimator takes an unfitted estimator, the maximum number of iterations and a function to calculate the score as arguments. It will repeatedly add one more base estimator to the ensemble, measure the performance, and check if we reached minimum. If we reached the minimum it stops, otherwise it keeps adding base estimators until it reaches the maximum number of iterations. There are a few more details worth pointing out: there is a minimum number of trees required to skip over the noisier part of the score function; and EarlyStoppingdoes not actually stop at the minimum, instead it continues on until the score has increased by scaleabove the current minimum. This is a simple solution to the problem that we only know we reached the minimum by seeing the score increase again. from sklearn.base import ClassifierMixin, clone from functools import partial def one_minus_roc(X, y, est): pred = est.predict_proba(X)[:, 1] return 1-roc_auc_score(y, pred) class EarlyStopping(ClassifierMixin): def __init__(self, estimator, max_n_estimators, scorer, n_min_iterations=50, scale=1.02): self.estimator = estimator self.max_n_estimators = max_n_estimators self.scorer = scorer self.scale = scale self.n_min_iterations = n_min_iterations def _make_estimator(self, append=True): """Make and configure a copy of the `estimator` attribute. Any estimator that has a `warm_start` option will work. """ estimator = clone(self.estimator) estimator.n_estimators = 1 estimator.warm_start = True return estimator def fit(self, X, y): """Fit `estimator` using X and y as training set. Fits up to `max_n_estimators` iterations and measures the performance on a separate dataset using `scorer` """ est = self._make_estimator() self.scores_ = [] for n_est in range(1, self.max_n_estimators+1): est.n_estimators = n_est est.fit(X,y) score = self.scorer(est) self.estimator_ = est self.scores_.append(score) if (n_est > self.n_min_iterations and score > self.scale*np.min(self.scores_)): return self return self We have all the ingredients, so let's take them for a spin. If EarlyStopping works it should stop adding trees to the ensemble before we get to 200 trees. def stop_early(classifier, **kwargs): n_iterations = classifier.n_estimators early = EarlyStopping(classifier, max_n_estimators=n_iterations, # fix the dataset used for testing by currying scorer=partial(one_minus_roc, X_test, y_test), **kwargs) early.fit(X_train, y_train) plt.plot(np.arange(1, len(early.scores_)+1), early.scores_) plt.xlabel("number of estimators") plt.ylabel("1 - area under ROC") stop_early(GradientBoostingClassifier(**opts), n_min_iterations=100) Success ?!¶ Looks like it works. The EarlyStopping meta estimator suggests that around 210 trees is what is needed. While this is not in perfect agreement with what we found before, it is pretty good given how simple a "minimisation" strategy we use. Homework¶ Remember we split the dataset into three parts? We reserved one part as evaluation set. What is it good for? Should you quote the result from the test set as your best estimate of the performance of your classifier? Try and answer that question using the evaluation data set. Read my post on unbiased performance estimates if you need some inspiration. Taking it further¶ There are several ways to build on what we have created so far. Three I can think of: - try it with other types of estimators - better methods for dealing with the noise inherent to the score function - deal with very flat score functions One and two are connected. Using EarlyStopping with a RandomForestClassifier works, however the score as a function of the number of estimators is much more noisy. Right now this means quite a bit of hand tuning of the minimum number of iterations and required increase ( scale) to declare that we found a minimum. To get you started, here a little snippet of using EarlyStopping with a random forest. from sklearn.ensemble import RandomForestClassifier stop_early(RandomForestClassifier(n_estimators=400, max_depth=2), n_min_iterations=100) plt.grid() This post started life as a jupyter notebook, download it or view it online.
http://betatim.github.io/posts/stop-ensemble-growth-early/
CC-MAIN-2018-30
refinedweb
1,143
50.53
0 Members and 1 Guest are viewing this topic. oh, and default player is unlocked. Is this intended? Industrial demand is low, but as you say, it's realistic and I'm okay with it. Profits are slow to accumulate in this era no matter what you do. The downside that I find in playing 1750 games is that there is a very long stale period of several decades where ship traffic is the only feasible method of profit-making. Building canals of any size beyond limited runs will bankrupt you quickly. The monthly maintenance charge is the thing you need to keep an eye on the most. AP - are you sure that you do not have maximum wait for load set? once your boat makes that 80 hour trip across the map loaded with cargo, the profits are nice. You just have to have the capital to survive a year without income from those lines. Program received signal SIGSEGV, Segmentation fault.0x0000000000469a78 in dingliste_t::check_season (this=0x7e65e50, month=21017) at dataobj/dingliste.cc:14241424 if (!d->check_season(month)) { /** * true if tunnelboden (hence true also for tunnel mouths) * check for visibility in is_visible() */ inline bool ist_tunnel() const { return ( (get_typ()==tunnelboden) ); } bool check_season(const long /*month*/) { calc_bild(); return true; } if (!d->check_season(month)) {
https://forum.simutrans.com/index.php?topic=13186.0
CC-MAIN-2019-35
refinedweb
212
65.73
Scrap - Writing a spider to crawl a site and extract data - Exporting the scraped data using the command line - Changing spider to recursively follow links - Using spider arguments reading through Dive Into Python 3. Alternatively, you can follow the Python Tutorial. If you’re new to programming and want to start with Python, you may find useful the online book Learn Python The Hard Way. You can also take a look at this list of Python resources for non-programmers. Creating a project¶ Before you start scraping, you will have to set up a new Scrapy project. Enter a directory where you’d like to store your code and run: scrapy startproject tutorial This will create a tutorial directory Our first Spider¶ Spiders are classes that you define and that Scrapy uses to scrape information from a website (or a group of websites). They must subclass scrapy.Spider and define the initial requests to make, optionally how to follow links in the pages, and how to parse the downloaded page content to extract data. This is the code for our first Spider. Save it in a file named quotes_spider.py under the tutorial/spiders directory in your project: import scrapy class QuotesSpider(scrapy.Spider): name = "quotes" def start_requests(self): urls = [ '', '', ] for url in urls: yield scrapy.Request(url=url, callback=self.parse) def parse(self, response): page = response.url.split("/")[-2] filename = 'quotes-%s.html' % page with open(filename, 'wb') as f: f.write(response.body) self.log('Saved file %s' % filename) As you can see, our Spider subclasses scrapy.Spider and defines some attributes and methods: name: identifies the Spider. It must be unique within a project, that is, you can’t set the same name for different Spiders. start_requests(): must return an iterable of Requests (you can return a list of requests or write a generator function) which the Spider will begin to crawl from. Subsequent requests will be generated successively from these initial requests. parse(): a method that will be called to handle the response downloaded for each of the requests made. The response parameter is an instance of TextResponsethat holds the page content and has further helpful methods to handle it. The parse()method usually parses the response, extracting the scraped data as dicts and also finding new URLs to follow and creating new requests ( Request) from them. How to run our spider¶ To put our spider to work, go to the project’s top level directory and run: scrapy crawl quotes This command runs the spider with name quotes that we’ve just added, that will send some requests for the quotes.toscrape.com domain. You will get an output similar to this: ... (omitted for brevity) 2016-09-20 14:48:00 [scrapy] INFO: Spider opened 2016-09-20 14:48:00 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2016-09-20 14:48:00 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023 2016-09-20 14:48:00 [scrapy] DEBUG: Crawled (404) <GET> (referer: None) 2016-09-20 14:48:00 [scrapy] DEBUG: Crawled (200) <GET> (referer: None) 2016-09-20 14:48:01 [quotes] DEBUG: Saved file quotes-1.html 2016-09-20 14:48:01 [scrapy] DEBUG: Crawled (200) <GET> (referer: None) 2016-09-20 14:48:01 [quotes] DEBUG: Saved file quotes-2.html 2016-09-20 14:48:01 [scrapy] INFO: Closing spider (finished) ... Now, check the files in the current directory. You should notice that two new files have been created: quotes-1.html and quotes-2.html, with the content for the respective URLs, as our parse method instructs. Note If you are wondering why we haven’t parsed the HTML yet, hold on, we will cover that soon. What just happened under the hood?¶ Scrapy schedules the scrapy.Request objects returned by the start_requests method of the Spider. Upon receiving a response for each one, it instantiates Response objects and calls the callback method associated with the request (in this case, the parse method) passing the response as argument. A shortcut to the start_requests method¶ Instead of implementing a start_requests() method that generates scrapy.Request objects from URLs, you can just define a start_urls class attribute with a list of URLs. This list will then be used by the default implementation of start_requests() to create the initial requests for your spider: import scrapy class QuotesSpider(scrapy.Spider): name = "quotes" start_urls = [ '', '', ] def parse(self, response): page = response.url.split("/")[-2] filename = 'quotes-%s.html' % page with open(filename, 'wb') as f: f.write(response.body) The parse() method will be called to handle each of the requests for those URLs, even though we haven’t explicitly told Scrapy to do so. This happens because parse() is Scrapy’s default callback method, which is called for requests without an explicitly assigned callback. Extracting data¶ The best way to learn how to extract data with Scrapy is trying selectors using the shell Scrapy shell. Run: scrapy shell '' Note Remember to always enclose urls in quotes when running Scrapy shell from command-line, otherwise urls containing arguments (ie. & character) will not work. On Windows, use double quotes instead: scrapy shell "" You will see something like: [ ... Scrapy log here ... ] 2016-09-19 12:09:27 [scrapy] DEBUG: Crawled (200) <GET> (referer: None) [s] Available Scrapy objects: [s] scrapy scrapy module (contains scrapy.Request, scrapy.Selector, etc) [s] crawler <scrapy.crawler.Crawler object at 0x7fa91d888c90> [s] item {} [s] request <GET> [s] response <200> [s] settings <scrapy.settings.Settings object at 0x7fa91d888c10> [s] spider <DefaultSpider 'default' at 0x7fa91c8af990> [s] Useful shortcuts: [s] shelp() Shell help (print this help) [s] fetch(req_or_url) Fetch request (or URL) and update local objects [s] view(response) View response in a browser >>> Using the shell, you can try selecting elements using CSS with the response object: >>> response.css('title') [<Selector xpath='descendant-or-self::title' data='<title>Quotes to Scrape</title>'>] The result of running response.css('title') is a list-like object called SelectorList, which represents a list of Selector objects that wrap around XML/HTML elements and allow you to run further queries to fine-grain the selection or extract the data. To extract the text from the title above, you can do: >>> response.css('title::text').extract() ['Quotes to Scrape'] There are two things to note here: one is that we’ve added ::text to the CSS query, to mean we want to select only the text elements directly inside <title> element. If we don’t specify ::text, we’d get the full title element, including its tags: >>> response.css('title').extract() ['<title>Quotes to Scrape</title>'] The other thing is that the result of calling .extract() is a list, because we’re dealing with an instance of SelectorList. When you know you just want the first result, as in this case, you can do: >>> response.css('title::text').extract_first() 'Quotes to Scrape' As an alternative, you could’ve written: >>> response.css('title::text')[0].extract() 'Quotes to Scrape' However, using .extract_first() avoids an IndexError and returns None when it doesn’t find any element matching the selection. There’s a lesson here: for most scraping code, you want it to be resilient to errors due to things not being found on a page, so that even if some parts fail to be scraped, you can at least get some data. Besides the extract() and extract_first() methods, you can also use the re() method to extract using regular expressions: >>> response.css('title::text').re(r'Quotes.*') ['Quotes to Scrape'] >>> response.css('title::text').re(r'Q\w+') ['Quotes'] >>> response.css('title::text').re(r'(\w+) to (\w+)') ['Quotes', 'Scrape'] In order to find the proper CSS selectors to use, you might find useful opening the response page from the shell in your web browser using view(response). You can use your browser developer tools or extensions like Firebug (see sections about Using Firebug for scraping and Using Firefox for scraping). Selector Gadget is also a nice tool to quickly find CSS selector for visually selected elements, which works in many browsers. XPath: a brief intro¶ Besides CSS, Scrapy selectors also support using XPath expressions: >>> response.xpath('//title') [<Selector xpath='//title' data='<title>Quotes to Scrape</title>'>] >>> response.xpath('//title/text()').extract_first() 'Quotes to Scrape' XPath expressions are very powerful, and are the foundation of Scrapy Selectors. In fact, CSS selectors are converted to XPath under-the-hood. You can see that if you read closely the text representation of the selector objects in the shell. While perhaps not as popular as CSS selectors, XPath expressions offer more power because besides navigating the structure, it can also look at the content. Using XPath, you’re able to select things like: select the link that contains the text “Next Page”. This makes XPath very fitting to the task of scraping, and we encourage you to learn XPath even if you already know how to construct CSS selectors, it will make scraping much easier. We won’t cover much of XPath here, but you can read more about using XPath with Scrapy Selectors here. To learn more about XPath, we recommend this tutorial to learn XPath through examples, and this tutorial to learn “how to think in XPath”. Extracting data in our spider¶ Let’s get back to our spider. Until now, it doesn’t extract any data in particular, just saves the whole HTML page to a local file. Let’s integrate the extraction logic above into our spider. A Scrapy spider typically generates many dictionaries containing the data extracted from the page. To do that, we use the yield Python keyword in the callback, as you can see below:(), } If you run this spider, it will output the extracted data with the log: 2016-09-19 18:57:19 [scrapy] DEBUG: Scraped from <200> {'tags': ['life', 'love'], 'author': 'André Gide', 'text': '“It is better to be hated for what you are than to be loved for what you are not.”'} 2016-09-19 18:57:19 [scrapy] DEBUG: Scraped from <200> {'tags': ['edison', 'failure', 'inspirational', 'paraphrased'], 'author': 'Thomas A. Edison', 'text': "“I have not failed. I've just found 10,000 ways that won't work.”"} Storing the scraped data¶ The simplest way to store the scraped data is by using Feed exports, with the following command: scrapy crawl quotes -o quotes.json That will generate an quotes.json file containing all scraped items, serialized in JSON. For historic reasons, Scrapy appends to a given file instead of overwriting its contents. If you run this command twice without removing the file before the second time, you’ll end up with a broken JSON file. You can also used other formats, like JSON Lines: scrapy crawl quotes -o quotes.jl The JSON Lines format is useful because it’s stream-like, you can easily append new records to it. It doesn’t have the same problem of JSON when you run twice. Also, as each record is a separate line, you can process big files without having to fit everything in memory, there are tools like JQ to help doing that at the command-line. In small projects (like the one in this tutorial), that should be enough. However, if you want to perform more complex things with the scraped items, you can write an Item Pipeline. A placeholder file for Item Pipelines has been set up for you when the project is created, in tutorial/pipelines.py. Though you don’t need to implement any item pipelines if you just want to store the scraped items. Following links¶ Let’s say, instead of just scraping the stuff from the first two pages from, you want quotes from all the pages in the website. Now that you know how to extract data from pages, let’s see how to follow links from them. First thing is to extract the link to the page we want to follow. Examining our page, we can see there is a link to the next page with the following markup: <ul class="pager"> <li class="next"> <a href="/page/2/">Next <span aria-→</span></a> </li> </ul> We can try extracting it in the shell: >>> response.css('li.next a').extract_first() '<a href="/page/2/">Next <span aria-→</span></a>' This gets the anchor element, but we want the attribute href. For that, Scrapy supports a CSS extension that let’s you select the attribute contents, like this: >>> response.css('li.next a::attr(href)').extract_first() '/page/2/' Let’s see now our spider modified to recursively follow the link to the next page, extracting data from it:(), } next_page = response.css('li.next a::attr(href)').extract_first() if next_page is not None: next_page = response.urljoin(next_page) yield scrapy.Request(next_page, callback=self.parse) Now, after extracting the data, the parse() method looks for the link to the next page, builds a full absolute URL using the urljoin() method (since the links can be relative) and yields a new request to the next page, registering itself as callback to handle the data extraction for the next page and to keep the crawling going through all the pages. What you see here is Scrapy’s mechanism of following links: when you yield a Request in a callback method, Scrapy will schedule that request to be sent and register a callback method to be executed when that request finishes. Using this, you can build complex crawlers that follow links according to rules you define, and extract different kinds of data depending on the page it’s visiting. In our example, it creates a sort of loop, following all the links to the next page until it doesn’t find one – handy for crawling blogs, forums and other sites with pagination. More examples and patterns¶ Here is another spider that illustrates callbacks and following links, this time for scraping author information: import scrapy class AuthorSpider(scrapy.Spider): name = 'author' start_urls = [''] def parse(self, response): # follow links to author pages for href in response.css('.author+a::attr(href)').extract(): yield scrapy.Request(response.urljoin(href), callback=self.parse_author) # follow pagination links next_page = response.css('li.next a::attr(href)').extract_first() if next_page is not None: next_page = response.urljoin(next_page) yield scrapy.Request(next_page, callback=self.parse) def parse_author(self, response): def extract_with_css(query): return response.css(query).extract_first().strip() yield { 'name': extract_with_css('h3.author-title::text'), 'birthdate': extract_with_css('.author-born-date::text'), 'bio': extract_with_css('.author-description::text'), } This spider will start from the main page, it will follow all the links to the authors pages calling the parse_author callback for each of them, and also the pagination links with the parse callback as we saw before. The parse_author callback defines a helper function to extract and cleanup the data from a CSS query and yields the Python dict with the author data. Another interesting thing this spider demonstrates is that, even if there are many quotes from the same author, we don’t need to worry about visiting the same author page multiple times. By default, Scrapy filters out duplicated requests to URLs already visited, avoiding the problem of hitting servers too much because of a programming mistake. This can be configured by the setting DUPEFILTER_CLASS. Hopefully by now you have a good understanding of how to use the mechanism of following links and callbacks with Scrapy. As yet another example spider that leverages the mechanism of following links, CrawlSpider class for a generic spider that implements a small rules engine that you can use to write your crawlers on top of it. Also, a common pattern is to build an item with data from more than one page, using a trick to pass additional data to the callbacks. Using spider arguments¶ You can provide command line arguments to your spiders by using the -a option when running them: scrapy crawl quotes -o quotes-humor.json -a tag=humor These arguments are passed to the Spider’s __init__ method and become spider attributes by default. In this example, the value provided for the tag argument will be available via self.tag. You can use this to make your spider fetch only quotes with a specific tag, building the URL based on the argument: import scrapy class QuotesSpider(scrapy.Spider): name = "quotes" def start_requests(self): url = '' tag = getattr(self, 'tag', None) if tag is not None: url = url + 'tag/' + tag yield scrapy.Request(url, self.parse) def parse(self, response): for quote in response.css('div.quote'): yield { 'text': quote.css('span.text::text').extract_first(), 'author': quote.css('span small a::text').extract_first(), } next_page = response.css('li.next a::attr(href)').extract_first() if next_page is not None: next_page = response.urljoin(next_page) yield scrapy.Request(next_page, self.parse) If you pass the tag=humor argument to this spider, you’ll notice that it will only visit URLs from the humor tag, such as. You can learn more about handling spider arguments here. Next steps¶ This tutorial covered only the basics of Scrapy, but there’s a lot of other features not mentioned here. Check the What else? section in Scrapy at a glance chapter for a quick overview of the most important ones. You can continue from the section Basic concepts to know more about the command-line tool, spiders, selectors and other things the tutorial hasn’t covered like modeling the scraped data. If you prefer to play with an example project, check the Examples section.
http://doc.scrapy.org/en/1.2/intro/tutorial.html
CC-MAIN-2019-39
refinedweb
2,920
62.88
ROS 2 Eloquent Segmentation fault (core dumped) I have a ROS 2 node running on my local network sending image data to another ROS 2 node running on a VM in the cloud. Both nodes run on Ubuntu 18.04 and ROS eloquent.To test the functionality, I use this ROS Node:... def publish_random_images(self): for i in range(200, 800, 25): image_np = np.uint8(np.random.randint(0, 255, size=(i, i, 3))) # Convert the image to a message time_msg = self.get_time_msg() img_msg = self.get_image_msg(image_np, time_msg) #if self.calib: # camera_info_msg = self.get_camera_info(time_msg) #self.camera_info_publisher_.publish(camera_info_msg) self.image_publisher_.publish(img_msg) self.get_logger().info("Published image of size: " + str(i) + " x " + str(i)) time.sleep(1) So I'm sending Random Pictures with Increasing Size. I can receive the image data up to a size of 320x320 pixel on the node in the cloud. All images that exceed this size are no longer received and I get the following error message: Segmentation fault (core dumped) I have increased the value of the ipfrag_high_thresh and implemented the Quality of Service settings as seen in the linked file. Without the Qos settings I could transfer a maximum of 270x270 pixels. I have also tried cyclonedds, but could not achieve better performance with. Does anyone have an idea what I could change to increase the transmittable image size? Or what's causing the core dump. I am not 100% sure on this, but I suspect permuting the image size of a topic is not a wise move. I think someone with more knowledge of topic internals would have more to say on this. When I've seen this done it has been through dynamic reconfigure or having a service that adjusts the topic. Why are you doing this? It seems like something that would only be useful for exhaustive testing / fuzzing. It's actually more of a test. I have first tried to transfer pictures with the size 640x480 pixels. The topic of the image data was available in the VM, but no data was transferred on the topic. So I changed the image sizes and looked if this is responsible for the behaviour. I guess I didn't properly phrase the question. I want to increase the maximum transmitted image size and find out what is causing the segmentation fault. In the actual application, fixed image sizes are transferred. The shown application should only show up to which image size data is transferred. I hope this makes it clearer Those seem like two distinct problems. I am sure there is probably a maximum image size, but it is probably quite large. The QoS requirements might change that but I feel like you should be able to back calculate that from the bandwidth. The segfault on changing the topic size is a whole different story. Does this occur when there are no QoS settings?
https://answers.ros.org/question/358609/ros-2-eloquent-segmentation-fault-core-dumped/
CC-MAIN-2020-40
refinedweb
483
66.94
Mac OS X Proxy Testing & Switching I recently did some work with some PVAs from XGC Media. The list of 200 accounts they sent me only had 16 working accounts in it. Within a day they had a new list of 200 for me that was perfect. I'm impressed. But the real reason for this post is that to work with these accounts it requires a large number of proxy servers and an efficient way to test them, configure them , and then reset safari so you don't get narc'ed by cookies. Below is the source code I used to automate all this by command line on OS X Lion. No need to buy proxy switching software on a mac. First you need a web page that tells you your IP address and nothing else. Code: <?php // myip.php header('Content-type: text/plain'); $ip = getenv("REMOTE_ADDR"); Echo $ip; ?> Next we need a shell script called "randomproxy" to execute from the terminal window. Code: #!/bin/tcsh set gateway = `curl -s ""` set num = `jot -r 1 1 25` set port = 60001 set login = "username" set password = "password" if ($num == 1) set pick = 100.200.300.1 else if ($num == 2) set pick = 100.200.300.2 else if ($num == 3) set pick = 100.200.300.3 else if ($num == 4) set pick = 100.200.300.4 else if ($num == 5) set pick = 100.200.300.5 endif set proxy = "http://"$login":"$password"@"$pick":$port" set unused = `curl -s -x "$proxy" "" -o "test.txt"` set test = `more test.txt` if ($test == $gateway) then echo "FAILED: chose $pick and got $test and gateway $gateway" else if ($pick == $test) then echo "PASSED: chose $pick and got $test and gateway $gateway" networksetup -setwebproxystate "Wi-Fi" On networksetup -setsecurewebproxystate "Wi-Fi" On networksetup -setwebproxy "Wi-Fi" $pick $port On $login $password networksetup -setsecurewebproxy "Wi-Fi" $pick $port On $login $password ./resetsafari else echo "FAILED: chose $pick and got $test and gateway $gateway" endif Remember to "chmod 755 randomproxy" this script! Then we need an applescript file called "resetsafari" to automate resetting Safari Code: #!/usr/bin/osascript tell application "System Events" tell process "Safari" set frontmost to true click menu item "Reset Safari?" of menu 1 of menu bar item "Safari" of menu bar 1 --delay 1 --may be uncommented if needed click button "Reset" of window 1 end tell end tell Remember to "chmod 755 resetsafari" this script too! Then optionally we need a script called "disableproxies" to turn the proxy config off when we are done Code: networksetup -setwebproxystate "Wi-Fi" Off networksetup -setsecurewebproxystate "Wi-Fi" Off Remember to "chmod 755 disableproxies" this script too! In a Terminal Window: To pick a random proxy, test it, and configure it in your network settings if it passes and reset Safari just type: Code: ./randomproxy To disable proxy settings when you are done just type: Code: ./disableproxies
https://www.blackhatworld.com/seo/mac-os-x-proxy-switcher-for-free.362973/
CC-MAIN-2018-13
refinedweb
482
69.62
Here are the highlights of what’s new and improved in 7.16. For detailed information about this release, check out the release notes. Other versions: 7.15 | 7.14 | 7.13 | 7.12 | 7.11 | 7.10 | 7.9 | 7.8 | 7.7 | 7.6 | 7.5 | 7.4 | 7.3 | 7.2 | 7.1 | 7.0 Upgrade Assistant for 8.xedit Upgrade Assistant is your one-stop shop to help you prepare for upgrading to 8.x. Review and address Elasticsearch and Kibana deprecation issues, analyze Elasticsearch deprecation logs, migrate system indices, and back up your data before upgrading, all from this app. Unified integrations viewedit All ingest options for Elastic have been moved to a single Integrations view. This provides a more consistent experience for onboarding to Elastic and increases the discoverability of integrations. All entry points for adding integrations now route to this page. Reference lines in Lensedit Reference lines are now available in Lens to help you easily identify important values in your visualizations. Create reference lines with static values, dynamic data using Elasticsearch Quick Functions, or define with a custom Formula. Reference lines can come from separate index patterns, such as a goal dataset that is independent of the source data. With reference lines, you can: - Track metrics against goals, warning zones, and more. - Add display options, such as color, icons, and labels. Apply color to the area above or below the reference line. Enhancements to visualization editorsedit Kibana offers even more ways to work with your visualizations: - Apply custom field formats in TSVB. Take advantage of the field formatters from your index pattern in TSVB—or override the format for a specific visualization. - Filter in TSVB. You always had the ability to ignore global filters in TSVB layers, and now you can also change them. This makes it easier to explore your data in TSVB without having to edit the filters for each series. - View data and requests in Lens. View the data in visualizations and the requests that collected the data right in the Lens editor. - View requests in Console. View the requests that collect the data in visualizations in Console. - Auto fit rows to content. Automatically resize Aggregation-based data table rows so that text and images are fully visible. New and updated connectors in Alertingedit Alerting has grown its collection of connectors with the addition of the ServiceNow ITOM connector, which allows for easy integration with ServiceNow event management. In addition, Kibana provides a more efficient integration for ServiceNow ITSM and SecOps connectors with certified applications on the ServiceNow store. Also added is the ability to authenticate the email connector with OAuth 2.0 Client Credentials for the MS Exchange email service. Rule duration on displayedit In Rules and Connectors, the Rules view now includes the rule duration field, which shows how long a rule is taking to complete execution. This helps you identify rules that run longer than you anticipate. You can observe the duration for the last 30 executions on the rules detail page. Osquery Manager now generally availableedit With the GA release, Osquery Manager is now fully supported and available for use in production. In addition, the 7.16 release offers the following new capabilities: - Customize the osquery configuration. - Map saved query results to ECS. - Test out queries when editing saved queries. - Map static values to ECS. - Schedule query packs for one or more agent policies. - Set custom namespace values for the integration. - Query three new Kubernetes tables. For more information, refer to Osquery. Transform health alerting rulesedit [beta] This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. A new rule type notifies you when continuous transforms experience operational issues. It enables you to detect when a transform stops indexing data or is in a failed state. For more details, refer to Generating alerts for transforms.
https://www.elastic.co/guide/en/kibana/7.16/whats-new.html?elektra=stack-and-cloud-7-16-blog
CC-MAIN-2022-27
refinedweb
673
58.08
Thread Synchronization Deadlock Summary: By the end of this tutorial "Thread Synchronization Deadlock", you will come to know what is deadlock and how to avoid it. Synchronization is explained in simple terms and code. Start reading. Introduction Thread Basics Generally, one thread accesses one source of data (or to say, its own source) like your thread accesses your bank account. Sometimes, it may be needed multiple threads to access the same source of data like all the partners of a business may be acting on the joint bank account at the same time. When multiple threads access or share, there may be data corruption or inconsistency. To avoid data inconsistency, a concept called synchronization comes. The literature meaning of synchronization is putting the events in an order. synchronized is a keyword of Java used as access modifier. A block of code or a method can be synchronized. The advantage of synchronization is only one thread is allowed to access the source and other threads are made to wait until the first thread completes its task (of accessing the source). This is known as a thread safe operation. Synchronization is a way of inter-thread communication (another is join()). In the following snippet of code, whole method is synchronized. In the following snippet, only one line of code is synchronized and all the remaining code is untouched (not synchronized). Synchronization makes a program to work very slow as only one thread can access the data at a time. Designers advice to synchronize only the important and critical statements of the code. Do not synchronize unnecessarily. In the above code, first statement is critical and the second is not. For this reason, in the second snippet, only first statement is synchronized. This increases the performance to some extent. More on Thread Synchronization Deadlock – Thread Regulation To explain the affect of synchronization, a small program is taken where 100 threads are created and iterated 10,000 times each. Each iteration increments the variable k by one and decrements by one. That is, at the end, the value should not be changed; should remain as zero, the default value for an unassigned instance variable. Output screen on Thread Synchronization Deadlock An array of 100 threads by name bundle is created. In a for loop they are initialized and started. In each iteration k is incremented and at the same time decremented, so that the value of k is unchanged. The screenshot displays k is always 0 for any number times of execution. Just remove, the synchronized(this) statement and observe different outputs. Why this happens? How synchronization mechanism works internally? Let us see. wait(), notify() and notifyAll() These methods are defined in Object class inherited by every class so that every Java class can make use of these methods. When a thread accesses the synchronized code, the JVM calls wait() method. With this method call, the resource is locked so that no other thread is allowed to access. All the threads requiring the resource should wait in a queue outside. When the accessing thread job is over, it calls notify() or notifyAll() method and thus relinquishes the lock. Now JVM allows one of the waiting threads to access the resource. Again this thread calls wait() method and locks. As only one thread is allowed to access, the program runs very slow. wait(), notify() and notifyAll() methods are useful with synchronization only; outside they do not have any meaning. The electronics people say, the synchronization is nothing but applying monitor concept. Remember, Vector methods are synchronized and ArrayList methods are not. Similarly, StringBuffer methods are synchronized and SringBuilder methods are not. Now the programmer has got choice to choose depending on the code necessity. Synchronization mechanism is very useful to control the scheduling of threads. Starvation and Deadlock These are the two problems that may creep into the programming code if the programmer is not aware of these concepts. - Your style of programming should allow fairly each thread to get microprocessor time; this is known as fair system. A fair system does not allow a thread to get starved of processor time. A starved thread is the one that does not get processor for a long time. A starved thread cannot give the output fast. - After starvation, another problem with threads is deadlock. When one thread depends on another for their execution, a deadlock occurs. When two threads are holding locks on two different resources, one thread would like to have other's source, a deadlock occurs. Deadlock is the result of poor programming code and is not shown by a compiler or execution environment as an exception. When exists, the program simply hangs and programmer should take care of himself. Note: A class as a whole cannot be synchronized. That is, synchronized class declaration raises compilation error. Thanks for finally talking about >Thread Synchronization Deadlock <Loved it! Why wait,notify,notifyall comes under object not in thread To be accessed by all objects. Locks are placed on objects. Gud Afternoon Respected Sir The above program has an Logical Error. I run it and tested it. The synchronized block never runs and the println statement in the last prints default value 0; Sir, I made following program which never printed any thing. Sir, please also explain why anonymous block is not being executed. public class TestAgain implements Runnable { public void run() { { System.out.println(“DN follows way2java.com”); } } public static void main(String[] args) throws Exception { SynchDemo sd = new SynchDemo(); Thread bundle[] = new Thread[101]; for(int i = 0; i < bundle.length; i++) { bundle[i] = new Thread(sd); bundle[i].start(); } } } Regards Dn SIr, it is said above that wait() method is called by JVM to put a lock on the synchronized block (guarded block). And when the execution is over, it (the thread) will invoke either notify() or notifyAll() method to release the lock thus to allow other thread to gain the lock. But in docs.oracle.com pages for multi threading, it is said that the wait() method suspends the execution of a current thread until it receives a notifyAll() message from other thread to gain lock. “When wait is invoked, the thread releases the lock and suspends execution. At some future time, another thread will acquire the same lock and invoke Object.notifyAll, informing all threads waiting on that lock that something important has happened:” This is really confusing for me. Will you please clarify my doubt ? Anyhow, you have understood my notes. It enough to know on synchronization. Hello Sir, can we use synchronized keyword even for objects also ?? Synchronized is applied on methods and block of statements but not on classes and objects. Ultimately, synchronized work on thread objects only. sir, can u explain what is object lock and class lock in detail??? A Java class cannot be locked. “synchronized” keyword cannot be applied to a class and can be applied to a block of statements or a method. The thread object calling the method will be locked. sir what are the topics comes under advanced java??? and core java??? Sir you explained very well, i understand the topic .sir can you tell me by doing spring in java can we get a job, or servlet and jsp. sir tell me which on is best. Servlets and JSP will get job if you are a fresher. Spring and Hibernate are required for experienced. wait(), notify() and notifyAll() not understand? Just take the help of of programmer or lecturer. If you are at Hyd, meet me. I’m not understand Thread Snchronization from above explantion………….please explain it with simple explain? Synchronization is such a subject that requires the help of a lecturer or programmer etc whole knows Java already. why wait(), notify() and notifyAll() are the method of object class, not of Thread class sir as a fresher i got good knowledge from way2java.com but i want some real time projects with good explanation as in way2java.com, ” because there are no calls for freshers so i want to get a good experience from your real time projects, to get the job on core java” Very few jobs are available on only Core Java (Android requires only Core Java). You must learn servlets and JSP, the easiest way of getting job. My site does not deal with projects. Thanks sir to giving this wonderful site.Its awesome site to all java guys…most experienced guys also need some fine tune….. guys go through this site and make your self more brilliant.. sir what is interrupt(), wait(), and notify() can you please explain it with example Refer way2java:,this link: sir can u plz tell me through a small program that how can b wait and notify used without synchronization Iam new to Java and learning it.. This website is really helpful for me to understand easily. If you are very new, you need some guidance from a person. Thread Is Abstract class But How To create Object For that Abstract Class Thread is not abstract class. Following is the class signature of Thread class as defined in java.lang package. public class java.lang.Thread extends java.lang.Object implements java.lang.Runnable This is really good website for Java students as it s described each and every topic in detailed way. Thank u so much sir for providing such a good explanation on Java. Nice, thanks for your compliment. Advice your friends also to go through this web site. way2java…………. is superb website to learn java.All the concepts were explained here easy manner and everybody can understand those concepts…………… Thanks. Utmost care is taken to write and still revision is pending. This is a good explanation about each topic of Java.Someone can easily answer to difficult questions. With regards,
https://way2java.com/multithreading/synchronization-and-deadlock/
CC-MAIN-2017-39
refinedweb
1,633
66.74
From: PJ Waskiewicz <peter.p.waskiewicz.jr@intel.com>This patch adds the CPU index to the upper 8 bits of the queue_mapping inthe skb. This will support 256 CPUs and 256 Tx queues. The reason forthis is the qdisc layer can obscure which CPU is generating a certain flowof packets, so network drivers don't have any insight which CPU generated aparticular packet. If the driver knows which CPU generated the packet,then it could adjust Rx filtering in the hardware to redirect the packetback to the CPU who owns the process that generated this packet.Preventing the cache miss and reschedule of a process to a different CPU isa big win in network performance, especially at 10 gigabit speeds.Signed-off-by: PJ Waskiewicz <peter.p.waskiewicz.jr@intel.com>Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>--- net/sched/sch_prio.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-)diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.cindex 4aa2b45..84bbd10 100644--- a/net/sched/sch_prio.c+++ b/net/sched/sch_prio.c@@ -86,6 +86,7 @@ prio_enqueue(struct sk_buff *skb, struct Qdisc *sch) } #endif + skb->queue_mapping |= (get_cpu() << 8); if ((ret = qdisc->enqueue(skb, qdisc)) == NET_XMIT_SUCCESS) { sch->bstats.bytes += skb->len; sch->bstats.packets++;
https://lkml.org/lkml/2008/6/24/462
CC-MAIN-2016-36
refinedweb
210
50.02
Release Notes for the Service Bus April 2014 Release Updated: April 3, 2014 These release notes will be updated periodically. For information about Windows Azure Service Bus pricing, see the Service Bus Pricing FAQ topic. What’s New For a list of new features introduced in the Service Bus April 2014 release, see What's New in the Azure SDK 2.3 Release (April 2014). Prerequisites Account Requirements Before running Service Bus applications, you must create one or more service namespaces. To create and manage your service namespaces, log on to the Azure Management portal, and click Service Bus. For more information, see How To: Create or Modify a Service Bus Service Namespace. SDK Samples. Runtime Requirements. Quotas For information about quotas for Service Bus, see Azure Service Bus Quotas.
http://msdn.microsoft.com/library/azure/hh667331.aspx
CC-MAIN-2014-23
refinedweb
130
55.34
- Small and seeminly easy program won't compile.........Help!!!!!!! - C++ Compiler - Finding GCF - errno, rmdir - Can you send files over TCP IP with Winsock or the CAsynSocket Class? - Error handling, thw With statement and Distributing - simple program - Floating point exceptions - Please Help Me Soon - Simple Text String - Is this good enough to start with? - Simple Parallel Port Control - plz look and help me im desperate - What is this? - inheritance - Getprocesses(); - C++ header files - Recursions....please help - CDS_FULLSCREEN in OpenGL - namespace warnings - Inserting to the front of a file - Internet Programming - Read a Text File - Class rectangle/need help thanks!!! - Opening files with variable name? - Pretty Print Program - Combo Box Extended - Interrupt??? - Wiping the Screen - Colour theory... (weird title) - I want to get back on the track.. - Need HELP with this simple C++ program - Repetitive Bug with C++ Program - Needs Help - recording in C++ - Question about "Answer Key" Program - Help writing directly to Hardware port in Borland Visual C++ 5 - get() --Confused at usage - Sleep on Bloodshed Dev C++ - *.DAT files? - Regedit Problem in C++ - Exception Numbers (4343904 (0x424860)) - file streams - Compile Error: Unterminated character constant?? - Newbies about array - reference vs. const reference - How long have you been with c++? - Timing in C/C++ - Bookes - Can someone help me with TCP/IP? - Logic doesn't equal
http://cboard.cprogramming.com/sitemap/f-3-p-965.html?s=eb29dfff16ef67cdb73a26fd21fd61cf
CC-MAIN-2014-15
refinedweb
215
66.74
#include <Kokkos_Time.hpp> List of all members. The Kokkos_Time class is a wrapper that encapsulates the general information needed getting timing information. Currently it return the elapsed time for each calling processor.. A Kokkos_Comm object is required for building all Kokkos_Time objects. Kokkos_Time support both serial execution and (via MPI) parallel distributed memory execution. It is meant to insulate the user from the specifics of timing across a variety of platforms. Kokkos_Time Destructor. Completely deletes a Kokkos_Time object. Kokkos_Time elapsed time function. Returns the elapsed time in seconds since the timer object was constructed, or since the ResetStartTime function was called. A code section can be timed by putting it between the Kokkos_Time constructor and a call to ElapsedTime, or between a call to ResetStartTime and ElapsedTime. Kokkos_Time function to reset the start time for a timer object. Resets the start time for the timer object to the current time A code section can be timed by putting it between a call to ResetStartTime and ElapsedTime.
http://trilinos.sandia.gov/packages/docs/r8.0/packages/kokkos/doc/html/classKokkos_1_1Time.html
CC-MAIN-2013-48
refinedweb
166
57.47
Technical Article A Catch from the Codeshare: DNS Response Translation Updated 04-Nov-2015•Originally posted on 04-Nov-2015 by Jason Rahm F5 article application delivery big-ip dns devops dns irules. DNS Response Translation This solution from MVP Hamish Marson utilizes the local traffic manager aspect of iRules to service the global traffic manager response data. In fact, the iRule doesn’t care about the request at all. It looks for two things, the answer to the request (one or more addresses) and the client IP making the request. There is a lot more included in the rule (variable setting, HSL logging, comments) but here's the meat of it: when DNS_RESPONSE { set rrs [DNS::answer] foreach rr $rrs { if { [DNS::type $rr] == "A" } { set dstip [DNS::rdata $rr] set gteAddClass [class match -value [IP::client_addr] equals $LOCAL_CLASS_ADDRESS ] switch -exact $gteAddClass { internal { set transIP [class match -value $dstip equals $GTE_CLASS_INTERNAL] } 3dp { set transIP [class match -value $dstip equals $GTE_CLASS_3DP] } default { set transIP [class match -value $dstip equals $GTE_CLASS_INTERNET] } } if { $transIP ne "" } { DNS::rdata $rr $transIP return } } } } Thanks to the DNS services profile, you no longer need to roll your own decodes with binary scan, you can use the DNS:: namespace iRules commands. Because more than one answer could be returned, the foreach loop is utilized to iterate through the list. In the loop, a check for an A record is made, and then a couple variables are setup for the client IP and the GTM response address. Then, the switch sets the translations with data from the data groups (variables established in the full rule linked above) before the resource record data is updated by the DNS::rdata command. Clever solution, Hamish! What other catch from the codeshare would you like to see highlighted? Post a comment below to let me know! View all articles in this series A Catch From the Codeshare - Web UI TweaksA Catch from the Codeshare: Let's EncryptA Catch from the Codeshare: Python iRule DeploymentsA Catch from the Codeshare: Network Translations 0 Ratings Log in to rate this content Print Download Share
https://devcentral.f5.com/articles/a-catch-from-the-codeshare-dns-response-translation
CC-MAIN-2018-17
refinedweb
349
51.72
Architecture :: Creating Each Class Library Project For Interface?Jan 11, 2010 weather to create each class libaray project for Interface, Service, Model, DTO, Repository ?View 1 Replies weather to create each class libaray project for Interface, Service, Model, DTO, Repository ?View 1 Replies how to create a class library file and how to implement in web forms?View 3 Replies I have a Web project in which I defined some methods to parse some webpages. I then save the retrieved data in a DB. Now I have run a lengthy operation(it may take 2 days) in which I continously parse webpage this way. Can I use the webproject from a Windows project. If I try to add a refference to it it doesn't work. How should I do it? I have used until now an asp.net application, and done the calls to the parsing methods in the pageLoad event but after an hour or so the process stops. We are consuming third Web services. Instead of using datatypes from web service proxy we need to create our own class which will take the values from web service output. Since the web service data types are so much deep,we are facing lots of problems to create our cutom classes. Is there any tool available to create classes directly from web services in ordered way.... I am developing a project using Visual Studio 2008 in C#, I have added the class library within the project and set up the relevant references and used the relevant using statements. this is the error message: Error 28 The type or namespace name 'Domain' does not exist in the namespace 'Forestry.SchoolLibrary' (are you missing an assembly reference?) C:ProjectsSchoolofForestry runkSourceSchoolAccountPresenterEditAccountPresenter.cs 26 40 School these are my using statements: using System; using System.Collections.Generic; using System.Configuration; using System.Linq; using System.Data.Linq; using System.Text; using System.Xml; using Forestry.SchoolLibrary.Core.Domain; using Forestry.SchoolLibrary.Properties; I'm working on an internet application that has been set up as a web SITE project (I know...) in Visual Studio. I need to add additional features/functionality so have added a class library to the project and referred to it in the main web site project. The issue now arises because I need to make use of core objects which live inside the App_Code directory in the web site project but this project doesn't appear to expose its DLL like web app/ code library projects do. Because of this I can't add a reference to the web site project in the class library to leverage the common site-wide code/objects. I can't move the stuff out of App_Code so I'm looking for a way to refer to the website project dll from the new class library. I have a .net project whose part of the functionality is xls transformation. I now want to move that functionality away from the main project to a separate class library project. I have no problems moving classes and accessing them, but part of the resources that I wanted to move were xslt files that define the transformations. Those files were originally located in the special asp.net folder: App_Data and then accessed using Server.MapPath(~/App_Data/XSLT/anXsltFile.xslt) I wanted to move those files to that separate library as well, but not sure how to approach it, and how to access those files within the class library. I've created class library project and added service reference to webservice. When i try to use webservice object am not able to access webservice methods. myservice proxy=new myservice(); proxy.( no methods are coming)? I have a custom gridview control which is a class library project.I want to be able to specify the css class for the gridview which I can do.But I also want to set a default css stylesheet to the gridview. So if I don't override it using the cssclass property it must get it's cssclass from the default stylesheet I have in my project. How can I specify the default stylesheet. I have a stylesheet in a folder called "Styles" in my project. I have one C# windows class library type project which has a web reference whose settings are automatically added in to settings.settings file( Webservice name, type, scope, value). I also have one separate ASP.net project which has let us as suppose, one page with a button. When I click this button, it calls a method of referred DLL of previous project and control goes there. When in that project, when control goes in to location where I use this web reference, it tries to get the URL of web reference from settings file and raises an exception - 'The current configuration system does not support user-scoped settings'. So, What is the way for me to call a c# project having web service in a asp.net project ? This is really a source stopper for our development.. I am implementing a custom membership and role providers where I need to store all the role/membership information in the user's session. I am implementing these custom providers inside a class library project (different from the website project) and need to access the session in them. The idea is to store the role/membership related information in the session after retrieving them for the first time from the database. When I try to access the Session using System.Web.HttpContext.Current.Session I get this as a null object (Object reference not set to an instance of an object. Why is the session turning out to be null? Does anyone know what could be the reason I am not able to add reference to System.Web. It gives an exclamation(!) symbol in front of the dll under reference folder in VS 1020. I wand to refer to System.Web.Security.MembershipProvider class.View 1 Replies How to access proxy class of a web service which has been referenced in the website project of a solution within a class library project in the same solution? I mean no web-service reference/setting is added to the class library and instead it needs to be picked from the web project. I've got an ASP.NET 4 web forms application I'm building in Studio 2010. The solution has two projects. A web forms application and a class library. I have added a reference to the class libraryr project in the web forms project, but when I go a code behind page in the web forms project, it does not recognize the class from the class library and throws the "type or namespace <class> could not be found ...." error. Adding a using <class library projec> does not work either as it does not recognize that. What am I missing?. I read why this happens in a book a while ago but can't quite remember now. Below is my class file. [Code].... i have the following code: [Code].... I was trying to implement as above. How can i call this class from the code behind of my page, and call the methods Open and Close connection ? How do i instantiate the object ? Using Visual Studio 2005 I have list of class files, when i try to run the class files, it showing error as "a project with output type of class library cannot be started directly" How to run the class file? How to create a dll? When to use Abstract class and when to use Interface class.View 10 Replies I am searching for the advantages and disadvantages of the explicit interface implementationView 1 Replies
http://asp.net.bigresource.com/Architecture-creating-each-class-library-project-for-Interface--AlUaGBZDH.html
CC-MAIN-2018-47
refinedweb
1,286
74.19
#define is a C preprocessor. There are other ways of making the best usage of #define for writing efficient programs , for the purpose of testing and debugging, for deciding the course of action of the programs When we first learn about #define, the commonly said example is define some commonly used values like PI, NAME_LEN ,… But there are other uses of #define. Let me show some aspects of #define The general syntax of #define #define identifier(identifier,..., indentifier) token_sequence 1. It is not necessary that all the identifiers used in token sequence is already defined #define A B+20 #define B 10 As you can see that B was not defined when A is being defined. So the only condition is that we need to define it somewhere. 2. Using gcc, you can define macros and decide the course of your programming. For example, if I wish to perform debugging. I will define __DEBUG__ through gcc.To define macros using gcc, compile using the D option of gcc gcc -Dmacro[=defn]...] The following code snippet depicts this #include <stdio.h> /*This program shows how to make the use of #define to define the course *of programming */ #define A B+20 /B is not defined here, but will be defined later/ #define B 10 int main() { #ifdef __DEBUG__ printf("__DEBUG__ DEFINED\n"); printf("%d\n",A); #else printf("__DEBUG__ not defined\n"); printf("%d\n",B); #endif return 0; } If I compile $gcc test.c Output: __DEBUG__ not defined 10 If I compile $gcc -D __DEBUG__ test.c Output: __DEBUG__ defined 30 Thus we can see how the C Preprocessors also help in deciding the course of action of a program. The above is also called Conditional Compilation
https://linuxprograms.wordpress.com/2008/03/07/c-define-ifdef-ifndef-else/
CC-MAIN-2017-39
refinedweb
287
61.77
Tue Sep 4 09:40:36 PDT 2001 <blair@orcaware.com> Blair Zajac * Release version 1.05. Sun Sep 2 23:06:10 PDT 2001 <blair@orcaware.com> Blair Zajac * lib/DateTime/Precise.pm: Change the default value of $USGSMidnight from 1 to 0, which in dprintf now disables treating midnight (00:00) of one day as 24:00 from the previous day. This results in odd behavior when the date is printed using dprintf, as it will return the previous date. Fix the POD documentation for $USGSMidnight to complete the end of the sentence. * lib/DateTime/Precise.pm: Optimize dprintf to use a series of elsif's instead of a series of if's when deciding what to do with a %X string. * lib/DateTime/Precise.pm: Do not put &IsLeapYear and &DaysInMonth in @EXPORT, instead put them in @EXPORT_OK to reduce namespace pollution. * lib/DateTime/Precise.pm: Instead of doing multiple shift's in subroutines to get the subroutine's parameter, set the variables directly from @_. * lib/DateTime/Precise.pm: Change the variables $Secs_per_week, $Secs_per_day, $Secs_per_hour and $Secs_per_minute to constant subroutines to improve the performance of the module. * t/01date_time.t: Ditto. * lib/DateTime/Precise.pm: Make minor formatting changes to reflect my current coding style. Tue Aug 28 13:16:59 PDT 2001 <blair@orcaware.com> Blair Zajac * Release version 1.04. Tue Aug 28 13:14:01 PDT 2001 <blair@orcaware.com> Blair Zajac * README: Update Blair Zajac's email address to blair@orcaware.com. Remove reference to the Caltech FTP site for a secondary repository of Blair Zajac's Perl modules. * lib/DateTime/Precise.pm: Update Blair Zzajac's email address to blair@orcaware.com. Tue Aug 28 12:34:15 PDT 2001 <blair@orcaware.com> Blair Zajac * lib/DateTime/Precise.pm: The $VERSION variable was being set using $VERSION = substr q$Revision: 1.04 $, 10;' which did not properly set $VERSION to a numeric value in Perl 5.6.1 probably due to the trailing ' ' character after the number. This resulted in 'use DateTime::Precise 1.03' failing to force Perl to use version 1.03 or newer of DateTime::Precise even if 1.02 or older was installed because $VERSION was set using substr and Perl would not consider $VERSION to be set. Now use the longer but effective: $VERSION = sprintf '%d.%02d', '$Revision: 1.04 $' =~ /(\d+)\.(\d+)/; Sun Jun 10 20:10:19 PDT 2001 * Release 1.03. Sun Jun 10 18:54:44 PDT 2001 <blair@orcaware.com> (Blair Zajac) * lib/DateTime/Precise.pm: Try to import Time::HiRes::time to load a high resolution time. Fix a bug when the time was modified using seconds() and the time had previously a non-zero fractional second component. The previous fractional seconds would be included in a sum with the argument to seconds(). Now reset the fractional part of the time to 0 before using the seconds() argument. * t/01date_time.t: Reorder a test to ensure that the above bug in seconds() is checked. Thu Feb 22 20:37:22 PST 2001 <blair@orcaware.com> (Blair Zajac) * Release version 1.02. Thu Feb 22 20:27:46 PST 2001 <blair@orcaware.com> (Blair Zajac) * Fix a bug where if 0 is passed to an increment or decrement function, then it would actually increment the time by 1 unit. Check for a defined value instead of a non-0 or non-'' value. Wed Jan 31 15:24:17 PST 2001 <blair@orcaware.com> (Blair Zajac) * Release version 1.01. Wed Jan 31 15:09:10 PST 2001 <blair@orcaware.com> (Blair Zajac) * Fix a bug where a \s was not being properly added to a regular expression. This also fixes the 'Unrecognized escape \s passed through at Precise.pm line 1483' warning when using Perl 5.6.0. Thu Apr 8 10:20:30 PDT 1999 <blair@orcaware.com> (Blair Zajac) * Have ok() in t/*.t return the success or failure of the test instead of the number of tests performed. * Release version 1.00. Thu Oct 22 09:11:09 PDT 1998 <blair@orcaware.com> (Blair Zajac) * Fix a bug in new() where it wouldn't correctly set the time using the one argument form of time. This bug found by Joe Pepin <joepepin@att.com>. * Fix some spelling mistakes. * Release version 0.04. Sun Jun 28 11:56:40 PDT 1998 <blair@orcaware.com> (Blair Zajac) * Add dscanf('%u') which loads GMT time into the object. This complements dscanf('%U') which loads local time into the object. * lib/DateTime/Precise.pm: Update the POD to reflect the change in dscanf. * Change test #8 to use %u instead of %U so the test will succeed in any timezone. Release version 0.03. Fri Jun 26 16:57:37 PDT 1998 <blair@orcaware.com> (Blair Zajac) * Add a new jday() method that returns day_of_year() - 1. Fri Jun 26 15:06:52 PDT 1998 <blair@orcaware.com> (Blair Zajac) * New() didn't properly initialize the underlying representation of DateTime::Precise before it was passed off to set_time(). Add a new test to cover this case. * Update the POD a little. * Release version 0.02. Mon Jun 22 13:46:03 PDT 1998 <blair@orcaware.com> (Blair Zajac) * Add a new method copy, which creates a new copy of an existing DateTime::Precise object. Use copy to create copies of times instead of new and clone. Add tests for copy. * Add comparison test between integer and fractional times. Sun Jun 21 12:05:38 PDT 1998 <blair@orcaware.com> (Blair Zajac) * Have overloaded neg operator state the class of the offending object instead of DateTime::Precise. * Have all of the set_* methods return the newly set object if the set was successful, undef otherwise. Thu Apr 24 12:00:00 PDT 1998 <blair@orcaware.com> (Blair Zajac) * Version 0.01 First version. * Merge jpltime.pl from the JPL GPS group and DateTime.pm from Greg Fast into this package. * The changes below refer to the DateTime part of this package written by Greg Fast. Revision history for DateTime.pm Version numbers refer to RCS/CVS revision number. 1.4.1 r1.17 Thu Apr 2 1998 - damn. set_from_epoch time was broken. serves me right for not running my own 'make test'. 1.4 r1.16 Mon Mar 30 1998 - redid documentation. cleaned up, etc. nothing exciting. - ignored Changes file long enough for it to no longer be valid. 1.3.2 r1.8 Mon Sep 15 15:00:00 1997 - oops. addSec was behaving very wrong on day boundaries. 1.3.1 r1.7 Thu Sep 11 18:50:04 CDT 1997 - squashed bug in passing new a dt of form "yyyy.mm.dd" (no time) 1.3 r1.5 Thu Sep 11 10:20:04 CDT 1997 - switched internal obj storage from hash to scalar. 1.2.6 r1.3 Wed Sep 10 18:47:36 1997 - imported to CVS - added 22 deadly (heh) tests to test.pl - fixed subtle bug in overloaded <=> and cmp (in comparing objects with non-objects) 1.21 Fri Sep 05 20:45:55 1997 - suppressed some warnings 1.20 Fri Sep 05 20:12:22 1997 - added new-from-internalfmt capability. 1.19 Thu Sep 04 23:30:34 1997 - properly reset $VERSION 1.18 Thu Jul 31 20:42:19 1997 - replaced some tr///s with lc()s 1.17 Thu Jul 31 19:35:28 1997 - whee 1.16 Thu Jul 31 19:28:39 1997 - dscanf now works quietly, and doesn't die on failure. 1.15 Thu Jul 31 19:08:31 1997 - dscanf works properly 1.14 Thu Jul 31 18:48:51 1997 - functional, but vapid, dscanf inserted 1.13 Tue Jul 29 00:08:54 1997 - interim random checkin. 1.12 Thu Jun 19 16:42:50 1997 - weekday tested. bug in AUTOLOADING of dec_* fixed. 1.10 Thu Jun 19 16:25:01 1997 - typo 1.9 Thu Jun 19 16:20:41 1997 - added weekday(), dprintf("%w") 1.8 Fri Jun 06 20:28:08 1997 - doco fixes 1.4 Thu May 22 20:05:07 1997 - started doco. (copied to nwisw tree) 1.2 Fri May 09 02:23:55 1997 - fixed all references to Gtime 1.1 Tue Apr 29 22:45:51 1997 - Initial revision
https://metacpan.org/changes/distribution/DateTime-Precise
CC-MAIN-2018-30
refinedweb
1,390
78.45
Hexlite - Solver for a fragment of HEX Project description Hexlite - Solver for a fragment of HEX This is a solver for a fragment of the HEX language and for Python-based plugins which is based on Python interfaces of Clingo and does not contain any C++ code itself. The intention is to provide a lightweight system for an easy start with HEX. The vision is that HEXLite can use existing Python plugins and runs based on the Clingo python interface, without realizing the full power of HEX. The system is currently under development and only works for certain programs: - External atoms with only constant inputs are evaluated during grounding in Gringo - External atoms with predicate input(s) and no constant outputs are evaluated during solving in a clasp Propagator - External atoms with predicate input(s) and constant outputs that have a domain predicate can also be evaluated - Liberal Safety is not implemented - Properties of external atoms are not used - If it has a finite grounding, it will terminate, otherwise, it will not - as usual with Gringo - FLP Check is implemented explicitly and does not work with strong negation and weak constraints - FLP Check can be deactivated - There is a Java Plugin API (see below) The system is described in the following publication. Peter Schüller (2019) The Hexlite Solver. In: Logics in Artificial Intelligence. JELIA 2019. Lecture Notes in Computer Science, vol 11468. Springer, Cham In case of bugs please report an issue here: License: MIT Author: Peter Schüller schueller.p@gmail.com Available at PyPi: Installation with Conda: The easiest way to install hexliteis Conda. First you need to install the clingodependency: $ conda install -c potassco clingo Then you install hexlite: $ conda install -c peterschueller hexlite (If you wonder why we do not automatically install clingo as a dependency: certain conda restrictions prevent that clingois automatically installed unless the potassco channel is manually added by the user.) Then you test hexlite: $ hexlite -h Installation with pip: This will download, build, and locally install Python-enabled clingomodules. If you do not have it: install python-pip: for example under Ubuntu via $ sudo apt-get install python-pip Install hexlite: $ pip install hexlite --user Setup Python to use the "Userinstall" environment that allows you to install Python programs without overwriting system packages: Add the following to your .profileor .bashrcfile: export PYTHONUSERBASE=~/.local/ export PATH=$PATH:~/.local/bin Run hexlite the first time. This will help to download and build pyclingo unless it is already usable via import clingo: $ hexlite The first run of hexlite might ask you to enter the sudo password to install several packages. (You can do this manually. Simply abort and later run hexliteagain.) Ubuntu 16.04 is tested Debian 8.6 (jessie) is tested Ubuntu 14.04 can not work without manual installation of cmake 3.1 or higher (for buildling clingo) Using the Java API Building the JAVA API is not automated, you need to install Jpype version =0.7.3 or >=0.7.5and antand mavenand run mvn clean compile package install See also .travis.ymlhow to build and install and test the Java plugin. Using the Docker image There is a Dockerfile that builds a docker image where hexlite and its source code is installed. Build the image with $ ./build-docker-image.sh Run the image and start a shell in the image with $ ./run-docker-image.sh In the image, run an example: # hexlite --pluginpath /opt/hexlite/plugins/ --plugin testplugin -- /opt/hexlite/tests/inputs/extatom2.hex Should give the following output (it is a set, the order of items does not matter): {prefix("test:"),more("a","b","c"),complete("test: a b c")} Running Hexlite on Examples in the Repository If hexliteby itself shows the help, you can run it on some examples in the repository. Hexlite needs to know where to find plugins and what is the name of the Python modules of these plugins The path for plugins is given as argument --pluginpath <path>. This argument can be given multiple times. You can use absolute or relative paths. The python modules to load are given as argument --plugin <module> [<argument1> <argument2>]. Multiple plugins can be loaded. Each plugin can have arguments. !ATTENTION!: If you specify the HEX input file after --plugin <module>, it becomes the argument of the plugin. In this case, you need to - specify the HEX input files before the other arguments or - indicate end of the argument list with the --option. To run one of the examples in the tests/directory you can use one of the following methods to call hexlite: $ hexlite --pluginpath ./plugins/ --plugin testplugin -- tests/inputs/extatom3.hex $ hexlite tests/inputs/extatom3.hex --pluginpath ./plugins/ --plugin testplugin $ hexlite --pluginpath=./plugins/ --plugin=testplugin tests/inputs/extatom3.hex Jpype we need version 0.7.3 or above 0.7.5, the versions below and 0.7.4 have a bug that affects us conda-forge has 0.7.3 $ conda install -c conda-forge jpype1=0.7.3 Developer Readme For developing hexlite without uploading to anaconda repository: Install clingo with conda, but but do not install hexlite with conda. $ conda install -c potassco clingo $ git clone git@github.com:hexhex/hexlite.git install hexlitein develop mode into your user-defined Python space: $ python3 setup.py develop --user If you want to remove this development installation: $ python3 setup.py develop --uninstall --user $ rm ~/.local/bin/hexlite (Installed scripts are not automatically uninstalled.) Releasing For building and uploading a new version to pip and conda (Note: conda requires to upload to pip first) Update version in setup.py. Build pypi source package $ python setup.py sdist Verify that dist/ contains the right archives with the right content (no wheels etc.) Upload to pypi (the twine in Ubuntu 18.04 does not work, you must install via pip3 install twine) $ twine upload dist/* Update version in meta.yaml. Build for anaconda cloud First, some conda packages need to be installed via conda install conda-build conda-verify anaconda. $ conda build . -c potassco --no-verify --no-test (get upload command from last lines of output, check output, then re-run without the last two arguments.) If conda is installed on an encrypted /home/ or similar, this will abort with a permission error. You can make it work by creating a new directory on an unencrypted /tmp/, for example /tmp/conda-build, and run conda build as follows: $ conda build --croot /tmp/conda-build/ . -c potassco Verify that archive to upload contains the right content (and no backup files, experimental results, etc...) $ anaconda upload <path-from-conda-build> Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/hexlite/
CC-MAIN-2021-10
refinedweb
1,125
55.64
Essentials All Articles What is LAMP? Linux Commands ONLamp Subjects Linux Apache MySQL Perl PHP Python BSD ONLamp Topics App Development Database Programming Sys Admin Last spring I began a coding project that for various reasons I needed to write mostly in C. I wanted to do as much up-front work as possible to minimize bugs and debugging time later. At about the same time, I picked up The Pragmatic Programmer. This book introduces many important techniques for minimizing bugs and improving the quality of your code. One technique that I found particularly exciting was Design by Contract. Related Reading Secure Programming Cookbook for C and C++ Recipes for Cryptography, Authentication, Input Validation & More By John Viega, Matt Messier Chances are you've heard of Design by Contract. If you're coding in Eiffel you're probably already using it. If you haven't heard of Design by Contract, then you're like I was six months ago. To quote Bertrand Meyer, the key concept of Design by Contract is "viewing the relationship between a class and its clients as a formal agreement, expressing each party's right and obligations" [3]. Because this discussion will segue into C coding and C doesn't have classes, we'll talk about functions instead. To use a function (written by you or someone else), you need to understand the range of arguments allowed and the effects of calling the function. This is where contracts come into play. Most languages that support Design by Contract provide two types of statements to express the obligations of the function caller and the callee: preconditions and postconditions. The caller must meet all the preconditions of the function being called, and the callee or the function must meet its own postconditions. The failure of either party to live up to the terms of the contract is a bug in the software [3]. The quintessential examples of preconditions and postconditions are those of the square root function for positive real numbers. The precondition of the square root function is that the argument must be a real number greater than or equal to zero. The postcondition is that the return value is a real number such that its square must equal the argument (ignoring round-off issues). /* square root function for positive real numbers */ double sqrt(double x) { /* ... */ } When someone calls sqrt() with a value of x that is less than zero, sqrt() cannot meet its postcondition. Again, this contractual breach indicates a bug in the software. The best course of action usually is to abort [3]. sqrt() x The third type of statement Design by Contract places at our disposal is the invariant. Invariants describe conditions which must always hold for an object (a structure or type in C). For example, once initialized, any instance of the dynamic array type DArray_T, shown below, should have the properties length >= 0 and capacity >= length. These properties are invariants. DArray_T length >= 0 capacity >= length darray.c: struct DArray_T { void **ptr; long length; long capacity; }; darray.h: typedef struct DArray_T *DArray_T; In an object-oriented setting, calling a method on an object causes the object's invariants to be checked. Checking invariants happens before checking preconditions and after postconditions. So whenever someone calls the function void *DArray_get(DArray_T ary, long index); the following happens: ary DArray_get() You're probably wondering how I went from discussing methods to the C function DArray_get(). As you know, C doesn't truly have objects or methods on objects. Instead it has structures, types, and functions. However, invariants apply just as well in C as they do in an object-oriented language. If a function has write access to a structure or type, then DBC should check the invariants of that structure or type before and after the function executes. The function DArray_get() has write access to all its parameters; therefore, the invariants of all its parameters should be checked. Obviously, long won't have any invariants, but DArray_T will. long The const qualifier can specify types as read only, but C's opaque types are a more powerful mechanism. Opaque types are pointers that client code cannot dereference. DArray_T as defined above is an opaque type. Defining struct DArray_T in darray.c and not darray.h hides its fields from clients. Therefore, only the functions defined in darray.c have access to the fields of struct DArray_T. Only functions in darray.c need to check the invariants of DArray_T, eliminating the need to check the invariants of DArray_T (or any opaque type) in client code. (However, client code still needs to check the invariants of nonopaque types.) const struct DArray_T struct DArray_T C doesn't provide any Design by Contract language features, but it does provide assert(). In the sqrt() example, an assert statement works fine for checking the precondition assert() double sqrt(double x) { assert(x >= 0); /* ... */ } Why not just stick with assert() and forget all this stuff about Design by Contract? Well, assert() does not quite live up to Design by Contract, for the following reasons: Side effects--Contract checking should not produce any side effects. The software should behave the same with or without contract checking in place. Unfortunately, adding assert() or any other type of condition requires more coding, and more coding introduces the possibility of more errors. A statement such as assert(size >= 0 && size = MAX_BUFFER_SIZE); is valid C code, but obviously it does not produce the desired effect. Because assert() can appear in arbitrary locations, assert statements serve better as sanity checks in confusing parts of your code or for checking loop invariants. However, if you have a section of code littered with asserts, it is usually a good sign that you need to do some refactoring [2]. Not satisfied with assert() and excited about Design by Contract, I set out to create my own Design by Contract implementation for C. After looking at some of the solutions available for Java [1] I decided to use a subset of the Object Constraint Language to express contracts [4]. Using Ruby and Racc, I created Design by Contract for C, a code generator that turns contracts embedded in C comments into C code to check the contracts. The front end to this code generator is the dbcparse.rb Ruby script. Running ./dbcparse.rb -h will print instructions on how to use the script. For starters, ./dbcparse.rb darray.c will print to standard out darray.c with all contracts expanded into C code. dbcparse.rb ./dbcparse.rb -h ./dbcparse.rb darray.c Below is the contract for and implementation of DArray_get(): /** pre: index >= 0 pre: index < ary->length post: return == ary->ptr[index] */ void *DArray_get(DArray_T ary, long index) { return ary->ptr[index]; } Opening the comment with /** specifies a contract. Because the DArray_get() function directly follows the contract, the contract applies to the DArray_get() function (also called the context of the contract). Inside the comment, we have pre: and post: tags, which specify a precondition and postcondition, respectively. The return keyword is simply the value that DArray_get() returns. /** pre: return This function is pretty simple; its preconditions and postconditions describe it nicely. Notice that the postcondition relies on the preconditions being true and that the function should not execute if the preconditions fail. When defining a contract, each condition must be a Boolean expression; assignment is not allowed. The normal C comparison operators (<, >, <=, >=, ==, and !=) all work. Instead of the &&, ||, and ! operators, we have and, or, and not, respectively. C control structures (if, else, for, do, while, ?:) do not appear; instead there is an implies operator and two iterating operators, forall and exists. The iterating operators check a condition over a range; a forall statement is true if its condition holds true over the entire range, and an exists statement is true if its condition is true at least once in the range. The syntax is as follows: < > <= >= == != && || ! and or not if else for do while ?: implies forall exists forall ( declaration in range | expression ) exists ( declaration in range | expression ) forall ( declaration in range | expression ) exists ( declaration in range | expression ) What do I mean by range? Many languages support a range idiom; I borrowed this syntax from Perl. (Other languages, such as Ruby, also use this syntax.) Ranges come in two forms: inclusive and exclusive. An inclusive range is start..end, and an exclusive range is start...end. All ranges include the start, but only inclusive ranges include the end. start..end start...end Here is an example of the contract for a function that creates a shallow copy of a DArray_T: /** post: return != NULL implies return->length == ary->length and forall (long i in 0...ary->length | return->ptr[i] == ary->ptr[i]) */ DArray_T DArray_copy(DArray_T ary) { /* ... */ } In words, this contract says that if the function returns a non-null DArray_T, then the returned DArray_T will have the same length and elements as ary. You're probably wondering how this crazy syntax works. Let me explain. First, the expression return != NULL is checked. If it is false, the condition is true and we are done. If not, we must check what return != NULL implies. According to the condition, it implies: return != NULL return != NULL return->length == ary->length and forall (long i in 0...ary->length | return->ptr[i] == ary->ptr[i]) The expression return->length == ary->length is self-explanatory. The forall statement is more complex. Remember that the syntax of the forall statement is return->length == ary->length long i is the declaration, 0...ary->length is the range being iterated over, and return->ptr[i] == ary->ptr[i] is the expression to check at each iteration. long i 0...ary->length return->ptr[i] == ary->ptr[i] I mentioned before that all instances of DArray_T should have the properties length >= 0 and capacity >= length. How do we know that the DArray_T being passed to DArray_get() has these properties? By specifying them as invariants. capacity >= length /** context DArray_T inv: self != NULL inv: self->length >= 0 inv: self->capacity >= self->length inv: (self->capacity == 0 and self->ptr == NULL) or (self->capacity > 0 and self->ptr != NULL) */ struct DArray_T { void **ptr; long length; long capacity; }; Now, calling every function in darray.c with a parameter of type DArray_T (the context) will check these invariants before and after the function executes. Note that defining invariants in a module (.c file) makes the invariants local to that module (as in darray.c), and defining invariants in a header file makes them global, causing the invariants to be checked for all applicable functions. Contract specifications occur in comments beginning with /**. If there is no context in the contract, the context is the next C statement following the contract. Below is the syntax for expressing contracts; statements in angle brackets are optional: The comparison operators (<, >, <=, >=, ==, and !=) have the same precedence as in C. The logical operators have the following precedence: and, or, implies. Finding a contract breach calls dbc_error(). This function has the following prototype: dbc_error() void dbc_error(const char *message); message is simply a string with information about the condition that failed. dbc_error() should log this information--or print it to standard error--and not return. It is very important that dbc_error() not return. In my projects I usually define dbc_error() as: #ifdef HAVE_RUBY_H #include <ruby.h> #define dbc_error(msg) rb_raise(rb_eFatal, msg) #else #include <stdio.h> #include <stdlib.h> #define dbc_error(msg) do { (void)fputs(msg, stderr); abort(); } while (0) #endif It is generally a good idea to stick to the fail-hard principle [3] and call abort(). (In Ruby, raising a fatal error is similar to calling abort(), but it also gives you a Ruby stack trace.) abort() DBC for C has come a long way since I decided it would be a worthy hack last spring. The project that originally brought about the need for DBC for C has benefited greatly. The benefits include: Hopefully you're now interested in using contracts in your C code. Here are some features of DBC for C that I will explore in the future: @pre In preparation for this article I spent some long nights preparing DBC for C for public consumption; hopefully you're ready to give it a try. Charlie Mills develops data warehousing applications using C and Ruby. Return to ONLamp.com Sponsored by: © 2015, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://archive.oreilly.com/pub/a/onlamp/2004/10/28/design_by_contract_in_c.html?page=last&x-order=date
CC-MAIN-2015-35
refinedweb
2,079
57.06
Graphics to BufferedImage ?858876 May 3, 2011 12:52 PM Hello there. This content has been marked as final. Show 5 replies 1. Re: Graphics to BufferedImage ?gimbal2 May 3, 2011 1:15 PM (in response to 858876)A bit confusing post. But going by your title, you want to create a BufferedImage, draw to that and then draw that BufferedImage to the window. Right?1 person found this helpful Creating a BufferedImage can be done through its constructor. All you need to know is the width and height in pixels, plus if you want it to be transparent or not (TYPE_INT_RGB or TYPE_INT_ARGB). Drawing to a BufferedImage is done by obtaining a Graphics2D instance for it, through BufferedImage.createGraphics(). Don't forget to dispose() that Graphics object when you're done with it. Note: is there a specific reason why you want to do this? Swing is by default double buffered, so without knowing it you are already drawing to an offscreen buffer in stead of directly to the screen. 2. Re: Graphics to BufferedImage ?858876 May 3, 2011 1:24 PM (in response to gimbal2)Sorry for the confusing post. I have a map, and i have sprites, and a window class. Each is drawd by a different class. My Paint method is something similar to this: now, this makes me draw the map on the game loop, and if i wanted to move the map around i would have to translate the graphics to move only the map and sprites. However, i belive this could be easyer if i have a separate map for example, and i could use my paint method like this: public void paint(Graphics g) { MapClass.drawMap(g); SpriteClass.drawSprites(g); WindowClass.drawWindows(g); } So i could control camera movements easy, and i can save the map into memory so i dont have to draw it over and over again. Ive googled into how to create a buffered image from a graphics but couldnt do it. Lets say i have this method doing the paint ( a simple one ) public void paint(Graphics g) { g.drawImage(MapClass.getMap(), camX, camY, screenX, screenY); g.drawImage(SpriteClass.getSprites(), camX, camY, screenX, screenY); WindowClass.drawWindows(g); } Now, i would have something like that // class Sprites public void drawSprites(Graphics g) { g.drawImage( sprite, spriteX, spriteY, sizeX, sizeY); } So i could just paint this img on the main graphics, and move it easy, and pheraphs store it in memory. public BufferedImage getSprites() { Graphics g = new Graphics(); g.drawImage( sprite, spriteX, spriteY, sizeX, sizeY); BufferedImage img = new BufferedImage(parms, parms2); img = g; // ? return img; } Tryed to explain it easyer. Thanx for your help. =] 3. Re: Graphics to BufferedImage ?morgalr May 3, 2011 7:40 PM (in response to 858876)Ziden, It's really easy to do, if you notice your paint takes a reference to a graphics context as an argument, you basically make a new method that takes a graphics context as the arguemnt and pass it the graphis context to a BufferedImage. In that method copy all of the commands that are now in your class' overridden paint, then make a timer object that when fired, will update your image. It works very well in SWING when using paintComponent because SWING has double buffering as default. New Method public void paint(Graphics g) { //copy this to another method MapClass.drawMap(g); SpriteClass.drawSprites(g); WindowClass.drawWindows(g); } Now you just need an event handler that will call your MyOffScreenPaint with the graphics context from MyImage. public void MyOffScreenPaint(Graphics g) { MapClass.drawMap(g); SpriteClass.drawSprites(g); WindowClass.drawWindows(g); } public void paint(Graphics g) { //new override for class paint--note MyImage is the same image you used the graphics context from MyOffScreenPaint super.paint(g); g.drawImage(MyImage, 0, 0, this); } 4. Re: Graphics to BufferedImage ?858876 May 3, 2011 7:47 PM (in response to morgalr)i didnt got how i could get MyImage im sorry this way , as i understood, would be the same then adding a g.drawImage after the call of those 3 draw methods, no ? if no, i didnt understood, sorry 5. Re: Graphics to BufferedImage ?morgalr May 3, 2011 9:01 PM (in response to 858876)Here is an example: import java.awt.image.BufferedImage; import java.awt.Dimension; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.Graphics; import java.awt.Graphics2D; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.Timer; public class Junk{ public Junk(){ try{ JFrame f = new JFrame("Offscreen Paint"); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); MyJPanel p = new MyJPanel(); f.add(p); f.pack(); f.setVisible(true); }catch(Exception e){ e.printStackTrace(); } } public static void main(String[] args){ new Junk(); } class MyJPanel extends JPanel implements ActionListener{ int inc = 1; int count = 0; BufferedImage bi = new BufferedImage(256, 256, BufferedImage.TYPE_INT_ARGB); Graphics myG = bi.createGraphics(); Timer t = new Timer(500, this); public MyJPanel(){ this.setPreferredSize(new Dimension(bi.getWidth(), bi.getHeight())); myG.clearRect(0, 0, 256, 256); this.repaint(); t.start(); } public void actionPerformed(ActionEvent e){ MyPaintOffScreen(myG); this.repaint(); } public void paintComponent(Graphics g){ super.paintComponent(g); g.drawImage(bi, 0, 0, this); } public void MyPaintOffScreen(Graphics g){ if(count<0){ count = 1; inc *= -1; }else if(count > 9){ count = 8; inc *= -1; } g.clearRect(0, 0, 256, 256); g.drawString(Integer.toString(count), 128, 128); count+=inc; } } }
https://community.oracle.com/message/9562865
CC-MAIN-2017-17
refinedweb
897
67.86
At 03:07 08/03/99 -0500, Charles Wilson wrote: >> ------------------------------- >> 1) I was unable to install !! (what about you ?) >I had no problems -- BUT I had renamed /usr/bin/install to cyg-install, and >copied Pierre Humblet's install script from Let's try. Hum. No, that does not work. 'make install' call a perl script 'installperl'. >>. Oh yes, no pb about it, but see my new email about 'ntea', I thought that might *also* solve problems for directories. Well, I could change the ownership of /usr/local/bin (*grin*), or apply a patch. >> ------------------------------- >> 2) Perl Modules >> >Nope, I didn't try this. I just wanted a basic build. Too bad :) >The Right Thing is to define a global in cw32imp.h that says "You already >included me, skip the rest of this file" eg >#ifndef CW32IMP >#define CW32IMP >---rest of file >#endif No, sorry, that's not the problem (but indeed cw32imp.h was lacking this feature). I investigate a bit further, and found that the file was included once, but it redefined some stuff previously defined in many other headers :(( You definitely have to exclude this file. That would be nice if you could try a very simple Module like String-CRC (see CPAN repository). .tar.gz % cd String-CRC-1.0 % perl Makefile.PL % make % make test >>. >will help the linker ignore its presence. The real issue is where is it >coming from in the first place. That, I can't answer. Maybe somebody else?? I found it ! SORRY, my fault (once again). The 'Od' following the "-ld" came from an old module that I compiled for Perl (DB_File) as I was testing my old Perl build (it was lying around). And of course, I did that while my partitions where mounted as text ! And this is where it comes from, as the file was then copied to the obscure Perl directories, and is now used as part as the concatenation :( I rebuild it in binary mount mode, and all modules compile perfectly : => do not mix modules build in text and binary mount modes ! Well, I guess we are near the end : 2 problems to solve for me - install fail - cw32imp.h shall NOT be include Thanks P.S. : Here is summary of what I think about the modules : Some Perl modules are bundled with .xs files which are translated to .c format and compiled to produce object files (.o) and/or a library (.a). These .xs are extensions/stubs that might be useful to speed up functions or access external libraries/packages. If Perl was built statically, then these extensions have to be statically linked (added) against perl.exe in order to produce a new perl.exe which will be reinstalled somewhere (replacing the old one). Perl maintains a list of extensions that have been added to the current perl.exe (that is, all modules except libperl.a). As an extension might also depend on other libraries (for example, a module might implement a perl extension to handle the Zlib, and therefore shall refer to -lz), Perl also stores a list (text file) of additional(extra) libs needed for every module. This seems to be site specific : for a module M, it might be located in two places : /usr/local/lib/perl5/5.00502/cygwin32/auto/M/extralibs.ld /usr/local/lib/perl5/site_perl/cygwin32/auto/M/extralibs.ld For example, the DB_File module is a Perl extension designed to allow anyone to use the Berkeley DB library (which have to be build separately). Its extralibs.ld might be '-L/usr/local/lib -ldb' (notice the -ldb). Note : this /usr/local/lib/perl5/site_perl/cygwin32/auto/M/ will also store the extensions itself (.a) useful to rebuild the whole perl.exe. After building/testing/installing an extension module you might update the new perl.exe by issuing in the module directory : > make -f Makefile.aperl inst_perl MAP_TARGET=perl.exe This step will build a new perl.exe by compiling a classical perlmain.c file, statically link it to libperl.a, to ALL previous extensions (.a stored in cygwin32/auto/M/), to usual libraries (-lcygwin -lm, etc.) AND to all extralibs. The list of all extralibs is computed by concatenating (cat) all extralibs.ld (for every known extensions + the current one), leading to an extralibs.all file, which is used. BEWARE of your mount mode. If you compiled some modules on text mounted partitions, the extralibs.ld file will store very nasty '0d' bytes related to the way Windows handle text file ('\n' are translated to'0a0d' instead of 'Oa' in Unix world). If you switch to binary mount mode, the previously described process will FAIL because the linker will crash on '0d' in extralibs.all : in binary mode, it will not understand that 'OaOd' might be swallowed together, it will just swallow 'Oa', thus leaving the 'Od'. ______________________________________________________________ Sebastien Barre -- Want to unsubscribe from this list? Send a message to cygwin-unsubscribe@sourceware.cygnus.com
http://www.cygwin.com/ml/cygwin/1999-03/msg00221.html
CC-MAIN-2015-32
refinedweb
829
67.86
# Another event for CSS position: sticky Have you ever wondered how to track when elements with `positions: sticky` become fixed? [Eric Bidelman](https://developers.google.com/web/resources/contributors/ericbidelman) has an amazing [post](https://developers.google.com/web/updates/2017/09/sticky-headers) on this topic, go and read it now. I've found some difficulties while using it in my project. Here they are: 1. It breaks encapsulation. `sticky-change` event relates to header element, but you have to insert sentinels to header's parent (and make it `position: relative`). 2. It involves lots of factors that should be consistent and their connection is not always obvious. For example you can't set `--default-padding` greater than `40px`, which is top-sentinel's height. 3. You can't track block in the middle of an article. Let's try to improve it! All of those issues reflect the same problem: Eric's solution is about tracking sticky's parent position, not sticky block itself. Let's improve this while keeping the original idea. How? We'll add sentinels to header itself and observe their intersection with container. ![example](https://habrastorage.org/webt/kt/qz/tn/ktqztnzu7m2jgsegn_guj0ak_iy.gif) Here is how to do it: 1. You need one sentinel for each sticky side you want to observe. 2. Set first sentinel `top` property equals to header `top` but with reverse sign minus 1. For example if your header have `top: 10px`, then set sentinel header to `-11px`. You can use something like `top: calc(-1px + -1 * var( -- header-sticky-top))` or just set `-1px` if you have header top equals to zero. 3. Add other sentinels if needed. 4. Observe sentinels intersection with container. 5. You can say that header stuck if sentinel intersection record has `isIntersecting = true`, `intersectionRatio = 0`, and `intersectionRect.top = rootBounds.top` 6. Same other sides, just watch bottom, left, or right instead of top. 7. Don't forget to add `visibility: hidden` and `pointer-events: none` to sentinels. Check out [demo](https://cdn.rawgit.com/Yavanosta/sentinel-demo/076b8b95/index.html) and [sources](https://github.com/Yavanosta/sentinel-demo/blob/master/index.html) here
https://habr.com/ru/post/436182/
null
null
359
51.65
Intro If you read this blog before (or saw the YouTube videos) you're aware that it's heavy on the demos, so it's no surprise that I like to streamline as much as possible the process of setting things up. Streamlining the setup of the development environment is in fact one of the main reasons I love to use Docker so much, as it allows me to create a Docker Compose file, describe the dependencies I want to use, be it a PostgreSQL database, some Redis for cache or maybe a RabbitMQ instance to play a bit with messaging. Currently I'm preparing a presentation around the topic of event-driven systems, so to spice things up a bit, I thought about running things in Azure, instead of my usual local Docker approach. As I haven't really played much with Azure, it's an opportunity to learn a bit about it, while preparing the demos for the presentation. When researching how to start preparing my demos, most of the content I came across uses the Azure portal to create and manage the resources. As you might be thinking, poking around a web UI doesn't really match my usual run a command and have things up approach. This post, as well as an upcoming one, dive exactly into the subject of preparing things to have an easy way to setup a demo, using Azure Resource Manager (shortened to ARM) templates and GitHub actions. To be clear, using ARM templates and infrastructure as code isn't really something new that I'm going to tell you about, it's just that it seems it's usually discussed in more dedicated contexts, not applied to preparing coding demos. Disclaimer: before proceeding I feel it's important to clearly note that the samples don't necessarily follow the best cloud architecture practices, particularly in terms of security, as the focus of these posts is on setting up an environment for coding demos, not to be production ready. What is the Azure Resource Manager and ARM templates? Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables us to create, update, and delete resources. Azure Resource Manager templates, typically referred to as ARM templates, are JSON files in which we describe what services we want to use in Azure. We can think about it a bit like a Docker Compose file, but for Azure resources (and massively more verbose). When we're poking around the Azure portal, checking a box here, clicking a dropdown there, even if we don't touch a single line of JSON, it's being used behind the scenes and stored by the Azure Resource Manager. I won't go deep into all the advantages of using ARM templates (or an alternative infrastructure as code tool), as I think you'd be better served by reading the documentation, but in the context of preparing demos, I think you can start to imagine why I want to use something like this: I can create a file describing the Azure services I want to use, store it in source control next to the sample code and startup a demo environment with a couple of commands. High-level structure As mentioned before, an ARM template is a JSON file, but it follows a certain structure, so you can use it to guide you when trying to understand what's going on. An empty template would look like this: { "$schema": "", "contentVersion": "1.0.0.0", "parameters": { }, "variables": { }, "functions": { }, "resources": [ ], "outputs": { } } Note that not all of the shown template sections are mandatory, just put them in for us to see what's available. Going through each of them: parameters- where we declare parameters that can be provided when deploying the template to control it's behavior. A typical usage of parameters is to have a single template file, but then parameterize it with different values depending on the environment. variables- in here we can declare variables that we want to reuse in the template. The variables don't need to be constants, we can compose them from other variables, parameters or even values provided by functions. functions- when defining a template we can use predefined functions for a myriad of things (e.g. concatenate strings or get the resource group in which the template will be deployed). In the functionssection of the template we can also create our own functions that may help us simplify the template. resources- as the name suggests, in this section we declare and configure all the Azure resources we want to use. This is where all those parameters, variables and functions will prove useful, cause there's a lot going on here! outputs- as the deployment completes, there are values that we might want to get as an output, so we can use them elsewhere (e.g. a connection string from a created database). This is the section we can use to declare such values. How to get started So, how to start creating a template? I'll say one thing, you won't be typing it all manually! Well, you can, but it's a massive pain 😛. In fact, even without typing everything manually it's still a pain, but not as much 🙂. Get the template from the portal A possible way to do things is to use the portal, download the ARM template it provides while creating/managing the resources. You can get it just before creating a new resource, but even after it's created, there's a way to download the associated template. In this screenshot, we're at the final step of creating a new web app through the portal. In this final page, at the bottom, we have the option to download the template. Clicking the download link takes us to this page where we can see the template and change some things before we download it. If you have an existing resource, you can also find the option to download the associated template. Below is an example using an existing web app. Find what you need in the Azure Quickstart Templates page Another approach is to head to the Azure Quickstart Templates page (or associated GitHub repository) and try to find what you need in there. It's not certain that it has what we need, but given the amount of templates in there, the odds are not too bad. Mix and match Coming as a surprise to probably no one, the approach I found that makes the most sense is... mix and match! Depending on the needs, we might find a close enough template in the quickstarts repository, mix in a bit from another one, get something that's missing from the template generated by the portal, wrapping up by making some final tweaks manually. For the unavoidable manual tweaks, if you use Visual Studio Code, the Azure Resource Manager (ARM) Tools extension is rather helpful, providing some syntax highlighting, warnings and errors, as well as some auto-completion capabilities. What do I need for the presentation The presentation I'm preparing is around the topic of event-driven systems, so I want to have some running services, databases for them to store stuff and event infrastructure for them to communicate. For the services, I'm going with Azure App Service, as it's a simple way to host traditional ASP.NET Core applications. Azure Functions would also be a good option, particularly in the event-driven context, but I don't want to focus the presentation on that, so keeping it simple with good old ASP.NET Core apps. For the database, I'm using Azure SQL, as SQL is very prevalent and how it fits in an event-driven system is pretty relevant. I'm still thinking if I should use Azure Cosmos DB as well. On one hand adding a NoSQL database into the mix is very interesting, but on the other, the presentation cannot last hours 😛. For event infrastructure, we have a bunch of options, like Azure Service Bus, Azure Event Grid and Azure Event Hubs. Like the databases, they aren't mutually exclusive and I could use all, depending on the circumstance, but to keep things simple, I'll pick one and move on. Right now I'm more inclined towards Event Hubs, as it works similarly to Apache Kafka, which is a good fit for the presentation context. Sample template Right out of the gate, even if this section is called "sample template", I won't put it all here, as it's a big JSON file which won't be readable on the blog. I'll drop a couple of bits here, but to see the whole thing, better head to GitHub. Parameters In the parameters section, I'm putting some things I might want to tweak a bit. "parameters": { "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": { "description": "Location for all resources." } }, "administratorLogin": { "type": "string", "defaultValue": "InsecureAdminLogin", "metadata": { "description": "The administrator username of the SQL logical server." } }, "administratorLoginPassword": { "type": "securestring", "metadata": { "description": "The administrator password of the SQL logical server." } } }, First heads-up, related to users. When creating the Azure SQL server, I'm providing the administratorLogin and administratorLoginPassword, to be used as their name implies. In a well designed Azure deployment, we should probably use an AD account for this, but as I mentioned from the beginning, for demo purposes we can simplify. Variables In the variables section, I'm putting things that'll be used more than once in the template, but I'm not really interested in parameterizing. "variables": { "appName": "[concat('sampleApp-', uniqueString(resourceGroup().id))]", "appCurrentStack": "dotnet", "sqlLogicalServerName": "[concat('sampleSqlLogicalServer-', uniqueString(resourceGroup().id))]", "sqlDBName": "SampleSqlDB", // ... }, Probably the only added note that can be interesting here, is the usage of the uniqueString template function. Many of the resources we create need to have a unique name. An easy way to do that is using this function, which creates a deterministic hash string based on the given parameters (more info in the docs). Resources In the resources section comes, well, everything else. I'll drop here just a portion of the App Service configuration. "resources": [ { "apiVersion": "2020-06-01", "name": "[variables('appName')]", "type": "Microsoft.Web/sites", "location": "[parameters('location')]", "dependsOn": [ "[resourceId('Microsoft.Insights/components/', variables('appInsights'))]", "[resourceId('Microsoft.Web/serverfarms/', variables('appHostingPlanName'))]", "[resourceId('Microsoft.EventHub/namespaces/eventhubs/', variables('eventHubNamespaceName'), variables('eventHubName'))]", "[resourceId('Microsoft.DocumentDB/databaseAccounts', variables('cosmosDbAccountName'))]" ], "properties": { "siteConfig": { "connectionStrings": [ { "name": "SqlConnectionString", "type": "SQLAzure", "connectionString": "[concat('Server=tcp:', variables('sqlLogicalServerName'), '.database.windows.net,1433;Initial Catalog=', variables('sqlDBName'), ';Persist Security Info=False;User ID=', parameters('administratorLogin'), ';Password=', parameters('administratorLoginPassword'), ';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;')]" }, { "name": "EventHubConnectionString", "type": "Custom", "connectionString": "[listKeys(variables('eventHubSendRuleId'), providers('Microsoft.EventHub', 'namespaces/eventHubs').apiVersions[0]).primaryConnectionString]" }, { "name": "CosmosDbConnectionString", "type": "Custom", "connectionString": "[listConnectionStrings(resourceId('Microsoft.DocumentDB/databaseAccounts', variables('cosmosDbAccountName')), providers('Microsoft.DocumentDB', 'databaseAccounts').apiVersions[0]).connectionStrings[0].connectionString]" } ], "appSettings": [ { "name": "APPINSIGHTS_INSTRUMENTATIONKEY", "value": "[reference(resourceId('microsoft.insights/components/', variables('appInsights')), '2018-05-01-preview').InstrumentationKey]" }, // ... ], "metadata": [ { "name": "CURRENT_STACK", "value": "[variables('appCurrentStack')]" } ], "netFrameworkVersion": "[variables('appNetFrameworkVersion')]", "alwaysOn": false }, "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('appHostingPlanName'))]" } }, ] Let's go through some interesting bits. For starters, you'll notice the usage of parameters and variables across the resource definition. In the dependsOn property we indicate all the services in which the App Service depends. We should avoid as much as possible to have this, as it makes it impossible to parallelize resource creation, but if we really need to follow a certain sequence, this is the way. In this case, as we're depending on the creation of the databases and event hubs to get their connection string, we need to use it. It could probably be avoided by using an alternative way to authenticate, like AD users. This brings us to another heads up topic, the connection strings I'm providing to the application. For instance, the SQL connection string is using the administrator credentials, which is far from ideal. Again, my excuse is, good enough for demos 🙃. Another interesting thing to point out is the appSettings property. If you work with ASP.NET Core, you're probably aware of the way configurations work, being composable from multiple sources. In this case, what we pass here will end up as environment variables read by the application. Same applies to the connection strings we provide in the connectionStrings property. Deploying the template with Azure CLI Ok, so we have an ARM template, how do we deploy it? There are multiple ways (Portal, CLI, PowerShell, ...) but for my tests I went with the Azure CLI. After installing it, we can interact with it by typing az in the command line. First thing we need to do to start working with it is to login, using az login. After we're logged in, we can start creating stuff! To deploy the developed template, we need to create a resource group, where our resources will live. We can also create an ARM template for this, but a single line on the CLI easily does the trick: az group create --name SettingUpDemosInAzure --location westeurope Then, to create the resources, we provide the ARM template: az deployment group create --resource-group SettingUpDemosInAzure --name SampleResources --template-file all-the-things.json --parameters administratorLoginPassword="SomePassword!" Let's briefly go through the parameters we're passing to az deployment group create. --resource-groupindicates the resource group where the resources should be created. --nameis the name of the deployment. It's important to identify the deployment, so if we apply the template (or iterations of it) multiple times, the resource manager knows it should apply to the same one, and not deploy something new. --template-fileis where we pass the ARM template file. --parameterslet's us specify values for the parameters we use inside the template file. In this example we're passing in the admin password for Azure SQL. We can also pass parameters using parameter files, which can be particularly useful if we want to have multiple parameter files, one per environment for example. In the end, when we want to tear everything down, we can just delete the resource group: az group delete --name SettingUpDemosInAzure If we want to delete all resources but keep the group, we can use an empty ARM template and deploy it in complete mode, which unlike the default incremental mode, deletes resources that are not referenced in the template. It would look something like this: az deployment group create --mode Complete --resource-group SettingUpDemosInAzure --name SampleResources --template-file empty.json Some alternatives to ARM templates Before wrapping up, it's probably worth taking a quick look at possible alternatives to ARM templates. Scripting The first two options that come to mind are Azure CLI and Azure PowerShell. Both allow to create resources using commands, so we could write a shell or PowerShell script to automate things. While it's possible to do everything with scripts, not really sure it's the best idea. ARM templates and other alternatives follow a declarative approach, where we define what we want, but then it's up to the thing that uses them to check if things are in place, deploy what's needed, remove what's not, adjust as required, all automatically, based on the current configuration and the one being applied. Using scripts falls more on the imperative side, where we tell which things to do. This means that to achieve the same iterative approach, we'd need to do it manually (e.g. does this thing exist? delete it, create this new thing, ...), which is much less appealing. Declarative Established that it's probably better to go with a declarative approach, what can we use besides ARM templates? - Project Bicep - Terraform - Pulumi - A bunch more, this isn't supposed to be an exhaustive list Project Bicep is developed by Microsoft as a DSL (domain specific language) for deployment of Azure resources. Bicep code is transpiled to the standard JSON files we saw earlier. I thought about going with Bicep, but when I started looking at things it was still experimental, and as I just want to prepare some resources, drop them in a repo and move on, decided against it. Terraform is pretty well known in the infrastructure as code space, supporting not only multiple cloud providers, but also Kubernetes. It uses a specialized programming language called Hashicorp Configuration Language (HCL). Finally there's Pulumi, which is also cloud provider agnostic (and also supports Kubernetes). Unlike Terraform, Pulumi provides SDKs for multiple programming languages, so we can use C# to define our infrastructure, just like we used it to define the build using Nuke. Next time I need to setup something like this, I'll probably try Pulumi out. Outro There's much more to ARM templates than what I described in this post, but I hope is a good enough intro to the topic and motivates you to look further into infrastructure as code. In summary, we looked at: - Why use ARM templates or other infrastructure as code tools - Overview of ARM templates - How to get started using them - Some alternative infrastructure as code tools Along with the ARM stuff, you'll also find a simple example application in the GitHub repository, making use of all the resources created with the developed template. Final reminder that the samples don't necessarily follow the best cloud architecture practices, particularly regarding security, as the focus of these posts is on setting up an environment for coding demos, not to be production ready. Links in the post: - What are ARM templates? - ARM template documentation - Infrastructure as code - Wikipedia - Azure Quickstart Templates - Azure Quickstart Templates - GitHub repository - Azure Resource Manager (ARM) Tools extension for Visual Studio Code - Azure App Service - Azure Functions - Azure SQL Database - Azure Cosmos DB - Azure Service Bus - Azure Event Grid - Azure Event Hubs - Apache Kafka - String functions for ARM templates - Create Resource Manager parameter file - Azure CLI - Azure PowerShell - Project Bicep - Terraform - Pulumi The source code for this post is in the SettingUpDemosInAzure repository. Thanks for stopping by, cyaz! Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/joaofbantunes/setting-up-demos-in-azure-part-1-arm-templates-31ee
CC-MAIN-2021-17
refinedweb
3,019
52.09
Spring XD Singlenode Forked from spring-xd-docker/image-singlenode Contains up-to-date Spring XD distro. The single node docker image is the easiest XD deployment in which to get started. It runs everything you need in a single container. In this README we will discuss how to deploy the singlenode and how to create a series of basic streams. To do this we will also use the XD shell. The shell is a more user-friendly front end to the REST API which Spring XD exposes to clients. Retrieving the images To retrieve the single node image execute the following : docker pull maxromanovsky/springxd-singlenode To retrieve the shell image execute the following : docker pull maxromanovsky/springxd-shell Start the singlenode and the shell Start the singlenode To start it, you just need to execute the following command: docker run --name springxd-singlenode \ -d \ -p 9393:9393 \ maxromanovsky/springxd-singlenode Note how we exposed the 9393 port above. This is the http port that will receive http requests from the shell. Now let's observe singlenode's log by executing the following: docker logs -f springxd-singlenode Start the shell Now from a new terminal lets start the shell: docker run --name springxd-shell \ -it \ maxromanovsky/springxd-shell You should see the following prompt, especially if your docker daemon is not running on localhost (e.g. using boot2docker): ._____ __ _______ / ___| (-) \ \ / / _ \ \ `--. _ __ _ __ _ _ __ __ _ \ V /| | | | `--. \ '_ \| '__| | '_ \ / _` | / ^ \| | | | /\__/ / |_) | | | | | | | (_| | / / \ \ |/ / \____/| .__/|_| |_|_| |_|\__, | \/ \/___/ | | __/ | |_| |___/ eXtreme Data 1.0.1.BUILD-SNAPSHOT | Admin Server Target: ------------------------------------------------------------------------------- Error: Unable to contact XD Admin Server at ''. Please execute 'admin config info' for more details. ------------------------------------------------------------------------------- Welcome to the Spring XD shell. For assistance hit TAB or type "help". server-unknown:> To connect to the singlenode execute the following command from the prompt: server-unknown:> admin config server http://<host>:9393 Replacing <host> with your host running Docker. Create a ticktock stream In this simple example, the time source simply sends the current time as a message each second, and the log sink outputs it using the logging framework at the WARN logging level. From the shell's xd:> prompt, type the following and press return stream create --name ticktock --definition "time | log" --deploy Now view the logs -f output from the singlenode you should see: 20:18:51,104 INFO DeploymentsPathChildrenCache-0 server.ContainerRegistrar - Deploying module 'time' for stream 'ticktock' 20:18:51,277 INFO DeploymentsPathChildrenCache-0 server.ContainerRegistrar - Deploying module [ModuleDescriptor@6ded4936 moduleName = 'time', moduleLabel = 'time', group = 'ticktock', sourceChannelName = [null], sinkChannelName = [null], sinkChannelName = [null], index = 0, type = source, parameters = map[[empty]], children = list[[empty]]] 20:18:52,360 INFO task-scheduler-1 sink.ticktock - 2014-10-07 20:18:52 20:18:52,384 INFO Deployer server.StreamDeploymentListener - Deployment status for stream 'ticktock': DeploymentStatus{state=deployed} 20:18:52,387 INFO Deployer server.StreamDeploymentListener - Stream Stream{name='ticktock'} deployment attempt complete 20:18:53,362 INFO task-scheduler-5 sink.ticktock - 2014-10-07 20:18:53 20:18:54,363 INFO task-scheduler-6 sink.ticktock - 2014-10-07 20:18:54 20:18:55,364 INFO task-scheduler-6 sink.ticktock - 2014-10-07 20:18:55 To destroy the stream go back to the shell and from the xd:> prompt, type the following and press return stream destroy ticktock It is also possible to stop and restart the stream instead, using the undeploy and deploy commands. Create a http source stream In this example, we will create a http source that will listen for http posts on the 9000 port and will write the results to our log. Cleanup First lets stop our last example. - Stop the logs -f. To do this just ctrl-c to stop the log tailing. Now we want to stop our singlenode instance and it can be done by executing the following: docker stop springxd-singlenode Create the hello world http stream Now lets start up our singlenode with the 9000 port open and this time will set our name for the container to be httpSourceTest: docker run --name httpSourceTest \ -d \ -p 9393:9393 \ -p 9000:9000 \ maxromanovsky/springxd-singlenode Again now let's monitor our httpSourceTest instance by executing the following: sudo docker logs -f httpSourceTest Notice we are not restarting our shell. This is because the shell is not making a sustained connection to the singlenode, but rather executing individual rest calls. So to create the stream to receive http posts, go to the the shell and from the xd:> prompt, type the following and press return: stream create httpsource --definition "http | log" --deploy Now lets post a http "hello world" message to XD httpSourceTest. From the shell xd:> prompt, type the following and press return (you could also use curl -d if you wanted). http post --target http://<host>:9000 --data "hello world" Replacing <host> with your host running Docker. The result you will see in the httpSourceTest log will be: 21:06:08,208 INFO pool-11-thread-4 sink.httpsource - hello world Writing data to a file Continuing with theme above where we receive data via http, we will replace the log sink with a file sink. By default XD will write all files to the /tmp/xd/output directory. But in order for us to view the resulting file, we will mount a directory on our machine to the /tmp/xd/output directory in the container. Cleanup First lets stop our last example. - Stop the logs -f. To do this just ctrl-c to stop the log tailing. Now we want to stop our singlenode instance and it can be done by executing the following: docker stop httpSourceTest Create the stream with a http source and file sink Now when we start our singlenode we will mount a local directory to the /tmp/xd/output directory in the container. docker run --name fileSinkTest \ -d \ -p 9393:9393 \ -p 9000:9000 \ -v <dir on your machine>:/tmp/xd/output \ maxromanovsky/springxd-singlenode Replacing <dir on your machine> with a directory on your docker machine. So to create the stream that will receive http posts and write the results to a file, go to the the shell and from the xd:> prompt, type the following and press return: stream create httpfilestream --definition "http | file" --deploy Now lets post a http "hello world" message to XD fileSinkTest and have it write the result to /tmp/xd/output/httpfilestream.out file. From the shell xd:> prompt, type the following and press return http post --target http://<host>:9000 --data "hello world" Replacing <host> with your host running Docker. From a new terminal you should be able to view the output file and it will be located in your <dir on your docker machine>/httpfilestream.out Unless a dir is specified for the file sink it will always write its results to the /tmp/xd/output directory. Also if no file name is specified it will use the stream name as the base for the output file name. So in this case our stream name was httpfilestream and thus the files name will be httpfilestream.out Additional Resources To read more about the modules (sources, processors, sinks & jobs) that are available to you from XD as well as the Docker XD Guide please checkout XD's wiki at.
https://hub.docker.com/r/maxromanovsky/springxd-singlenode/
CC-MAIN-2018-09
refinedweb
1,232
59.03
import "golang.org/x/perf/storage/app" Package app implements the performance data storage server. Combine an App with a database and filesystem to get an HTTP server. app.go local.go query.go upload.go ErrResponseWritten can be returned by App.Auth to abort the normal /upload handling. type App struct { DB *db.DB FS fs.FS // Auth obtains the username for the request. // If necessary, it can write its own response (e.g. a // redirect) and return ErrResponseWritten. Auth func(http.ResponseWriter, *http.Request) (string, error) // ViewURLBase will be used to construct a URL to return as // "viewurl" in the response from /upload. If it is non-empty, // the upload ID will be appended to ViewURLBase. ViewURLBase string // BaseDir is the directory containing the "template" directory. // If empty, the current directory will be used. BaseDir string } App manages the storage server logic. Construct an App instance using a literal with DB and FS objects and call RegisterOnMux to connect it with an HTTP server. RegisterOnMux registers the app's URLs on mux. Package app imports 17 packages (graph) and is imported by 2 packages. Updated 2017-07-14. Refresh now. Tools for package owners.
http://godoc.org/golang.org/x/perf/storage/app
CC-MAIN-2017-34
refinedweb
196
61.53
x += x++; Other. PingBack from For some reason, there's been a lot of buzz lately around immutability in C#. If you're interested in Welcome to the XXXIX issue of Community Convergence. The big news this week is that Microsoft has begun Nice work! One question about "stealing" the base class - what about introducing another layer of abstraction and instead of declaring your type like this: do something like this: public class DateSpan: ValueObject<Record<DateTime, DateTime, bool>> {...} The ValueObject class would then expose the record to as a protected property and pass Equals, ToString... to the Record object. It would make the declarations a little bit more complicated, but your biggest drawback is not a problem anymore. I think I'm missing something. Aren't you still stealing the base class with the ValueObject class? I.E. you cannot declare DateSpan to inherit from a MySpan class. Well, I was thinking about ValueObject more in terms of a type system base class, where you can put your additional behaviour and not spoil the Record class. You are right that still you cannot have a deeper hierarchy of types with this solution. BTW: don't you think it would be nice to have a language support for immutable types? Something like: public readonly class MyClass... where all fields are readonly fields of readonly types? Yeah, you are right. We talked about having first class readonly classes a bunch of times, but we never came up with a proposal we are happy with. We are still discussing the topic. Isn't code generation a much better option at this time, while 1st class readonly classes aren't available? It is an option. I wouldn't say it is better. It has pros and cons (i.e. readibility, amount of code, mantainability). As for me, I prefer library solutions to codegen whenever available (and roughly usable). I think you guys in immutable space have frankly lost it. It is not new, it is common principles that do not apply in all fields, and just because functional attempts are again popular you are hitting on attempting to solve a problem with a wrong tool: .net type system. If you think magically you will somehow parallelise code and algorithms, you are in for a big 'immutable surprise'. No, silver, bullet.. All these posts say is: iff you need an immutable class, here are a bunch of options of how to do it in C#. I don't think I'm claiming anything more than that. Am I? Previous posts: Part I - Background Tuples are a way for you not to name things. In Object Oriented languages
http://blogs.msdn.com/lucabol/archive/2008/01/11/creating-an-immutable-value-object-in-c-part-v-using-a-library.aspx
crawl-002
refinedweb
443
75.61
SPI Communication with PIC Microcontroller PIC16F877A PIC Microcontrollers are a powerful platform provided by microchip for embedded projects; its versatile nature has enabled it to find ways into many applications and is yet to grow a lot. If you have been following our PIC tutorials then you would have noticed we have already covered a wide range of tutorials on PIC microcontroller starting from the very basics. In the same flow we are proceeding to learn the communication protocols available with PIC and how to use them. We have already covered I2C with PIC Microcontroller. In the vast system of embedded applications, no microcontroller can perform all the activities by itself. At some stage of time it has to communicate to other devices to share information, there are many different types of communication protocols to share these information’s, but the most used ones are USART, IIC, SPI and CAN. Each communication protocol has its own advantage and disadvantage. Let’s focus on the SPI Protocol for now since that is what we are going to learn in this tutorial. What is SPI Communication Protocol? The term SPI stands for “Serial Peripheral Interface”. It is a common communication protocol that is used to send data between two microcontrollers or to read/write data from a sensor to a microcontroller. It is also used to communicate with SD cards, shift registers, Display controllers and much more. How SPI Protocol Works? The SPI communication is synchronous communication, meaning it works with the help of a clock signal which is shared between the two devices that are exchanging the data. Also it a full-duplex communication because it can send and receive data using a separate bus. The SPI communication requires 5 wires to operate. A simple SPI communication circuit between a master and slave is shown below The five wires required for the communication are SCK (Serial Clock), MOSI (Master Out Slave In), MISO (Master In Slave Out) and SS (Slave Select). The SPI communication always takes places only between a master and slave. A master can have multiple slaves connected to it. The master is responsible for generating the clock pulse and the same is shared with all slaves. Also all communications can be initiated only by the master. The SCK pin (a.k.a SCL-serial clock) shares the clock signal generates by the master with the slaves. The MOSI pin (a.k.a SDA –Serial Data Out) is used to send the data from the master to the salve. The MISO pin (a.k.a SDI – Serial Data In) is used to get the data from the salve to the master. You can also follow the arrow mark in the above figure to understand the movement of data/signal. Finally the SS pin (a.k.a CS –Chip select) is used when there are more than one slave modules connected to the master. This in can be used to select the required slave. A sample circuit where more than one slave is connected with the master for SPI communication is shown in the circuit below. Difference between I2C and SPI Communication We have already learnt I2C communication with PIC and so we must be familiar with how I2C works and where we can use them like I2C can be used to interface RTC module. But now, why do we need SPI protocol when we already have I2C. The reason is both I2C and SPI communications are advantages in its own ways and hence is application specific. To an extent the I2C communication can be considered to have some advantages over SPI communication because I2C uses less number of pin and it gets very handy when there are a large number of slaves connected to the bus. But the drawback of I2C is that it has the same bus for sending and receiving data and hence it is comparatively slow. So it’s purely based on application to decide between SPI and I2C protocol for your project. SPI with PIC16F877A using XC8 Compiler: Enough of basics, now let us learn how we can use SPI communication on the PIC16F877A microcontroller using the MPLABX IDE and XC8 compiler. Before we start make it clear that this tutorial only talks about SPI in PIC16F877a using XC8 compiler, the process will be the same for other microcontrollers but slight changes might be required. Also remember that for advanced microcontrollers like the PIC18F series the compiler itself might have some library in-built to use the SPI features, but for PIC16F877A nothing like that exists so let’s build one on our own. The library explained here will be given as a header file for download at the bottom which can be used for PIC16F877A to communicate with other SPI devices. In this tutorial we will write a small program that uses SPI communication to write and read data from the SPI bus. We will then verify the same using Proteus simulation. All the code related to SPI registers will be made inside the header file called PIC16f877a_SPI.h. This way we can use this header file in all our upcoming projects in which SPI communication is required. And inside the main program we will just use the functions from the header file. The complete code along with the header file can be downloaded from here. SPI Header File Explanation: Inside the header file we have to initialize the SPI communication for PIC16F877a. As always the best place to start is the PIC16F877A datasheet. The registers which control the SPI communication for PIC16F8777a is the SSPSTAT and the SSPCON Register. You can read more about them on page 74 and 75 of the datasheet. There are many parameters options that has to be chosen while initializing the SPI communication. The most commonly used option is the clock frequency will be set to Fosc/4 and will be done in the middle and the clock will be set as low at ideal state. So we are also using the same configuration for our header file, you can easily change them by changing the respective bits. SPI_Initialize_Master() The SPI initialize Master function is used to start the SPI communication as the master. Inside this function we set the respective pins RC5 and RC3 as output pins. Then we configure the SSPTAT and the SSPCON register to turn on the SPI communications void SPI_Initialize_Master() { TRISC5 = 0; // SSPSTAT = 0b00000000; //pg 74/234 SSPCON = 0b00100000; //pg 75/234 TRISC3 = 0; //Set as output for slave mode } SPI_Initialize_Slave() This function is used to set the microcontroller to work in slave mode for SPI communication. During slave mode the pin RC5 should be set as output and the pin RC3 should be set as input. The SSPSTAT and the SSPCON is set in the same way for both the slave and the master. void SPI_Initialize_Slave() { TRISC5 = 0; // SDO pin should be declared as output SSPSTAT = 0b00000000; //pg 74/234 SSPCON = 0b00100000; //pg 75/234 TRISC3 = 1; //Set as in out for master mode } SPI_Write(char incoming) The SPI Write function is used to write data into the SPI bus. It gets the information from the user through the variable incoming and then uses it to pass to the Buffer register. The SSPBUF will be cleared in the consecutive Clock pulse and the data will sent into the bus bit by bit. void SPI_Write(char incoming) { SSPBUF = incoming; //Write the user given data into buffer } SPI_Ready2Read() The SPI ready to Read function is used to check if the data in the SPI bus is received completely and if it can be read. The SSPSTAT register has a bit called BF which will set once the data has been received completely, so we check if this bit is set if it is not set then we have to wait till it gets set to read anything from the SPI bus. unsigned SPI_Ready2Read() { if (SSPSTAT & 0b00000001) return 1; else return 0; } SPI_Read() The SPI Read is used to read the data from the SPI bus into the microcontroller. The data present in the SPI bus will be stored in the SSPBUF, we have to wait till the complete data is stored in the Buffer and then we can read it into a variable. We check the BF bit of SSPSTAT register before reading the buffer to make sure the data reception is complete. char SPI_Read() //Read the received data { while ( !SSPSTATbits.BF ); // Hold till BF bit is set, to make sure the complete data is read return(SSPBUF); // return the read data } Main program Explanation: The functions explained in the above section will be in the header file and they can be called into the main c file. So let’s write a small program to check if the SPI communication is working. We will just write few data into the SPI bus and use the proteus simulation to check if the same data is being received in the SPI debugger. As always begin the program by setting the configuration bits and then it is very important to add the header file that we just explained into the program as shown below #include <xc.h> #include "PIC16F877a_SPI.h" If you have opened the program from the zip file downloaded above then by default the header file will be present inside the header file directory of your project file. Else you have to add the header file manually inside your project, once added your project files will look like this below Inside the main file we have to initialize the PIC as Master for SPI communication and then inside an infinite while loop we will write random three hex values into the SPI bus to check if we receive the same during simulation. void main() { SPI_Initialize_Master(); while(1) { SPI_Write(0X0A); __delay_ms(100); SPI_Write(0X0F); __delay_ms(100); SPI_Write(0X15); __delay_ms(100); } } Notice that the random values used in the program are 0A, 0F and 15 and they are hex values so we should see the same during simulation. That is it the code is all done, this is just a sample but we can use the same methodology to communicate with other MCU or with other sensors module operating on SPI protocol. Simulating PIC with SPI debugger: Now that our program is ready we can compile it and then proceed with simulation. Proteus has a nice handy feature called SPI debugger, which can be used to monitor the data over an SPI bus. So we use the same and build a circuit like shown below. Since there is only one SPI device in the simulation we are not using the SS pin and when not used it should be grounded as shown above. Just load the hex file into the PIC16F877A microcontroller and click on the play button to simulate our program. Once the simulation starts you will get a pop-up window which displays the data in the SPI bus as shown below Let’s take a closer look at the data coming in and check if it is the same as the one that we wrote in our program. The data is received in the same order that we wrote in our program and the same is highlighted for you. You can also try simulating a program to communicate with two PIC microcontrollers using SPI protocol. You have to program one PIC as the master and the other as the slave. All the required header files for this purpose are given in the header file already. This is just a glimpse of what SPI can do, it can also read and write data to multiple devices. We will cover more about SPI in our upcoming tutorials by interfacing various modules that work with SPI protocol. Hope you understood the project and learnt something useful from it. If you have any doubts post them in the comment section below or use the forums for technical help. Complete Main code has been given below; you can download header files with all the code from here /* * File: PIC_SPI.c * Author: Aswinth * * Created on 15 May, 2018, 1:46 PM */ // = OFF // Low-Voltage In-Circuit Serial Programming Enable bit #pragma config CPD = OFF // Data EEPROM Memory Code Protection bit #pragma config WRT = OFF // Flash Program Memory Write Enable bits #pragma config CP = OFF // Flash Program Memory Code Protection bit #include <xc.h> #include “PIC16F877a_SPI.h” #define _XTAL_FREQ 20000000 void main() { SPI_Initialize_Master(); while(1) { SPI_Write(0X0A); __delay_ms(100); SPI_Write(0X0F); __delay_ms(100); SPI_Write(0X15); __delay_ms(100); } } Read more Detail:SPI Communication with PIC Microcontroller PIC16F877A JLCPCB – Prototype 10 PCBs for $2 (For Any Color) China’s Largest PCB Prototype Enterprise, 600,000+ Customers & 10,000+ Online Orders Daily See Why JLCPCB Is So Popular:
https://pic-microcontroller.com/spi-communication-with-pic-microcontroller-pic16f877a/
CC-MAIN-2019-18
refinedweb
2,125
66.47
Often times when you are building an application you need to hook multiple components together in such a way that when one component changes others must do something. When you are building custom components there is often the temptation to build a custom set of listeners to go along with it. This seems like good component etiquette; after all this is how most of the javax.swing.* components are built. Still, that's a big pain to create new listener types that must be implemented, just for observing simple changes. Plus it tightly couples your classes which can make your code brittle when making changes later. There must be a better way. And there is! javax.swing.* PropertyChangeListeners PropertyChangeListener All Swing components implement the property pattern, meaning that there is an addPropertyChangeListener() method on all subclasses of JComponent. There is even a special version which lets you listen to just a particular property by passing the property name into addPropertyChangeListener() along with your listener. Most component properties are already set up to send events when they change. This includes things like the text of a JTextField and the background color of a JButton. JComponent also has a firePropertyChange() method which lets your custom Swing components send their own change events without ever having to much with event classes. The result is a very easy way to hook components together with a minimum of new code, and no new interfaces or classes. addPropertyChangeListener() JComponent JTextField JButton firePropertyChange() Here's an example. I was working on a tiny bitmap tile editor. There is a "+" button which lets you make the grid a bit bigger. When it's pressed the grid editor panel needs to make itself bigger. When the grid becomes bigger a small label needs to reflect the new size of the grid. You can see a chain of events here that have to be implemented. I could create custom events (like GridSizeChangeEvents or something equally horrendous), but that's a lot of work for what boils down to "update yourself". Let's see how it would work using the PropertyChangeListener instead. GridSizeChangeEvents First, I create the editor. It is a custom JPanel with a method called setGridwidth(). Each time the grid width is changed it rebuilds the internal grid data and then fires a property change event about it. Chose to set the property from true to false, though it doesn't really matter what I send as long as it's different. I just want to indicate that a change has happened. I have un-creatively named the property "grid" and anyone else can listen to it. Here is what it looks like: JPanel setGridwidth() "grid" public class TileBuilderEditorPanel extends JPanel { ... public void setGridwidth(int gridwidth) { this.gridwidth = gridwidth; rebuildGrid(); } private void rebuildGrid() { int[][] newgrid = new int[gridwidth][gridheight]; ... this.gridcells = newgrid; repaint(); firePropertyChange("grid",false,true); } } Back in my main class I want to update the label whenever the grid size changes. This is simple with an anonymous listener class like this: editor.addPropertyChangeListener("grid", new PropertyChangeListener() { public void propertyChange(PropertyChangeEvent propertyChangeEvent) { viewer.repaint(); grid_size.setText(realeditor.getGridwidth() + " x " + realeditor.getGridheight()); } }); Notice that the addPropertyChangeListener takes an argument to specify the property to listen for. By telling the component that I only want grid events I don't have put a check for the property name inside of my listener. Since I don't actually care what the grid changed to, just that it changed at all, I don't need to mess with the PropertyChangeEvent (though the information is there if I want it). addPropertyChangeListener grid PropertyChangeEvent Now I can create an action listener on the "+" button that will make the grid width one cell bigger whenever the width_plus button is pressed. width_plus private void width_plusActionPerformed(java.awt.event.ActionEvent evt) { // TODO add your handling code here: editor.setGridwidth(realeditor.getGridwidth()+1); } Note, the width_plusActionPerformed method is generated by Netbean's GUI builder and then attached to the actual width_plus button behind the scenes, which means I have even less code to write. width_plusActionPerformed Now when I press the button it will call editor.setGridwidth() which will update the internal data structure, repaint itself, then fire a grid property event. An anonymous listener will look for the event and then update the label. It's that simple! editor.setGridwidth() Here's what the (currently unfinished) application looks like: For more on PropertyChangeListeners you can read the javadocs here or this section of the JavaBeans tutorial. Most of the state of any component is stored in the model which most likely will not be a Component or JComponent but a cmplex data structure. Whenever data is updated in the model , the view has to change which is possible only by writing custom events and event handlers. Posted by: psychostud on February 27, 2006 at 10:38 AM It's true that complex components usually have complicated internal data structures, however I have found that my other components usually only want to listen to a particular part of the data structure, not the whole thing. PropertyChangeListenters let me do this. It's not the complete solution of course and there are certainly times where I do create a full model with events, but this is a nice tool to have in my box sometimes. Posted by: joshy on February 27, 2006 at 10:06 PM psychostud: In addition to what Josh added, I've found that using the JavaBeans pattern for my domain data is really useful. The oft untold part of the JavaBeans API is the use of property change events. I like to make my custom beans extend a common supertype that does nothing but add: public addPropertyChangeListener(PropertyChangeListener pcl); public removePropertyChangeListener(PropertyChangeListener pcl); protected firePropertyChanged(String name, Object old, Object new); I can then easily reuse property change listeners for all of my normal event notification. Even in situations where changing one property via setXXX ends up altering multiple properties, this is useful: public void setXxx(Object xxx) { Object old = this.xxx; Object oldyyy = getYyy(); Object oldzzz = getZzz(); this.xxx = xxx; firePropertyChange("xxx", old, this.xxx); firePropertyChange("yyy", old, getYyy()); firePropertyChange("zzz", old, getZzz()); } The other nice thing about property change listeners is that they are generic. I can attach one listener to a bean and watch many events. Perhaps I want to keep a log of all events fired. Or maybe I'm writing a generic library to track which properties have changed so that I know whether the bean has been edited and needs to be saved. Etc. Posted by: rbair on February 28, 2006 at 08:46 AM Hi Josh, This includes things like the text of a JTextField This seems like an obvious one that would be a bound property (a property that fires a property change event is bound), but in truth the text property of a JTextComponent is not bound. Here's the doc: * Note that text is not a bound property, so no PropertyChangeEvent * is fired when it changes. To listen for changes to the text, * use DocumentListener. From your code: firePropertyChange("grid",false,true); The bean spec says the name used in the property change event should correspond to the name of the property. In your case the method is setGridwidth, which would correspond to a property name of gridwidth. Additionally the values in the property change event should correspond to that of the property. In your code you're firing the change with a Boolean, it should be an int. I'm sure you're code works as is, so why do these changes? Consistancy! Without looking at your code I would expect that if you have a setGridwith method I could listen for changes by way of the gridwidth property and that the values would be Integers (or null). Additionally the beans spec is really nothing more than a pattern, and to say your object is bean implies you're following the pattern. Here's what your code should look like: public void setGridwidth(int gridwidth) { int oldWidth = this.gridwidth; this.gridwidth = gridwidth; rebuildGrid(); firePropertyChange("gridwidth", oldWidth, gridwidth); } firePropertyChange will only notify listeners if the old and new values differ, but not your rebuild code. You may as well further refine this to be: public void setGridwidth(int gridwidth) { if (gridwidth != this.gridwidth) { int oldWidth = this.gridwidth; this.gridwidth = gridwidth; rebuildGrid(); firePropertyChange("gridwidth", oldWidth, gridwidth); } } -Scott Posted by: zixle on March 08, 2006 at 06:34 AM
http://weblogs.java.net/blog/joshy/archive/2006/02/all_hail_the_pr.html
crawl-002
refinedweb
1,411
53.92
Coding Conventions The golden rule is to follow whatever conventions are already being used in the module you're editing. If you. Writing a compiler is hard enough without adding more things to worry about. General Rules - Tabs are 8 spaces. No discussion will be entered into on this point. - We prefer literal tabs at the start of lines instead of hard spaces. - Each top-level definition should have a comment explaining what it is for. One liners are fine. - Running comments in the body of functions are encouraged. Write down what you were expecting the code to do when you wrote it, so it reads like a story. Aim for about 1 comment line every 5-10 code lines, depending on how complex the code is. - If a function does several things in a regular way, then it should look like') Haskell / Disciple Specifics Try to put the do on the same line as the = Use this: fun x = do y <- thing return (x + y) Instead of this: fun x = do y <- thing return (x + y) = do restOfThing1 restOfThing2 Note that the continuation is named after the initial one, with a "_good" suffix that identifies the preconditions that the initial one sets up.
http://trac.haskell.org/ddc/wiki/Development/CodeConventions?version=2
CC-MAIN-2016-26
refinedweb
204
69.82
CS1411 - 160 - Programming Principles I, Spring 2005 Lab assignment 1 Motivation This weeks you will get started writing and compiling simple C++ programs Mandatory work Your TA will check that you have done the following things (you may leave once their done) : Write a simple program that will output: This is my first program! (or something similar) with both the XCode and the Visual Studio environment! (in Visual Studio select Win32 console application, in XCode select C++ Tool) Write a program that will ask the user for its name and then prints Hello, Max! It is nice to meet you in two lines as shown (where it will print the inputed name instead of Max) Optional work Some other things that you might try: - change the two-lines program so that you use only one cout statement (other than the asking for the name) - change the two-lines program so that you use one cout line for every word that is outputted - Try inputting an empty name - Try adding a second variable: string city;. Ask the user: Where are you from? and output something like Hi Max, glad you came all the way from Lubbock to be here! - Open Explorer (Windows) or Finder (Mac) and try to find your program on the hard disk. It should be a file with the extension .cpp. Open that file with a text editor (notepad on windows, textedit on Mac). - At home: Install Visual Studio or XCode on your machine. Write a simple program there. How? Please check the labs page on how to get started. There are instructions for visual studio (use version 6) and XCode Please give the Macs a try. If you simply cannot work with them, go to PE 119, but let your TA know. Sample programs Here are two sample programs from class to get you started #include <iostream> using namespace std; int main() { cout << "Hello, world!" << endl; return 0; } #include <iostream> #include <string> using namespace std; int main() { string name; cout << "What is your name? "; cin >> name; cout << "Good morning, " << name << "!" << endl; } For help: ask your TA! He is here to help you! If the instructions are unclear or you have any other questions, please email me
https://max.berger.name/teaching/s05/lab1.html
CC-MAIN-2021-21
refinedweb
369
78.18
Re: Bad system call: aio_read() On Sat, 12 Oct 2002 11:57:23 -0400, Craig Rodrigues [EMAIL PROTECTED] said: I am trying to port the ACE library ( ) to FreeBSD-CURRENT, and it is very confusing that code in -STABLE which compiled and worked, does not work the same way in -CURRENT. Re: devfs oddity? On Sun, 6 Oct 2002 06:08:44 -0400 (EDT), Matthew N. Dodd [EMAIL PROTECTED] said: Has our CDROM driver ever supported multiple ISO filesystems per CD? Has it supported multi-session CDROMs? The notion of partitions on CDROMs is a little ambiguous. I'm hoping that GEOM can improve this. Re: Journaled filesystem in CURRENT On Thu, 26 Sep 2002 20:06:00 -0700, David O'Brien [EMAIL PROTECTED] said: On Thu, Sep 26, 2002 at 09:13:41PM +0200, Alexander Leidinger wrote: Yes, bg-fsck isn't really usable at the moment. They work fine for me for quite a while. The last buildworld on my server was Sept 15th. Worked `lorder' problem Anyone experiencing this problem might want to try the following (beware cutpaste). I still don't understand why it is that I don't see it. Is there a hidden build dependency? (I.e., does `sort' need to be added to the list of build-tools?) I'm to tired right now to look at ncurses, but it Re: Who broke sort(1) ? On Tue, 24 Sep 2002 13:30:11 -0700, Peter Wemm [EMAIL PROTECTED] said: Oh man, this is going to suck. There are thousands and thousands of third party scripts that use +n syntax. I am most unhappy with this change. :-( The time to complain about it was back in 1992when the old syntax was Re: Who broke sort(1) ? On Tue, 24 Sep 2002 14:09:31 -0700, Bill Fenner [EMAIL PROTECTED] said: When's the first time the FreeBSD sort(1) man page mentioned that this syntax was deprecated? Can we at least start from there? It does not appear to have ever been properly documented. I don't object to maintaining Re: Who broke sort(1) ? On Tue, 24 Sep 2002 14:26:43 -0700, Peter Wemm [EMAIL PROTECTED] said: Closed payware standards do not count as 'fair warning'. I still have never been able to see a posix standard. Go to a library. Or go to and register for free on-line access. -GAWollman To Re: Crashdumps available for download ... please help On Wed, 18 Sep 2002 08:27:08 +0200 (CEST), Martin Blapp [EMAIL PROTECTED] said: 10. Upgraded to gcc3.2. I was seeing now some SIG11 during builds, and - panics ! Softupdates and fs panics mostly. I turned off softupdates. The panic was different, but all the time it was in mmap. Re: No way to tell when `long long' is or is not supported? On Mon, 9 Sep 2002 01:10:46 -0700, David O'Brien [EMAIL PROTECTED] said: Looking at GCC on other platforms, _LONGLONG seems to be the most preferred symbol. How does this patch look? Works for me. I'd still like to see `-posix' go away, if we're going to be changing freebsd-spec.h further. Re: No way to tell when `long long' is or is not supported? On Mon, 9 Sep 2002 00:07:00 -0700, David O'Brien [EMAIL PROTECTED] said: It seems to work for me: $ cat foo.c #ifdef __STRICT_ANSI__ #error __STRICT_ANSI__ #endif $ /usr/bin/cc -ansi foo.c foo.c:2:2: #error __STRICT_ANSI__ OK, so this is now one of those magic No way to tell when `long long' is or is not supported? GCC used to define a macro __STRICT_ANSI__ when `-ansi' was given on the command line. The current version does not do this, which breaks detection of whether `long long' is allowed. (For some reason this is not hit in -current builds, but I have made some fixes to stdlib.h which trigger it in Another compiler bug? World-testing some changes to libc, I had g++ bomb out with the following assertion: In file included from /usr/obj/usr/src/i386/usr/include/g++/locale:46, from /usr/obj/usr/src/i386/usr/include/g++/bits/ostream.tcc:37, from Re: web browsers (was: Re: aout support broken in gcc3)). Re: aout support broken in gcc3 On Tue, 3 Sep 2002 23:32:22 +0100 (BST), Richard Tobin [EMAIL PROTECTED] said: So they need a C compiler that can generate a.out format .o files, and a linker that can link a.out format .o files against an a.out format executable. Not necessarily. There is always `objcopy', at least for Re: aout support broken in gcc3. Re: cvs commit: src/release/i386 drivers.conf On Wed, 24 Jul 2002 20:28:02 +0200 (SAT), John Hay [EMAIL PROTECTED] said: We can save some space (I think) by not gziping the individual help files, but leave them unzipped. Then the final gzip of the whole image should be able to do a better job of it. But I doubt if it will give us enough Re: bug in awk implementation? [Since you insisted on CC'ing me...] On Tue, 16 Jul 2002 16:57:42 -0700 (PDT), Gordon Tetlow [EMAIL PROTECTED] said: No, you are quoting from the gawk(1) man page. The awk(1) man page makes no such statement. The awk(1) manual page does not define the correct behavior of gawk(1). IEEE Std. Re: bug in awk implementation? On Mon, 15 Jul 2002 09:06:36 -0700 (PDT), Gordon Tetlow [EMAIL PROTECTED] said: Ah, okay, there is a distinct lack of documentation to that fact. I have figured out that I can just set RS= and that does the same thing. I suppose it would be helpful to have an awk book around. =) The cvs commit: src/lib/libc/gen statvfs.c On Thu, 11 Jul 2002 15:54:12 -0700 (PDT), Garrett Wollman [EMAIL PROTECTED] said: wollman 2002/07/11 15:54:12 PDT Added files: lib/libc/gen statvfs.c Log: A simple implementation of statvfs(3) (one step above the trivial one). Not yet connected to the build additional queue macro On Tue, 2 Jul 2002 09:54:02 -0500, Jonathan Lemon [EMAIL PROTECTED] said: Essentially, this provides a traversal of the tailq that is safe from element removal, while being simple to drop in to those sections of the code that need updating, as evidenced in the patch below. The queue macros Re: additional queue macro On Tue, 2 Jul 2002 16:07:36 -0700 (PDT), Julian Elischer [EMAIL PROTECTED] said: I would by the way argue that the statement The queue macros always guaranteed that traversal was safe in the presence of deletions to be false. Nowhere was this guaranteed, in fact the Manual page goes to Re: duplicate -ffreestanding in kernel build On Sat, 15 Jun 2002 12:49:29 -0700, Maxime Henrion [EMAIL PROTECTED] said: IIRC, -ffreestanding prevented GCC3 from being stupid optimizations like `-ffreestanding' tells the compiler that it is to operate as a free-standing implementation (in the words of the C standard); i.e., that there is Re: Updating GNU Tar in the base system On Tue, 4 Jun 2002 13:36:17 +1000, Tim J. Robbins [EMAIL PROTECTED] said: Having two tar's, two cpio's, two awk's, gzip and zlib in the base system is bloat. It may be bloat, but it's tolerable bloat that our users are well-accustomed to. -GAWollman To Unsubscribe: send mail to [EMAIL State of the ports collection On Mon, 3 Jun 2002 13:42:24 -0700, Kris Kennaway [EMAIL PROTECTED] said: * (35 ports) Something caused sys_nerr to change prototypes. It looks like this might be because the definition of __const from sys/ctypes.h has changed, but I can't see why. See for example Mutex statistics script [Please direct followups to -chat.] On 26 May 2002 02:12:11 +0200, Dag-Erling Smorgrav [EMAIL PROTECTED] said: Here's a list of the ten most frequently acquired mutices (over a ObLanguagePeeve: ``Mutex'' is a portmanteau of ``MUTual EXclusion''; a Latinate plural is thus entirely Re: Junk in new gcc include path which confuses AC_CHECK_HEADER in some Re: Keywords: pre-GCC3 tcsh coredump free/malloc reentrancy signal On Sun, 12 May 2002 23:27:42 +0200, Poul-Henning Kamp [EMAIL PROTECTED] said: The correct solution is probably to set a flag in the signal handler and resize the buffer before the next line is read. Or, somewhat less optimally, to block SIGWINCH (and any other signals with similar handler Re: alpha tinderbox failure On Sun, 12 May 2002 00:37:05 +0300, Giorgos Keramidas [EMAIL PROTECTED] said: It is ugly. I'm not sure if it's non-standard too. Duff's device was valid in C89. I can't speak for whether C99 has broken this. That's not necessarily a bad thing, since most of the time people use it to prove Re: cc1 crashes with SIGBUS while building XFree86-Server-4.2.0_2 On Wed, 1 May 2002 13:32:35 +0200, Jose M. Alcaide [EMAIL PROTECTED] said: I have been building XFree86 without problems, I just rebuilt both -current (Friday or Saturday timeframe) and all of X (last night) without a problem. (Well, other than all of the old binaries I need to recompile Re: new expr(1) behaviour breaks libtool On Sun, 21 Apr 2002 20:17:29 +0200 (SAT), John Hay [EMAIL PROTECTED] said: expr -lgrove : -l\(.*\) expr -- -L/export/ports/textproc/jade/work/jade-1.2.1/lib/.libs : -l\(.*\) If we are going to leave this behaviour, we will have to teach libtool how to call expr(1) differently on -stable New expr(1) breaks ports On Sun, 24 Mar 2002 16:59:36 -0800, Kris Kennaway [EMAIL PROTECTED] said: expr --prefix=/usr/local : -*prefix=\(.*\) expr: syntax error Is expr to blame, or w3m? w3m is to blame. See expr(1) for more details and a workaround which is portable to both historic and POSIX expr Re: gcc -O broken in CURRENT [Unnecessary carbon copies trimmed.] On Fri, 15 Mar 2002 22:41:26 -0500, Brian T.Schellenberger [EMAIL PROTECTED] said: If you mean the FreeBSD-native netscape 4.x; yes, it's perfectly silly to run *that*. I don't see anything silly about it. It works with all the Web sites I care about Re: HEADS UP: cvs commit: src/sys/conf kern.pre.mk (fwd) On Mon, 25 Feb 2002 23:35:12 -0700 (MST), M. Warner Losh [EMAIL PROTECTED] said: volatile int conspeed; int *foo = conspeed; The answer to this is Not all warnings are indicative of errors. It is unreasonable to expect all warnings to be removed, since the compiler has insufficient Re: rdr 127.0.0.1 and blocking 127/8 in ip_output() On Thu, 14 Feb 2002 11:09:41 +0200, Ruslan Ermilov [EMAIL PROTECTED] said: ping -s 127.1 1.2.3.4 telnet -S 127.1 1.2.3.4 If someone explicitly overrides source-address selection, they are presumed to know WTF they are doing, and the kernel should not be trying to second-guess them. rdr 127.0.0.1 and blocking 127/8 in ip_output(). -GAWollman To Unsubscribe: send Re: rdr 127.0.0.1 and blocking 127/8 in ip_output() On Wed, 13 Feb 2002 17:58:51 +0200, Ruslan Ermilov [EMAIL PROTECTED] said: RFC1122 requires the host to not send 127/8 addresses out of loopback, whether or not its routes are set up correctly. As we have already seen, there is not consensus on this particular issue, or on the general issue Re: function name collision on getcontext with ports/editors/joe On Mon, 11 Feb 2002 12:16:44 -0500 (EST), Daniel Eischen [EMAIL PROTECTED] said: How do you easily forward declare something that is a typedef? There is a reason style(9) says not to use such typedefs. Unfortunately, this one it written into a standard. Since We Are The Implementation, there Support for atapi cdrw as scsi in -current? On Sat, 02 Feb 2002 20:10:20 +, [EMAIL PROTECTED] said: I noticed a patch on freebsd-scsi a while back that added a not very complete form of atapi as scsi support to the freebsd kernel. Are there plans to complete this and add it to -current sometime before -current turns into Two recent lock order reversals Sources about a week old: 1st 0xc4074c34 filedesc structure @ ../../../kern/kern_descrip.c:925 2nd 0xc03946e0 Giant @ ../../../kern/kern_descrip.c:959 1st 0xc3f3dd00 pcm0 @ ../../../dev/sound/pcm/sound.c:132 2nd 0xc3f3db40 pcm0:play:0 @ ../../../dev/sound/pcm/sound.c:189 -GAWollman To Re: Processes hanging in ``inode' state On Sat, 26 Jan 2002 19:08:04 +0100 (CET), [EMAIL PROTECTED] said: I noticed it since last summer, too. Using both vi and vile. I have not seen it recently. My most recent crashes all involve triple-faults. A new machine, running very fresh -current, hasn't been up for long enough to evoke spam On Thu, 20 Dec 2001 20:46:16 -0600, Joe Halpin [EMAIL PROTECTED] said: Is this just a normal part of being on the list? Yes, lots of spammers spew at FreeBSD mailing-lists. If you can identify a persistent source of spam, the postmaster is fairly responsive in filtering them. -GAWollman To Re: Still panic() with userland binary on CURRENT Re: libfetch kqueue patch On Mon, 26 Nov 2001 15:27:45 -0500 (EST), Andrew R. Reiter [EMAIL PROTECTED] said: As from OpenBSD (in shorter form): fd_set *fds = calloc(howmany(fd+1, NFDBITS), sizeof(fd_mask)); But this is not portable. The application is not allowed to assume anything about the structure of an Re: libfetch kqueue patch On Thu, 22 Nov 2001 13:02:23 +0200, Maxim Sobolev [EMAIL PROTECTED] said: For what it's worth, it also makes code less portable. On the other hand, it would also make libfetch useful in a larger variety of applications; viz., those which have so many file descriptors open that the one used by Re: re-entrancy and the IP stack. On Fri, 16 Nov 2001 16:13:41 -0800 (PST), Julian Elischer [EMAIL PROTECTED] said: (and anyhow Garrett got rid of the 'static' uses of mbufs, not 'travelling' 'per packet' uses..) Only because I did not have the time or stomach then to introduce `struct packet' everywhere. All of the queueing namespace pollution with struct thread? On Mon, 12 Nov 2001 14:01:35 -0800, Steve Kargl [EMAIL PROTECTED] said: I WINE developer has suggested that this is namespace pollution on the part of FreeBSD, but he hasn't given any details to support what he means. Applications which include sys/user.h, or any other non-standard header Re: malloc.h On Sat, 10 Nov 2001 14:52:22 +0100, Jens Schweikhardt [EMAIL PROTECTED] said: As I understand it, the only problem is if some implementation indicates non-conformance with #define __STDC__ 0, which is unheard of to me, and, if I were an implementor of such a system, I'd just leave it Re: devfs question On Sat, 27 Oct 2001 11:32:07 +0200, Poul-Henning Kamp [EMAIL PROTECTED] said: Right, but the only way to get an error message is to let /sbin/init die and have the kernel print the message. /sbin/init cannot print the message when there is no /dev/console can it ? Yes, it can, if the kernel Re: cu(1) (Was: Re: cvs commit: src/etc/mtree BSD.var.dist) On Fri, 26 Oct 2001 17:59:33 +0100, Mark Murray [EMAIL PROTECTED] said: Do you have a problem with cu being a port and not in the base system? (ie, a port that gives you _just_ cu with no other UUCP crap?) I think that's a POLA question; I have no fundamental objection. -GAWollman To Re: panic: blockable sleep lock (sx) allproc @ /usr/local/src/sys/kern/kern_proc.c:212 On Sun, 7 Oct 2001 14:49:16 +1000 (EST), Bruce Evans [EMAIL PROTECTED] said: Is using xconsole significantly better than tail -f /var/log/messages? I don't know. I think `xterm -C' is better than either one, if it can be made to work properly. (I have held off on updating to latest -current SIOCGIFDATA On Wed, 3 Oct 2001 15:42:57 -0400, Kenneth Culver [EMAIL PROTECTED] said: I was wondering if anyone had thought of implementing the above ioctl. Right now from what I can tell, (from wmnet, and netstat) all stats for a network device are kvm_read out of the kernel. These applications Re: uucp user shell and home directory On Mon, 01 Oct 2001 11:51:32 -0600, Lyndon Nerenberg [EMAIL PROTECTED] said: And you should *never* allow remote site UUCP logins (those that run uucico) under the `uucp' login, for obvious security reasons. I remember, back in the mists of ancient time, it was common practice to provide Re: kldxref broken, maybe? On Thu, 20 Sep 2001 22:19:22 -0700, Peter Wemm [EMAIL PROTECTED] said: foreach $path (`sysctl -n kern.module_path | sed -e 's/;/ /`) if (-d $path) kldxref $path endif endfor module_path=$(sysctl -n kern.module_path) OIFS=$IFS; IFS=; set ${module_path} IFS=$OIFS for directory; do Seen this lock order reversal? lock order reversal 1st 0xd3a5c11c process lock @ ../../../vm/vm_glue.c:469 2nd 0xc0e3fe30 lockmgr interlock @ ../../../kern/kern_lock.c:239 This is on relatively old (~ three months) sources. The first lock is from swapout_procs(); I assume the second lock actually refers to the call to Re: kern.flp blown out again On Thu, 13 Sep 2001 15:16:43 -0400 (EDT), I wrote: -rwxr-xr-x 1 wollman sources 590239 Sep 13 15:13 lots-of-modules.ko.gz* Here's another one, with all of the modules except for those which cannot possibly be used for installation (e.g., sound, discard interface, bktr, etc.): Re: kern.flp blown out again On Thu, 13 Sep 2001 02:00:57 -0700, Jordan Hubbard [EMAIL PROTECTED] said: It's just easier to keep band-aiding it, as ugly a scenario as that might be. If we added a third disk with modules (This is based on somewhat dated sources, but I think that the idea is right.) HEADSUP!! KSE commit imminent. On Tue, 11 Sep 2001 12:57:40 -0700 (PDT), Julian Elischer [EMAIL PROTECTED] said: Peter, Matt and I, (and a bunch of testers) have been banging on the KSE kernel for two weeks now. The state of the patch is: Everything runs except nwfs and smbfs (my head hurts whe I read them) I'm glad to Re: proctitle progress reporting for dump(8) On Sat, 1 Sep 2001 21:55:09 +0200, Jeroen Ruigrok/Asmodai [EMAIL PROTECTED] said: You mean dump should get a signal handler for SIGINFO to print/display the current status of the application? Yes! Just like in fsck, and for the same reasons. -GAWollman To Unsubscribe: send mail to [EMAIL Re: proctitle progress reporting for dump(8) On Sat, 01 Sep 2001 22:48:37 +0200, Arne Dag Fidjestøl [EMAIL PROTECTED] said: You'd still need somewhere to put the status message; the dump process above has no controlling terminal. If it has no controlling terminal then it's not going to receive ctty signals like SIGINFO. -GAWollman Re: proctitle progress reporting for dump(8) On Sat, 01 Sep 2001 23:08:48 +0200, Arne Dag Fidjestøl [EMAIL PROTECTED] said: But I agree, SIGINFO is not a good solution here :) I'm not sure who you're agreeing with, since I did not say that. -GAWollman To Unsubscribe: send mail to [EMAIL PROTECTED] with unsubscribe freebsd-current in Re: proctitle progress reporting for dump(8) On Sun, 02 Sep 2001 00:39:22 +0200, Arne Dag Fidjestøl [EMAIL PROTECTED] said: Could you please clarify your position on this issue? Is setproctitle() the wrong way to do this, and if so, why? I don't expect setproctitle() to be useful to me one way or the other. SIGINFO, on the other Re: proctitle progress reporting for dump(8) On Sat, 1 Sep 2001 19:47:06 +0200, Jeroen Ruigrok/Asmodai [EMAIL PROTECTED] said: 79240 ?? S 0:06,85 dump: /dev/da0h(0): 92.44% done, finished in 0:43 (dump) SIGINFO! SIGINFO! SIGINFO! -GAWollman To Unsubscribe: send mail to [EMAIL PROTECTED] with unsubscribe freebsd-current in the Re: symlink(2) [Was: Re: tcsh.cat] [Attribution deleted for clarity; see referenced messages in the archives.] $ ln -s '' foo $ cp foo bar cp: foo is a directory (not copied) No, foo certainly _is_ a directory. It is precisely the same thing as .. No, the empty pathname has been invalid and not an alias HEADS UP: ACPI CHANGES AFFECTING MOST -CURRENT USERS On Wed, 29 Aug 2001 19:58:59 -0700, Mike Smith [EMAIL PROTECTED] said: - I pushed the power button, and my system shut down cleanly! Yes. ACPI brings some useful new features. 8) FSVO ``useful''. It's a real PITA to have to physically unplug the machine when the kernel is wedged rather RE: Headsup! KSE Nay-sayers speak up! On Mon, 27 Aug 2001 09:34:06 -0700 (PDT), John Baldwin [EMAIL PROTECTED] said: Just to get this out in the public: I for one think 5.x has enough changes in it and would like for KSE to be postponed to 6.0-current and 6.0-release. I agree. I'd like to see this stuff happen, but I think it's Re: Headsup! KSE Nay-sayers speak up! On Mon, 27 Aug 2001 15:34:14 -0500, Jim Bryant [EMAIL PROTECTED] said: FreeBSD is going to be left in the dust unless both the SMPng *AND* KSE projects are integrated into 5.0. I care about having a system that works well and does what I ask of it. What the Linux horde is doing is of little Re: Copyright Contradiction in libalias On Wed, 22 Aug 2001 20:04:46 +0400, Andrey A. Chernov [EMAIL PROTECTED] said: I mean common part of international copyright law. There is no such thing as ``international copyright law''. There is only national copyright law. Parties to the various international copyright conventions agree Re: Copyright Contradiction in libalias On Wed, 22 Aug 2001 10:35:11 +0400, Andrey A. Chernov [EMAIL PROTECTED] said: No, author part of copyright can't be deattached, unless fraud happens. Only if you live in a country whose legal system recognizes ``moral rights''. -GAWollman To Unsubscribe: send mail to [EMAIL PROTECTED] with Re: race-to-the-root, hello anyone out there? On Mon, 13 Aug 2001 14:57:54 -0700, David O'Brien [EMAIL PROTECTED] said: I should have mentioned this. /tmp is on /, which is UFS, mounted noatime and with softupdates enabled. Plenty of freespace:/dev/da0s1a 1.3G 843M 361M 70% / Same thing happens to me, but it's when sending mail RE: Random Lockups On Wed, 08 Aug 2001 07:53:57 -0700 (PDT), John Baldwin [EMAIL PROTECTED] said: Usually it involves an NMI switch. :( Can you cvs (or cvsup) by dates on the kernel sources to narrow down exactly what day (and possibly what commit) causes these lockups? I.e., does it work fine on August 4th, ntpd 4.1 On Thu, 2 Aug 2001 12:25:13 +0200, Ollivier Robert [EMAIL PROTECTED] said: The question I have is the following: authentication was done with md5 code builtin and I disabled DES support (not supported anymore). Now, with 4.1, it can be linked to openssl but it is still an optional component. RE: Lock order reversals that aren't problematic On Mon, 30 Jul 2001 09:28:03 -0700 (PDT), John Baldwin [EMAIL PROTECTED] said: However, the networking stack is being redone, By whom? I haven't seen anything about this posted to -net. -GAWollman To Unsubscribe: send mail to [EMAIL PROTECTED] with unsubscribe freebsd-current in the body Re: libedit replacement for libreadline On Mon, 16 Jul 2001 03:19:32 -0700, Kris Kennaway [EMAIL PROTECTED] said: Personally, I think it's worth it to get rid of a GNU dependency in the base system, as well as reducing the overall amount of functional code duplication. I don't, particularly since the two programs which use it are Use of M_WAITOK in if_addmulti(). On Mon, 16 Jul 2001 00:13:14 +0900 (JST), Hajimu UMEMOTO [EMAIL PROTECTED] said: Current if_addmulti() calls MALLOC() with M_WAITOK. However, if_addmulti() can be called from in[6]_addmulti() with splnet(). It may lead kernel panic. This is not a problem (or should not be). It is Re: cannot print to remote printer I wrote: The new POSIX draft, at least, sanctions the automatic reset of SIGCHLD to SIG_DFL upon exec(). Terry Lambert appears to have written: How does the NOHUP program continue to function in light of this reset demand? There is no ``demand'' involved. The behavior of the system when Re: cannot print to remote printer On 12 Jul 2001 22:40:12 +0300, Giorgos Keramidas [EMAIL PROTECTED] said: I might be wrong in many ways, but... is it then mandatory that you `reset' SIGCHLD to SIG_DFL ? Possibly. In the general case (as specified by standards), what happens to SIGCHLD if it was set to SIG_IGN before exec() Re: picking a DB (Re: nvi maintainer?) On Tue, 10 Jul 2001 17:24:37 +0200, Sheldon Hearn [EMAIL PROTECTED] said: The next step is for someone to see how hard it is to provide libc hooks for the DB 3.x functionality that nvi requires, without removing the existing DB 1.x functionality that other subsystems require. It's actually Re: Ok, try this patch. (was Re: symlink(2) [Was: Re: tcsh.cat]) On Mon, 18 Jun 2001 15:40:23 +1000 (EST), Bruce Evans [EMAIL PROTECTED] said: NetBSD committed essentially this patch 4 years ago (as part of rev.1.23). I like it, except it seems to be incompatible with POSIX.1-200x. I think I agree with your interpretation. Quoting from XBDd7, page 101, Re: Ok, try this patch. (was Re: symlink(2) [Was: Re: tcsh.cat]) On Mon, 18 Jun 2001 20:59:45 +0400, Andrey A. Chernov [EMAIL PROTECTED] said: Maybe it is just my bad English understanding, but it seems last two cases must be ./foo/ .// ./foo/bar .//bar No, because the ``resulting filename'' begins with a slash. -GAWollman Re: Ok, try this patch. (was Re: symlink(2) [Was: Re: tcsh.cat]) On Mon, 18 Jun 2001 21:35:17 +0400, Andrey A. Chernov [EMAIL PROTECTED] said: ./foo/ .// ./foo/bar .//bar No, because the ``resulting filename'' begins with a slash. It seems resulting filename (pathname?) begins with ./ (not a slash). No, it doesn't. The Re: Ok, try this patch. (was Re: symlink(2) [Was: Re: tcsh.cat]) On Mon, 18 Jun 2001 10:46:27 -0700 (PDT), Matt Dillon [EMAIL PROTECTED] said: In anycase, I can't imagine that POSIX actually intended null symlinks to act in any particular way The standard specifies precisely how pathname resolution is supposed to behave. FreeBSD should conform to the Re: convert libgmp to a port? On 18 Jun 2001 03:32:10 +0200, Assar Westerlund [EMAIL PROTECTED] said: But telnet in historic BSD didn't have sra or any other authentication mechanism that uses libmp. Or are you saying that we cannot change `historical BSD software'? No, I'm saying that the author of the SRA patches did Re: convert libgmp to a port? On Sat, 16 Jun 2001 23:38:45 -0700, Peter Wemm [EMAIL PROTECTED] said: telnet* should never have used libmp in the first place, Yes, it should have, since telnet is historic BSD software and libmp is the historic BSD arbitrary-precision-math library. That is also (one reason) why we should The PR db is not for -current problems, right? On Sat, 16 Jun 2001 14:51:38 +0200, Jens Schweikhardt [EMAIL PROTECTED] said: against problems on -current: my understanding is that -current users know what they are doing, especially that they're living on the bleeding edge and that they must be subscribed to current@ where they shall Re: tcsh.cat On Fri, 15 Jun 2001 23:33:07 +1000 (EST), Bruce Evans [EMAIL PROTECTED] said: Here's an example of a complication: what is the semantics of /tmp/foo/bar where foo is a symlink to ? I think the pathname resolves to /tmp//bar and then to /tmp/bar, but this is surprising since foo doesn't Re: tcsh.cat On Fri, 15 Jun 2001 22:57:56 +1000 (EST), Bruce Evans [EMAIL PROTECTED] said: Maybe, but this doesn't seem to be permitted by POSIX.1-200x: The response I got to a similar question about symbolic links suggests that the definition of ``path name resolution'' is supposed to be the final word on Re: tcsh.cat On Sat, 16 Jun 2001 01:24:19 +0400, Andrey A. Chernov [EMAIL PROTECTED] said: POSIX explicitly disallow filenames everywhere. I think it should be so for symlinks too. But it doesn't, and that's the end of it for at least another five years. -GAWollman To Unsubscribe: send mail to [EMAIL Re: fts_open() (was: Re: Patch to restore WARNS feature) On Wed, 13 Jun 2001 23:28:29 +1000 (EST), Bruce Evans [EMAIL PROTECTED] said: 3. Provide an alternative to qsort() that takes an comparison function that takes an additional function pointer arg (use this arg to avoid the global in (1)). Actually, doing this would solve a number of Re: fts_open() (was: Re: Patch to restore WARNS feature) On Wed, 13 Jun 2001 18:21:57 +0300, Ruslan Ermilov [EMAIL PROTECTED] said: How should we call this function? (I'll implement this tomorrow.) I was would call it `qsort_with_arg' or something similar. There is a namespace issue here; stdlib.h reserves very little namespace for the Re: fts_open() (was: Re: Patch to restore WARNS feature) On Wed, 13 Jun 2001 11:55:30 -0400, Alfred Perlstein [EMAIL PROTECTED] said: Why not do something like the rpc code does? Check if threaded, if so cons up a thread specific key otherwise use a global. The Standard does not appear to say whether qsort() is reentrant, but I believe that it Patch to restore WARNS feature On Tue, 12 Jun 2001 15:53:18 +0300, Ruslan Ermilov [EMAIL PROTECTED] said: + qsort((void *)sp-fts_array, nitems, sizeof(FTSENT *), + (int (*) __P((const void *, const void *)))sp-fts_compar); This is wrong. The declaration of the comparison function should be fixed, rather than
https://www.mail-archive.com/search?l=freebsd-current%40freebsd.org&amp;q=from%3A%22Garrett+Wollman%22&amp;o=newest&amp;start=100
CC-MAIN-2019-22
refinedweb
5,111
70.94
Is there a more effective, simplified means of converting a "formatted" Java UUID - without dashes - to a Java compatible format - with dashes - in PHP, and ultimately: How would I do it? I have code that already performs this action, but it seems unprofessional and I feel that it could probably be done more effectively. [... PHP code ...] $uuido = $json['id']; $uuidx = array(); $uuidx[0] = substr( $uuido, 0, 8 ); $uuidx[1] = substr( $uuido, 8, 4 ); $uuidx[2] = substr( $uuido, 12, 4); $uuidx[3] = substr( $uuido, 16, 4); $uuidx[4] = substr( $uuido, 20, 12); $uuid = implode( "-", $uuidx ); [... PHP code ...] Input: f9e113324bd449809b98b0925eac3141 Output: f9e11332-4bd4-4980-9b98-b0925eac3141 $json['id'] file_get_contents( $url ) json_decode( $file ) Your question doesn't make much sense but assuming you want to read the UUID in the correct format in Java you can do something like this: import java.util.UUID; class A { public static void main(String[] args){ String input = "f9e113324bd449809b98b0925eac3141"; String uuid_parse = input.replaceAll( "(\\w{8})(\\w{4})(\\w{4})(\\w{4})(\\w{12})", "$1-$2-$3-$4-$5"); UUID uuid = UUID.fromString(uuid_parse); System.out.println(uuid); } } Borrowed from maerics, see here: Or in PHP you can do something like: <?php $UUID = "f9e113324bd449809b98b0925eac3141"; $UUID = substr($UUID, 0, 8) . '-' . substr($UUID, 8, 4) . '-' . substr($UUID, 12, 4) . '-' . substr($UUID, 16, 4) . '-' . substr($UUID, 20); echo $UUID; ?> Borrowed from fico7489: And then you can send that to Java where you can create a UUID object using fromtString().
https://codedump.io/share/2dMMcTLTQEww/1/format-quotrawquot-string-to-java-uuid-in-php
CC-MAIN-2017-04
refinedweb
238
64.41
gcc-6 has a robust feature that can compile and dispatch different functions according to hardware supported. (One example:) However, when compile with Eigen gcc-6 -msse2 -I$EIGEN_DIR /* codes start #include <Eigen/Dense> __attribute__((target_clones("avx512f","avx2","default"))) int func1(){ ...Eigen releated... } */ codes end From the assembly view, Eigen compiled with only SSE2 support. The func1 do have avx512, avx2, and sse variants, but the contents are almost the same. I know Eigen decides the feature supported in the compile time (template), determined by the compile flags pass in. Is it possible to hot switch while target_clones gives different compile flags in his variant function? Thank you. I don't see how this would even theoretically solvable. Maybe it is possible to clutter all Eigen functions with __attribute__((target_clones( ... ))) and hope that inlining makes this magically work. The alternative is to compile multiple versions of the function with different flags and link them together. -- GitLab Migration Automatic Message -- This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance:.
https://eigen.tuxfamily.org/bz/show_bug.cgi?id=1542
CC-MAIN-2020-16
refinedweb
194
57.27
Database not getting updated799603 Feb 8, 2011 8:35 AM I ran the above program in tomcat,i didn't get any error but unfortunately database is not getting updated which is the sole purpose of this applicationI ran the above program in tomcat,i didn't get any error but unfortunately database is not getting updated which is the sole purpose of this application package com.sandy.servlets; import javax.servlet.*; import java.sql.*; import javax.servlet.http.*; import java.io.*; public class FirstServlet extends HttpServlet { Connection con; PreparedStatement ps; public void init(ServletConfig config) throws ServletException { String driver=config.getInitParameter("driver"); String cs = config.getInitParameter("url"); String user=config.getInitParameter("user"); String password=config.getInitParameter("pwd"); try { Class.forName(driver); con=DriverManager.getConnection(cs,user,password); ps=con.prepareStatement("INSERT INTO USER2 VALUES(?,?)"); } catch (ClassNotFoundException e) { e.printStackTrace(); } catch(SQLException e) { e.printStackTrace(); } }//init() public void destroy() { try { if(ps!=null) ps.close(); if(con!=null) con.close(); } catch (SQLException e) { e.printStackTrace(); } } public void doPost(HttpServletRequest request,HttpServletResponse response) throws ServletException,IOException { String a = request.getParameter("FirstName"); String b = request.getParameter("LastName"); try { if(a!=null) { if(b!=null) ps.setString(1,a); ps.setString(2,b); ps.executeUpdate(); } } catch (SQLException e) { e.printStackTrace(); } response.sendRedirect("user.html"); } } Edited by: sandy on Feb 8, 2011 12:33 AM This content has been marked as final. Show 7 replies 1. Re: Database not getting updatedOpal Feb 8, 2011 7:26 AM (in response to 799603)First of all check out the tomcat logs, maybe there's something interesting? Secondly, debug 'a', and 'b' variables as well as params from request. Are You sure that they have not null values? 2. Re: Database not getting updatedPhHein Feb 8, 2011 8:18 AM (in response to 799603)sandy, code tag, please: 3. Re: Database not getting updated799603 Feb 8, 2011 8:38 AM (in response to Opal)if i remove 'if' block,i am getting null pointer exception in browser....For now,i am not getting any error in tomcat console.only problem is db is not getting updated Edited by: sandy on Feb 8, 2011 12:38 AM 4. Re: Database not getting updatedgimbal2 Feb 8, 2011 11:56 AM (in response to 799603)do you have some kind of auto commit? I don't see you commit the transaction anywhere. By the way: this is terrible design. Perhaps you can open a connection on servlet init (still not a good design, but you seem to be in the learning phase so lets not focus on that now), but you should create the PreparedStatement, use it, commit it and then close it inside your doPost() method. 5. Re: Database not getting updated799603 Feb 8, 2011 1:32 PM (in response to 799603)solved it myself,i did a mistake in web.xml file..Anyhow,thanks all the guys for your valuable suggestions 6. Re: Database not getting updated841760 Feb 22, 2011 3:36 PM (in response to 799603)use con.close inside finally block at the end of ur program 7. Re: Database not getting updatedEJP Feb 22, 2011 10:40 PM (in response to 841760)A futile suggestion. This is not a program, it is a servlet.
https://community.oracle.com/thread/2173704?tstart=120
CC-MAIN-2016-36
refinedweb
539
50.12
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project. On Thu, 1 Aug 2019, Uros Bizjak wrote: > On Wed, Jul 31, 2019 at 1:21 PM Richard Biener <rguenther@suse.de> wrote: > > > > On Sat, 27 Jul 2019, Uros Bizjak wrote: > > > > > On Sat, Jul 27, 2019 at 12:07 PM Uros Bizjak <ubizjak@gmail.com> wrote: > > > > > > > > How would one write smaxsi3 as a splitter to be split after > > > > > reload in the case LRA assigned the GPR alternative? Is it > > > > > even worth doing? Even the SSE reg alternative can be split > > > > > to remove the not needed CC clobber. > > > > > > > > > > Finally I'm unsure about the add where I needed to place > > > > > the SSE alternative before the 2nd op memory one since it > > > > > otherwise gets the same cost and wins. > > > > > > > > > > So - how to go forward with this? > > > > > > > > Sorry to come a bit late to the discussion. > > > > > > > > We are aware of CMOV issue for quite some time, but the issue is not > > > > understood yet in detail (I was hoping for Intel people to look at > > > > this). However, you demonstrated that using PMAX and PMIN instead of > > > > scalar CMOV can bring us big gains, and this thread now deals on how > > > > to best implement PMAX/PMIN for scalar code. > > > > > > > > I think that the way to go forward is with STV infrastructure. > > > > Currently, the implementation only deals with DImode on SSE2 32bit > > > > targets, but I see no issues on using STV pass also for SImode (on > > > > 32bit and 64bit targets). There are actually two STV passes, the first > > > > one (currently run on 64bit targets) is run before cse2, and the > > > > second (which currently runs on 32bit SSE2 only) is run after combine > > > > and before split1 pass. The second pass is interesting to us. > > > > > > > > The base idea of the second STV pass (for 32bit targets!) is that we > > > > introduce a DImode _doubleword instructons that otherwise do not exist > > > > with integer registers. Now, the passes up to and including combine > > > > pass can use these instructions to simplify and optimize the insn > > > > flow. Later, based on cost analysis, STV pass either converts the > > > > _doubleword instructions to a real vector ones (e.g. V2DImode > > > > patterns) or leaves them intact, and a follow-up split pass splits > > > > them into scalar SImode instruction pairs. STV pass also takes care to > > > > move and preload values from their scalar form to a vector > > > > representation (using SUBREGs). Please note that all this happens on > > > > pseudos, and register allocator will later simply use scalar (integer) > > > > registers in scalar patterns and vector registers with vector insn > > > > patterns. > > > > > > > > Your approach to amend existing scalar SImode patterns with vector > > > > registers will introduce no end of problems. Register allocator will > > > > do funny things during register pressure, where values will take a > > > > trip to a vector register before being stored to memory (and vice > > > > versa, you already found some of them). Current RA simply can't > > > > distinguish clearly between two register sets. > > > > > > > > So, my advice would be to use STV pass also for SImode values, on > > > > 64bit and 32bit targets. On both targets, we will be able to use > > > > instructions that operate on vector register set, and for 32bit > > > > targets (and to some extent on 64bit targets), we would perhaps be > > > > able to relax register pressure in a kind of controlled way. > > > > > > > > So, to demonstrate the benefits of existing STV pass, it should be > > > > relatively easy to introduce 64bit max/min pattern on 32bit target to > > > > handle 64bit values. For 32bit values, the pass should be re-run to > > > > convert SImode scalar operations to vector operations in a controlled > > > > way, based on various cost functions. > > > > I've looked at STV before trying to use RA to solve the issue but > > quickly stepped away because of its structure which seems to be > > tied to particular modes, duplicating things for TImode and DImode > > so it looked like I have to write up everything again for SImode... > > ATM, DImode is used exclusively for x86_32 while TImode is used > exclusively for x86_64. Also, TImode is used for different purpose > before combine, while DImode is used after combine. I don't remember > the details, but IIRC it made sense for the intended purpose. > > > > It really should be possible to run the pass once, handling a set > > of modes rather than re-running it for the SImode case I am after. > > See also a recent PR about STV slowness and tendency to hog memory > > because it seems to enable every DF problem that is around... > > Huh, I was not aware of implementation details... > > > > Please find attached patch to see STV in action. The compilation will > > > crash due to non-existing V2DImode SMAX insn, but in the _.268r.stv2 > > > dump, you will be able to see chain building, cost calculation and > > > conversion insertion. > > > > So you unconditionally add a smaxdi3 pattern - indeed this looks > > necessary even when going the STV route. The actual regression > > for the testcase could also be solved by turing the smaxsi3 > > back into a compare and jump rather than a conditional move sequence. > > So I wonder how you'd do that given that there's pass_if_after_reload > > after pass_split_after_reload and I'm not sure we can split > > as late as pass_split_before_sched2 (there's also a split _after_ > > sched2 on x86 it seems). > > > > So how would you go implement {s,u}{min,max}{si,di}3 for the > > case STV doesn't end up doing any transform? > > If STV doesn't transform the insn, then a pre-reload splitter splits > the insn back to compare+cmove. OK, that would work. But there's no way to force a jumpy sequence then which we know is faster than compare+cmove because later RTL if-conversion passes happily re-discover the smax (or conditional move) sequence. > However, considering the SImode move > from/to int/xmm register is relatively cheap, the cost function should > be tuned so that STV always converts smaxsi3 pattern. Note that on both Zen and even more so bdverN the int/xmm transition makes it no longer profitable but a _lot_ slower than the cmp/cmov sequence... (for the loop in hmmer which is the only one I see any effect of any of my patches). So identifying chains that start/end in memory is important for cost reasons. So I think the splitting has to happen after the last if-conversion pass (and thus we may need to allocate a scratch register for this purpose?) > (As said before, > the fix of the slowdown with consecutive cmov insns is a side effect > of the transformation to smax insn that helps in this particular case, > I think that this issue should be fixed in a general way, there are > already a couple of PRs reported). > > > You could save me some guesswork here if you can come up with > > a reasonably complete final set of patterns (ok, I only care > > about smaxsi3) so I can have a look at the STV approach again > > (you may remember I simply "split" at assembler emission time). > > I think that the cost function should always enable smaxsi3 > generation. To further optimize STV chain (to avoid unnecessary > xmm<->int transitions) we could add all integer logic, arithmetic and > constant shifts to the candidates (the ones that DImode STV converts). > > Uros. > > > Thanks, > > Richard. > > > > > The testcase: > > > > > > --cut here-- > > > long long test (long long a, long long b) > > > { > > > return (a > b) ? a : b; > > > } > > > --cut here-- > > > > > > gcc -O2 -m32 -msse2 (-mstv): > > > > > > _.268r.stv2 dump: > > > > > > Searching for mode conversion candidates... > > > insn 2 is marked as a candidate > > > insn 3 is marked as a candidate > > > insn 7 is marked as a candidate > > > Created a new instruction chain #1 > > > Building chain #1... > > > Adding insn 2 to chain #1 > > > Adding insn 7 into chain's #1 queue > > > Adding insn 7 to chain #1 > > > r85 use in insn 12 isn't convertible > > > Mark r85 def in insn 7 as requiring both modes in chain #1 > > > Adding insn 3 into chain's #1 queue > > > Adding insn 3 to chain #1 > > > Collected chain #1... > > > insns: 2, 3, 7 > > > defs to convert: r85 > > > Computing gain for chain #1... > > > Instruction conversion gain: 24 > > > Registers conversion cost: 6 > > > Total gain: 18 > > > Converting chain #1... > > > > > > ... > > > > > > (insn 2 5 3 2 (set (reg/v:DI 83 [ a ]) > > > (mem/c:DI (reg/f:SI 16 argp) [1 a+0 S8 A32])) "max.c":2:1 66 > > > {*movdi_internal} > > > (nil)) > > > (insn 3 2 4 2 (set (reg/v:DI 84 [ b ]) > > > (mem/c:DI (plus:SI (reg/f:SI 16 argp) > > > (const_int 8 [0x8])) [1 b+0 S8 A32])) "max.c":2:1 66 > > > {*movdi_internal} > > > (nil)) > > > (note 4 3 7 2 NOTE_INSN_FUNCTION_BEG) > > > (insn 7 4 15 2 (set (subreg:V2DI (reg:DI 85) 0) > > > (smax:V2DI (subreg:V2DI (reg/v:DI 84 [ b ]) 0) > > > (subreg:V2DI (reg/v:DI 83 [ a ]) 0))) "max.c":3:22 -1 > > > (expr_list:REG_DEAD (reg/v:DI 84 [ b ]) > > > (expr_list:REG_DEAD (reg/v:DI 83 [ a ]) > > > (expr_list:REG_UNUSED (reg:CC 17 flags) > > > (nil))))) > > > (insn 15 7 16 2 (set (reg:V2DI 87) > > > (subreg:V2DI (reg:DI 85) 0)) "max.c":3:22 -1 > > > (nil)) > > > (insn 16 15 17 2 (set (subreg:SI (reg:DI 86) 0) > > > (subreg:SI (reg:V2DI 87) 0)) "max.c":3:22 -1 > > > (nil)) > > > (insn 17 16 18 2 (set (reg:V2DI 87) > > > (lshiftrt:V2DI (reg:V2DI 87) > > > (const_int 32 [0x20]))) "max.c":3:22 -1 > > > (nil)) > > > (insn 18 17 12 2 (set (subreg:SI (reg:DI 86) 4) > > > (subreg:SI (reg:V2DI 87) 0)) "max.c":3:22 -1 > > > (nil)) > > > (insn 12 18 13 2 (set (reg/i:DI 0 ax) > > > (reg:DI 86)) "max.c":4:1 66 {*movdi_internal} > > > (expr_list:REG_DEAD (reg:DI 86) > > > (nil))) > > > (insn 13 12 0 2 (use (reg/i:DI 0 ax)) "max.c":4:1 -1 > > > (nil)) > > > > > > Uros. > > > > > > > -- > >)
https://gcc.gnu.org/legacy-ml/gcc-patches/2019-08/msg00011.html
CC-MAIN-2020-29
refinedweb
1,622
67.69
I created a new NB RCP module project. It had the default moduleType. I set it to eager. Rebuilding did not apply the change. I had to clean and build to make that happen. I have noticed this for other things too such as the license. This seems to affect both module.xml and the new attributes of the 3.8.x nbm-maven-plugin. I am using version 3.8.1. I'm pretty sure this also affects new top components or anything which will register anything, so not sure exactly why things do not get picked up, but when you have a fairly large module having to clean because of certain changes can be pretty heavy during the course of a days work. will do more in depth evaluation, but most likely an issue with jar maven plugin, all the parameters you mention end up in manifest, the rest of the jar stays the same it's actually also happening with ant based projects. the cause is the following code in CreateModuleXML.java ant task that skips the regeneration when the file exists. File xml = new File(xmldir, codenamebase.replace('.', '-') + ".xml"); if (xml.exists()) { // XXX should check that the old file actually matches what we would have written log("Will not overwrite " + xml + "; skipping...", Project.MSG_VERBOSE); return; } reassigning to apisupport harness Is it so problematic to do clean build? There are some changes that require that and I see no problem invoking clean build on such circumstances. If clean build helps, then I don't see a reason to spend time on fixing issues of this kind (we can't fix them all anyway). > I'm pretty sure this also affects > new top components or anything which will register anything If a creation of new top component required clean build, then it would be a different story. That happens frequently. But switching eager on/off is very infrequent change. @ServiceProvider registration requires a clean as well. The build time difference is around a minute btw as I have a big module. It could definitely be more modularized and that would make this better, but I'm on a team, and that as a requirement is low on the pole. It really is a pain in the butt to do a clean. Over the course of the day if one uses lookup to really decouple things, then various providers are added often, and the pain is a little obscured. I'm refactoring a good bit of code, and this comes up a lot. Once I get the providers in then I can better break this up, but for now this is eating a lot of time. Imagine one is working on various files and adds a single provider reg and then forgets they haven't cleaned that go around...they run it...get an unexpected state...go hmm...ah...so they not only rebuild, but forget something so minute, ran, and had to wait not only on the build but the application to start and close as well. I already use the command line to build only the modules I'm working on plus use nbm:cluster-app because of the difference in my day and time it makes. I reopened this because of this. If I need to break things down into separate problems I suggest we change the summary of this to reflect an overarching issue and do the initial gathering here and then file separate issues for the specific details after that. That will at least keep the conversation in one place should it be needed for now. Even for things such as eager however it is still a pain. One forgets if it doesn't affect them over the course of a month or two. Then when you do you try a couple things before you go...oh yeah...needed to clean and build. For new people it is worse because they don't know they need to clean because of such a change. Heck, its not even new people. I have to help team members with such things often, and is one of those things where you just have to be on your toes. They spin their wheels and finally call me and I can set them straight, but that is a big loss in man hours because the tool doesn't inform you of that, and I don't think that is clearly documented in pom.xml comments generated by the IDE, nor in the project properties dialog any where, or any where else I have seen. It impacts the quality of the IDE from the users experience in my opinion. This seems even worse when one uses the message annotations. I find if I do not do a clean build on a module using @Messages, I will get all kinds of errors about the resource not being able to be found. That means every time I need to rerun the application, then I can't do a simple mvn install of the module, but am forced to do mvn clean install. The bit about @Messages happens on Windows and Linux both. OK, if you want to move this issue forward, I need you to set an Ant based project up, attach it and I am ready to modify a single file and observe that something is broken. If there are multiple things that can go wrong, we can go piece by piece or you can create separate bugs for each case. Incomplete until I will get the test case. Jarda, I'm reopening. I'm going to attach a couple projects. One is Ant based, and the other is Maven. This maybe needs assigned back to Milos. I don't know. These projects I'm going to attach do not get into the moduleType, but instead deal with @ServiceProvider. In the Ant project, if you add a new "FakeProvider" class, and you will see, I suggest you add one called JardaFakeProvider, and then build and run the app, then you will see that show fine in the provided top component. In the Maven project if you add a new FakeProvider to the netbeans-221781-maven-main project, lets call it JardaFakeProvider or MilosFakeProvider and do something similar to what I have done there, such as return "Jarda" or "Milos" from the getName method, right click and build netbeans-221781-maven-main, then right click and build netbeans-221781-maven-parent, then right click and run netbeans-221781-maven-app, you will not see the new FakeProvider(s) show up in the provided top component. So, it seems other than the moduleType issue, everything else is specific to Maven based NB RCP projects. Whether a module is eager or not etc...sure perhaps one can live with having to clean to change that, and of course it is a pain to figure out since nothing really tells you...hey if you change this clean and build, but these other things make it nearly unbearable for a good sized app where things are being added and refactored all the time. The issues with @Messages, @ServiceProvider, as well as new top components all appear to be mvn RCP related and require "mvn clean install" before they will show up. You can try to add a new top component to netbeans-221781-maven-main and can see what I'm talking about. I'm using NetBeans 7.2.1 and JDK 7 along with Maven 3.0.4. I will try to put together use cases which demonstrate how the @Messages annotations causes issues in mvn based NB RCP projects as well. Created attachment 130341 [details] Ant based project showing that new classes annotated with @ServiceProvider work with just a build versus a clean and build Created attachment 130342 [details] Maven project showing that new classes added with @ServiceProvider require a clean install or a clean and build before they show up or have the annotations processed at build time I did a clean before I attached the projects, but I accidentally left the private folder in the nbproject folder of the Ant project...just a heads up if it has any impact at all. Thanks. I went ahead and reassigned this to Maven. moduleType perhaps can be lived with not updating, but the rest of the issues here are hard to live with considering full rebuilds can be time consuming overall especially for service provider and top component registrations. I still need to figure out better use cases for the @Messages problems. I did notice though that uses of @Messages with a Maven based project results in quite a few issues if one does an "mvn install" versus an "mvn clean install", and this seems to not affect the Ant based projects. do you have compile on save on or off? that's the only possible place where netbeans IDE support could be involved, otherwise it's either maven itself (via some of the plugins) and/or the annotation processor codebase. I've tried with CoS off. 1. on the main project level, adding a @provider annotation gets correctly processed (META-INF/services/FakeProvider file gets updated. However removing the annotation does not update it. That seems correct to me. The update appears to get transfered correctly from target/classes to jar to nbm (that's something you can verify on your side as well) 2. on the app project level, we compare the dependency's nbm file timestamp - File.lastModified() -against the .lastModified timestamp file in the cluster folder. If the nbm file is newer it's content gets copied into the application's clusters. you can verify that by enabling debug message in maven build and looking for something along these lines: Copying org.netbeans.modules.tests.issue-221781-maven:netbeans-221781-maven-main:nbm-file:1.0-SNAPSHOT to cluster netbeans221781maven The .lastModified files are touched after all the nbm files are updated. This also seemed to work for me fine and the nbm file appeared to get properly copied over. 3. the rest is runtime stuff, maybe some caches are in the way. I didn't try that one. You can check this by deleting the user directory in application/target/ after building the app and before running it. I should have some time in the next couple days to try some things out Milos. Thanks. Milos, just curious, since you mentioned caches and not trying that one, had you built once before adding a new provider? I noticed you said removing the annotation didn't remove it which would be a rebuild. If you add a provider, mvn install, then add another provider, mvn install, and then mvn nbm:cluster-app, and then run what do you get? Which maven version are you using? (In reply to comment #15) > Milos, just curious, since you mentioned caches and not trying that one, had > you built once before adding a new provider? yes. mvn install with one provider. check output directory. Add another provider, mvn install, check output directory again. > I noticed you said removing the > annotation didn't remove it which would be a rebuild. Yes, only mvn clean install removes the generated content. > If you add a provider, > mvn install, then add another provider, mvn install, and then mvn > nbm:cluster-app, and then run what do you get? Which maven version are you > using? 3.0.4 I guess. important is also my note about CoS, always make sure that the maven build output contains note about compiling some files, if none are compiled, maybe CoS compiled it for you without processing the annotation.. (just guessing here) OK, so I had done a clean. I came in. I did a clean install on the main parent POM. I added a provider. mvn install on the module, mvn install on application, and then ran. It showed up. Next, I added another provider, saved, mvn install, checked under META-INF/services the file org.netbeans.modules.tests.issue221781maven.FakeProvider and the new providers class name is not there. For the module, I checked compile on save, and it was set to "for application execution only". I have not disabled it, and will try the entire process over. "I have not disabled it"...should have read..."I have NOW disabled it" ... and will try over. I didn't start this time with cleaning, but I am still not getting any new providers placed into the META-INF/services file. I am going to try by cleaning with compile on save off, and then will see what happens. I will try this exact same project on a completely different machine later tonight. That machine will have Windows 7 and JDK 6 I think. This setup is Ubuntu 12.04 and JDK 7. I'm using mvn 3.0.4. OK, so it seems it is indeed related to compile on save. The thing is...that is the default apparently. First, I did have to do a clean to get things to start working correctly after I turned off compile on save. I only had to turn it off for the module; at least for the changes to appear in the META-INF/services file. Next, I went back and turned compile on save back on. I did a clean, I added a class, build, shows up in the services file. I added another, build, and it did not show up in the services file. Too, I did check the output window, and it had a message there was nothing to compile. It seems the annotation processing of the compile on save feature is not working correctly. I have not tried to see if the other issues are related to this or not, but I will. Specifically one of them that gives me the most trouble other than service provider is the messages annotation. If I don't perform a clean and build I will get all kinds of sporadic errors at runtime. Some value won't be in the Bundle etc. Milos, can you confirm on your side whether or not compile on save has such an effect for you? I feel like for Maven projects that should not be the default if it causes such problems. I have not yet checked to see what happened in the Ant based project with regard to compile on save, but will check it now; at least for service provider. OK. It has been a while since I have worked with Ant based projects in the IDE...at least RCP ones, and there is not option under the project properties to turn compile on save off or on. I suppose that would explain why it didn't have the same problem. OK, so I tested my theory on needing to do a clean and build after turning off compile on save to have the effect of actually not compiling on save or compiling on save, and that seems to be the case indeed. I changed the property in my project properties; in this case I went from NOT compiling on save to compiling on save. I changed a source file. I saved. I then did a build. I see "compiling 1 file"... I made more changes, saved, same thing. Next, I did a clean and build. I then made more changes, and I see there is "nothing to compile". So, that seems odd too. There seem to be multiple issues listed here in this issue, and apparently they are all related to compile on save. I just tested messages. I added a few providers I had already created. I turned compile on save off, and did a clean and build (mvn clean install). Next, I changed a couple to use messages like: package org.netbeans.modules.tests.issue221781maven; import org.openide.util.NbBundle.Messages; import org.openide.util.lookup.ServiceProvider; /** * * @author wade */ @ServiceProvider(service=FakeProvider.class) @Messages({"BooProvider.name=Boo"}) public class BooProvider implements FakeProvider { @Override public String getName() { return Bundle.BooProvider_name(); } } I did a build as I changed each one, and I ran the application. Everything ran fine. I turned compile on save BACK ON, and then I made some changes, saved, built, etc. I then changed the above example to: package org.netbeans.modules.tests.issue221781maven; import org.openide.util.NbBundle.Messages; import org.openide.util.lookup.ServiceProvider; /** * * @author wade */ @ServiceProvider(service=FakeProvider.class) @Messages({"BooProvider.name_value=Boo"}) public class BooProvider implements FakeProvider { @Override public String getName() { return Bundle.BooProvider_name_value(); } } Then saved and built. At runtime it produced the following trace: java.util.MissingResourceException: Can't find resource for bundle org.openide.util.NbBundle$PBundle, key CTL_FakeProviderDisplayAction at java.util.ResourceBundle.getObject(ResourceBundle.java:393) [catch] at org.netbeans.core.startup.layers.BinaryFS$AttrImpl.getValue(BinaryFS.java:719) ... Which is a resource from the top component FakeProviderDisplayTopComponent which is in the project: @Messages({ "CTL_FakeProviderDisplayAction=FakeProviderDisplay", "CTL_FakeProviderDisplayTopComponent=FakeProviderDisplay Window", "HINT_FakeProviderDisplayTopComponent=This is a FakeProviderDisplay window" }) public final class FakeProviderDisplayTopComponent extends TopComponent So, it seems with compile on save turned on, the @Messages annotation processing gets messed up, and I assume the Bundle.properties file is being stepped on. Next, I went back, turned compile on save OFF again, and sure enough I have not been able to reproduce the problem again. I have been changing values in @Messages annotations for a while where, and I build the module, and then run the application after each change, and it all seems to be working fine. So, definitely, there are issues with compile on save and annotation processing. I went ahead and changed the summary to reflect the biggest problem and nature of the issue. compile on save is off by default in 7.3, with exception of web related packagings that were explicitly requested by the j2ee team. *** Bug 225285 has been marked as a duplicate of this bug. *** jlahoda: is there a way to enable Compile on save infrastructure to include the generated non-class files in the target/classes folder of the project? I don't see any other way to fix the issue. Maybe apart from warning users someplace in ui that after changing annotations of any kind, clean build is required. How are the classes themselves included? Is external logic such as a compiler putting them there for you? What about file system listeners if so? What would not work with that approach? If not, at least for NB annotations some kind of an annotation context could be used that could detail created files that need to be copied over. Wouldn't help necessarily with 3rd party annotations though, so not the best solution. I mean it could be an open API for those using NB, but still doesn't solve whole thing. It seems like files just need to be monitored some how. As far as putting them in target/classes goes isn't there an expectation that before releasing one should clean and build with compile on save turned on? What happens on compile on save when new classes are generated by annotations? How are those being included? Are they causing the same type issue, or is there special logic for them already? I am trying to understand the difference between the way they are treated versus say a generated .properties file. I might have found a "solution". running a non-clean build with -DlastModGranularityMs=X where X is a high enough NEGATIVE number will force a re-compile of the java files matching the range, typically the compile on save ones (or all) The question is how to determine the best value for X. (In reply to comment #29) > I might have found a "solution". well, the solution only applies to running Build project instead of Clean Build project. Unless we find a way to determine when to suppress skipping lifecycle phases in Run/Debug/Profile. So far I have no idea how to figure that an annotation processor processes a non-java file. >. Or fix it right - I don't care. > So far I have no idea how to figure that an annotation processor processes a non-java file. Ugly thought: Find a way to hook into javac so that you can be notified when annotation processors run at all, and a way to hook into the filesystems API to detect what files are read. Hmm, I wonder if you could do the first hook with an annotation processor which you inject into the classpath which consumes all annotations... then...do you need to know what was read or what was written? (In reply. Please remember that the project where the annotation change occurs is different from the project being run. There might not even be a way to execute a build across both of the projects. > > Or fix it right - I don't care. The more likely alternate is that I'm going to disable CoS on nbm+application projects again. The bug is still existing ;-) I get time by time a java.util.MissingResourceException: Can't find resource for bundle org.openide.util.NbBundle$PBundle, key Plugins at java.util.ResourceBundle.getObject(ResourceBundle.java:450) at java.util.ResourceBundle.getString(ResourceBundle.java:407) at org.openide.util.NbBundle.getMessage(NbBundle.java:642) COS is a feature by the IDE and I can choose it for maven projects. If I disable COS, I get warnings on running test cases. IMHO it should work or completly disabled for maven projects with @Messages. (In reply to _ tboudreau. I second tboudreau's comment, and wadechandler's frustrations as well, especially in connection with @Message annotations. Figuring out the exact situations when a 3-minute "Clean & Build" was necessary was a major stumbling block for me when I started learning the platform. I found it especially deceptive that a _clean_ build of the entire project should be necessary. In my platform app this has the side effect of resetting my development user directory, meaning I need to reconfigure my app the next time I run it etc. The problem is even bigger for beginners because all the tutorials tend to go through steps that involve service annotations, creating forms in the form designer (which causes internationalization bundle changes), etc. In the long term of a project such changes might be rare, but when you're first setting up and trying to learn what the different settings do, it happens all the time. (In reply to ebakke from comment #34) > I found it > especially deceptive that a _clean_ build of the entire project should be > necessary. In my platform app this has the side effect of resetting my > development user directory, meaning I need to reconfigure my app the next > time I run it etc. Yes, that's the main reason I don't invoke Clean & build so often. There should be at least an option to clean the project without deleting testuserdir. Another reason is that it takes quite a few minutes to rebuild the project consisting of almost 100 modules.
https://netbeans.org/bugzilla/show_bug.cgi?id=221781
CC-MAIN-2018-51
refinedweb
3,841
73.27
As you may know the MD5CryptoService, which the .NET Framework uses, doesn't create an md5 sum's string. So you only have the pure byte[] which isn't that useful if you want to use it in a database or on your Web Server to compare the sum with your file's sum! So, if many people have no idea how to get their sum into the right way, here's my howto to create the same sums like "md5sum" does: byte[] This is the second version with the Hex-bug fixed. using System.Security.Cryptography; ..... ..... string MD5SUM(byte[] FileOrText) //Output: String<-> Input: Byte[] // { return BitConverter.ToString(new MD5CryptoServiceProvider().ComputeHash(FileOrText)).Replace("-","").ToLower(); } // Quite simple isn't it? I hope you enjoyed this tiny article and //I hope I could help ya! // You can use this example of course for SHA1 as well, just change the MD5CryptoServiceProvider to SHA1CryptoServiceProv.
https://www.codeproject.com/Articles/9232/MD5SUM-FOR-C?fid=141753&df=90&mpp=10&sort=Position&spc=None&tid=1973504
CC-MAIN-2017-34
refinedweb
150
62.88
Important: Please read the Qt Code of Conduct - [Solved] LNK1104: cannot open file 'debug\Project1.exe' Hello, I try to make work a small program with qt in visual basic. I've already tried to show only a small button and it worked, then Qt is well installed. But with my new program the compiling gives the message: @LINK : fatal error LNK1104: cannot open file 'debug\Project1.exe' @ I don't know from where it comes, maybe an error in my lines, but there are quite simple.. I give the code from main.cpp, MaFenetre.cpp and MaFenetre.h main.cpp @#include <iostream> #include "maFenetre.h" int main (int argc, char *argv[]) { QApplication app(argc, argv); MaFenetre window; window.show(); return app.exec(); }@ MaFenetre.h @#ifndef DEF_MAFENETRE #define DEF_MAFENETRE #include <QtGui> #include <QApplication> #include <QtWidgets> #include <QPushButton> class MaFenetre: public QWidget { public: MaFenetre(); ~MaFenetre(); void show(); private: QPushButton *bouton; QProgressBar *barre; }; #endif@ MaFenetre.cpp @#include "maFenetre.h" MaFenetre::MaFenetre(): QWidget() { setFixedSize(200,100); bouton = new QPushButton("Hey"); bouton->setText("Salut!"); bouton->setFont(QFont("Lucida Handwriting",20)); barre = new QProgressBar; barre->move(200,300); } MaFenetre::~MaFenetre() { delete bouton; delete barre; } void MaFenetre::show() { bouton->show(); barre->show(); }@ Hi, is that the only error message or are there some more? Should "project1.exe" be the name of your executable? Hi, thank you for your reply.'(38,5): error MSB3073: The command "qmake && nmake debug" exited with code 2.@ Yes, the project1.exe is the name of my executable Is "Project1.exe" currently executing ? The times I've seen this error is because the application is currently active. Check you process list, or in your case the task manager. I agree with mranger90: project1.exe must be running yet, kill the process and you will be able to continue It was the problem, the last time I executed the program there was nothing to show so that I couldn't quit the executable. Thank you very much! bq. I try to make work a small program with qt in visual basic. Are you using Qt Creator? The code your are using is C++ by the way. bq. the last time I executed the program there was nothing to show so that I couldn’t quit the executable. Using Qt Creator it's possible to stop a running executable using the "Application Output" pane by clicking the red button there. There is a good tutorial "on the wiki": which shows the things you want to achieve. Do you realy want to make a program which shows a button and a progresbar in a separate window? Normally we put widgets like that in a layout together in one window. Yeah, I'm training myself with visual because I must create a software with it and using Qt. So I try not to use Qt Creator except for Qt Designer. Thanks for the tutorial I will read that. I have seen for the separate windows, I created a QWidget and I included his name in the different QWidgets to obtain just one window at the end with the included elements. Is a layout very usefull if I always want the same window size? Why should I use it? If you always have the same window size you will probably not need a layout. With a layout you are more flexible and smart if the size can be changed by the user. - Chris Kawa Moderators last edited by Layouts make your ui style independent. What I mean by that is different users will have different system styles applied. boxes and buttons will differ in size, borders, font etc. Things like barre->move(200,300) will look like you intended only on your (and some amount of other) computers. It might well be that someone will set larger font or different DPI settings and it will result in overlapping ui elements. Layouts take care of that for you so you don't have to figure out those exact numbers for all possible system configurations out there. Also, it's a better idea not to manage destruction of the ui elements yourself. It's just too easy to forget one and leak. Instead of explicit delete bouton in the destructor just give it a parent in the constructor: bouton = new QPushButton("Hey", this); Then you don't have to think about it again since parent will delete all of its children when it's time.
https://forum.qt.io/topic/39416/solved-lnk1104-cannot-open-file-debug-project1-exe
CC-MAIN-2021-04
refinedweb
736
66.13
Susantha, Will you be fixing this on interop grounds? Samisa... -----Original Message----- From: Susantha Kumara [mailto:susantha@opensource.lk] Sent: Thursday, July 22, 2004 11:31 AM To: 'Apache AXIS C Developers List' Subject: RE: Removal of prefix from HeaderBlock I explained this API change by But there was no feedback for this mail. > -----Original Message----- > From: Sanjiva Weerawarana [mailto:sanjiva@opensource.lk] > Sent: Thursday, July 22, 2004 10:18 AM > To: Apache AXIS C Developers List > Subject: Re: Removal of prefix from HeaderBlock > > "Susantha Kumara" <susantha@opensource.lk> writes: > > > > IMO if a server is expecting a specific prefix it is wrong. It is the > > namespace that is represented by a prefix is important. Please correct > > me if I am wrong. > > +1. > > There may be crappy servers that only work with certain prefixes .. > that server is broken but if you have to interop with it then there > isn't much choice. > > > The namespace prefixes that are automatically added by the Serializer > > are in form ns<n> where n is the sequence number. > > > > So if any stub or handler adds a prefix of that kind there can be > > namespace prefix conflicts. This api change was aimed at avoiding this > > kind of conflicts. > > Yes there is indeed such a risk .. the solution is to say that the > prefix given when registering a namespace is *desired* prefix. If > that prefix is already in use then an auto generated one can be used. > If its not in use, then there's no reason not to use it. We can change the APIs back to enable setting *desired* namespace prefixes. But this is not a lasting solution because if the namespace prefix is in use and the Serializer puts dynamically generated prefix again that particular server will fail. Susantha. > > In any case, if the API existed before then removing it should be > done after lots of flags and warnings .. otherwise anyone who depended > on it is screwed. > > The suggestion Samisa made of "deprecating" old APIs before removing > is very good. Maybe doxygen supports something like the @deprecated > tag?? > > Sanjiva. --------------------------------------------------------------------------------------------------.
http://mail-archives.apache.org/mod_mbox/axis-c-dev/200407.mbox/%3C28B8E04BCFBEB74B9B19F765DE6598E902595555@enetslmaili.enetsl.virtusa.com%3E
CC-MAIN-2018-13
refinedweb
344
65.32
Details - Type: Bug - Status: Open - Priority: Minor - Resolution: Unresolved - Affects Version/s: Scala 2.9.0-1, Scala 2.9.1 - Fix Version/s: None - Component/s: Misc Compiler - Description We get a compiler error when trying to partially apply a /: (left fold) def f(xs: List[Int]) = (0 /: xs) _ <console>:15: error: missing arguments for method /: in trait TraversableOnce; follow this method with `_' if you want to treat it as a partially applied function Here are some workarounds: def f(xs: List[Int]) = xs.foldLeft(0) _ def f(xs: List[Int]) = xs./:(0) _ def f(xs: List[Int]): ((Int, Int) => Int) => Int = (0 /: xs) This is essentially a duplicate of SI-1980 because they're both caused by the odd desugaring performed on right associative operations. Maybe I can get both.
https://issues.scala-lang.org/browse/SI-5073
CC-MAIN-2014-15
refinedweb
136
60.65
How can I tell Sublime not to open previously-open directories when I launch it? My ideal workflow would be $ cd path/where/I/want/to/work $ subl . After doing that, it's a pain to have to close other windows that come up automatically. I'm guessing there's a magic setting I can toggle somewhere? I figured it out. In Packages/User/Global.sublime-settings, the following lines got me what I want: { "hot_exit": false, "remember_open_files": false } Is it possible to change these setting from command line rather than editing the setting permanently in sublime? For when I have 2 or 3 projects in sublime but want a new folder to open in sublime without those previous projects and don't want the new project to overwrite sublime_session when I close the window. That way when I open sublimetext normally it will still have those 2 or 3 other projects? It seems simple from the command line perspective. Just a option to ignore last session and don't write session. But I don't know how difficult it would be to implement in sublime if it doesn't already exist. Thanks So, guys, any progress? It really seems to be a tiny change — only add a command switch to 'subl' to ignore last session. +1 Such an option would make Sublime far more useful at the command-line for one-off file editing. Currently it's a workflow PITA: Eg, I have Sublime set as my editor for git. Every time I "git commit -a", Sublime opens with a bunch of tabs (and possibly windows). Now,Say I write my commit and then close with alt+f4, Git won't commit because I have another sublime window open and it's waiting for the process to quit, plus I've now broken my window collection from my last development session. So I have to write my commit, then carefully save it, close the tab and then exit Sublime with File..Exit to keep my windows/projects as they were before. What a faff! I've tried all kinds of workarounds for this, like loading a blank project with the cmdline etc., but none of them play nice. Blank projects have the annoying bug of launching a separate window, and randomly focusing it instead of the window with the file launched from the cmdline. What we want here is a simple launch option that disables hot_exit, doesn't load any session/projects, and launches sublime in its own process for one-off editing. This would be dead handy for command line editing, but also for other purposes too, like making sublime the editor launched by other apps. Sublime is beautiful, but it's doesn't play nice for people wanting to launch from the command line for one-off editing. As a result, I usually fall back to vim or whatever for those jobs. Hey guys, I'm posting just because I'm interested in this topic. I have the same issue. I'm looking into projects at the moment. I wonder if a project file could accomplish what we are looking for ? You running under tmux? If so, you need to install reattach-to-user-namespace and set an alias like:alias subl='reattach-to-user-namespace /Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl' --project now works, it will load your project, or file, or whatever. This was frustrating me and I just figured it out a few minutes ago. Then I found this thread about my other gripe. I want the option to not load my previous session (as a command line flag, not as a global config) HUGE +1 I use sublime as my default editor in linux, the fact that version 3 can elevate to root on the fly makes it damn comfy to useand a switch telling subl to ignore session and just open passed path would be very nice Still nothing new on this? Would be great with a command line option. +1 but I guess "technical support" isn't the right place to post this +1 This would be really useful, particularly for git commit workflows etc useful command (running Linux): subl3 -wn "Enter File Here" Also add to config: { "close_windows_when_empty": true } Edit: Add Code Section and Added config fle
https://forum.sublimetext.com/t/disable-automatic-loading-of-last-session/4132
CC-MAIN-2016-36
refinedweb
719
71.04
Hi, in java, I can use packages to combine classes which have similar traits. For example, I have a package "worker" and classes in this package contains "Programmer.java" and "Designer.java" I want to know how to do this in C++. What I did was this: 1. created a source folder called "actor" 2. inside the directory, I created an "actor.h" and "actor.cpp" 3. now I created another folder called "camera" 4. I created a "camera.h" and "camera.cpp" inside it. 5. "camera.h" depends on "actor.h" -> this is where the problem started. It seems that camera.h cannot see actor.h I tried #include "actor/actor.h" in the camera.h but it still did not find the header. What am I doing wrong? My goal is just I do not want to put all headers and cpp files in a single directory. Suggestions are welcome. Thanks!
https://www.daniweb.com/programming/software-development/threads/238883/c-packaging-help
CC-MAIN-2018-13
refinedweb
153
73.44
The Java String compareTo() method. Java String compareTo() method We have following two ways to use compareTo() method: int compareTo(String str) Here the comparison is between string literals. For example string1.compareTo(string2) where string1 and string2 are String literals. int compareTo(Object obj) Here the comparison is between a string and an object. For example string1.compareTo("Just a String object") where string1 is a literal and it’s value is compared with the string specified in the method argument. Java String compareTo() method How to find length of a string using String compareTo() method Here we will see an interesting example of how to use the compareTo() method to find the length of a string. If we compare a string with an empty string using the compareTo() method then the method would return the length of the non-empty string. For example: String str1 = "Negan"; String str2 = ""; //empty string //it would return the length of str1 in positive number str1.compareTo(str2); // 5 //it would return the length of str1 in negative number str2.compareTo(str1); //-5 In the above code snippet, the second compareTo() statement returned the length in negative number, this is because we have compared the empty string with str1 while in first compareTo() statement, we have compared str1 with empty string. Lets see the complete example: public class JavaExample { public static void main(String args[]) { String str1 = "Cow"; //This is an empty string String str2 = ""; String str3 = "Goat"; System.out.println(str1.compareTo(str2)); System.out.println(str2.compareTo(str3)); } } Output: Is Java String compareTo() method case sensitive? In this example, we will compare two strings using compareTo() method. Both the strings are same, however one of the string is in uppercase and the other string is in lowercase. public class JavaExample { public static void main(String args[]) { //uppercase String str1 = "HELLO"; //lowercase String str2 = "hello";; System.out.println(str1.compareTo(str2)); } } Output: As you can see that the output is not zero, which means that compareTo() method is case sensitive. However we do have a case insensitive compare method in string class, which is compareToIgnoreCase(), this method ignores the case while comparing two strings. If both the strings has space in them? Also if the no of characters in both the strings are not equal then what will be the output? I think a better description of what this method does is that it compares the FIRST character that is not equal in both strings. Anything after that will NOT be compared, whether one string has more or less characters are equal or not. The interger given in the result is based on the difference in value of the characters in the ASCII chart. This would apply for the space as well. If space is placed as the first distinct character in the two strings, then it would be compared depending the ASCII values of the two characters. For example, if you compared a space character (which has a ASCII value of 32) to a comma (which has an ASCII value of 44) using the compareTo() method, assuming that they are the first distinct chracters found in the two strings, then the result would be the difference between 32 and 44, which is -12.
https://beginnersbook.com/2013/12/java-string-compareto-method-example/
CC-MAIN-2021-10
refinedweb
542
60.65
do-while loop in C++ Programming In C++ programming, loop is a process of repeating a group of statements until a certain condition is satisfied. Do-while loop is a variant of while loop where the condition isn't checked at the top but at the end of the loop, known as exit controlled loop. This means statements inside do-while loop are executed at least once and exits the loop when the condition becomes false or break statement is used. The condition to be checked can be changed inside the loop as well. Syntax of do-while loop do { statement(s); ... ... ... }while (condition); Flowchart of do-while loop Example C++ program to print the sum of n natural numbers. #include <iostream> #include <conio.h> using namespace std; int main() { int n,i=1,s=0; cout <<"Enter n:"; cin >> n; do { s=s+i; i++; }while (i<=n); cout <<"Sum = "<<s; getch(); return 0; } This program prints the sum of first n natural numbers. The number till which the sum is to be found is asked from user and stored in a variable n. The variables i and s are used to store the number count from 1...n and sum of the numbers respectively. Inside the do-while loop, sum is calculated by repeated addition and increment. In each repetition, whether number count, i, is smaller or equals to inputted number, n, is checked. If it is, then the loop continues, but exits the loop if it isn't. After the control exits from the loop, sum is printed. Output Enter n:7 Sum = 28
https://www.programtopia.net/cplusplus/docs/do-while-loop
CC-MAIN-2019-30
refinedweb
267
72.66
This is your resource to discuss support topics with your peers, and learn from each other. 08-05-2011 11:00 AM Hello guys!, I need your help cause I cannot find any solution focusing managers. I have One manager(parent manager) which contains custom managers(child managers), and these have more fields like labels and edit fields. Right now, I want to focus ONLY the (child managers) as whole manager, im not interested on focus the fields inside of them(knowing that there are edit fields). All I could do is to focus the manager using setFocusListener but do not why the draw focus is not working. Parent manager implements FocusChangeListner() -Childmanager.setChangeListener(parentManager) --LabelField --EditField -Chidlmanager.setChangeListener(parentManager) --EditField --LabelField Thanks a lot! Solved! Go to Solution. 08-05-2011 11:06 AM You can't. Managers do not accept focus. But you can fake it. Add a focusable NullField to the Manager, make everything else non focusable. Add a Focus Change Listener to the NullField, and have this process, invalidate the Manager. Then change the Manager so that before it paints itself, it checks to see if the NullField is inFocus, and if it is, it paints itself in a focused way - typically with a Blue background. This question has been asked before to, similar answers have been given. Search and you should find them. Good luck. 08-05-2011 12:03 PM Thanks pete, I have read the solution of these forum ( thanks a lot 08-05-2011 12:47 PM Yes I think there are problems with both the supplied methods. Method 1 assumes that the Manager's 'graphic' area and the Screen 'graphic' area are the same, which typically they will not be. The second one assumes that NullField has access to the graphic area of the Manager, again this is not valid. Really the NullField can only paint on its own space, and it doesn't have any. So ignore both these options, they are not valid. I've also had a look and I can't actually find a Thread that discusses this and has a solution that will suit you. So have a go at what I suggested above and we will help get that code working. 08-05-2011 01:37 PM - edited 08-05-2011 01:39 PM Just to elaborate slightly on what Peter has said: since you want access to your Manager's viewing area, you'll need to define some of the actions on the Manager's level. One way to achieve this is to define a FocusChangeListener inside the Manager and attach that listener to the NullField. Inside that listener, simply invalidate the manager on focus changes. Override the Manager's paintBackground to check your NullField's isFocus(), pick a color depending on that and paint it (graphics.setBackgroundColor followed by graphics.clear is one of the possibilities - just don't forget to reset that background color to its previous value afterwards). 08-05-2011 02:28 PM Still without working... here is a similar code how i am trying to focus it. Consider that my testManager is attached to a screen. And i want to focus just the MyManager class. Ty public class MyManager extends HorizontalFieldManager implements FocusChangeListener{ public MyManager(){ NullField focus = new NullField(FOCUSABLE); focus.setFocusListener(this); VerticalFieldManager vfm = new VerticalFieldManager(); vfm.add(new LabelField("Hello World2")); vfm.add(new LabelField("Hello World3")); this.add(new LabelField("Hello World")); this.add(vfm); } public void focusChanged(Field field,int eventType){ this.getManager().invalidate(); } protected void paintBackground(Graphics arg0) { arg0.setBackgroundColor(Color.LIGHTBLUE); super.paintBackground(arg0); } } public class TestManager extends VerticalFieldManager(){ public TestManager(){ this.add(new MyManager()); this.add(new MyManager()); this.add(new MyManager()); } } 08-05-2011 02:58 PM You forgot one line: add(focus); (or this.add(focus); as you seem to like). In addition, super.paintBackground will probably achieve nothing. You can override your paintBackground like this: protected void paintBackground(Graphics g) { int prevBg = g.getBackgroundColor(); if (focus.isFocus()) { g.setBackgroundColor(Color.LIGHTBLUE); } else { g.setBackgroundColor(Color.WHITE); } g.clear(); g.setBackgroundColor(prevBg); } Should work much better. 08-05-2011 07:08 PM Now there is a nother problem, I didnt start new forum cause the problem is relationed... The focus works when I navigate through the managers but when I click on them nothing happens even the managers didnt focus. 08-06-2011 01:04 AM It is a different problem, but yes, it often needs to be solved at the same time. I was waiting for this question to be asked. You need to override touchEvent on your Manager's level. The exact code is not so obvious, so I'll provide you with it: protected boolean touchEvent(TouchEvent message) { int x = message.getX(1); int y = message.getY(1); if (message.getEvent() == TouchEvent.DOWN) { if (x >= 0 && x < getWidth() && y >= 0 && y < getHeight()) { focus.setFocus(); } } return super.touchEvent(message); } 08-06-2011 08:23 AM I use the default eclipse emulator(9550) and it is touchable, the method you gave me seems that works only in touchable phones... should I override the navigationClick or the trackwheelClick method with similar actions at Manager level to do this work on untouchable phones?
https://supportforums.blackberry.com/t5/Java-Development/Help-Focus-on-managers/m-p/1240971
CC-MAIN-2016-44
refinedweb
872
59.6
Develop Office Client Applications using Visual Studio The first thing that you notice after opening your existing projects in Visual Studio 2010 is the very familiar Visual Studio Conversion wizard. Internally we call it Project Migration wizard. Personally I love this wizard because of its simplicity, my converted projects are only few clicks away. In this post I’ll walk through this wizard and tell you what’s happening behind the scenes. There is very useful information shown on the first page of the wizard. This information is very similar to what was on Visual Studio 2008 version. I have the screen shot below and I can bet that most of you will not read the entire statement. I have been using Visual Studio since Visual Studio 6.0 and never read this myself until I joined this team and started owning Project Migration J. Personally I would have written this as bulleted list. Here is my version of the information. 1. You see this wizard if this project was created in previous versions of VS, or if you upgraded your copy of Microsoft Office. * The second part of above statement is very interesting in VSTO scenarios, I will elaborate that later. 2. Once you convert this project to Visual Studio 2010, you may not be able to use it with earlier versions of Visual Studio, better take backups J 3. We will checkout the project files automatically if the project is under source control and provided this machine is configured properly to use source control. 4. If the project is not under source control remove the read only attribute from the files and folders. The second step in the wizard gives us a chance to backup existing source files. We highly recommend taking a backup so that’s why we select the option by default. If you click Finish on the previous page, we would take the backup by default. By default we create a folder named “Backup” under the project directory and store the backup there. The next step of upgrade process will tell you what we are going to do with your project. Again here is my bulleted list of what is written there. 1. We will checkout the files automatically, please make sure that the source control is configured properly on the machine. 2. We will upgrade the project to the currently installed version of Office. If you do not want to upgrade this uncheck “Always upgrade to installed version of Office” option on the “Office Tools” section of the Tools | Options dialog. · Note that you can upgrade the Add-Ins even if you do not have any version of Microsoft Office installed on the machine. · VSTO in Visual Studio 2010 does not support Office 2003 so we upgrade to Office 2007 format by default. Update: Please note that it is still possible to create Office 2003 Shared AddIns in Visual Studio 2010, Misha has a nice blog about the COM Shim wizard being upgraded to use with VS 2010 Once you click Finish we start upgrading the project to Visual Studio 2010 format. Once this is converted, you will get the final page of this wizard. If there were any errors during upgrade the conversion log checkbox will be checked by default. If you do not have .Net Framework 3.5 SP1 installed on the machine the project migration will go through an extra step which I will explain later in “Retargeting During Project Upgrade” section below. If you check the checkbox then the Conversion Report will display as shown below. It is clear from the report below that we touch only the .sln and .csproj/.vbproj files during project upgrade. If the machine does not have .Net Framework 3.5 SP1 installed, the Project Migration wizard will ask you to either download .Net Framework 3.5 SP1 or to retarget the project to .Net Framework 4. If you choose to download .Net Framework 3.5 SP1 the solution and project file will be upgraded to Visual Studio 2010 compatible format and the project will be unloaded in the Solution Explorer. After downloading .Net Framework 3.5 SP1 you can right click on the project in Solution Explorer and select Reload Project. Please keep few things in mind when you choose to retarget the project to .Net Framework 4, this is a big decision; it comes with lots of power and with power comes lots of responsibilities. The major benefit of switching to .Net Framework 4 for VSTO solutions is the use of Type Embedding in VSTO Runtime. Once you retarget the solution to .Net Framework 4 all the referenced Primary Interop Assemblies will be embedded into the customization assembly and the end users will no longer need to install Microsoft Office Primary Interop Assemblies on their machines to run this customization. There is a nice MSDN article which explains this, so be sure to go through it before deciding to switch to .Net Framework 4. Be aware that you may need to change some code as we have changed the programming model a bit. Take a look at some of the How Do I videos we have on the VSTO Dev Center that explain how to migrate projects. Note: If you are upgrading a document level customization that targets Office 2003 you may need to install Visual Studio for Office Runtime Second Edition on developer machine before upgrading the project due to a known compatibility break we found while designing Visual Studio for Office Runtime 3.0 during the Visual Studio 2008 timeframe. Check “Upgrading Microsoft Office 2003 Projects” section of this article for more details. So I hope that this post helps explain what is happening behind the scenes when you migrate your Office solutions to Visual Studio 2010. You may find following articles useful while migrating projects to Visual Studio 2010. Upgrading and Migrating Office Solutions Migrating Office Solutions to the .NET Framework 4 Project Upgrade, Options Dialog Box Have fun. VSTO in Visual Studio 2010 does not support Office 2003?? Where can I find more information about that? And how are we supposed to migrate a solution that contains an Excel 2003 into VS2010? Sorry for my last comment. I didn't finalize to read the post. I just see the section "Upgrading Microsoft Office 2003 Projects" that you mention. And I read that: "Visual Studio 2010 does not support upgrading Microsoft Office projects created by using Visual Studio Tools for Office, Version 2003. To continue to develop one of these projects in Visual Studio 2010, create a new Office project and manually port your code into the new project." So I undersand I will be able to mantain the Excel 2003 add in. I'll try. Thanks! Hi Pau, In Visual Studio 2010 you can not create customizations that target Office 2003. In Visual Studio 2010 you can create customizations for Office 2007 and Office 2010, As soon as you open the Office 2003 customization in Visual Studio 2010, we will migrate it to Office 2007 or Offfice 2010, which ever is installed on developer machine. As I said in confirm section above "VSTO in Visual Studio 2010 does not support Office 2003 so we upgrade to Office 2007 format by default." Thanks, Navneet Ups... So that's a problem. Many of my custumers still use Excel 2003. I need to target to Excel 2003. I've a Excel 2003 add in, and I need to continue developing THIS project. Migrating to Excel 2007 is not an option, because that would be ANOTHER project. It's a real shame. Many large global organisations still use Office 2003 on the desktop (Billion $ turn over). The biggest obstacle to upgrading is the change in user interface and loss of productivity whilst users learn how to redo things via the Ribbon, etc. As a developer I want to use Visual Studio 2010 and target Office 2003. There are new features in the 2010 compiler which will work with .NET 2.0 and of course the improvements to the 2010 IDE. For now I have to use VS2008 until organisation updates to Office 2003/2010 (Large cost) or someone works out how to target Office 2003 via VS2010. You can still create Office 2003 Shared AddIns in Visual Studio 2010. Misha has a nice blog about COM Shim wizard being upgraded to use with VS 2010. AddinExpress might be a good solution. And there's always targeting the COM IExtensibility2 (the Misha article) as well. It's a lot more work, but you can target all versions of office all the way back to Office 2000. I've done that on several commercial office addin's I've worked on. Are there any articles on what all the "changes to the programming model" actually are. I'm noticing LOTS of namespace collisions when upgrading a non-trivial word addin to VSTO4 and .net4 McLean has an awesome articles related to programming model changes blogs.msdn.com/.../fixing-compile-and-run-time-errors-after-retargeting-vsto-projects-to-the-net-framework-4-mclean-schofield.aspx. McLean has an awesome article related to programming model changes blogs.msdn.com/.../fixing-compile-and-run-time-errors-after-retargeting-vsto-projects-to-the-net-framework-4-mclean-schofield.aspx. i have upgraded my.project using above.procedure but after converting 3-4 project ...now its stop upgarding...
http://blogs.msdn.com/b/vsto/archive/2010/04/15/upgrading-vsto-projects-to-use-with-visual-studio-2010-navneet-gupta.aspx
CC-MAIN-2015-32
refinedweb
1,569
65.12
ABAP News for Release 7.50 – ABAP Keyword Documentation Although ABAP and ABAP CDS are rather self-explaining some of you still tend to use F1 on ABAP or ABAP CDS keywords in the ABAP Workbench and/or ADT. Then, wondrously, the ABAP Keyword Documentation appears, and if you’re lucky, you directly find what you searched for (of course that kind of context sensitive help is overrated; in HANA Studio, you boldly program without such fancies). But since it still seems to be used, one or the other improvement is also done for the ABAP Keyword Documentation. (B.t.w.: What are the most commonly looked up ABAP statements? – SELECT, READ TABLE, DELETE itab). Full Text Search in ADT In the SAP GUI version of the ABAP Keyword Documentation you can carry out an explicit full text search since long. When calling the documentation by transaction ABAPHELP or from SE38 you find the respective radio button: From the documentation display you can also explicitly call the full text seach by selecting Extended Search: Not so in ADT (Eclipse). In ADT the so called Web Version of the ABAP Keyword Documentation is used. That’s not the one you find in the portal but one that can be reached by an ICF service maintained in transaction SICF and that you can also call with program ABAP_DOCU_WEB_VERSION (of course the contents of all versions are the same, only the display and the functionality of the displays differ, with the functionality of the ADT version coming closer and closer to the SAP GUI version). Up to now, the Web Version used in ADT started with an index search and only switched to full text search if nothing was found. With ABAP 7.50 you can call the full text search in the Web Version and in ADT explicitly. Simply put double quotes around your search term: So simple. Furthermore, you can continue searching in the hit list of an index search: A piece of cake! Wonder why I had waited so long to implement it. The next one was harder. Special Characters in Syntax Diagrams In good old times, the special characters { }, [ ], | had no special meaning in ABAP. Therefore, they were used with special meanings in the syntax diagrams of the ABAP Keyword Documentation, denoting logical groups, optional additions and alternatives. This has changed since string templates. Mesh paths and of course the DDL of ABAP CDS are other examples. This resulted in syntax diagrams, where you couldn’t decide which special characters are part ot the syntax and which are markups, e.g. embedded expressions: After some complaints from colleagues (customers never complain, almost). I changed that and use another format for markup characters from ABAP 750 on. Embedded expressions now: A small step for mankind, but a large step for me. Why that? I had to check and adjust about eighteen thousand { }, [ ], | characters manually arrrgh! Couldn’t figure out a way of replacing them with a program. The results are worth a look. Maybe it takes a little getting used to, but my colleagues didn’t complain any more. Time for customers to complain? Any suggestions regarding the format? Now I have only a few lines of code to change in order to adjust the format of all markup characters in one go … Horst, I'd like to complain that the public grammar should be set up by the SAP "compiler" team (the grammar that the compiler really uses), with also a public class to nicely parse the code better than the existing (AFAIK) lexers 😉 Sandra, if I understand you right, you propose that the keyword documentation´s syntax diagrams are generated from compiler resources and the info developer only adds the documentation? Yes, I´d like to have that too, but its not so easily possible, especially, if one wants to split the documentation into senseful sections. Some years ago we gave it a try with the generated syntax diagrams that you find by selecting the icon close to the headings of keyword chapters of the documentation in SAP GUI. The basics of those come from the compiler team. Unfortunately they are not supported any longer and are getting out of date. I switch them off one by one in the moment a statement gets new additions. It was more a hope than a real request 😀 I understand "easily that it's complex" (the ABAP grammar must be one of the most complex in all languages I know); it's interesting to know that you tried to do it several years ago. I wish you a lot of courage for all these future additions !
https://blogs.sap.com/2015/11/25/abap-news-for-release-750-abap-keyword-documentation/
CC-MAIN-2022-21
refinedweb
775
60.24
In day 5 we will look in to how to bundling and minification to maximize performance of MVC application. We will also look in to the concept and advantages of view model which is nothing but a bridge between model and the view. In case you have missed the past 4 days of the MVC learn series below are the links for the same. Both these concepts bundling and minification helps us to increase performance. Web projects always need CSS and script files.Bundling helps us to combine to multiple javascript and CSS file in to a single entity during runtime thus combining multiplerequests in to a single request which in turn helps to improve performance. For example consider the below web request to a page. The below request are recorded by using chrome developer tools. This page consumes two JavaScript files “Javascript1.js” and “Javascript2.js”. So when this is page is requested it makes three request calls:- Now if you think a little the above scenario can become worst if we have lot of javascript files(especially JQUERY files) resulting in multiple requests thus decreasing performance. If we can somehow combine all the JS files in to a single bundle and request them as a single unit that would result in increased performance (See the next figure which has a single request). Minification reduces the size of script and CSS files by removing blank spaces , comments etc. For example below is a simple javascript code with comments. // This is test var x = 0; x = x + 1; x = x * 2; After implementing minification the javascript code looks something as below. You can see how whitespaces and comments are removed to minimize file size and thus increasing performance as the file size has become smaller and compressed. var x=0; x=x+1; x=x*2; So let’s demonstrate a simple example of bundling and minification with MVC 4.0 step by step. So to understand bundling and minification , let’s go ahead and create a empty MVC project. In that let’s add a “Script” folder and inside”Script” folder , lets add two javascript files as shown in the below figure. Below is the code for “Javascript1” file. Below is the code for “Javascript2” file. alert("Hello"); Now let’s go ahead and create a controller which invokes a view called as “MyView” which consumes both the javascript files. public class SomeController : Controller { // // GET: /Some/ public ActionResult MyView() { return View(); } } Below is the ASPX view which consumes both javascript files. <html> <script src="../../Scripts/JavaScript1.js"></script> <script src="../../Scripts/JavaScript2.js"></script> <head> </head> <body> <div> This is a view. </body> </html> Now run the MVC application in Google chrome , press CNTRL + SHIFT + I keyboard keys to see the below output. You can see there are three requests: - Now bundling is all about making those two JavaScript call’s in to one. Bundling and minification is done by “System.Web.Optimization” namespace. Now this DLL is not a part of .NET or ASP.NET framework. We need to use NUGET to download this DLL. So go to NUGET and search for ASPNET.Web.Optimization. In case you are new to Nuget and do not know how to use it please see this video for basics So once you have got the optimization in your package , click on install to get the references in your project. Now this step depends on which MVC template you have selected. If you have selected the “Basic” template then the “BundleConfig” file is ready made and if you have selected the “Empty” template then you have do lot of work. Currently we have selected “Empty” template so that we can know things from scratch. So go ahead and add ‘BundleConfig” class file and create a “RegisterBundles” method as shown in the below code. In the below code the “bundles.add” says that add all the javascript files in the “Scripts” folder in to one bundle called as “Bundles”. Important note: - Do not forget to import “using System.Web.Optimization;” in the class file or else you will end up with errors. public class BundleConfig { public static void RegisterBundles(BundleCollection bundles) { bundles.Add(new ScriptBundle("~/Bundles").Include( "~/Scripts/*.js")); BundleTable.EnableOptimizations = true; } } This step is not necessary to be performed by project who have created using “Basic” template. But below goes the steps for people who have created using “Empty” template. Open the global.asax.cs file and in the application start call the “RegisterBundles” method as shown in the below code. protected void Application_Start() { … … BundleConfig.RegisterBundles(BundleTable.Bundles); } Once the bundling is done we need to remove the “script” tag and call the “Optmization” dll to render the bundle. <script src="../../Scripts/JavaScript1.js"></script> <script src="../../Scripts/JavaScript2.js"></script> Below is the code which will bundle both javascript files in to one unit thus avoiding multiple request call for each file. <%= System.Web.Optimization.Scripts.Render("~/Bundles") %> Below is the complete bundling code called inside the MVC view. <%= System.Web.Optimization.Scripts.Render("~/Bundles") %> <head runat="server"> <meta name="viewport" content="width=device-width" /> <title>MyView</title> </head> <body> <div> This is a view. </body> </html> So now that you are all set,it’s time to see the bundling and minification in real. So run google chrome, press CNTRL + SHIFT + I and you can see the magic there is only one call for both the javascript files. If you click on the preview tab you can see both the javascript files have been unified and…. GUESS , yes minification has also taken place. Remember our original javascript file. You can see in the below output how the comments are removes , white spaces are remove and the size of the file is less and more efficient. which So the view model class can have following kind of logics: - Let’s do a small lab to understand MVC view model concept using the below screen which we discussed previously. I will use a top down approach for creating the above screen: - So let’s go ahead and create a “Customer” model with the below properties. Because this is a view model I have avoided the color property in the below class. public class CustomerModel { private string _CustomerName; public string CustomerName { get { return _CustomerName; } set { _CustomerName = value; } } private double _Amount; public double Amount { get { return _Amount; } set { _Amount = value; } } } The next thing is to create a view model class which will wrap “Customer” model and add UI properties. So let’s create a folder “ViewModels” and in that add a class “CustomerViewModel”. Below goes the code for “CustomerViewModel” class. Below are some important points to note about the view model class: - public class CustomerViewModel { private CustomerModel Customer = new CustomerModel(); public string TxtName { get { return Customer.CustomerName; } set { Customer.CustomerName = value; } } public string TxtAmount { get { return Customer.Amount.ToString(); } set { Customer.Amount = Convert.ToDouble(value); } } public string CustomerLevelColor { get { if (Customer.Amount > 2000) { return "red"; } else if (Customer.Amount > 1500) { return "orange"; } else { return "yellow"; } } } } The next step is to create a strongly typed MVC view where we can consume the view model class. In case you are not aware of MVC strongly typed views please see Learn MVC Day 1, Lab 4 If you see the view it is now decorated or you can say binded with view model class. The most important thing to watch is your view is CLEAN. It does not have decision making code for color coding. Those gel codes have gone inside the view model class. This is what makes VIEW MODEL a very essential component of MVC. This view can be invoked from a controller which passes some dummy data as shown in the below code. public class CustomerController : Controller { // // GET: /Customer/ public ActionResult DisplayCustomer() { CustomerViewModel obj = new CustomerViewModel(); obj.Customer.CustomerName = "Shiv"; obj.Customer.Amount = 1000; return View(obj); } } Lot of architects make mistakes of creating a view model class by inheriting. If you see the above view model class it is created by composition and not inheritance. So why does composition make more sense?. If you visualize we never say “This Screen is a child of Business objects”, that would be a weird statement. We always say “This screen uses those models”. So it’s very clear it’s a using relationship and not an IS A (child parent) relationship. Some of the scenarios where inheritance will fail are: - So do not get lured with the thought of creating a view model by inheriting from a model you can end up in to a LISKOV issue(read about SOLID LISOV from here) It looks like a duck, quacks like a duck but it is not a duck. It looks like a model has properties like a model but it is not exactly a model. Image from [TestMethod] publicvoid TestCustomerLevelColor() { CustomerViewModel obj = new CustomerViewModel(); obj.TxtName = "Shiv"; obj.TxtAmount = "1000"; Assert.AreEqual("Red",obj.CustomerLevelColor); } When it comes to exception handling Try…Catch block is the favorite choice among .NET developers. For example in the below code we have wrapped the action code in a TRY CATCH block and if there are exceptions we are invoking an “Error” view in the catch block. public ActionResult TestMethod() { try { //.... return View(); } catch (Exception e) { //Handle Exception; return View("Error"); } } The big problem with the above code is REUSABILITY of exception handling code. MVC provides you to reuse exception handling code at three levels: - Let’s go step by step to demonstrate all the above 3 ways of handling errors in MVC. So the first thing is to add a simple controller and action which raises some kind of exception. In the below code you can see we have added a “TestingController” with an action “TestMethod” where we have raised divide by zero exception. public class TestingController : Controller { public ActionResult TestMethod() { int x = 0; x /= x; //Above line leads to DivideByZeroException return View(); } } So if you execute the above action you would end up with error as shown in the below figure. Now once the error is caught by any of the three above methods we would like throw some error page for display purpose. So let’s create a simple view by the name “Error” as shown in the figure below. So now that we have an error and also an error view it’s time to do demo’s using all the three ways. So first let’s start with “OnException” i.e. exception code reusability across actions but in the SAME CONTROLLER. To implement exception go to the “TestingController” and override the “OnException” method as shown in the below code. This method executes when any error occurs in any action of the “TestingController”. The viewname i.e. “Error” is set in the result property of “filterContext” object as shown in the below code. protected override void OnException(ExceptionContext filterContext) { Exception e = filterContext.Exception; //Log Exception e filterContext.Result = new ViewResult() { ViewName = "Error" }; } Now if you try to invoke the “TestMethod” from the “TestController” you should see the “Error” view as shown in the below figure. The “OnException” method helps to provide error handling for a specific controller but what if we want to reuse the exception logic across any controller and any action. That’s where we have “FilterConfig” way which is the next thing. In Web.Config simply enable custom error as follows. <customErrors mode="On"> In App_Start folder open FilterConfig.cs and make sure that HandleErrorAttribute is added to GlobalFilterCollection. HandleErrorAttribute at global level confirms that exceptions raised by each and every action in all the controllers will be handled. Note: If you are curious about how to make “HandleErrorAttribute” controller specific or action specific then click here and learn in detail about exception handling in ASP.NET MVC. If you execute the controller you should get the same error page as shown in step 3. To handle error across MVC project we can use listed to “Application_Error” event in global.asax file and write logic of error handling in the same. That’s it. A simple demonstration of Exception handling in MVC using all the 3 ways. Note: We have uploaded a supporting article for our step by step series. It explains in detail exception handling in ASP.NET MVC. If you are willing to read more about it, click here. In ASP.NET MVC we have a concept of Areas using which we can break our system into modules and organize our project in a better manner. Assume we have a system which consists of two modules Customer and Order Processing. Normally when we create Asp.Net MVC Project our structure consist of 3 folders – Controllers, Model and Views. So Project structure will be something like this. As you can see nothing is organized. When it comes to code management it will be very difficult. Side image is the structure of the project when we had 2 modules, imagine a situation when we have hundreds of modules in single system. Examples for Areas in real world Country is divided into states to make development and management easy. Just like real world we use the concept of area in Asp.Net MVC to break single system into modules. One area represents one module by means of logical grouping of controllers, Models and Views. To add area right click your project and say Add>>Area as shown in the below figure. Put all your related files into respected areas as shown in the below figure. In the below figure you can see I have created two areas “Customer” and “Order”. And each of these areas have their own View , controller and models. Note: - Area is logical grouping not physical, so no separate dlls will be created for every area. One can ask a question why we should use Areas for breaking system into modules when we can just use folders. In simple words, answer for this question is “To avoid of huge manual work” In order to achieve same functionality using simple folder you have to do following things. Default View Search Customized View Search Final note, you can watch my C# and MVC training videos on various sections like WCF, Silver light, LINQ, WPF, Design patterns, Entity framework etc. By any chance do not miss my .NET/C# interview questions and answers book from. For technical training related to various topics including ASP.NET, Design Patterns, WCF, MVC, BI, WPF contact SukeshMarla@gmail.com or visit In case you are completely a fresher I will suggest to start with the below 4 videos which are 10 minutes approximately so that you can come to MVC quickly. In case you want to start with MVC 5 start with the below video Learn MVC 5 in 2 days. Every lab I advance in this 7 days series I am also updating a separate article which discusses about important MVC interview questions which are asked during interviews. Till now I have collected 60 important questions with precise answers you can have a look at the same.
http://www.codeproject.com/Articles/724559/Learn-MVC-Model-View-Controller-Step-by-Step-in?msg=4760313
CC-MAIN-2016-30
refinedweb
2,510
64.2
makes sense to be extra careful here. Standard HTTP POST messages can be easily forged and that potentially opens a security hole when using webhooks. And that’s the reason why we have gone and implemented an authentication mechanism for all Webhooks sent via Logentries. To enable the authentication, simply include a username and a password in Webhook’s URL. For example: We use the Hash Message Authentication Code for this which provides a relatively simple mechanism to verify both authenticity and consistency of the message. The idea is that for the request we calculate a cryptographic hash which is unfeasible to recreate without a shared secure code (that’s a password in our case). Both the sender and receiver calculates the hash value independently using their shared secure code and compares them for identity. Note that HMAC is not a simple hash of the message content. Since standard cryptographic hashes are stream-based (Merkle–Damgård), they are prone to a length-extension attack. Thus, an attacker can easily pad the content to fit the hash block size and continue with the calculation of the hash with the appended data. To avoid this situation, HMAC uses a slightly more complex hashing scheme to close this (and a few other) holes. The following example explains what the HMAC looks like in HTTP headers. This is an example of an alert report configured to be sent to. This means, authentication credentials (user, password) are encoded in the configuration URL. POST /webhook HTTP/1.1 User-Agent: Logentries/1.2 Host: example.com Date: Mon, 28 Jan 2013 22:01:58 GMT Content-Type: application/x-www-form-urlencoded Content-Md5: A4O7taYfMqO/3vugWHFriA== Content-Length: 1632 Connection: keep-alive X-Le-Nonce: nfblZ9aBldYSHT64Kw2bbVwt X-Le-Account: f1cac763 Authorization: LE user:qc2s3YmnX42K1Nvtxw/p1Br1ehI= Accept-Encoding: identity payload=... The hash code is contained in the Authorization header encoded in base 64. To calculate the hash, we create a canonical string which is hashed in a special manner to avoid some cryptographic attacks. The canonical string contains all important fields to guarantee that the message cannot be tempered without noticing. Since HTTP headers can be extended and/or modified on their way across proxies and load balancers, we have to select only a subset of them in order to get a stable result and still guarantee authenticity. These fields are method type (POST in this case), Content-Type, MD5 hash of the content (needs to be calculated although it’s duplicated in the header), Date, path, and X-Le-Nonce. Nonce (aka salt) is a random string generated on the server side to help with a detection of replay attacks. Firstly, we calculate MD5 of the POST data and encode it using base 64 encoding. The value of this hash is also stored in the HTTP header Content-Md5, but don’t use that for HMAC calculation as it can be tempered: import hashlib, base64 content_md5 = base64.b64encode( hashlib.md5( content).digest()) The canonical string then contains all the selected headers delimited with new lines in the following ‘exact’ ordering. Assuming we store headers in variables, the code may look like this: canonical = '\n'.join([ 'POST', content_type, content_md5, request_date, path, nonce ]) Hashing this canonical string produces a signature. Here is how to calculate it with a secret password using a hmac library: import hmac signature = base64.b64encode( hmac.new( password, canonical, hashlib.sha1).digest()) And the authentication header takes the following form, where username is a desired user’s name. auth_header = 'LE ' +username +':' +signature Note that comparing authentication headers is not enough however to be sure you are protected! We have to check that the Date is reasonably accurate (say, not older than 30 seconds) and that nonce (unique for every webhook) hasn’t been seen yet to avoid replay attacks. Find all details (including Python and Ruby sample implementations) in docs. Consider the current implementation as in beta – we are actively testing it. If you have any feedback, let us know! We always love hearing from you and more importantly it makes Logentries stonger and stronger!
https://blog.rapid7.com/2013/02/05/webhooks-are-hmac-authenticated/
CC-MAIN-2018-05
refinedweb
681
54.52
1. Log on to the Windows Azure Platform Portal, on the lower left side of the page click Service Bus, Access Control & Caching. 2. In the left pane expand AppFabric and click Cache 3. Click the New Namespace button on the toolbar. 4. In the Create a new Service Namespace dialog, verify that the Cache checkbox is selected under Available Services. 5. Type a proposed namespace in the Choose a Service Namespace text box, and then click the Check Availability button. If the message “Available” appears under the text box, then the namespace name is available, and you can continue. 6. Select a target region in the Country/Region list. 7. Select your target subscription from the Choose a Subscription list. 8. Select a cache size in the Cache Size list. Unless you have a lot of caching going on in custom features you have developed yourself, 128MB should be plenty. 9. Click the Create Namespace button to create the namespace and associated cache. The cache can have a status of Activating for 5-15 minutes before it changes to Active. See the next step: Preparing a Local Copy of mojoPortal for Deployment See Also Created 2011-11-15 by Joe Audette Updated 2011-11-16 by Joe Audette
https://www.mojoportal.com/create-windows-azure-appfabric-cache
CC-MAIN-2021-43
refinedweb
209
64.1
Before. also read:. There are numerous ways to solve a Producer-Consumer problem and in this post I will show one simple way to solve this problem by using the Data Structures and other constructs provided in the JDK. Java 5 introduced a new set of concurrency related APIs in its java.util.concurrent package. As I said here, not many of the developers are aware of these APIs and very few of the make use of it in their code. There were quite a few new Collection classes which got introduced in Java 5 and one of them is the BlockingQueue. BlockingQueue in Java The JavaDoc says: BlockingQueue is “A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.” There are multiple methods which are supported to retrieve and add elements to the queue wherein one pair throws an exceptions, one of them waits for some fixed time and one pair which blocks until the queue is full/empty. The methods which blocks the thread are the put(e) and take. In a typical producer-consumer problem we would want the consumer thread to be blocked until there is something in the queue to be consumed and the producer thread to be blocked until there is some free space in the queue to add some element. With the use of normal collection classes it because quite a bit of work in implementing inter thread communication by waiting and notifying other threads about the status of the queue. The put(e) and take() methods of the BlockingQueue class are the ones which make it very easy to solve producer-consumer like problems. Lets take a scenario where the producer thread would watch for the files being modified in some directory and add those files to the queue and the consumer thread would print the contents of those files on to the console. Producer thread If you are not familiar with implementing the WatchService in Java, you must first read this to get an idea of how it works. class FileProducer implements Runnable{ BlockingQueue<Path> filesList; Path rootPath; public FileProducer(BlockingQueue<Path> filesList, Path rootPath){ this.filesList = filesList; this.rootPath = rootPath; } @Override public void run() { try { WatchService service = FileSystems.getDefault().newWatchService(); rootPath.register(service, StandardWatchEventKinds.ENTRY_MODIFY); while(true){ WatchKey key = service.take(); for (WatchEvent event : key.pollEvents()){ Path relativePath = (Path)event.context(); Path absolutePath = Paths.get(rootPath.toString(), relativePath.toString()); filesList.put(absolutePath); } //reset is invoked to put the key back to ready boolean valid = key.reset(); //If the key is invalid, just exit. if ( !valid){ break; } } } catch (IOException e) { e.printStackTrace(); } catch (InterruptedException e) { e.printStackTrace(); Thread.currentThread().interrupt(); } } } The producer thread above watches a certain directory for file modifications and adds the absolute path of the file into the BlockingQueue collection passed to the producer thread via its constructor. Consumer Thread The consumer thread would invoke take() on the BlockingQueue instance and then use the Files API to read the contents. As take() is a blocking call, if the filesList collection is empty then it would just block and wait for the data to be available in the filesList collection. class FileConsumer implements Runnable{ BlockingQueue<Path> filesList; Path rootPath; public FileConsumer(BlockingQueue<Path> filesList, Path rootPath){ this.filesList = filesList; this.rootPath = rootPath; } @Override public void run(){ try { while(true){ Path fileToRead = filesList.take(); List<String> linesInFile = Files.readAllLines(fileToRead, Charset.defaultCharset()); System.out.println("reading file: "+fileToRead); for ( String line : linesInFile){ System.out.println(line); } } } catch (InterruptedException e) { e.printStackTrace(); Thread.currentThread().interrupt(); } catch (IOException e) { e.printStackTrace(); } } } Note: If you are writing to a file using Vim or some other editors which creates temporary files then you have to make sure you exclude such files being added to the queue. Invoking the Producer and consumer public class ProducerConsumerSample { public static void main(String[] args) { BlockingQueue<Path> filesList = new LinkedBlockingQueue<>(10); Path rootPath = Paths.get("/tmp/nio"); Thread producerThread = new Thread(new FileProducer(filesList, rootPath)); Thread consumerThread = new Thread(new FileConsumer(filesList, rootPath)); producerThread.start(); consumerThread.start(); } } Pretty straight forward- create instances of both the threads and then launch them. You can create multiple consumer threads as well! In the above example we make use of the LinkedBlockingQueue which is one of the implementations of the BlockingQueue. You have to make sure you import corresponding classes in your source code. You can have all the three classes defined in the same file and name the file as ProducerConsumerSample.java and compile and run the code. Once you have the code running, then go to your terminal and type: /tmp/nio$ touch file1 /tmp/nio$ echo "this is file1" >> file1 /tmp/nio$ touch file2 /tmp/nio$ echo "this is file2" >> file2 and the output you see on the terminal of your java program is: reading file: /tmp/nio/file1 reading file: /tmp/nio/file1 this is file1 reading file: /tmp/nio/file2 reading file: /tmp/nio/file2 this is file2 Note: This code was compiled and tested on a Linux platform, please find similar ways of creating files on Windows when you run your code. […] Using BlockingQueue to solve the Producer and consumer problem. More details can be found here. […]
http://www.javabeat.net/implementing-producer-consumer-scenario-using-blockingqueue-java/
CC-MAIN-2014-15
refinedweb
883
61.87
CONCEPTS USED: Dynamic programming DIFFICULTY LEVEL: Medium. PROBLEM STATEMENT$($SIMPLIFIED$)$: PrepBuddy is too good with string observations but now the Dark Lord thought to defeat PrepBuddy. So he gave him two strings and told him to find the length of the longest common subsequence. For Example : > aabbcc abcc In this example "abcc" is the longest common subsequence. >prep rep "rep" is the longest common subsequence here. OBSERVATION: The question demands you to find the longest subsequence . What is a subsequence? A subsequence is a sequence that can be derived from another sequence by zero or more elements, without changing the order of the remaining elements. Suppose,X and Y are two sequences over a finite set of elements. We can say that Z is a common subsequence of X and Y, if Z is a subsequence of both X and Y. SOLVING APPROACH: Brute Force: We need to first find the number of possible different subsequences of a string with length n with the help of best websites to learn coding., i.e., find the number of subsequences with lengths ranging from 1,2,..n-1. Number of subsequences of length 1 are: nC1.Similarly,Number of subsequences of length 2 are: nC2 and so on and so forth.This gives - nC0+nC1+nC2+nC3+...+nCn-1+nCn=2n. Can you find the time complexity of this brute force? A string of length n has 2n-1 different possible subsequences since we do not consider the subsequence with length 0. This implies that the time complexity of the brute force approach will be O(n * 2n). Note that it takes O(n) time to check if a subsequence is common to both the strings. This time complexity can be improved using dynamic programming. You are encouraged to try on your own ,before looking at the solution. See original problem statement here DYMANIC PROGRAMMING: The problem of computing their longest common subsequence, or LCS, is a standard problem and can be done in O(nm) time using dynamic programming. Let's define the function f. Given i and i, define f(i,j) as the length of the longest common subsequence of the strings A1,i and B1,j. Notice that A=A1,n and B=B1,m , so the length of the LCS of A and B is just f(n,m), by definition of f. Thus, our goal is to compute f(n,m). Look at the image carefully and observe how the table is filled. So how do we compute f(i,j) ? Let's consider the letters Ai and Bj. There are two cases: If Ai==Bj, then the last letter of the LCS must be Ai, because if not, then we can just always add Ai at the end to get a longer LCS. The remaining letters of the LCS must then be a common subsequence of A1,i-1 and B1,j-1 --- in fact a longest common subsequence. Therefore in this case, the length of the LCS of A1,i and B1,j must be f(i,j)=1+f(i-1,j-1). If Ai!=Bj, then Ai and Bj cannot both appear at the end of the longest common subsequence, which means either one can be ignored. If we ignore Ai , then the LCS of A1,i and B1,j becomes the LCS of A1,i-1 and B1,j, and if we ignore Bj, then the LCS becomes the LCS of A1,i and B1,j-1. The longer one of those must be the LCS of A1,i and B1,j, therefore in this case we get f(i,j)=max(f(i-1,j),f(i,j-1)). Pseudocode: function f(i, j): if i == 0 or j == 0: return 0 else if A[i] == B[j]: return 1 + f(i-1, j-1) else: return max(f(i-1, j), f(i, j-1)) // the answer is now f(n, m) SOLUTIONS: #include <stdio.h> #include<stdlib.h> #include<string.h> int max(int x,int y) { if(x>y)return x; return y; } int main() { //write your code here int t;scanf("%d",&t); while(t--) { char s1[1005],s2[1002]; scanf("%s%s",s1,s2); int dp[strlen(s1)+1][strlen(s2)+1]; int n=strlen(s1),m=strlen(s2);]); } } printf("%d\n",dp[n][m]); } return 0; } #include <bits/stdc++.h> using namespace std; int main() { //write your code here int t;cin>>t; while(t--) { string s1,s2; cin>>s1>>s2; int dp[s1.length()+1][s2.length()+1]; int n=s1.length(),m=s2.length();]); } } cout<<dp[n][m]<<"\n"; } return 0; } import java.util.*; import java.io.*; class LongestCommonSubsequence { /* Returns length of LCS for X[0..m-1], Y[0..n-1] */ public static int getLongestCommonSubsequence(String a, String b){ int m = a.length(); int n = b.length(); int[][] dp = new int[m+1][n+1]; for(int i=0; i<=m; i++){ for(int j=0; j<=n; j++){ if(i==0 || j==0){ dp[i][j]=0; }else if(a.charAt(i-1)==b.charAt(j-1)){ dp[i][j] = 1 + dp[i-1][j-1]; }else{ dp[i][j] = Math.max(dp[i-1][j], dp[i][j-1]); } } } return dp[m][n]; } int max(int a, int b) { return (a > b) ? a : b; } /* Utility function to get max of 2 integers */ public static void main(String[] args) { Scanner sc = new Scanner(System.in); int t= sc.nextInt(); while(t-- >0 ){ //LongestCommonSubsequence lcs = new LongestCommonSubsequence(); String s1= sc.nextLine(); String s2= sc.nextLine(); System.out.println(getLongestCommonSubsequence(s1,s2)); } } } *Space complexity: O(NM)**
https://www.prepbytes.com/blog/dynamic-programming/longest-common-subsequence/
CC-MAIN-2021-39
refinedweb
944
65.32
Important: Please read the Qt Code of Conduct - How to repaint QWidget encapsulated in QDeclarativeItem in QML? I work in a C++/QML environment and I use Qt 4.8 with QtQuick 1.0. I have a QWidget derivated class, QCustomPlot, and I encapsulated it in a custom QDeclarativeItem derived class. I use a QGraphicsProxyWidget to embed the QWidget, and it appears nicely upon creation. I would like to update the chart periodically, but I simply cannot, no matter what I do it stays however I initiated it in the constructor. I think I am missing a command from the C++ code that would notify the QML code that the Item should be updated. Here is the code (somewhat simplified) I have: flowgrafik.h: class FlowGrafik : public QDeclarativeItem { Q_OBJECT public: explicit FlowGrafik(QDeclarativeItem *parent = 0); ~FlowGrafik(); void addFlow(double flow); signals: public slots: private: QCustomPlot * customPlot; QGraphicsProxyWidget * proxy; QVector<double> x, y; }; flowgrafik.cpp: FlowGrafik::FlowGrafik(QDeclarativeItem *parent) : QDeclarativeItem(parent) { customPlot = new QCustomPlot(); proxy = new QGraphicsProxyWidget(this); proxy->setWidget(customPlot); this->setFlag(QGraphicsItem::ItemHasNoContents, false); customPlot->setGeometry(0,0,200,200); /* WHAT I WRITE HERE WILL BE DISPLAYED */ // pass data points to graph: customPlot->graph(0)->setData(x, y); customPlot->replot(); } FlowGrafik::~FlowGrafik() { delete customPlot; } void FlowGrafik::addFlow(double flow) { //THIS PART DOES NOT GET DISPLAYED for (int i=0; i<99; ++i) { y[i] = y[i+1]; } y[99] = flow; customPlot->graph(0)->setData(x, y); customPlot->replot(); this->update(); } mainview.qml: Rectangle { id: flowGrafik objectName: "flowGrafik" x: 400 y: 40 width: 200 height: 200 radius: 10 FlowGrafik { id: flowGrafikItem } } I would really appreciate if anyone could tell me why my QCustomPlot QWidget does not replot. @Chillax Usually paintmethod is used to draw something. The updatewill force the Item to re-paint by calling the paintmethod. @p3c0 The problem is that I am new to QtQuick and I probably misunderstand something essential. This is how I think things work: - The customPlot->replot();updates the underlying QWidget - The QGraphicsProxyWidgetdoes the painting of the QDeclarativeItemeverytime updateis called. But if I call the paintmethod instead of updateI get a compile error, because QDeclarativeItem::paint()is virtual But if I call the paint method instead of update I get a compile error, because QDeclarativeItem::paint() is virtual Yeah it is not supposed to be called by user. It is called when a re-paint is requested using update. But since QGraphicsProxyWidgetis in picture here the re-implementing paintis not required. Have you tried to call addFlowfrom the QML ? This will ensure that the addFlowis called when the FlowGrafikItem is ready. @p3c0 I created a timer in my QML item, which calls this function periodically: void FlowGrafik::refresh() { qDebug() << "refreshed"; customPlot->replot(); customPlot->repaint(); this->update(); } addFlowstill gets called from the C++ code periodically and changes the plot values. The weird thing is that whatever I write in the constructor of FlowGrafikgets displayed perfectly. I even tried to delete all internal objects of FlowGrafikin the addFlowfunction, like QGraphicsProxyWidgetand customPlot, and I created new ones, but the results are the same. Only the code I write in the constructor gets displayed, and my QML Item never gets updated. Could the root of the problem be the fact that I load the QML code from a QmlApplicationViewer? main.cpp: QmlApplicationViewer flowView; flowView.setSource(QUrl("qrc:///qml/qml/FlowView.qml")); FlowView.qml: import QtQuick 1.1 import FlowGrafik 1.0 Rectangle { id: flowGrafik objectName: "FlowGrafikRect" x: 0 y: 0 width: 200 height: 200 radius: 10 Timer { interval: 500; running: true; repeat: true onTriggered: flowGrafikItem.refresh() } FlowGrafik { id: flowGrafikItem objectName: "FlowGrafik" } } @Chillax Did you check by adding the code which changes plot values (same as addFlow) inside refresh? - p3c0 Moderators last edited by p3c0 @Chillax After having a quick look at the QCustomPlot's API, I found QCustomPlothas toPainter method which accepts a QCPPainter as an argument. This QCPPainterin turn accepts a QPaintDeviceargument which can be a QImageor QPixmapwhich means the QCustomPlotcan be rendered onto an image or pixmap. So after rendering this QCustomPlot's data into an image/pixmap you can periodically call update and re-paint this image/pixmap inside the re-implemented paintmethod and which will definitely updated on the QML side. @p3c0 Thank you so much for the tip! We are getting really close! So I reimplemented the paintmethod of my custom QDeclarativeItem, FlowGrafik, like this: void FlowGrafik::paint(QPainter* painter) { if (customPlot) { qDebug() << "paint"; QPixmap picture(200,200); QCPPainter qcpPainter(&picture); customPlot->replot(); customPlot->toPainter(&qcpPainter); painter->drawPixmap(QPoint(), picture); } } Unfortunately the results are the same, but I found out that the paintmethod never gets called! (I didn't see the "paint" at the application output). I tried calling it in the refresh()function like this: this->paint(new QPainter);, and then I saw the "paint" at the application output, but the graph still did not replot. Any ideas on how to move on? @Chillax As said earlier to call paintyou need to call update. So what you can do is call a C++ function from QML periodically which will update the plot values and then call update. @p3c0 Yes, I call updateperiodically, but that does not call paint void FlowGrafik::refresh() { qDebug() << "refresh"; customPlot->replot(); customPlot->repaint(); this->update(); } I see "refresh" at the application output, but I don't see "paint". The question is why doesn't updatecall paint? - p3c0 Moderators last edited by p3c0 @Chillax It should actually. Depending on whether or not the item is visible in a view, the item may or may not be repainted Well I guess the Item is visible. @p3c0 FML :D The item is indeed visible. But it didn't help to hide it before the update and then show it again. I saved the pixmap to a jpg file, and it looks just as I expect it to look like. So the problem is that the QML won't repaint the QDeclaretiveItem, because it is visible... Can you think of a better solution for displaying my QCustomPlot Qwidget? I am pretty sure this is possibe in QML, and it shouldn't be this hard. So the problem is that the QML won't repaint the QDeclaretiveItem, because it is visible... No. It's useless painting invisible objects. Can you try following ? On the QML side specify widthand heightfor FlowGrafikitem. May be this could be the reason the item is not painted i.e having widthand heightas 0is close to having an invisible item. Create a very minimal project with QDeclarativeItemand without using QCustomPlotand keeping rest the same. This is to make sure if paintmethod is invoked atleast in this case. I dont have Qt 4.8 at hand so I can't test. Have moved to Qt5 long time back :) @p3c0 Unfortunately specifying with and height did not solve the issue. I created a minimal project (with QtQuick Application template). It is using QCustomPlot and everything works fine. But this is a Desktop version, and if I am not mistaken it is using Qt 5.3, but I need it to work on my embedded linux Qt 4.8 version. The embedded linux version uses qmlapplicationviewerto load QML files with this comment on top: # This file was generated by the Qt Quick Application wizard of Qt Creator. # The code below adds the QmlApplicationViewer to the project and handles the # activation of QML debugging. the desktop version uses qtquickapplicationviewer: # This file was generated by the Qt Quick 1 Application wizard of Qt Creator. # The code below adds the QtQuick1ApplicationViewer to the project. I'm not sure if the applicationviewer or the qt version difference causes my problem. @Chillax qtquickapplicationviewerwas a helper class added back then which calls other in-built functions to load QML files. It also provided some extra functions for ease. You can also directly use QQuickViewor QQmlApplicationEngineto load the QML files depending upon the root object. This is all Qt 5.x related. Similarly qmlapplicationvieweris an helper class. If you look into its source you can see it is actually subclassed from QDeclarativeViewwhich actually loads and displays the QML files. I'm not sure if the applicationviewer or the qt version difference causes my problem. AFAIK definitely not applicationviewerbut may be Qt version. Also did you try running same application with Qt 4.8 on desktop ? I think you should try that too to rule out the system problem if any. @p3c0 So I spent the last couple of hours trying to find out how to install Qt version 4.8 on Ubuntu, but it turns out that only the versions newer than 5.0 support linux. So I installed Qt Creator and Qt version 4.8 on Windows and it works there as well :) @p3c0 Interesting news: Until now, my QmlApplicationViewer flowViewwas just a new window on top of the GUI I was using until now. But if I show only flowViewin fullscreen and nothing else, it works (updates/repaints). qmlRegisterType<FlowGrafik>("FlowGrafik",1,0,"FlowGrafik"); QmlApplicationViewer flowView; flowView.setSource(QUrl("qrc:///qml/qml/FlowView.qml")); flowView.showFullScreen(); @p3c0 What's more interesting: If I call addFlowfrom QML, it works (replots). But if I call it from C++, it updates the QWidget, but not the QML Item. Even if I call refresh()from QML right after it, which should replot and repaint the QML Item too. Any explanation to this? Why is there a difference between changing the QWidget in the C++ code and asking for a replot from QML and doing everything in QML? As if the QML Item and the C++ QDeclarativeItem were two independent objects. So now my plan is to give the flow value from C++ to the FlowGrafikQML item, and then call addFlowfrom QML. That should work. @Chillax Unfortunately I too dont understand this behavior. The fact that it works in desktop environment and not on embedded only points that there should be some bug in that environment. So now my plan is to give the flow value from C++ to the FlowGrafik QML item, and then call addFlow from QML. Try. I had suggested the same earlier :) Let us know if it works or if possible the exact reason so that it may help others too. @p3c0 So thanks a lot for the help! I would have given up without you :) The problem was that I created two instances of FlowGrafik. One in the QML and one in C++. I changed the C++ one and expected the QML one to refresh. Now I created a pointer in the C++ code that points to the Item in the QML. QmlApplicationViewer flowView; flowView.setSource(QUrl("qrc:///qml/qml/FlowView.qml")); QObject * flowViewObject = flowView.rootObject(); FlowGrafik * flowGrafik = flowViewObject->findChild<FlowGrafik *>(QString("FlowGrafik")); But now if I want to use this pointer (like flowGrafik->addFlow(...)) I get a segmentation fault. Do you know why? The problem was that I created two instances of FlowGrafik. One in the QML and one in C++. I changed the C++ one and expected the QML one to refresh. but now this makes me wonder as to why it worked on desktop. But now if I want to use this pointer (like flowGrafik->addFlow(...)) I get a segmentation fault. Do you know why? Was the FlowGrafikobject found ? You can add a small condition to verify it. if(flowGrafik) { flowGrafik->addFlow(...) } @p3c0 You are right, it was not found, because it was actually the rootobject, not a child :) Thanks again for the help!
https://forum.qt.io/topic/66842/how-to-repaint-qwidget-encapsulated-in-qdeclarativeitem-in-qml/7
CC-MAIN-2020-34
refinedweb
1,900
56.76
Design elements of a multi-language standard library14 Aug 2014. It’s bad enough that libraries written in different languages are incompatible, but even in environments like Java and C# with relatively good libraries, I have seen basic concepts like “points” and “colors” redefined, because the standard library wasn’t good enough. This need to change. Purpose not use the standard library of language A; it must instead use a standard library that exists in both languages. That library is the MLSL.. MLSL has other purposes too:? Although professional programmers normally learn multiple programming languages, it’s tough to remember all the nuances of several different standard libraries. By using a MLSL, you can program in different languages more easily (as long as those different languages all have an implementation of the MLSL).. The MLSL does not exist yet. I am calling for volunteers to help build it. So, here are some of the design elements I envision for the MLSL…. MLSL should try to fit into its environment, at least a little bit In C#, method names normally start with a capital letter. In Java, method names normally start with a lower-case letter. The MLSL should accommodate such minor style differences. So MLSL interfaces in C# should start with a capital I, while MLSL interfaces in Java should not. However, variation among languages must be limited to things that a cross-language converter could handle automatically. A C#-to-Java converter could reasonably adjust class and method names, but it could not change anything important. This is not a lowest-common-denominator approach The design of the MLSL should generally keep in mind design patterns and limitations found in the majority of popular statically-typed languages; for instance, it should fit naturally into a language that has single inheritance and dynamic dispatch on the first parameter only. However, the MLSL should not be a truly lowest-common-denominator design; we must not allow ourselves to be limited by the least powerful target languages. Firstly, the MLSL should not be held back by unusual limitations of one or two languages. For example, most languages support anonymous inner functions or closures, so the MLSL should be designed to take advantage of these by providing features that work best with closures (such as filter/map/reduce functions corresponding to LINQ’s Where/Select/Aggregate). To deal with unusual languages that don’t support closures, special supplementary functionality could be added (e.g. common predicate functions for use with filter).. Secondly, the MLSL should not bother to target unusually limited languages at all, but only mainstream and new, powerful languages.: // C#-like pseudo-code, where "T??" represents an "optional" T value (string, string??) SplitAt(string s, char c) { int?? i = s.IndexOf(c); return i.HasValue ? (s.Slice(0, i.Value), s.Slice(i.Value + 1)) : (s, null); } Pair structure… //)); } or out parameters… public static UString SplitAt(this string s, char c, out UString result2) { int i = s.IndexOf(c); if (i == -1) { result2 = UString.Null; return s; else { result2 = s.USlice(i + 1); return s.USlice(0, i); } } Most likely we will standardize upon one of these techniques and use it everywhere. In Java, which supports neither out parameters nor value types, a heap object may be the only way to return two values, but perhaps we could consider allowing the caller to supply the heap object:); } } This allows the caller to reduce (but not eliminate) heap allocations by re-using the same “result” object across many calls. How a technique like this will relate to a cross-language converter is an open problem, though. In general, to design the MLSL, we should imagine a powerful and flexible language, an impure OO/functional hybrid language, design the MLSL for this powerful language, and then figure out how to adapt it to less powerful languages. If adaptation is impossible, then and only then will we redesign it for “weaker” languages. MLSL should be efficient in most languages Speed is always an important design consideration–not the #1 design goal, but it’s up there. We don’t want anyone to choose another library over the MLSL for performance reasons.. At times there will be a conflict between good design and efficiency. In that case we will just have to find the best compromise we can. TODO: find a good example of this and explain it. A bit of a quagmire is to be expected The SplitAt method above raises a couple of issues about string representations, especially in .NET. In my experience, code that analyzes slices. A slice. UString is a slice data type in Loyc.Essentials, which is a .NET library that you can think of as a precursor to the MLSL. The standard string type can be converted implicitly to UString thanks, while the reverse conversion requires an explicit cast. In Java the situation is rather different, because the standard String type is already String and StringBuilder types are not only not treatable as arrays, and not only typed separately from all .NET collection classes, but they don’t even implement the standard collection interfaces like IReadOnlyList<char> or IList<char>. Yuck.. Interoperability with the “native” standard library Code based on the MLSL should interoperate easily with code based on the standard library of the same language, and the MLSL data structures in a particular language should provide interfaces (or be convertible to interfaces) that are normal for data structures in that language. IEnumerable<T> and IList<T>, and it should be easy to wrap standard .NET collections into MLSL collections. Interfaces should be designed for both static and dynamic dispatch Interfaces should not be overly chatty, meaning they should not require many method calls to accomplish tasks. The “interface” of D input ranges is a good example of an overly chatty interface: // An input range provides the following methods: bool empty(); T front(); void popFront(); A loop that uses an input range r must call all three methods on every iteration: while (!r.empty) { DoSomethingWith(r.front()); r.popFront(); } If r is statically typed and the source code of r’s type is available to the compiler, then this is no problem. All three of the methods can potentially be inlined, so the code will be fast. But if r is an interface or abstract base class, this loop will require three dynamic dispatches and will probably be much slower than necessary. In an environment where dynamic dispatch is normal, interfaces should not be so “chatty”: they should not require so many calls. Here is a better interface, where T?? represents some “optional” type: T?? tryPopFront(); bool empty(); T front();(). Usually, classes in the MLSL should offer both static (direct) dispatch and dynamic (interface) dispatch, since each kind of dispatch has advantages that the other does not. MLSL Collections Data structures, algorithms and interfaces for collections are the keystone of any modern standard library.: - (R)VLists: These types act like growable arrays, but are actually linked lists of arrays (read more). I have written immutable FVList<T>and RVList<T>list types along with mutable forms FWList<T>and RWList<T>. Any mutable list can be converted to an immutable list in O(1) time and vice versa. - hash trees: I have written immutable Set<T>and Map<T>types along with mutable forms MSet<T>and MMap<T>. Any mutable set can be converted to an immutable set in O(1) time and vice versa. All four types are wrappers around the same internal implementation, InternalSet<T>. - all the node) are marked read-only, the data structure as a whole (or rather, the object that you, the user, interact with) can still be mutable. When you modify the data structure, any nodes that must be modified to satisfy the mutation request are duplicated. These data structures are closely related to “persistent” data structures. . For the sake of code clarity, I tend to believe that it is better to have separate types for the mutable and immutable forms. When you see Set<T> in code, you know it is immutable; when you see MSet<T> you know it is mutable. If you see AList<T>,. Multithreaded data structures are also important to have in an MLSL. In particular I’d like to have the Disruptor, a fast queue (more flexible than normal queues) designed for multithreaded environments. Geometry is not a GUI thing, Point (two integer coordinates) and PointF (two float coordinates) are defined in the System.Drawing namespace, implying they are part of the WinForms GDI+ toolkit. These types are underdeveloped, with, for example, no overloaded operators and no corresponding Vector or VectorF. Microsoft’s second GUI toolkit, WPF, defines a second Point type in the System.Windows (i.e. WPF) namespace, this one with double coordinates, overloaded operators and a corresponding Vector type.. Even some things that are mostly about GUIs, such as layout algorithms, box models, or a structure that holds font metrics, should be isolated from any particular GUI toolkit and standardized. Perhaps the MLSL itself should standardize a GUI toolkit, but even in that case, we should still make a clear distinction between the “generic” stuff that applies to all windowing systems from the functionality of a particular system. Let’s have lots of interfaces not need implementations in the MLSL, but it is important to standardize the interfaces to increase interoperability between code writen by different people related to a particular field. I will need expert volunteers to design some of these interfaces and provide written rationales for their design. Error handling. Closing thoughts Usually when I seek feedback about research topics like this, I get none. But hey, maybe this time it’ll be different. I think it’s obvious that someday a standard for cross-language interoperability must emerge. Let’s make it sooner rather than later. Here are some questions for you, the reader: - What are some of the best-designed interfaces you know of, regardless of programming language? - In your programming specialty, whether it’s DNA processing or ORMs, which libraries do you feel are the best designed? I’m not asking about libraries that are useful; I’m talking about design aesthetics, like elegance, power, extensibility and ease-of-use, all shaved by occam’s razor. - What “antipatterns” or poor design elements do you think should be avoided? - What programming language features do you think are important for facilitating good library design (e.g. type system features, code contracts, shorthand notations)? I’ve been meaning to look at the KDE Frameworks project as a possible starting point for the MLSL, but I haven’t got around to it yet. Can anyone point me to a primer on the Tier 1 components? A new programming language? P. - “Basic”: a subset of the language such that a compiler for it is relatively easy to implement, a language that is powerful for its size: a competitor to Lua. - “Universal”: a subset of the language designed to allow code written in it to be automatically converted to popular statically-typed languages like C#, Java, and C++. This version of the language could and should contain features that the target languages do not have,. - “Flagship”: a maximal version of the language using all standard modules, including features that some popular languages cannot efficiently support, e.g. pointers, fibers, multiple dispatch. To be clear, that’s not a list of modules, it’s a list of subsets. The language would consist of dozens of modules that can be put together in various ways; a “subset” refers to a specific collections of modules, configured in a specific way. The language will not have a single syntax either. There should be a simple “canonical” syntax, probably LES, but other parsers could be written that would allow the Loyc language to directly compile subsets of other languages such as C# or Julia. The type system is a very important piece that I haven’t really worked out yet. I’m looking into things like higher-kinded types, multiple kinds of type aliases, dependent types, and union and intersection types. So let me know if you’re interested in helping me design this language.
http://loyc.net/2014/design-elements-of-mlsl.html
CC-MAIN-2019-22
refinedweb
2,036
54.02
This action might not be possible to undo. Are you sure you want to continue? Scaling Storytime April 2007 Brad Fitzpatrick brad@danga.com danga.com / livejournal.com / sixapart.com This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License. To view a copy of this license, visit or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA. 1 This Talk’s Gracious Sponsor Buy your servers from them Much love They didn’t even ask for me to put this slide in. :) 2 The plan... Refer to previous presentations for more details... Questions anytime! Yell. Interrupt. Part N: − Part I: − − show where talk will end up Part II: − − What is LiveJournal? Quick history. LJ’s scaling history explain all our software, explain all the moving parts 3 net. LiveJournal Backend: Today (Roughly.) BIG-IP bigip1 bigip2 perlbal (httpd/proxy) mod_perl Global Database proxy1 proxy2 proxy3 web1 web2 web3 web4 ... webN Memcached master_a master_b slave1 slave2 ... slave5 djabberd proxy4 proxy5 mc1 mc2 mc3 mc4 ... mcN User DB Cluster 1 uc1a uc1b User DB Cluster 2 uc2a uc2b User DB Cluster 3 uc3a uc3b User DB Cluster N ucNa ucNb Job Queues (xN) jqNa jqNb djabberd djabberd gearmand Mogile Storage Nodes sto1 ... sto2 sto8 Mogile Trackers gearmand1 gearmandN tracker1 tracker3 “workers” MogileFS Database mog_a mog_b gearwrkN theschwkN slave1 slaveN 4 LiveJournal Overview college hobby project, Apr 1999 4-in-1: − blogging − forums − social-networking (“friends”) − aggregator: “friends page” + RSS/Atom 10M+ accounts Open Source! − server, − infrastructure, − original clients, − ... 5 Stuff we've built... memcached − distributed caching MogileFS − distributed filesystem Perlbal − HTTP load balancer, web server, swiss-army knife gearman − LB/HA/coalescing lowlatency function call “router” TheSchwartz − reliable, async job dispatch system djabberd − the super-extensible everything-is-a-plugin mod_perl/qpsmtpd of XMPP/Jabber servers ..... OpenID federated identity protocol 6 “Uh, why?” NIH? (Not Invented Here?) Are we reinventing the wheel? 7 Yes. We build wheels. − − ... when existing suck, ... or don’t exist. 8 Yes. We build wheels. − − ... when existing suck, ... or don’t exist. 8 Yes. We build wheels. − − ... when existing suck, ... or don’t exist. 8 Yes. We build wheels. − − ... when existing suck, ... or don’t exist. (yes, arguably tires. sshh..) 8 Part I Quick Scaling History 9 Quick Scaling History 1 server to hundreds... you can do all this with just 1 server! − − then you’re ready for tons of servers, without pain don’t repeat our scaling mistakes 10 Terminology Scaling: − − Fast still matters, − − NOT: “How fast?” But: “When you add twice as many servers, are you twice as fast (or have twice the capacity)?” 2x faster: 50 servers instead of 100... but that’s not what scaling is. that’s some good money 11 Terminology “Cluster” − − − varying definitions... basically: making a bunch of computers work together for some purpose what purpose? Load Balancing? High Availability? Venn Diagram time! − load balancing (LB), high availablility (HA) I love Venn Diagrams 12 LB vs. HA Load Balancing High Availability 13 LB vs. HA Load Balancing round-robin DNS, data partitioning, .... High Availability LVS heartbeat, http cold/warm/hot spare, reverse proxy, ... wackamole, ... 14 Favorite Venn Diagram Times When I’m Truly Happy Times When I’m Wearing Pants 15 My Hiring Venn Diagram Talk Think Do (need 2 out of 3! and right mix of 2 out of 3 in a team...) 16 enough Venn Diagrams! 17 One Server Simple: mysql apache 18 Two Servers apache mysql 19 Two Servers - Problems Two single points of failure! No hot or cold spares Site gets slow again. − − CPU-bound on web node need more web nodes... 20 Four Servers 3 webs, 1 db Now we need to load-balance! − LVS, mod_backhand, whackamole, BIG-IP, Alteon, pound, Perlbal, etc, etc.. ... 21 Four Servers - Problems Now I/O bound... ... how to use another database? − 22 introducing MySQL replication Five Servers We buy a new DB MySQL replication Writes to DB (master) Reads from both 23 More Servers Chaos! 24 net. Where we're at.... BIG-IP bigip1 bigip2 mod_proxy mod_perl proxy1 proxy2 proxy3 web1 web2 web3 web4 ... web12 slave1 slave2 ... slave6 Global Database master 25 Problems with Architecture or, “This don't scale...” DB master is SPOF Adding slaves doesn't scale well... − only spreads reads, not writes! 500 reads/s 250 reads/s 200 write/s 250 reads/s 200 write/s 200 writes/s 26 Eventually... databases eventual only writing 3 reads/s 3 r/s 3 reads/s 3 r/s 3 reads/s 3 r/s 3 reads/s 3 r/s 3 reads/s 3 r/s 3 reads/s 3 r/s 3 reads/s 3 r/s 400 400 write/s write/s 400 400 write/s write/s 400 400 write/s write/s 400 400 write/s write/s 400 400 write/s write/s 400 400 write/s write/s 400 400 write/s write/s 27 Spreading Writes Our database machines already did RAID We did backups So why put user data on 6+ slave machines? (~12+ disks) − − overkill redundancy wasting time writing everywhere! 28 Partition your data! Spread your databases out, into “roles” − roles that you never need to join between different users or accept you'll have to join in app Each user assigned to a cluster number Each cluster has multiple machines − writes self-contained in cluster (writing to 2-3 machines, not 6) 29 User Clusters 30 User Clusters SELECT userid, clusterid FROM user WHERE user='bob' 30 User Clusters SELECT userid, clusterid FROM user WHERE user='bob' userid: 839 clusterid: 2 30 User Clusters SELECT userid, clusterid FROM user WHERE user='bob' SELECT .... FROM ... WHERE userid=839 ... userid: 839 clusterid: 2 30 User Clusters SELECT userid, clusterid FROM user WHERE user='bob' SELECT .... FROM ... WHERE userid=839 ... userid: 839 clusterid: 2 OMG i like totally hate my parents they just dont understand me and i h8 the world omg lol rofl *! :^^^; add me as a friend!!! 30 Details per-user numberspaces − don't use AUTO_INCREMENT − PRIMARY KEY (user_id, thing_id) − so: Can move/upgrade users 1-at-a-time: − per-user “readonly” flag − per-user “schema_ver” property − user-moving harness job server that coordinates, distributed longlived user-mover clients who ask for tasks − balancing disk I/O, disk space 31 Shared Storage (SAN, SCSI, DRBD...) Turn pair of InnoDB machines into a cluster − looks like 1 box to outside world. floating IP. One machine at a time mounting fs, running MySQL Heartbeat to move IP, {un,}mount filesystem, {stop,start} mysql filesystem repairs, innodb repairs, don’t lose any committed transactions. No special schema considerations MySQL 4.1 w/ binlog sync/flush options − good − The cluster can be a master or slave as well 32 Shared Storage: DRBD Linux block device driver − “Network RAID 1” − Shared storage without sharing! − sits atop another block device − syncs w/ another machine's block device cross-over gigabit cable ideal. network is faster than random writes on your disks. InnoDB on DRBD: HA MySQL! − can hang slaves off HA pair, − and/or, − HA pair can be slave of a master floater ip mysql ext3 drbd sda mysql ext3 drbd sda 33 MySQL Clustering Options: Pros & Cons No magic bullet... − − − − − Master/Slave Master/Master DRBD doesn’t scale with writes special schemas only HA, not LB special-purpose MySQL Cluster .... lots of options! − :) − :( 34 Part II Our Software 35 Caching caching's key to performance − store result of a computation or I/O for quicker future access (classic space/time trade-off) Where to cache? − mod_perl/php internal caching memory waste (address space per apache child) − shared memory limited to single machine, same with Java/C#/ Mono − MySQL query cache flushed per update, small max size − HEAP tables fixed length rows, small max size 36 memcached our Open Source, distributed caching system run instances wherever free memory two-level hash − client hashes to server, − server has internal hash table no “master node” protocol simple, XML-free − perl, java, php, python, ruby, ... popular. fast. scales. 37 Perlbal 38 Web Load Balancing BIG-IP, Alteon, Juniper, Foundry − good for L4 or minimal L7 − not tricky / fun enough. :-) Tried a dozen reverse proxies − none did what we wanted or were fast enough Wrote Perlbal − fast, smart, manageable HTTP web server / reverse proxy / LB − can do internal redirects and dozen other tricks 39 Perlbal Perl single threaded, async event-based − uses epoll, kqueue, etc. console / HTTP remote management − live config changes handles dead nodes, smart balancing multiple modes − static webserver − reverse proxy − plug-ins (Javascript message bus.....) plug-ins − GIF/PNG altering, .... 40 Perlbal: Persistent Connections perlbal to backends (mod_perls) − know exactly when a connection is ready for a new request clients persistent; not tied to a specific backend connection no complex load balancing logic: just use whatever's free. beats managing “weighted round robin” hell. 41 Perlbal: can verify new connections #include <sys/socket.h> int listen(int sockfd, int backlog); connects to backends often fast, but... send OPTIONs request to see if apache is there − − are you talking to the kernel’s listen queue? or apache? (did apache accept() yet?) Huge improvement to user-visible latency! Apache can reply to OPTIONS request quickly, then Perlbal knows that conn is bound to an apache process, not waiting in a kernel queue 42 Perlbal: multiple queues high, normal, low priority queues paid users -> high queue bots/spiders/suspect traffic -> low queue 43 Perlbal: cooperative large file serving large file serving w/ mod_perl bad... − mod_perl has better things to do than spoon-feed clients bytes 44 Perlbal: cooperative large file serving internal redirects − mod_perl can pass off serving a big file to Perlbal − − client sees no HTTP redirect “Friends-only” images either from disk, or from other URL(s) one, clean URL mod_perl does auth, and is done. perlbal serves. 45 Internal redirect picture 46 MogileFS 47 oMgFileS 48 MogileFS our distributed file system open source userspace hardly unique − − based all around HTTP (NFS support now removed) production-quality − − Google GFS Nutch Distributed File System (NDFS) lot of users lot of big installs 49 MogileFS: Why alternatives at time were either: − closed, non-existent, expensive, in development, complicated, ... − scary/impossible when it came to data recovery new/uncommon/ unstudied on-disk formats because it was easy − initial version = 1 weekend! :) − current version = many, many weekends :) 50 MogileFS: Main Ideas files belong to classes, which dictate: − replication policy, min replicas, ... tracks what disks files are on − set disk's state (up, temp_down, dead) and host keep replicas on devices on different hosts − (default class policy) − No RAID! multiple tracker databases − all share same database cluster (MySQL, etc..) big, cheap disks − dumb storage nodes w/ 12, 16 disks, no RAID − 51 MogileFS components clients mogilefsd (does all real work) database(s) (MySQL, .... abstract) storage nodes 52 MogileFS: Clients tiny text-based protocol Libraries available for: − Perl tied filehandles MogileFS::Client − − − − − − clients don't do database access Java PHP Python? porting to $LANG is be trivial future: no custom protocol. only HTTP my $fh = $mogc->new_file(“key”, [[$class], ...]) 53 MogileFS: Tracker (mogilefsd) The Meat event-based message bus load balances client requests, world info process manager − heartbeats/watchdog, respawner, ... interfaces client protocol w/ db(s), etc Child processes: − − − − ~30x client interface (“query” process) ~5x replicate ~2x delete ~1x fsck, reap, monitor, ..., ... 54 Trackers' Database(s) Abstract as of Mogile 2.x − MySQL − SQLite (joke/demo) − Pg/Oracle coming soon? − Also future: wrapper driver, partitioning any above − small metadata in one driver (MySQL Cluster?), − large tables partitioned over 2-node HA pairs Recommend config: − 2xMySQL InnoDB on DRBD − 2 slaves underneath HA VIP 1 for backups read-only slave for during master failover window 55 MogileFS storage nodes (mogstored) HTTP transport − GET − PUT − DELETE mogstored listens on 2 ports... HTTP. --server={perlbal,lighttpd,...} configs/manages your webserver of choice. perlbal is default. some people like apache, etc − management/status: iostat interface, AIO control, multi-stat() (for faster fsck) files on filesystem, not DB − sendfile()! future: splice() − filesystem can be any filesystem 56 Large file GET request 57 Auth: complex, but quick Large file GET request 57 Spoonfeeding: slow, but eventbased Auth: complex, but quick Large file GET request 57 And the reverse... Now Perlbal can buffer uploads as well.. − Problems: LifeBlog uploading − cellphones are slow LiveJournal/Friendster photo uploads − cable/DSL uploads still slow − decide to buffer to “disk” (tmpfs, likely) on any of: rate, size, time blast at backend, only when full request is in 58 Gearman 59 manaGer 60 Manager dispatches work, but doesn't do anything useful itself. :) 61 Gearman system to load balance function calls... scatter/gather bunch of calls in parallel, different languages, db connection pooling, spread CPU usage around your network, keep heavy libraries out of caller code, ... ... 62 Gearman Pieces gearmand − the function call router − event-loop (epoll, kqueue, etc) workers. − Gearman::Worker – perl − register/heartbeat/grab jobs clients − Gearman::Client[::Async] -- perl − also start of Ruby client recently − submit jobs to gearmand − opaque (to server) “funcname” string − optional opaque (to server) “args” string − opt coallescing key 63 Gearman Picture 64 Gearman Picture gearmand gearmand gearmand 64 Gearman Picture gearmand gearmand gearmand Worker Worker 64 Gearman Picture gearmand gearmand gearmand can_do(“funcA”) can_do(“funcA”) can_do(“funcB”) Worker Worker 64 Gearman Picture gearmand gearmand gearmand can_do(“funcA”) can_do(“funcA”) can_do(“funcB”) Client Worker Worker 64 Gearman Picture gearmand gearmand gearmand call(“funcA”) can_do(“funcA”) can_do(“funcA”) can_do(“funcB”) Client Worker Worker 64 Gearman Picture gearmand gearmand gearmand call(“funcA”) can_do(“funcA”) can_do(“funcA”) can_do(“funcB”) Client Client Worker Worker 64 Gearman Picture gearmand gearmand gearmand call(“funcA”) call(“funcB”) Client Client can_do(“funcA”) can_do(“funcA”) can_do(“funcB”) Worker Worker 64 Gearman Protocol efficient binary protocol No XML! but also line-based text protocol for admin commands − telnet to gearmand and get status − useful for Nagios plugins, etc 65 Gearman Uses Image::Magick outside of your mod_perls! DBI connection pooling (DBD::Gofer + Gearman) reducing load, improving visibility “services” − can all be in different languages, too! 66 Gearman Uses, cont.. running code in parallel − − running blocking code from event loops spreading CPU from ev loop daemons calling between different languages, ... DBI from POE/Danga::Socket apps query ten databases at once 67 Gearman Misc Guarantees: − none! hah! :) please wait for your results. if client goes away, no promises − all retries on failures are done by client but server will notify client(s) if working worker goes away. No policy/conventions in gearmand − all policy/meaning between clients <-> workers ... 68 Sick Gearman Demo Don’t actually use it like this... but: use strict; use DMap qw(dmap); DMap->set_job_servers("sammy", "papag"); my @foo = dmap { "$_ = " . `hostname` } (1..10); print "dmap says:\n @foo"; $ ./dmap.pl dmap says: 1 = sammy 2 = papag 3 = sammy 4 = papag 5 = sammy 6 = papag 7 = sammy 8 = papag 9 = sammy 10 = papag 69 Gearman Summary Gearman is sexy. − Check it out! − especially the coalescing it's kinda our little unadvertised secret oh crap, did I leak the secret? 70 TheSchwartz 71 TheSchwartz Like gearman: − − − − job queuing system opaque function name opaque “args” blob clients are either: But not like gearman: − − − submitting jobs workers Reliable job queueing system not low latency currently library, not network service fire & forget (as opposed to gearman, where you wait for result) 72 TheSchwartz Primitives insert job “grab” job (atomic grab) − mark job done temp fail job for future − − for 'n' seconds. replace job with 1+ other jobs ... atomic. optional notes, rescheduling details.. 73 TheSchwartz backing store: − − a database uses Data::ObjectDriver MySQL, Postgres, SQLite, .... but HA: you tell it @dbs, and it finds one to insert job into − likewise, workers foreach (@dbs) to do work 74 TheSchwartz uses outgoing email (SMTP client) − millions of emails per day − TheSchwartz::Worker::SendEmail − Email::Send::TheSchwartz LJ notifications − ESN: event, subscription, notification one event (new post, etc) -> thousands of emails, SMSes, XMPP messages, etc... pinging external services atomstream injection ..... dozens of users shared farm for TypePad, Vox, LJ 75 gearmand + TheSchwartz gearmand: not reliable, low-latency, no disks TheSchwartz: latency, reliable, disks In TypePad: − TheSchwartz, with gearman to fire off TheSchwartz workers. disks, but low-latency future: no disks, SSD/Flash, MySQL Cluster 76 djabberd 77 djabberd Our Jabber/XMPP server S2S: works with GoogleTalk, etc perl, event-based (epoll, etc) done 300,000+ conns tiny per-conn memory overhead − powers our “LJ Talk” service release XML parser state if possible 78 djabberd hooks everything is a hook − − − − − − − not just auth! like, everything. auth, roster, vcard info (avatars), presence, delivery, inter-node cluster delivery, − ala mod_perl, qpsmtpd, etc. hooks phases can take as long as they want before they answer, or decline to next phase in hook chain... we use Gearman::Client::Async async hooks − − 79 Thank you! Questions to: brad@danga.com Software: Gracious sponsor of this talk 80 Bonus Slides if extra time 81 Data Integrity Databases depend on fsync() − fsync() almost never works work − but databases can't send raw SCSI/ATA commands to flush controller caches, etc Linux, FS' (lack of) barriers, raid cards, controllers, disks, .... disk-checker.pl Solution: test! & fix − client/server spew writes/fsyncs, record intentions on alive machine, yank power, checks. 82 Persistent Connection Woes connections == threads == memory − My pet peeve: max threads − − want connection/thread distinction in MySQL! w/ max-runnable-threads tunable DBD::Gofer + Gearman Ask limit max memory/concurrency Data::ObjectDriver + Gearman 83 This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/document/187293/Linux-Fest
CC-MAIN-2016-50
refinedweb
2,943
62.58
Creating Smooth Button Animation in FlashBy Blue_Chi | Flash 8 | ActionScript | Intermediate This tutorial will teach you how to create a menu with smooth button animation using the ActionScript Tween Class. Using the technique illustrated in this tutorial will make you able to create smooth hover effect for buttons that does not jump or skip when the mouse is moved too quickly over the buttons. This is an intermediate level tutorial which assumes knowlege on how to use the Tween Class. All of our buttons above are made up of two layers, one of the button graphic coloured, and the other of the same image in black and white. When the mouse rolls over each graphic, the black and white image fades out to unveil the coloured version, and when it rolls out of the button the black and white image fades back in to cover up the coloured version again. Read on to learn how to use the Tween Class to control this animation process. Starting Off We prepared all the button graphics required for this tutorial so that we concentrate on creating the effect itself. Download this file and extract all the images in it somewhere accessible. Once that is done, open up Flash and create a new ActionScript 1/2 flash movie, set the frame-rate to 30 fps. and the background to white. The dimensions do not really matter as long as they are not too small. The next step is to import all the images extracted from the zip file you downloaded earlier into your Flash movie. Go through File>Import>Import to Library..., browser to file where your files were extracted, select them all and import them in one go. Open up your library now (Ctrl+L) to view what you have got now, you should have eight images and eight graphic symbols containing a copy of each of these images. We do not need those extra graphic instances, so you can delete them. You should end up with eight images only. Creating the Buttons Creating the button symbols is a simple but a repetitive process. While the library is still opened, drag an instance of the image color1.png onto the stage, select it, and then press F8 to convert it to a Symbol, select Movie Clip and NOT BUTTON, name it Button 1, and click OK. You should have your button on the stage now as a Movie Clip symbol. Access the Properties Inspector and set the Instance Name of our first button to my1_btn. We will insert the black and white copy of the graphic inside our movie clip. Double-click your button to edit its own timeline. Once in this new timeline, add a new layer above the existing one, and then open up the Library and drag into it an instance of b&w1.png right on top of the coloured version of the same image (You might need to use the align panel (Ctrl+K) to properly do that). While the black and white image is still selected, press F8 to convert it to a Movie Clip symbol, name it b&w1 and click OK. Access the Properties Inspector now and set the Instance Name of this movie clip to cover_mc. OK, that should do it, our first button is now ready. Go back to the main timeline by clicking twice in any empty spot on the stage. Creating the Other Buttons You will have to repeat the process used above to create the rest of the images, the only change that you would have to make is the instance name of the button, it should be my2_btn, my3_btn, and my4_btn accordingly. The instance name of cover_mc should be the same for all of these buttons. Here is the summary of what you have to do: - Create an Movie Clip symbol of the coloured button graphic. - Assign the appropriate instance name to it. - Double click on it to edit its own time line. - Add a new layer. - Insert the black and white version of the button graphic in this new layer. - Assign the instance name conver_mc to it. Once done you should have four buttons on the stage, you can align them in order above each other this way. Scripting the Buttons On the main time line of the Flash movie, right-click the only frame you have and select Actions to open up the Actions Panel. Copy and paste the code below and then test your movie to see your effect working, explanation will follow: import mx.transitions.easing.*; for (i=1; i<=4; i++) { var current_btn = this["my"+i+"_btn"]; current_btn.onRollOver = function() { var currentAlpha = this.cover_mc._alpha; var myHoriTween:Tween = new Tween(this.cover_mc, "_alpha", Strong.easeOut, currentAlpha, 0, 0.5, true); }; current_btn.onRollOut = function() { var currentAlpha = this.cover_mc._alpha; var myHoriTween:Tween = new Tween(this.cover_mc, "_alpha", Regular.easeIn, currentAlpha, 100, 0.5, true); }; } The first two lines merely import the assets of the Class Tween required to make this effect work. Review the Tween Class Tutorial to learn more on how to use it. import mx.transitions.easing.*; The rest of the code is a loop that attach the same code to the four buttons, the counts from 1 to 4, which is the total number of buttons that we have, if you would like to change the number of buttons you have in your movie you will have to change the number here. Through each iteration through the loop we generate a referece to the current button we plan on attaching the code to, the square bracket [] operator is required to create references to an object. The next part is the real code that creates this effect. An instance of the Tween Class executed through an .onRollOver event handler property attached to the button. Right before we run the Tween we record the current value of the _alpha property of the cover_mc to ensure that when the Tween starts it starts from that point and not from a pre-specified value such as 100 or zero so that the animation does not 'jump'. It is worth noting that the Tween object is the cover_mc and not the actual button calling the script. var currentAlpha = this.cover_mc._alpha; var myHoriTween:Tween = new Tween(this.cover_mc, "_alpha", Strong.easeOut, currentAlpha, 0, 0.5, true); }; The last section of the code sets the .onRollOut event handler property of the button, it looks almost identical to the earlier segment, except that the Tween's ending point here is 100 so that the cover_mc object is fully opaque. var currentAlpha = this.cover_mc._alpha; var myHoriTween:Tween = new Tween(this.cover_mc, "_alpha", Regular.easeIn, currentAlpha, 100, 0.5, true); }; } The Buttons Actual Command To make your buttons do things, as in go to another section or open up a new page, you will have to set these commands individually for each button using the .onRelease event handler property. The code below is a sample code you can use while testing your movie, this code goes below or above the code we created earlier: trace ("Home"); } my2_btn.onRelease = function(){ trace ("About"); } my3_btn.onRelease = function(){ trace ("Work"); } my4_btn.onRelease = function(){ trace ("Contact"); } This concludes our tutorial, you can download the end source file in Flash 8 format here. Feel free to post any questions you have at the Oman3D Forum. - End of Tutorial.
http://www.republicofcode.com/tutorials/flash/smoothbuttonanimation/
crawl-002
refinedweb
1,232
73.07
object in Kubernetes. As mentioned it is a group of containers. These mean containers share pod resources. Even though they share pod resources they cannot access each other’s processes as they are separated by namespaces. Containers on the same pod can access each other on localhost. This is because they share the same network namespace. They can also share storage if mentioned in their specifications. You can consider pods as a host where you can deploy multiple containers and they can talk to each other. Why are pods needed? You can use a pod to run an instance of your application. You generally use replicaset to create a set of the pod to run your application. Replicaset can control how many pods of your specification will run and it can make sure they are running. If a pod dies, replicaset will launch another one to satisfy the threshold. There can also be an init container that comes up before other containers and does some admin tasks. You can read about it below What are the different stages in pod’s lifecycle? Pending The pod is created in the cluster but is not launched completely yet. It may be due to a resource issue that is not scheduled, maybe due to one of the containers not coming up due to some health check or maybe downloading the image of the container. Running In this state, the pod is running and is ready to take traffic or do its tasks. In this state, the containers are created and one of them is running properly. Succeded All the containers in the pod have completed their tasks and are terminated successfully. Failed One of the containers is terminated due to failure. Failure means non 0 exit code. Unknown As mentioned the state is not known. There are other states that you can see but they are not actually pod’s lifecycle states. The states can be CRASHLOOPBACKOFF, IMAGEPULLBACKOFF, etc. Below is the spec that you can use to create a pod in Kubernetes. apiVersion: v1 kind: Pod metadata: name: example spec: containers: - name: busybox image: busybox:1.25 volumeMounts: - name: quobytevolume mountPath: /persistent command: - /bin/sh - sleep 30 volumes: - name: quobytevolume quobyte: registry: ignored:7861 volume: testVolume readOnly: false user: root group: root In this spec. You can see different fields that you can use to launch the pod. In this pod, only one container is launched with an image busybox, and a volume is attached at /persistent path. When you submit this YAML a pod will be launched with the above specifications. I am not going deep into each pod field. You can read the below documentation for reading more about these. Next: You can read about what exactly are deployments. This was very basic of what is pod we can read more about how scheduled happens etc in our next articles. If you like the article please share and subscribe.
https://www.learnsteps.com/what-exactly-is-a-pod-basics-on-kubernetes/
CC-MAIN-2022-27
refinedweb
489
74.79
I’ve updated the UK station data on my site to include the usage data for 2016-2017 Sunday, December 03, 2017 Tuesday, November 28, 2017 Land Registry house price data October 2017 I’ve uploaded the latest house price data from the Land Registry to my website. Prices continue on their slow march upwards Sunday, November 26, 2017 GB postcode data for November 2017 I’ve uploaded the latest postcode data from the ONS to my website. I’ve also added the Index of Multiple Deprivation field to the CSV downloads. Monday, October 30, 2017 Property sales for September 2017 I’ve uploaded the latest property sales data from the Land Registry to my website. Prices continue to creep slowly upwards… Saturday, September 30, 2017 Property data for August 2017 I’ve uploaded the latest property data for England and Wales from the Land Registry to my website. Prices continue to rise at a slow pace Friday, September 22, 2017 Convert a list to a comma string It’s nothing special, but I’ve found this useful in the past and probably will again so here for my memory bank is a function that converts a list to a comma string using System.Collections.Generic; using System.Text; namespace Utils { public static class ListHelper { public static string ToCommaString<T>(this IEnumerable<T> list, string separator = ", ") { var builder = new StringBuilder(); foreach (var t in list) { if (builder.Length > 0) builder.Append(separator); builder.Append(t); } return builder.ToString(); } } } Wednesday, August 30, 2017 House price data for July 2017 I’ve uploaded the latest Land Registry house sales data to my website. Annual price rises seem to have settled at about 1% Saturday, August 26, 2017 GB postcode data for August 2017 I’ve uploaded the latest batch of postcode data from the ONS to my website. It all appears in order, but let me know if you spot any anomalies Friday, July 28, 2017 Land Registry sales data for June 2017 I’ve uploaded the latest house price data for England and Wales to my website. House price inflation is now below 1%, but Hull is doing well from being this year’s City of Culture Wednesday, June 28, 2017 Land Registry data for May 2017 I’ve uploaded the latest Land Registry data for England and Wales to my website. My earlier predictions of house price growth going negative appear to be incorrect. Prices continue to go up but at an ever decreasing pace.… Wednesday, May 31, 2017 Property data for England and Wales April 2017 I’ve uploaded the April 2017 Land Registry data to my website. The annual change is still positive and although it’s heading towards zero, the progress is slowing. Tuesday, May 30, 2017 GB postcode data May 2017 I’ve updated my postcode pages with the latest ONS postcode data. Any problems, let me know Thursday, May 04, 2017 Land Registry house price data for March 2017 Another month passes by and we have a new batch of house price data from the Land Registry. The annual change remains positive but continues to creep downwards. In some areas, the annual change has gone negative, you can see more detailed figures here. Tuesday, March 28, 2017 Land Registry house price data February 2017 The latest Land Registry sales data for England and Wales has been released and my server is currently churning through the data. The top level annual change remains positive but continues to trend downwards. Thursday, March 02, 2017 Postcode data for February 2017 The ONS has released its latest postcode data, which is now on my site. I’ve run my usual sanity checks and it all seems to be in order, but let me know if you spot any anomalies. Tuesday, February 28, 2017 Land Registry data for January 2017 Another month has flown by and it’s time for a house price update from the Land Registry. Sale volumes remain stable and prices continue to rise at an ever slower rate. It looks like the annual change will turn negative in 5 months if things continue in the same way. Tuesday, January 31, 2017 Land Registry house price data December 2016 I’ve uploaded the Land Registry house price data for December 2016 to my website. The annual change is still positive but continues to head downwards. It now looks like it may turn negative in 6 months time.
https://doogalbellend.blogspot.com/2017/
CC-MAIN-2018-47
refinedweb
740
57.5
Introduction to Stream Control Transmission Protocol Listing 3. read_sctp_msg.c #include <stdio.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <stdlib.h> #include <netinet/sctp.h> /* call by nread = read_sctp_msg(sockfd, &msg) */ int read_sctp_msg(int sockfd, uint8_t **p_msg) { int rcv_buf_size; int rcv_buf_size_len = sizeof(rcv_buf_size); uint8_t *buf; struct sockaddr_in peeraddr; int peer_len = sizeof(peeraddr); struct sctp_sndrcvinfo sri; int total_read = 0; *p_msg = NULL; /* default fail value */ if (getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &rcv_buf_size, &rcv_buf_size_len) == -1) { return -1; } if ((buf = malloc(rcv_buf_size)) == NULL) { return -1; } while (1) { int nread; int flags; nread = sctp_recvmsg(sockfd, buf+total_read,rcv_buf_size, (struct sockaddr *) &peeraddr,&peer_len, &sri, &flags); if (nread < 0) { return nread; } total_read += nread; if (flags & MSG_EOR) { /* trim the buf and return msg */ printf("Trimming buf to %d\n", total_read); *p_msg = realloc(buf, total_read); return total_read; } buf = realloc(buf, total_read + rcv_buf_size); } /* error to get here? */ free(buf); return -1; } SCTP has full support out of the box for IPv6 as well as IPv4. You simply need to use IPv6 socket addresses instead of IPv4 socket addresses. If you create an IPv4 socket, SCTP will deal only with IPv4 addresses. But, if you create an IPv6 socket, SCTP will handle both IPv4 and IPv6 addresses. This article provides a brief introduction to the IETF Stream Control Transmission Protocol and explains how it can be used as a replacement for TCP. In future articles, we will examine additional features of SCTP and show their use. Resources The Principal Site for SCTP (contains pointers to the RFCs and Internet Drafts for SCTP): The Linux Kernel Project Home Page:. Stream Control Transmission Protocol (SCTP): A Reference Guide by Randall Stewart and Qiaobing Xie, Addison-Wesley. Unix Network Programming (volume 1, 3rd ed.) by W. Richard Stevens, et al., has several chapters on SCTP, although some of it is out of date. Jan Newmarch is Honorary Senior Research Fellow at Monash University. He has been using Linux since kernel 0.98. He has written four books and many papers and also has given courses on many technical topics, concentrating on network programming for the last six years. His Web site is jan.newmarch.name. - «! An excellent article concerning introduction to SCTP. Very good! /Best regards J
http://www.linuxjournal.com/article/9748?page=0,3
CC-MAIN-2015-22
refinedweb
369
57.67
On 09/09/2012 20:26, Alessio Treglia wrote: > On Sun, Sep 9, 2012 at 8:09 PM, Jean-Christophe Dubacq > <jcdubacq1@free.fr> wrote: >> gnome-maps ? > > I'd avoid gnome-maps, as upstream != GNOME > I am not a specialist, but it looks to me like p gnome-inm-forecast - the Spanish weather forecast applet for the GNOME Desktop whose homepage is is official gnome either. (I picked this one among others; I have no grudge against that package, nor against any package using the gnome- namespace whose upstream is not GNOME). Sincerly, -- Jean-Christophe Dubacq Attachment: signature.asc Description: OpenPGP digital signature
https://lists.debian.org/debian-devel/2012/09/msg00189.html
CC-MAIN-2016-07
refinedweb
103
75.5
How FRS Works Updated: March 28, 2003 Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 In this section - FRS Terminology - FRS Architecture - FRS Protocols - FRS Interfaces - FRS Physical Structures - FRS Processes and Interactions - Network Ports Used by FRS -. These sections provide an in-depth view of how FRS works in an optimal environment. An optimal environment for FRS is defined as follows: - DNS and Active Directory replication are working properly. - FRS Active Directory objects are properly configured and present on all domain controllers. - Kerberos authentication is working properly. - No firewalls prevent replication from working. - All servers that participate in replication (known as replica members) have adequate free space on NTFS file system volumes to support the FRS physical structures. - All replica members have hardware that is sized appropriately for the roles that the servers perform and the hardware is functioning properly. - No replica members have network, disk, RAM, or CPU bottlenecks. - All members are running Windows Server 2003 with Service Pack 1 (SP1) or the pre-SP1 release of Ntfrs.exe (as described in article 823230, “Issues That Are Fixed in the Pre-Service Pack 1 Release of Ntfrs.exe,” in the Microsoft Knowledge Base). - For SYSVOL replica sets, the number of domain controllers in a domain falls well below the recommended limit of 1,200 domain controllers. - No replica members are running disk defragmentation, backup, or antivirus software that is known to be incompatible with FRS. - The update sequence number (USN) journal and staging folder on all replica members are sized according to Microsoft’s recommendations. - The system clocks on all replica members are set to within 30 minutes of each other. - No files are blocking replication by being held open for an extended period of time on any replica member. - Changes to a particular file are only made on one replica member so that no conflict resolution is required. - Administrators do not attempt to copy files from one replica member to another in an attempt to help replication. FRS Terminology Before you review the FRS components and processes, it is helpful to understand FRS terminology. The following sections provide a brief introduction and illustration of the basic components of FRS as well as a glossary of terms. Basic FRS Components When you set up DFS replication, you choose a shared folder whose contents you want to replicate. For the SYSVOL shared folder on domain controllers, replication is established automatically during the domain controller promotion process. The set of data to be replicated to all servers is known as the replica tree, and the servers to receive this replica tree are known as replica members. The group of replica members that participates in the replication of a given replica tree is known as a replica set. Replica members are connected in a topology, which is a framework of connections between servers and defines the replication path. There are a number of common topologies, such as full mesh, ring, and hub and spoke. The following figure illustrates these terms. Basic FRS Components (Ring Topology) It is also helpful to understand the terminology used to describe the relationship between two members. This terminology is illustrated in the following figure. Replication Partner Terminology As illustrated in the figure, any two replica members with a direct connection between them are known as direct replication partners. In the figure, ServerA and ServerB are direct replication partners, as are ServerB and ServerC. Although changes that originate on ServerA are eventually replicated to ServerC, these two servers do not have a direct connection to each other, so they are known as transitive replication partners. When a changed file or folder is received on a server, either because the change originated there or because the server received the file or folder from another partner, that server becomes the upstream partner, also known as inbound partner, for any direct replication partner that has not yet received the change. For example, in the figure titled “Replication Partner Terminology,” a change originates on ServerA, and ServerA is considered the upstream or inbound partner for ServerB. ServerB is considered the downstream (also known as outbound) partner for ServerA. Glossary of FRS Terms The following terms are used to describe the components and processes of FRS. Active Directory object A distinct set of named attributes that represents a network resource. FRS uses Active Directory objects to represent servers that participate in replica sets and the topology that FRS uses to replicate data. Authoritative restore (also called D4) The process whereby one replica member is made the authoritative member, and all the data in its replica tree is replicated to all other replica members. This procedure should be used only in extreme circumstances under the supervision of your support provider or Microsoft Product Support Services. This process is informally referred to as a “D4” in reference to the registry setting used to invoke this process. Burflags Short for “backup restore flags,” an FRS-related registry entry used to change the behavior of the FRS service at startup. The values for this registry entry are D2 (for nonauthoritative restores) and D4 (for authoritative restores). Change order A message that contains information about a file or folder that has changed on a replica member. The change order is sent to the member’s outbound partners. If the outbound partners accept the change, the partners request the associated staging file. After installing the changed file in their individual replica trees, the partners propagate the change order to their outbound partners. Directed change order A change order that is directed to a single outbound partner and primarily produced when the partner is doing a vvjoin, such as during initial synchronization or a nonauthoritative restore (D2). File event time The time at which a change to a file or folder is made. This might not be the same as the file create or last-write time. For example, restoring a file from a backup tape preserves the file create and last-write times, but the file event time is the time when the actual file restoration was performed. File GUID An identifying property of a file or folder in a replica tree. FRS creates and manages file globally unique identifiers (GUIDs), which, along with the replication version number and event time, are stored in the file ID table in the FRS database. Each file and folder stores its file GUID as part of its NTFS attributes; therefore, corresponding files and folders across all replica set members have the same file GUID. File ID table A table in the FRS database that contains an entry with version and identity information for each file and folder in the replica tree. only incremented by the member that originated the file update. Other members that propagate the update do not change the version number. Filter A setting that excludes subfolders (and their contents) or files from replication. FRS debug logs Text files in the \systemroot\Debug folder that store FRS transaction and event detail. Inbound connection For a given replica member, a component of the NTFRS Member object in Active Directory that identifies inbound partners. An inbound connection exists for each inbound partner. Inbound log A table in the FRS database that stores pending change orders to be processed. As entries are processed, acknowledgments are sent to the inbound partners. Knowledge Consistency Checker (KCC) new domain controllers, domain controllers moved to and from sites, changing costs and schedules, and domain controllers that are temporarily unavailable. Link target In a DFS namespace, the mapping destination of a link, typically a shared folder on a server. FRS can be used to keep link targets (shared folders) synchronized on different servers. Local change order A change order that is created because of a change to a file or folder on the local server. The local server becomes the originator of the change order and constructs a staging file. MD5 checksum A cryptographically secure one-way hashing algorithm that is used by FRS to verify that a file on each replica member is identical. Morphed folder A folder that FRS creates when resolving a name conflict between two or more identically named directories that originate on different replica members.. Nonauthoritative restore (also called D2) The process whereby a given replica member is reinitialized by obtaining a replica tree from another replica member. This process is often used to resynchronize a replica member’s replica tree with one of its inbound partners, such as after a server failure. This process is informally referred to as a “D2” in reference to the registry setting used to invoke this process. Originator GUID A globally unique identifier (GUID) that is associated with each replica member. All change orders produced by a given replica member carry the replica member’s originator GUID, which is saved in the file ID table. Outbound connection For a given replica member, a component of the NTFRS Member object in Active Directory that identifies outbound partners. An outbound connection exists for each outbound partner. Outbound log A table in the FRS database that stores pending change orders to be sent to outbound partners. The changes can originate locally or come from an inbound partner. These change orders are eventually sent to all outbound replica partners. Parent GUID The globally unique identifier (GUID) of the parent folder that contains a particular file or folder in the replica tree. Preinstall folder A hidden subfolder under the root of the replica tree. When a newly created file or folder is replicated to a downstream partner, the file or folder is first created in the preinstall folder on the downstream partner. After the file or folder is completely replicated, it is renamed to its target location in the replica tree. This process is used so that partially constructed files are not visible in the replica tree. Reanimation change order A change order that FRS uses to reconcile a recently received change order against a previously deleted file or folder. Remote change order A change order received from an inbound (or upstream) partner that originated elsewhere in the replica set. Retry change order A change order that is in some state of completion but was blocked for some reason and must be retried later. Schedule The frequency at which data replicates. Staging folder A folder that acts as a queue for changed files and folders to be replicated to downstream partners. Staging file A file that FRS constructs in the staging folder as a result of a file or folder change. FRS constructs staging files by using backup application programming interfaces (APIs) and then replicates the staging files to downstream partners. Downstream partners use restore APIs to reconstruct the staging files in the preinstall folder before renaming the files into the replica tree. Staging files are compressed to reduce the network bandwidth used during replication. SYSVOL On a domain controller, a shared folder that stores a copy of the domain’s public files, including system policies and Group Policy settings, that are replicated to all other domain controllers in the domain by FRS. Update sequence number (USN) A monotonically increasing sequence number for each volume. The USN is incremented every time a modification is made to a file on the volume. USN journal On NTFS volumes, a persistent log that tracks all changes on the volume, including file creations, deletions, and changes. The USN journal has a configurable maximum size and is persistent across reboots and system crashes. FRS uses the USN journal to monitor changes made in the replica tree. USN journal wrap An error that occurs when large numbers of files change so quickly that the USN journal must discard the oldest changes (before FRS has a chance to detect the changes) to stay within the specified size limit. To recover from a journal wrap, you must perform a nonauthoritative restore on the server to synchronize its files with the files on the other replica members. Version vector A vector of volume sequence numbers (VSNs), with one entry per replica set member. All change orders carry the originator GUID of the originating member and the associated VSN. As each replica member receives the update, it tracks the VSN in a vector slot that is assigned to the originating member. This vector describes whether the replica tree is current with each member. The version vector is then used to filter updates from inbound partners that might. FRS Architecture The File Replication service (Ntfrs.exe) is the primary component of the FRS architecture. This service and its related components are illustrated in the following figure. FRS Architecture The following table describes the components related to the FRS architecture. Components of the FRS Architecture FRS Protocols FRS stays at application layer of Open Systems Interconnection (OSI) model, and uses RPC as transport medium for communication between replica members. FRS Interfaces Windows Server 2003 includes an FRS writer for the Volume Shadow Copy service. The FRS writer allows Volume Shadow Copy service-compatible backup programs, such as Windows Backup, to make point-in-time, consistent backups of the replica tree. Using a Volume Shadow Copy service-compatible backup program ensures that replica tree backups do not contain partial files (which can occur when a replica member receives an updated file and begins overwriting the previous version of the file in the replica tree with the updated version of the file). This process, known as the install process, creates a window during which the file is half-written. The window exists because FRS must move the updated file from the preinstall folder and overwrite the existing file in the replica tree. Before a Volume Shadow Copy service-compatible backup program takes a shadow copy of a replica tree, the program instructs FRS to stop requesting new work items from the install command server queue. After all currently active installations are complete, FRS enters a frozen state during which no installations occur. Shadow copies made while FRS is in a frozen state will not contain partial files. The install service thaws either when the backup program signals a thaw or at a predetermined time-out period. For more information about the Volume Shadow Copy service, see “Volume Shadow Copy Service Technical Reference.” FRS Physical Structures FRS requires a number of physical structures to be stored in Active Directory and on each replica member. These structures are illustrated in the following figure and are described in the sections that follow. Overview of FRS Physical Structures FRS Files and Folders on Replica Members Each replica member contains a number of files and folders used by FRS for SYSVOL and DFS replica sets. The following figure illustrates the files and folders that are stored on a DFS replica member. FRS Files and Folders on DFS Replica Members The following figure illustrates the files and folders stored on domain controllers. Note how the location of these folders differs from DFS replica members. FRS Files and Folders on Domain Controllers The following sections briefly describe each of these folders. Replica tree on DFS replica members The replica tree is the set of files and folders that are kept in sync on all replica members. The replica tree begins at the replica root, which is the shared folder where all the replicated data is stored. The replica tree can exist on any NTFS volume for DFS replica members, and FRS does not impose limits on the size of the replica tree. SYSVOL folder domain controllers SYSVOL is a folder that stores numerous FRS-related folders and a copy of the domain’s public files, including system policies and Group Policy settings that are replicated to all other domain controllers in the domain by FRS. The actual replica root begins at the \systemroot\SYSVOL\domain folder, but the folder that is actually shared is the \systemroot\SYSVOL\sysvol folder. These folders appear to contain the same content because SYSVOL uses junction points (also called reparse points). A junction point is a physical location on a hard disk that points to data that is located elsewhere on the hard disk or on another storage device. The following table lists the folders in SYSVOL that contain junction points and the locations to which these junction points resolve: SYSVOL Junction Points Staging folder The staging folder is a hidden folder that acts as a queue for changed files to be replicated to downstream partners. After the changes are made to a file and the file is closed, FRS creates the file in the staging folder by using backup application programming interfaces (APIs) and replicates the file according to schedule. Files in the staging folder are called staging files. Any further use of corresponding files in the replica tree does not prevent FRS from replicating the staging files to other members. In addition, if the file is replicated to multiple downstream partners or to members with slow data links, using staging files ensures that the underlying files in the replica tree can still be accessed. The default size of the staging folder is approximately 660 megabytes (MB), the minimum size is 10 MB, and the maximum size is 2 terabytes. For performance and disk space purposes, the staging folder can be located on a volume other than where the replica tree is stored. For more information about the staging folder, see “How the Staging Folder Works” later in this section. Preinstall folder The preinstall folder is a hidden system folder named DO_NOT_REMOVE_NtFrs_PreInstall_Directory. When a downstream partner begins receiving a new file (or folder), the file is installed in the preinstall folder on the downstream partner so that a partially constructed file is not added to the replica tree. After the downstream partner receives the entire file in the preinstall folder, the file is renamed to its target location in the replica tree. FRS Jet database FRS creates a Microsoft Jet database at \systemroot\ntfrs\jet\ to store FRS transactions. FRS does not impose size limits on the Jet database, but replication stops working if the volume where the database is stored runs out of disk space. For performance and disk space purposes, the Jet database can be moved to a volume other than where the replica tree is stored. For more information about the FRS Jet database, see “FRS Database Tables” later in this section. FRS debug logs FRS creates debug logs in the \systemroot\Debug folder. By default, the debug logs store FRS transaction and event detail in sequentially numbered files from Ntfrs_0001.log through Ntfrs_0005.log. The characteristics of the debug logs are determined by the values of several registry entries. These values allow you to set the number of log files used, the number of entries per log, the level of detail logged, and so forth. FRS creates another log file, Ntfrsapi.log, to track events that occur during the promotion and demotion of domain controllers. Information in the Ntfrsapi.log file includes which server was chosen as the parent for the initial replication (seeding) process and the creation of registry keys. Because errors during the promotion and demotion processes are also tracked in this file, Ntfrsapi.log is useful for troubleshooting problems where the SYSVOL and Netlogon shared folders are not shared correctly. For performance and disk space purposes, the debug logs can be moved to a volume other than the one on which the replica tree is stored. For more information about the FRS debug logs, see “How FRS Debug Logs Work” later in this section. Pre-existing data folder The pre-existing folder, named NtFrs_PreExisting___See EventLog, is an optional folder that is located under the replica root. If the pre-existing data folder is present on a replica member, FRS created it after one of the following events: - The server was added to a replica set but the server already had one or more files in the shared folder that became the replica tree. In this case, FRS moved that data into the pre-existing data folder and then replicated the replica tree from one of the upstream partners to the new member. - The replica member had a nonauthoritative restore (also called D2) performed on it by an administrator. This process is used to bring a replica member back up to date with its partners after problems such as assertions in the FRS service, corruption of the local FRS Jet database, journal wrap errors, and other replication failures. When you perform a nonauthoritative restore on a server, FRS moves the existing data in the replica tree to the pre-existing data folder and then receives the replica tree from one of the upstream partners. - The server was prestaged before it was added to the replica set. During the prestaging process, files in the replica tree are temporarily moved from the replica tree to the pre-existing data folder. For more information about the prestaging process, see “How Prestaging Works” later in this section. Only one pre-existing data folder can exist at a time. If one of the previously listed events occurs, causing the pre-existing data folder to be created, and then another one of the events occurs, the previous pre-existing data folder is deleted and replaced with another pre-existing data folder. FRS Database Tables The Jet database (Ntfrs.jdb) used by FRS has a set of five tables: - Connection record table - Version vector table - File ID table - Inbound change order table - Outbound change order table You can view the contents of three of these tables (the file ID table, inbound change order table, and outbound change order table) by using Ntfrsutl.exe (a Windows Support Tool), though doing so is typically only necessary for troubleshooting purposes. Windows Server 2003 provides several Windows Support Tools scripts, Topchk.cmd, Connstat.cmd, and Iologsum.cmd, that can present the output of Ntfrsutl in a more readable format. The following sections briefly describe the tables in Ntfrs.jdb. Connection record table The connection record table tracks the state of each inbound and outbound connection for the replica set and tracks the delivery state of each outbound change order. Version vector table The version vector table contains a vector of volume sequence numbers (VSNs), one entry per replica member. All change orders carry the originator globally unique identifier (GUID) of the originating member and the associated volume sequence number. As each replica set member receives the update, it tracks the VSN in a vector slot assigned to the originating member. This vector now describes how up-to-date this member’s replica tree is with respect to each member. The version vector is then used to filter out updates from inbound partners that might have already been applied. The version vector is also delivered to the inbound partner when two members join. When a new connection is created, the version vector is used to scan the file ID table for more recent updates not seen by the new outbound partner. File ID table The file ID table lists all files in the replica sets on the local server of which FRS is aware. Data stored in the file ID table includes the file name, file GUID, NTFS file ID, parent GUID, parent file ID, version number, event time, and originator GUID. You can view entries in the file ID table by using Ntfrsutl.exe with the idtable parameter. Inbound change order table (inbound log) The inbound change order table, referred to as the inbound log, stores pending change orders to be processed. As entries are processed, acknowledgments are sent to the inbound partners. State variables in the record track the progress of a change order and determine where to continue after a system crash or a retry of a failed operation. Data stored in the inbound log includes the change order’s GUID, file name, originator GUID, file GUID, version number, and event time. You can view change orders in the inbound log by using Ntfrsutl.exe with the inlog parameter. Outbound change order table (outbound log) The outbound change order table, referred to as the outbound log, stores pending change orders to be sent to outbound partners. By default, change orders remain in the outbound log for seven days, even if all replication partners have received the change. Also in the outbound log is the leading (next change) and trailing (last acknowledged) index for each partner. You can view change orders in the inbound log by using Ntfrsutl.exe with the outlog parameter. FRS Change Orders A change order is a message that contains information about a file or folder that has changed on a replica member. Change orders are stored in the inbound and outbound logs on each replica member and contain information such as the change order’s GUID, the originator GUID, the file GUID, the file name, event time, and file version number. You can view change orders in the inbound and outbound logs by using Ntfrsutl.exe with the inlog and outlog parameters. You can also use the Windows Support Tools script Iologsum.cmd to parse the Ntfrsutl output into a more readable format. FRS uses the following five types of change orders: Local change order A change order that is created because of a change to a file or folder on the local server. The local server becomes the originator of the change order and constructs a staging file. Remote change order A change order received from an upstream partner that originated elsewhere in the replica set. Retry change order A change order that is in some state of completion but was blocked for some reason and must be retried later. This is a change order property in that both local and remote change orders can become retry change orders. Directed change order A change order that is directed to a single outbound partner and primarily produced when the partner is performing an initial sync (vvjoin) or a nonauthoritative restore (D2). Reanimation change order A change order that FRS uses to reconcile a recently received change order against a previously deleted file or folder. FRS Registry Entries FRS registry entries are located in the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs. For a list of FRS registry entries, see “FRS Tools and Settings.” File Replication Service Event Log The File Replication service event log contains events logged by FRS. It is created when FRS is first enabled on a server, and the creation process involves setting registry keys under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\File Replication Service. The event log file name is NtFrs.Evt, which is located under systemroot\System32\Config. FRS Objects in Active Directory For FRS to function properly, certain critical Active Directory objects (as well as their attributes and parent containers) must exist in Active Directory. These objects, which define a replica set’s topology, schedule, and filters, are created in Active Directory when you use the Distributed File System snap-in to configure replication for DFS or when you promote a server to a domain controller using the Active Directory Installation Wizard (Dcpromo.exe). Active Directory replication is used to replicate these objects to all domain controllers in a domain, and missing or corrupted objects can cause FRS replication to fail. Hierarchy of FRS Objects in Active Directory for DFS Replica Sets The following figure illustrates the hierarchy of FRS-related Active Directory containers and objects for DFS replica sets. The figure also illustrates how several of these objects are linked together using reference attributes. For example, in the NTFRS Member object, there is an attribute called frsComputerReference that refers to the computer object for the server. FRS Containers and Objects in Active Directory for DFS Replica Sets The following sections briefly describe the Active Directory objects, containers, and their attributes for DFS replica sets. The class name of each object is shown in parentheses. NTFRS Settings object The NTFRS Settings object (class nTFRSSettings) is used as a container for the NTFRS Replica Set objects. The NTFRS Settings object can contain other NTFRS Settings objects; therefore, it provides a way to form a hierarchy to better organize the NTFRS Replica Set objects. NTFRS Replica Set object Every NTFRS Replica Set object (class nTFRSReplicaSet) represents a set of computers that replicate a specified folder tree and a common set of data among them. There is one NTFRS Replica Set object for every replica set. The NTFRS Replica Set object must be directly under an NTFRS Settings object. The most commonly used attributes on this object are fRSReplicaSetType (set to 3 for DFS replica sets), fRSFileFilter, fRSDirectoryFilter, and Schedule. If you set the Schedule attribute, it applies to all the NTDS Connection objects in the replica set that do not have a Schedule attribute. NTFRS Member object Every NTFRS Member object (class nTFRSMember) corresponds to a computer that is part of the replica set. The relationship between the NTFRS Member object and the computer object is indicated by the frsComputerReference attribute. The NTFRS Member object can contain one or more NTDS Connection objects that define the inbound partners that a member replicates from. NTDS Connection objects refer to other member objects in the same replica set object using the fromServer attribute. NTDS Connection object NTDS Connection objects (class nTDSConnection) define the inbound and the outbound partners of a replica member. NTDS Connection objects are located under the NTFRS Member object in the domain naming context for DFS replica sets. The NTDS Connection object is inbound to the NTFRS Member object that it is located under, and it is outbound from the NTFRS Member object that its fromServer attribute points to. Computer object Each computer object represents a computer in the domain. The relationship between the NTFRS Member object and the computer object is indicated by the frsComputerReference attribute. NTFRS Subscriptions container The NTFRS Subscriptions object (class nTFRSSubscriptions) is similar to the NTFRS Settings object in that it is primarily used as a container to group NTFRS Subscriber objects. The fRSWorkingPath attribute defines the location of the Ntfrs.jdb file, which is typically located in the %SystemRoot%\Ntfrs folder. NTFRS Subscriber object Every NTFRS Subscriber object (class nTFRSSubscriber) under a computer’s computer object corresponds to a replica set that the computer is a member of. The fRSMemberReference attribute of the NTFRS Subscriber object points to the NTFRS Member object of the replica set that it corresponds to. Every NTFRS Subscriber object also has both an fRSRootPath attribute that specifies the folder tree to replicate and an fRSStagingPath attribute that specifies the folder to store the staging files under. Hierarchy of FRS Objects in Active Directory for SYSVOL Replica Sets SYSVOL replica sets have a different hierarchy of objects in Active Directory than do DFS replica sets. This hierarchy is illustrated in the following figure. FRS Containers and Objects in Active Directory for SYSVOL Replica Sets The following sections describe any attributes and relationships that are specific to SYSVOL replica sets. In some cases, the objects for DFS and SYSVOL replica sets are identical. NTFRS Settings object Same as for DFS replica sets. NTFRS Replica Set object Same as for DFS replica sets except that only one NTFRS Replica Set object can be of the SYSVOL type in a domain. In this case, the fRSReplicaSetType attribute is set to 2. NTFRS Member object In SYSVOL replica sets, the serverReference attribute of the NTFRS Member object points to the NTDS Settings objects (class nTDSDSA) that contain the NTDS Connection objects that this member replicates from. NTDS Connection object For SYSVOL replica sets, FRS uses both manually generated connection objects and connection objects that are generated by Knowledge Consistency Checker (KCC) that are located in the NTDS Settings object in the configuration naming context. (You can use the Active Directory Sites and Services snap-in to view these connection objects.) These connection objects are also used during Active Directory replication. For SYSVOL replica sets, the NTDS Connection object is inbound to the NTFRS Member object that corresponds to the NTDS Settings object that the NTDS Connection object is located under. It is outbound from the NTFRS Member object that corresponds to the NTDS Settings object that its fromServer attribute points to. Computer object Same as for DFS replica sets. NTFRS Subscriptions container Same as for DFS replica sets. NTFRS Subscriber object Same as for DFS replica sets. FRS Processes and Interactions The following figure illustrates four categories of FRS processes and interactions described in this section. FRS Processes and Interactions The following sections describe the four categories of processes and interactions. How changes to topology, schedule, filters, and settings are detected FRS periodically polls Active Directory to obtain new or updated information about replica set topology, schedule, and file and folder filters. FRS also polls the registry to detect updated settings. For information about the Active Directory and registry polling process and the intervals at which polling occurs, see “How FRS Polling Works” later in this section. How new and existing partners communicate Replica members communicate with each other to determine whether new or changed files and folders need to be replicated from one member to another. When a downstream partner joins an upstream partner for the first time, the process is known as a version vector join (vvjoin). For more information, see “How the Vvjoin Process Works” later in this section. Routine communication between members is described in “How the Partner Join Process Works” later in this section. How replica sets are created and how members are added and removed The creation of new replica sets and the addition and removal of replica members involves many processes. For example, when a new replica member is added to a replica set, the FRS physical structures are created on the member and in Active Directory, the FRS service is started, registry keys are created, the member polls the registry and Active Directory, the new member vvjoins with an upstream partner, and so on. The following sections describe these processes and interactions: - What Happens When You Create a DFS Replica Set - “What Happens When SYSVOL Is Created During Domain Controller Promotion” - How Prestaging Works - What Happens When You Remove a Member from a Replica Set How changed files and folders are tracked and replicated A number of processes occur when a file or folder is changed, closed, and then replicated from one member to another. Change orders are generated and flow from one member to another, staging files are built in the staging folder, and file and folder filters prevent files and folders with specified names or extensions from replicating. Replication takes place according to a schedule, change orders are periodically removed from the outbound log, and details about these processes are written to the FRS debug logs. You can find information about these processes in the following sections: - Types of Changes That Trigger Replication - How Files and Folders Are Replicated - How the Staging Folder Works - How Replication Schedules Work - How File and Folder Filters Work - How the Outbound Log Cleanup Process Works - How FRS Debug Logs Work How FRS Polling Works FRS periodically polls Active Directory (using LDAP) to obtain new or updated information about replica set topology, schedules, and file and folder filters. (This information is stored in the Active Directory objects described in “FRS Objects in Active Directory” earlier in this section.) FRS also polls the registry to detect updated settings, such as staging directory size, USN journal size, debug log settings, and so forth. Polling takes place after FRS starts up, such as when the host computer or the service starts (when net start ntfrs is typed at the command line, for example) and during regular intervals after that. The following sections describe the polling process and the intervals at which polling occurs. How Active Directory Polling Works The Active Directory polling process is as follows. Note that if any of these steps fails, replication does not occur. Failure is typically caused by network connectivity issues or missing, damaged, or duplicate Active Directory objects. - FRS locates the computer object for the server it is running on and enumerates the NTFRS Subscriber objects. - For a given subscriber, FRS follows the fRSMemberReference attribute to the NTFRS Member object - Next, FRS reads the NTFRS Replica Set object (always directly above the NTFRS Member object) to determine whether it is SYSVOL or DFS replica set. After FRS determines the replica type (SYSVOL or DFS), FRS enumerates the connection objects. This process is different for SYSVOL and DFS. How FRS enumerates connection objects for DFS replica sets - FRS finds all the NTDS Connection objects that represent an inbound (or upstream) connection to this member. (All NTDS Connection objects that represent an inbound connection are directly under the NTFRS Member object.) - FRS finds all the NTDS Connection objects that represent an outbound (or downstream) connection to this member. (All NTDS Connection objects that represent an outbound connection have this member’s name in the fromServer attribute.) - FRS forms a list of NTFRS Member objects that have an inbound or outbound connection with this member. FRS enumerates connection objects for SYSVOL replica sets - FRS finds all the NTDS Connection objects that represent an inbound connection to this member. The serverReference attribute on the NTFRS Member object points to the NTDS Settings object. (All the NTDS Connection objects that represent an inbound connection are directly under the NTDS Settings object.) - FRS finds all NTDS Connection objects that represent an outbound connection to this member. (All NTDS Connection objects that represent an outbound connection have this member’s name in the fromServer attribute.) - FRS forms a list of member objects that have an inbound connection or outbound connection with this member. Forming the list of members is a two-step process for SYSVOL replica sets. First, FRS forms the list of NTDS Settings objects, and then FRS finds the NTFRS Member objects that point to each of these NTDS Settings objects. Registry Polling Works FRS polls Active Directory and the registry at the same interval. Some settings stored in Active Directory are also stored in the registry, but the Active Directory settings override the registry settings. Though some registry changes are detected and applied during polling, other changes are not applied until the FRS service is stopped and restarted. For more information about registry entries that require the service to be restarted, see Registry Reference in Tools and Settings Collection. Polling Intervals Each time a replica member is started or the FRS service is started, FRS polls the registry and the computer object in Active Directory in eight short intervals. If no changes in Active Directory are detected, FRS polls in long intervals until it detects configuration or subscriber list changes in Active Directory. The lengths of the short and long intervals are as follows: - For domain controllers, the default short and long polling intervals are five minutes each. (Domain controllers always use the short interval, regardless of the long interval setting.) - For member servers, the default short polling interval is five minutes and the long polling interval is 60 minutes. Note - These polling intervals are specified in the DS Polling Long Interval in Minutes and DS Polling Short Interval in Minutes registry values. To view the current interval and the length of the long and short polling interval, use the Ntfrsutl.execommand with the poll parameter. If there have been no FRS configuration changes in Active Directory after the completion of eight short polling intervals, FRS automatically begins polling according to the long polling interval. If one of the following events occurs, the polling cycle is reset: - A replica member is added or removed. - A connection is added or removed. - A schedule is changed. - A file or folder filter is changed. Unlike changes in Active Directory, registry changes do not affect the polling cycle. How the Vvjoin Process Works A version vector is an array that contains pairings of the member originator GUID of the originating member and the VSN of the change. The VSN is a monotonically increasing sequence number assigned to each change that originates on a given replica member. Although the VSN is similar to the USN, FRS does not store the actual USN of the change in the version vector because a replica set can be moved to a different volume, or the volume could be formatted and the replica tree restored from backup. In these cases, the USN could be reset. The following graphic illustrates a version vector on three servers, ServerA, ServerB, and ServerC. Example of Three Version Vectors Each entry in the version vector is considered the high water mark as to the most recent change order received from the corresponding originator. For example, the version vector on ServerA indicates that this member has received and processed all changes up to VSN 9 originating on ServerC, VSN 12 originating on ServerD, and VSN 32 originating on ServerE. ServerB’s version vector shows that its replica tree is not as up-to-date as the replica tree on ServerA. The new member, ServerF, has an empty version vector. A version vector join (vvjoin) is the process by which a downstream partner joins with an upstream partner for the first time. During this process, the two members compare version vectors to determine which files need to be replicated from the upstream partner to the downstream partner. Vvjoins can occur in a number of scenarios, which are explained using the following figure as an example. Sample Replica Members for Vvjoin Scenarios In this figure, ServerA and ServerB are members of a replica set. ServerA is upstream and ServerB is downstream for connection C. Using this figure as an example, a vvjoin can occur in the following scenarios: Scenario A Connection C is new, or connection C is deleted and a new connection is created in its place. Scenario B Connection C is disabled and later enabled. Scenario C ServerA polled a domain controller that has not seen the NTDS Connection object for connection C yet. At this point ServerA acts as though the connection is deleted. Later the NTDS Connection object for connection C replicates to this domain controller and ServerA reads it. The effect is similar to Scenario B. Scenario D ServerB is a newly added member of the replica set. Scenario E ServerA or ServerB performs a nonauthoritative restore (D2). Scenario F ServerA performs an authoritative restore (D4), and ServerB performs a nonauthoritative restore (D2). Scenario G ServerB is offline for longer than the Outlog Change History In Minutes registry entry, which is 10080 minutes (seven days) by default. Vvjoins take place when the schedule first opens between the two members in the previous scenarios. The type of vvjoins performed in these scenarios can be either full or optimized vvjoins. These vvjoins are explained in the following sections. When Optimized Vvjoins Occur In scenarios A, B, C, and D, FRS attempts to perform an optimized vvjoin instead of a full vvjoin. An optimized vvjoin occurs when the upstream partner’s outbound log contains all the changes that the downstream partner is missing. Optimized vvjoins are desirable in scenarios where a downstream partner (ServerB in the previous figure) needs to vvjoin and has almost all files in the replica tree already. For example, assume the connection between ServerA and ServerB is new, but ServerB was offline for two days and only missed changes from a prior upstream partner, say ServerC, for those two days. In this case, the new upstream partner (ServerA) retains the change orders in the outbound log for seven days and sends its outbound log to the downstream partner (ServerB) instead of enumerating its entire database. ServerA also sends its existing staging files, so ServerA needs to regenerate staging files only for staging files that were purged if the staging folder reached 90 percent of its limit or that were purged during the staging folder cleanup process. The staging folder cleanup process is described in “How the Outbound Log Cleanup Process Works” later in this section. When Full Vvjoins Occur If the upstream partner’s outbound log does not contain all the changes that the downstream partner is missing, a full vvjoin occurs. A full vvjoin requires the upstream partner to enumerate its entire database before determining whether to send changed files and folders to the downstream partner. As a result, the full vvjoin process is not as efficient as an optimized vvjoin, but a full vvjoin does not necessarily mean that the upstream partner sends the entire replica across the network to the downstream partner. The following sections describe the scenarios in which full vvjoins are performed and whether all or part of the data is replicated. When a new replica member is prestaged and added to the replica set In scenario D, the new member (ServerB) has been prestaged with a restored version of the replica tree. When you prestage files, you use a backup program to back up the replica tree and restore it on the new member. Using a backup program is required to preserve the file GUIDs and security descriptors. Although it might seem that an optimized vvjoin would work in this scenario, optimized vvjoins do not look for prestaged content. Therefore, you must either clear the outbound log of the direct upstream partners or wait until the replica set is at least seven days old (or the duration specified in the Outlog Change History In Minutes registry value). Either of these steps will trigger a full vvjoin, which looks for prestaged content and replicates to the new member only the files and folders that have changed since the new member was prestaged. If these steps are not followed, an optimized vvjoin is performed instead, causing all the data in the replica to be replicated across the network to the new member. Note - The Outlog Change History In Minutes registry entry does not take effect until the next polling interval or until you use the Ntfrsutl poll /now command. For more information about prestaging, see “How Prestaging Works” later in this section. When a new replica member is not prestaged and the replica set is more than seven days old In scenario D, if the new member (ServerB) has not been prestaged with a restored copy of the replica tree, and the replica set is more than seven days old (or older than the duration specified in the Outlog Change History In Minutes registry value), ServerA enumerates its entire database, sends directed change orders for every file and folder in the replica tree, generates staging files for every file in the replica tree, and then sends all files in the replica tree across the network. In this scenario, the full vvjoin requires a lot of CPU time, disk I/O, and network bandwidth. An existing replica member was offline for more than seven days In scenario G, if a server rejoins a replica set after being offline for more than seven days (or longer than the duration specified in the Outlog Change History In Minutes registry value), the upstream partner enumerates its database but only sends across the network files and folders that have changed since the server was offline. For more information about what happens when a server is offline for more than seven days, see “How the Outbound Log Cleanup Process Works” later in this section. How the Partner Join Process Works The partner join process is a handshake between two existing partners before they begin replicating. The process begins after any of the following events: - The connection schedule opens. - The FRS service is started while the schedule is open. - A connection is reestablished after the loss of network connectivity or server shutdown while the schedule is open. During the partner join process, which is similar to a server message block (SMB) session setup, the partners negotiate capability level, exchange version vectors, and test for compression support. The schedule must be open for a partner join to occur. On a given connection, the upstream partner has the leading (next change) and trailing (last acknowledged) indexes and calculates what files need to be replicated to the downstream partner. What Happens When You Create a DFS Replica Set After you use the Configure Replication Wizard in the Distributed File System snap-in to configure replication for DFS link targets, the following events take place: - FRS-related Active Directory objects and their attributes are created on the server acting as the primary domain controller (PDC) emulator master. (Note that these objects will eventually replicate to all domain controllers.) - The FRS service is started on all members and set to automatic, if the members are not currently part of another replica set. The File Replication service event log is also created the first time the service starts. - Active Directory is immediately polled after the FRS service starts. If the FRS service was already running, the poll takes place according to the current polling interval. (After FRS detects a change, it begins the eight short polling intervals as described in “How FRS Polling Works” earlier in this section.) - If the Active Directory objects have replicated to the domain controller that FRS polls, FRS verifies that the replica root is a valid path, verifies that the staging directory is a valid path (and creates it if necessary), and checks the database path and creates the database. If the Active Directory objects are not yet present on the domain controller that FRS polls, there is a delay in this process until the Active Directory objects are replicated to this domain controller. The final steps of the process differ for the initial master and the remaining replica members. For initial master: - On the initial master, FRS walks the replica tree, adds the files and folders to the file ID table, and stamps a file GUID on every file and folder. - FRS declares itself as online and sends a join command to the other members of the replica set. This confirms that FRS is ready to replicate. For the remaining replica members: - The remaining replica members come online in the seeding state, which indicates that they are ready to seed (replicate in) the data from an online server. - Eventually the remaining replica members vvjoin with other online members. What Happens When SYSVOL is Created During Domain Controller Promotion The SYSVOL shared folder is built by the Active Directory Installation Wizard (Dcpromo) during the installation of Active Directory. The process is as follows: - Dcpromo calls FRS to prepare for promotion. If FRS is already running on the server to be promoted, Dcpromo stops the FRS service. - Dcpromo deletes information from previous demotion or promotions (primarily FRS-related registry keys). - The Net Logon service stops sharing the SYSVOL shared folder (if it exists), and the SysvolReady registry entry is set to 0 (false). - Dcpromo creates the SYSVOL folder and the necessary subfolders and junction points. - The FRS service is started. - Dcpromo makes a call to FRS to start a promotion thread that sets the necessary registry keys. - Dcpromo reboots the server. - When the server is restarted, FRS detects that the server is a domain controller and then checks the registry for the SysVol Information is Committed registry entry. Because this entry is set to 0 (false), FRS creates the necessary Active Directory objects and then populates information from the registry to the Active Directory objects and creates the reference attributes as necessary. - FRS begins to source the SYSVOL content from the computer that is identified in the Replica Set Parent registry entry. (This key is temporary and is deleted after SYSVOL has been successfully replicated.) The connection to the server specified in Replica Set Parent is also temporary. This connection, called a volatile connection, is used to perform the initial vvjoin so that the new domain controller does not need to rely on Active Directory replication for the new connection objects to be replicated. - When SYSVOL is finished replicating, FRS sets the SysvolReady registry entry to 1 (true), and then the Net Logon service shares the SYSVOL folder and publishes the computer as a domain controller. How Prestaging Works Prestaging is the process of restoring a version of the replica tree on a new server before adding it to a replica set, thus avoiding the need to send the replica tree across the network. Prestaging works for both DFS and SYSVOL replica sets, though the process is slightly different: - Prestaging a DFS replica member requires you to restore a backup of the replica tree on a server before you add the server to an existing replica set. - Prestaging a domain controller requires you to restore a system state backup (which includes the SYSVOL shared folder) to be used as the data source when promoting a server running Windows Server 2003 to a domain controller. How Prestaging DFS Replica Sets Works The following procedure describes how FRS responds when you prestage a DFS replica set. - Replication is enabled between two members, Server1 and Server2. The initial master, Server1, does not have any data in its replica tree when replication is enabled. - After you copy files into the replica tree (using \\Server1\Apps as an example replica root), FRS generates staging files and sends change orders to the downstream partner (Server2). FRS also generates an MD5 (a hash algorithm) checksum during the staging file generation and saves the result in the file ID table on Server1 and in the change order sent to Server2. - When Server2 processes this change order, it saves the MD5 checksum in the file ID table on Server2. This process is the only way an MD5 checksum is saved in the file ID table and the presence of the MD5 is necessary to avoid overhead when new members are added later. When step 3 is finished, the replicated files exist on both Server1 and Server2, and both file ID tables have MD5 checksums for each file and folder in the replica tree. (You can verify this by using the Ntfrsutil.exe command with the idtable parameter. Search for “MD5CheckSum” in the output.) - Windows Backup or another backup program is used to back up the contents of the replica tree from either Server1 or Server2. The backup is then restored to another server, for example, Server3. - If fewer than seven days have passed since the replica set containing Server1 and Server2 was created, the outbound log must be cleared so that a full vvjoin is triggered when the next member joins. (Setting the Outlog Change History In Minutes registry entry to 0 clears the outbound log.) - When Server3 is added to the replica set using the Distributed File System snap-in, FRS on Server3 moves all files from the restored target folder (\\Server3\Apps) to the pre-existing data folder, and then initiates a full vvjoin from each computer that Server3 has inbound NTDS Connection objects. Note that the vvjoin process is serialized, meaning that the downstream partner performs a vvjoin with one upstream member at a time. Subsequent vvjoins are much faster because the downstream partner will typically have a complete (or nearly complete) replica tree after the first vvjoin completes. The key requirement in this step is that Server3 has inbound connections from an upstream partner, Server1 or Server2 in this case, whose file ID table contains MD5 checksums for files contained in the replica set of interest. - FRS on Server1 enumerates all the files and folders in its file ID table and sends directed (that is, single target) change orders to Server3. Because the file ID table has an MD5 checksum, it is included in the change order. As Server3 processes these change orders, this server takes the file GUID for the file or folder from the change order and attempts to locate the corresponding file in the pre-existing data folder. If the server locates the file, it re-computes the MD5 checksum on the content of that file, compares the result to the MD5 checksum it received in the change order and, if they match, uses the pre-existing file instead of attempting to obtain the file from Server1. If Server3 does not find the file, or if the MD5 checksum or attributes do not match, the server obtains the file from Server1. Any change to the file content, such as to the access control lists or data streams, can cause an MD5 mismatch and the file is obtained from Server1 or other upstream partner. When all replication activity has settled out, the file ID tables on all three servers have an identical MD5 checksum and identical file content in the replicated folder. How Prestaging SYSVOL Works The process for prestaging SYSVOL is part of the overall process of using backup media to create additional domain controllers. FRS follows the same method of determining whether the restored files are identical to those on the upstream partner as it does for DFS replica members. Specifically, the following requirements must be met for SYSVOL prestaging to be successful: - FRS must have constructed MD5 checksum data for the files in the SYSVOL tree. MD5 checksum data is constructed after one of the following events occurs: - Every file in SYSVOL is modified after there are two or more domain controllers in the domain. - All data is moved to a non-replicated folder outside of SYSVOL and then moved back into SYSVOL after there are two or more domain controllers in the domain. - The system state backup must contain all files in SYSVOL. If any files in the system state backup are out of date, those files are replicated across the network to the new member. - The outbound log on the upstream partner must be cleared if SYSVOL is less than seven days old. As described in “How the Vvjoin Process Works” earlier in this section, if the replica set is less than seven days old, FRS attempts an optimized vvjoin, which does not look for prestaged content and will cause the entire replica tree to be replicated across the network to the new member. To work around this, you can clear the outbound log on all upstream partners, or you can clear the outbound log on one upstream partner and then do one of the following steps to make sure that the new member vvjoins with this partner: - Create a single inbound connection to the member with the cleared outbound log. - Set up connection schedules such that the parent with the cleared outbound log is the only schedule that is open. - Stop the FRS service on all other upstream partners. Note - When prestaging SYSVOL, do not clear the outbound log on bridgehead servers. Bridgehead servers typically have many outbound connections, and clearing the outbound log can cause full vvjoins to occur for every connection that has pending outbound change orders. If this occurs, the bridgehead server can experience a significant decrease in FRS performance and significantly increased CPU, memory, and bandwidth usage. To avoid this situation, we recommend that you prestage from an upstream replica member that has few outbound connections. What Happens When You Remove a Member from a Replica Set When a member is removed from a replica set, certain events occur both on the removed member and in Active Directory. The following sections describe what happens when a domain controller is demoted (and thus removed from the SYSVOL replica set) and what happens when you remove a member from a DFS replica set. Demoting a Domain Controller Dcpromo (via Ntfrsapi.dll) performs the following steps during a domain controller demotion: - Dcpromo prepares for demotion by: - Stopping FRS. - Cleaning out old demotion state in the registry (if any). - Restarting FRS. - Binding to Active Directory using a different domain controller in the domain. The binding occurs because the demotion will invalidate the local domain controller and thus Active Directory replication would not take place after the system restarts. (If this is the last domain controller in the domain then this step is not performed.) - Dcpromo starts the demotion by: - Setting the SYSVOL Replica Set Command registry entry to Delete. - Instructing FRS to tombstone the SYSVOL replica set. - Dcpromo commits the demotion by: - Stopping the FRS service and setting the service to manual. - Setting the Sysvol Ready registry entry to 0. - Deleting the NTFRS Subscription object and NTFRS Replica Set object for SYSVOL. Removing a Member from a DFS Replica Set The following process occurs when you use the Distributed File System snap-in to remove a replica member from a replica set. - If the topology is hub and spoke and the hub member is removed, the topology type is changed to custom. - The NTFRS Subscriber object and its container are deleted. - The connections are adjusted based on current topology preference. For example, if the topology is a ring topology, a ring topology is reformed using the remaining members. - The connections from the removed member to other members are deleted. - The NTFRS Member object is deleted. Types of Changes That Trigger Replication Because a file is a basic unit of replication, any change to a file will cause the entire file to be replicated after the handle to the file is closed. (Replication takes place either immediately or when the replication schedule opens.) The following table provides a comprehensive description of all changes that can cause a file to be replicated. This table includes the code logged in the FRS outbound logs to indicate the type of change that caused replication to occur. These changes must be followed by a close record in the USN journal before replication begins. Types of Changes That Trigger Replication The following changes do not trigger replication: - Changes to a file or folder’s last access time - Changes to a file or folder’s archive bit - Changes to encrypted files - Changes related to reparse points The process that FRS uses to determine whether a file has changed is as follows: - FRS monitors the USN journal for changes. When FRS detects a close record for a file (a handle being closed will trigger this), FRS gathers relevant information about the recently closed file, including the file’s attributes and MD5 checksum, from the file ID table. - FRS computes an MD5 checksum for the recently closed file. The MD5 checksum is calculated based on the file’s data, including its security descriptors. File attributes are not included in this calculation. If the MD5 checksum and attributes of the recently closed file are identical to the information about the file stored in the file ID table, the file is not replicated. If either the MD5 checksum or the attributes are different, FRS begins the change order process described in “How Files and Folders Are Replicated” later in this section. Certain types of programs can make identical updates a file without actually changing the file’s content (in other words, the file’s MD5 checksum for the last change is identical to the previous change to that same file). These programs typically include backup, defragmentation, and antivirus programs that were not written to be compatible with FRS. Setting file system policy on files by using Group Policy can also cause frequent identical updates that trigger replication. These types of updates include: - Overwriting a file with a copy of the same file. - Setting the same ACLs on a file multiple times. - Restoring an identical copy of the file over an existing one. FRS suppresses excessive replication of files caused by these types of updates and logs FRS event ID 13567 in the File Replication service event log. More specifically, this event is logged when FRS detects that 15 identical updates were made to FRS replicated files within a one-hour period and this condition has occurred over three consecutive hours. The duplicate changes might have occurred on a single file 15 times, or 15 unique files one time each, or any combination between those two. Note that these thresholds apply only to event logging; FRS suppression works continuously. In addition, FRS suppression does not apply to folders. Therefore, frequent updates to folders affect replication and staging even when suppression is turned on. How Files and Folders Are Replicated The following figure describes the change order flow process, which begins when a file or folder is added to a replica tree or when a file or folder is changed and then closed in a replica tree. The sections that follow describe each step in the process. The differences in the flow process for file deletions are also described. How Change Orders Are Processed Step 1: NTFS creates a change journal entry When a file in a replica set is changed and closed, NTFS makes an entry in the USN journal. The USN journal records changes to all files on the NTFS volume (such as file creations, deletions, and modifications). In Windows Server 2003, the default journal size is 128 MB (or 512 MB in Windows Server 2003 with SP1 or the pre-SP1 release of Ntfrs.exe) and is persistent across restarts and crashes. If the FRS service is stopped or fails, this will have no effect on the replication of FRS content. Even with the service stopped, file changes are still recorded at the file system level in the USN journal. When the FRS service is restarted, stored USN journal entries will trigger replication. Step 2: FRS monitors the USN journal FRS monitors the USN journal for changes that apply to the replica trees on ServerA. Only closed files are checked. File and folder filters are applied against changes in the files and folders in the replica tree. Step 3: Aging cache is used The aging cache is a three-second delay designed to catch additional changes to a file. This prevents replication when a file is undergoing rapid updates. Step 4: FRS creates entries in the inbound log and file ID table ServerA records the change as a change order in its inbound log. It also creates an entry in the file ID table so that recovery can take place if a crash occurs. The inbound log contains change orders arriving from all inbound partners. Change orders are logged in the order that they arrive. Each change order contains information about a change to a file or folder on a replica member, such as the name of the file or the time it was changed. This information is used to construct a message about the change. Step 5: FRS creates the staging file in the staging folder ServerA uses backup APIs to create a backup of the changed file or folder in ServerA’s staging folder. These backup files, known as staging files, encapsulate the data and attributes associated with a replicated file or folder. By creating the staging file in the staging folder, FRS ensures that file data can be supplied to partners regardless of any activity that might prevent access to the original file. All staging files in the staging folder are compressed to save disk space and network bandwidth during replication. At step 1 if a file is deleted, FRS does not create a staging file. Step 6: FRS creates the entry in the outbound log ServerA updates the outbound log. The outbound log contains change orders generated for a specified replica set. These change orders can originate locally or come from an inbound partner. Change orders recorded in the outbound log are eventually sent to all outbound partners. If a file is deleted, the change order contains the event time of the deletion and the tombstone lifetime (by default 60 days). When the tombstone for the deleted file expires, the file’s entry in the file ID table is purged. Step 7: FRS sends a change notification ServerA sends a change notification to ServerB. Step 8: FRS records and acknowledges the change notification ServerB stores the change order in its inbound log and file ID table. If ServerB decides to accept the change order, it sends a change order acknowledgement (CO ACK) to ServerA to replicate the modified file’s staging file. Step 9: FRS replicates the staging file ServerB copies the staging file from ServerA to the staging folder on ServerB. ServerB then writes to its outbound log so other outbound partners can pick up the change. If the change order is for a deleted file, no staging file is fetched. The change notification for the deleted file is added to ServerB’s outbound log, Step 10: FRS constructs the staging file and moves it into the replica tree ServerB uses restore APIs to reconstruct (that is, restore) the file or folder in the preinstall folder, and then FRS renames the file or folder into the replica tree. If the change order was for a file deletion, ServerB deletes the file from the replica tree. How the Staging Folder Works The staging folder is a queue for changes to be replicated to downstream partners. After the changes are made to a file and the file is closed, the file content is compressed, written to the staging folder, and replicated according to schedule. Any further use of that file does not prevent FRS from replicating the staging file to other members. The following sections describe various aspects of the staging folder. Staging folder size The size of the staging folder governs the maximum amount of disk space that FRS can use to hold those staging files and the maximum file size that FRS can replicate. The default size of the staging folder is approximately 660 MB, the minimum size is 10 MB, and the maximum size is 2 terabytes. The largest file that FRS can replicate is determined by the staging folder size on both the upstream partner and downstream partners and whether the compressed replicated file can be accommodated by the current staging folder size. Therefore, the largest file that FRS can replicate is 2 terabytes, assuming that the staging folder size is set to the maximum on upstream and downstream partners. Staging folder compression FRS replica members running Windows 2000 Server with Service Pack 3 (SP3) or later or Windows Server 2003 compress the files replicated among them. Compression reduces the size of files in the staging folder on the upstream partners, over the network between compression-enabled partners, and in the staging folder of downstream partners prior to files being moved into their final location. All files and subfolders within the staging folder are compressed. However, compression is not enabled on the staging folder itself. How the staging folder stays below its maximum size limit When FRS tries to allocate space for a staging file and is not successful because the size of the files in the staging folder has reached 90 percent of the staging folder size, FRS starts to remove files from the staging folder. Staged files are removed (in the order of the longest time since the last access) until the size of the staging folder has dropped below 60 percent of the staging folder limit. Additionally, staging files for downstream partners that have been inaccessible for more than seven days are deleted. As a result, FRS does not stop replicating if the staging folder runs out of free space. This means that if a downstream partner goes offline for an extended period of time, the offline member does not cause the upstream partner’s staging folder to fill with accumulated staging files. Note - During the process of deleting staging files to reach the 60 percent limit, FRS does not delete staging files that have change orders in the inbound log pending install, because they are still needed by this member. The fact that FRS removes files from the staging folder does not mean that the underlying file is deleted or will not be replicated. The change order in the outbound log still exists, and the file will eventually be sent to downstream partners when they process the change order. However, before replication can take place, the upstream partner must recreate the staging file in the staging folder, which can affect performance. Recreating the staging file can also cause a replication delay if the file on the upstream partner is in use, preventing FRS from creating the staging file. Staging folder cleanup Staging files are cleaned up during the outbound log cleanup process. Specifically, staging files are deleted from the staging folder after all replica members have acknowledged the change orders that generated the staging files. For more information about the outbound log cleanup process and its effect on the staging folder, see “How the Outbound Log Cleanup Process Works” later in this section. How Replication Schedules Work The replication schedule defines the periods during which replication takes place. The following sections describe various aspects of replication schedules. SYSVOL Schedules. The SYSVOL replication schedule can be configured so that replication takes place four, two, one, or zero times per hour. - When set to four times per hour, replication starts at 0:00, 0:15, 0:30, and 0:45. This schedule is essentially continuous replication. - When set to two times per hour, replication occurs at 15-minute intervals starting at 0:15 and 0:45. - When set to one time per hour, replication starts at 0:00 and ends at 0:15 assuming that all changes have been replicated. It is possible to change the SYSVOL schedule, but this should be done only after careful consideration of the implications and alternatives. Also, note that schedules exist on both site links and NTDS Connection objects. If you change the schedule on an NTDS Connection objects, the connection then becomes a manual connection that cannot be managed by KCC. DFS Replica Set Schedules For DFS replica sets, FRS uses the NTDS Connection objects, topology, and schedule built by the Distributed File System snap-in. The snap-in sets the schedule as follows: - When you view the properties of a link and set the schedule in the Linkname Properties dialog box, the snap-in applies the schedule to all NTDS Connection objects for this replica set. The snap-in does not apply the schedule to the NTFRS Replica Set object. - If you view individual connections in the Customize Topology dialog box and modify the schedule on a connection, the snap-in applies that schedule to the NTDS Connection object for the connection. - If you add a new member to an existing replica set, the new member’s NTDS Connection objects get the default “always on” schedule. - If you use the snap-in to set schedules on individual connections, and then you view the schedule from the Linkname Properties dialog box, the snap-in displays the schedule for the last enumerated NTDS Connection object. If you click OK in this dialog box, the schedule shown is applied to all NTDS Connection objects. To avoid this, click Cancel to close the dialog box. When replication is set to available, replication occurs continuously. Unlike SYSVOL schedules, DFS replica set schedules are on/off schedules. As long as the schedule is open, the connection stays in the joined state and change orders are sent out as they are processed by the upstream partner. Any queued change orders in the upstream partner’s outbound log are sent first. When the schedule closes, FRS unjoins the connection. How Local Time Affects the Schedule Replication schedules are stored in Coordinated Universal Time (UTC). However, when you view the schedule, the schedule is shown in the local time of the server. Daylight savings time causes the schedule to shift by an hour for any server that is located in an affected time zone. How File and Folder Filters Work Filters exclude subfolders (and their contents) or files from replication. You exclude subfolders by specifying their name, and you exclude files by using wildcard characters to specify file names and extensions. By default, no subfolders are excluded. The default file filters exclude the following files from replication: - File names starting with a tilde (~) character - Files with .bak or .tmp extensions Filtering takes place when FRS monitors the USN journal for changed files and folders. If a new or updated file meets a filter that excludes it from replication, FRS ignores the change in the USN journal and does not generate a change order. Because filtering occurs at the USN journal, filters act as exclusion filters only for new files and folders added to a replica set. They have no effect on existing files in the replica set. For example, if you change the existing file filter from “*.tmp, *.bak” to “*.old, *.bak,” FRS does not go through the replica set and exclude all files that match *.old, nor does it go through the replica set and begin to replicate all files that match *.tmp. After the filter change, new files matching *.old that are added to the replica set are not replicated. New files matching *.tmp that are added to the replica set are replicated. Any file in the replica set that was excluded from replication under the old file filter (such as Test.tmp, created when the old filter was in force) is automatically replicated under the new file filter only after the file is modified. Likewise, changes to any file that was not excluded from replication under the old filter (such as Test.old, created when the old filter was in force) continue to replicate under the new filter, until you explicitly delete the file. These rules apply in the same manner to the folder exclusion filter. If a folder is excluded, all subfolders and files under that folder are also excluded. Regardless of the filters you set, FRS always excludes the following from replication: - NTFS mounted drives. - Files encrypted by using EFS. - Any reparse points except those associated with DFS namespaces. If a file has a reparse point used for Hierarchical Storage Management (HSM) or Single Instance Store (SIS), FRS replicates the underlying file but not the reparse point. - Files on which the temporary attribute has been set. How the Outbound Log Cleanup Process Works The outbound log cleanup process runs periodically to remove change orders from the outbound log. The process is scheduled by the outlog thread, and the first cleanup is scheduled to run 60 seconds after the outlog thread starts. Service initialization code starts the outlog thread. At every scheduled cleanup, the time it took to perform the last cleanup is calculated and the next scheduled time is set. The time before the next scheduled cleanup is set to 50 multiplied by the time it took to run the last cleanup. This interval is then adjusted so that it is not less than 60 seconds, it is not more than the Outlog Change History In Minutes registry entry (which is seven days, by default), and it is not more than eight hours. As a result, the cleanup process tries not to take more than 2 percent of the processing time. For large outbound logs, the cleanup can take several minutes to run. The cleanup process does not need to run every time it is scheduled. The cleanup process only runs for the replica sets that have received at least one change order acknowledgement (ACK) since the last time cleanup ran on that replica set. In situations where a replica member does not receive ACKs from any partners for a long time, there is an interval after which the cleanup process runs on that replica member, even if it has not received an ACK from its partners. The interval is either the amount specified in the Outlog Change History In Minutes registry entry (seven days, by default) or eight hours, whichever is smaller. The cleanup function goes through all the change orders in the outbound log and does the following. - Deletes all vvjoin change orders and their corresponding staging files if they have been acknowledged by the partner they were meant for. (Vvjoin change orders are deleted because they are directed change orders, which means they are generated for a specific partner and are of no use after they are acknowledged by that partner.) - Deletes staging files for all the change orders that have been acknowledged by all the partners. Staging files are not kept longer than required; only change orders are kept. Staging files are regenerated as needed. - Deletes change orders that have been in the outbound log for more than the Outlog Change History In Minutes registry entry. If there is a connection that has not yet sent the change order that is being deleted, that connection is marked for deletion at the next poll. The deletion of these connections will take them out of the list of active connections and trigger cleanup on the change orders that were backed up for them. Because these connections are not actually deleted from Active Directory, they will reappear at the second poll and will rejoin and go through a complete vvjoin. This functionality provides a way for FRS to clean up change orders in the outbound log and staging files for members that have not accepted changes for a long time. How FRS Debug Logs Work FRS creates text-based logs in the systemroot\Debug folder to help you debug problems. The Ntfrs log files store transaction and event detail in sequentially numbered files (Ntfrs_0001.log through Ntfrs_0005.log). Transactions and events are written to the log with the highest sequence number in existence at that time. The characteristics of the log files are determined by the values of several registry entries in the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters subkey. After the number of logs specified by the value of the Debug Log Files registry entry has been filled, the lowest log version is deleted and the remaining log file names are decreased by n–1 to make room for a new log file. Log detail is controlled by the value of the Debug Log Severity registry entry, ranging from 0 to 5, with 5 providing the most detail. Log size is determined by the value of the Debug Maximum Log Messages registry entry. The default value of 20,000 lines for Debug Maximum Log Messages results in a log file of approximately 2 MB, for a total of 10 MB of logs (Debug Log Files * [Debug Maximum Log Message * 120]). Setting Debug Log Messages to 50,000 results in log files of approximately 5 MB and 25 MB of total log space with default settings. Solving problems by using the Ntfrs logs requires ensuring that the value of the Debug Log Severity registry entry is set high enough to capture the events needed to identify the problem. Severity settings range from 0 to 5 and are cumulative, meaning that a setting of 4 includes log events with a severity of 0 to 3. Error logs hold a severity setting of 0 and are always displayed. Setting the severity level at 5 will cause every action to be logged and can make finding more important information difficult. The default log setting is 2. All log records have a date and time stamp and an identifying string annotated by :: with a letter or pair of letters between each colon. These identifying strings make it easy for you to identify errors, warning messages, and milestone events in the log files and filter out less important information. For example, a log entry that has a :U: as an identifying string includes information related to the USN journal. A log entry containing information about Active Directory polling will have an identifying string of :DS:. A tracking log entry has an identifying string of :T: and summarizes a change order that has finished or is in the process of updating a given file or folder. The following table lists the log record identifiers. Log Record Identifiers Tracking record entries (:T:) can be a helpful way to identify and understand problems that can occur during the change order process. A tracking record entry will tell you what files have been changed and where the change originated. The following tracking record entry describes a remote change order that is creating a new file called test_file in Replica-A. The version number is zero. 7/31-08:40:08 :T: CoG: d42cda60 CxtG: 000001b7 [RemCo ] Name: test_file 7/31-08:40:08 :T: EventTime: Mon Jul 31, 2003 08:40:04 Ver: 0 7/31-08:40:08 :T: FileG: ceff96a6-5c9f-433a-989c841454a1593b FID: 61a70000 0000036c 7/31-08:40:08 :T: ParentG: 1a89f4e1-a0c0-43e4-aedbe869f767f372 Size: 00000000 00000008 7/31-08:40:08 :T: OrigG: 2eea81b4-f92d-4941-9f269d4bbdd7ea05 Attr: 00000020 7/31-08:40:08 :T: LocnCmd: Create State: IBCO_COMMIT_STARTED ReplicaName: Replica-A (1) 7/31-08:40:08 :T: CoFlags: 0000040c [Content Locn NewFile ] 7/31-08:40:08 :T: UsnReason: 00000002 [DatExt ] The individual fields that comprise the tracking log entry are described in the following table. In some cases only the first DWORD of a GUID is actually displayed in the log. Tracking Log Entries The following tracking log entry describes a local change order (a change order originating on the computer where the log was produced) that is updating the same file, test_file. The version number is now 1. Notice that the originator GUID is different from that of the tracking log entry above. The file GUID and parent GUID of both log entries are the same for both change orders because the same file is involved and it has not changed parent directories. 7/31-08:56:55 :T: CoG: cd55ad6f CxtG: 37b12c93 [LclCo ] Name: test_file 7/31-08:56:55 :T: EventTime: Mon Jul 31, 2003 08:56:52 Ver: 1 7/31-08:56:55 :T: FileG: ceff96a6-5c9f-433a-989c841454a1593b FID: 61a70000 0000036c 7/31-08:56:55 :T: ParentG: 1a89f4e1-a0c0-43e4-aedbe869f767f372 Size: 00000000 00000200 7/31-08:56:55 :T: OrigG: 8f759ded-e611-43c4-be05c10138dfdea4 Attr: 00000020 7/31-08:56:55 :T: LocnCmd: NoCmd State: IBCO_COMMIT_STARTED ReplicaName: Replica-A (1) 7/31-08:56:55 :T: CoFlags: 00000024 [Content LclCo ] 7/31-08:56:55 :T: UsnReason: 00000002 [DatExt ] Network Ports Used by FRS FRS uses the following two network ports: Network Ports Used by FRS By default, FRS replication over remote procedure calls (RPCs) occurs dynamically over an available port by using RPC Endpoint Mapper (also known as Remote Procedure Call Server Service or RPCSS) on port 135; the process is the same for Active Directory replication. Related Information The following resources contain additional information that is relevant to this section:
http://technet.microsoft.com/en-us/library/cc758169(v=ws.10).aspx
CC-MAIN-2014-49
refinedweb
14,411
58.72
Software engineers are used to the notion that global variables are a bad idea. Globals are usually accessed by asking, not by telling. They introduce tight coupling between any module that uses the global and the one that declares it, and (more dangerously) implicit coupling between all of the modules that use the global. It can be hard to test code that uses globals. You typically need to supply a different object (by which I mean a linker object) that exports the same symbol, so that in tests you get to configure the global as you need it. Good luck if that’s deeply embedded in other code you do need, like NSApp. While we’re on the subject of global variables, the names of classes are all in the global namespace. We know that: there is only one NSKeyedUnarchiver class, and everyone who uses the NSKeyedUnarchiver class gets the same behaviour. If any module that uses NSKeyedUnarchiver changes it (e.g. by swizzling a method on the metaclass), then every other client of NSKeyedUnarchiver gets the modified behaviour. [If you want a different way of looking at this: Singletons are evil, and each class is the singleton instance of its metaclass, therefore class objects are evil.] Now that makes it hard to test use of a class method: if I want to investigate how my code under test interacts with a class method, I have these options: - Swizzle the method. This is possible, but comparatively high effort and potentially complicated; should the swizzled method call through to the original or reimplement parts of it behaviour? What goes wrong if it doesn’t, and what needs to be supported if it does? Using the example of NSKeyedArchiver, if you needed to swizzle +archiveRootObject:toFile: you might need to ensure that the object graph still receives its -encodeWithCoder: messages – but then you might need to make sure that they send the correct data to the coder… - Tell the code under test what class to use, don’t let it ask. So you’d have a property on your object @property (nonatomic, strong) Class coderClass; and then you’d use the +[coderClass archiveRootObject:toFile:] method. In your app, you set the property to NSKeyedArchiver and in the tests, to whatever it is you need. That’s simple, and solves the problem, but rarely seen. I think this is mainly because there’s no way to tell the Objective-C compiler “this variable represents a Class that’s NSCoder or one of its subclasses”, so it’s hard to convince the type-safety mechanism that you know archiverClass responds to +archiveRootObject:toFile:. - Use instance methods instead of class methods, and tell the code you’re testing what instance to use. This is very similar to the above solution, but means that you pass a fully-configured object into the code under test rather than a generic(-ish) class reference. In your code you’d have a property @property (nonatomic, strong) NSCoder *coder in to -encodeRootObject: and rely on it having been configured correctly to know what to do as a result of that. This solution doesn’t suffer from the type-safety problem introduced in the previous solution; you told the compiler that you’re talking to an NSCoder (but not what specific type of NSCoder) so it knows you can do -encodeRootObject:. Notice that even if you decide to go for the third solution here (the one I usually try to use) and use object instances instead of classes for all work, there’s one place where you must rely on a Class object: that’s to create instances of the class. Whether through “standard” messages like +alloc or +new, or conveniences like +arrayWithObjects:, you need a Class to create any other Objective-C object. I’d like to turn the above observation into a guideline to reduce the proliferation of Class globals: an object’s class should only be used at the point of instantiation. Once you’ve made an object, pass it to the object that needs it, which should accept the most generic type available that declares the things that object needs. The generic type could be an abstract class like NSCoder or NSArray (notice that all array objects are created by the Foundation library, and application code never sees the specific class that was instantiated), or a protocol like UITableViewDataSource. Now we’ve localised that pesky global variable to the smallest possible realm, where it can do the least damage.
http://www.sicpers.info/2012/05/page/2/
CC-MAIN-2018-51
refinedweb
752
54.66
It's finally happened. I did a proper Javascript thing. Now before you start to judge me, let me clarify that although I've never written a Javascript post ever, it's not like I don't know how to use it, okay? Sure I started out with jQuery back in 2015, big whoop, almost everybody I know has used jQuery at some point in their careers 😤. In fact, my superficial need for external validation made me so self-concious about using jQuery in 2015 that I soon treated Ray Nicholus's You Don't Need jQuery! like some holy reference for a while until I weaned myself off jQuery. But that's beside the point. Up till now, I've always been doing client-side Javascript. I'd partner up with a “Javascript person” who would handle the middleware-side of things, and write the nice APIs I would consume and be on my merry way. I'm pretty much known for my inordinate love of all things CSS, because I took to it like a duck to water 🦆. Learning Javascript was like being a duck trying to fly. Zoology lesson: ducks can fly! It's just that they're not optimised for flying at will. But on the whole, it is obvious that ducks can fly and may even take wing at a fast pace of about 50 miles per hour. So after a couple years, I felt it was time to stand on my own two feet and figure out how this middleware-server-api-routing stuff worked. The use case Everybody and their cat can build or has built an app, right? The time had come for me to join that club. I'd been tracking the list of books I want to read/borrow from the world-class Singapore National Library with a plain text file stored on Dropbox. It worked great till the list grew to past 40 books. The solution to this unwieldy list was obvious: (so say it with me) Just build an app for that. To track list of books by title, dewey decimal number and available locations That was the basic gist of the idea. The key functionality I wanted was to be able to filter the list depending on which library I was visiting at the time, because some books had copies in multiple libraries. Critical information would be the book title and dewey decimal number to locate said book. Simple enough, I thought. But it never is. This being my first ”app”, I thought it'd be interesting to document the thought process plus questions I asked myself (mostly #noobproblems to be honest). Besides, I never had a standard format for writing case studies or blog posts. I also ramble a lot. Source code if you really want to look at noob code. TL:DR (skip those which bore you) - Technology stack used: node.js, Express, MongoDB, Nunjucks - Starting point: Zell’s intro to CRUD tutorial - Database implementation: mLAb, a hosted database solution - Templating language: Nunjucks - Data entry: manually, by hand - Nunjucks syntax is similar to Liquid - Responsive table layout with HTML tables - Filtering function utilises indexOf() - Implementing PUT and DELETE - Offline functionality with Service Worker - Basic HTTP authentication - Deployment: Heroku What technology stack should I use? I went with node.js for the server, Express for the middleware layer, MongoDB as the database because I didn't really want to write SQL queries and Nunjucks as the templating language because it's kind of similar to Liquid (which I use extensively in Jekyll). But before I settled on this stack, there was a lot of pondering about data. Previously, I had been terribly spoiled by my Javascript counterparts who would just pass me endpoints from which I could access all the data I needed. It was like magic (or just abstraction, but aren't the two terms interchangeable?). I'm used to receiving data as JSON, so my first thought was to convert the data in the plain text file into a JSON file, then do all the front-endy stuff I always do with fetch. But then I realised, I wanted to edit the data as well, like remove books or edit typos. So persistence was something I didn't know how to deal with. There was a vague memory of something related to SQL queries when I once peeked into the middleware code out of curiosity, which led me to conclude that a database had to be involved in this endeavour 💡. I'm not as clueless as I sound, and I know how to write SQL queries (from my Drupal days), enough to know that I didn't want to write SQL queries for this app. You have no idea how to write this from scratch, do you? Nope, not a clue. But my buddy Zell wrote a great tutorial earlier on how to build a simple CRUD app, which I used as a guide. It wasn't exactly the same, so there was a lot of googling involved. But the advantage of not being a complete noob was that I knew which results to discard and which were useful 😌. Zell's post covers the basic setup for an app running on node.js, complete with idiot-proof instructions on how to get the node.js server running from your terminal. There's also basic routing, so you can serve the index.html file as your home page, which you can extend for other pages as well. Nodemon is used to restart the server every time changes are made so you don't have to do it manually each time. He did use a different stack from me, like EJS instead of Nunjucks, but most of the instructions were still very relevant, at least in part 1. Most deviations happened for the edit and delete portion of the tutorial. So this mLab thing is a hosted database solution? Yeah, Zell used mLab in the tutorial, it's a Database-as-a-Service so I kinda skipped over the learning how to set up MongoDB bit. Maybe next time. Documentation on how to get started using mLab is pretty good, but one thing made me raise an eyebrow (omg, when is this emoji coming?!), and that was the MongoDB connection URI contained the user name and password to the database. I'm not a security expert but I know enough to conclude that is NOT a good idea. So next thing to find out was, what is the best way to implement this as a configuration? In Drupal, and we had a settings.php file. Google told me that StackOverflow says to create a config.js file then import that for use in the file where you do your database connections. I did that at first, and things were peachy, until I tried to deploy on Heroku. We'll talk about this later, but point is, store credentials in separate file and do NOT commit said file to git. You don’t want to use EJS like Zell, then how? It's not that EJS is bad, I just wanted a syntax I was used to. But not to worry, because most maintainers of popular projects dedicate time to writing documentation. I learned the term RTFM quite early on in my career. Nunjucks is a templating engine by Mozilla, which is very similar to Jekyll's (technically Shopify made it) Liquid. Their documentation for getting started with Express was very understandable to me. Couldn’t think of a way to automate data entry? Nope, I could not. I did have prior experience doing data entry in an earlier era of my life, so this felt...nostalgic? Anyway, the form had to be built first. Book title and dewey decimal number were straight-forward text fields. Whether the book had been borrowed or not would be indicated with radio buttons. Libraries were a bit trickier because I wanted to make them a multi-select input, but use Nunjucks to generate each option. After building my nice form, and testing that submitting the form would update my database. I grabbed a cup of coffee, warmed up my fingers and went through around half an hour of copy/paste (I think). I'm very sure there is a better way to generate the database than this, but it would have definitely taken me longer than half an hour to figure out. Let's KIV this item, okay? Can you Nunjucks like you do Liquid? Most templating languages probably can do the standard looping and conditionals, it's just a matter of figuring out the syntax. In Jekyll, you chuck your data into .yml or .json files in the _data folder and access them using something like this: {% for slide in site.data.slides %} <!-- markup for single slide --> {% endfor %} Jekyll has kindly handled the mechanism for passing data from those files into the template for you, so we'll have to do something similar for using Nunjucks properly. I had two chunks of data to send to the client-side, my list of libraries (a static array) and the book data (to be pulled from the database). And I learned that to do that we need to write something like this: app.get('/', (req, res) => { db.collection('books').find().toArray((err, result) => { if (err) return console.log(err) res.render('index', { libraries: libraries, books: result }) }) }) I'm fairly confident this is an Express functionality, where the render() function takes two parameters, the template file and an object which contains the data you want to pass forward. After this, I can magically loop this data for my select dropdown and books table in the index.html file. Instead of having to type out an obscenely long list of option elements, Nunjucks does it for me. <select name="available_at[]" multiple> {% for library in libraries %} <option>{{ library.name }}</option> {% endfor %} </select> And another 💡 moment happened when I was working out how to render the book list into a table. So the libraries field is a multi-value field, right? As I made it a multi-select, the data is stored in the database as an array, however, single values were stored as a string. This screwed up my initial attempts at formatting this field, until I realised it was possible to force a single value to be stored as an array using [] in the select's name attribute. Better make the list of books responsive, eh? Yes, considering how I pride myself in being a CSS person, it'd be quite embarrassing if the display was broken at certain screen widths. I already had a responsive table setup I wrote up previously that was made up of a bunch of divs that pretended to be a table when the width was wide enough. Because display: table is a thing. I know this because I researched it before. So I did that at first, before realising that the <table> element has extra properties and methods that normal elements don't. 💡 (at the rate this is going, I'll have enough light bulbs for a nice chandelier). This doesn't have anything to do with the CSS portion of things, but was very relevant because of the filtering function I wanted to implement. Then it occurred to me, if I could make divs pretend to be a table, I could make a table act like a div. I don't even understand why this didn't click for me earlier 🤷. Long story short, when things started to get squeezy, the table, rows and cells got their display set to block. Sprinkle on some pseudo-element goodness and voila, responsive table. Let’s talk about this filtering thing, alright? I'll be honest. I've never written a proper filtering function by myself before. I did do an autocomplete, once. But that was it. I think I just used someone else's library (but I made sure it was like really tiny and optimised and everything) when I had to. What I wanted was to have a select dropdown that would only show the books available at one particular library. The tricky thing was that the library field was multi-value. So you couldn't just match the contents of the library cell with the value of the option selected, or could you? So I found this codepen by Philpp Unger which filtered a table based on text input. The actual filtering leverages the indexOf() method, while the forEach() method loops through the whole slew of descendants in the book table. So like I mentioned earlier, a normal HTMLElement doesn't have the properties that a HTMLTableElement does, like HTMLTableElement.tBodies and HTMLTableElement.rows. MDN documentation is great, here are the links for indexOf(), forEach() and HTMLTableElement. Why was your edit and delete different from Zell’s? Because I had more data, and I didn't want to use fetch for the first pass. I wanted CRUD to work on the basic version of the app without client-side Javascript enabled. It's fine if the filtering doesn't work without Javascript, I mean, I probably could make it so the filtering was done on the server-side, but I was tired. Anyway, instead of fetch, I put in individual routes for each book where you could edit fields or delete the whole thing. I referred to this article by Michael Herman, for the put and delete portions. Instead of fetch, we used the method-override middleware. The form action then looked like this: <form method="post" action="/book/{{book._id}}?_method=PUT"> <!-- Form fields --> </form> The form itself was pre-populated with values from the database, so I could update a single field without having to fill the whole form out each time. Though that did involve putting in some logic in the templates, for the multi-select field and my radio buttons. I've heard some people say templates should be logic-free, but 🤷. <select name="available_at[]" multiple> {% for library in libraries %} {% if book.available_at == library.name %} <option selected>{{ library.name }}</option> {% else %} <option>{{ library.name }}</option> {% endif %} {% endfor %} </select> <fieldset> <legend>Borrowed?</legend> {% if book.borrowed == "yes" %} {{ checked }} {% set checked = "checked" %} {% else %} {{ notchecked }} {% set notchecked = "checked" %} {% endif %} <label> <span>Yes</span> <input type="radio" name="borrowed" value="yes" {{ checked }}> </label> <label> <span>No</span> <input type="radio" name="borrowed" value="no" {{ notchecked }}> </label> </fieldset> One problem that took me a while to figure out was that I kept getting a null value when trying to query a book using its ID from my database. And I was sure I was using the right property. What I learned was, the ID for each entry in MongoDB is not a string, it's an ObjectID AND you need to require the ObjectID function before using it. Oooo, let’s also play with Service Worker! Have you read Jeremy Keith's wonderful book, Resilient Web Design yet? If you haven't, stop right now and go read it. Sure it's a web book, but it also works brilliantly offline. So I've known about Service Worker for a bit, read a couple blog posts, heard some talks, but never did anything about it. Until now. The actual implementation wasn't that hard, because the introductory tutorials for the most basic of functionalities are quite accessible, like this one by Nicola Fioravanti. You know how when you build a thing and you ask the business users to do testing, and somehow they always manage to do the one obscure thing that breaks things. That was me. Doing it to myself. So I followed the instructions and modified the service-worker according to the files I needed cached, and tested it out. If you use Chrome, DevTools has a Service Worker panel under Application, and you can trigger Offline mode from there. First thing I ran into was this error: (unknown) #3016 An unknown error occurred when fetching the script, but no biggie, someone else had the same problem on Stack Overflow. The next thing that tripped me up for a day and a half was that, unlike normal human beings, I reflexively reload my page by pressing ⌘+Shift+R, instead of ⌘+R. That Shift key was my undoing, because it triggers reload and IGNORES cached content. It turned out my Service Worker had been registered and running all this while 🤦♀️. Ah, the life of a web developer. Let’s put some authentication on this baby Okay, I actually took one look at Zell's demo app and realised it kind of got a bit out of hand because it was a free-for-all form input and anyone could submit anything they wanted. Which was kind of the point of the demo, so no issues there. But for my personal app, I'm perfectly capable of screwing around with the form submission all by myself, thank you. Authentication is a big thing, in that there are a tonne of ways to do it, some secure and some not, but for this particular use-case I just needed something incredibly simple. Like a htpasswd (you guys still remember what that is, right?). Basic HTTP authentication is good enough for an app which will only ever have one user. Ever. And surprise, surprise, there's an npm module for that. It's called http-auth, and implementation is relatively straight-forward. You can choose to protect a specific path, so in my case, I only needed to protect the page that allowed for modifications. Again, credentials in a separate file, kids. const auth = require('http-auth') const basic = auth.basic({ realm: 'Modify database' }, (username, password, callback) => { callback(username == username && password == password) }) app.get('/admin', auth.connect(basic), (req, res) => { // all the db connection, get/post, redirect, render stuff }) What about deployment? Ah yes, this part of development. If you ask me, the easiest way to do this is with full control of a server (any server), accessible via ssh. Because for all my short-comings in other areas (*ahem* Javascript), I'm fully capable of setting up a Linux server with ssh access plus some semblance of hardening. It's not hard if you can follow instructions to a T and besides, I've had lots of practice (I've lost count of the number of times I wiped a server to start over). But I'm a very very cheap person, who refuses to pay for stuff, if I can help it. I've also run out of ports on my router so those extra SBCs I have lying around will just have to continue to collect dust. The go-to free option seems to be Heroku. But it was hardly a smooth process. Chalk it up to my inexperience with node.js deployment on this particular platform. It was mostly issues with database credentials, because I originally stored them in a config.js file which I imported into my main app.js file. But I realised there wasn't a way for me to upload that file to Heroku without going through git, so scratch that plan. Let's do environment variables instead, since Heroku seems to have that built in. What took me forever to figure out was that on Heroku, you need to have the dotenv module for the .env file to be recognised (or wherever Heroku handles environment variables). Because on my local machine, it worked without the dotenv module, go figure. Wrapping up Really learned a lot from this, and got a working app out of it, so time well spent, I say. I also learned that it's actually pretty hard to find tutorials that don't use a truck-load of libraries. Not that I'm against libraries in general, but as a complete noob, it's a bit too magical for me. Sprinkle on the fairy dust a little later, thanks. Anyway, I'll be off working on the next ridiculous idea that pops into my mind, you should try it some time too 🤓. Originally published at on July 13, 2017. Discussion Hell yeah! Incredibly thorough writeup, any noobiness that persists is definitely not here for long. Thanks for the encouragement 🤗 !!! > But the advantage of not being a complete noob was that I knew which results to discard and which were useful This is key, as it is so easy to get lost in the results, chasing unicorns.
https://dev.to/huijing/built-my-first-crud-app
CC-MAIN-2020-50
refinedweb
3,421
72.05
I have received cipher documents which are believed to be secret messages. So, my mission is to break the encoding and reveal what the secrets are that each file contains. However,it is believed that each file contains a message protected with a "null cipher" which surround the real characters in messages with one or more "null" characters. So, to reveal the message, I have to read in each character in the file, but only print out every other, or every third, or every fourth character. I already built my project and it debugged successful. However, I can't decode the secret and I stuck right here. I am confusing and wondering that I have built it correctly or not. So please take a look over my project and give me your opinion. Thanks. #include <iostream> #include <cstdio> #include <cstdlib> #include <ctime> #include <iomanip> using namespace std; int main(int argc, char * argv[]) { int c, count = atoi(argv[1]); srand(time(NULL)); c = rand(); char b; do { for (int i = 1; i < count; i++) c = cin.get(); c = cin.get(); c = cin.get(); cout.put(c); }while(!cin.eof()); }
https://www.daniweb.com/programming/software-development/threads/389646/null-cipher
CC-MAIN-2017-17
refinedweb
189
72.26
Given a sorted array with n elements which is rotated at some point. There is a number key which needs to be searched in this array. Write a program which searches for this key in the array. The worst case time complexity of this solution should not exceed O(logn) . Example Test Cases Sample Test Case 1 Array: 5, 6, 9, 0, 1, 2, 3, 4 Key: 6 Expected Output: 1 Explanation: Since 1 is the index of 6 Solution Searching in a sorted array is a classic binary search problem and can be done in O(logn) time. Since this array is rotated along some point, normal binary search cannot be applied to it. However this problem can be solved by modifying the binary search (without affecting it’s time complexity) by making the following observations. Since the array was sorted (before rotation), we can think of this array as concatenation of two separate sorted arrays. for example array 5, 6, 9, 0, 1, 2, 3, 4can be thought of as two concatenation of 5, 6, 9and 0, 1, 2, 3, 4 Since, the 2 parts of the above array are themselves sorted, we can apply normal binary search on these parts. The only problem is that we don’t know the length of each of these two sorted parts. The 1st element of the whole array combined will always be greater than the last element. Let us call the 1st element as startand the last element as end. So it means that start > end. Now, when we apply binary search to the problem either of these 4 cases can happen: Case 1 : Both the arr[mid]and the keyare less than the end. It means they are in the same half and we can apply normal binary search. Case 2 : Both the arr[mid]and the keyare greater than start. It means they are in the same half and we can apply normal binary search. Case 3 : The keyis less than endbut arr[mid]is greater than start. It means in this case, we need to change direction (to get midin the same half as key) before applying normal binary search. So in this case we will do lo = mid + 1 Case 4 : The keyis greater than endbut arr[mid]is less than start. It means in this case, we need to change direction (to get midin the same half as key) before applying normal binary search. So in this case we will do hi = mid - 1 Therefore, to solve this problem we will start by applying normal binary search and then depending on which case we find, we will either apply normal binary search or we will try to switch direction and try to either go to the right half or to the left half. See the code below for implementation Implementation #include <iostream> #include <vector> using namespace std; // Used to check which case we are in and give appropriate direction. // 0 means Do normal binary search int getDirection(vector<int>& arr, int mid, int key) { int start = arr[0]; int end = arr[arr.size() - 1]; if (arr[mid] <= end && key <= end) { return (0); } else if (arr[mid] >= start && key >= start) { return (0); } else if (arr[mid] >= start && key < start) { return 1; } else { return -1; } } int main() { vector<int> arr = {5, 6, 9, 0, 1, 2, 3, 4 }; int key = 6; int lo = 0, hi = arr.size() - 1; int mid; int ans = -1; while (lo <= hi) { mid = lo + (hi - lo)/2; if (arr[mid] == key) { ans = mid; break; } else { int direction = getDirection(arr, mid, key); // Case 3 if (direction > 0) { lo = mid + 1; } // Case 4 else if (direction < 0) { hi = mid - 1; } // Case 1 and 2 (applying normal binary search) else { if (arr[mid] < key) { lo = mid + 1; } else { hi = mid - 1; } } } } if (ans == -1) { cout << "Key " << key << " not found\n"; } else { cout << "Key " << key << " found at " << ans << " position\n"; } return 0; }
https://prepfortech.in/interview-topics/modified-binary-search/searching-in-a-rotated-sorted-array
CC-MAIN-2021-17
refinedweb
655
63.93
Sharing extension with arguments does not start Hi, Does anyone have an sharing extension running successfully which receives arguments at the call? I don't even get the simplest program running which only contains print 'hello'. When I leave the shortcut parameter "Arguments" blank the program runs. When I set the arguments to any string or number the program is not called. Stefan Yes, see above: print 'hello';-) You don't really need code, one line printing some text (or sys.argv) is enough. Go to Settings / Share Extension Shortcuts -> create a new shortcut -> set Arguments to any value. After that run the extension by sharing any file. That's it. Sorry... I'm a little tired ! It works for me though. Prints hello. Did you try using the apex module? import appex if appex.is_running_extension(): #share url from safari print(appex.get_url()) Can reproduce: Wow... I just need to sleep. Sorry guys. I now see the problem too...!
https://forum.omz-software.com/topic/3331/sharing-extension-with-arguments-does-not-start/6
CC-MAIN-2020-29
refinedweb
159
78.85
You can subscribe to this list here. Showing 10 results of 10 I think that Hecl is mostly at a place where it's possible to create simple apps, like the shopping list application I published on my site. Having recordstore means that you can even do persistent data. Things could be especially interesting if you combined recordstore and http to dynamically fetch applications and run them from the web! In any case, I think it's time to work on making Hecl scripting super easy for those wishing to write scripts for their phones. My vague ideas on how to do that involve 1) a simple GUI that lets you update the .jar file 2) updated documentation for all the GUI commands. 3) more example scripts. -- David N. Welton - Linux, Open Source Consulting - I implemented a very simple recordstore interface: rs_list - lists all recordstores rs_get rsname - fetches data out of recordstore rsname and returns it. rs_put rsname data - puts data into recordstore rsname. I haven't tested this much at all to see what kinds of limits there are, and I suspect that varies from phone to phone in any case, but it does run on my phone. -- David N. Welton - Linux, Open Source Consulting - Hi, I've just put a new release up. If you're really interested, of course it's still probably better to keep up via CVS. Changes include: * Double(floating point numbers) support in J2SE version. * New lcdui widgets: "listbox", "choicegroup", "datefield", "alert". * Compiled in the http_micro extension. * Lots of minor fixes, tweaks, and improvements. Ciao, -- David N. Welton - Linux, Open Source Consulting - No, not Richard Stallman... but the storage system for phones. What would a good interface to this look like? What do people use this for, mostly? For the first cut at it, do we want to be low level, or aim for "just slop it all into memory" ? I think even a low level interface is going to deal with Hecl 'Things'. I guess the best thing is to just start hacking at it and see how it turns out, then change it if needs be:-) -- David N. Welton - Linux, Open Source Consulting - David N. Welton wrote: >. Well, I didn't want to do anything weird like return a list of two integers, so I decided to just divide by 1000 when getting, and multiply by 1000 when setting, and hope that this version of Hecl is not in use in 2038:-) -- David N. Welton - Linux, Open Source Consulting -. -- David N. Welton - Linux, Open Source Consulting - Hi all, I added another tweak to the GUICmds.java code - the ability to select which text type you want to use. It was the last little bit I needed in order to finish a 'real application' using Hecl: Nothing fancy, but I like that it utilizes the web for input, where it's easy, and keeps the cell phone UI very simple, with just a simple checklist. Actually trying to input shopping list items into a cell phone would be an excersize in pain, in my opinion:-) Anyway, it 'works for me', although there is certainly a lot of tweaking that could be done. Let me know what you think. Ciao, -- David N. Welton - Linux, Open Source Consulting - Hi everyone, The latest problem I have encountered is the differences between the List and ChoiceGroup widgets. These are supposed to provide radio buttons or checkboxes, depending on which attributes you specify, with List being the "full screen" version, and ChoiceGroup being an Item that can be included as one component of a form. Form items can register for callbacks via itemStateListener, which is called when the form item's state changes. Lists do not have this capability though, so you have no way to register callbacks when a List item is selected or deselected. To remedy this, I decided that I'd just make my own 'full screen list' by creating a form with one ChoiceGroup, which works something like this: public class ListBox extends Form { public ChoiceGroup cg = null; public ListBox(String title, int choicetype, String []choices) { super(title); cg = new ChoiceGroup("", choicetype, choices, null); this.append(cg); } public int append(String item) { return cg.append(item, null); } } some details ommitted, but that's the idea. So far, it seems to be working exactly as I needed - I was able to define a callback that performs an action when all checkboxes were checked. The only thing I can think of that I'd need an actual List for is to define some sort of menu that performs some action when an entry is selected, via the IMPLICIT Choice type. -- David N. Welton - Linux, Open Source Consulting - Hi, two things: I swapped out the sun 1.5 sdk on my machine for a 1.4 installation, and realized that -target cldc1.0 is a 1.5 feature, so I checked a preverifier and appropriate build instructions into CVS. This should make it possible to build J2ME Hecl with older Java SDK's. The emulator complains about running the 'http' command and then locks up. Warning: To avoid potential deadlock, operations that may block, such as networking, should be performed in a different thread than the commandAction() handler. The FAQ says this: When running my MIDlet, a security alert is displayed and cannot be dismissed. You are probably trying to access a protected API (for example, opening a connection) from your commandAction() method, which is locking the UI thread. Access to protected APIs should be done from a separate thread. Refer to the NetworkDemo for an example on how this can be done. So... damn. This complicates things in an unpleasant way. I'd hoped to put off dealing with Threads until some later date. Perhaps real phones don't lock like this, but I don't want to spend money testing it until I'm pretty sure things work ok! I understand that it's *possible*, but wish it would try just the same without just freezing up. -- David N. Welton - Linux, Open Source Consulting - I added the gauge, choicegroup, alert commands. They work pretty much like you'd expect. I also tweaked the 'listbox' command to work like this: listbox label foo list {"choice 1" "choice 2" apples oranges} With that, an initial version of all widgets in the 'high level' portion of the gui is pretty much there, modulo some bits and pieces. Canvas and Images are going to go into a separate package, so that those who don't require them don't incur the size penalty. I'm not sure I'm ready to tackle those right away though. Also to be considered is an implementation of ItemStateListener so that the GUI can react to changes not brought on by commands. After that comes an implementation of "RMS" so that Hecl can store things on cell phones. Ciao, -- David N. Welton - Linux, Open Source Consulting -
http://sourceforge.net/p/hecl/mailman/hecl-devel/?viewmonth=200511
CC-MAIN-2015-32
refinedweb
1,154
71.75
I am trying to write a program based off of a flowchart. The program is supposed to sort an array. I am having trouble getting it to sort though. I know it has something to do with my code, but I don't know what or where. Here is my code: Code:#include <iostream> using namespace std; int n; int a[6]; int main() { int i; cout << "How many elements do you want in the array?\nNote: It must be less than or equal to five and greater than 0. "; cin >> n; cout << endl; if((n > 5) || (n < 1)) { cerr << "The size of the array is not valid!\n\n"; return 0; } for(i = 1; i <= n; i++) { cout << "Enter a number to put in the array: "; cin >> a[i]; } cout << "\nOriginal array: "; for(i = 1; i <= n; i++) cout << a[i] << " "; void sort(int n, int a[]); cout << "\nSorted array: "; for(i = 1; i <= n; i++) cout << a[i] << " "; cout << endl; return 0; } void sort(int n, int a[]) { int j; cout << "sort test before call to move\n"; for(j = 1; j <= n-1; j++) { if(a[j] > a[j+1]) int move(int a[], int j); } cout << "sort test after call to move\n"; return; } int move(int a[], int j) { int k; int temp = a[j+1]; a[j+1] = a[j]; cout << "move test before call to findkay\n"; int Findkay(int k, int j, int a[], int temp); cout << "move test after call to findkay\n"; a[k] = temp; return k; } int Findkay(int k, int j, int a[], int temp) { k = j; int sw = 0; while((k > 1) && (sw = 0)) { if(a[k-1] > temp) { a[k] = a[k-1]; k = k-1; } else sw = 1; } cout << "findkay test\n"; return k; } So, the sort doesn't work at all. The user inputs the number of elements they want in the array and goes through the algorithm and is supposed to print out the sorted array. However, I don't think the sort function is even being called because I put in cout statements to make sure that it went to the function and the statement isn't being printed out.
http://forums.devshed.com/programming-42/sort-array-using-flowchart-help-509778.html
CC-MAIN-2017-51
refinedweb
368
72.73
Welcome to WebmasterWorld Guest from 50.19.0.90 <?php function checkUrl($url) { ini_set('default_socket_timeout', 7); $a = file_get_contents($url,FALSE,NULL,0,20); return ( ($a!= "") && ($http_response_header!= "") ); } ?> Check that "default_socket_timeout" is in seconds and not milliseconds. I think it is, but I'm not sure. TJ [edited by: trillianjedi at 7:32 pm (utc) on July 3, 2007] [jellyandcustard.com...] If you're on PHP4, just use:- $a = file_get_contents($url) The downside is it will download the whole page, which is a waste of bandwidth if you're just checking if something is alive or dead. You also need to check the manual for $http_response_header. Depending on your version of PHP it may not return an empty string for a 404. It might return full headers that you'll need to explode and check for a 404. Added:- Looking at your snippet I guess you're building a backlink checker, in which case you want to download the whole page anyway and RegEx for your domain inside <a href tags and ensure there isn't a no-follow or anything like that in there. Warning: file_get_contents() [function.file-get-contents]: php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /home/#*$!/public_html/backlinks2.php on line 13 Warning: file_get_contents() [function.file-get-contents]: failed to open stream: Permission denied in /home/#*$!/public_html/backlinks2.php on line 13 Turbo The timeout reflects to the time between 'open the site' and 'determine if the site is down', correct? No. That function works at the TCP/IP socket layer, so will be based on a timeout further down the stack than HTTP. Sounds to me like it's hanging waiting for an ACK. TCP/IP is blocking in nature, so your app is forced to sit and wait for a response (or a timeout from the OS) before it can do anything. The timeout will come from the OS, but the very nature of TCP/IP (and design of it) is such that it is allowed to be "down" for a period of time. So the PHP timeout may not happen until the underlying OS has timed-out and it sounds to me like it's getting around 9 seconds. It might be a good idea to go multi-threaded, but that would mean coding rather than scripting (I think - I've written anything multi-threaded in PHP so don't even know if it's possible).
https://www.webmasterworld.com/php/3385364.htm
CC-MAIN-2016-07
refinedweb
402
62.58
Contents Introduction to Financial Python Multiple Linear Regression Introduction In the last chapter we introduced simple linear regression, which has only one independent variable. In this chapter we will learn about linear regression with multiple independent variables. A simple linear regression model is written in the following form:\[ Y = \alpha + \beta X + \epsilon \] A multiple linear regression model with p variables is given by:\[ Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_p X_p + \epsilon \] Python Implementation In the last chapter we used the S&P 500 index to predict Amazon stock returns. Now we will add more variables to improve our model's predictions. In particular, we shall consider Amazon's competitors. import numpy as np import pandas as pd import quandl import matplotlib.pyplot as plt import statsmodels.formula.api as sm # Get stock prices quandl.ApiConfig.api_key = 'tAyfv1zpWnyhmDsp91yv' spy_table = quandl.get('BCIW/_SPXT') amzn_table = quandl.get('WIKI/AMZN') ebay_table = quandl.get('WIKI/EBAY') wal_table = quandl.get('WIKI/WMT') aapl_table = quandl.get('WIKI/AAPL') Then we fetch closing prices starting from 2016: spy = spy_table .loc['2016',['Close']] amzn = amzn_table.loc['2016',['Close']] ebay = ebay_table.loc['2016',['Close']] wal = wal_table .loc['2016',['Close']] aapl = aapl_table.loc['2016',['Close']] After taking log returns of each stock, we concatenate them into a DataFrame, and print out the last 5 rows: spy_log = np.log(spy.Close) .diff().dropna() amzn_log = np.log(amzn.Close).diff().dropna() ebay_log = np.log(ebay.Close).diff().dropna() wal_log = np.log(wal.Close) .diff().dropna() aapl_log = np.log(aapl.Close).diff().dropna() df = pd.concat([spy_log,amzn_log,ebay_log,wal_log,aapl_log],axis = 1).dropna() df.columns = ['SPY', 'AMZN', 'EBAY', 'WAL', 'AAPL'] df.tail() As before, we use the 'statsmodels' package to perform simple linear regression: simple = sm.ols(formula = 'amzn ~ spy', data = df).fit() print simple.summary() Similarly, we can build a multiple linear regression model: model = sm.ols(formula = 'amzn ~ spy + ebay + wal', data = df).fit() print model.summary() As seen from the summary table, the p-values for Ebay, Walmart and Apple are 0.174, 0.330 and 0.068 respectively, so none of them are significant at a 95% confidence level. The multiple regression model has a higher \( R^2 \) than the simple one: 0.254 vs 0.234. Indeed, \( R^2 \) cannot decrease as the number of variables increases. Why? If an extra variable is added to our regression model, but it cannot account for variations in the response (amzn), then its estimated coefficient will simply be zero. It's as though that variable was never included in the model, so \( R^2 \) will not change. However, it is not always better to add hundreds of variables or we will overfit our model. We'll talk about this in a later chapter. Can we improve our model further? Here we try the Fama-French 5-factor model, which is an important model in asset pricing theory. We will cover it in the later tutorials. The data needed are publicly available on French's website. We have saved a copy for convenience. The following code fetches the data. import urllib2 from datetime import datetime url = '' response = urllib2.urlopen(url) fama_table = pd.read_csv(response) # Convert time column into index fama_table.index = [datetime.strptime(str(x), "%Y%m%d") for x in fama_table.iloc[:,0]] # Remove time column fama_table = fama_table.iloc[:,1:] With the data, we can construct a Fama-French factor model: fama = fama_table['2016'] fama = fama.rename(columns = {'Mkt-RF':'MKT'}) fama = fama.apply(lambda x: x/100) fama_df = pd.concat([fama, amzn_log], axis = 1) fama_model = sm.ols(formula = 'Close~MKT+SMB+HML+RMW+CMA', data = fama_df).fit() print fama_model.summary() The Fama-French 5-factor model has a higher \( R^2 \) of 0.387. We can compare the predictions from simple linear regression and Fama-French multiple regression by plotting them together on one chart: result = pd.DataFrame({'simple regression': simple.predict(), 'fama_french': fama_model.predict(), 'sample': df.amzn}, index = df.index) # Feel free to adjust the chart size plt.figure(figsize = (15,7.5)) plt.plot(result['2016-7':'2016-9'].index,result.loc['2016-7':'2016-9','simple regression']) plt.plot(result['2016-7':'2016-9'].index,result.loc['2016-7':'2016-9','fama_french']) plt.plot(result['2016-7':'2016-9'].index,result.loc['2016-7':'2016-9','sample']) plt.legend() plt.show() Although it's hard to see from the chart above, the predicted return from multiple regression is closer to the actual return. Usually we don't plot the predictions to determine which model is better; we read the summary table. Model Significance Test Instead of using \( R^2 \) to assess whether our regression model is a good fit to the data, we can perform a hypothesis test: the F test. The null and alternative hypotheses of an F test are:\[ H_0: \beta_1 = \beta_2 = \dots = \beta_p = 0 \] \[ H_1: \text{At least one coefficient is not 0} \] We won't explain F test procedure in detail here. You just need to understand the null and alternative hypotheses. In the summary table of an F test, the 'F-statistic' is the F score, while 'prob (F-statistic)' is the p-value. Performing this test on the Fama-French model, we get a p-value of `2.21e-24` so we are almost certain that at least one of the coefficient is not 0. If the p-value is larger than 0.05, you should consider rebuilding your model with other independent variables. In simple linear regression, an F test is equivalent to a t test on the slope, so their p-values will be the same. Residual Analysis Linear regression requires that the predictors and response have a linear relationship. This assumption holds if the residuals are zero on average, no matter what values the predictors \( X_1, \dots, X_p \) take. Often it's also assumed that the residuals are independent and normally distributed with the same variance (homoskedasticity), so that we can contruct prediction intervals, for example. To check whether these assumptions hold, we need to analyse the residuals. In statistical arbitrage, residual analysis can also be used to generate signals. Normality The residuals of a linear model usually has a normal distribution. We can plot the residual's density to check for normality: plt.figure() #ols.fit().model is a method to access to the residual. fama_model.resid.plot.density() plt.show() As seen from the plot, the residual is normally distributed. By the way, the residual mean is always zero, up to machine precision: print 'Residual mean:', np.mean(fama_model.resid) [out]: Residual mean: -2.31112163493e-16 print 'Residual variance:', np.var(fama_model.resid) [out]: Residual variance: 0.000205113416293 Homoskedasticity This word is difficult to pronounce but not difficult to understand. It means that the residuals have the same variance for all values of X. Otherwise we say that 'heteroskedasticity' is detected. plt.figure(figsize = (20,10)) plt.scatter(df.spy,simple.resid) plt.axhline(0.05) plt.axhline(-0.05) plt.xlabel('x value') plt.ylabel('residual') plt.show() As seen from the chart, the residuals' variance doesn't increase with X. The three outliers do not change our conclusion. Although we can plot the residuals for simple regression, we can't do this for multiple regression, so we use statsmodels to test for heteroskedasticity: from statsmodels.stats import diagnostic as dia het = dia.het_breuschpagan(fama_model.resid,fama_df[['MKT','SMB','HML','RMW','CMA']][1:]) print 'p-value: ', het[-1] [out]:p-value of Heteroskedasticity: 0.144075842844 No heteroskedasticity is detected at the 95% significance level. You can also see our Documentation and Videos. You can also get in touch with us via Chat. Did you find this page helpful?
https://www.quantconnect.com/tutorials/introduction-to-financial-python/multiple-linear-regression
CC-MAIN-2020-16
refinedweb
1,277
53.07
The AWT provides java.awt.TextField class through the component box input text or just text box. This component enables the editing and input a single line of text in a very convenient way for the user.Hence, this control is called editable control. Any data entered through this component is treated primarily as text, should be explicitly converted to another type if desired. The TextField control is a sub-class of the TextComponent class. It can be created using the following constructors: TextField text=new TextField( ); TextField text=new TextField(int numberofchar); TextField text=new TextField(String str); TextField text=new TextField(String str, int numberolchar); The first constructor creates the default TextField. The second creates a TextField of size specified by the number of characters. The third type creates a text field with the default constructor. The text field control has the methods getText() and set Text(String str) for getting and setting the values in the text field. The text field can be made editable or non-editable by using the following methods: setEditable(Boolean edit); isEditable() The isEditable() method returns the boolean value true if the text field is editable and the value false if the text field is non-editable. Another important characteristic of the text field is that the echo character can be set, while entering password-like values. To set echo characters the following method is called: void setEchoCharacter(char ch) The echo character can be obtained and it can be checked it the echo character has been set using the following methods: getEchoCharacter(); boolean echoCharlsSet(); Since text field also generates the ActionEvent, the ActionListener interface can be used to handle any type of action performed in the text field. The selected text within the text field can be obtained by using the TextField Interface. The java.awt.TextField class contains, among others, the following constructors and methods: Following is a table that lists some of the methods of this class: import java.awt.*; import java.awt.event.*; public class TextFieldJavaExample extends Frame { TextField textfield; TextFieldJavaExample() { setLayout(new FlowLayout()); textfield = new TextField("Hello Java", 12); add(textfield); addWindowListener(new WindowAdapter(){ public void windowClosing(WindowEvent we) { System.exit(0); } }); } public static void main(String argc[]) { Frame frame = new TextFieldJavaExample(); frame.setTitle(" TextField in Java Example"); frame.setSize(320, 200); frame.setVisible(true); } }
https://ecomputernotes.com/java/awt-and-applets/textfield
CC-MAIN-2022-05
refinedweb
388
52.6
The training data is floating point. The epoches is set to 10000,then jump to custom function. In the function, the batch_num is 95. I set a batch of data to be 29 rows. Each row of data is 20 dimensions. I try to run on 3080. but it is too slow. How can I solve it? the main func: for epo in range(epoch): print(“In iteration: %d” % (epo + 1)) mse_all = model.SAE_GRU_Network(X, Y, U, time_steps, batch_size, hidden_size) loss = mse_all.cuda() opt.zero_grad() loss.backward() opt.step() print(epo+1, sqrt(mse_all/(batch_size*time_steps*data_dim))) the SAE_GRU_Network: def SAE_GRU_Network(self, X, Y, U, time_steps, batch_size, hidden_size): mse_batch = 0 h0 = torch.zeros(1, hidden_size, device=device) for i in range(batch_size): x_input = X[i][0] mse_steps = 0 h = h0 for j in range(time_steps): y_target = Y[i][j] u_input = U[i][j] h, y_output = self.one_step(h, x_input, u_input) x_input = y_output y_true = torch.reshape(y_target, [-1]) y_pred = torch.reshape(y_output, [-1]) mse_steps += torch.sum(torch.square(y_true - y_pred)) mse_batch += mse_steps return mse_batch
https://discuss.pytorch.org/t/my-torch-program-runs-very-slowly-please-help-me/123848
CC-MAIN-2022-27
refinedweb
173
71.31
When legacy code is giving you a hard time, it's a good time to learn yourself a few new tricks. You might not think there's a method to this chaos in your codebase, but there are a few. In this post, I share with you four prooven methods of working on improving legacy code. I've prepared a repository with exercises you can do to put your newly acquired skills to the test after completing the article. I use it when conducting the workshop on working with legacy code. Before starting my official career as a software developer and a few years in, I don’t think I came across the term legacy code too often. I was more focused on learning new technologies, new languages, new tools, new frameworks, and design patterns (mostly to better understand new frameworks). Some of them were new at the time (AngularJS), and some of them were just new for me (Ruby). I gravitated towards greenfield projects, and greenfield project opportunities gravitated towards me, mostly due to giving talks at the local meetups about new tech I’ve given a try. My initial intuition was that legacy code is something that sloppy Java developers are producing, and somehow, it doesn’t exist in a non-enterprise environment. The fact that at different stages of my education, I was using Java never found it very appealing, only magnified my lack of interest in the topic. I also didn’t pursue Java positions, but I’m not trying to bash on Java. After all, it was my language of choice for academia and still more fun than alternative C++ or C# (sorry if I hurt your feelings). That said, I wasn’t entirely wrong associating enterprise codebases with legacy code, but I was missing a thorough understanding as to why it’s that way (spoiler: project lifetime). It wasn’t until I started working on projects with many real users, got more interested in software design and architecture that I changed my outlook on legacy code. Legacy code Let’s start by trying to define a legacy code, so we have a common understanding of what is legacy code. - code maintained by someone other than the original author (since the term legacy) - code using outdated APIs or frameworks - code that feels easier to rewrite than change None of these bullet points is a strict definition. I’m sure you could, with ease, come up with an exception to these rules. Does it mean these rules are wrong? Not necessarily, but they are open for interpretation. This is less than ideal when you try to make everyone on your team to be on board. The simpler, the better. Still, this is more or less aligned with the intuition you may have about legacy code. Legacy code is a code that is hard to work with. Michael Feathers, the author of a timeless book Working Effectively with Legacy Code, came up with a simple definition of legacy code. - code without tests This is a controversial and adamant definition. According to this simple heuristic, not practicing TDD means producing legacy code every time. As someone who enjoys the peace of mind that comes from writing tests first, I find this definition very correct at my day to day work. It is impossible to rely on tests when I can remove a line of code and still have green tests. We need much better coverage to make changes with confidence in an unfamiliar codebase. The other definition I would like to share with you is the extension of the previous one. Dylan Beattie came up with this definition of legacy code. - code too scary to update and too profitable to delete I already explained that code without extensive tests makes me uneasy about changing it. Why is that? Without tests, the only source of truth about what the code genuinely does is the code itself, and now you're changing it. The tests are the second source of truth that can keep you in check, and it does that in an automated manner. I would argue that tests written before the code was, are better. But then you don't have legacy code! Since we do work with legacy code, we have to settle on the next best thing. Tests we write for already existing code. Tests we write even years after the original code have infinitely more value than not having tests at all. Why am I mumbling so much about tests? Because tests are the foundation of working with legacy code. Methods for tackling legacy code Identify Seams to break dependencies Legacy code not only doesn't have tests but often is written in a way that makes it very difficult to test in isolation. It might be accessing the database, making network requests, depends on global values, etc. To make code easier to test we can identify seams. A seam is a place where you can alter behavior without editing in that place. — Michael Feahters, Working Effectively with Legacy Code The most popular type of seam is an object seam. Each method of the given class is a potential seam. class MyComponent { performAwfulSideEffect() { ... } init() { this.performAwfulSideEffect(); ... } } What you can do to test the init method is to create a testable instance of MyComponent just for your tests. This way you can override the problematic method and change the init method behavior without changing its code. class MyTestAbleComponent extends MyComponent { performAwfulSideEffect() { // over my dead body } } This is all nice, but what if you don't practice object-oriented programming? Can you still use seams? You cannot use the object seam per se, but the idea still applies. The caveat is that in a badly written functional code, more often than in case of object-oriented programming, you have to create an opportunity for a seam to exist. - Access global values via a value/function passed as an argument (and compose functions) - Extract side effects to a separate function passed as an argument (and compose functions) - Access global values using wrapper in a separate module you can import (and replace in tests) - Extract side effects to a separate module you can import (and replace in tests) Sprout Method The Sprout Method can be applied when you don't have time to test the legacy code, but you have to add a new feature. Change that you have to make might be in a code that is difficult to run in an isolated unit test and there's no existing test suite for this module. You can write tests only for this new feature written as an entirely new code. With tests passing, you can go back to the legacy code and replace or extend old implementation with a call to your new and tested code. const InternalizationService = { translate(key) { ... } // bunch more code here } This is InternalizationService that relies on the global configuration. You have to add a new feature to format amounts of money. Instead of growing our untested InternalizationService, you decide to write a new code somewhere else and use it within InternalizationService. import { formatCurrency } from "./formatCurrency"; const InternalizationService = { translate(key) { ... } formatCurrency(amount) { return formatCurrency(globalConfig.locale, this.language, amount); } // bunch more code here } You can do the same with the component framework. Instead of adding even more complexity to the existing component you can extract a portion of the rendered tree to a separate component, implement additional logic there, and don't forget about covering it with unit tests first. Wrap Method The Wrap Method fulfills a similar purpose as Sprout Method. When you're under the time pressure, use this method when adding a new feature or modifying the existing code without testing the legacy code first. This is not an ideal situation, but we don't live in an ideal world. You take the existing function or method and change its name. Then, you implement a new function with the same name as the old function and keep the same signature (input arguments and output). Within the new function, you're going to call the old method. const ProductsService = { getProducts(category) { return this.oldGetProducts().filter(isInStock); } oldGetProducts(category) { // ... } } This gives you the ability to run new code before, after, or instead of the old implementation. You can use it to implement new precondition, wrap a function in an error handler, modify its inputs or outputs, and so on. While Sprout Method and Wrap Method allow you to practice TDD even in the worst parts of your legacy code, there's a tradeoff. You traded improving existing legacy code for the ease of implementing new code. In other words, you decided to make things not worse instead of making them better. Snapshot testing Snapshot is a captured output stored next to the test files or inlined in a test file. Each time your test runs the expected value is compared against the snapshot. You're probably already familiar with snapshot tests. Snapshots are widely used for smoke testing UI on the frontend (components) and UI layer on the backend (controllers and views). Snapshots don't fit TDD as they assume that code is already written and capture its output. Snapshots also introduce a lot of noise, blow up pull request size, and if they change often, people tend to update them without giving it much thought. function Component({ name }) { return <h1>{`Hello, ${name}!`}</h1>; } it("renders correctly", () => { expect(render(<Component name="Bob" />).container).toMatchInlineSnapshot(` <div> <h1> Hello, Bob! </h1> </div> `); }); While this doesn't work for TDD, it's an excellent trait of a tool for the exploration of code we don't fully understand or want to capture unfamiliar behavior. We can then keep using snapshots to make sure that any change we make is intentional and we didn't accidentally break something. We can keep adding more specific checks in tests as long as we don't need snapshots anymore. Keep in mind Before we wrap up, I would like to share a few thoughts on the general approach to legacy code. Sooner or later, the legacy code is going to break. It can be a week from now or ten years from now. Even when the code doesn't change, the surrounding world does. Good code is easy to unit test. As long it at least runs, even the worst, almost unreachable code can be tested using high-level integration tests. Focus on a happy path first and make sure your edge case bug fix doesn’t break the happy path. Do not let the perfect to be the enemy of good. Don't aim for perfection from day one. One or two unit tests are infinitely better than no tests. Make sure your setup is right. If you run code coverage with a tight enough budget that is reported on each PR, by adding one test to a previously untested module, you can fail the build by lowering the coverage. Impossible? This is because some testing frameworks account only for code that runs in tests already. From a business perspective, the rewrite is one of the worst things that can happen to team performance. There's a great chance that we underestimate the complexity of the existing solution and overestimate the ability of our team to deliver a new solution. Conclusion Put your knowledge to the test and play with the exercises I prepared! Create an issue if anything is missing and share your feedback by commenting here or on GitHub. If you have more time and want to get your hands dirty try Ugly Trivia Game Kata. Don't get bogged down by some legacy code in your project and seek a learning experience in it. I based this article on the "Working Effectively with Legacy Code" book, so it makes sense to read it and learn more if you enjoyed this post. Photo by Thao Le Hoang on Unsplash.
https://michalzalecki.com/fighting-legacy-javascript-code/
CC-MAIN-2020-45
refinedweb
1,991
62.78
On Tue, 2009-07-14 at 15:45 +0530, Jaswinder Singh Rajput wrote:> > >From 35c89da82e969a2fd157478940e7ecde1e19ccc4 Mon Sep 17 00:00:00 2001> > From: Ingo Molnar <mingo@elte.hu>> > Date: Fri, 10 Jul 2009 21:38:02 +0200> > Subject: [PATCH] dma-debug: Fix the overlap() function to be correct and readable> > > > Linus noticed how unclean and buggy the overlap() function is:> > > > - It uses convoluted (and bug-causing) positive checks for> > range overlap - instead of using a more natural negative> > check.> > > > - Even the positive checks are buggy: a positive intersection> > check has four natural cases while we checked only for three,> > missing the (addr < start && addr2 == end) case for example.> > > > - The variables are mis-named, making it non-obvious how the> > check was done.> > > > - It needlessly uses u64 instead of unsigned long. Since these> > are kernel memory pointers and we explicitly exclude highmem> > ranges anyway we cannot ever overflow 32 bits, even if we> > could. (and on 64-bit it doesnt matter anyway)> > > > All in one, this function needs a total revamp. I used Linus's> > suggestions minus the paranoid checks (we cannot overflow really> > because if we get totally bad DMA ranges passed far more things> > break in the systems than just DMA debugging). I also fixed a> > few other small details i noticed.> > > > Reported-by: Linus Torvalds <torvalds@linux-foundation.org>> > Cc: Joerg Roedel <joerg.roedel@amd.com>> > Signed-off-by: Ingo Molnar <mingo@elte.hu>> > ---> > lib/dma-debug.c | 24 ++++++++++++------------> > 1 files changed, 12 insertions(+), 12 deletions(-)> > > > diff --git a/lib/dma-debug.c b/lib/dma-debug.c> > index c9187fe..02fed52 100644> > --- a/lib/dma-debug.c> > +++ b/lib/dma-debug.c> > @@ -856,22 +856,21 @@ static void check_for_stack(struct device *dev, void *addr)> > "stack [addr=%p]\n", addr);> > }> > > > -static inline bool overlap(void *addr, u64 size, void *start, void *end)> > +static inline bool overlap(void *addr, unsigned long len, void *start, void *end)> > {> > - void *addr2 = (char *)addr + size;> > + unsigned long a1 = (unsigned long)addr;> > + unsigned long b1 = a1 + len;> > + unsigned long a2 = (unsigned long)start;> > + unsigned long b2 = (unsigned long)end;> > > > - return ((addr >= start && addr < end) ||> > - (addr2 >= start && addr2 < end) ||> > - ((addr < start) && (addr2 > end)));> > + return !(b1 <= a2 || a1 >= b2);> > }> > > > If b1 = a2 (overlap) then this function will say 0> If a1 = b2 (overlap) then this function will say 0> > if b1 > (a2 + infinite) which is not overlap this function will say 1> > I think we need to test both edges.> > So it should be :> > return ((a2 <= b1 && b2 >= a1) || (a1 <= b2 && a2 <= b1));> We can make it more beautiful like : return ((a2 <= b1 && b2 >= a1) || (a1 <= b2 && b1 >= a2));--JSR
http://lkml.org/lkml/2009/7/14/100
CC-MAIN-2014-49
refinedweb
433
58.72
Opened 7 years ago Closed 6 years ago #4747 closed (wontfix) [multi-db] patch to bring multiple-db-support up to date with rev 6110 Description These are my updates to the multiple-db-support branch, bringing it up to date with trunk (-r5559). I've attached incremental patches from -r4189 which is when it was last worked on and also on comprehensive patch against trunk. There are still some tests failing, and I'd like for people to be able to get eyes on the code to try and give work on the branch a bump start! Attachments (8) Change History (24) Changed 7 years ago by ben <ben.fordnz@…> comment:1 Changed 7 years ago by ben <ben.fordnz@…> - Needs documentation unset - Needs tests unset - Patch needs improvement unset Sorry, I couldn't add the incremental patches for some reason... This is the full patch to bring multiple-db-support up to date with rev 5598 comment:2 Changed 7 years ago by Simon G. <dev@…> - Summary changed from Updates to multiple-db-support to [multi-db] patch to bring multiple-db-support up to date with rev 5598 - Triage Stage changed from Unreviewed to Ready for checkin - Version changed from SVN to other branch comment:3 Changed 7 years ago by (removed) @ben, where are you mirroring this? Specifically, got a vcs branch for this? Changed 7 years ago by ben <ben.fordnz@…> Changed 7 years ago by ben <ben.fordnz@…> comment:4 Changed 7 years ago by ben <ben.fordnz@…> Koen has supplied a patch bringing multi-db up to -r6110. This is a cut-n-past of the email he sent to me regarding the changes he's made: - removed (most of) the schema evolution stuff (also the tests) - get_db_prep_save gets the model instance so it can determine what backend it uses (needed for mysql exception in datetimefield - queryset gets set in manager since some backends have custom querysets (eg oracle) - (hopefully) all quote_name references have been resolved to the models connection ops via model._default_manager.db.connection.ops.quote_name - query.py: quote_only_if_word uses the backends quote_name (passed in) - management stuff: syncdb put model connection determination all around, seems to work, needs testing - validation: needs a closer look - sql: table_list ok, create ok, delete ?, flush NOT OK, others TO DO - thread_isolation test failing : TODO - test_client test failing (flush?): TODO This means Unicode, backend and management refactoring should be more or less ok. There are some things to look at, but I need a break now. I have not done to much testing yet though. I’ll see when I can get round to that (maybe later this evening when the kids are asleep). I hope this helps. Koen comment:5 Changed 7 years ago by micsco - Needs documentation set - Needs tests set - Summary changed from [multi-db] patch to bring multiple-db-support up to date with rev 5598 to [multi-db] patch to bring multiple-db-support up to date with rev 6110 - Triage Stage changed from Ready for checkin to Accepted comment:6 Changed 7 years ago by levity@… it doesn't work for me. i get: Traceback (most recent call last): File "/home/cdn/ui/scripts/send_alerts.py", line 1, in <module> from tools.alert import email_live_alerts File "/home/cdn/ui/scripts/tools/__init__.py", line 12, in <module> from django.db import connection File "/usr/local/lib/python2.5/site-packages/django/db/__init__.py", line 356, in <module> IntegrityError = backend.IntegrityError File "/usr/local/lib/python2.5/site-packages/django/db/__init__.py", line 325, in __getattr__ **self.__kw)) File "/usr/local/lib/python2.5/site-packages/django/db/__init__.py", line 352, in <lambda> lambda: connections[_default].backend) File "/usr/local/lib/python2.5/site-packages/django/db/__init__.py", line 147, in __getitem__ return self.connect(k) File "/usr/local/lib/python2.5/site-packages/django/db/__init__.py", line 170, in connect cnx[name] = connect(settings) File "/usr/local/lib/python2.5/site-packages/django/db/__init__.py", line 38, in connect return ConnectionInfo(settings, **kw) File "/usr/local/lib/python2.5/site-packages/django/db/__init__.py", line 55, in __init__ self.connection = self.backend.DatabaseWrapper(settings) TypeError: __init__() takes exactly 1 argument (2 given) i'm using two mysql backends. comment:7 Changed 7 years ago by ben <ben.fordnz@…> Without some more detail this might be a bit difficult to diagnose... I have noticed that mysql_old DatabaseWrapper's init looks like: def init(self, kwargs) Although that still doesn't explain why it chokes on the settings argument... Can you provide more detail: - What version of trunk - Which patch - A diff of backends/mysql/base.py (or the whole file) - If you could have a play inside ipython using pdb that would probably throw up some interesting info too! Cheers comment:8 Changed 7 years ago by anonymous i'm using mysql, not mysql_old... but it also has __init__(self, **kwargs). but the call to it from db/__init__.py is DatabaseWrapper(settings), without a keyword specified, so it worked when i changed the constructor to __init__(self, *args, **kwargs). but then i ran into another error, so i thought maybe i was doing something more fundamentally wrong with the patch. all i did was pull a fresh copy of r6110, and apply multi-db-6110.patch. comment:9 Changed 7 years ago by ben <ben.fordnz@…> Ok, I just had a quick look through that patch and it doesn't seem to have changed any of the base.py files in respective backends. I'm not sure why this is, I know Koen looked at a lot of things and tried to refactor a bit to take advantage of recent changes in trunk. All I can say is that it looks pretty different from my earlier patch, and I'm not sure why! I wish I had time to look into this for you today, but I really can't, I have way too much on. All I can suggest is sending an email to Koen and ask him to help out (He's in Belgium so it'll be a few more hours until he's up!) Sorry I can't be of more help! Ben Changed 7 years ago by Koen Biermans <koen.biermans@…> comment:10 Changed 7 years ago by Koen Biermans <koen.biermans@…> - Patch needs improvement set Ok, the patch is now up with 6433. Beware! This was and is work in progress. Especially the management.py commands are still in a bad state. A number of the commands should be made to accept a parameter for the named database connection (which is not the case now). There are also other areas where I think things need to be redone. For me now a lot seems to work (using existing databases that is). Sorry that I haven't had much time to proceed on this any further. Koen Changed 7 years ago by Koen Biermans <koen.biermans@…> comment:11 Changed 7 years ago by mail@… according to the patch multidb_6453.diff: there's an error with filefields. In django/db/models/fields/init.py the method get_db_prep_save for class FileField has to be changed to this: def get_db_prep_save(self, model_instance, value): "Returns field's value prepared for saving into a database." # Need to convert UploadedFile objects provided via a form to unicode for database insertion if value is None: return None return Field.get_db_prep_save(self, model_instance, unicode(value)) Changed 6 years ago by Koen Biermans <koen.biermans@…> comment:12 Changed 6 years ago by Koen Biermans <koen.biermans@…> A new effort to bring multiple database support into trunk. The patch is from trunk r7534. A lot seems to be working, but there are still some areas that need looking into. I corrected some of the management commands (the sql ones), but eg loaddata and sequencereset are still to be looked at. It is passing most of the tests (with sqlite that is), except the thread-isolation that came from the multidb branch. I have only tried it with sqlite and postgres, so for instance oracle will probably not work (since I am not yet passing separate query classes in for those backends that have a custom query class). I'm continuing the work, but please feel free to help out! Changed 6 years ago by Koen Biermans <koen.biermans@…> comment:13 Changed 6 years ago by mail@… I cannot get the patch working. When trying to apply it, it always reports failures: # patch -p0 --verbose --dry-run < ./multidb.diff Hmm... Looks like a unified diff to me... The text leading up to this was: -------------------------- |=== django/test/simple.py |================================================================== |--- django/test/simple.py (/mirror/django/trunk) (revision 5420) | |+++ django/test/simple.py (/local/django/mymultidb) (revision 5420) | -------------------------- Patching file django/test/simple.py using Plan A... Hunk #1 succeeded at 137 with fuzz 2 (offset -1 lines). Hmm... The next patch looks like a unified diff to me... The text leading up to this was: -------------------------- | |=== django/test/utils.py |================================================================== |--- django/test/utils.py (/mirror/django/trunk) (revision 5420) | |+++ django/test/utils.py (/local/django/mymultidb) (revision 5420) | -------------------------- Patching file django/test/utils.py using Plan A... Hunk #1 FAILED at 1. 1 out of 1 hunk FAILED -- saving rejects to file django/test/utils.py.rej Hmm...missing header for unified diff at line 39 of patch The next patch looks like a unified diff to me... can't find file to patch at input line 39 Perhaps you used the wrong -p or --strip option? The text leading up to this was: -------------------------- | from django.dispatch import dispatcher | -------------------------- File to patch: (and lots more... even at django/models/sql/where.py) it seems, that only the first part per file of the diff gets applied. What's wrong here? Changed 6 years ago by Koen Biermans <koen.biermans@…> new attempt to generate correct diff comment:14 Changed 6 years ago by Koen Biermans <koen.biermans@…> Sorry about that, I created the diff with svk (on windows), which obviously is not working right. I created one with svn now, I hope this one is allright. comment:15 Changed 6 years ago by Koen Biermans <koen.biermans@…> Ben Ford set up a mercurial repository for new work on multiple databases at. A track setup is available at. comment:16 Changed 6 years ago by mtredinnick - Resolution set to wontfix - Status changed from new to closed The multi-db is no longer active. Alternative approaches are being considered for trunk. Full Patch -r4189:5589
https://code.djangoproject.com/ticket/4747
CC-MAIN-2014-23
refinedweb
1,761
66.64
Exploring AWS CDK - Loading DynamoDB with Custom Resources Matt Morgan ・7 min read One of the reasons AWS CDK has me so intrigued is the promise of being able to spin up environments in minutes. If I can provision all my infrastructure, databases and applications with a single structure that is source controlled, I can do all kinds of things most engineering teams have only dreamed of: - Run N test environments to avoid logjams/branch conflicts. - Team or individual developer sandboxes spun up (and down) in minutes. - Isolated environments for CI/CD and test automation strategies. - Staging/demo/eval/load test environments on demand and discarded after use. - Customer isolation into separate accounts or VPCs. Managing data can be somewhat tricky when it comes to trying to pull off something like this so I really wanted to find out if I could use CDK to load the database I've just provisioned. A fresh developer account with all infrastructure and apps provisioned but NO DATA AT ALL is probably not going to deliver the smooth experience I'm striving for. So how can CDK help me meet this goal? Table of Contents - CDK and Tools Review - tl;dr - DynamoDB - Create a Table - AWS Custom Resource - Fake Friends via Faker - Call the API - Make it Go Faster! - And Faster! - Unlimited Data! - Next Steps CDK and Tools Review I explained my thoughts on how to set up CDK projects in my last article. If you want to know why I've changed some of the project setup or my ideas about how linting should be done, it's all there. tl;dr Skip the article and check out the code, if you prefer. DynamoDB DynamoDB is the managed nosql solution from AWS. I'm not going to do a deep dive into DynamoDB here. I chose DynamoDB for this example because it's serverless and fully managed. That'll make it cheap to play around with and fast to provision. I haven't done it yet, but I'm confident we could apply similar techniques to RDS. Create a Table There's no need to create schemas or define columns with DynamoDB. I only need to create a Table and specify its PartitionKey attribute. Naturally this is simple to do in CDK. import { AttributeType, Table } from '@aws-cdk/aws-dynamodb'; import { Construct, RemovalPolicy, Stack, StackProps } from '@aws-cdk/core'; export class CdkDynamoCustomLoaderStack extends Stack { constructor(scope: Construct, id: string, props?: StackProps) { super(scope, id, props); const tableName = 'friends'; new Table(this, 'FriendsTable', { tableName, partitionKey: { name: 'id', type: AttributeType.STRING }, removalPolicy: RemovalPolicy.DESTROY, }); } } I'm creating a table called friends. Since the life of a developer is lonely, I will use an AWS Custom Resource to generate some friends. AWS Custom Resource It's a bit daunting at first to think I'm just learning CDK and I already want to go ahead and start creating custom resources, but actually they are pretty simple and straightforward to use. There are two strategies supported by CDK, Provider Framework and Custom Resources for AWS APIs. Provider Framework lets me write my own custom lambda handler for resource lifecycle events while Custom Resources for AWS APIs lets me call AWS APIs during my deployment. This is going to be the simpler option so it's what I'll use in this article. Fake Friends via Faker I like using Faker to generate fake data. It has a lot of great options and is almost always good for a laugh. My plan is that I will use an AWS API to insert a fake friend record into the database I've just provisioned. To do that, I'll need a way to generate that data. In order to keep things simple, I'll just add a private method to my stack that knows how to do this. import { commerce, name, random } from 'faker'; // now inside my stack constructor private generateItem = () => { return { id: { S: random.uuid() }, firstName: { S: name.firstName() }, lastName: { S: name.lastName() }, shoeSize: { N: random.number({ max: 25, min: 1, precision: 0.1 }) }, favoriteColor: { S: commerce.color() }, }; }; Each attribute specifies the type, in this case S for string and N for number. If I were using mysql instead of DynamoDB, this would probably be a sql string. My linter doesn't like the fact that the above method doesn't specify a return type and I like the idea of defining my data types so I'm going to create an interface. interface IFriend { id: { S: string }; firstName: { S: string }; lastName: { S: string }; shoeSize: { N: number }; favoriteColor: { S: string }; } Note that the official TypeScript style guide says not to prefix your interface, but my linting rule expects it. I'm just not going to get into it right now. Call the API I'll use the AwsCustomResource constructor to call the DynamoDB API. What CDK is going to do here is create a lambda function and use the SDK for JavaScript to make the call. import { AwsCustomResource } from '@aws-cdk/custom-resources'; // inside constructor new AwsCustomResource(this, 'initDBResource', { onCreate: { service: 'DynamoDB', action: 'putItem', parameters: { TableName: tableName, Item: this.generateItem(), }, physicalResourceId: 'initDBData', }, }); This code will create a lambda function that invokes the AWS JavaScript SDK. It will call putItem on the DynamoDB import and pass it my parameters. I can explore this API in the SDK docs, but unfortunately not in the CDK types as they are not narrow enough. Maybe some day. Note that this creates a resource with the given ID and executes this API call when it's created. There are onUpdate and onDelete calls available too. With the above code, I can npm run build (or watch) and cdk deploy and I'll find my table gets created and has a single friend in it. Since I used onCreate, the API call is only made on my first deploy - when the Custom Resource is created. If I changed that to onUpdate, then I'd get a new one every time I deploy. To break that down just a little more, when I npm run build, that transpiles the TypeScript code into JavaScript. I now have JavaScript code that calls some faker methods and eventually produces a cloudformation template. If I'm putting programming structures like conditional statements and loops into my CDK code, it's really important to understand when those conditionals and loops will be evaluated, and that is when the template is generated. Make it Go Faster! Adding just one record on startup might work for some use cases, but what if that's just not enough data to be useful? DynamoDB has a batchWriteItem method that might help. That lets me put 25 items into my table in a single API call. I'm going to add another private method that will help me generate data in batches of 25. private generateBatch = (batchSize = 25): { PutRequest: { Item: IFriend } }[] => { return new Array(batchSize).fill(undefined).map(() => { return { PutRequest: { Item: this.generateItem() } }; }); }; Now I just need to swap putItem with batchWriteItem and update my parameters block to look like this: parameters: { RequestItems: { [tableName]: this.generateBatch(), }, }, batchWriteItem allows writes to multiple tables, so the payload is just a little different - I specify the table per item I want to insert. And Faster! Now what if 25 items still aren't enough? I could put my resource in a loop. for (let i = 0; i < 10; i++) { new AwsCustomResource(this, `initDBResourceBatch${i}`, { onCreate: { service: 'DynamoDB', action: 'batchWriteItem', parameters: { RequestItems: { [tableName]: this.generateBatch(), }, }, physicalResourceId: `initDBDataBatch${i}`, }, }); } This will generate 250 items. I could loop even more times, but eventually I will hit the limit of how large my cloudformation template can be. This technique can write hundreds of items, but likely not thousands and definitely not tens or hundreds of thousands. Unlimited Data! If I need to generate more than a few hundred items, I can use the Provider Framework and write my own lambda function to do exactly what I want. Maybe I'll give that a shot in a future post. For truly large amounts of data, I might need to start looking at Data Pipeline. Next Steps I wouldn't consider this example ready for wide use yet, but I've gained a pretty good understand of Custom Resources and their use. I think to get around template size limits, what I'd really want to do is upload some kind of csv or json payload to S3 and ingest that via lambda when I create my resources. I would also want to separate my concerns by publishing this as a separate construct or at least importing it into my main stack, not just adding private members to the class. Hope this was helpful and informative. Would be glad to see others experiences with loading data via CDK or cloudformation (or even other means) in the comments! Health issues you face being a Developer 🏥 You may be a JavaScript Developer or a Ruby Developer, but there is one thing ... Hi Matt, I wrote an article on Importing data into DynamoDB as fast as possible with as little as possible effort. rehanvdm.com/serverless/dynamodb-i... So you can put your Custom Resource on steroids if you rather pass the S3 path to the data you want to import as a param. Then stream from S3 and write to Dynamo in parallel. I only started to play with CDK a week ago and absolutely love it, it is a must for anyone doing raw cloud formation. Those are some great insights, thanks Rehan! I've been working on generating the data in a lambda and loading it - this should help a lot. I also think that teams might want to check a csv into source control representing different scenarios (for test automation, for example) that could get automatically provisioned and streamed to the DB. Yes brilliant idea, new environments will then have consistent data after being created, great for testing scenarios.
https://dev.to/elthrasher/exploring-aws-cdk-loading-dynamodb-with-custom-resources-jlf?utm_source=newsletter&utm_medium=email&utm_content=offbynone&utm_campaign=Off-by-none%3A%20Issue%20%2371
CC-MAIN-2020-05
refinedweb
1,657
62.68
I’ve been a Python programmer pretty much full time for the last 7 or 8 years, so I keep an eye out for new tools to help me with this. I’ve been using Visual Studio Code for a while now, and I really like it. Microsoft have just announced Pylance, the new language server for Python in Visual Studio Code. The language server provides language-sensitive help like spotting syntax errors, providing function definitions and so forth. Pylance is based on the type checking engine Pyright. Python is a dynamically typed language but recently has started to support type annotations. Dynamic typing means you don’t tell the interpreter what “type” a variable is (int, string and so forth) you just use it as such. This contrasts with statically typed languages where for every variable and function you define a type as you write your code. Type annotations are a halfway house, they are not used by the Python interpreter but they can be used by tools like Pylance to check code, making it more likely to run correctly on first go. Pylance provides a range of “Intellisense” code improvement features, as well as type annotation based checks (which can be switched off). I was interested to use the type annotations checking functionality since one of the pleasures of working with statically typed languages is that once you’ve satisfied your compiler that all of the types are right then it has a better chance of running correctly than a program in a dynamically typed language. I will use the write_dictionary function in my little ihutilities library as an example here, this function is defined in the file io_utils.py. The appropriate type annotation for write_dictionary is: def write_dictionary(filename: str, data: List[Dict[str,Any]], append:Optional[bool]=True, delimiter:Optional[str]=",") -> None: Essentially each variable is followed by a colon, and then a type (i.e str). Certain types are imported from the typing library (Any, List, Optional and Dict in this instance). We supply the types of the elements of the list, or dictionary. The Any type allows for any type. The Optional keyword is used for optional parameters which can have a default value. The return type is put at the end after the ->. In a *.pyi file described below, the function body is replaced with ellipsis (…). Actually the filename type hint shouldn’t be string but I can’t get the approved type of Union[str, bytes, os.PathLike] to work with Pylance at the moment. As an aside Pylance spotted that two imports in the io_utils.py library were unused. Once I’d applied the type annotation to the function definition it inferred the types of variables in the code, and highlighted where there might be issues. A recurring theme was that I often returned a string or None from a function, Pylance indicated this would cause a problem if I tried to measure the length of None. There a number of different ways of providing typing information, depending on your preference and whether you are looking at your own code, or at a 3rd party library: - Types provided at definition in the source module – this is the simplest method, you just replace the function def line in the source module file with the type annotated one; - Types provided in the source module by use of *.pyi files – you can also put the type-annotated function definition in a *.pyi file alongside the original file in the source module in the manner of a C header file. The *.pyi file needs to sit in the same directory as its *.py sibling. This definition takes precedence over a definition in the *.py file. The reason for using this route is that it does not bring incompatible syntax into the *.py files – non-compliant interpreters will simply ignore *.pyi files but it does clutter up your filespace. Also there is a risk of the *.py and *pyi becoming inconsistent; - Stub files added to the destination project – if you import write_dictionary into a project Pylance will highlight that it cannot find a stub file for ihutilities and will offer to create one. This creates a `typings` subdirectory alongside the file on which this fix was executed, this contains a subdirectory called `ihutilities` in which there are files mirroring those in the ihutilities package but with the *.pyi extension i.e. __init__.pyi, io_utils.py, etc which you can modify appropriately; - Types provided by stub-only packages – PEP-0561 indicates a fourth route which is to load the type annotations from a separate, stub only, module. - Types provided by Typeshed – Pyright uses Typeshedfor annotations for built-in and standard libraries, as well as some popular third party libraries; Type annotations were introduced in Python 3.5, in 2015, so are a relatively new language feature. Pyright is a little over a year old, and Pylance is a few days old. Unsurprisingly documentation in this area is relatively undeveloped. I found myself looking at the PEP (Python Enhancement Proposals) references as often as not to understand what was going on. If you want to see a list of relevant PEPs then there is a list on the Pyright README.md, I even added one myself. Pylance is a definite improvement on the old Python language server which was itself more than adequate. I am currently undecided about type annotations, the combination of Pylance and type annotations caught some problems in my code which would only come to light in certain runtime circumstances. They seem to be a bit of an overhead which I suspect I would only use for frequently used library routines, and core code which gets run a lot and is noticeable by others when it fails. I might start by adding in some *.pyi files to my active projects. 1 ping
http://www.ianhopkinson.org.uk/2020/07/type-annotations-in-python-an-adventure-with-visual-studio-code-and-pylance/
CC-MAIN-2021-17
refinedweb
977
60.75
Armstrong4,941 Points Regular Expressions in Python Players and Class Definitions Challenge... I'm not sure what I'm doing wrong. I've got the first part of the challenge done, but I'm having trouble getting the class right. import re string = '''Love, Kenneth: 20 Chalkley, Andrew: 25 McFarland, Dave: 10 Kesten, Joy: 22 Stewart Pinchback, Pinckney Benton: 18''' line = re.compile(r'^(?P<last_name>[-\w ]+),\s(?P<first_name>[-\w ]+)\:\s(?P<score>[0-9]+)$', re.X|re.M) players = line.match(string) class Player: line = re.compile(r'^(?P<last_name>[-\w ]+),\s(?P<first_name>[-\w ]+)\:\s(?P<score>[0-9]+)$', re.X|re.M) def __init__(self, inputString): self.last_name = '' self.first_name = '' self.score = '' self.players = line.match(inputString) player = Player(string) 1 Answer jcorum71,813 Points Robert, congrats on the first part. Here's a suggestion for the second: class Player: def __init__(self, last_name, first_name, score): self.last_name = last_name self.first_name = first_name self.score = score There's no need to repeat line in the class, and the init needs to have 4 parameters, rather than 2. Further, the purpose of the init is to provide values for the variables, so each needs to be set to the appropriate value being passed in.
https://teamtreehouse.com/community/regular-expressions-in-python-players-and-class-definitions-challenge
CC-MAIN-2022-27
refinedweb
206
59.5
UART handler error Hello, When I call uart irq like this: uart.irq(trigger = UART.RX_ANY, handler = hand) and hand function looks like this: def hand(): return I've got this error: Uncaught exception in callback handler TypeError: And another thing is that when I write: bytes = UART.read() I've got this error: argument num/types mismatch What can cause those problems? This post is deleted! @michalt38 said in UART handler error: I've got WiPy 1 I can not help more because i have only Wipy2.0. Maybe someone else... @michalt38 Do you have Wipy1 or Wipy2.0? And I found UART.irq doc here: This doc is for Wipy1. For Wipy2.0 doc are here uart is here @livius Sorry, it was UART.read() and it is solved And I found UART.irq doc here: Where do you found uart.irq docs? and what do you pass as argument of UART.write() ? It can not be empty - and it must by bytes()
https://forum.pycom.io/topic/917/uart-handler-error/2?lang=en-US
CC-MAIN-2021-04
refinedweb
165
86.1
Podman and Buildah for Docker users >>IMAGE. How does Docker work?: - Pull and push images from an image registry - Make copies of images in a local container storage and to add layers to those containers - Commit containers and remove local container images from the host repository - Ask the kernel to run a container with the right namespace and cgroup, etc.: - A single process could be a single point of failure. - This process owned all the child processes (the running containers). - If a failure occurred, then there were orphaned processes. - Building containers led to security vulnerabilities. - All Docker operations had to be conducted by a user (or users) with the same full root authority.).. - What is Buildah and why might I need it? Installing Podman Podman commands are the same as Docker’s? Podman and container images. Container images are compatible between Podman and other runtimes helps users move to Kubernetes. What is Buildah and why would I use: - It allows for finer control of creating image layers. This is a feature that many container users have been asking for for a long time. Committing many changes to a single layer is desirable. - Buildah’s can build images from scratch, that is, images with nothing in them at all. Nothing. In fact, looking at the container storage created as a result of a. Conclusion I hope this article has been useful and will help you migrate to using Podman (and Buildah) confidently and successfully. For more information: - Podman.io and Buildah.io project web sites - github.com/containers projects (get involved, get the source, see what’s being developed): Related Articles - Containers without daemons: Podman and Buildah available in RHEL 7.6 and RHEL 8 Beta - Podman: Managing pods and containers in a local container runtime - Managing containerized system services with Podman (Use systemd to manage your podman containers) - Building a Buildah Container Image for Kubernetes - Podman can now ease the transition to Kubernetes and CRI-O - Security Considerations for Container Runtimes (Video of Dan Walsh’s talk from KubeCon 2018) - IoT edge development and deployment with containers through OpenShift: Part 1 (Building and testing ARM64 containers on OpenShift using podman, qemu, binfmt_misc, and Ansible) Get Red Hat Enterprise Linux RHEL is known as the popular enterprise deployment platform, but it's a great agile platform for building hybrid applications – across cloud, physical, virtual, and container-centric infrastructures - using the latest development tools.START WITH RED HAT LINUX.
https://developers.redhat.com/blog/2019/02/21/podman-and-buildah-for-docker-users/?sc_cid=701f20000012i69AAA
CC-MAIN-2019-35
refinedweb
407
54.83
This section contains miscellaneous recipes for solving problems in docassemble. Require a checkbox to be checked Using validation code question: | You must agree to the terms of service. fields: - I agree to the terms of service: agrees_to_tos datatype: yesnowide validation code: | if not agrees_to_tos: validation_error("You cannot continue until you agree to the terms of service.") --- mandatory: True need: agrees_to_tos question: All done. Using datatype: checkboxes question: | You must agree to the terms of service. fields: - no label: agrees_to_tos datatype: checkboxes minlength: 1 choices: - I agree to the terms of service validation messages: minlength: | You cannot continue unless you check this checkbox. --- mandatory: True need: agrees_to_tos question: All done Use a variable to track when an interview has been completed One way to track whether an interview is completed is to set a variable when the interview is done. That way, you can inspect the interview answers and test for the presence of this variable. objects: - user: Individual --- question: | What is your name? fields: - First name: user.name.first - Last name: user.name.last --- mandatory: True code: | user.name.first user_finish_time final_screen --- code: | user_finish_time = current_datetime() --- event: final_screen question: | Goodbye, user! buttons: Exit: exit You could also use Redis to store the status of an interview. objects: - user: Individual - r: DARedis --- question: | What is your name? fields: - First name: user.name.first - Last name: user.name.last --- mandatory: True code: | interview_marked_as_started user.name.first interview_marked_as_finished final_screen --- code: | redis_key = user_info().filename + ':' + user_info().session --- code: | r.set(redis_key, 'started') interview_marked_as_started = True --- code: | r.set(redis_key, 'finished') interview_marked_as_finished = True --- event: final_screen question: | Goodbye, user! buttons: Exit: exit Exit interview with a hyperlink rather than a redirect Suppose you have a final screen in your interview that looks like this: mandatory: True code: | kick_out --- event: kick_out question: Bye buttons: - Exit: exit url: When the user clicks the “Exit” button, an Ajax request is sent to the docassemble server, the interview logic is run again, and then when the browser processes the response, the browser is redirected by JavaScript to the url (). If you would rather that the button act as a hyperlink, where clicking the button sends the user directly to the URL, you can make the button this way: mandatory: True code: | kick_out --- event: kick_out question: Bye subquestion: | ${ action_button_html("", size='md', color='primary', label='Exit', new_window=False) } Ensure two fields match question: | What is your e-mail address? fields: - E-mail: email_address_first datatype: email - note: | Please enter your e-mail address again. datatype: email - E-mail: email_address datatype: email - note: | Make sure the e-mail addresses match. js hide if: | val('email_address') != '' && val('email_address_first') == val('email_address') - note: | <span class="text-success">E-mail addresses match!</span> js show if: | val('email_address') != '' && val('email_address_first') == val('email_address') validation code: | if email_address_first != email_address: validation_error("You cannot continue until you confirm your e-mail address") --- mandatory: True question: | Your e-mail address is ${ email_address }. Progresive disclosure modules: - .progressivedisclosure --- features: css: progressivedisclosure.css --- template: fruit_explanation subject: | Tell me more about fruit content: | ##### What is a fruit? A fruit is the the sweet and fleshy product of a tree or other plant that contains seed and can be eaten as food. --- template: favorite_explanation subject: | Explain favorites content: | ##### What is a favorite? If you have a favorite something, that means you like it more than you like other things of a similar nature. Add progressivedisclosure.css to the “static” data folder of your package. a span.pdcaretopen { display: inline; } a span.pdcaretclosed { display: none; } a.collapsed .pdcaretopen { display: none; } a.collapsed .pdcaretclosed { display: inline; } Add progressivedisclosure.py as a Python module file in your package. import re __all__ = ['prog_disclose'] def prog_disclose(template, classname=None): if classname is None: classname = ' bg-light' else: classname = ' ' + classname.strip() the_id = re.sub(r'[^A-Za-z0-9]', '', template.instanceName) return u"""\ <a class="collapsed" data-<span class="pdcaretopen"><i class="fas fa-caret-down"></i></span><span class="pdcaretclosed"><i class="fas fa-caret-right"></i></span> {}</a> <div class="collapse" id="{}"><div class="card card-body{} pb-1">{}</div></div>\ """.format(the_id, template.subject_as_html(trim=True), the_id, classname, template.content_as_html()) This uses the collapse feature of Bootstrap. New object or existing object The object datatype combined with the disable others can be used to present a single question that asks the user either to select an object from a list or to enter information about a new object. Another way to do this is to use show if to show or hide fields. This recipe gives an example of how to do this in an interview that asks about individuals. objects: - boss: Individual - employee: Individual - customers: DAList.using(object_type=Individual) --- mandatory: True question: | Summary subquestion: | The boss is ${ boss }. The employee is ${ employee }. The customers are ${ customers }. % if boss in customers or employee in customers: Either the boss or the employee is also a customer. % else: Neither the boss nor the employee is also a customer. % endif --- question: Are there any customers? yesno: customers.there_are_any --- question: Is there another customer? yesno: customers.there_is_another --- code: | people = ([boss] if defined('boss') and boss.name.defined() else []) \ + ([employee] if defined('employee') and employee.name.defined() else []) \ + customers.complete_elements() --- reconsider: - people question: | Who is the boss? fields: - Existing or New: boss.existing_or_new datatype: radio default: Existing choices: - Existing - New - Person: boss show if: variable: boss.existing_or_new is: Existing datatype: object choices: people - First Name: boss.name.first show if: variable: boss.existing_or_new is: New - Last Name: boss.name.last show if: variable: boss.existing_or_new is: New - Birthday: boss.birthdate datatype: date show if: variable: boss.existing_or_new is: New --- reconsider: - people question: | Who is the employee? fields: - Existing or New: employee.existing_or_new datatype: radio default: Existing choices: - Existing - New - Person: employee show if: variable: employee.existing_or_new is: Existing datatype: object choices: people - First Name: employee.name.first show if: variable: employee.existing_or_new is: New - Last Name: employee.name.last show if: variable: employee.existing_or_new is: New - Birthday: employee.birthdate datatype: date show if: variable: employee.existing_or_new is: New --- reconsider: - people question: | Who is the ${ ordinal(i) } customer? fields: - Existing or New: customers[i].existing_or_new datatype: radio default: Existing choices: - Existing - New - Person: customers[i] show if: variable: customers[i].existing_or_new is: Existing datatype: object choices: people - First Name: customers[i].name.first show if: variable: customers[i].existing_or_new is: New - Last Name: customers[i].name.last show if: variable: customers[i].existing_or_new is: New - Birthday: customers[i].birthdate datatype: date show if: variable: customers[i].existing_or_new is: New This recipe keeps a master list of individuals in an object called people. Since this list changes throughout the interview, it is re-calculated whenever a question is asked that uses people. When individuals are treated as unitary objects, you can do things like use Python’s in operator to test whether an individual is a part of a list. This recipe illustrates this by testing whether boss is part of customers or employee is part of customers. If you want users to be able to resume their interviews later, but you don’t want to use the username and password system, you can e-mail your users a URL created with interview_url(). default screen parts: under: | % if show_save_resume_message: [Save and resume later](${ url_action('save_and_resume') }) % endif --- mandatory: True code: | target = 'normal' show_save_resume_message = True multi_user = True --- mandatory: True scan for variables: False code: | if target == 'save_and_resume': if wants_email: if email_sent: log("We sent an e-mail to your e-mail address.", "info") else: log("There was a problem with e-mailing.", "danger") show_save_resume_message = False undefine('wants_email') undefine('email_sent') target = 'normal' final_screen --- question: | What is your favorite fruit? fields: - Favorite fruit: favorite_fruit --- question: | What is your favorite vegetable? fields: - Favorite vegetable: favorite_vegetable --- question: | What is your favorite legume? fields: - Favorite legume: favorite_legume --- event: final_screen question: | I would like you to cook a ${ favorite_fruit }, ${ favorite_vegetable }, and ${ favorite_legume } stew. --- event: save_and_resume code: | target = 'save_and_resume' --- code: | send_email(to=user_email_address, template=save_resume_template) email_sent = True --- question: | How to resume your interview later subquestion: | If you want to resume your interview later, we can e-mail you a link that you can click on to resume your interview at a later time. fields: - no label: wants_email input type: radio choices: - "Ok, e-mail me": True - "No thanks": False default: True - E-mail address: user_email_address datatype: email show if: wants_email under: "" --- template: save_resume_template subject: | Your interview content: | To resume your interview, [click here](${ interview_url() }). E-mailing or texting the user a link for purposes of using the touchscreen Using a desktop computer is generally very good for answering questions, but it is difficult to write a signature using a mouse. Here is an example of an interview that allows the user to use a desktop computer for answering questions, but use a mobile device with a touchscreen for writing the signature. include: - docassemble.demo:data/questions/examples/signature-diversion.yml --- mandatory: True question: | Here is your document. attachment: name: Summary of food filename: food content: | [BOLDCENTER] Food Attestation My name is ${ user }. My favorite fruit is ${ favorite_fruit }. My favorite vegetable is ${ favorite_vegetable }. I solemnly swear that the foregoing is true and correct. ${ user.signature.show(width="2in") } ${ user } This interview includes a YAML file called signature-diversion.yml, the contents of which are: mandatory: True code: | multi_user = True --- question: | Sign your name subquestion: | % if not device().is_touch_capable: Please sign your name below with your mouse. % endif signature: user.signature under: | ${ user } --- sets: user.signature code: | signature_intro if not device().is_touch_capable and user.has_mobile_device: if user.can_text: sig_diversion_sms_message_sent sig_diversion_post_sms_screen elif user.can_email: sig_diversion_email_message_sent sig_diversion_post_email_screen --- question: | Do you have a mobile device? yesno: user.has_mobile_device --- question: | Can you receive text messages on your mobile device? yesno: user.can_text --- question: | Can you receive e-mail messages on your mobile device? yesno: user.can_email --- code: | send_sms(user, body="Click on this link to sign your name: " + interview_url_action('mobile_sig')) sig_diversion_sms_message_sent = True --- code: | send_email(user, template=sig_diversion_email_template) sig_diversion_email_message_sent = True --- template: sig_diversion_email_template subject: Sign your name with your mobile device content: | Make sure you are using your mobile device. Then [click here](${ interview_url_action('mobile_sig') }) to sign your name with the touchscreen. --- question: | What is your e-mail address? fields: - E-mail: user.email --- question: | What is your mobile number? fields: - Number: user.phone_number --- event: sig_diversion_post_sms_screen question: | Check your text messages. subquestion: | We just sent you a text message containing a link. Click the link and sign your name. Once we have your signature, you will move on automatically. reload: 5 --- event: sig_diversion_post_email_screen question: | Check your e-mail on your mobile device. subquestion: | We just sent you an email containing a link. With your mobile device, click the link and sign your name. Once we have your signature, you will move on automatically. reload: 5 --- event: mobile_sig need: user.signature question: | Thanks! subquestion: | We got your signature: ${ user.signature } You can now resume the interview on your computer. Multi-user interview for getting a client’s signature This is an example of a multi-user interview where one person (e.g., an attorney) writes a document that they want a second person (e.g, a client) to sign. It is a multi-user interview (with multi_user set to True). The attorney inputs the attorney’s e-mail address and uploads a DOCX file containing: {{ signature }} where the client’s signature should go. The attorney then receives a hyperlink that the attorney can send to the client. When the client clicks on the link, the client can read the unsigned document, then agree to sign it, then sign it, then download the signed document. After the client signs the document, it is e-mailed to the attorney’s e-mail address. mandatory: True code: | multi_user = True signature = '(Your signature will go here)' --- mandatory: True code: | intro_seen email_address template_file notified_of_url agrees_to_sign signature_reset signature document_emailed final_screen --- code: | notified_of_url = True prevent_going_back() force_ask('screen_with_link') Validating uploaded files Here is an interview that makes the user upload a different file if the file the user uploads is too large. Mail merge Here is an example interview that assembles a document for every row in a Google Sheet. modules: - docassemble.demo.google_sheets --- objects: - court: DAList.using(object_type=Person, auto_gather=False) --- code: | court.clear() for row in read_sheet('court_list'): item = court.appendObject() item.name.text = row['Name'] item.address.address = row['Address'] item.address.city = row['City'] item.address.state = row['State'] item.address.zip = row['Zip'] item.address.county = row['County'] del item court.gathered = True --- attachment: name: | ${ court[i].address.county } court filename: | ${ space_to_underscore(court[i].address.county) }_court_info variable name: court[i].info_sheet content: | [BOLDCENTER] ${ court[i] } [NOINDENT] Your friendly court for ${ court[i].address.county } is located at: ${ court[i].address_block() } --- mandatory: True question: | Court information subquestion: | Here are information sheets for each court in your state. attachment code: | [item.info_sheet for item in court] Documents based on objects This example is similar to the mail merge example in that it uses a single template to create multiple documents. In this case, however, the same template is used to generate a document for two different objects. objects: - plaintiff: Individual - defendant: Individual --- code: | plaintiff.opponent = defendant defendant.opponent = plaintiff --- code: | title = "Summary of case" --- question: | What is the name of the plaintiff? fields: - Name: plaintiff.name.first --- question: | What is the name of the defendant? fields: - Name: defendant.name.first --- generic object: Individual attachment: variable name: x.document name: Document for ${ x.name.first } docx template file: generic-document.docx --- mandatory: True question: | Here are your documents. attachment code: | [plaintiff.document, defendant.document] This makes use of the generic object modifier. The template file generic-document.docx refers to the person using the variable x. Inserting Jinja2 with Jinja2 If you use Jinja2 to insert Jinja2 template tags into a document assembled through docx template file, you will find that the tags in the included text will not be evaluated. However, you can conduct your document assembly in two stages, so that first you assemble a template and then you use the DOCX output as the input for another assembly. code: | inserted_paragraph = "My favorite fruit is {{ favorite_fruit }} and I want the world to know." --- question: | What is your favorite fruit? fields: - Favorite fruit: favorite_fruit --- attachment: variable name: the_template docx template file: twostage.docx valid formats: - docx --- mandatory: True question: | All done attachment: name: A Document filename: a_document docx template file: code: | the_template.docx Altering metadata of generated DOCX files This example demonstrates using the docx package to modify the core document properties of a generated DOCX file. attachment: variable name: assembled_file docx template file: docx-with-metadata.docx valid formats: - docx --- mandatory: True code: | assembled_file user_name from docx import Document docx = Document(assembled_file.path()) docx.core_properties.author = user_name docx.save(assembled_file.path()) del docx --- mandatory: True question: Your document attachment code: assembled_file --- question: | What planet are you from? fields: - Planet: planet --- question: | What is your name? fields: - Name: user_name Note that this interview uses Python code in a code block that should ideally go into a module file. The docx variable is an object from a third party module and is not able to be pickled. The code works this way in this interview because the code block ensures that the variable user_name is defined before the docx variable is created, and it deletes the docx variable with del docx before the code block finishes. If the variable user_name was undefined, docassemble would try to save the variable docx in the interview answers before asking about user_name, and this would result in a pickling error. If the docx variable only existed inside of a function in a module, there would be no problem with pickling. Log out a user who has been idle for too long Create a static file called idle.js with the following contents. var idleTime = 0; var idleInterval; $(document).on('daPageLoad', function(){ idleInterval = setInterval(idleTimerIncrement, 60000); $(document).mousemove(function (e) { idleTime = 0; }); $(document).keypress(function (e) { idleTime = 0; }); }); function idleTimerIncrement() { idleTime = idleTime + 1; if (idleTime > 60){ url_action_perform('log_user_out'); clearInterval(idleInterval); } } In your interview, include idle.js in a features block. features: javascript: idle.js --- mandatory: True code: | welcome_screen_seen final_screen --- question: | Welcome to the interview. field: welcome_screen_seen --- event: final_screen question: | You are done with the interview. --- event: log_user_out code: | command('logout') This logs the user out after 60 minutes of inactivity in the browser. To use a different number of minutes, edit the line if (idleTime > 60){. Seeing the progress of a running background task Since background tasks run in a separate Celery process, there is no simple way to get information from them while they are running. However, Redis lists provide a helpful mechanism for keeping track of log messages. Here is an example that uses a DARedis object to store log messages about a long-running background task. It uses check in to poll the server for new log messages. objects: r: DARedis --- mandatory: True code: | log_key = r.key('log:' + user_info().session) messages = list() --- mandatory: True code: | if the_task.ready(): last_messages_retrieved final_screen else: waiting_screen --- code: | the_task = background_action('bg_task', 'refresh', additional=value_to_add) --- question: | How much shall I add to 553? fields: - Number: value_to_add datatype: integer --- event: bg_task code: | import time r.rpush(log_key, 'Waking up.') time.sleep(10) r.rpush(log_key, 'Ok, I am awake now.') value = 553 + action_argument('additional') time.sleep(17) r.rpush(log_key, 'I did the hard work.') time.sleep(14) r.rpush(log_key, 'Ok, I am done.') background_response_action('bg_resp', ans=value) --- event: bg_resp code: | answer = action_argument('ans') background_response() --- event: waiting_screen question: | Your process is running. subquestion: | #### Message log <ul class="list-group" id="logMessages"> </ul> check in: get_log --- event: get_log code: | import json new_messages = '' while True: message = r.lpop(log_key) if message: messages.append(message.decode()) new_messages += '<li class="list-group-item">' + message.decode() + '</li>' continue break background_response('$("#logMessages").append(' + json.dumps(new_messages) + ')', 'javascript') --- code: | while True: message = r.lpop(log_key) if message: messages.append(message.decode()) continue break last_messages_retrieved = True --- event: final_screen question: | The answer is ${ answer }. subquestion: | #### Message log <ul class="list-group" id="logMessages"> % for message in messages: <li class="list-group-item">${ message }</li> % endfor </ul> Since the task in this case (adding one number to another) is not actually long-running, the interview uses time.sleep() to make it artificially long-running. Sending information from Python to JavaScript If you use JavaScript in your interviews, and you want your JavaScript to have knowledge about the interview answers, you can use get_interview_variables(), but it is slow because it uses Ajax. If you only want a few pieces of information to be available to your JavaScript code, there are a few methods you can use. One method is to use the script modifier. imports: - json --- question: | What is your favorite color? fields: - Color: favorite_color --- question: | What is your favorite fruit? fields: - Fruit: favorite_fruit script: | <script> var myColor = ${ json.dumps(favorite_color) }; console.log("I know your favorite color is " + myColor); </script> --- mandatory: True question: | Your favorites subquestion: | Your favorite fruit is ${ favorite_fruit }. Your favorite color is ${ favorite_color }. Note that the variable is only guaranteed to be defined on the screen showing the question that includes the script modifier. While the value will persist from screen to screen, this is only because screen loads use Ajax and the JavaScript variables are not cleared out when a new screen loads. But a browser refresh will clear the JavaScript variables. Another method is to use the "javascript" form of the log() function. imports: - json --- question: | What is your favorite color? fields: - Color: favorite_color --- initial: True code: | log("var myColor = " + json.dumps(favorite_color) + ";", "javascript") --- question: | What is your favorite fruit? fields: - Fruit: favorite_fruit script: | <script> console.log("I know that your favorite color is " + myColor); </script> --- mandatory: True question: | Your favorites subquestion: | Your favorite fruit is ${ favorite_fruit }. Your favorite color is ${ favorite_color }. In this example, the log() function is called from a code block that has initial set to True. Thus, you can rely on the myColor variable being defined on every screen of the interview after favorite_color gets defined. Another method is to pass the values of Python variables to the browser using the DOM, and then use JavaScript to retrieve the values. imports: - json --- question: | What are your favorites? fields: - Color: color - Flavor: flavor --- question: | What is your favorite fruit? subquestion: | <div id="myinfo" data-color=${ json.dumps(color) } data-flavor=${ json.dumps(flavor) }</div> fields: - Fruit: favorite_fruit script: | <script> var myInfo = $("#myinfo").data(); console.log("You like " + myInfo.color + " things that taste like " + myInfo.flavor + "."); </script> --- mandatory: True question: | Your favorite fruit is ${ favorite_fruit }. All of these methods are read-only. If you want to be able to change variables using JavaScript, and also have the values saved to the interview answers, you can insert <input type="hidden"> elements onto a page that has a “Continue” button. imports: - json --- question: | What is your favorite color? fields: - Color: favorite_color --- question: | What is your favorite fruit? subquestion: | <input type="hidden" name="${ encode_name('favorite_color') }" value=${ json.dumps(favorite_color) }> fields: - Fruit: favorite_fruit script: | <script> var myColor = val('favorite_color'); console.log("You said you liked " + myColor); setField('favorite_color', 'dark ' + myColor); console.log("But now you like " + val('favorite_color')); </script> --- mandatory: True question: | Your favorites subquestion: | Your favorite fruit is ${ favorite_fruit }. Your favorite color is ${ favorite_color }. This example uses the encode_name() function to convert the variable name to the appropriate field name. For more information on manipulating the docassemble front end, see the section on custom front ends. The example above works for easily for text fields, but other data types will require more work. Running actions with Ajax Here is an example of using JavaScript to run an action using Ajax. code: | favorite_fruit = "apples" --- id: guess favorite fruit mandatory: True question: | Guess my favorite fruit. fields: - Your guess: guess - note: | ${ action_button_html("#", id_tag="getFavoriteFruit", label="Verify", size="md", color="primary") } script: | <script> $(document).on('daPageLoad', function(){ // hide the Continue button // and disable the form for // this question if ($(".question-guess-favorite-fruit").length > 0){ $(".da-field-buttons").remove(); $("#daform").off().on('submit', function(event){ event.preventDefault(); return false; }); }; }); $("#getFavoriteFruit").click(function(event){ event.preventDefault(); if (!/\S/.test(val("guess"))){ flash("You need to guess something!", "danger", true); return false; } flash("Verifying . . .", "info", true); action_call("verify_favorite_fruit", {"fruit": val("guess")}, function(data){ if (data.success){ flash("You're right!", "info", true); } else { flash("You're totally wrong. I actually like " + data.fruit + ".", "danger", true); } }); return false; }); </script> --- event: verify_favorite_fruit code: | # No need to save the interview # answers after this action. set_save_status('ignore') # Pretend we have to think # about the answer. import time time.sleep(1) if favorite_fruit.lower() == action_argument('fruit').lower(): success = True else: success = False json_response(dict(success=success, fruit=favorite_fruit)) The features used in this example include: action_button_html()to insert the HTML of a button. - Running Javascript at page load time using the daPageLoadevent. - Setting an idand using the CSS custom class that results. flash()to flash a message at the top of the screen. action_call()to call an action using Ajax. val()to obtain the value of a field on the screen using JavaScript. set_save_status()to prevent the interview answers from being saved after an action completes. action_argument()to obtain the argument that was passed to action_call(). json_response()to return JSON back to the web browser.
https://docassemble.com.br/docs/recipes.html
CC-MAIN-2020-45
refinedweb
3,837
51.44
a script to be able to read the progress of a render. I would like to be able to output the frame number that the current render is on. I just need to be able to output it into the console - the rest I can figure out. I basically already have code that listens to if the render is starting and complete - now I would like to have interval console outputs whenever a new frame is being rendered. I looked into the python syntaxes but I couldn't get anything useful! Anyone know how to solve this?? Thanks in advance! Hi @NibblingFeet may I ask what code you are using to "listen if the render starts and ends". I ask this because there are different ways to do this and depending on that there are several solutions for you. The simplest solution would be that if you are the renderer via c4d.documents.RenderDocument, you could set a render progress callback that would report the current progress or even a callback when each frame is exported to a file, that way you could know the exact frame that has finished rendering. Cheers, Maxime. Hi @m_adam , Thanks for reaching out - I tried to look through the linked document but I couldn't seem to output any render information. Below is the python code I am using - I am essentially making a thread to check while it is still rendering, and the moment it's complete it will continue the script. I would now like to make a callback for everytime a frame is done rendering, and how long it took to render it, etc import c4d import os,time,_thread, datetime def isRendering(time,os) : RenderData = doc.GetActiveRenderData() FrameStart = RenderData[c4d.RDATA_FRAMEFROM].GetFrame(doc.GetFps()) FrameEnd = RenderData[c4d.RDATA_FRAMETO].GetFrame(doc.GetFps()) while c4d.CheckIsRunning ( c4d.CHECKISRUNNING_EXTERNALRENDERING ) : #print(c4d.RENDERRESULT_OK) time.sleep(1) print("render complete.") def main() : c4d.CallCommand(12099) #Render To PV if c4d.CheckIsRunning ( c4d.CHECKISRUNNING_EXTERNALRENDERING ) : _thread.start_new(isRendering,(time,os)) if __name__=='__main__': main() Hi @NibblingFeet sorry for the late reply, as said everything you need is provided by RenderDocumen callbacks. Find bellow a code that start the render and print something to the console for each frame rendered. import c4d def PythonWriteCallBack(mode, bmp, fn, mainImage, frame, renderTime, streamnum, streamname): """Function passed in RenderDocument. It will be called automatically by Cinema 4D when the file rendered file should be saved. Args: mode (c4d.WRITEMODE): The write mode. bmp (c4d.bitmaps.BaseBitmap): The bitmap written to. fn (str): The path where the file should be saved. mainImage (bool): True for main image, otherwise False. frame (int): The frame number. renderTime (int): The bitmap frame time. streamnum (int): The stream number. streamname (streamname: str): The stream name. """ if not fn: return if frame == -1: frame = 0 print(f"ProgressWriteHook called [Frame: {frame} / Render Time: {renderTime}, Name: {fn}]") def main(): rd = doc.GetActiveRenderData() # Allocate a picture, that will receive the rendered picture. # In case of animation this one will be cloned for each frame of the animation # and this one will be used for the last frame. Therefor if you want to access the picture of each frame # you should do it through the ##wprog callback passed to RenderDocument bmp = c4d.bitmaps.MultipassBitmap(int(rd[c4d.RDATA_XRES]), int(rd[c4d.RDATA_YRES]), c4d.COLORMODE_RGB) # Renders the document if c4d.documents.RenderDocument(doc, rd.GetData(), bmp, c4d.RENDERFLAGS_EXTERNAL | c4d.RENDERFLAGS_OPEN_PICTUREVIEWER | c4d.RENDERFLAGS_CREATE_PICTUREVIEWER, wprog=PythonWriteCallBack) != c4d.RENDERRESULT_OK: raise RuntimeError("Failed to render the temporary document.") if __name__ == "__main__": main() Hi @m_adam thanks for the response. I did go through this but it doesn't seem to work for the following reasons: Maybe I am looking at the wrong direction? Maybe I should be looking at Octane documentation and how to grab info from its terminal? Apologies for the somewhat silly assumptions and questions - I am very new to all of this! Thanks in advance Sorry for the late reply
https://plugincafe.maxon.net/topic/14044/getting-current-render-frame
CC-MAIN-2022-27
refinedweb
657
58.08
Probably, we should start with Couch basics. Database is a set of JSON objects called documents. Each document is uh.. again, a JSON object (set of key-value pairs) with two additional fields: _id and _rev. _id is unique identifier of document in database. _rev is a revision (version) identifier used to distinct changes to same document. Now, _id is not forced to anything. You can pass _id when you create object and specify arbitrary value. You can set it as first name in your persons database. But, probably, you will come up with another identifier. If you don't specify _id, couchdb generates random one as long hexadecimal string. You can make direct queries by _id field to couchdb. With couchdb-python interface it looks like: mark = db['Mark'] # IF! you are storing persons documents with first name as identifier, that will return (!) full document (JSON object) as python dict into name `mark`. That said, you don't need to do anything special to get full JSON objects if you query docs by _id. Also, you can instruct couchdb to create a view (analog of views in relational databases), you write a map, and, optionally, reduce function for your view to generate some data, i.e. filter documents (persons), by age. I didn't try to create views via couchdb-python, i've done it in Futon (couchdb builtin admin console). Futon is accessible with browser at (or other address/port, where you run couchdb). There you can select your database, "temporary view" from drop-down list in upper-right corner, put function map(doc) { if (doc.age && doc.age > 17) emit(doc.name, doc.age); } into "map function" text area and click run. Couchdb will run the view against your database. And you don't see anything until it processed all your documents. Very fast for <5K docs. Then you will see a table of results, something like this: Bart 27 Mark 19 Ominuk 24 In fact, couchdb returns a list of JSON objects {key: "Bart", value: 27, _id: "Bart", _rev: "1-2932932"},...etc So here, executing views, yes you get only part of documents. But, also, you can instruct couchdb to return full documents even in views! Append include_docs=true to your GET query args. And the couch will return list of something like: {key: "Bart", value: 27, _id: "Bart", _rev: "1-2932932", doc: {name: "Bart", age: 27, has_sister: true, alive: false}},...etc In couchdb-python that would be RG = db.view("_design/DESIGN-DOC/_view/VIEW-NAME", include_docs=True) # for permanent view VIEW-NAME stored in design document DESIGN-DOC. Then you will have RG as generator of results. R = list(RG) # would read all results from generator into list R More on that, please, read couchdb-python reference. On Fri, Jun 26, 2009 at 12:22 PM, ?)
http://mail-archives.apache.org/mod_mbox/incubator-couchdb-user/200906.mbox/%3C2d8fb9950906260445o13464a1bh933406458aee6bd4@mail.gmail.com%3E
CC-MAIN-2014-52
refinedweb
472
76.11
AWFY shows there's a huge imaging-gaussian-blur regression (15%, > 800 ms slower) with JM enabled. Bisecting points to, bug 640494. gaussian-blur calls Math.abs *a lot*. Math.abs calls setNumber and we end up in JSDOUBLE_IS_NEGZERO via JSDOUBLE_IS_INT32. For a micro-benchmark (calling abs(-3.3) 100000000 times) with/without the change to JSDOUBLE_IS_NEGZERO in jsvalue.h: -m before: 1891 ms -m after: 2277 ms interp before: 6290 ms interp after: 6819 ms TM and JM+TI are not affected because they inline Math.abs. JSDOUBLE_IS_NEGZERO used to do this on OS X: -- return (d == 0 && signbit(d)); -- 0x0009f7f6 <JSDOUBLE_IS_NEGZERO+0>: ucomisd 0x1e954c(%ebx),%xmm1 0x0009f7fe <JSDOUBLE_IS_NEGZERO+8>: jne 0x9f810 <JSDOUBLE_IS_INT32+26> 0x0009f800 <JSDOUBLE_IS_NEGZERO+10>: jp 0x9f810 <JSDOUBLE_IS_INT32+26> 0x0009f802 <__inline_signbitd+0>: movsd %xmm1,0x20(%esp) 0x0009f808 <JSDOUBLE_IS_NEGZERO+18>: mov 0x24(%esp),%eax 0x0009f80c <JSDOUBLE_IS_NEGZERO+22>: test %eax,%eax 0x0009f80e <JSDOUBLE_IS_NEGZERO+24>: js 0x9f840 <DOUBLE_TO_JSVAL_IMPL> -- The common case (x != 0) is very fast. Now we do this: -- union { jsdouble d; uint64 bits; } x; x.d = d; return x.bits == JSDOUBLE_SIGNBIT; -- 0x0009f772 <JSDOUBLE_IS_NEGZERO+0>: movsd %xmm1,0x20(%esp) 0x0009f778 <JSDOUBLE_IS_INT32+6>: mov 0x24(%esp),%eax 0x0009f77c <JSDOUBLE_IS_INT32+10>: sub $0x80000000,%eax 0x0009f781 <JSDOUBLE_IS_INT32+15>: or 0x20(%esp),%eax 0x0009f785 <JSDOUBLE_IS_INT32+19>: je 0x9f7b0 -- Like 5 instructions if x != 0 and some loads/stores. Created attachment 529947 [details] [diff] [review] Possible patch This should be faster. It would be simpler written as return d == 0 && JSDOUBLE_IS_NEG(d), but including jsnum.h in jsvalue.h is a nightmare. Yes, this fixes the regression on the micro-benchmark and the asm is similar to the (d == 0 && signbit(d)) version. Forgot to mention this earlier but I had to to pull in JSDOUBLE_HI32_SIGNBIT and change u to d to make gcc happy. (In reply to comment #3) > and change u to d to make gcc happy. x instead of d of course. Created attachment 529951 [details] [diff] [review] Possible patch Ah yes, obviously, thanks Hey JimB - is this patch worthwhile and/or do we have another plan here? This bug is tracking-firefox6, but it's been sleepy for a few weeks. Review ping for jimb. Comment on attachment 529951 [details] [diff] [review] Possible patch Review of attachment 529951 [details] [diff] [review]: ----------------------------------------------------------------- We'll get to this approval requests on Monday's 2pm PT triage. Comment on attachment 529951 [details] [diff] [review] Possible patch Moving the nomination to Aurora. We may end up just taking this on central and eating the regression until 8. Anyone have further input that would cause us to rush this into 7? Comment on attachment 529951 [details] [diff] [review] Possible patch Given that this isn't a terrible regression we're seeing in the wild, the triage team wants to just wait on this to make its way through the channels. Re-nominate if you want to try to make a better case for this. bc and all, the suspicion is that this code pattern and regression might not show up widely on the web. anyone have some harder evidence that it might? Not tracking for 6 then per previous comments.
https://bugzilla.mozilla.org/show_bug.cgi?id=654664
CC-MAIN-2016-26
refinedweb
515
66.33
Happy Pi Day, everyone! (3/14, get it?) I got back from PyCon last night and have been trying to figure out how to integrate the energy and direction from the conference into my regular life here in Boston. It’s a challenge, but PyCon is always an invigorating experience, and I’m really glad to have gone. In honor of Pi Day, I’ll present you with two puzzles I heard at PyCon, one as part of Google’s recruiting efforts, and one as part of a panel about Python in middle school. Google’s puzzle: A number is a palindrome if the digits read the same backwards as forwards: 1, 88, 343, 234565432, and so on. What is the sum of the digits in the number of palindromes less than a googol (10100)? That is, count all the palindromes greater than zero and less than a googol, then sum all the digits in that number, not the sum of the digits in all the palindromes. What’s your answer? They actually posed it as “write a program to compute the sum of the digits, etc,” and were interested in the shortest program, but I prefer it as a pure math question. The education question was a puzzle presented to middle-school kids, who were asked to write programs to find the answer. Imagine a set of stairs with n steps from bottom to top. You can walk up the stairs by taking every step, or by skipping a single step any time you want. You can’t skip more than one step at a time. How many different ways are there to walk up a flight of n steps? For example, representing a step as t and a skip as k, you could do a flight of 3 steps as ttt, tk, kt, and 4 steps could be tttt, ttk, tkt, ktt, or kk. Update: I posted my solutions. You can't skip more than one step at a time. [...] 4 steps could be tttt, ttk, tkt, ktt, or kk umm 4 steps could be tttt, ttk, tkt and ktt, but how can they be kk? You mean Half Tau Day? Aren't there int("1" + 50*"9") palindrome numbers less than a googol (10**100)? I'd go with an answer of 9 for the Google question, simply because it would have been hard for them to ask the question if the answer wasn't 9. 1. For google's question we can create a nice palindrom sum generator: def palindrom_sums_generator(n): i=0 while (i 1. something like: return [sum(map(lambda(x):int(x),list(str(x)))) for x in xrange(10**100) if palindrom(x)] And the palindrom function returns true iff the number is palindrom: def palindrom(x): l1=list(str(x)) l2 = list(str(x)) l2.reverse() return l1==l2 2. recursively it is defined as the Fibonacci series where F(n)=F(n-1)+F(n-2) Sorry, for the first question here is the solution where counting the number of palindroms and then summing the digits of the count (it's almost the same): return sum(list(str(len([x for x in xrange(10**100) if palindrom(x)]))) Pybarak: I see you're not into that whole "checking if my code even runs before posting it" thing. Even if it did run I doubt it would finish in a reasonable time. @ZeD: Why couldn't four steps be taken in two bounds, skipping a step each time? @pybarak: I'm with Ram here: any solution involving iterating 10**100 times isn't really a solution! @Ram: you're just *guessing* 9? Don't you think the number of palindromes under a googol would be very large? Possibly with more than 9 digits itself? How could the sum of those digits be 9? Oh, I confused "sum of digits" with "repeated digital sum". @Ram: in that case, yours was a good (and correct!) guess. Using some math one can show that the number of n-digit palindromes is 9*(10^(n/2-1)) when n is even and 9*(10^(n/2)) when n is odd. So the number we are looking for would be 9*100=900 (9 for each additional digit, as the first number smaller than googol has 100 digits)The returned list should be filtered to remove the zero-prefixed elements Here is a more elegant (recursive) solution to create an n-digit palindrome: @Ram: I actually ran this one on my laptop. Here is the output for palindromes of length 2 (I won't print 3 since there are 90 of them): I think I came up with the correct answer: 450. I don't see any other comments that mention it, though. Here's the code that gave me that answer: Why do I get 451? Do you guys count 0 as a palindromic number or not? @christos, correct, I did not count 0 in my code. The problem was a bit ambiguous on this point. The problem said, "count all the palindromes greater than zero and less than a googol" You can't iterate to solve this, the numbers are too big. Luckily, the numbers are quite round, so it stays fairly terse. We want a number less than half the length of 10**100, and we want it twice. Once we will duplicate it for the second half of the palindrome, and next we will duplicate it but pivot on the center character. This way we get palindromes of an even and odd number of characters. palindromes = (10**50-1)*2 sum_of_digits = sum([int(x) for x in str(palindromes)]) The sum of all palindromes up to 10**100 (in LaTeX math format) is: $45 \(1 + \sum_{n=2}^{50} 10^{\lfloor \frac{n-1}{2} \rfloor} \(10^{n-1} + 10^{n-2} \)\)$ I leave it as an exercise for the student why this is so. Actually, n should be summed up to 100, not 50. Oops. I am new to programming and am trying to figure out if the solution to the palindrome problem is a math question or a programming question, as far as Google point of view. I have confirmed the math solution by iterating 10**1 to 10**10, noting the pattern in the answers and extrapolating out the answer for 10**100. Is there a way to do the entire problem programmatically without the use of math type functions? It would seem to me that if on average only about 6% of the numbers per **10 are palindromes, then it would be faster to generate all of the palindromes for the range and then add them. Is this correct if you were trying to solve it entirely with a program? Ancient Egyptians may have thought Pi was 256/81. 256/81 is about 3.16049382716049382716, which is approx 0.6 percent above the value of Pi. 22/7 is approx 0.04 percent) Add a comment:
https://nedbatchelder.com/blog/201103/two_pi_day_puzzles_from_pycon.html
CC-MAIN-2021-04
refinedweb
1,169
79.19
std::bad_array_new_length class in C++ with Examples Standard C++ contains several built-in exception classes, std::bad_array_new_length is one of them.It is an exception on bad array length and thrown if the size of array is less than zero and if the array size is greater than the limit. Below is the syntax for the same: Header File: <new> Syntax: class bad_array_new_length; Return: The std::bad_array_new returns a null terminated character that is used to identify the exception. Note: To make use of std::bad_array_new, one should set up the appropriate try and catch blocks. Below are the Programs to understand the implementation of std::bad_array_new in a better way: Program 1: std::bad_array_new_length Program 2: std::bad_array_new_length Reference: Don’t stop now and take your learning to the next level. Learn all the important concepts of Data Structures and Algorithms with the help of the most trusted course: DSA Self Paced. Become industry ready at a student-friendly price. Recommended Posts: - std::bad_weak_ptr class in C++ with Examples - std::is_trivially_copy_assignable class in C++ with Examples - std::uniform_real_ distribution class in C++ with Examples - C++ boost::dynamic_bitset Class with Examples - Difference between Base class and Derived class in C++ - How to convert a class to another class type in C++? - std::any Class in C++ - std::hash class in C++ STL - Array class in C++ - std::string class in C++ - std:: valarray class in C++ - Structure vs class in C++ - How to implement our own Vector Class in C++? - Virtual base class in C++ - C++ string class and its applications - C++ String Class and its Applications | Set 2 - Difference between namespace and class - Friend class and function in C++ - Why is the size of an empty class not zero in C++? - What all is inherited from parent.
https://www.geeksforgeeks.org/stdbad_array_new_length-class-in-c-with-examples/?ref=rp
CC-MAIN-2020-29
refinedweb
298
54.05