text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
I'm on a foreign exchange program and studying computer architecture at university. The problem is this university it assumes the students have studied C-programming as their course in basic programming, whereas at my regular university the course in basic programming is ADA. I am trying to catch up but need some help since I don't really know how C-language is structured. My current problem: Shift operations, 32-bit. Got two functions int func1 (unsigned word) { return (int) ((word<<24) >> 24); } int func2 (unsigned word) { return ((int) word <<24) >> 24; } How does these functions work? I understand the logical shift but not the functions. My guess is func1 first shift all bits 24 bits to left, and then 24 bits to right, resulting in the 8 LSB will be the same but the 24 MSB will turn into 0's. Even if my guess is correct I can't see how func2 would work differently. I would greatly appreciate if someone could ellaborate for me. Examples with 127, 128, 255, 256 as in values? EDIT: Represented by two's complement
http://cboard.cprogramming.com/c-programming/130129-studying-computer-architecture-without-knowledge-c-programming-got-some-questions.html
CC-MAIN-2015-32
refinedweb
183
72.36
Chris Hare wrote: > I am having a problem getting around this variable namespace thing. > > Consider these code bits > > File a.py > from Tkinter import * > import a1 > > def doAgain(): > x = a1.Net() > x.show("Again!") > > root = Tk() > root.title("test") > f = Frame(root,bg="Yellow") > l = Button(root,text="window 1",command=doAgain) > f.grid() > l.grid() > a = 5 > x = a1.Net() > x.show("window 2") > if __name__ == "__main__": > root.mainloop() > > File a1.py >.hide) > button.grid() > button2.grid() > def Again(self,event): > x = Net() > x.show(a) > def hide(self,event): > self.window.destroy() > > > When I run a.py, it imports a1.py and click on the Again button, I get the error > > Exception in Tkinter callback > Traceback (most recent call last): > File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk/Tkinter.py", line 1410, in __call__ > return self.func(*args) > File "/Volumes/Development/py/a1.py", line 17, in Again > x.show(a) > NameError: global name 'a' is not defined > > I believe this is the expected behavior. so my question is this -- how do I tell the code in a1.py about the variable a, which exists in a.py? Do I have to pass it as part of the function call, or what? using > > global a > > in a1.py doesn't change anything. The global keyword does not make a variable global. It tells the interpreter that the variable in question can be find in the module scope, not the function/method scope. In other words, the variable is global to the module, but not to the whole program. What you'll need to do is pass a into Net when you instanciate it, like so (untested): def doAgain(): x = a1.Net(a) x.show("Again!") and in Net: class Net: def __init__(self, some_number): self.some_number = some_number self.window = Toplevel() . . . def Again(self,event): x = Net(self.some_number) x.show() Keep in mind, though, that if you change a in a.py after you've instanciated Net, your Net instance will not see the change. For the change to show up, a would need to be mutable, and you would have to mutate it. The other option is to change the Net instance's some_number directly (i.e. x.some_number = 9). Hope this helps. ~Ethan~
https://mail.python.org/pipermail/python-list/2010-August/583376.html
CC-MAIN-2017-17
refinedweb
383
70.8
three classes: GD::Image An image class, which holds the image data and accepts graphic primitive method calls. GD::Font A font class, which holds static font information and used for text rendering. GD::Polygon; Notes: binmode()on the file you are writing the image to. The following class methods allow you to create new GD::Image objects. The new() method is the main constructor for the GD::Image class. Called with two integer arguments, it creates a new blank image of the specified width and height. For example: $myImage = new GD::Image(100,100) || die; This will create an image that is 100 x 100 pixels wide. If you don't specify the dimensions, a default of 64 x 64 will be chosen. The optional third argument, $truecolor, tells new() to create a truecolor GD::Image object. Truecolor images have 24 bits of color data (eight bits each in the red, green and blue channels respectively), allowing for precise photograph-quality color usage. If not specified, the image will use an 8-bit palette for compatibility with older versions of libgd. Alternatively, you may create a GD::Image object based on an existing image by providing an open filehandle, a filename, or the image data itself. The image formats automatically recognized and accepted are: PNG, JPEG, XPM and GD2. Other formats, including WBMP, and GD version 1, cannot be recognized automatically at this time. If something goes wrong (e.g. insufficient memory), this call will return undef.); The newPalette() and newTrueColor() methods can be used to explicitly create an palette based or true color image regardless of the current setting of trueColor(). The newFromPng() method will create an image from a PNG file read in through the provided filehandle or file path. The filehandle must previously have been opened on a valid PNG file or pipe. If successful, this call will return an initialized image which you can then manipulate as you please. If it fails, which usually happens if the thing at the other end of the filehandle is not a valid PNG file, the call returns undef. Notice that the call doesn't automatically close the filehandle for you. But it does call binmode(FILEHANDLE) for you, on platforms where this matters. The optional $truecolor (0/1) value can be used to override the global setting of trueColor() to specify if the return image should be palette-based or truecolor. You may use any of the following as the argument: 1) a simple filehandle, such as STDIN 2) a filehandle glob, such as *PNG 3) a reference to a glob, such as \*PNG 4) an IO::Handle object 5) the pathname of a file In the latter case, newFromPng() will attempt to open the file for you and read the PNG information from it. Example1: open (PNG,"barnswallow.png") || die; $myImage = newFromPng GD::Image(\*PNG) || die; close PNG; Example2: $myImage = newFromPng GD::Image('barnswallow.png'); To get information about the size and color usage of the information, you can call the image query methods described below. The newFromPngData() method will create a new GD::Image initialized with the PNG format data contained in $data. These methods will create an image from a JPEG. These methods will create an image from a GIF. Bear in mind that GIF is an 8-bit format, so there will initially be at most 256 colors even if you specify truecolor. This works in exactly the same way as newFromPng, but reads the contents of an X Bitmap (black & white) file: open (XBM,"coredump.xbm") || die; $myImage = newFromXbm GD::Image(\*XBM) || die; close XBM; There is no newFromXbmData() function, because there is no corresponding function in the gd library.; This works in exactly the same way as newFromGd() and newFromGdData, but use the new compressed GD2 image format. This class method allows you to read in just a portion of a GD2 image file. In additionto). This creates a new GD::Image object starting from a filename. This is unlike the other newFrom() functions because it does not take a filehandle. This difference comes from an inconsistency in the underlying gd library. $myImage = newFromXpm GD::Image('earth.xpm') || die; This function is only available if libgd was compiled with XPM support. NOTE: The libgd library is unable to read certain XPM files, returning an all-black image instead.. This returns the image data in GIF format. You can then print it, pipe it to a display program, or write it to a file. This returns the image data in GD format. You can then print it, pipe it to a display program, or write it to a file. Example: binmode MYOUTFILE; print MYOUTFILE $myImage->gd; Same as gd(), except that it returns the data in compressed GD2 format. This returns the image data in WBMP format, which is a black-and-white image format. Provide the index of the color to become the foreground color. All other pixels will be considered background. These methods allow you to control and manipulate the GD::Image color table.); This allocates a color with the specified red, green, and blue components, plus the specified alpha channel. The alpha value may range from 0 (opaque) to 127 (transparent). The alphaBlending function changes the way this alpha channel affects the resulting image. This allocates a color with the specified red, green, and blue components, plus the specified alpha channel. The alpha value may range from 0 (opaque) to 127 (transparent). The alphaBlending function changes the way this alpha channel affects the resulting image.); This returns the index of the color closest in the color table to the red green and blue components specified. If no colors have yet been allocated, then this call returns -1. Example: $apricot = $myImage->colorClosest(255,200,180); This also attempts to return the color closest in the color table to the red green and blue components specified. It uses a Hue/White/Black color representation to make the selected colour more likely to match human perceptions of similar colors. If no colors have yet been allocated, then this call returns -1. Example: $mostred = $myImage->colorClosestHWB(255,0,0);;; This returns the total number of colors allocated in the object. $maxColors = $myImage->colorsTotal; In the case of a TrueColor image, this call will return undef. This returns the color table index underneath the specified point. It can be combined with rgb() to obtain the rgb color underneath the pixel. Example: $index = $myImage->getPixel(20,100); ($r,$g,$b) = $myImage->rgb($index); This returns a list containing the red, green and blue components of the specified color index. Example: @RGB = $myImage->rgb($peachy);. If you call this method without any parameters, it will return the current index of the transparent color, or -1 if none. Example: open(PNG,"test.png"); $im = newFromPng GD::Image(PNG); $white = $im->colorClosest(255,255,255); # find white $im->transparent($white); binmode STDOUT; print $im->png; GD implements a number of special colors that can be used to achieve special effects. They are constants defined in the GD:: namespace, but automatically exported into your namespace when the GD module is loaded.- elipses, as well as to perform various special operations like flood-fill.););); These methods() draw ellipses. ($cx,$cy) is the center of the arc, and ($width,$height) specify the ellipse width and height, respectively. filledEllipse() is like Ellipse() except that the former produces filled versions of the ellipse.);. $srcX,$srcY,$width,$height) This is the simplest of the several);; $srcX,$srcY,$width,$height,$percent) This copies the indicated rectangle from the source image to the destination image, merging the colors to the extent specified by percent (an integer between 0 and 100). Specifying 100% has the same effect as copy() -- replacing the destination pixels with the source image. This is most useful for highlighting an area by merging in a solid rectangle. Example: $myImage = new GD::Image(100,100); ... various drawing stuff ... $redImage = new GD::Image(50,50); ... more drawing stuff ... # copy a 25x25 pixel region from $srcImage to # the rectangle starting at (10,10) in $myImage, merging 50% $myImage->copyMerge($srcImage,10,10,0,0,25,25,50); $srcX,$srcY,$width,$height,$percent) This is identical to copyMerge() except that it preserves the hue of the source by converting all the pixels of the destination rectangle to grayscale before merging. $srcX,$srcY,$destW,$destH,$srcW,$srcH)); . This method converts a truecolor image to a palette image. The code for this function was originally drawn from the Independent JPEG Group library code, which is excellent. The code has been modified to preserve as much alpha channel information as possible in the resulting palette, in addition to preserving colors as well as possible. This does not work as well as might be hoped. It is usually best to simply produce a truecolor output image instead, which guarantees the highest output quality. Both the dithering (0/1, default=0) and maximum number of colors used (<=256, default = gdMaxColors) can be specified. Gd also provides some common image transformations: These methods can be used to rotate, flip, or transpose an image. The result of the method is a copy of the image. These methods are similar to the copy* versions, but instead modify the image in place..); Just like the previous call, but draws the text rotated counterclockwise 90 degrees. These methods draw single characters at position (x,y) in the specified font and color. They're carry-overs from the C interface, where there is a distinction between characters and strings. Perl is insensible to such subtle distinctions.) x,y X and Y coordinates to start drawing the string string The string itself If successful, the method returns an eight-element list giving the boundaries of the rendered string: @bounds[0,1] Lower left corner (x,y) @bounds[2,3] Lower right corner (x,y) @bounds[4,5] Upper right corner (x,y) @bounds[6,7] Upper left corner (x,y) In case of an error (such as the font not being available, or FT support not being available), the method returns an empty list and sets $@ to the error anti-ali im,. By default, GD (libgd 2.0.2 and above) does not attempt to save full alpha channel information (as opposed to single-color transparency) when saving PNG images. (PNG is currently the only output format supported by gd which can accommodate alpa. These are various utility methods that are useful in some circumstances.. This method will return a two-member list containing the width and height of the image. You query but not not change the size of the image once it's created. Return the width and height of the image, respectively. This method will return a boolean representing whether the image is true color or not. Compare two images and return a bitmap describing the differenes found, if any. The return value must be logically ANDed with one or more constants in order to determine the differences. The following constants are available: GD_CMP_IMAGE The two images look different GD_CMP_NUM_COLORS The two images have different numbers of colors GD_CMP_COLOR The two images' palettes differ GD_CMP_SIZE_X The two images differ in the horizontal dimension GD_CMP_SIZE_Y The two images differ in the vertical dimension GD_CMP_TRANSPARENT The two images have different transparency GD_CMP_BACKGROUND The two images have different background colors GD_CMP_INTERLACE The two images differ in their interlace GD_CMP_TRUECOLOR The two images are not both true color The most important of these is GD_CMP_IMAGE, which will tell you whether the two images will look different, ignoring differences in the order of colors in the color palette and other invisible changes. The constants are not imported by default, but must be imported individually or by importing the :cmp tag. Example: use GD qw(:DEFAULT :cmp); # get $image1 from somewhere # get $image2 from somewhere if ($image1->compare($image2) & GD_CMP_IMAGE) { warn "images differ!"; }. A few primitive polygon creation and manipulation methods are provided. They aren't part of the Gd library, but I thought they might be handy to have around (they're borrowed from my qd.pl Quickdraw library). Also see GD::Polyline. Create an empty polygon with no vertices. $poly = new GD::Polygon; Add point (x,y) to the polygon. $poly->addPt(0,0); $poly->addPt(0,50); $poly->addPt(25,25); $myImage->fillPoly($poly,$blue);); $myImage->fillPoly($poly,$blue); Return the number of vertices in the polygon. $points = $poly->length; Return a list of all the verticies in the polygon object. Each member. Please see GD::Polyline for information on creating open polygons and splines. The libgd library (used by the Perl GD library) has built-in support for about half a dozen fonts, which were converted from public-domain X Windows fonts. For more fonts, compile libgd with TrueType support and use the stringFT() call. If you wish to add more built-in fonts, the directory bdf_scripts contains two contributed utilities that may help you convert X-Windows BDF-format fonts into the format that libgd uses internally. However these scripts were written for earlier versions of GD which included its own mini-gd library. These scripts will have to be adapted for use with libgd, and the libgd library itself will have to be recompiled and linked! Please do not contact me for help with these scripts: they are unsupported. Each of these fonts is available both as an imported global (e.g. gdSmallFont) and as a package method (e.g. GD::Font->Small). This is the basic small font, "borrowed" from a well known public domain 6x12 font. This is the basic large font, "borrowed" from a well known public domain 8x16 font. This is a bold font intermediate in size between the small and large fonts, borrowed from a public domain 7x13 font; This is a tiny, almost unreadable font, 5x8 pixels wide. This is a 9x15 bold font converted by Jan Pazdziora from a sans serif X11 font. This returns the number of characters in the font. print "The large font contains ",gdLargeFont->nchars," characters\n"; This returns the ASCII value of the first character in the font height-2000, Lincoln D. Stein. It is distributed under the same terms as Perl itself. See the "Artistic License" in the Perl source code distribution for licensing terms. The latest versions of GD.pm are available at GD::Polyline, GD::SVG, GD::Simple, Image::Magick
http://search.cpan.org/~lds/GD-2.19/GD.pm
crawl-002
refinedweb
2,398
55.03
eureca Nodejs remote procedure call built on top of sockjs Want to see pretty graphs? Log in now!Want to see pretty graphs? Log in now! npm install eureca eureca eureca (easy unobstructive remote call) is a node.js RPC library using sockjs as a network layer. it's inspired from nowjs library, but intend to be simpler and provide only the strict necessary stuff to do RPC correctly. This is the firs release, many bug may exist! Setup and test npm install eureca test node --harmony-proxies node_modules/eureca/test/rmi.js note the usage of --harmony-proxies command line argument, this switch enables harmony proxies witch is used by eureca library (for more information about harmony proxies see this link) it you don't use --harmony-proxies, eureca will still work using a workaround but this is not recommanded. usage example Server side var EURECA = require('eureca').EURECA; //the allow parameter define witch function names are accepted in the client side. //the configuration bellow will unable server to call foo() and bar() ine the client side if the client define them var eureca = new EURECA({allow : ['foo', 'bar']}); Server = eureca.exports; //all functions declared in this namespace will be availbale to clients //EURECA Server side functions exposed to clients Server.add = function(a, b) { return a+b; }; /////////////////////////////////////////////////// eureca.install(app); client side <html> <head> <script src="/eureca.js"></script> </head> <body> <script> var eureca = new EURECA(); var me = eureca.exports; //function defined inside this namespace can be called from server if they are allowed me.foo = function(a) { // the function foo() was allowed by server so it'll be available alert('FOO() '+a); } </script> Server side (Call client functin) since the client defined a function called foo() we can call it, but first we need to get the client instance the onConnect event provide a useful callback to handle incoming connections, we will use it to get the client instance and call foo() eureca.onConnect(function(conn) { client = eureca.getClient(conn.id); client.foo(' From server'); }); you can use the onConnect callback to keep track of your clients so you can use them later. Client side (Call server functin) in the client side, the code is simpler, since we don't need to find the server instance, Eureca object will directly expose remote function to call server side add function, all we have to do is eureca.add(10, 20); // eureca is an instance of EURECA() we can add a callback function to get the result eureca.add(10, 20, function(result) { console.log('the result is ', result); });
https://www.npmjs.org/package/eureca
CC-MAIN-2014-15
refinedweb
430
55.74
VanillaSnake21Member Content count806 Joined Last visited Community Reputation175 Neutral About VanillaSnake21 - RankAdvanced Member Personal Information Social - TymAfterDark - GithubTimAkgayev - SteamVanillaSnake Framework Design OOP vs Other VanillaSnake21 posted a topic in General and Gameplay ProgrammingWasn't sure whether to post this in Game Design section, but it seems that that one leans towards actual gameplay design, whereas this topic is more about programming decisions. In any case mods can move it. Just want to say that this is the first in a series of questions I planned out oriented towards using proper OOP. A while back I came across a post on here that made me rethink my ways in writing OO code. I cannot quote the individual but he said something along the lines of "it's not OO programming if you're just throwing related functions and data in a class and calling it a day". It kind of struck a chord with me and I got to thinking how most of the code I had was just that, things were Objects for no reason. For example in my initial iterations of frameworks, I had an Application class, a WindowsInterface class, a Utility "class" that just had all static functions (because it was convenient to just write Utility::TransformVector or something like that). Everything was a class just because. About a year ago I decided to step back from that and write code without any classes at all just to see the difference. I rewrote my entire framework in that style and really never looked back since. I no longer have to worry about class instances, about what seem like useless restrictions like having to pass hidden "this" pointers in window parameters or managing objects that didn't even seem like objects to being with. So on to my question. I'm now reading Code Complete 2 (after having read 1 years back) and with everything I've learned I'm tempted to give OOP another go. With a renewed mindset and a renewed appreciation for constraint. However that also got me thinking of whether or not all things conform well to OOP, maybe things like general low level frameworks or systems programming are inherently anti-oop? Maybe I'm just trying to push for something that's not really needed at this point? The reason I came to that conclusion is because I'm re-reading some design chapters right now in CC2 and he speaks of designing software at high level first thinking of it like a house. Designating subsystems, then designating modules in the subsystems, then designing classes and planning their interactions and only then moving on to functional execution of class methods. As I sat down to rewrite my code, I realized that it's really difficult to even begin. I can't specify subsystem interaction for example and as per the book I have to restrict subsystem interaction because "it's chaos if every subsystem can access every other subsystem". Well that's the way I have right now, I have a SoftwareRenderer, UserInterface, Resources, Windows, Application and GameEngine subsystems. I see no reason to restrict SoftwareRendrer to any one, and as of right now since I'm coding in C, all a subsystem has to do is include "SoftwareRasterizer.h" and it's good to go with making calls. It's flexible and convenient. So besides subsystem interaction, I'm also having difficulty breaking things down into meaningfully classes. I blame it on the fact that a framework is by definition a low level system, so the obejcts can't be really straighforward common sense abstractions, they must be EventListeners and FrameDescriptors, which are abstractions non-the-less but at a much less intuitive level. Despite being confident that I don't really need OOP to get the job done, it's nagging me that it's so difficult for me to find an OO solution to a framework. Does that mean that I don't understand what a framework requires if I can't easily delineate the objects required to create it? Should I still push to get there? Or are some things just not as suited for OOP as others? Thanks. C++ Pointer becomes invalid for unknown reason VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingThat did it. Thanks jpetrie. I totally forgot about the difference in bit layouts C++ Pointer becomes invalid for unknown reason VanillaSnake21 posted a topic in General and Gameplay ProgrammingI've restructured some of my code to use namespaces and started getting problems in a module that was working correctly previously. The one in question is a DebugWindow, what happens is I give it a pointer to a variable that I want to monitor/change and it's job is to display that variable in a separate window along with a some + and - buttons to in/decrement the variable. These are the relevant portions: WindowManager.h namespace WindowManager { /* WindowManager functions snipped */ namespace DebugWindow { void AddView(double* vard, std::wstring desc, double increment); void AddView(std::wstring* vars, std::wstring desc); void CreateDebugWindow(int width, int height, int x, int y); } } Application.cpp is the main app, it calls the above functions to set the watch on the variables I need to see in real-time void ApplicationInitialization() { //create the main window UINT windowID = SR::WindowManager::CreateNewWindow(LocalWindowsSettings); //initialize the rasterizer InitializeSoftwareRasterizer(SR::WindowManager::GetWindow(windowID)); //create the debug window SR::WindowManager::DebugWindow::CreateDebugWindow(400, LocalWindowsSettings.clientHeight, LocalWindowsSettings.clientPosition.x + LocalWindowsSettings.clientWidth, LocalWindowsSettings.clientPosition.y); //display some debug info SR::WindowManager::DebugWindow::AddView((double*)&gMouseX,TEXT("Mouse X"), 1); SR::WindowManager::DebugWindow::AddView((double*)&gMouseY, TEXT("Mouse Y"), 1); } The variables gMouseX and Y are globals in my application, they are updated inside the App's WindProc inside the WM_MOUSEMOVE like so : case WM_MOUSEMOVE: { gMouseX = GET_X_LPARAM(lParam); gMouseY = GET_Y_LPARAM(lParam); /* .... */ }break; Now inside the AddView() function that I'm calling to set the watch on the variable void AddView(double* vard, std::wstring desc, double increment) { _var v; v.vard = vard; // used when variable is a number v.vars = nullptr; // used when varialbe is a string (in this case it's not) v.desc = desc; v.increment = increment; mAddVariable(v); } _var is just a structure I use to pass the variable definition and annotation inside the module, it's defined as such struct _var { double* vard; //use when variable is a number double increment; //value to increment/decrement in live-view std::wstring* vars; //use when variable is a string std::wstring desc; //description to be displayed next to the variable int minusControlID; int plusControlID; HWND viewControlEdit; //WinAPI windows associated with the display, TextEdit, and two buttons (P) for plus and (M) for minus. HWND viewControlBtnM; HWND viewControlBtnP; }; So after I call AddView it formats this structure and passes it on to mAddVariable(_var), here it is: void mAddVariable(_var variable) { //destroy and recreate a timer KillTimer(mDebugOutWindow, 1); SetTimer(mDebugOutWindow, 1, 10, (TIMERPROC)NULL); //convert the variable into readable string if it's a number std::wstring varString; if (variable.vard) varString = std::to_wstring(*variable.vard); else varString = *variable.vars; //create all the controls variable.viewControlEdit = CreateWindow(/*...*/); //text field control variable.minusControlID = (mVariables.size() - 1) * 2 + 1; variable.viewControlBtnM = CreateWindow(/*...*/); //minus button control variable.plusControlID = (mVariables.size() - 1) * 2 + 2; variable.viewControlBtnP = CreateWindow(/*...*/); //plus button control mVariables.push_back(variable); } I then update the variable using a timer inside the DebugWindow msgproc case WM_TIMER: { switch (wParam) { case 1: // 1 is the id of the timer { for (_var v : mVariables) { SetWindowText(v.viewControlEdit, std::to_wstring(*v.vard).c_str()); } }break; default: break; } }; break; When I examine the mVariables, their vard* is something like 1.48237482E-33#DEN. Why does this happen? Also to note is that I'm programming in C like fashion, without using any objects at all. The module consists of .h and .cpp file, whatever I expose in .h is public, if a function is only exposed in .cpp it's private . So even though I precede some functions with m_Function it's not in a member of a class but just means that it's not exposed in the header file, so it's only visible within this module. Thanks. C++ How to correctly scale a set of bezier curves? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingOh so I should contain all the curves in a region and just map it from 0 to 1. Right, that makes sense. I guess come to think of it every font editor has set size glyphs, not sure why I thought I needed arbitrary sizes inside the editor. Thanks. Also @JoeJ, I didn't consider the baseline and letter metrics, thanks. C++ How to correctly scale a set of bezier curves? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingIt's not easy to explain or even draw or demonstrate, I'll try again but bear with me. When I'm making a font in my editor I do so by plotting single control vertices. For every click I create a single control vertex, after I plot 4 of them it makes a 4d bezier curve. Now how do I store the actual positions of the control vertex? Right now I just have them as exact mouse coordinates where I clicked on the screen so CV1(100px, 100px) CV2( 300px, 300px) and so on until CV4 which now makes a curve. These are all in screen space. Now I add a few more curve lets say which form a letter, so all these curves are being manipulated in pixel coordinates. So now if I want to actually use these letters and scale them to any font size, I can't use these screen coordinates anymore, I have to fit the letter in some scalable space like 0 to 1. So I have to convert all the vertex coordinates into that space. Right now I'm doing that manually, I just have a button in my editor called Normalize, so once I'm happy with the letter I've formed, I click normalize and it transforms all the vertices into normalized 0 to 1 space. My question was whether I can avoid doing the normalization manually and work in some space that is normalized from the get go. As in when I plot the point with the mouse, I wouldn't store the location of the mouse as the vertex coordinate, but right away transoform the mouse coordinate into a normalized space. I hope that clears up what my intentions with the question were. It's not really a problem as everything works just fine as of now, I just wanted to know if there is a more elegant way of doing this. C++ How to correctly scale a set of bezier curves? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingI mean suggesting to dig through a mature open source library code to see how it does a certain specific action is a bit of an overkill imo. If there are some docs you can point me to that deal with this issue, that's another thing. C++ How to correctly scale a set of bezier curves? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingBecause I can't normalize until I get the final shape of the letter, lets say letter A. Takes 3 curves, / -- \ , if I just renormalize after I add the second curve, the structure shifts to renormalized units, meaning it shifts to the center of the canonical box as I have it now, so I have to manually renormalize once I finalize the letter. That's how I have it now, it's a bit tedious and I was looking for a way to maybe use alternate coordinate systems to have a more elegant implementation, but not every piece of code has to be perfect I guess, just have to settle on this for now. C++ How to correctly scale a set of bezier curves? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingIt's my own framework, I'm not willing to use anything but the most low level libraries, as I'm not even using a graphics api. My question was how to represent the spline correctly internally so it could both be used in letter glyphs as well as modified in the editor. I've settled on having a duplicate structure at this point, I have one representation for a spline when I'm dragging around it's vertices in the editor and another normalized representation for when it's rendered, I was just looking for a single elegant implementation in this question. C++ How to correctly scale a set of bezier curves? VanillaSnake21 posted a topic in General and Gameplay ProgrammingI've got a working implementation of 4d and 1d bezier curve font generator, however I'm not sure how to now transition into actually making text. As of right now I create my font by clicking and dragging control vertices on the screen, once I have a few curves aligned I designate it as a letter and save the font. But I'm not sure what coordinate system to use to make sure that I can scale the existing curves to any size? I'm thinking to have the letter sit in like a canonical box with -1 to 1 in both x and y, but then how do I re normalize the curves and still have the ability to plot points directly on screen? As of right now the control vertices are in viewport space of [0 to screen dimention], so when I plot a point I just take the client mouse coordinates. But if I choose to project the final letter to -1 to 1 space, I can only do so once I draw all the curves for that letter as I need the bounding box of all the curves. So what is the right way to approach this? This is probably a bit convoluted, the point of the question is how do I transition from font editor to actual font. Do I have to unproject the curves when I open them in font editor and duplicate as working copies and only bake the final normalized letter into the font when I'm done editing it or else how would I do it at runtime? How to properly get delta time on modern hardware? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingThanks How to properly get delta time on modern hardware? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingI don't need real-time rendering, I should have mentioned it initially, I need smooth animations, real time rendering is obviously not happening with 120 ms cycle. I don't think this is something unusual. I need for example a graphic to play in the top right corner on game load, something like a snake eating it's tail. I don't have to render it out in realtime, just render it at game load and then playback at any fps when needed. What my mistake was is that I was in fact trying to render in real time by using time delta. How to properly get delta time on modern hardware? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingThe conceptual process looks like this while(!done) { CurrentTime = GetTheCurrentTime(); ElapsedTime = CurrentTime - PreviousTime; PreviousTime = CurrentTime; Update(ElapsedTime); Render(ElapsedTime); } You get the current time at the top of the loop, subtract it from whatever the time was at the top of the last iteration of the loop. That's your elapsed time. You're measure the time it took to go from the top of the loop, through all the instructions in the body, and get back to the top. There's no need to concern yourself with trivial details like the cycle count of a jump instruction. This sounds like a problem that you should fix. Updating animations (and game logic) via delta time is correct. But using a fixed timestep to do that is not going to solve the problem where it takes too long to render. What makes your game run like a slideshow is the fact that it takes 120ms to render stuff. That's like... 8 frames per second. If you subtract out the time spent rendering, your animations will make smaller adjustments between any two presented frames. But they will still only render at 8 FPS, and when you eventually fix the renderer or switch to a real one, all of your assumed times will be wrong. I understand the loop, the way I had it didn't include the Update and Render functions on purpose because I thought that it wasn't what I needed. I was in a way right because in my case I don't really need Render and Update timing. What I was asking is why can't I see the delta of the jump instruction reflected by QPC. But in any case it's not important I suppose. <But they will still only render at 8 FPS> No, they will render at whatever fps I instruct. 8 FPS would be realtime render, the buffered animation can be played at any fps (up to capture limit) after the fact. Edit: I mean playback, playback at any fps I need. How to properly get delta time on modern hardware? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingSo in other words you're saying it may help but it's not an ideal way? So what do you suggest I do then if not this? How to properly get delta time on modern hardware? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingMy animations are based on delta time, akin to this <scalar*dir.x*delta_time, .scalar*dir.y*delta_time> so because my render function takes more time than acceptable refresh it completely destroys the look of the animation. What I think I have to do is to untie my anims from realtime (timedelta). But you're saying fixed timestep is not the way to go? So what should I do? Edit: but is there really no way to avoid including my raster function in time loop? I don't even understand how a high rez timer timer does capture the time to get from bottom of loop to top, you're saying that the jmp instruction is taking less that one tick? And QPC can see 1 clock tick right? How to properly get delta time on modern hardware? VanillaSnake21 replied to VanillaSnake21's topic in General and Gameplay ProgrammingBut this will completely destroy my animations then. By the time I finish rendering the elapsed time delta will be so huge that it's just going to be like watching bad stop motion anim. I guess I'm using the wrong approach to begin with. I need to cancel out the render time delta right, so just using a fixed time step would do it?
https://www.gamedev.net/profile/99658-vanillasnake/?tab=friends
CC-MAIN-2018-05
refinedweb
3,184
59.03
inet6_rthdr_getflags() Get the flags for a segment in an IPv6 routing header Synopsis: #include <netinet/in.h> int inet6_rthdr_getflags(const struct cmsghdr *cmsg, int index); Arguments: - cmsg - A pointer to the Ancillary data containing the routing header. - index - A value between 0 and the number returned by inet6_rthdr_segments(). Library: libsocket Use the -l socket option to qcc to link against this library. Description: This function returns the flags for the segment specified by index in the routing header described by cmsg. The index must have a value between 0 and the number returned by inet6_rthdr_segments(). Addresses are indexed starting at 1, and flags starting at 0. They're consistent with the terminology and figures in RFC2460.: - IPV6_RTHDR_LOOSE or IPV6_RTHDR_STRICT for an IPv6 Type 0 routing header - -1 on error.
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/i/inet6_rthdr_getflags.html
CC-MAIN-2021-21
refinedweb
129
57.37
": [] } } public class users { public string[] links; public int chatter_count; public string[] moderators; public string[] staff; public string[] admins; public string[] global_mods; public string[] viewers; } No, the C# class you have doesn't really correlate correctly to the JSON: linksmember doesn't match the name _linksin JSON. _linksis defined as an array, but should be an object - it's {}in JSON, not []. chatters, which should be a custom class as well. Starting with Visual Studio 2013 Update 2, you can generate a C# class from a JSON sample. This is what it generated for your JSON: public class Rootobject { public _Links _links { get; set; } public int chatter_count { get; set; } public Chatters chatters { get; set; } } public class _Links { } public class Chatters { public string[] moderators { get; set; } public object[] staff { get; set; } public object[] admins { get; set; } public object[] global_mods { get; set; } public object[] viewers { get; set; } } As you can see, it maps moderators properly to a string[] but gets a bit confused and uses object[] for the rest, because the snippet contains to data for it to base the type on. If you can get a JSON sample with more data - ideally, with every field being present and having representative data - you'll get the best mapping. Also, you should change Rootobject to your own class name, of course. User or TwitchUser should do it. Once you have a class that corresponds correctly to your JSON, using JSON.NET to parse it is very simple: Rootobject yourData = JsonConvert.DeserializeObject<Rootobject>(inputJsonString); And you're done.
https://codedump.io/share/WZ6KJiiAJgr8/1/how-do-i-correctly-parse-a-json-string-using-newtonsoftjson-to-an-object-in-c
CC-MAIN-2017-09
refinedweb
253
57.4
A Kubernetes Admission Controller for verifying image trust with Notary. Portieris is a Kubernetes admission controller for the enforcment of image security policies. You can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different rules for different images. Portieris uses a Kubernetes Mutating Admission Webhook to modify your Kubernetes resources, at the point of creation, to ensure that Kubernetes runs only policy compliant images. When configured to do so, it can enforce Docker Content Trust with optional trust pinning, or can verify signatures created using RedHat's simple signing model and will prevent the creation of resources using untrusted or unverified images. If your cloud provider provides a Notary server (sometimes referred to as Content Trust), Portieris accesses trust data in that Notary server that corresponds to the image that you are deploying. In order to verify RedHat simple signatures they must be accessible via registry extension APIs or a configured signature store. When you create or edit a workload, the Kubernetes API server sends a request to Portieris. The AdmissionRequest contains the content of your workload. For each image in your workload, Portieris finds a matching security policy. If trust enforcement is enabled in the policy, Portieris pulls signature information for your image from the corresponding Notary server and, if a signed version of the image exists, creates a JSON patch to edit the image name in the workload to the signed image by digest. If a signer is defined in the policy, Portieris additionally checks that the image is signed by the specified role, and verifies that the specified key was used to sign the image. If simple signing is specified by the policy, Portieris will verify the signature using using the public key and identity rules supplied in the policy and if verified similarly mutates the image name to a digest reference to ensure that concurrent tag changes cannot influence the image being pulled. While it is possible to require both Notary trust and simple signing, the two methods must agree on the signed digest for the image. If the two methods return different signed digests, the image is denied. It is not possible to allow alternative signing methods. If any image in your workload does not satisfy the policy the entire workload is prevented from deploying. Portieris receives AdmissionRequests for creation of or edits to all types of workload. To prevent Portieris from impacting auto-recovery, it approves requests where a parent exists. Portieris' Admission Webhook is configured to fail closed. Three instances of Portieris make sure that it is able to approve its own upgrades and auto-recovery. If all instance of Portieris are unavailable, Kubernetes will not auto-recover it, and you must delete the MutatingAdmissionWebhook to allow Portieris to recover. Portieris exposes two metrics for monitoring the policy decisions made for workload images, these metrics are available on port 8080, and are exposed via annotations to prometheus. The metrics are portieris_pod_admission_decision_allow_count portieris_pod_admission_decision_deny_countand are counters that increment each time a decision is made. To locally view these metrics, they are available on the :8080/metricspath for each pod running. Portieris is installed using a Helm chart. Before you begin, make sure that you have Kubernetes 1.16 or above on your cluster and Helm 3.0 or above installed on your workstation. To install Portieris in the default namespace (portieris): tar xzvf portieris-0.9.4.tgz sh ./portieris/gencerts. The gencertsscript generates new SSL certificates and keys for Portieris. Portieris presents this certificates to the Kubernetes API server when the API server makes admission requests. If you do not generate new certificates, it could be possible for an attacker to spoof Portieris in your cluster. helm install portieris --create-namespace --namespace portieris ./portieris. You can use a different namespace if you choose, including an existing one if you omit the --create-namespaceoption, but note that the namespace forms part of the webhook certificate common name so you need to generate the certificate for the target namespace. sh portieris/gencerts. helm install portieris --create-namespace --namespace ./portieris. To manage certificates through an installed cert-manager, you do not need to unpack the charts in this case: helm install portieris --set UseCertManager=true portieris-0.9.4.tgz. By default, Portieris' admission webhook runs in all namespaces including its own install namespace, so that Portieris is able to review all the pods in the cluster. However, this can prevent the cluster from self healing in the event where Portieris becomes unavailable. Portieris also supports skipping namespaces with a certain label set. You can enable this by adding --set AllowAdmissionSkip=trueto your install command, but make sure to control who can add labels to namespaces and who can access namespaces with this label so that a malicious party cannot use this label to bypass Portieris. Another way to avoid update deadlock is to specify --set webHooks.failurePolicy=Ignore. You can uninstall Portieris, at any time, by running helm delete portieris --namespace. Uninstall Notes: 1. All your image security policies are deleted when you uninstall Portieris. 1. If you no longer need the namespace it will have to be manually deleted. i.e. kubectl delete namespace/1. If you have issues uninstalling portieris, via helm, try running the cleanup script: helm/cleanup.sh portieris Image security policies define Portieris' behavior in your cluster. You must configure your own policies in order for Portieris to enforce your desired security posture. Policies are described separately. You can configure Kubernetes RBAC rules to define which users and applications have the ability to modify your security policies. For more information, see the IBM Cloud docs. If Portieris is installed with AllowAdmissionSkip=true, you can prevent Portieris' admission webhook from being called in specific namespaces by labelling the namespace with securityenforcement.admission.cloud.ibm.com/namespace: skip. Doing so would allow pods in that namespace to recover when the admission webhook is down, but note that no policies are applied in that namespace. For example, the Portieris install namespace is configured with this label to allow Portieris itself to recover when it is down. Make sure to control who can add labels to namespaces and who can access namespaces with this label so that a malicious party cannot use this label to bypass Portieris. To report a security issue, DO NOT open an issue. Instead, send your report via email to [email protected] privately.
https://xscode.com/IBM/portieris
CC-MAIN-2021-10
refinedweb
1,069
53.71
On behalf of the global SAP Screen Personas product team, I am delighted to announce that SAP Screen Personas 3.0 has entered ramp-up. This is the long-awaited all-HTML version of SAP Screen Personas. SAP Screen Personas 3.0 is the most advanced SAP Screen Personas solution developed to date. It solidifies SAP Screen Personas’ position as the best-in-class personalization offering for SAP Dynpro Screens. We have completey rebuilt the product from the ground up, leveraging what we have learned over the past 18 months from our existing customers. Now, SAP Screen Personas is part of the SAP GUI for HTML (Web GUI), so there is no separate Silverlight client. We are releasing SAP Screen Personas 3.0 into ramp-up. “Ramp-up” means that the product is available for download to customers that meet the specific ramp-up criteria. By releasing SAP Screen Personas 3.0 to a carefully screened group of customers, we can ensure that the product meets SAP’s performance, scalability, and quality targets when it becomes generally available. >>> Tips on how to apply for SAP Screen Personas 3.0 ramp-up. <<< For more information about the ramp-up in general, you can read “Demystifying SAP’s Ramp-Up process”. SAP Screen Personas 3.0 builds on what we have learned from many customer deployments. Thank you to the dozens of customers that have provided constructive feedback and suggestions on how to improve the product. I expect many of you will see some of your ideas here. Here are some details on what to expect from SAP Screen Personas 3.0. Of course, the business value of SAP Screen Personas remains the same. - Improve SAP adoption by making screens easier to use - Increase employee productivity by simplifying screens to reduce typing - Reduce training costs by making SAP ERP screens more intuitive - Enhance data quality by reducing free text entry Next steps 1. Decide if you are ready to get started? 2. Apply for ramp-up through your account executive. We are looking for customers with a project scope aimed at going live with SAP Screen Personas within 3-4 months. For the SAP Screen Personas product team, Peter Spielvogel. Does the enhanced WebGUI use UI5 libs ? Can I migrate my old flavours and scripts ? Hi Owen, yes, there is a migration process for existing flavors in place. We are currently working on a tutorial to make it easy for existing SAP Screen Personas customers migrate: Getting Started – Personas 2.0 to 3.0 Flavor Migration – SAP Imagineering – SCN Wiki SAP Screen Personas now relies on the rendering of the SAP GUI for HTML. So while most of the SAP Screen Personas specific pieces are in fact build using SAPUI5 the SAP GUI for HTML parts leverages a different internal framework called Unified Rendering. When will the customers in ramp-up get access to the files? Dear Michael, As soon as a customer is accepted for ramp-up, and the restricted shipment department has the information to give you access for downloading. Best regards, Sylvia Barnard Congrats and all the best to the team (especially to Sylvia and Peter :-)) Hello, Is there any enablement material for SAP consultants to deliver SP 3.0? Thanks. Leonardo. There is a collection of ramp-up knowledge transfer (RKT) materials here: Regards, Peter Hi Peter, Is SAP Kernel 7.42 already available? We already downloaded Personas 3.0 but when searching for the SAP Kernel, we didn’t find it yet… Thx! Regards, Hans Dear Hans, The new SAP Kernel is not yet available. We will announce it as soon as the new SAP Kernel will be available for you – it is planned to have it soon. Best regards, Sylvia Hi Sylvia, Do you have any idea when? Next week? Next month? …? Thx! Regards, Hans Dear Hans, I knew that this question was going to come! 😀 It will definitely be this month, rather sooner than later…I simply cannot be more precise. Best regards, Sylvia Oké, Thx! 🙂 Maybe end of June: SAP Screen Personas 2015 Outlook (Roadmap) Hi Peter, When should we expect Personas 3.0 to be generally available for download? Regards, Manoj Nile Dear Manoj, Personas 3.0 will be general available after the ramp-up phase has ended successfully. As the Personas 3.0 ramp-up projects only started quite recently it will surely take a few months time before we will go GA with Personas 3.0. Best regards, Sylvia Hi all experts , I have a questions , Is possible use Smartphone or tablet for SAP SCREEN PERSONAS ? Officially this is not supported because Personas is a desktop tool at this point (plus it will run on a Windows 8.x tablet too). If you are using Personas 3.0 then you can try to create a flavor and see if it works on your mobile device. If it does, you’re in luck but if it doesn’t, there is no support for it right now. Peter, It’s very good to know that 3.0 has been announced, i would like to know a few more details. We are currently in the works and producing Personas screens and are facing performance issues due to Silver Light i believe. would like to know when will 3.0 available for deployment? If it’s anytime soon we would like to know all the details. Naveed Mohsin Delta Air Lines Program Manager Hi Naveed, We will release SAP Screen Personas 3.0 as generally available after we complete the ramp-up process. Our customers now control the timeline for this. Once several customers go live, then we will complete the ramp-up. This typically takes several months. Regards, Peter Hello, could you indicate a guide that inform what are the minimum pre requisites for personas 3.0 ? Thanks. Lucas Dear Lucas, Technical requirements You will find more information on Peter’s blog here: but please be aware that all our ramp-up slots are filled! Best regards, Sylvia Thank you Sylvia! Still in Ramp-up? or is it available already? Hi Ricardo, SAP Screen Personas 3.0 is still in ramp-up. When it becomes generally available, we will announce it on SCN. Regards, Peter Hello, it is possible to render SAP Screen Personas 3.0 with browser-able mobile devices? I’m not sure, because it is based on HTML 5, otherwise the mobile device need maybe LAN connection for rendering and therefore it is not possible for all browser-able mobile devices. Thanks for helping me. Regards Hi Benjamin, It appears that your question was answered here: Regards, Peter Hi Sylvia / Peter, We are attempting to get Personas 3.0 activated and are facing issues with respect to kernel as it appears from the screenshot of Health Check below. Please throw some light as to how this can be fixed: You need the latest DW patch which is patch 36 or 37. looks like you need to activate the /default_host/sap/bc/personas3 service as well in sicf Thank you Mike for the quick response. I activated the ‘personas3’ service through SICF. By DW patch do you mean kernel patch level need to be upgraded to 36 or 37? that is correct you need the DW SAR file rather than the full SAPEXE SAR file. Dear Manoj, yes, please update the kernel patch to the latest one. Also check the recommendations in the latest note. And as Michael pointed out, your SICF service seems to be off. Best regards, Sylvia Hi Sylvia / Mike / Peter I got the kernel update to latest patch level i.e. 37 however the health check tool still shows like this: Also let me know which latest note I should refer to. Regards, Manoj Nile All the notes listed here – Mike, While applying all notes from 2050325, I get the following message: System setting does not allow changes to be made to object NOTE 0002098656. However am not sure which object it is referring to. I need to share the object info with my basis colleague. Regards, Manoj Nile Go to transaction SE06 and click on the button ‘System Change Option”. Scroll down in the Software Component list to PERSONAS and check if it is set to ‘Modifiable’. My guess is that it isn’t, so change the setting. Do the same for namespace /PERSONAS/. Save and try to apply the notes again. Hi Tamas, Appreciate your response. This helped. Namespace was set to ‘Not modifiable’. You guys rock!. 🙂 Cheers, Manoj Nile Hello All, I was wondering if there were any updates regarding general availability of Personas 3.0? Thanks, Brian Dear Brian, We are still in ramp-up. As soon as this changes we will post it. Best regards, Sylvia Maybe end of June: SAP Screen Personas 2015 Outlook (Roadmap) Hello Mike / Sylvia / Peter I am done with all the configuration & notes implementation on our system to enable Personas 3.0. However I still can’t get blue line & personas icon [as mentioned in Personas 3.0 config guide]. Don’t know what is missing now…please guide Following is current status in health check tool: Also using the following URL to access Personas 3.0: Have you given yourself all the personas roles? /PERSONAS/ADMIN_ROLE is the one I use. Yes Mike…I have the role assigned to me. Just one clarification though we couldn’t find authorization object /pers/aobj hence we used S_PERSONAS which has all the activities listed in config guide. Is that OK? That sounds like you are using the your own defined role rather than the SAP provided role, as I don’t cover the security side of our systems so for the ramp-up we are only using the SAP Provided Roles. Mike I am using the standard role provided i.e. /PERSONAS/ADMIN_ROLE as you can see in the screenshot below: Have you removed the Z_PERSONAS role, could this be conflicting with the installation? I always find it’s best to run through the installation to the letter without doing anything custom and then trying the basic functionality before beginning to customize. If it still doesn’t work, then as a RAMP-UP customer I would suggest you raise an OSS Note. Z_PERSONAS is a role that was created for assigning S_PERSONAS object & associated activities with it [Personas config guide page # 6]. I removed it to check the impact but no luck 🙁 I guess, I will have to raise OSS then.. Also which browser are you using Firefox? I would suggest you try using Internet Explorer or Google Chrome I am using Google Chrome. hi Manoj, Please create an OSS message and we will take a look. There is no space here to discuss further. And please create a new discussion so that its helpful for others too Regards, Susahnt Hi Sushant, I have created an OSS message # 115621 / 2015 today. Please take a look & let me know. Regards, Manoj Nile Based on the requirement for Kernel 7.42 which is currently for systems on NetWeaver 7.4 I can’t imagine the support will change. I really hope it’s not just available for ERP 6.0/EHP7. We have a lot of interest in using Personas 3.0 and having to do an EhP upgrade will add a lot of work to the project. Personas is very much developed around the Kernel with it being overlayed on top of the webgui. An EhP upgrade is only as bad as an SPS if you decided to enable the functionality supplied in the SPS. Otherwise it is just a technical upgrade utilizing the most up to date NetWeaver release. To maximize the number of customers that can benefit from SAP Screen Personas 3.0, we are planning to make it available on kernels other than 7.42. This would happen some time after general availability of the product. More details here: Hi Mike / Sylvia / Tobias, We are implementing Personas 2.0 with SP level 0002 & support package SAPK-20002INPERSOS installed for a client. However we are facing issues launching personas with the URL generated during configuration. We have implemented all the notes required. However we still see the clientaccesspolicy.xml & crossdomain.xml not configured correctly as per the config check tool of Personas. The two xml files are available on the application server folder /usr/sap/ECD/DVEBMGS00/data Not sure if the files are in correct folder. Can one of you guide us here? I have more questions but I need quick one for this first. Regards, Manoj Hi manoj, As requested before too, please post your questions in a discussion thread and besides this post if for Personas 3.0. Your question has already been asked and answered in several thread. Please do a quick search before posting. And if you do not get answer, based on other posts, create a new discussion thread. Sushant (P.s. I think you need to restart your server instance before those files become accessible) Hello Guru’s Nice knowing that SP3.0 will solve a lot of problems, but as far as I know it’s still in Ramp-up. Is there any idea when this fase is closed an SP 3.0 will becom avaliable for al SAP customers?? Please inform Hi Kees, We do not have a firm date. The release date is planned for after our ramp-up customers go live, to demonstrate that the product is sufficiently robust for general availability. I expect this will be in the first half of this year. The complete outlook is here: Regards, Peter Hi Peter, Could you please let me know if Screen Personas 3.0 is still in Ramp-up phase as I could see in Product availability matrix in Marketplace that the general availability is planned in Q2 2015? Thanks & Regards, Aishwarya. Dear Aishwarya, The information shown on the Service Marketplace is correct. We are in ramp-up with SAP Screen Personas. Best regards, Sylvia Dear Sylvia, Thank you for your update. Regards, Aishwarya. Hi there, We’re in the process of implementing Personas 2.0 and realised that from an administration perspective, there is very little in the way of reporting, e.g. User reports (users by group assignment, users by flavor assignment, users by Personas role assignment) and Flavor reports (flavor by name and/or description, flavor by group, flavor by transactional content) etc. Is this lack of basic reporting functionality addressed in the new 3.0 release? Kind regards, Robin We are planning to include more reporting capabilities in SAP Screen Personas 3.0 and in subsequent service packs. Regards, Peter Is there any update for the release date of SAP Personas 3.0 ? I am searching the SAP sites for a news about it but could not see any new information. As a SAP Personas developer from a company which is not participating in the ramp-up period, I’m working on Personas 2.0 screens at the same time waiting for the new version for using the enhancements. Hi Eralper, Personas 3.0 will probably end the ramp-up fase end of this month. So, SP01 is almost there! 😉 Cheers! Regards, Hans Hi Peter, We have a customer who is interested in using SAP Personas 3.0. However from the SAP PAM, it looks like v3.0 goes out of Mainstream Maintenance at exactly the same time as v2.0 and v1.0 on 31.12.2017. This seems a bit strange. Can you confirm if it is indeed correct? Thanks, Eamonn. Hi Eamonn, This is not correct. It will be later than that. We’ll update the information when we release SAP Screen Personas 3.0 as generally available. Regards, Peter Thanks Peter, Is there a steer on what that date will be, which I can relay to the customer? Rgds, Eamonn. Hello, there is no changing, the date is still on 31.12.2017. So, the date remains? Thanks for an Information. Dear Benjamin, yes, the date is currently 31.12.2017 as we went GA and shipped a first Service Pack in 2015. As soon as we bring out a new SP, for example in 2016, it will get prolonged automatically to 31.12.2018 and so on. Best regards, Sylvia Hi Peter, Are we still on schedule for the release of SAP Screen Personas 3.0? It was scheduled for June, 30 right? Thanks & Regards Helly Dear Heely, Please wait for our announcement that will come soon. Best regards, Sylvia Dear Helly, I hope you saw the announcement in the meantime that SAP Screen Personas 3.0 is GA: SAP Screen Personas 3.0 Is Generally Available, including Service Pack 1 Best regards, Sylvia Hi Sylvia, We just upgraded the Screen Persona to Version 3.0 in central personas system. I want to know the way to establish connection to backend system from central personas system in Screen Personas 3.0. The backend systems are ERP EHP5 systems. In Old version, the connection is provided using a dashboard at the main screen and by clicking on it, it takes us to the backend server to create flavour. When I open the Perosna from SICF service Personas, it takes me directly to the Persona server. I am confused how can I connect to backend server to create flavour. Please advise. Thanks & Regards, Aishwarya. Hi Tamas, I too saw that thread. But solution is not provided there. Please help. Thanks & Regards, Aishwarya. Dear Aishwarya, Please be aware that there are technical prerequisites necessary, this was what Tamas was referring to. You need EHP7 for Personas 3. It will not work on EPH5. Have a look at the pre- and post installation checklist: Pre and post installation Checklist – SAP Imagineering – SCN Wiki. There you can also find a link to the configuration guide. Best regards, Sylvia Hi Sylvia, Thank you so much for your update. Regards, Aishwarya. You are warmly welcome, Aishwarya! Dear Aishwarya, I would like to add three links: 1. Link to the Personas 3 SP1 note: 2. Link to subscribe to the Personas Support Newsletter: SAP Screen Personas – Support News 3. Link to Configuration Guide: SAP Screen Personas 3.0 – SAP Help Portal Page Best regards, Sylvia
https://blogs.sap.com/2014/08/13/announcing-sap-screen-personas-30/
CC-MAIN-2020-24
refinedweb
3,049
75.4
As your application grows, you would like to move from one big .cr file to separating your code into smaller files. You can use require from your main .cr file to add code from other files: require „./tools/*“ This statement will add code from the tools directory, relative to your main .cr file. This will help to separate your application “physically”, but it also may be desirable to separate it logically – maybe some of the code can be reused in other projects, and maybe you also want to avoid namespace collisions. What is a namespace collision? If a method or class name, or constant name is used two times in different source files in the global namespace, you will have a collision. How is the compiler supposed to know, which method / class you actually want of the two which are defined? To avoid namespace collisions you separate your code into modules. for instance: module Debug class SampleClass end SAMPLE_CONSTANT = “some value” macro samplemacro(some_variable) #some macro code end #your code goes here end Accessing code in modules Now you can access the code (after having required the file the module is defined in) like this: sampleinstance = Debug::SampleClass.new() p Debug::SAMPLE_CONSTANT Debug.samplemacro Note that a macro is called with a “.” syntax, not a “:” syntax. You can consider it as a method of the Module. Including modules If, on the other hand, you find the additional notation tiresome, you can include the module using it’s method name like this: include Debug then you will be able to access the methods, classes, constants and macros without any additional notation – as if the code was literally copy & pasted into your other source file: sampleinstance = SampleClass.new() p SAMPLE_CONSTANT samplemacro
https://pi3g.com/2019/01/22/using-modules-in-crystal/
CC-MAIN-2019-39
refinedweb
289
63.09
Hi, On 18.4.12 16:14, Julian Reschke wrote: > Hi there, > > to get more of the TCK passing, we really need to get the path handling > done. > > Reminder, this involves: > > 1) handling expanded names, > > 2) handling identifier pathsa, and > > 3) mapping (and occasionally creating) namespace prefixes. I created OAK-61 for this last week. > > We have decided early on that the MK persists prefixes, not namespace > names, plus mapping information (*). It may seem like this makes things > easier, but it does not; the namespace mapping used by the JCR client > may be different from the one in the MK, so remapping needs to happen in > any case. Its not about making it easier, its about specialising for the most common case which is that there are only a few re-mappings if any at all. > > Let's use "jcr path" and "mk path" (and * name) as terminology here. > > Where does this mapping need to happen? oak-jcr or oak-core? The realisation of the map should live in oak-core. The effective remapping of names should be done in oak-jcr. Implementation wise that map can be represented by a table with three rows: prefix of the mk name (let's call that mk prefix), namespace and jcr prefix. Since JCR namespace mapping is bijective, we can do all necessary resolutions: from expanded form to qualified form and back and also from mk name to jcr name and back. Furthermore it also covers all the necessary remapping operations. Finally, if we adhere to the convention that the mk prefix is the same as the jcr prefix at the time the respective namespace was first used, we get a representation of mk paths which largely coincides with the one of jcr paths. Michael > > Also, I'd like to point out that spi-commons already has necessary > concepts for this; do we *really* want to reinvent that? > > Best regards, Julian > > > > > > (*) I'm still unhappy about that, for the record.
http://mail-archives.apache.org/mod_mbox/jackrabbit-oak-dev/201204.mbox/%3C4F8F304A.4040402@apache.org%3E
CC-MAIN-2015-40
refinedweb
328
70.73
or is not set, (this is not true for e.g. sys, __main__, builtins and other key modules where reloading is frowned upon).. Changed in version 3.4: Returns None when called instead of raising NotImplementedError.. Changed in version 3.4: Returns None when called instead of raising NotImplementedError.. namespace package. If loader is None and portion is the empty list then no loader or location for a namespace package were found (i.e. failure to find anything for the module). Changed in version 3.4: Returns (None, []) instead of raising NotImplementedError. (see importlib.util.module_for_loader())..module_for_loader() decorator can handle the details for __package__. The loader used to load the module. The importlib.util.module_for_loader() decorator can handle the details for __package__. Changed in version 3.4: Raise ImportError when called instead of NotImplementedError. An optional method which when implemented calculates and returns the given module’s repr, as a string. The module type’s default repr() will use the result of this method as appropriate. New in version 3.3. Changed in version 3.4: Made optional instead of an abstractmethod.__. Changed in version 3.4: Raises IOError instead of NotImplementedError. An abstract base class for a loader which implements the optional PEP 302 protocol for loaders that inspect modules. Return the code object for a module. None should be returned if the module does not have a code object (e.g. built-in module). ImportError is raised if loader cannot find the requested module. Note While the method has a default implementation, it is suggested that it be overridden if possible for performance. Changed in version 3.4: No longer abstract and a concrete implementation is provided.. Changed in version 3.4: Raises ImportError instead of NotImplementedError. An abstract method to return a true value if the module is a package, a false value otherwise. ImportError is raised if the loader cannot find the module. Changed in version 3.4: Raises ImportError instead of NotImplementedError..load. Changed in version 3.4: Raises ImportError instead of NotImplementedError.(). Reads path as a binary file and returns the bytes from it.. If the path cannot be handled, IOError is raised. New in version 3.3. Changed in version 3.4: Raise IOError instead of NotImplementedError. Optional abstract method which returns the modification time for the specified path. Deprecated since version 3.3: This method is deprecated in favour of path_stats(). You don’t have to implement it, but it is still available for compatibility purposes. Raise IOError if the path cannot be handled. Changed in version 3.4: Raise IOError instead of NotImplementedError. when called.method find_module(fullname, path=None)¶. - classmethod invalidate_caches()¶ Calls importlib.abc.PathEntryFinder.invalidate_caches() on all finders stored in sys.path_importer_cache. Changed in version 3.4: Calls objects in sys.path_hooks with the current working directory for '' (i.e. the empty string)..Execution. New in version 3.4. otherwise). Container of extra module-specific data for use during loading (or None). (__cached__) String for where the compiled module should be stored (or None). (__package__) (Read-only) Fully-qualified name of the package to which the module belongs as a submodule (or None). (Read-only)..modules. A decorator for importlib.abc.Loader.load_module() to set the __package__ attribute on the returned module. If __package__ is set and has a value other than None it will not be changed..
http://docs.python.org/dev/library/importlib.html
CC-MAIN-2013-48
refinedweb
560
54.79
This section provides an overview of what sympy is, and why a developer might want to use it. It should also mention any large subjects within sympy, and link out to the related topics. Since the Documentation for sympy is new, you may need to create initial versions of those related topics. Alternate ways to install SymPy from conda. conda is the recommended way, but these are some alternate ways. Including: git, pip, etc. Sympy is a Python library for doing symbolic — rather than numeric — calculations. For instance, consider the quadratic equation in x, x**2 + HELLO * x + WORLD = 0 where HELLO and WORLD are constants. What's the solution of this equation? In Python, using Sympy we can code, from sympy import symbols, solve, latex x, HELLO, WORLD = symbols('x, HELLO, WORLD') print ( latex ( solve ( x**2 + HELLO * x + WORLD, x ) ) ) Since I made a call to Latex the solutions are almost ready for publication! Sympy provides the two of them packed in a list. Here's one: If you need to do more work on an expression then you would leave out the call to latex. The easiest and recommended way to install SymPy is to install Anaconda. If you already have Anaconda or Miniconda installed, you can install the latest version with conda: conda install sympy Another way of installing SymPy is using pip: pip install sympy Note that this might require root privileges, so one might acually need sudo pip install sympy Most linux distributions also offer SymPy in their package repositories. For Fedora one would install SymPy with sudo dnf install python-sympy sudo dnf install python3-sympy The first one installs the python 2 version of the package, the latter python 3. On OpenSuse the respective commands are: sudo zypper install python-sympy sudo zypper install python3-sympy The packages for OpenSuse 42.2 seem rather outdated, so one of the first methods should be prefered. Sympy is made for symbolic math, so let's have a look at some basic integration and differentiation. from sympy import symbols, sqrt, exp, diff, integrate, pprint x, y = symbols('x y', real=True) pprint(diff(4*x**3+exp(3*x**2*y)+y**2,x)) pprint(diff(4*x**3+exp(3*x**2*y)+y**2,y)) pprint(integrate(exp(x*y**2)+sqrt(x)*y**2,x)) pprint(integrate(exp(x*y**2)+sqrt(x)*y**2,y)) First we import the necessary functions from sympy. Next we define our variables x and y. Note that these are considered complex by default, so we tell sympy that we want a simple example by making them real. Next we differentiate some expression with respect to x and then y. Finally we integrate some expression, again with respect to x and then y. The call of pprint ensures that our functions get written in some nice human readable style.
https://riptutorial.com/sympy
CC-MAIN-2019-22
refinedweb
483
61.87
1.2 Version Feedback: axis-dev@ws.apache.org This guide records some of the rationale of the architecture and design of Axis. Axis consists of several subsystems working together, as we shall see later. In this section we'll give you an overview of how the core of Axis works. Put simply, Axis is all about processing Messages. When the central Axis processing logic runs, a series of Handlers are each invoked in order. The particular order is determined by two factors - deployment configuration and whether the engine is a client or a server. The object which is passed to each Handler invocation is a MessageContext. A MessageContext is a structure which contains several important parts: 1) a "request" message, 2) a "response" message, and 3) a bag of properties. More on this in a bit. There are two basic ways in which Axis is invoked: In either case, the Axis framework's job is simply to pass the resulting MessageContext through the configured set of Handlers, each of which has an opportunity to do whatever it is designed to do with the MessageContext. The server side message path is shown in the following diagram. The small cylinders represent Handlers and the larger, enclosing cylinders represent Chains (ordered collections of Handlers which will be described shortly). A message arrives (in some protocol-specific manner) at a Transport Listener. In this case, let's assume the Listener is a HTTP servlet. It's the Listener's job to package the protocol-specific data into a Message object (org.apache.axis.Message), and put the Message into a MessageContext. The MessageContext is also loaded with various properties by the Listener - in this example the property "http.SOAPAction" would be set to the value of the SOAPAction HTTP header. The Transport Listener also sets the transportName String on the MessageContext , in this case to "http". Once the MessageContext is ready to go, the Listener hands it to the AxisEngine. The AxisEngine's first job is to look up the transport by name. The transport is an object which contains a request Chain, a response Chain, or perhaps both. A Chain is a Handler consisting of a sequence of Handlers which are invoked in turn -- more on Chains later. If a transport request Chain exists, it will be invoked, passing the MessageContext into the invoke() method. This will result in calling all the Handlers specified in the request Chain configuration. After the transport request Handler, the engine locates a global request Chain, if configured, and then invokes any Handlers specified therein. At some point during the processing up until now, some Handler has hopefully set the serviceHandler field of the MessageContext (this is usually done in the HTTP transport by the "URLMapper" Handler, which maps a URL like "" to the "AdminService" service). This field determines the Handler we'll invoke to execute service-specific functionality, such as making an RPC call on a back-end object. Services in Axis are typically instances of the "SOAPService" class (org.apache.axis.handlers.soap.SOAPService), which may contain request and response Chains (similar to what we saw at the transport and global levels), and must contain a provider, which is simply a Handler responsible for implementing the actual back end logic of the service. For RPC-style requests, the provider is the org.apache.axis.providers.java.RPCProvider class. This is just another Handler that, when invoked, attempts to call a backend Java object whose class is determined by the "className" parameter specified at deployment time. It uses the SOAP RPC convention for determining the method to call, and makes sure the types of the incoming XML-encoded arguments match the types of the required parameters of the resulting method. The Message Path on the client side is similar to that on the server side, except the order of scoping is reversed, as shown below. The service Handler, if any, is called first - on the client side, there is no "provider" since the service is being provided by a remote node, but there is still the possibility of request and response Chains. The service request and response Chains perform any service-specific processing of the request message on its way out of the system, and also of the response message on its way back to the caller. After the service request Chain, the global request Chain, if any, is invoked, followed by the transport. The Transport Sender, a special Handler whose job it is to actually perform whatever protocol-specific operations are necessary to get the message to and from the target SOAP server, is invoked to send the message. The response (if any) is placed into the responseMessage field of the MessageContext, and the MessageContext then propagates through the response Chains - first the transport, then the global, and finally the service. Axis comprises several subsystems working together with the aim of separating responsibilities cleanly and making Axis modular. Subsystems which are properly layered enable parts of a system to be used without having to use the whole of it (or hack the code). The following diagram shows the layering of subsystems. The lower layers are independent of the higher layers. The 'stacked' boxes represent mutually independent, although not necessary mutually exclusive, alternatives. For example, the HTTP, SMTP, and JMS transports are independent of each other but may be used together. In fact, the Axis source code is not as cleanly separated into subsystems as the above diagram might imply. Some subsystems are spread over several packages and some packages overlap more than one subsystem. Proposals to improve the code structure and make it conform more accurately to the notional Axis subsystems will be considered when we get a chance. Handlers are invoked in sequence to process messages. At some point in the sequence a Handler may send a request and receive a response or else process a request and produce a response. Such a Handler is known as the pivot point of the sequence. As described above, Handlers are either transport-specific, service-specific, or global. The Handlers of each of these three different kinds are combined together into Chains. So the overall sequence of Handlers comprises three Chains: transport, global, and service. The following diagram shows two sequences of handlers: the client-side sequence on the left and the server-side sequence on the right. A web service does not necessarily send a response message to each request message, although many do. However, response Handlers are still useful in the message path even when there isn't a response message, e.g. to stop timers, clean up resources, etc. A Chain is a composite Handler, i.e. it aggregates a collection of Handlers as well as implementing the Handler interface as shown in the following UML diagram:. Back to message processing -- a message is processed by passing through the appropriate Chains. A message context is used to pass the message and associated environment through the sequence of Handlers. The model is that Axis Chains are constructed offline by having Handlers added to them one at a time. Then they are turned online and message contexts start to flow through the Chains. Multiple message contexts may flow through a single Chain concurrently. Handlers are never added to a Chain once it goes online. If a Handler needs to be added or removed, the Chain must be 'cloned', the modifications made to the clone, and then the clone made online and the old Chain retired when it is no longer in use. Message contexts that were using the old Chain continue to use it until they are finished. This means that Chains do not need to cope with the addition and removal of Handlers while the Chains are processing message contexts -- an important simplification. The deployment registry has factories for Handlers and Chains. Handlers and Chains can be defined to have 'per-access', 'per-request', or 'singleton' scope although the registry currently only distinguishes between these by constructing non-singleton scope objects when requested and constructing singleton scope objects once and holding on to them for use on subsequent creation requests. A Targeted Chain is a special kind of chain which may have any or all of: a request Handler, a pivot Handler, and a response Handler. The following class diagram shows how Targeted Chains relate to Chains. Note that a Targeted Chain is an aggregation of Handlers by virtue of extending the Chain interface which is an aggregation of Handlers. A service is a special kind of Targeted Chain in which the pivot Handler is known as a "provider". Now let's consider what happens when a fault occurs. The Handlers prior to the Handler that raised the fault are driven, in reverse order, for onFault (previously misnamed 'undo'). The scope of this backwards scan is interesting: all Handlers previously invoked for the current Message Context are driven. Need to explain how "FaultableHandlers" and "WSDD Fault Flows" fit in. The current structure of a MessageContext is shown below. Each message context may be associated with a request Message and/or a response Message. Each Message has a SOAPPart and an Attachments object, both of which implement the Part interface. The typing of Message Contexts needs to be carefully considered in relation to the Axis architecture. Since a Message Context appears on the Handler interface, it should not be tied to or biassed in favour of SOAP. The current implementation is marginally biassed towards SOAP in that the setServiceHandler method narrows the specified Handler to a SOAPService. Axis has an abstract AxisEngine class with two concrete subclasses: AxisClient drives the client side handler chains and AxisServer drives the server side handler chains. The relationships between these classes is fairly simple: The EngineConfiguration interface is the means of configuring the Handler factories and global options of an engine instance. An instance of a concrete implementation of EngineConfiguration must be passed to the engine when it is created and the engine must be notified if the EngineConfiguration contents are modified. The engine keeps a reference to the EngineConfiguration and then uses it to obtain Handler factories and global options. The EngineConfiguration interface belongs to the Message Flow subsystem which means that the Message Flow subsystem does not depend on the Administration subsystem. The Administration subsystem provides a way of configuring Axis engines. The configuration information an engine needs is a collection of factories for runtime artefacts such as Chains and SOAPServices and a set of global configuration options for the engine. The Message Flow subsystem's EngineConfiguration interface is implemented by the Administration subsystem. FileProvider enables an engine to be configured statically from a file containing a deployment descriptor which is understood by the WSDDDeployment class. SimpleProvider, on the other hand, enables an engine to be configured dynamically. WSDD is an XML grammer for deployment descriptors which are used to statically configure Axis engines. Each Handler needs configuration in terms of the concrete class name of a factory for the Handler, a set of options for the handler, and a lifecycle scope value which determines the scope of sharing of instances of the Handler. The structure of the WSDD grammar is mirrored by a class hierarchy of factories for runtime artefacts. The following diagram shows the classes and the types of runtime artefacts they produce (a dotted arrow means "instantiates"). The XML syntax of a SOAP message is fairly simple. A SOAP message consists of an envelope containing: The only body entry defined by SOAP is a SOAP fault which is used for reporting errors. Some of the XML elements of a SOAP message define namespaces, each in terms of a URI and a local name, and encoding styles, a standard one of which is defined by SOAP. Header entries may be tagged with the following optional SOAP attributes: So the SOAP message model looks like this: The classes which represent SOAP messages form a class hierarchy based on the MessageElement class which takes care of namespaces and encodings. The SOAPHeaderElement class looks after the actor and mustUnderstand attributes. During deserialization, a parse tree is constructed consisting of instances of the above classes in parent-child relationships as shown below. The class mainly responsible for XML parsing, i.e. deserialization, is DeserializationContext ('DC'). DC manages the construction of the parse tree and maintains a stack of SAX handlers, a reference to the MessageElement that is currently being deserialized, a stack of namespace mappings, a mapping from IDs to elements, a set of type mappings for deserialization (see Encoding Subsystem) and a SAX event recorder. Elements that we scan over, or ones for which we don't have a particular deserializer, are recorded - in other words, the SAX events are placed into a queue which may be 'played back' at a later time to any SAX ContentHandler. Once a SOAPEnvelope has been built, either through a parse or manual construction by the user, it may be output using a SerializationContext (also see Encoding Subsystem). MessageElements all have an output() method which lets them write out their contents. The SAX handlers form a class hierarchy: and stack up as shown in the following diagram: Initially, the SAX handler stack just contains an instance of EnvelopeHandler which represents the fact that parsing of the SOAP envelope has not yet started. The EnvelopeHandler is constructed with a reference to an EnvelopeBuilder, which is the SAX handler responsible for parsing the SOAP envelope. During parsing, DC receives the events from the SAX parser and notifies either the SAX handler on the top of its handler stack, the SAX event recorder, or both. On the start of an element, DC calls the SAX handler on the top of its handler stack for onStartChild. This method returns a SAX handler to be used to parse the child, which DC pushes on its SAX handler stack and calls for startElement. startElement, amongst other things, typically creates a new MessageElement of the appropriate class and calls DC for pushNewElement. The latter action creates the parent-child relationships of the parse tree. On the end of an element, DC pops the top SAX handler from its handler stack and calls it for endElement. It then drives SAX handler which is now on the top of the handler stack for onEndChild. Finally, it sets the MessageElement that is currently being deserialized to the parent of the current one. Elements which are not defined by SOAP are treated using a SOAPHandler as a SAX event handler and a MessageElement as a node in the parse tree. Encoding is most easily understood from the bottom up. The basic requirement is to transform between values of programming language datatypes and their XML representations. In Axis, this means encoding (or 'serializing') Java objects and primitives into XML and decoding (or 'deserializing') XML into Java objects and primitives. The basic classes that implement these steps are serializers and deserializers. Particular serializers and deserializers are written to support a specific XML processing mechanism such as DOM or SAX. So serializer factories and deserializer factories are introduced to construct serializers and deserializers for a XML processing mechanism which is specified as a parameter. As is apparent from the above class diagrams, each pair of Java type and XML data type which needs encoding and decoding requires specific serializers and deserializers (actually one of each per XML processing mechanism). So we need to maintain a mapping from a pair of Java type and XML data type, identified by a QName, to a serializer factory and a deserializer factory. Such a mapping is known as a type mapping. The type mapping class hierarchy is shown below. Notice how the default type mapping instantiates the various serializer and deserialiser factories. There is one final level of indirection. How do we know which type mapping to use for a particular message? This is determined by the encoding which is specified in the message. A type mapping registry maintains a map from encoding name (URI) to type mapping. Note that the XML data type QNames are defined by the encoding. So, in summary, to encode a Java object or primitive data value to a XML datatype or to decode the latter to the former, we need to know: The WSDL Tools subsystem contains WSDL2Java and Java2WSDL. The Axis runtime does not depend on these tools -- they are just there to make life easier for the user. This tool takes a description of a web service written in WSDL and emits Java artefacts used to access the web service. There are three layers inside the tool: tbd. The client side Axis processing constructs a Call object with associated Service, MessageContext, and request Message as shown below before invoking the AxisClient engine. An instance of Service and its related AxisClient instance are created before the Call object. The Call object is then created by invoking the Service.createCall factory method. Call.setOperation creates a Transport instance, if a suitable one is not already associated with the Call instance. Then Call.invoke creates a MessageContext and associated request Message, drives AxisClient.invoke, and processes the resultant MessageContext. This significant method calls in this sequence are shown in the following interaction diagram. While most pluggable components infrastructures (jaxp/xerces, commons-logging, etc) provide discovery features, it is foreseen that there are situations where these may evolve over time. For example, as leading-edge technologies are reworked and adopted as standards, discovery mechanisms are likely to change. Therefore, component discovery must be relegated to a single point of control within AXIS, typically an AXIS-specific factory method. These factory methods should conform to current standards, when available. As technologies evolve and/or are standardized, the factory methods should be kept up-to-date with appropriate discovery mechanisms. We need to consider what's going on here. If you take a sequence of Handlers and then introduce a distribution boundary into the sequence, what effect should that have on the semantics of the sequence in terms of its effects on message contexts? The following diagram shows a client-side Handler sequence invoking a server-side Handler sequence. We need to consider how the semantics of this combined sequence compares with the sequence formed by omitting the transport-related Handlers.
http://ws.apache.org/axis/java/architecture-guide.html
crawl-001
refinedweb
3,042
52.6
Regardless of where you look, web services are a hot subject, and not just on resumés. Something of a mystique surrounds web services; like the latest hot video game, everybody wants one, even if nobody is quite sure what one is. Ah, to be a kid again, wanting something just because I want it. Who am I kidding? I'm still that way, obsessing over games such as Stargate: The Alliance for XBox, and books such as Practical Guide to Red Hat Linux: Fedora Core and Red Hat Enterprise Linux. However, unlike businesses, my pockets aren't full of much other than lint, which means that I have to wait, whereas businesses can just whip out the checkbook. Alright, because everybody wants a web service, there are only two questions. The first question is, what is a web service? And the second question is, how does a web service work? Let's start by answering the first question: What is a web service? A web service is a piece of software designed to respond to requests across either the Internet or an intranet. In essence, it is a program that executes when a request is made of it, and it produces some kind of result that is returned to the caller. This might sound a lot like a web page, but there is a significant difference: With a web page, all the caller is required to know about the page is the URI. With a web service, the caller needs to know both the URI and at least one of the web service's public methods. Consider, for example, the C# web service shown in Listing 7-5. Knowing the URI, which, incidentally, is, isn't enough. It is also necessary to know that the public method is called monster. using System; using System.Collections; using System.ComponentModel; using System.Data; using System.Diagnostics; using System.Web; using System.Web.Services; namespace AJAX4 { public class myService : System.Web.Services.WebService { public myService() { string monster() { return "Grrr!"; } } } Great, now we have a web servicewhoopee, we're done, right? Wrong! Having a web service is only part of the battle; it falls into the same category as having a swimming pool and not knowing how to swim. Yeah, it is impressive, but deep down, there is a nagging feeling of feeling stupid for the unnecessary expense. What is needed is the knowledge of how to invoke the web service. Impressive word, invoke; it conjures up images of smoke, candles, pentagrams, and demons, the kind that could rip a soul from a body and torment it for eternityor, at least, during the annual performance evaluation. As with invoking a demon, invoking a web service is all a matter of how things are phrased, knowing both what to ask and how to ask. In both cases, mistakes can lead to, um, undesirable results. Unlike demonology, which requires the use of Latin (of the Roman variety, not the swine variety), invoking a web service requires the use of a dialect of XML called SOAP. And as with everything even remotely computer related, SOAP is an acronym standing for Simple Object Access Protocol. Fortunately, with SOAP, the little elves who name things didn't lie: It is actually simple, and who would have thought it? The basic structure of a SOAP request is an envelope, which is also a pretty good analogy of not only what it is, but also what it does. It serves as a wrapper around the request and any parameters being passed to the web service. Consider the example of SOAP shown in Listing 7-6, whose purpose is to invoke the web service from Listing 7-5. <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="" xmlns:xsd="" xmlns: <soap:Body> <monster xmlns="" /> </soap:Body> </soap:Envelope> Doesn't look like much does it? All that the SOAP envelope does is specify the method, monster, along with a namespacewhich, in this case, is the default, basically a placeholder. If the method requires any parameters, they would be passed as children of that method. For example, let's add the method shown in Listing 7-7 to the web service from Listing 7-5. [WebMethod] public string echo(string text) { return text; } Beyond changing the method from monster to echo, there is the little problem of the parameter named text. Because of the parameter, it is necessary to change the body of the SOAP request to the one shown in Listing 7-8.. <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="" xmlns:xsd="" xmlns: <soap:Body> <echo xmlns=""> <text>Dijon Ketchup</text> </echo> </soap:Body> </soap:Envelope> Now that we've got the basics down of the SOAP envelope (yes, there is more) let's consider how to deliver it to the web service. Unfortunately, FedEx and UPS are both out of the question, although it might be fun to call and ask the rates for delivering a SOAP envelope to a web serviceat least, until they got a restraining order. This leaves the XMLRequest object as the best available resource: neither rain, nor snow, and all that stuff. Everything necessary to deliver the SOAP envelope is already in there, so the only issue is how to send our SOAP envelopeafter all, there are no mailboxes with little red flags. Fortunately, we have a good chunk of the code down already, including the SOAP envelope itself. Instead of beating around the bush, Listing 7-9 shows the client-side JavaScript necessary to invoke the monster method of our web service. try { objXMLHTTP = new XMLHttpRequest(); } catch(e) { objXMLHTTP = new ActiveXObject('Microsoft.XMLHTTP'); } objXMLHTTP.onreadystatechange = asyncHandler; objXMLHTTP.open('POST', '', true); objXMLHTTP.setRequestHeader('SOAPAction',''); objXMLHTTP.setRequestHeader('Content-Type','text/xml'); objXMLHTTP.send(soap); function asyncHandler() { if(objXMLHTTP.readyState == 4) alert(objXMLHTTP.responseText); } The first noticeable change from the earlier asynchronous request (refer to Listing 7-2) is that the method has been changed from GET to POST; this is because it is necessary to post the SOAP envelope to the web service. This leads to the second change; the URI in the open method is now the address of the web service instead of a filename. Perhaps the biggest changes are the addition of two setRequestHeader methods. The first one sets the SOAPAction to the web service's namespace and the method to be invoked. It is important to note that it is absolutely necessary for the SOAPAction header to be identical to the method in the SOAP envelope. If they aren't identical, it won't work. Personally, I spent a lot of time chasing my tail trying to figure out what was wrong whenever the methods were different, but, then, I was raised by wolves and have a strong tendency to chase my tail. The second setRequestHeader is the easy one; all that it does is set the Content-type to text/xml. As if we'd be doing anything else. But this raises the question of what the response from the web service will look like, beyond being XML. Well, there are essentially two possible responses; either it worked or it didn't. If it worked, it will look a lot like the response shown in Listing 7-10. However, there could be some differences. For instance, it could be an XML document instead of the "Grrr!", but this is only an example, so why strain ourselves? <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="" xmlns:xsi="" xmlns: <soap:Body> <monsterResponse xmlns=""> <monsterResult>Grrr!</monsterResult> </monsterResponse> </soap:Body> </soap:Envelope> The second possible response is broken into two parts. The first part is called a SOAP fault. Basically, it means that something is wrong with the request, such as the methods not being identical. Listing 7-11 shows a SOAP fault that was created when I changed the SOAPAction in the request header to xxxx when it should have been monster. <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="" xmlns:xsi="" xmlns: <soap:Body> <soap:Fault> <faultcode>soap:Client</faultcode> <faultstring>) </faultstring> <detail/> </soap:Fault> </soap:Body> </soap:Envelope> The final two possible responses also cover errors. For example, there could be errors that are not handled correctly in the web service. This could result in the web service returning text concerning the error instead of either a SOAP response or a SOAP fault. It is important to take this into consideration when creating a web service. Although the language C# was used here for writing the web services, it is important to remember that these techniques can be applied to a whole slew of languages. In the end, the choice of language is yours, or it belongs to the powers-that-be, or somewhere in the hierarchy.
http://www.yaldex.com/ajax_tutorial_2/ch07lev1sec6.html
CC-MAIN-2019-18
refinedweb
1,467
62.88
Trapping Rain Water in a Matrix Introduction What do you mean by a matrix? Matrix is a two-dimensional structure identified by two indices, one for rows and another for columns. Every matrix is a two-dimensional array but not vice versa. Trapping rainwater is a famous and routine question asked in interviews of many companies like Google, Amazon, Samsung, Adobe, etc. This particular problem we will be heading to is of the topic matrix. Matrix will contain the height of each cell that can hold the rainwater, and we have to return the total volume of water it can trap after raining. Let’s move to our problem statement for better understanding so that you can think of the appropriate approach to solve the problem. Problem Statement So the problem statement is pretty simple, we are given an mxn matrix of positive integers where each element of the matrix represents the height of the bars; we aim to find the total volume of water trapped in the matrix after raining. Let’s understand the idea of how to solve this problem through an example. This is like a container, and the water-filled in the container will be till the minimum boundary. So in this container, first, I will be processing the outside boundaries because everything will be stored inside these boundaries, and slowly, I will push in the boundary. After visiting the boundary, we can see that the minimum value we are getting from here is ‘1’. After visiting ‘1’, we will reduce the boundary and include ‘13’ into the boundary. The minimum boundary value is ‘12’ and updates the minimum value from ‘1’ to ‘12’. We will push the boundary by checking the adjacent of each boundary having a value as ‘12’. If the adjacent ‘12’ is already inside the boundary, it will do nothing; otherwise, add the boundary if the boundary value is greater than ‘12’. If the adjacent boundary value is less than ‘12’, then water will be filled inside that boundary. The amount of water-filled will be the minimum value of boundary - value of adjacent boundary(12-10=2). The adjacent boundary of ‘10’ left outside is also less than ‘12’, i.e., ‘8’. So the amount of water filled will be 12 - 8 = 4. The adjacent boundary of ‘8’ left outside the boundary is also less than ‘12’, i.e., ‘4’. So the amount of water filled will be 12 - 4 = 8. The total amount of rainwater trapped will be 2+4+8=14 Now the minimum boundary value will get updated to ‘13’ and check for the adjacent boundaries that are unvisited or visited. We can see that all the adjacent boundaries of ‘13’ are visited. So will get the total volume of rainwater trapped in between the bars after raining. Note: Please try to solve Trapping rainwater in a matrix on CodeStudio before heading into the solution. Approach: Priority Queue Priority queue gonna help us to store the minimum value of the boundary. Every element in the priority queue is assigned with some priority, which will help fetch the minimum value more efficiently. Implementation import java.util.ArrayList; import java.util.PriorityQueue; import java.util.Queue; import java.util.List; public class Trappingrainwater1 { // To define the coordinate of the cells class Pair{ int x; int y; int height; public Pair(int x, int y, int height){ this.x =x ; this.y =y; this.height = height; } } public int trapRainWater(int[][] mat) { // Base conditions if(mat == null || mat.length == 0 || mat[0].length == 0) return 0; int m = mat.length; int n = mat[0].length; // To keep an update regarding the visited boundaries boolean[][] visited = new boolean[m][n]; // To store the minimum value from the boundary PriorityQueue<Pair> pq = new PriorityQueue<>((a,b)->(a.height - b.height)); // To process the boundaries for(int i = 0; i<m;i++){ for(int j = 0; j<n; j++){ if(i==0 || j==0 || i == m-1 || j == n-1){ visited[i][j] = true; pq.offer(new Pair(i,j,mat[i][j])); } } } // For the BFS traversal, we can go UP, DOWN, LEFT, and RIGHT. // To explore the adjacent cells we are creating the 2D direction matrix. int[][] dirs = new int[][]{{0,1},{1,0},{0,-1},{-1,0}}; // To store the total volume of rain water int res = 0; while(!pq.isEmpty()){ Pair cur = pq.poll(); for(int i = 0 ; i<dirs.length; i++){ int newX = cur.x + dirs[i][0]; int newY = cur.y + dirs[i][1]; if(newX < 0 || newY <0 || newX>=m || newY>=n || visited[newX][newY]){ continue; } visited[newX][newY] = true; // When the neighour of the minimum boundary value is less than its value res += Math.max(0,cur.height - mat[newX][newY]); pq.offer(new Pair(newX,newY,Math.max(mat[newX][newY],cur.height))); } } return res; } } Analysis of Complexity Time Complexity: row:matrix height, col: matrix width --> step1: O(rowcol log(2m+2n-4)) = O(rowcol log(m+n)). Space Complexity: boolean matrix and PriorityQueue--> O(m*n) FAQ’S 1). What is the motive behind the problem of Trapping rainwater in a matrix? Trapping rainwater means finding the total volume of water stored between the bars after raining. 2). What do you mean by Priority Queue? A priority queue is the extension of the queue data structure where each element is associated with some priority. Through priority, vital data gets through faster. 3). What is the time complexity to solve this problem? Time Complexity: (row * col * log(col * row)) Auxiliary Space: O(col * row) Key Takeaways In this article, we have covered the problem of Trapping rainwater in a matrix using a priority queue in the Java language. With the help of the pictorial representations and examples, it will be clear to you. You can practice more questions similar to this like Trapping Rain Water, Container With Most Water, and many more. If you are not familiar with priority queues, you can visit CodeStudio for more information. For more practice, you can use CodeStudio for various DSA questions typically asked in interviews. It will help you in mastering efficient coding techniques. Keep Coding!!!
https://www.codingninjas.com/codestudio/library/trapping-rain-water-in-a-matrix
CC-MAIN-2022-27
refinedweb
1,026
65.52
Created on 2010-06-27 01:38 by ehohenstein, last changed 2013-03-22 17:27 by kristjan.jonsson. This issue is now closed. This error is unfortunately difficult to reproduce. I've only seen it happen on Windows XP running on a dual core VMWare VM. I haven't been able to reproduce it on a non-VM system running Windows 7. The only way I've been able to reproduce it is to run the following unit test repeatedly on the XP VM repeatedly until it fails: import unittest import urllib2 class DownloadUrlTest(unittest.TestCase): def testDownloadUrl(self): opener = urllib2.build_opener() handle = opener.open('', timeout=60) handle.info() data = handle.read() self.assertNotEqual(data, '') if __name__ == "__main__": unittest.main() This unit test obviously depends on a web server running on localhost. In the test environment where I was able to reproduce this problem the web server is Win32 Apache 2.0.54 with mod_php. When the test fails, it fails with Windows error code 10035 (WSAEWOULDBLOCK) being generated by the call to the recv() method rougly once every 50-100 times the test is run. The following is a the final entry in the stack when the error occurs: File "c:\slave\h05b15\build\Ext\Python26\lib\socket.py", line 353, in read (self=<socket._fileobject ...03B78170>, size=1027091) data = self._sock.recv(left) The thing to note is that the socket is being created with a timeout of 60. The implementation of the socket.recv() method in socketmodule.c in the _socket import module is to use select() to wait for a socket to become readable for socket objects with a timeout and then to call recv() on the socket only if select() did not return indicating that the timeout period elapsed without the socket becoming readable. The fact that Windows error code 10035 (WSAEWOULDBLOCK) is being generated in the sock_recv_guts() method in socketmodule.c indicates that select() returned without timing out which means that Windows must have indicated that the socket is readable when in fact it wasn't. It appears that there is a known issue with Windows sockets where this type of problem may occur with non-blocking sockets. It is described in the msdn documentation for WSAAsyncSelect() (). The code for socketmodule.c doesn't seem to handle this type of situation correctly. The patch I've included with this issue report retries the select() if the recv() call fails with WSAWOULDBLOCK (only if MS_WINDOWS is defined). With the patch in place the test ran approximately 23000 times without failure on the system where it was failing without the patch. I also see this issue on occasion on windows XP SP 3, using python 2.6.5 to fetch large files via http. The error is infrequent, but it is happening in my situation without a VM. > It appears that there is a known issue with Windows sockets where this > type of problem may occur with non-blocking sockets. It is described in > the msdn documentation for WSAAsyncSelect() > (). That documentation doesn't seem to describe the same kind of situation; it is about delayed notification through Windows messages (if you read the sequence they given in example, it's quite logical why it can fail). Have you tried instrumenting sock_recv_guts() and dumping the actual return values and errnos (from both internal_select() and recv())? Actually, it's possible that select(2) incorrectly reports sockets as ready for reading : for example if the socket receives data, but then discards it because of an invalid checksum (and I guess you're more likely to get this type of problem on a VM or while copying large files). So it means we should indeed retry on a socket with timeout... But we must take care not to exceed the original timeout, so we must measure the time taken by each select() call. Unfortunately, select doesn't necessarily update the timeout variable with the remaining time, so we can't rely on this. This would mean having the select enclosed within gettimeofday and friends, which seems a bit overkill... > Unfortunately, select doesn't necessarily update the timeout variable with the remaining time, so we can't rely on this. This would mean having the select enclosed within gettimeofday and friends, which seems a bit overkill... How is it "overkill" to be correct? > Unfortunately, select doesn't necessarily update the timeout variable > with the remaining time, so we can't rely on this. This would mean > having the select enclosed within gettimeofday and friends, which > seems a bit overkill... Well, given the general cost of Python function calls and bytecode interpretation, it would probably not be much of a runtime overhead. So it's mainly some additional, not very exciting code to write :-) (luckily, in 3.2 we have a cross-platform gettimeofday() abstraction in pytime.h which will ease things quite a bit) Yes, I was concerned with the runtime overhead (especially under VMs, where clocks are emulated) and strace output ;-) As for the dull code writting, I could give it a shot unless someone else wants to go ahead. @ehohenstein: sorry, I hadn't seen you'd already submitted a patch, so you may as well update it to take into account what was said above. Also, please note that this issue is not specific to windows, so it would be nice to fix it for Unices too. Here is a proof of concept patch for Python 3.2. It only wraps recv() and recv_into(). recvfrom() and recvfrom_into() should receive the same treatment, at least. Here is an updated patch wrapping all variants of recv() and send(), except sendall() which already has its own retry loop. Committed in 3.2 in r85074. I don't plan to backport it, since the _PyTime_gettimeofday abstraction is not available on earlier versions. I have ported the changes related to this problem from the 3.2 branch to the 2.6 version of socketmodule.c. The changes are attached as a diff from Python 2.6.2. The changes apply to all platforms but I've only tested them on Windows. The _PyTime_gettimeofday method is not available in 2.6 which is why the changes in 3.2 weren't originally back ported. I admit to adding a disgusting hack which was to copy some of the _PyTime_gettimeofday interface code from 3.2 to the socketmodule.c file and implement it using the time.time() method, falling back to the crt time() method. It's not as efficient as the implementation in 3.2 but I believe it should be equally correct. The motivation for doing this was that I continued to see 10035 errors happening using Python 2.6 though in different code paths. Specifically, errors were being thrown when uploading a file using a PUT request using httplib which calls sendall(). It's noteworthy that analysing the changes made for this issue to Python 3.2 revealed that no change was made to the sendall() method. sendall() is actually problematic in that the timeout on the socket may actually be exceeded without error if any one call to select() doesn't exceed the socket's timeout but in aggregate the calls to select do wait longer than the timeout. The same generic solution that was applied to the other socket methods is not appropriate for sendall(). I elected to continue this behavior by just checking for EAGAIN and EWOULDBLOCK if the socket has a positive timeout value and the call to send failed and continuing the select/send loop in that case. As far as I can tell, sendall() will still fail with these recoverable errors in Python 3.2. I won't feel bad if this patch is rejected for 2.6 but the changes to sendall() should really be considered for the 3.2 branch. 2.6 is closed except for security fixes, which this does not seem to be. If the problem is in 2.7, then it potentially could be fixed there, but with the same caveats. I will let Antoine reclose if he thinks appropriate. I will not bother backporting myself but an other core developer can do it if (s)he desires. I don't have the expertise to backport it myself, but the problem certainly is still present in python 2.7.1 on Windows 7. It is especially pronounced when using threading to read from multiple url files. I bumped into this issue at one of my customers that use Python to control a machine through Beckhoff EtherCAT. The Python code communicates with the Beckhoff EtherCAT program using TCP/IP. They use non blocking sockets like so; s = select.select([self._socket], [], [], timeout)[0] if not s: raise NoData self._socket.recv(length) They also found that the recv occasionally raises a 10035. I changed the code in to; s = select.select([self._socket], [], [], timeout)[0] if not s: raise NoData try: buffer_ = self._socket.recv(length) except socket.error as inst: if (gaius.utils.Platform.WINDOWS and 10035 == inst.args[0]) or \ (gaius.utils.Platform.LINUX and 11 == inst.args[0]): raise NoData So this issue applies also to sockets without timeout, albeit it can be worked around easily. Also note that this also applies to Linux as the man page of select states in the BUG section;. Note that Linux select is not Posix compliant as Posix states; A descriptor shall be considered ready for reading when a call to an input function with O_NONBLOCK clear would not block, whether or not the function would transfer data successfully. See MSDN select says; For other sockets, readability means that queued data is available for reading such that a call to recv, WSARecv, WSARecvFrom, or recvfrom is guaranteed not to block. says (for Windows 95); The Winsock select() API might fail to block on a nonblocking socket and return WSAEWOULDBLOCK as an error code when either send() or recv() is subsequently called. For example, select() may return indicating there is data to read, yet a call to recv() returns with the error code WSAEWOULDBLOCK, indicating there is no data immediately available. Windows NT 4.0 does not exhibit this behavior. Finally I have 2 questions; Is this select behavior on Windows 'normal'? Noting that it is not documented. or does it behaves this way due to crippled NIC's or drivers (VMWare)? Can this behavior be reproduced? I need this for automatic testing. Now these exceptional paths cannot be tested. Backport to 2.7 should be done: See Issue #16274. I will backport this. I have recently seen this happening in 2.7 in our company and it would make sense to fix this before 2.7.4 is released. Here is a patch for 2.7 Since 2.7 doesn't have pytime.c, we export floattime() as _Py_floattime out of time.c New changeset 8ec39bfd1f01 by Kristján Valur Jónsson in branch '2.7': Issue #9090 : Error code 10035 calling socket.recv() on a socket with a timeout New changeset f22b93b318a5 by Kristján Valur Jónsson in branch '2.7': issue #9090 : Limit the fix to windows since getting a portable simple New changeset 0f5e1e642dc3 by Kristján Valur Jónsson in branch '2.7': issue #9090 : Take the same approach for socketmodule as daytimemodule fixed for 2.7
http://bugs.python.org/issue9090
CC-MAIN-2016-40
refinedweb
1,885
66.74
Aug 24, 2006 12:58 PM|mstasi|LINK I have an Asp.Net app in .Net 1.1 and using the cache application block. I have one main web app and many sub web apps. The main web app is obviously a virtual folder in IIS and each sub web app is a virtual folder under the main one. There is only one web.config file in the main web app. When I added the cache app block I needed to add the cachingconfiguration.config and an entry for it in the web.config file in the main web app virtual folder. When I added cache code to the sub web apps then it forced me to add a web.config and cachingconfiguration.config in their virtual folder also. Is there a way to only have one web.config and cachingconfiguration.config? I do not want to add the files to each virtual folder, only the main one. This is only this way in development and not in production. Thanks Contributor 2363 Points Sep 21, 2006 07:28 AM|Ganesh@Nilgris|LINK what is this cachingconfiguration.config ? is it a default file or created manually None 0 Points Apr 24, 2007 08:17 AM|dhaval4friends|LINK Hi I am facing problem in using ‘GetStoredProcCommand’ I also struggled a lot to use ‘GetStoredProcCommandWrapper’ finally I came to know about ‘GetStoredProcCommand’ in v 2.0 But I am not able to use that also. I have EL v2.0 (Microsoft Enterprise Library January 2006) and I am using following namespaces using System.Data.Common; using Microsoft.Practices.EnterpriseLibrary.Data; using Microsoft.Practices.EnterpriseLibrary.Common; Please let me know if you can help me. If possible give me some source code also. Thanks in advance None 0 Points Apr 24, 2007 08:24 AM|dhaval4friends|LINK sorry for the misplaced post plz ignore it... 3 replies Last post Apr 24, 2007 08:24 AM by dhaval4friends
https://forums.asp.net/t/1020494.aspx?cachingconfiguration+config
CC-MAIN-2018-26
refinedweb
323
69.48
Type: Posts; User: math8 I think I can use an ArrayList and an 'enhanced for loop' to loop through all its entries. I was hoping there was a way to refer to the 'argument's number' of a method, but apparently not. My question is, is there a way for the method to loop using each of its arguments one by one? For instance, in the code below, is there a way to write the line "M[i] = argi;" in a correct way so... I am not trying to change the number of arguments the method has, I am just wondering, just like you can find the number of entries an array arr has by typing arr.length, is there a way to find the... hmmm, by thinking about this a little bit more, I think this should work: double[][] M = new double[2][a.length]; M[0]=a; M[1]=b; But still, let say, instead of two arrays, I have two... say I have two int arrays of length 3 (ex: a = [ 1 2 3] and b = [4 5 6]) I would like a method that returns the 2x3 Matrix: M = 1 2 3 4 5 6. I have tried using Thanks, that makes sense I know a virtual function is a member function of a class whose functionality can be overridden in its derived class. If you have a virtual statement, do you need to assign your destructor as a... alright, but aside from that, what is wrong with what I wrote before? hmmm, let see. Maybe something like: for(int i=0; i<=n/10;i++) { if((n-(i*10))%25 ==0) { x=i; //number of 10's z=(n-(i*10))/25 ; //number... I wasn't sure what you meant by "evenly". But first of, intuitively, for this problem to make sense , the "cents part" n must be either 0 or >=10 and at least n must be a multiple of 5 . So the only... when you say "evenly" do you mean by solving: 25x+10x=y, where y is the cents value? The code below gives me the maximum odd number and the minimum even number of a list of numbers that the user type. However, for the loop to stop the user needs to type 1000. Is there a better way... I think the code is performing how it should, I think the problem is behind the logic (I guess that's the design). I am hoping to get 23 10's, one 5, two 1's ,2 quarters and 3 dimes. The logic... yeah, it should, my bad. Unfortunately, it doesn't. My coin/money change code works when there can be an exact change each time, i.e. when the 1 cent option is available. However, when the change options are only $10, $5, $1, 25 cents and 10 cents, it... Never mind! found the error!! I needed to change the () into a [] in eKey(i), because eKey is not a function . I have two functions bool check_key(string cKey, string eKey) and bool check_digit(char digit1, char digit2), and I have declared both of them globally (is this the right terminology?) right after... great, thanks! I am trying to add matrices a and b. I am getting an error in the "add" function, probably because I have m[i] in it, and m is not an array. What is the correct way of writing the "add" member... hmmm, I rearranged some things around, and this is what I came up with. #include<iostream> #include<fstream> using namespace std; ifstream file1; ofstream file2; void menu0(); Thanks, I will keep that in mind. However, I am running into this new problem: In the void Frequency function that I had , besides calculating the frequency, it was also calculating the number of... Alright, I see what you are saying. But technically speaking, in my method, I write the same amount of text "cout<< / file2<<" - wise (except that I have 2 different but similar functions to call as... Could you give me an example of a function of type double, float,or char that can return some text both for the screen and for the ouput file. I tried earlier, and I couldn't make it work, that's why... I just figured out what was wrong. I cannot use the same function Frequency when printing out on the output file. I had to create a copy of the Frequency function and in the new one, I replaced... Thanks, I can see it is an error. If I wanted to get the results of void Frequency(alphabet[p], num, a) on the screen, I would just write Frequency(alphabet[p], num, a) ; (with no cout<<). However,...
http://forums.codeguru.com/search.php?s=d3611d1a3618309afd7118296f307532&searchid=2455213
CC-MAIN-2014-10
refinedweb
791
80.41
iHazeFactoryState Struct Reference [Mesh plugins] This interface describes the API for the sprite factory mesh object. More... #include <imesh/haze.h> Detailed Description This interface describes the API for the sprite factory mesh object. When multiple hulls are used, they must have the same number of vertices, vertices are taken to be numbered in the same ordering. The factory also implements the iHazeHullCreation interface Definition at line 126 of file haze.h. Member Function Documentation add a new layer - increasing the layer count Get the topmiddle point of the texture. Get the number of layers of hulls. Get the convex hull used for layer. Get the layer scale. Get the point of origin. Set the topmiddle point of the texture. Set the convex hull to be used as layer. Increfs the hull. Set the texture percentage used by a layer (total of 1.0 is max). Set the point of origin, the center of the texture. The documentation for this struct was generated from the following file: Generated for Crystal Space 2.0 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/new0/structiHazeFactoryState.html
CC-MAIN-2016-07
refinedweb
178
69.68
I have an EventListener plugin who's on_activated method isn't getting called on startup in ST3. Is this as designed in ST3? If so, what's the recommended way of getting a plugin to do something on startup? I have the following, which feels a bit dirty and causes a visual hiccup between the time of Sublime starting and the timeout firing: def plugin_loaded(): sublime.set_timeout(force_active) def force_active(): view_id = sublime_api.window_active_view(sublime_api.active_window()) sublime_plugin.on_activated(view_id) ...and actually, I ran into a different but related issue - there's a different plugin I'm using that seems to be crashing over what appears to be a race condition between its EventListener.on_activated and its plugin_loaded function. Its on_activated is making the assumption that plugin_loaded is always run beforehand, but that doesn't seem to be the case (restarting ST3 multiple times in a row shows it sometimes is and sometimes isn't). on_activated isn't called during startup. I'll make a change for the next build so a call is synthesised, which makes more sense. In general, I believe on_activated cannot be called before plugin_loaded, unless another plugin is changing focus in its plugin_loaded call, which would cause on_activated to be called before the next plugin receives its plugin_loaded call. Great, thanks! I would expect the plugin_loaded function is always run before any other plugin implementations (commands, events). It may even happen that events depend on the plugin_loaded function to have run before (e.g. if it imports modules or defines globals). So, maybe it would be possible keep track on all the event-firing calls within plugin_loaded and fire them after all plugins finished loading? I can imagine that being complicated, though. There is no neat solution unfortunately. If events were queued up, then that would break other contracts. For example, on_modified is currently always called before run_command returns, and queueing up the event would break this. The reality is that Sublime Text relies on plugins being good citizens in any case (e.g., not doing blocking io in the main thread, not doing a lot of computation in the main thread), and not doing things like changing input focus in response to plugin_loaded in part of that: I can't imagine how that would ever be a good user experience, for example. I'm not quite sure why jburnett would have been seeing on_activated getting called before plugin_loaded sometimes, while a plugin switching input focus in response to plugin_loaded() would cause this, I'd be surprised if such a plugin even exists at this point in time. A bit more detail (one was a red herring): One plugin (my own hacky thing) was doing an open() in plugin_loaded. This apparently causes a change to sys.modules (at least it does on Windows), which led to the crash in 3010 (and fixed in 3011 - again, thanks!). The red herring was that another (public) plugin after that was crashing in its on_activated because its on_loaded never got the chance to run (because of the crash above). I'm still not sure why it sometimes happened and sometimes didn't. Is plugin load order somewhat guaranteed to be in some kind of order? In any case, I'll chalk to latter bit to it being late There seems to be a bit more happening out of order... I'm seeing things like on_selection_modified happening before on_activated. Here's a boiled down example: import sublime_plugin class TestListener(sublime_plugin.EventListener): def on_activated(self, view): print('test.on_activated') def on_selection_modified(self, view): print('test.on_selection_modified') def on_selection_modified_async(self, view): print('test.on_selection_modified_async') Dump that into a test py file, make sure remember_open_files is enabled so you get a buffer opened by default, and restart ST3 (3012 here). After restart, immediately click in the open view, and then open the console. You'll see that on_selection_* was called before on_activated. ...is this as designed now? One more (again, this is different from ST2, but not sure if this is as designed now)... When on_activated is called after open a file, the passed in view does not yet exist in the any of the views you get from the global sublime.windows(). import sublime import sublime_plugin class TestListener(sublime_plugin.EventListener): def on_activated(self, view): this_view_id = view.id() all_view_ids = set([view.id() for window in sublime.windows() for view in window.views()]) print('view exists:', this_view_id in all_view_ids, this_view_id, all_view_ids) In ST2, the above will print "view exists: True" when you open a new file. ST3 will print twice, the first is False, the second True. I am running build 3103 and I noticed that when launching ST3, the plugin's EventListener on_activated is called for the active view, but on_activated_async doesn't - is this as designed? I guess one way to workaround it for now, is to have an on_activated listener that will use sublime.set_timeout_async to execute the desired code. on_activated sublime.set_timeout_async Yeah, thats what I did in the end - it just seemed weird. this would be a really easy fix @jps and @wbond to integrate the workaround I suggested directly - lines 106 and 107 in sublime_plugin.py are: if "on_activated" in dir(obj): on_activated_targets.append(obj) and line 131 does the calling.
https://forum.sublimetext.com/t/on-activated-on-startup/8747/9
CC-MAIN-2017-09
refinedweb
871
56.25
Important: Please read the Qt Code of Conduct - QtWebEnginePage.runJavaScript In the C++ class you can use a callback to get the value returned from the javascript but it doesn't appear that you can get a return value/object with PySide2. Any suggestions would be great. Here is a basic example... When using QT, temp = None. When the same script is run using Selenium, temp = 5. import sys import time from PySide2 import QtCore, QtGui, QtWidgets from PySide2 import QtWebEngineWidgets from PySide2.QtWebEngineWidgets import QWebEngineView app = QtWidgets.QApplication(sys.argv) view = QtWebEngineWidgets.QWebEngineView() view.setUrl(QtCore.QUrl("")) view.show() temp = view.page().runJavaScript('return 1+4') print(temp) import sys import time from selenium import webdriver browser_options = webdriver.FirefoxOptions() browser_options.add_argument('-headless') browser = webdriver.Firefox(firefox_options=browser_options) browser.get('') temp = browser.execute_script('return 1+4') print(temp) browser.close() browser.quit() I found the answer here: I faced the same issue, but in PySide. Back in PySide (1) I could do this: self.page().mainFrame().evaluateJavaScript("getHits()") but now, there is no more evaluateJavaScript() but a runJavaScript. This function can take only one or two params. Since it is asynchronous, you apparently have to pass a function as the second parameter. When the script finishes execution, it will call that second function and you will then get the data. Argh... seems PYQT5 can do this, but not PySide2. Too bad since doing this was easy in PySide. @skidooman We convered this recently in. A couple of posts there claim workarounds, I didn't look at them but you might want to....
https://forum.qt.io/topic/100303/qtwebenginepage-runjavascript/1
CC-MAIN-2021-39
refinedweb
262
61.83
Two Ways to Use Ceylon in the Browser Two Ways to Use Ceylon in the Browser Up until now, using Ceylon in a browser wasn't really straightforward. The good news is, Ceylon 1.2.1 brought two major features that overcome this problem. Let's check them out. Join the DZone community and get the full member experience.Join For Free Deploying code to production can be filled with uncertainty. Reduce the risks, and deploy earlier and more often. Download this free guide to learn more. Brought to you in partnership with Rollbar. As you might (or might not) know, Ceylon is more than a JVM language. It has been possible to compile Ceylon code to JavaScript for a long time, but other platforms such as Dart or LLVM are around the corner. Having a JS backend means that you can actually write Ceylon code that can be run in a web browser, giving the opportunity to share code between the server and the client. The web IDE is a very good example of this. Up until now, using Ceylon in a browser wasn't really straightforward, though. The good news is, Ceylon 1.2.1 brought two major features that overcome this problem: Let's see how these fit together. Creating a New Project First things first, we need an empty project that will hold two modules: com.acme.clientis a native("js")module that imports ceylon.interop.browser: native("js") module com.acme.client "1.0.0" { import ceylon.interop.browser "1.2.1-1"; } com.acme.serveris a native("jvm")module that imports ceylon.net: native("jvm") module com.acme.server "1.0.0" { import ceylon.net "1.2.1"; } Serving Ceylon Modules In order to run com.acme.client in a browser, we have to import it from an HTML file. The recommended way is to use RequireJS: <html lang="en"> <head> <meta charset="utf-8"> <title>Hello from Ceylon!</title> </head> <body> <div id="container"> </div> <script src="//requirejs.org/docs/release/2.1.22/minified/require.js"></script> <script type="text/javascript"> require.config({ baseUrl : 'modules' }); require( [ 'com/acme/client/1.0.0/com.acme.client-1.0.0' ], function(app) { app.run(); } ); </script> </body> </html> Here, we tell RequireJS to use the prefix modules when downloading artifacts from the server, which means we need something on a server that will listen on /modules, parse module names, and serve the correct artifact. Option 1: Using a Ceylon Server Ceylon SDK 1.2.1 introduced a new endpoint named RepositoryEndpoint, that uses a RepositoryManager to look up module artifacts in one or more Ceylon repositories like the compiler or Ceylon IDE do: import ceylon.net.http.server.endpoints { RepositoryEndpoint } import ceylon.net.http.server { newServer } "Run the module `com.acme.server`." shared void run() { value modulesEp = RepositoryEndpoint("/modules"); value server = newServer { modulesEp }; server.start(); } By default, this endpoint will look for artifacts in your local Ceylon repository, but also in the compiler's output directory. This greatly simplifies our development workflow, because each time we modify files in com.acme.client, Ceylon IDE will rebuild the JS artifacts, which can then be immediately refreshed in the browser. Finally, to serve static files (HTML, CSS, images, etc.), we need a second endpoint that uses serveStaticFile to look up files in the www folder, and serve index.html by default: function mapper(Request req) => req.path == "/" then "/index.html" else req.path; value staticEp = AsynchronousEndpoint( startsWith("/"), serveStaticFile("www", mapper), {get} ); value server = newServer { modulesEp, staticEp }; If we start the server and open, we can see in the web inspector that the modules are correctly loaded: Option 2: Using Static HTTP Servers Option 1 is interesting if you already have a backend written in Ceylon. Otherwise, it might be a little too heavy because you're basically starting a Ceylon server just to serve static files. Luckily, there's a way to create a standard Ceylon repository containing a module and all its dependencies: ceylon copy. ceylon copy --with-dependencies com.acme.client This command will copy the module com.acme.client and all its dependencies to a given folder (by default ./modules), preserving a repository layout like the one RequireJs expects. This means we can start httpd or nginx and bind them directly on the project folder. Modules will be loaded from ./modules, we just have to configure the server to look for other files in the www directory. Attention though, each time we modify dependencies of com.acme.client, we will have to run ceylon copy again to update the local repository. Option 2 is clearly the way to go for client apps that don't require a backend. Like option 1, it doesn't force you to publish artifacts in ~/.ceylon/repo. Of course, if you are running a local Ceylon JS application, and your browser allows you to include files directly from the filesystem, you can also avoid the HTTP server and load everything for the filesystem. Using Browser APIs Now that we have bootstrapped a Ceylon application running in a browser, it's time to do actual things that leverage browser APIs. To do this, we'll use the brand new ceylon.interop.browser which was introduced in the Ceylon SDK 1.2.1 a few days ago. Basically, it's a set of dynamic interfaces that allow wrapping native JS objects returned by the browser in nice typed Ceylon instances. For example, this interface represents the browser's Document: shared dynamic Document satisfies Node & GlobalEventHandlers { shared formal String \iURL; shared formal String documentURI; ... shared formal HTMLCollection getElementsByTagName(String localName); shared formal HTMLCollection getElementsByClassName(String classNames); ... } An instance of Document can be retrieved via the toplevel object document, just like in JavaScript: shared Document document => window.document; Note that window is also a toplevel instance of the dynamic interface Window. ceylon.interop.browser contains lots of interfaces related to: Making an AJAX call, retrieving the result, and adding it to a <div> is now super easy in Ceylon: import ceylon.interop.browser.dom { document, Event } import ceylon.interop.browser { newXMLHttpRequest } shared void run() { value req = newXMLHttpRequest(); req.onload = void (Event evt) { if (exists container = document.getElementById("container")) { value title = document.createElement("h1"); title.textContent = "Hello from Ceylon"; container.appendChild(title); value content = document.createElement("p"); content.innerHTML = req.responseText; container.appendChild(content); } }; req.open("GET", "/msg.txt"); req.send(); } Going Further Dynamic interfaces are really nice when it comes to using JavaScript objects in Ceylon. They are somewhat similar to TypeScript's type definitions, which means in theory, it is possible to use any JavaScript framework directly from Ceylon, provided that someone writes dynamic interfaces for its API. The Ceylon team is currently looking for ways to load TypeScript definitions and make them available to Ceylon modules, which would greatly simplify the process of adding support for a new framework/API. The complete source code for this article is available on GitHub. A live example is available on the Web IDE. This article was written by Bastien Jansen Deploying code to production can be filled with uncertainty. Reduce the risks, and deploy earlier and more often. Download this free guide to learn more. Brought to you in partnership with Rollbar. Published at DZone with permission of Gavin King , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/ceylon-in-the-browser-again
CC-MAIN-2018-30
refinedweb
1,243
50.33
soya-user, hello pyopengl-users. This message isn't _totally_ relevant to pyopengl-users, but it's not unlikely that you guys have the requesite knowledge to help me fix this problem. I've been trying to get Soya3d () working for a while, but it's got problems with initialization. Soya3d talks to the OpenGL API directly with Pyrex. The problem is that a bunch of the calls it's making to OpenGL are returning NULL, while PyOpenGL calls to the same functions are returning expected values. At first we thought this was a problem with the way Soya was initializing GL (It uses SDL_Init to accomplish this), like it wasn't properly waiting for the OpenGL system to be initialized before it was making the calls. But to disprove that, I wrote this bit of Pyrex code: .. inside of soya's init() function... from OpenGL import GL print "PyOGL:", GL.glGetString(GL.GL_VENDOR) my_dump_info() cdef void my_dump_info(): cdef char* gl_vendor gl_vendor = <char*> glGetString(GL_VENDOR) if gl_vendor == NULL: print "OGL: Wargh glGetString returned NULL" check_gl_error() else: print "OGL:", PyString_FromString(gl_vendor) GL.glGetString was returning the expected "NVIDIA Corporation", but the direct call to the C glGetString is still returning NULL. The call to check_gl_error does a glCheckError, but that's returning GL_NO_ERROR. The same thing happens for stuff like glGetIntegerv. So the only conclusion I can draw here is that somehow the PyOpenGL interface is calling these functions in a way that's different from pyrex's direct calls to them. I tried reading through the source of PyOpenGL but the SWIG stuff wasn't elucidative. It just seems like the wrappers are pretty direct. Does PyOpenGL involve some kind of different context that it might use where a direct call to the C interface wouldn't? -- Twisted | Christopher Armstrong: International Man of Twistery Radix | Release Manager, Twisted Project ---------+ We don't do anything particularly fancy in the wrapper for glGetString, here's what it looks like for the generated OpenGL 1.1 wrapper from PyOpenGL 2.0.1.08: static PyObject *_wrap_glGetString(PyObject *self, PyObject *args) { PyObject *resultobj; GLenum arg1 ; GLubyte *result; PyObject * obj0 = 0 ; if(!PyArg_ParseTuple(args,(char *)"O:glGetString",&obj0)) return NULL; arg1 = (GLenum) PyInt_AsLong(obj0); if (PyErr_Occurred()) return NULL; { result = (GLubyte *)glGetString(arg1); if (GLErrOccurred()) return NULL; } { if (result) { resultobj= PyString_FromString(result); } else { Py_INCREF(resultobj = Py_None); } } return resultobj; } I can't see anything there which is markedly different from your approach. PyOpenGL is fairly minimal wrt what it sets up for contexts. AFAIK we aren't doing anything funky with initialising our reference to OpenGL, leaving the context-creation work to the GUI libraries as much as possible, (though I should note that that stuff would all have been written by someone else, so it could be we spend thousands of lines of code on it somewhere and I've just missed it during maintenance). We do some minimal stuff such as defining functions for retrieving the current context under OpenGL 1.0, but I doubt that's relevant. From the docs: () GL_INVALID_OPERATION is generated if glGetString is executed between the execution of glBegin and the corresponding execution of glEnd. I'd confirm that you are not calling this within those functions.? Seems your check_gl_error() isn't picking up the failure for some reason, but that doesn't solve the base problem. Good luck, Mike ________________________________________________ Mike C. Fletcher Designer, VR Plumber, Coder Mike C. Fletcher wrote: Thanks for the help, Mike! I don't know if you remember, but I think we met at PyCon this year; I was rambling about how horrible the scene is for open source 3d game engines with you and Tamer ;) > We don't do anything particularly fancy in the wrapper for glGetString, > here's what it looks like for the generated OpenGL 1.1 wrapper from > PyOpenGL 2.0.1.08: > [snip C code] Here's the C code that's being generated from pyrex from that snippet I showed in my post: char (*__pyx_v_gl_vendor); /* ... */ int __pyx_2; /* ... */ /* "/home/radix/Projects/soya/init.pyx":307 */ __pyx_v_gl_vendor = ((char (*))glGetString(GL_VENDOR)); /* "/home/radix/Projects/soya/init.pyx":308 */ __pyx_2 = (__pyx_v_gl_vendor == 0); if (__pyx_2) { /* etc */ So the (__pyx_v_gl_vendor == 0) is ALWAYS true, no matter WHAT I pass to glGetString (I tried random numbers like 9999), and glCheckError is NEVER returning an error code. Are all of those (implicit) casts reasonable? I also checked that the value of GL_VENDOR is expected; it's the same as PyOpenGL.GL.GL_VENDOR. > From the docs: > () > GL_INVALID_OPERATION is generated if glGetString is executed > between the > execution of glBegin and the corresponding execution of glEnd. > > I'd confirm that you are not calling this within those functions. I checked with Jiba, as well as the soya source code, and indications are that it's not being called between glBegin and glEnd. >? Something might be going wrong with my error checking code, since no matter what I pass to glGetString, I'm not getting any errors. I should mention that this code is working for other people; but I don't know if any of those other people are using the implementation of OpenGL provided by NVidia's proprietary Linux drivers (anyone?). Thanks for the help, I'll flail around at the problem some more. -- Twisted | Christopher Armstrong: International Man of Twistery Radix | Release Manager, Twisted Project ---------+
https://sourceforge.net/p/pyopengl/mailman/message/3123715/
CC-MAIN-2017-43
refinedweb
887
61.67
STUDENT LOAN? WHAT S IN A - Bruce Harper - 4 years ago - Views: Transcription 1 WHAT S IN A STUDENT LOAN? A student loan can help to finance your tertiary studies. You can get a loan to help with your fees, course-related costs and weekly living costs. But remember, only borrow what you need. This booklet tells you the things you need to know about getting a student loan and paying one back. 2 Things to think about 3 Seven steps to apply 4 What s in a student loan? 6 Other things you need to know about getting a student loan 9 Passing at least half of your previous study 11 Paying off your student loan 12 Definitions 14 When you get a student loan you ll be in contact with two government departments StudyLink and Inland Revenue. StudyLink s role is to tell you about the student loan and help you organise your loan application. Inland Revenue takes responsibility for your loan once you start to draw it down and it s transferred to them. Your loan details will normally be transferred to Inland Revenue daily. You can use myir, Inland Revenue s secure online service to view or update your information. Inland Revenue assists you to comply with your obligations as a student loan borrower, until the loan is paid back. For more information, go to Go to to find all the information you need and to understand how StudyLink and Inland Revenue work together to administer student loans. If you would like to know more about applying for a student loan, visit StudyLink s website If you would like to know more about repaying a student loan or interest charges, whether you re in New Zealand or overseas, visit Inland Revenue s website 3 THINGS TO THINK ABOUT One of the challenges for people considering tertiary education for the first time is having an understanding of all of the costs involved with study and the financial help available to help with those costs. The Thinking about study section of StudyLink s website includes case studies and videos of students talking about their experiences to help you understand what it will cost you to study. It also provides information on the different options you may have for financing this investment into your future. A student loan can help to finance your tertiary studies. But remember, it s a loan that you have to pay back. When you have a student loan you need to start paying it back once you earn over a certain amount it could take years to pay off. So, think carefully about how you use it: > > Do you really understand what s involved in paying back a loan? > > Is there a scholarship or grant you may be able to get? > > Have you thought about working part-time while you study? > > Are you entitled to a student allowance that can help with your living expenses and which you don t have to pay back? > > Is there extra help with costs (such as help with health costs or childcare) that you may be entitled to? REMEMBER What you borrow you ll have to pay back, so only borrow what you need. Tama borrowed $29,000 to complete his studies. He earns $35,000 a year and makes the minimum repayments each week. It will take him a little over 15 years to pay off his loan. Kim worked part-time while studying so she only borrowed $14,000. She also earns $35,000 a year and pays the minimum amount each week which means she ll pay off her loan in less than 7 years. You can do your own calculations at Inland Revenue s website DO THE NUMBERS Visit the Tools and Calculators section of our website to get an idea of what it will cost you to live, how much you may need to borrow and what it will take to pay back your loan. GET IT ALL DONE ONLINE WHAT S IN A STUDENT LOAN 3 4 SEVEN STEPS TO APPLY EXAMS FINISH Step 1 Apply at for your financial assistance Step 2 We will start processing your application Step 3 We will contact you If you re applying for the first time, you will need to collect these things before going online: your bank account details your IRD number current address name of your education provider(s) your course start and end dates We ll check the information you give us and get things underway. You don t need to do anything at this point unless you hear from us. You ll receive a letter. Make sure you read it, sign and return it (if required) and send us any documents we ask for. If you ve had a student loan before and you re 18 years or older, you can view and accept your student loan contract online using MyStudyLink. Step 4 Use MyStudyLink to track your application Step 5 We check your details with your education provider Use MyStudyLink to: check to see if your documents have been received check your student allowance and student loan status view and accept your student loan contract view and update your personal details apply for your course-related costs view details of your next payment and previous transactions view your mail. We ll contact your education provider(s) to confirm your study details. You need to make sure you are fully enrolled before this happens. Step 6 We will finish processing your application We ll send you a letter saying what you qualify for and when your payments will start. COURSE STARTS Step 7 Your payments can start The earliest your payments can start is in the second week of your course. This is because we make payments in arrears. REMEMBER: You need to allow enough time for all seven steps to be completed, so apply as soon as you 4 5 If you have applied for Jobseeker Support Student Hardship, we ll write to you soon to let you know what happens next. We may need to see proof of your: birth certificate or passport passport or citizenship papers (if you weren t born in New Zealand) marriage certificate or deed poll papers (if applicable) income partner s income parent(s) income if you re under 24 years accommodation and/or disability costs. Please note: MyStudyLink only displays payments and information relating to student loan, student allowance and scholarships. Sometimes this won t happen until the week before your course starts. This depends on the way each education provider works. You can use MyStudyLink to check if your enrolment details have been confirmed. If you ve signed up to view your mail at MyStudyLink we ll notify you (by text or ) when you can view your letter online. Use MyStudyLink to check your bank account number and address details prior to the start of your payments. can. If you don t apply on time or don t give us all the information we need, we can t pay you on time. GET IT ALL DONE ONLINE WHAT S IN A STUDENT LOAN 5 6 WHAT S IN A STUDENT LOAN? There are three parts you can borrow from a student loan: compulsory fees; living costs; and course-related costs. You choose the parts you want to use depending on what you need and what you qualify for. You ll need to apply for a new loan each time you start a new course. You ll also need to sign a new loan contract (StudyLink will send this to you) for every loan account. A loan account usually runs for 52 weeks. A $60 establishment fee is added to your loan every time you open a new loan account. Once you start using your student loan, StudyLink will transfer your loan information to Inland Revenue on a daily basis. You ll be able to check your loan balance through myir, Inland Revenue s online service you need to register for a myir account to do this. You ll also receive loan statements from Inland Revenue twice a year. Interest is charged on the money you have borrowed and the establishment fee. The current interest rate is set out in your loan Terms and Conditions but this may vary from time to time. This may be written off by Inland Revenue, who will check if you qualify for an interest-free loan. You ll also be charged an administration fee of $40 by Inland Revenue in every tax year you have an outstanding loan balance with them of $20 or more, unless you ve been charged an establishment fee with StudyLink in the same tax year. Think carefully about how much you need and only borrow that amount. COMPULSORY FEES This pays the compulsory fees for your course. It doesn t include special charges such as penalty fees for late enrolment, administration charges for paying by instalments or optional service fees such as student association fees. You can borrow for fees as long as your course is approved by the Tertiary Education Commission and full-time, or part-time and 32 weeks or longer. If you are studying part-time for less than 32 weeks, you need be studying at least 0.25 EFTS 1. Your education provider will tell StudyLink what your fees are. Your fees are then paid directly to them. The payment will be made two weeks before your course starts or seven days from the day your first Loan Entitlement Advice (LEA) letter is sent to you whichever is later. You can pay some of your fees yourself. If you ve already paid for all of your fees with your own money, or if someone else has paid your fees on your behalf, you won t be able to use this part of the loan. If you ve partially paid your fees, the amount you ve paid will be deducted from the amount you can borrow. For example, if your fees are $1,500 and you paid a $100 deposit before you applied for the student loan, your education provider can only ask for $1,400 for compulsory fees. If you withdraw from your course you are still responsible for repaying your loan. If your education provider refunds all or part of your fees, the refund will be paid directly to your loan account. 1 See page 14 for definitions. 6 7 LIVING COSTS This helps with your weekly living costs, especially if you don t qualify for the full amount of student allowance. Your course must be full-time 1 or approved limited full-time 1 status to qualify for living costs. Your education provider can tell you whether your course is full-time. In certain circumstances, you may still be able to get a loan if you are unable to study full-time and are approved limited fulltime status by StudyLink. To find out more go to StudyLink s website You can borrow up to $ a week for living costs while you re studying, or on a study break of three weeks or less, like mid-semester breaks. If your break is more than three weeks and you re unable to find work, you may be able to get the Jobseeker Support Student Hardship. For more details, visit StudyLink s website If you choose to receive the maximum amount of living costs each week you may choose to have this adjusted automatically through the Consumers Price Index 3 (CPI). You decide how much you want to borrow so if you don t need the full amount, you can ask for less. You ll only get paid living costs from when you apply for them they can t be backpaid if you apply late or if, at any stage, you increase the amount you want to get. Tama borrows $100 a week at the start of his course. A few weeks later, he decides to increase his living costs to $120 a week. This means he will only start receiving $120 from the week that he requested it. He won t be back-paid the balance of $20 for the weeks where he requested $100. The earliest your living costs payments can start is the second week of your course because you re paid one week in arrears. This means your payment for each week isn t made to you until the following week. The payments are direct credited to your bank account once a week. You can change the amount of living costs you borrow or view your payments at any time using MyStudyLink at WHAT HAPPENS IF YOU RE GETTING A STUDENT ALLOWANCE? In some situations you may be eligible for living costs and also be receiving a student allowance. In these cases the amount of living costs you can receive will be reduced by the net amount of any student allowance payments received. For example, where a student allowance rate is $ and the nominated living costs is $176.86, the amount of living costs payable will be $1.76. If your loan for living costs is approved before your student allowance, you ll get the full amount that you asked for. Once your student allowance is approved, any back-payments you get will automatically be used to repay the difference in the living costs you ve already borrowed for the same period. 1 See page 14 and 15 for definitions. 2 These rates are reviewed on 1 April each year. Go to for the most up-to-date rates 3 Go to for definitions GET IT ALL DONE ONLINE WHAT S IN A STUDENT LOAN 7 8 COURSE-RELATED COSTS This helps with costs related to your studies, for example: > > text books > > computer equipment > > childcare > > travel > > student association fees. You need to be studying full-time 1 or limited full-time 1 with StudyLink s approval to be eligible for course-related costs. You can borrow up to $1,000 per loan account (usually a 52 week period) for course-related costs. You don t have to claim all your costs at once, just specify when you apply how much you need. You can claim the balance at any time up until your course finishes. Please note that if you pay back your course-related costs within a year, you are not entitled to borrow the money again in that same year. You can apply for your course-related costs online using your MyStudyLink account. Using MyStudyLink you re able to: > > apply for course-related costs > > view the status of your course-related costs applications including payment date. Course-related costs are paid directly to your bank account the earliest StudyLink can do this is 14 days before your course starts. You don t have to apply for the full amount of course-related costs, living costs or course fees you decide which part to apply for and how much you need to borrow from each. IF YOU RE GETTING A WORK AND INCOME BENEFIT You can t get a loan for living costs if you ll be getting a Work and Income benefit while studying. Remember to talk to Work and Income about your study plans as it could affect your benefit entitlement. If you get the Training Incentive Allowance, the amount you can get for compulsory fees and course-related costs goes down by the amount of Training Incentive Allowance you get (not including any Training Incentive Allowance you get for childcare, transport and disability-related costs). 1 See page 14 and 15 for definitions 8 9 OTHER THINGS YOU NEED TO KNOW ABOUT GETTING A STUDENT LOAN IF YOU RE UNDER 18 YEARS OLD If you re under 18 years old, one of your parents 1 (or a guardian) must sign your loan contract to show they give their consent to you taking out all three parts of the loan. It doesn t mean they re guaranteeing your loan. You re still fully responsible for paying it back. Once your parent has signed the contract they can t withdraw their consent. If you re legally married or in a civil union, have a dependent child or eligible for an Independent Circumstances Allowance 2, you don t need a parent to sign your contract. IF YOU RE A YOUTH GUARANTEE RECIPIENT If you re a Youth Guarantee recipient (or enrolled in a trades academies or tertiary high school course funded as part of the Youth Guarantee programme) you will not be eligible for a student loan, as your course is fully funded by the Government. You may be eligible for a student allowance if you meet the eligibility criteria. From 1 January 2014 the Youth Guarantee Programme will be available to some year olds. These students may be eligible for the living costs and course-related costs components of the Student Loan if the meet the relevant criteria. Go to to find out more. LEVEL 1 AND 2 STUDY If you study a fees-free Level 1 or Level 2 qualification that starts on or after 1 January 2014, and you are under 18 when you start this course, you will not be eligible for any part of the student loan. You may be eligible for a student allowance if you meet the eligibility criteria. From 1 January 2014 fees-free Level 1 and Level 2 qualifications will be available to some students aged years old. These students may be eligible for the living costs and course-related costs components of the student loan if they meet the relevant criteria. Go to to find out more. 1 Go to for definitions 2 The Independent Circumstances Allowance is a student allowance for year olds with exceptional circumstances. For more details visit StudyLink s website GET IT ALL DONE ONLINE WHAT S IN A STUDENT LOAN 9 10 YOU RE AGED 55 OR OLDER If you are aged 55 or older on the date your course starts you are not entitled to living costs or course-related costs. You can continue to receive a fees only loan. HOW LONG YOU CAN GET A STUDENT LOAN FOR There is a life-time limit of 7 EFTS 1 for student loans. The life-time limit includes all study that you have had a student loan for, from 1 January Full-time students generally have study loads of between 0.8 EFTS 1 up to a maximum of 2 EFTS for a year. For a full-time student, 7 EFTS is equal to about 7 or 8 years of study. Parttime students use less EFTS each year. Once you have used any part of the student loan, such as living costs, fees, or course-related costs, the EFTS for that loan will count towards the 7 EFTS limit. You can use more than 7 EFTS in some situations, including: > > finishing a paper or course even if it takes you over the 7 EFTS limit > > up to an additional 1 EFTS to complete post graduate study 2 > > up to an additional 3 EFTS if you undertake doctoral study, less any additional EFTS already used to complete post graduate study. Generally, you will not be able to receive more than 10 EFTS of student loan entitlement when these extensions are included. If you withdraw from your course and get a full refund of your tuition fees, we won t include that course in your lifetime limit. 2 EFTS CAP There is a limit on borrowing of 2 EFTS per loan account. This means for any course that takes you over 2 EFTS, you won t qualify for a loan. For more information go to IF YOU ARE UNDERTAKING PILOT TRAINING There is a limit on borrowing for compulsory fees for pilot training. The maximum that may be borrowed is $35,000 per 1 EFTS. The amount that you can borrow is proportional to the EFTS of your programme. For example, if you are enrolled in 0.5 EFTS the maximum you will be able to borrow for fees is $17,500. TRANSITIONAL PROVISION FOR PILOT TRAINING Existing pilot training students who have been enrolled (but not necessarily accessed the Student Loan Scheme) on a pilot training course at any time between 1 January 2009 and 31 December 2012, will continue to be able to borrow all of their fees to complete that qualification, or until 31 December 2015, whichever occurs first. OVERDUE REPAYMENT OBLIGATION Students who have an overdue repayment obligation of $500 or more and at least some portion of that amount has been overdue for a year or more, won t be able to get a student loan. 1 See page 14 for definitions. 2 Postgraduate study includes Masters and Bachelor Honours study 10 11 ALTERNATIVE CONTACT PERSON To apply for a student loan you need to provide us with the details of an alternative contact person in New Zealand. Their details will be provided to Inland Revenue once your loan is approved. If you have an overdue student loan repayment and haven t been in touch with Inland Revenue, they may contact your alternative contact person. Your contact person does not have to repay any of your student loan, but must let Inland Revenue know how you can be contacted. Once we have provided these details to Inland Revenue you will need to notify them if this person and / or their details change. PASSING AT LEAST HALF THE EFTS OF YOUR PREVIOUS STUDY Students need to have passed at least half the total EFTS 1 of their course load in order to continue receiving a student loan. HOW THIS IS CALCULATED The assessment of performance includes courses of study ending in 2009 or later. The EFTS count starts once you have used one or more parts of the student loan (eg living costs, course fees, or course-related costs). Once you have completed 1.6 EFTS of study (this is about two years of full-time study), you will need to have passed at least half the total EFTS of your previous study in order to continue receiving a student loan. This performance is assessed using a rolling five year assessment period 1 and includes any study you use a student loan for and any you pay for in another way. If you lose access to the student loan you can regain it by passing at least half of your total EFTS without using a student loan or by providing evidence that there are sufficient reasons beyond your control for not passing at least half. You can also regain eligibility as part of the rolling five year assessment period. If you withdraw from your course and get a full refund of your tuition fees, we won t include that course when we assess whether you ve passed your previous study to get another student loan. For more information on passing at least half the EFTS of your previous study go to 1 See page 14 for definitions. GET IT ALL DONE ONLINE WHAT S IN A STUDENT LOAN 11 12 PAYING OFF YOUR STUDENT LOAN How and when you repay your student loan depends on the type(s) and amount of your income. The information below sets out generally when you ll need to make repayments. For more information go to IF YOU EARN SALARY OR WAGES OR GET A STUDENT ALLOWANCE You need to select a tax code on your Tax code declaration (IR 330) that includes the SL repayment code. Give this to your employer who will make student loan repayment deductions, based on the pay period repayment threshold, from your income on your behalf. If you have a secondary job, select a secondary tax code that includes the SL repayment code. If your main source of income is under the pay period repayment threshold, you may qualify for a student loan special deduction rate, this will reduce the amount of student loan deducted from your secondary income. If you re studying full-time and working you may qualify for a student loan repayment deduction exemption. For more information on the student loan special deduction rate and student loan repayment deduction exemption, go to If you re on the wrong tax code, Inland Revenue can ask your employer to change the code to the right one and you may have to make extra repayments to clear any missed payments. IF YOU HAVE INCOME OTHER THAN SALARY OR WAGES Repayments won t automatically be deducted from this income. Your repayment calculation depends on the sources of your non-salary/wage income. ADJUSTED NET INCOME REQUIRING A TAX RETURN If you have non-salary or wage income, e.g. self-employed, rental, business, you ll need to file an Individual income tax return (IR 3) at the end of the year and Inland Revenue will work out how much you need to repay towards your student loan. ADJUSTED NET INCOME REQUIRING A PERSONAL TAX SUMMARY If you don t have to file an IR 3 but have income from interest, dividends, casual agricultural work, election day worker or Maori authority distributions, you may need to request a personal tax summary (PTS) if you haven t received one. Inland Revenue will then work out how much you need to repay towards your student loan. For more information and to find out if you need to request a PTS, go to For more information on adjusted net income go to WHEN REPAYMENTS ARE DUE Any end-of-year student loan repayments will normally be due 7 February the following year. INTERIM PAYMENTS You may also have to make interim payments for the following year. IF YOU DON T MAKE REPAYMENTS If you don t make repayments when your income is over the threshold, you ll end up with a bill at 12 13 the end of the tax year. You may incur penalties on this and late payment interest may be charged. For more information on repaying your loan and interim payments go to MAKING EXTRA REPAYMENTS If you are able to make extra repayments towards your loan, you ll pay it off faster. For more information on repaying your loan go to or see Inland Revenue s guide Student loans making repayments (IR 224). GOING OVERSEAS You ll need to contact Inland Revenue before you leave New Zealand. Generally, you re not eligible for an interest-free student loan if you re away for six months (184 days) or more. But you may qualify for a repayment holiday of up to one year. You ll need to provide a willing alternative contact person based in New Zealand. If you re not on a repayment holiday you ll have an overseas-based obligation of up to $5,000 per year. Note: Even if you re on a repayment holiday you ll still be charged loan interest. For more information about going overseas and how it affects your student loan go to or see Inland Revenue s factsheet Student loans going overseas (IR 223). NOTIFY ME This is Inland Revenue s newsletter that provides useful information to help you manage your loan. To subscribe go to GET IT ALL DONE ONLINE WHAT S IN A STUDENT LOAN 13 14 DEFINITIONS EFTS EFTS stands for Equivalent Full-time Student. EFTS is a measure of the amount of study or the workload involved in undertaking a course. The Tertiary Education Commission assesses the content of each course and then allocates the appropriate EFTS value. A year of full-time study can vary between 0.8 EFTS up to a maximum of 2 EFTS. Part-time or part-year students use smaller amounts of EFTS for their study. If you re unsure of the EFTS value of your course, check with your education provider. Visit our website to use the student loan performance calculator and for links to a number of education provider websites that explain how they calculate EFTS for their courses. FIVE YEAR ROLLING PERFORMANCE ASSESSMENT This is an assessment where the student s performance over the five years of study prior to their current application is considered. This means that a student who applies for study in 2015 will have the results of their study from 2010 to 2014 checked. When they enrol again for study in 2016 the results from their 2010 study will be excluded and only the five years from 2011 to 2015 will be checked. The performance assessment only starts once a student has studied more than 1.6 EFTS in total since receiving a student loan for study ending in 2009 or later. FOR EXAMPLE: STUDY YEAR STUDENT LOAN RECEIVED EFTS STUDIED YEARS ASSESSED TOTAL EFTS TO CHECK 2009 Yes 1.0 None - assessment only starts once 1.6 EFTS have been studied 2010 Yes 1.0 None - assessment only starts once 1.6 EFTS have been studied 2011 Yes Yes Yes Yes Yes FULL-TIME The Tertiary Education Commission assigns an EFTS value to each course to determine if it meets the full-time status criteria for student loans. 14 15 FOR EXAMPLE: LENGTH OF INDIVIDUAL COURSE MINIMUM EFTS VALUE REQUIRED TO BE FULL-TIME 12 weeks weeks weeks 0.8 LIMITED FULL-TIME Limited full-time status is a provision for students who are applying for financial help from StudyLink but are unable to undertake a full-time course due to one of the following reasons: > > you re completing a recognised programme and to do this you need to study less than fulltime but more than half of a full-time course in this enrolment > > your education provider supports your application to study less than full-time for one of the following reasons: you have an illness that stops you studying full-time, or you can t study full-time where there is sufficient cause outside your control (this could include a disability which stops you studying full-time), or it s in your academic best interests 1 to study less than full-time. To apply for limited full-time status you need to complete a Limited Full-time application. You can download this from StudyLink s website LOAN COMPONENTS The study status of your course determines what components of the student loan you can access. FOR EXAMPLE: STUDY STATUS FEES COURSE- RELATED COSTS LIVING COSTS Full-time, full year (ie studying both 1st and 2nd semester) Yes Yes Yes Full-time, part year (ie studying either 1st or 2nd semester Yes Yes Yes Part-time, full year Yes No No Part-time, part year (Minimum EFTS value required is 0.25) Yes No No Note: if the length of your course does not meet the required EFTS value to be full-time, StudyLink will check to see if there is any part or segment of your course which may still qualify as full-time. For example: If your course is 32 weeks long (full-year), you need to be studying 0.8 EFTS to be full time. If you are studying 0.4 EFTS in the first semester and 0.3 in the second semester, StudyLink will determine you are full-time for the first semester, but not for the full year. 1. Academic best interests means that the student would be likely to fail, for academic reasons, if he or she undertook a full-time course but would be likely to pass more than half of the course if he or she studied part-time. GET IT ALL DONE ONLINE WHAT S IN A STUDENT LOAN 15 16 MYSTUDYLINK GET IT ALL DONE ONLINE > > check out what financial assistance you may be able to get > > apply for your student finances > > check your student allowance and student loan application status > > check to see if your documents have been received > > view and update your personal details > > change the amount of your living cost payments and apply for your course-related costs > > view details of your next payment and previous transactions. > > view your mail > > view and accept your student loan contract. studylink.govt.nz INLAND REVENUE ONLINE > > check all your loan details > > see interest charged and written off > > send secure mail to Inland Revenue. Go to and register under myir Secure online services. You ll need your IRD number to register and can activate your account by calling Inland Revenue also has a student loan newsletter called Notify Me that provides useful information to help you manage your loan. To subscribe go to all you need is your address. HOW TO CONTACT US STUDYLINK Website: Phone: Fax: INLAND REVENUE Website: Phone: self-service: To find all the information you need and to understand how StudyLink and Inland Revenue work together to administer student loans, go to SL LOAN B (APRIL 2015) Student loans making repayments 1 IR 224 April 2014 Student loans making repayments 2 STUDENT LOANS making repayments Go to our website for information, and to use our services and tools. myir secure online WAYS 2 FUND YOUR STUDY WAYS 2 FUND YOUR STUDY Student Allowance 3 Student Loan 5 Work and Study 7 Scholarships 8 Extra costs while studying 9 Next steps 10 How to apply 10 Seven steps to apply 11 Useful links 12 While you re Budget 2012 Changes to Student Loans and Allowances Budget 2012 Changes to Student Loans and Allowances On 24 May the Government announced changes to Student Loans and Student Allowances as part of Budget 2012. The changes are: The Student Allowance will Student Allowance additional application form Student Allowance additional application form Complete this form if you have already received a Student Allowance this year and you are applying again in the same year because your course has finished LIMITED FULL-TIME APPLICATION FORM STUDENT ALLOWANCE/STUDENT LOAN/SCHOLARSHIP LIMITED FULL-TIME APPLICATION FORM COMPLETE THIS FORM IF YOU WOULD LIKE TO APPLY FOR LIMITED FULL-TIME STATUS BECAUSE YOU ARE UNABLE TO STUDY FULL-TIME. Usually Change of Circumstances application form Change of Circumstances application form Complete this form if your circumstances have changed in any way. The fastest and easiest way to tell us about changes is using MyStudyLink. Using a MyStudyLink Student Allowance/Student Loan/Scholarship Overseas study application form Student Allowance/Student Loan/Scholarship Overseas study application form Complete this form if you want to apply for a Student Allowance/Student Loan/Scholarship for overseas study. You can also apply Student Allowance Partner s application form Student Allowance Partner s application form This form is to be completed by the partner 1 of the person applying for the Student Allowance. The student will also need to complete a Student Allowance application. Approved issuer levy (AIL) IR 395 October 2014 Approved issuer levy (AIL) A A guide for payers 1 Introduction If you (a borrower) pay interest to a non-resident lender (the person who you ve borrowed from), and want Training Incentive Allowance Application Training Incentive Allowance Application CLIENT NUMBER Please read this before you start Training Incentive Allowance helps people with a number of employment-related training costs. This may include course IR 261 August 2014. Direct selling. Tax facts for people who distribute for direct selling organisations IR 261 August 2014 Direct selling Tax facts for people who distribute for direct selling organisations 3 Contents Introduction 4 4 How to get our forms and guides 4 Part 24+ Advanced Learning Loan. Loans Factsheets 24+ Advanced Learning Loan Loans Factsheets 24+ Advanced Learning Loan Aged 24 or over and thinking about further education? If you re starting a course on or after 1 August 2013 you may qualify for a UNPAID PRACTICAL WORK / MASTERS / DOCTORATE EXTENSION APPLICATION FORM STUDENT ALLOWANCE / STUDENT LOAN UNPAID PRACTICAL WORK / MASTERS / DOCTORATE EXTENSION APPLICATION FORM You can no longer apply for masters or doctorate extensions for Student Allowance payments for study PAYMENT PERIOD EXTENSION APPLICATION FORM STUDENT ALLOWANCE / STUDENT LOAN PAYMENT PERIOD EXTENSION APPLICATION FORM COMPLETE THIS FORM IF YOU NEED AN EXTENSION BEYOND YOUR CURRENT STUDY END DATE FOR: YOUR STUDENT LOAN PAYMENTS IF YOU ARE DOING Student Loan Scheme Amendment Bill Student Loan Scheme Amendment Bill Commentary on the Bill Hon Peter Dunne Minister of Revenue First published in September 2011 by the Policy Advice Division of Inland Revenue, PO Box 2198, Wellington Application for Tertiary Education Withdrawal For assistance with completing this form please go to, or contact us on 0800 WHAI RAWA (0800 942 472). Please note processing of request will normally take up to five working days from Frequently asked questions about Student Finance from September 2012 Frequently asked questions about Student Finance from September 2012 Who is this document for? Unless otherwise specified, details of the student finance package described in this document relate only Student Allowance Transfer Grant application Student Allowance Transfer Grant application The Student Allowance Transfer Grant is a one-off payment to help if you have a partner 1 and/ or child(ren) who are dependent on you and you are in hardship REPAYING YOUR STUDENT LOAN. Student Loans Company REPAYING YOUR STUDENT LOAN Student Loans Company CONTENTS Page Introduction 3 Income Contingent Loans What is an Income Contingent Loan? 4 What happens during repayment and who do I deal with? 5 Income Frequently Asked Questions (FAQs) Learners 24+ Advanced Learning Loan Frequently Asked Questions (FAQs) Learners 24+ Advanced Learning Loans are being introduced by the Government for learners aged 24 and over studying at level 3 and above Student Allowance One Parent application form Student Allowance One Parent application form Complete this form if you are the parent 1 of a student who needs to have only one parent s income tested for the Student Allowance. The student will also 24+ Advanced Learning Loans are being introduced by the Government for learners aged 24 and over studying at level 3 and above from 1 August 2013. 24+ Advanced Learning Loans Frequently Asked Questions (FAQs) 24+ Advanced Learning Loans are being introduced by the Government for learners aged 24 and over studying at level 3 and above from 1 August Frequently Asked Questions (FAQs) Learners 24+ Advanced Learning Loan Frequently Asked Questions (FAQs) Learners 24+ Advanced Learning Loans are being introduced by the UK government for learners aged 24 and over studying at Level 3, Level 4. New Zealand Superannuation application New Zealand Superannuation application Why not fill in your application online? Our interactive form will guide you through only the questions you need to answer. When you have finished, you will need? Deferred Loan Application Deferred Loan Application MINISTRY OF HEALTH MANATU HAUO RA Deferred Loans When a residential care loan becomes repayable, a deferred loan can be offered to people in certain circumstances. Residential Overseas pensions and annuity schemes IR 257 December 2014 Overseas pensions and annuity schemes This guide contains information on the taxation of foreign superannuation lump sums and overseas pensions. For information about overseas social First-time employer s guide First-time employer s guide Information you ll need if you re thinking of employing workers for the first time IR 333 February 2014 Contents About this guide 3 Why paying tax matters 3 Are you an employer? Frequently Asked Questions (FAQs) Learners 24+ Advanced Learning Loan Frequently Asked Questions (FAQs) Learners 24+ Advanced Learning Loans have been introduced by the UK government for learners aged 24 and over studying at Level 3, Level 4 or Get a degree-level qualification without breaking the bank. There are many benefits to entering higher education but you should also be fully aware of the costs. It is important to bear in mind that as a graduate, it is estimated you will earn, on average, 150,000 New Zealand tax residence IR 292 August 2014 New Zealand tax residence Who is a New Zealand resident for tax purposes? 1 Introduction The tax residence rules determine whether a person is a New Zealand tax resident. Community Services Card. Helping you with the costs of health care Community Services Card Helping you with the costs of health care Contents You pay less for some health services You pay less for some health services 3 You might pay less for prescriptions 4 Can I get Issue No 49 March 2014 IR 381 Business tax update Inland Revenue s tax news for businesses Issue No 49 March 2014 IR 381 Making the right student loan deductions Employees who have a student loan need to have their repayments deducted IR100 April 2016. Helping you to understand child support IR100 April 2016 Helping you to understand child support 2 HELPING YOU TO UNDERSTAND CHILD SUPPORT Child support information If you'd like more information about child support, including our forms and Student finance New full-time students 2014/15 Student finance New full-time students 2014/15 sound advice on STUDENT FINANCE SFW/FSHE/V14 What is Student Finance Wales? Student Finance Wales is a service provided by the Frequently asked questions about Student Finance from September 2012 Frequently asked questions about Student Finance from September 2012 Who is this document for? Unless otherwise specified, details of the student finance package described in this document relate only A GUIDE TO FINANCIAL SUPPORT FOR NEW FULL-TIME STUDENTS IN HIGHER EDUCATION IN 2014/15. A GUIDE TO FINANCIAL SUPPORT FOR NEW FULL-TIME STUDENTS IN HIGHER EDUCATION IN 2014/15 contents. WhAt is student finance england? Student Finance England is a service provided Provisional tax. Paying your income tax in instalments. IR 289 May 2015. Classified Inland Revenue Public Provisional tax Paying your income tax in instalments IR 289 May 2015 Classified Inland Revenue Public Contents About this guide 3 Why paying tax matters 4 How provisional tax works 5 Your options for VET FEE HELP STUDENT FREQUENTLY ASKED QUESTIONS VET FEE HELP STUDENT FREQUENTLY ASKED QUESTIONS VET FEE-HELP ASSISTANCE Why will the Government loan me this money? VET FEE-HELP is an extension of the Higher Education Loan Program (HELP). The program Student finance loans for part-time study 2015/16 sound advice on STUDENT FINANCE Student finance loans for part-time study 2015/16 SFW/FSNPT/V15 WHO SHOULD READ THIS GUIDE? This guide is for new and continuing part-time FREQUENTLY ASKED QUESTIONS myra my RETIREMENT ACCOUNT FREQUENTLY ASKED QUESTIONS 2 6 11 14 17 19 ABOUT myra OPENING AN ACCOUNT MANAGING YOUR ACCOUNT CONTRIBUTIONS AND WITHDRAWALS TRANSFERS AND ROLLOVERS BEYOND myra JANUARY 2015 Childcare and OSCAR Subsidy Application Childcare and OSCAR Subsidy Application If you need help with this form call us on % 0800 559 009. Who can get this subsidy If you need help filling in this form, please ask at your nearest Work and Income 24+ Advanced Learning Loans EALING, HAMMERSMITH & WEST LONDON COLLEGE A guide to 24+ Advanced Learning Loans 2 3 Ealing, Hammersmith & West London College have created this guide to help answer questions that you may have about the Buying a home with KiwiSaver Buying a home with KiwiSaver Page 1 of 20 Table of Contents... 3 Buying your first home with KiwiSaver as a first home buyer... 3 (as a previous property owner)... 6 The role of Housing New Zealand... Student Loans and Allowances: 2010 Student Loans and Allowances: 2010 Embargoed until 10:45am 02 December 2011 Key facts In 2010: 212,469 students borrowed from the student loan scheme, an increase of 13,746 students (6.9 percent) compared Hot Topic #4. Student Loans. Sample Hot Topic #4 Student Loans Contents Student Loans... 3 Cost of taking out a student loan... 4 Activity... 4 Key Facts... 5 Paying the loan back... 5 Activity... 6 This Young Enterprise Trust teaching resource IRD number application - non-resident/offshore individual IR742 February 2016 IRD number application - non-resident/offshore individual For full details go to (search keyword: offshore). Only use this form if you're a non-resident or offshore Undergraduate and PGCE courses starting in the academic year 2014/15. Student Financial Undergraduate and PGCE courses starting in the academic year 2014/15 Student Financial Support contents Undergraduate fees 2014/15... 4 Student loans and grants... 5 Tuition fee loan... 5 Maintenance loan Special Needs Grant International Custody Dispute Payment Special Needs Grant International Custody Dispute Payment CLIENT NUMBER If you need help with this form call us on % 0800 559 009. Who can get this payment If you need help filling in this form, please Buying a home with KiwiSaver Buying a home with KiwiSaver 1 April 2015 Page 1 of 22 Table of Contents... 3 Buying your first home with KiwiSaver as a first home buyer... 3 (as a previous property owner)... 7 The role of Housing New Student Loans - A guide to terms and conditions 2015/16. Student Loans - A guide to terms and conditions 2015/16 Contents 1 What s this guide about? 2 2 Your loan contract 2 3 Who does what? 3 4 Your responsibilities 4 5 Which Repayment Trusts and estates income tax rules IR 288 June 2012 Trusts and estates income tax rules Types of trusts and how they re taxed 2 TRUSTS AND ESTATES Go to our website for information, services and tools. Secure online services ARE YOU AGED 24+? 24+ A Guide to 24+ Advanced Learning Loans 24+ ARE YOU AGED 24+? A Guide to 24+ Advanced Learning Loans Fees guidance for students aged 24 years old or older and applying to study Level 3 4 qualifications in England. What you need Issue No 165 February 2014 IR 787 AGENTS ANSWERS Inland Revenue s tax agents update End-of-year filing performance tips Issue No 165 February 2014 IR 787 Welcome to the first issue of Agents Answers for 2014. We hope you had a great break. PROFESSIONAL LEGAL STUDIES COURSE APPLICATION FORM PROFESSIONAL LEGAL STUDIES COURSE Please read the instructions carefully before you complete this application The purpose of this form is to obtain from you the information we need to Student loans A guide to terms and conditions 2014/15 Student loans A guide to terms and conditions 2014/15 sound advice on STUDENT FINANCE For more information and to apply visit SFW/SLTC/V14/D CONTENTS SECTION 1 What this guide Student Finance Package Repayments Application Information SFE Resources Questions & Comments Tuition Fee Loan Maintenance (Living Cost) Support Scholarships & Bursaries Additional Support! Figures used Payroll giving IR 617 1 Payroll giving IR 617 1 October 2009 Payroll giving 2 Contents What is payroll giving? 3 The employer s role 4 The employee s role 6 What is a donee organisation? 7 Calculating tax credits BUSINESS TAX UPDATE REMINDERS. Getting your tax invoices right. Swinton Appliances. Inland Revenue s tax news for businesses. BUSINESS TAX UPDATE Inland Revenue s tax news for businesses Getting your tax invoices right Issue No 53 July 2014 IR 381 Businesses that are GST registered need to provide a correct tax invoice within Student Loan Scheme Amendment Bill (No 2) Student Loan Scheme Amendment Bill (No 2) Officials Report to the Finance and Expenditure Committee on s on the Bill November 2012 Prepared by the Policy Advice Division of Inland Revenue and the Treasury Student Loan Scheme Amendment Bill (No 2) Student Loan Scheme Amendment Bill (No 2) Government Bill Explanatory note General policy statement The Bill is designed to remove barriers to student loan borrowers living overseas returning to New Zealand A Guide to Student Finance for New Students in 2015-16 A Guide to Student Finance for New Students in 2015-16 Department of Education and Children Rheynn Ynsee as Paitchyn May 2015 DEC/1.0 CONTENTS 1 Introduction 4 Page 2 Eligibility for an award 4 2.1 Residency Factsheet for students who are over 24 years Factsheet for students who are over 24 years The government no longer subsidises level 3 courses funding for students over 24 years of age. This means that fees will be more expensive than in previous Extra help - Dependants' Grants Extra help - Dependants' Grants 2015/16 What is Student Finance England? Contents Student Finance England is a service provided by the Student Loans Company. We provide financial How to Guide (Getting your Deferment Application Form right) How to Guide (Getting your Deferment Application Form right) Use these notes to help you complete your student loan Deferment Application Form If you need any help, please go to 2015/16. When to apply. Evidence. Payment. Frequently asked questions by parents. SF_England /SFEFILM. SFEngland 2015/16 When to apply Evidence Payment Frequently asked questions by parents SF_England /SFEFILM SFEngland September 2014 When should my child apply and how long will it take to process their application? PhD, Master s Thesis, and Dissertation Students Financial and Enrolment Information PhD, Master s Thesis, and Dissertation Students 2015 Financial and Enrolment information PhD, Master s Thesis, and Dissertation Students This guide IR 361 April 2011. Tax and your property transactions IR 361 April 2011 Tax and your property transactions 2 Tax and your property transactions Go to our website for information, services and tools. Secure online services login to check your Student Loan Scheme Amendment Bill (No 2) A briefing note prepared for the Finance and Expenditure Committee Student Loan Scheme Amendment Bill (No 2) A briefing note prepared for the Finance and Expenditure Committee Policy Advice Division, Inland Revenue 13 November 2012 Student Loan Scheme Amendment Bill Income Protection Options Policy summary Income Protection Options Policy summary This summary tells you the key things you need to know about our Income Protection Options policy. It doesn t give you the full terms of the policy. You can find Key Features. of the Suffolk Life SIPP (Deed Poll Scheme) Key Features of the Suffolk Life SIPP (Deed Poll Scheme) This document is part of a set, all of which should be read together. Key Features Your Personal Illustration Schedule of Fees Schedule of Allowable Student finance - new full-time students 2015/16. Student finance - new full-time students 2015/16 What is Student Finance England? Contents Student Finance England is a service provided by the Student Loans Company. We provide Federal tax benefits for higher education Federal tax benefits for higher education Tax year 2010 You know that higher education is an investment in time, energy and money. Thankfully, there are ways to offset your education expenses through tax sound advice on STUDENT FINANCE Student finance new full-time students 2016/17 SFW/FSHE/V16 sound advice on STUDENT FINANCE Student finance new full-time students 2016/17 SFW/FSHE/V16 What is Student Finance Wales? Student Finance Wales is a service provided by the Policy Document Fee-Help and Its Administration at ACPE Policy Document Fee-Help and Its Administration at ACPE Contents 1. Preamble... 2 2. Nature of the FEE-HELP Scheme... 2 3. FEE-HELP Entitled students... 2 4. FEE-HELP Eligible Units of Study... 3 5. General Tertiary Education Report: Budget 2014 Suspending the student loan repayment threshold This document has been released under the Official Information Act 1982 (the Act). Some information has been withheld under section 9(2)(a) - to protect the privacy of natural persons, including deceased Employer s guide Information to help you with your responsibilities as an employer IR 335 April 2015 Employer s guide Information to help you with your responsibilities as an employer 1 Introduction If you have anyone working for you, it s your responsibility to deduct Application for Home Help Payments Application for Home Help Payments If you need help with this form call us on % 0800 559 009. Who can get this payment If you need help filling in this form, please ask at your nearest Work and Income STUDENT FINANCIAL SUPPORT Undergraduate and PGCE courses starting in the academic year 2015/16 If you require this publication in an alternative format, please contact the Corporate Communications Unit. Email: corporatecommunications@canterbury.ac.uk classic retirement benefits A brief guide to the benefits available classic retirement benefits A brief guide to the benefits available Who should read this booklet? This booklet provides a guide to pension benefits for anyone leaving and taking their classic pension.
https://docplayer.net/3485406-Student-loan-what-s-in-a.html
CC-MAIN-2019-43
refinedweb
8,678
54.76
Security Model - Security Model - Securing Objects - SAM Database - The Flow of a User Logon - Summary Security Model In This Chapter Securing Objects Components The Flow of a User Logon In this chapter, I focus on the Windows 2000 internal components that provide security. First, I look at the schema Windows 2000 uses to protect objects, and then I look at the mechanisms that enforce those protections. It's important to keep in mind where the objects I discuss actually livein user memory or (protected) kernel memory. The answer, not surprisingly, is that these objects are located in a little bit of both spaces. Like the process, thread, and job objects you saw in Chapter 2, "Processes and Threads," both the kernel and the user-mode portion of the Win32 subsystem keep security information. For the most part, the information kept in kernel space is the "real thing," and the user-mode structures are just used to pass information back and forth to the kernel. This is clear when you consider that security descriptors are attached to kernel objects, which live in kernel space, and tokens are kernel objects themselves. All of this will become clear in the following pages. Securing Objects At the heart of the Windows 2000 security model are Security Descriptors (SDs) and Access Control Lists (ACLs). Every securable object (files, devices, pipes, processes, threads, timers, printers, you name it) has a security descriptor attached to it. A security descriptor contains the following pieces of information: The SID of the object owner The SID of the primary owning group Discretionary Access Control List (DACL) System Access Control List (SACL) WINNT.H, which is included in the Windows SDK, contains the SECURITY_DESCRIPTOR structure, as well as a brief explanation of the fields: typedef struct _SECURITY_DESCRIPTOR { BYTE Revision; BYTE Sbz1; SECURITY_DESCRIPTOR_CONTROL Control; PSID Owner; PSID Group; PACL Sacl; PACL Dacl; } SECURITY_DESCRIPTOR, *PISECURITY_DESCRIPTOR; // Where: // // Revision - Contains the revision level of the security // descriptor. This allows this structure to be passed between // systems or stored on disk even though it is expected to // change in the future. // // Control - A set of flags that qualify the meaning of the // security descriptor or individual fields of the security // descriptor. // // Owner - A pointer to an SID representing an object's owner. // If this field is null, then no owner SID is present in the // security descriptor. If the security descriptor is in // self-relative form, then this field contains an offset to // the SID, rather than a pointer. // // Group - A pointer to an SID representing an object's primary // group. If this field is null, then no primary group SID is // present in the security descriptor. If the security descriptor // is in self-relative form, then this field contains an offset to // the SID, rather than a pointer. // // Sacl - A pointer to a system ACL. This field value is only // valid if the DaclPresent control flag is set. If the // SaclPresent flag is set and this field is null, then a null // ACL is specified. If the security descriptor is in // self-relative form, then this field contains an offset to // the ACL, rather than a pointer. // // Dacl - A pointer to a discretionary ACL. This field value is // only valid if the DaclPresent control flag is set. If the // DaclPresent flag is set and this field is null, then a null // ACL (unconditionally granting access) is specified. If the // security descriptor is in self-relative form, then this field // contains an offset to the ACL, rather than a pointer. // The only thing I can add to this discussion is that the group SID is only used by POSIX applications. NOTE The POSIX standard requires that an object can be owned by a group. Folks familiar with UNIX will recognize this. Windows 2000, being POSIX-compliant, supports this notion, but it is not used in Win32 applications nor by the system. Now that you're wondering what the heck an SID is, take a look. SIDs Ever feel as if you were just a number? Well, in Windows 2000, that's exactly what you are. Internally, Windows 2000 represents each account, group, machine, and domain with a security identifier, or SID. The SID is independent of the account name. You'll recall that if you delete an account, Windows displays the warning message shown in Figure 3.1. Figure 3.1 Windows 2000 alludes to the existence of SIDs when you attempt to delete an account. Here, I have assembled some pieces of WINNT.H so you can see how SIDs are defined: // Pictorially the structure of an SID is as follows: // // 1 1 1 1 1 1 // 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 // +---------------------------------------------------------------+ // | SubAuthorityCount |Reserved1 (SBZ)| Revision | // +---------------------------------------------------------------+ // | IdentifierAuthority[0] | // +---------------------------------------------------------------+ // | IdentifierAuthority[1] | // +---------------------------------------------------------------+ // | IdentifierAuthority[2] | // +---------------------------------------------------------------+ // | | // +- - - - - - - - SubAuthority[] - - - - - - - - -+ // | | // +---------------------------------------------------------------+ typedef struct _SID_IDENTIFIER_AUTHORITY { BYTE Value[6]; } SID_IDENTIFIER_AUTHORITY, *PSID_IDENTIFIER_AUTHORITY; typedef struct _SID { BYTE Revision; BYTE SubAuthorityCount; SID_IDENTIFIER_AUTHORITY IdentifierAuthority; #ifdef MIDL_PASS [size_is(SubAuthorityCount)] DWORD SubAuthority[*]; #else // MIDL_PASS DWORD SubAuthority[ANYSIZE_ARRAY]; #endif // MIDL_PASS } SID, *PISID; #endif In English, an SID is a variable-length numeric structure that contains the following fields (right-to-left in the pictorial): 8-bit SID revision level 8-bit count of the number of subauthorities contained within 48 bits containing up to three identifier authority SIDs Any number of subauthority SIDs and relative identifiers (RIDs) Let's take apart an actual SID to make this more concrete. In regedt32, I look at an SID on my local machine in the HKEY_USERS subtree (see Figure 3.2). The name of the subtree is the user's SID. Figure 3.2 The HKEY_USERS Registry subtree contains a key for each local user. You can see the textual representation of my SID is S-1-5-21-1960408961-1708537768-1060284298-1000 I say textual representation because the S prefix and hyphens separating the fields are added to make SIDs more readable. The true internal representation is just a bunch of numbers all run together. The SID has a revision level of 1. You can see from WINNT.H that Windows 2000's current revision level is 1: #define SID_REVISION (1) // Current revision level What follows are three identifier authorities: namely 5 and 21. There are several built-in authorities, as shown in Table 3.1. Table 3.1 Built-in Identifier Authorities Back to the SID, 5 means that this SID was assigned by the Windows 2000 security authority (as most are), and 21 means simply that this is not a built-in SID. Windows 2000 calls these non-unique, which means that a relative identifier (RID) is required to make the SID unique. You'll see this in just a second. The three subauthority values identify the local machine and the domain (if any) the machine belongs to. In the example, all accounts on my machine share the same three subauthorities. At setup, Windows 2000 creates a random-base SID based on a number of items, including the current date and time and Ethernet address (if available). Windows goes to great pains to make sure this is a globally unique identifier. It is very unlikely that any machine anywhere in the world has the same base SID. The last chunk, namely the 1000, is the RID. This number is tacked on to the end of the machine SID to create unique SIDs for users and groups. Windows starts numbering at 1,000, so this account is the first account created on this machine. The next account or group created would have the RID 1001, and so on. Voila, we have a unique identifier. There are a number of predefined (or built-in) SIDs that serve special roles within Windows. Again, WINNT.H serves as the reference here: ///////////////////////////////////////////////////////////////////////////// // // // Universal well-known SIDs // // // // Null SID S-1-0-0 // // World S-1-1-0 // // Local S-1-2-0 // // Creator Owner ID S-1-3-0 // // Creator Group ID S-1-3-1 // // Creator Owner Server ID S-1-3-2 // // Creator Group Server ID S-1-3-3 // // // // (Non-unique IDs) S-1-4 // // // ///////////////////////////////////////////////////////////////////////////// ///////////////////////////////////////////////////////////////////////////// // // // NT well-known SIDs // // // // NT Authority S-1-5 // // Dialup S-1-5-1 // // // // Network S-1-5-2 // // Batch S-1-5-3 // // Interactive S-1-5-4 // // Service S-1-5-6 // // AnonymousLogon S-1-5-7 (aka null logon session) // // Proxy S-1-5-8 // // ServerLogon S-1-5-9 (aka domain controller account) // // Self S-1-5-10 (self RID) // // Authenticated User S-1-5-11 (Authenticated user somewhere) // // Restricted Code S-1-5-12 (Running restricted code) // // // // (Logon IDs) S-1-5-5-X-Y // // // // (NT non-unique IDs) S-1-5-0x15-... // // // // (Built-in domain) s-1-5-0x20 // // // ///////////////////////////////////////////////////////////////////////////// You'll notice the NT non-unique IDs) S-1-5-0x15-... reference. That matches the SID I discuss as hexadecimal 15 = 21. So an S-1-5-21-... SID is an NT non-unique SID that receives a RID to make it unique. This schema describes all user and group accounts created by the administrator. Similarly, there are built-in RIDs that you must recognize. They are usually referred to as "well-known" SIDs or RIDs because the SID can easily be determined. The nice source commenting in WINNT.H you saw earlier breaks down at this part, so I created some tables to illustrate. Table 3.2 identifies the well-known user RIDs. Table 3.2 Well-Known User RIDs That means that on my machine, the SID of the local administrator account is S-1-5-21-1960408961-1708537768-1060284298-500 The SIDs of the guest account and Kerberos TGT are also well known. Table 3.3 identifies the well-known group RIDs. Table 3.3 Well-Known Group RIDs You see the same idea here. The SID of my local administrators group is S-1-5-21-1960408961-1708537768-1060284298-512 Table 3.4 identifies the well-known alias RIDs. Table 3.4 Well-Known Alias RIDs Here you see the aliases that any NT administrator is familiar with. These too are well-known SIDs. Table 3.5 outlines miscellaneous reserved RIDs. Table 3.5 Miscellaneous reserved RIDs NOTE Are these well-known SIDs a security risk? Well, the answer is "It depends." If your machine has NetBIOS open to the outside world, then yes, it is a risk. It is trivial for a cracker to enumerate the account names and SIDs and thus find the real administrator account name, even if it was renamed (which some security checklists recommend). However, if NetBIOS is blocked to the outside world (as it should be), the SIDs aren't accessible and there is no risk. Now that you're comfortable with SIDs, take a look at the other cornerstone of Windows 2000 security: Access Control Lists. Access Control Lists (ACLs) Access Control Lists (ACLs) contain the actual permissions assigned to an object, as well as audit instructions for the kernel. An ACL consists of a header followed by zero or more Access Control Entries (ACEs). An ACL with zero ACEs is called a null ACL. There are two types of ACLs: discretionary ACLs (DACLs) and system ACLs (SACLs). DACLs define access permissions to the object they protect, whereas SACLs contain audit instructions for the system. Again, turn to WINNT.H for the definition of an ACL: // Define an ACL and the ACE format. The structure of an ACL header // followed by one or more ACEs. Pictorially the structure of an ACLclSize | Sbz1 | AclRevision | // +-------------------------------+---------------+---------------+ // | Sbz2 | AceCount | // +-------------------------------+-------------------------------+ // // The current AclRevision is defined to be ACL_REVISION. // // AclSize is the size, in bytes, allocated for the ACL. This includes // the ACL header, ACES, and remaining free space in the buffer. // // AceCount is the number of ACES in the ACL. typedef struct _ACL { BYTE AclRevision; BYTE Sbz1; WORD AclSize; WORD AceCount; WORD Sbz2; } ACL; typedef ACL *PACL; Similarly, the definition of an ACE follows: // The structure of an ACE is a common ace header followed by ace type // specific data. Pictorially the structure of the common aceceSize | AceFlags | AceType | // +---------------+-------+-------+---------------+---------------+ // // AceType denotes the type of the ace; there are some predefined ace // types // // AceSize is the size, in bytes, of ace. // // AceFlags are the Ace flags for audit and inheritance, defined shortly. typedef struct _ACE_HEADER { BYTE AceType; BYTE AceFlags; WORD AceSize; } ACE_HEADER; typedef ACE_HEADER *PACE_HEADER; Notice that in the ACL header, the size field is defined to contain the size of the ACL header plus ACEs plus any extra space. A whole ACL looks like Figure 3.3. Figure 3.3 A complete ACL. Currently, there are four defined ACE structures, outlined in Table 3.6. The type of ACE determines whether the ACL is system (SACL) or discretionary (DACL). Table 3.6 Currently Defined ACE Types Even though this mysterious SYSTEM_ALARM_ACE type is defined, it is not yet supported in Windows 2000. Let's look at the ACCESS_ALLOWED_ACE type: typedef struct _ACCESS_ALLOWED_ACE { ACE_HEADER Header; ACCESS_MASK Mask; DWORD SidStart; } ACCESS_ALLOWED_ACE You see here that an ACCESS_ALLOWED_ACE contains the generic ACE header, which you just saw, as well as an ACCESS_MASK and an SID. You can guess the story here: The existence of an ACCESS_ALLOWED_ACE grants the access specified by the ACCESS_MASK to the user or group identified by the SID. Similarly, the existence of a SYSTEM_AUDIT_ACE causes the system to log an event to the security audit log when the user or group specified by the SID requests the access specified in the ACCESS_MASK. Pretty straightforward. You can imagine that the ACCESS_DENIED_ACE and the eventual ACCESS_ALARM_ACE ACEs do pretty much the same thing. For a discussion of access masks, turn to WINNT.H: // Define the access mask as a longword sized structure divided // +---------------+---------------+-------------------------------+ // |G|G|G|G|Res'd|A| StandardRights| SpecificRights | // |R|W|E|A| |S| | | // +-+-------------+---------------+-------------------------------+ // // typedef struct _ACCESS_MASK { // WORD SpecificRights; // BYTE StandardRights; // BYTE AccessSystemAcl : 1; // BYTE Reserved : 3; // BYTE GenericAll : 1; // BYTE GenericExecute : 1; // BYTE GenericWrite : 1; // BYTE GenericRead : 1; // } ACCESS_MASK; // typedef ACCESS_MASK *PACCESS_MASK; // // But to make life simple for programmers we'll allow them to specify // a desired access mask by simply OR'ing together multiple single rights // and treat an access mask as a DWORD. For example // // DesiredAccess = DELETE | READ_CONTROL // // So we'll declare ACCESS_MASK as DWORD typedef DWORD ACCESS_MASK; typedef ACCESS_MASK *PACCESS_MASK; This is the overall structure of the access mask. You can see that there are bits for generic read, write, execute, and all privileges. It is also possible to specify specific rights that vary depending on the object type. For example, printers have different rights (clear queue, change paper to tray assignments, and so on) than files do. The specific rights are defined on a per-object-type basis. Programmers should consult the SDK for information on specific objects or on how to create specific rights for your objects. I have some special cases to address for completeness. If an object's SD contains no DACL, then everyone has full access to the object. This is important to keep in mind. On the other hand, if the DACL is null (contains 0 ACEs), then no one has access to the object (except the owner, as you'll see in the next section). If the SACL is null (or contains no ACEs), then no auditing occurs. Assigning and Inheriting ACLs and ACEs The algorithm for assigning ACLs to new objects follows: If the caller explicitly provides an SD when creating the object, that SD is used if possible (as long as the caller has sufficient rights). Any ACEs marked for mandatory inheritance in the object hierarchy above our new object are included. If the caller didn't specify an SD, the object manager searches for ACEs marked as inheritable in the object hierarchy above our new object and includes those. If none of the preceding steps applies, the default ACL from the caller's token is applied. This is the most common occurrence. Windows 2000 contains some significant improvements over previous versions of Windows NT when it comes to ACL inheritance. In previous versions, ACEs could only be inherited at object creation or when a new ACL was explicitly set on an existing object. The system did not propagate inheritable ACEs to child objects. Moreover, the system did not differentiate between inherited and directly applied ACEsso an object was unable to protect itself from inherited ACEs. In Windows 2000, the SetNamedSecurityInfoEx() and SetSecurityInfoEx() functions support automatic propagation of inheritable ACEs. For example, if you use these functions to add an inheritable ACE to a directory in an NTFS volume, the system applies the ACE as appropriate to the ACLs of any subdirectories and files. Win2K also introduces a new inheritance model where directly assigned ACEs have precedence over inherited ACEs. This is accomplished by adjusting the order of the ACEs in the ACL. To ensure that non-inherited ACEs have priority over inherited ACEs, all non-inherited ACEs precede all inherited ACEs. This ordering ensures, for example, that a non-inherited access-denied ACE is enforced regardless of any inherited ACE that allows access. With the new inheritance model comes a few additional particulars: inherited by child objects. If automatic inheritance results in the removal of all ACEs from a child object's DACL, the child object has an empty DACL rather than no DACL. These rules can have the unexpected result of converting an object with no DACL to an object with an empty DACL. You'll recall that there is a big difference here: An object with no DACL allows full access, but an object with an empty DACL allows no access. To ensure that inheritable ACEs do not affect a child object with no DACL, set the SE_DACL_PROTECTED flag in the object's security descriptor. SD/ACL/ACE Summary Summing up, Figure 3.4 shows a few hypothetical objects, their security, and the effects on the system. Figure 3.4 Some ACLs containing useful ACEs. Tokens If security descriptors and ACLs are the locks, then tokens are the keys. Each process and thread has a token, which contains the SID of the process's owner as well as the SIDs of the groups to which they belong. The system uses an access token to identify the user when a thread tries to obtain a handle to a securable object or tries to perform a privileged system task. Access tokens contain the following information: The SID of the logon account SIDs for the groups the account is a member of A logon SID that identifies the current logon session A list of the privileges held by either the user or the user's groups The kernel's security reference monitor (covered later in this chapter) compares the SID in the token with the SIDS in the ACLs to determine whether access will be permitted, whether auditing will be performed, and so on. Let's look at a token. Instead of looking at the Win32 structures, I use the kernel debugger's !tokenfields command. The Win32 structures are similar, but for the illustration the !tokenfields output is more concise. Remember, the values shown are the offsets to the fields, not actual values. This is just what the kernel structure looks like: kd> !tokenfields TOKEN structure offsets: TokenSource: 0x0 AuthenticationId: 0x18 ExpirationTime: 0x28 ModifiedId: 0x30 UserAndGroupCount: 0x3c PrivilegeCount: 0x44 VariableLength: 0x48 DynamicCharged: 0x4c DynamicAvailable: 0x50 DefaultOwnerIndex: 0x54 DefaultDacl: 0x6c TokenType: 0x70 ImpersonationLevel: 0x74 TokenFlags: 0x78 TokenInUse: 0x79 ProxyData: 0x7c AuditData: 0x80 VariablePart: 0x84 kd> In the token you find all the fields you expect plus a few additional fields for housekeeping purposes. Using the kernel debugger, let's take apart the token associated with the WINWORD.EXE process I am currently using to write this book: PROCESS fdd4b020 Cid: 02a0 Peb: 7ffdf000 ParentCid: 0280 DirBase: 02023000 ObjectTable: fdd24a48 TableSize: 98. Image: winword.exe VadRoot fdd2b388 Clone 0 Private 348. Modified 3. Locked 0. DeviceMap fdf14128 Token e1eec030 ElapsedTime 0:02:04.0539 UserTime 0:00:00.0270 KernelTime 0:00:00.0600 QuotaPoolUsage[PagedPool] 37828 QuotaPoolUsage[NonPagedPool] 5540 Working Set Sizes (now,min,max) (1325, 50, 345) (5300KB, 200KB, 1380KB) PeakWorkingSetSize 1325 VirtualSize 40 Mb PeakVirtualSize 46 Mb PageFaultCount 1500 MemoryPriority FOREGROUND BasePriority 8 CommitCharge 425 Looking at the process object, you see the token is stored at 0xe1eec030. Using the !token extension to elaborate kd> !token e1eec030 TOKEN e1eec030 Flags: 9 Source User32 \ AuthentId (0, 5ceb) Type: Primary (IN USE) Token ID: 127b6 ParentToken ID: 0 Modified ID: (0, a6a8) TokenFlags: 0x9 SidCount: 10 Sids: e1eec180 RestrictedSidCount: 0 RestrictedSids: 0 PrivilegeCount: 17 Privileges: e1eec0b4 Here is the actual token and its values. This is a primary token because the WinWord process isn't impersonating anyone. (I get to impersonation in a moment.) Most interesting are the SIDs, which I explore here. First, I display the dwords (with the kernel debugger dd command) starting at the memory location specified in the token, namely 0xe1eec180: kd> dd e1eec180 e1eec180 e1eec1d0 00000000 e1eec1ec 00000007 e1eec190 e1eec208 00000007 e1eec214 0000000f e1eec1a0 e1eec224 00000007 e1eec234 00000007 e1eec1b0 e1eec244 c0000007 e1eec258 00000007 e1eec1c0 e1eec264 00000007 e1eec270 00000007 e1eec1d0 00000501 05000000 00000015 74d97781 e1eec1e0 65d637a8 3f32a78a 000003e8 00000501 e1eec1f0 05000000 00000015 74d97781 65d637a8 Taking note of the SidCount = 10 in the token, I display the 10 SIDs using the !sid command: kd> !sid e1eec1d0 SID is: S-1-5-21-1960408961-1708537768-1060284298-1000 kd> !sid e1eec1ec SID is: S-1-5-21-1960408961-1708537768-1060284298-513 kd> !sid e1eec208 SID is: S-1-1-0 kd> !sid e1eec214 SID is: S-1-5-32-544 kd> !sid e1eec224 SID is: S-1-5-32-547 kd> !sid e1eec234 SID is: S-1-5-32-545 kd> !sid e1eec244 SID is: S-1-5-5-0-23483 kd> !sid e1eec258 SID is: S-1-2-0 kd> !sid e1eec264 SID is: S-1-5-4 kd> !sid e1eec270 SID is: S-1-5-11 Great. You see a list of SIDs that are contained in this token. Boy, there sure seem to be a lot. Table 3.7 lists the SIDs with some descriptions. Table 3.7 SIDs Included in Jeff's Word Token NOTE The Authenticated Users group was added in a patch to Windows NT 4.0 as a result of the "red button" vulnerability. Figure 3.5 shows that I am a member of the local administrators group. My membership in the power users group comes from the fact that authenticated users is defined as a member of power users on a Windows 2000 Professional machine, as mine is. This is shown in Figure 3.6. The net effect is that all non-anonymous users (that is, authenticated users) are also power users. The SIDs in the token you examined are my keys to the system. As I access resources, the security reference monitor makes sure that the security descriptor (and contained DACLs) attached to each resource permit one of the SIDs listed in my token to execute whatever action I am requesting. Similarly, if there is an SACL contained within the security descriptor, the system checks whether auditing is necessary given my list of SIDs and the access I am requesting. Pretty cool, eh? Figure 3.5 I am a member of the local administrators group, as the SIDs suggest. Figure 3.6 The group authenticated users is a member of the group power users. Privileges and User Rights I sidetrack here for just a moment and talk about privileges. Privileges, unlike access rights, affect the system as a whole, whereas access rights affect only a certain securable object. As an example, the ability to read the file c:\hi.txt is an access rightthe type of thing I've been talking about thus far. However, shutting down the system, for example, is a privilege because its scope is not limited to just one object. Table 3.8 comes directly from the SDK and gives a brief description of each privilege. Table 3.8 Common Privileges Many of these should be familiar to NT administrators as the "user rights" seen in User Manager, and now in the MMC, as in Figure 3.7. Windows 2000 gives administrators GUI access to most of these privileges. Figure 3.7 User rights and privileges are synonymous. Taking a look at the WINWORD.EXE processthis time using the PVIEW.EXE tool that ships with the Windows NT 4 Resource Kityou see the privileges that are enabled and the ones that are disabled (see Figure 3.8). Note that the enabled groups are the same as you saw in the exercise earlier. (Scrolling down reveals the rest of them.) Figure 3.8 The PVIEW.EXE tool in the Windows NT 4 Resource Kit shows more token information. NOTE Yes, I am using a tool from the Windows NT 4.0 Resource Kit to explore my Windows 2000 machine. The PVIEW.EXE tool is not included in the pre-release build of the Windows 2000 Resource Kit that I have. Because the structure of the token has not changed between NT 4 and Windows 2000, the NT 4 Resource Kit does the trick in this case. However, be careful when mixing versions of system-level tools. Some privileges are very powerful, so Microsoft put a two-tier mechanism in place for using the rights associated with privileges. When attempting a privileged action, not only must the privileges be held by the client, but they must also be enabled. Say I am attempting to call the LogonUser() Win32 function, which requires the SE_TCB_NAME privilege. My administrator granted my account (or a group to which I belong) the SE_TCB_NAME privilege using User Manager or another tool. However, this privilege (and all others) are disabled by default. Thus, my program does the following: Call OpenThreadToken() to get a handle to my primary (or impersonation) token. Call AdjustTokenPrivileges() to enable the necessary privileges, in this case SE_TCB_NAME. Do the call to LogonUser(). Call AdjustTokenPrivileges() to disable the privilege. SYSTEM Context Many system processes run under a special access token called SYSTEM. Analyzing the SYSTEM token as I did my own token earlier, you see the following: kd> !token e10007d0 TOKEN e10007d0 Flags: 9 Source *SYSTEM* AuthentId (0, 3e7) Type: Primary (IN USE) Token ID: 3ea ParentToken ID: 0 Modified ID: (0, 3e9) TokenFlags: 0x9 SidCount: 4 Sids: e1000950 RestrictedSidCount: 0 RestrictedSids: 0 PrivilegeCount: 21 Privileges: e1000854 kd> dd e1000950 e1000950 e1000970 00000000 e100097c 0000000e e1000960 e100098c 00000007 e1000998 00000007 ... kd> !sid e1000970 SID is: S-1-5-18 kd> !sid e100097c SID is: S-1-5-32-544 kd> !sid e100098c SID is: S-1-1-0 kd> !sid e1000998 Table 3.9 lists the SIDs that are contained within the SYSTEM token. Table 3.9 SIDs Contained in the SYSTEM Token The SYSTEM token contains local administrator rights to the computer and pretty much nothing else. NOTE The fact that the SYSTEM token has no network rights is important. Many administrators scratch their heads wondering why the neat little batch script works fine when they run it themselves but fails when run as a scheduled job by the scheduler service. The answer most often is that the script needs network rights (for example, it contains a NET command or NetBIOS path), and because the scheduler runs under SYSTEM context, the network access fails. Note that because the SYSTEM token is a member of the local administrators group, processes running under this context can enable any privilege it might need. Impersonation Okay, back to tokens. As you saw, every process has a primary token that contains the security context of the user account associated with the process. By default, the system uses the primary token when a thread of the process interacts with a securable object. However, a thread can impersonate a client account. This is a powerful feature that allows the thread to interact with securable objects using a client's security context. A thread impersonating a client has both a primary token and an impersonation token. Impersonation is commonly used in server processes that handle requests from the network. For example, the Windows 2000 server service provides RPC support and file, print, and named pipe sharing. A quick look with PVIEW.EXE shows that the process is indeed running with the SYSTEM token. In fact, most server processes run with SYSTEM context. You'll note that the server service actually shares a binary with SERVICES.EXE, as several system services do. See Chapter 5, "Services," for more information about this technique. Great, so the server service has local administrator power. But what happens when a client that has less than administrator privileges accesses a file share on this machine? Simple, the server service impersonates the client before attempting to access the resources. If the client does not have sufficient privilege, then neither will the server (while impersonating) and thus the access will fail. When the server service is done handling this client's requests, it can revert back to its primary (non-impersonated) token. As another example, the Internet Information Server (IIS) process also runs under the SYSTEM context. However, administrators familiar with IIS will recognize the special IIS anonymous account (IIS_domainname) that IIS uses for anonymous clients (HTTP and FTP). The general idea is that you assign that special account the rights that you want for folks accessing your server anonymously. What happens under the hood is that the IIS worker thread that actually services the request impersonates the guest account before accessing resources. When the user's request has been filled, the thread reverts back to its primary token. Impersonation is a powerful and elegant way to handle access checks. You can imagine the alternative: Each server process, running as SYSTEM, would have to manually check access for each user to each object that he requests. This would require a huge volume of repetitive and error-prone code in each server service. Restricted Tokens New in Windows 2000 is the ability to create restricted tokens. Restricted tokens, as you might expect, contain a subset of the SIDs and privileges of the original token. You create restricted tokens using the CreateRestrictedToken() function, which simply takes as arguments a handle to an existing token as well as a list of privileges and SIDs to remove. Token Security With all this talk about opening tokens, I'm sure you're wondering how tokens are secured. Tokens are just kernel objects and thus, like every other kernel object, are securable and have security descriptors attached. See the SDK for the access rights that apply to tokens, as well as how to adjust them. Moving On Now that you're familiar with the background structures and concepts, look at the components that make up the Windows 2000 security architecture.
https://www.informit.com/articles/article.aspx?p=131215&amp;seqNum=2
CC-MAIN-2020-16
refinedweb
5,115
53.92
GHC goes out of memory while compiling simple program with optimizations When compiling the following program with ghc -O Main.hs, GHC goes out of memory. import Data.Bits (bit) main :: IO () main = putStrLn (show (f undefined)) f :: [Int] -> Int f = sum . zipWith ((+) . bit) [0..] . map undefined . scanl undefined undefined I have 6 GB RAM and 8 GB swap free, so that shouldn't be the problem. It only happens with optimizations on, and it happens during a simplifier. Any simpler expressions do work. It even happens when [0..] is replaced by take 1 [0..], but it compiles with take 0 [0..]. A straightforward workaround is to replace zipWith f [0..] by \xs -> zipWith f [0..length xs] xs. I ran into this problem when updating to GHC 8.2.1 from 8.0.2. But the given program doesn't compile on older versions as well. GHC 7.10.3 was the lowest version I could run before running into other problems. All GHC versions I tested come from the Debian repo.
https://gitlab.haskell.org/ghc/ghc/-/issues/14272
CC-MAIN-2021-10
refinedweb
173
78.75
Let me start with some definitions and a description of the problem I'm trying to solve. One of the well known problems of the modern web is an ability to embed pieces of someone else's code, sometimes repackaged into a different presentation and displayed as part of your own page. We all see banners on portal pages, little Google maps displaying directions to your department store or a doctor's office. We see pages like iGoogle which you can customize to your needs by choosing and arranging your clocks, daily weather, GMail, cnn.com news, stock market tickers, and numerous other tools on one page. Let's follow Wikipedia and refer to these little snippets as web widgets. Your widget usually resides on someone else's page. For the purposes of this article, I'm going to call this page a mashup. A concept of a widget and a mashup is similar to the concept of a portlet and a portal; the major difference being that aggregation of portlets into a portal page usually takes place on the portal server; and aggregation of widgets into a mashup is usually performed by the [client] browser. See Wikipedia for more on mashups and portals. In this article, we are only going to demonstrate the "client" approach since it's more scalable and is a preferable choice in modern web development. Now, we have a widget, and a mashup page where this widget resides. Let's say you want to develop a widget using ASP.NET and C#, and you want to provide for others an ability to easily include your widget in their mashup page. And a mashup page is not necessarily developed on ASP.NET, and you do not necessarily have control over what technologies are used to develop this page. If you are a LAMP developer, there is plenty of information on the web, and not so much when it comes to ASP.NET. This article demonstrates two techniques to develop such widgets using ASP.NET: IFRAME, and embedding widgets dynamically into a mashup page using JavaScript and jQuery. IFRAME I did an extensive research on the web when writing this article, and there are some very good articles out there describing the problems you might face while developing your widgets, and possible solutions. Let me review them here. When it comes to developing widgets, one of the most challenging problems is something called "cross-domain communication". To put it simply, if you have your mashup page and a widget on the same server, everything is very simple. Everything becomes more complicated when you have a mashup on one server (say,) and your widget resides on another server (say,). So, if you have a simple scenario (where you can control both the mashup and the widget environments), what do you do? There are excellent articles out there, let me reference some of them (keep in mind, some articles assume that ASP.NET is used to develop both the mashup and the widget, others allow using mixed technologies): There is a great PowerPoint presentation that lists all the known technologies used to develop widgets (remember, the most challenging problem we have to overcome is a "cross-domain posting"): The IFRAME approach stands alone from other client techniques because it does not deal with "cross-domain communication" issues (even though the widget is hosted on a completely different server). This is the simplest technique, but has its limitations. Simply put, IFRAME is a rectangular area on the host (mashup) page where the widget is loaded into. The widget, in this case, is a fully functional web page, but displayed in that rectangular area. Since it’s an independent page, it can be developed using ASP.NET, and you are not limited to using ASP.NET postbacks and AJAX (ScriptManager etc.). When using IFRAMEs, we need to remember that essentially it is almost like having a different browser window embedded into a mashup page, and it can affect the mashup page in a bad way (for example, if a DIV popup or a modal dialog is displayed on the page and it overlaps with an IFRAME, part of the dialog will not be displayed). Also, please keep in mind that IFRAMEs are resource intensive, so a parent page with multiple IFRAMEs might be slow. DIV If the owner of the mashup page is really concerned with security, malicious scripts running in the widget, then the IFRAME approach is preferable, because the widget’s script will have limited access to the host page and hence couldn’t make much harm to the page where it's embedded into. Also, this approach is preferable if the owner of the widget wants to control the layout and styling of his/her widget. Since the IFRAME is essentially a separate web page, the mashup’s CSS scripts can’t do much harm to the widget. As a widget developer, if you still want to give a mashup developer the ability to customize the style of your widget, you will have to accept a Style ID (an ID that identifies one of the styles that you developed and supported) or a full path to the CSS as a GET parameter of your widget (and in the widget, you will have to generate a proper link command to load the corresponding CSS directly into your IFRAME). Style ID ID Let's demonstrate the IFRAME approach with an example. As an example, let’s develop a simple calculator widget which accepts two arguments and calculates a sum of these numbers. Start with creating a new ASP.NET project, add a new ASP.NET page, and drop the necessary controls on to it. We’re going to have a linked button that generates a postback. The OnClick event is really simple (see the Calculator folder in the Widgets project): OnClick //C#: protected void lbResult_Click(object sender, EventArgs e) { int arg1 = 0; int arg2 = 0; int.TryParse(txtArg1.Text, out arg1); int.TryParse(txtArg2.Text, out arg2); txtResult.Text = (arg1 + arg2).ToString(); } We want to be able to have several alternative styles for the widget, so we’re going to pass an additional “GET” parameter to the widget that will set the theme of our widget. Here’s how we can include the widget onto the mashup page (see CalculatorTest.aspx): <!-- HTML: --> <IFRAME id="calc1" name="calc1" style="width: 200px; height: 100px;" frameborder="0" scrolling="no" src="" /> As you can see, our IFRAMEd widget accepts an additional parameter called theme. In the demo example, because we have both the widget and the host project on the same server, we’re using localhost as the name of the widget’s host; in production, SRC should reference to an actual host where the widget is hosted. theme SRC This is how we’re going to implement styling the control: ThemedPage Css //C#: protected string Css(string url) { string theme = Request["theme"]; return !string.IsNullOrEmpty(theme) ? string.Format(url, "themes/" + theme) : ""; } <!-- HTML: --> <link rel='stylesheet' type='text/css' href="<%= Css("../{0}/default.css") %>" /> As you will see, in runtime, this will generate the actual reference to the CSS file (relative to the calculator’s ASPX file): <!-- HTML: --> <link rel='stylesheet' type='text/css' href="../themes/theme1/default.css" /> This is it! Now we can use the calculator widget with two different themes (themes are maintained by the owner of the widget). Since we’re using IFRAMEs, the owner of the host page cannot break the formatting. With this approach, you also have the benefit of creating complicated widgets which use postbacks, AJAX etc. See the complete example in the companion archive. Let me make a quick note regarding why I decided not to use the standard ASP.NET feature called Themes, and implemented something similar instead. I could’ve used it, which might’ve simplified the code a little (we should’ve just reassigned the widget’s Theme property based on the themes parameter, put all our themes into the standard App_Themes folder instead of the custom Themes, and we would not even have to worry about including CSS links to our pages – ASP.NET would've taken care of that for us automatically). The reason I decided not to proceed this route is that I don’t like when ASP.NET adds references to all CSS files in the themes directory to all pages. That seemed a little inconvenient when you have lots of CSS files for different widgets – all widgets will link all available CSS files. I decided I’d rather control this myself. Theme themes While the IFRAME approach is very simple and straightforward, there are several drawbacks to that approach; mainly, it is slow, resource intensive [on the browser], and does not give the owner of the mashup page an ability to style the widgets the way s/he wants. The alternative approach is to dynamically load your widget into someone else’s page, and embed it into an empty DIV or a SPAN tag. The widget will become an integral part of the mashup page. All parent CSS scripts will affect the widget, and the owner of the mashup page will be able to easily change the styling of the widget if necessary. When developing such widgets, it is recommended not to provide any styling in HTML (and leave all styling to the mashup page). With this approach, it is the responsibility of the host to properly format the widget. SPAN It sounds like a good approach. So, what’s the catch? There are plenty really. HEAD BODY FORM This is actually a disadvantage of this approach: a widget in this scenario has much greater control over the whole page, and can break it easily: by providing bad HTML formatting, by using malicious JavaScript, by redirecting from the host page etc. So, there should be a much higher trust relationship between the owner of the mashup page and the owner of the widget. Bottom line: you should forget about server-side controls that POST. ScriptManager Now, what do you do if you cannot use Microsoft AJAX, and still want your widget to be interactive and react to the user clicks by updating itself? I’m going to demonstrate a technique that uses jQuery to make JSONP requests to a WCF service, which implements business logic and provides results which are then used to update the UI of the widget. So, let me restate all the technologies used to develop this widget, and why I used them: Now, here are some great articles that present parts of the complete picture and might be worth to read to understand more regarding how to use these technologies, in what context etc.: With embedded widgets, the most challenging part is to make it possible to update the host (mashup) page. There is a restriction on the JavaScript where you can update parts of the page with information obtained by only making requests to the same host that serves the page. Simply saying, if the mashup is residing at, without special tricks, it can only update parts of the page located on the same host,. In a real world scenario, we want to have a mashup page residing on one server and widgets downloaded from a different server (e.g.,) and embedded into the mashup page. As I mentioned earlier, we're going to use JSONP to overcome this limitation when downloading our widget and making calls to update parts of our widget. Also, we're going to use jQuery and JSONP to make requests to the WCF backend service to update parts of the widget as a reaction to user clicks. Here’s what you need to do to implement the widget: <!-- HTML: --> <div id="calculator2"> <input id="txtArg1" /> <span id="lblSign">+</span> <input id="txtArg2" /> <a id="lbResult">=</a> <span id="result" /> <span id="error" /> </div> http%3A//localhost/Widgets/Calculator2&method=jsonp1249181681232&_=1249181681315 The result of the call to this service would look something like this: //JavaScript: jsonp1249181681232( "\r\n\r\n\u003c!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"\"\u003e\r\n\r\n\u003chtml xmlns=\"\" \u003e\r\n\u003chead\u003e\u003ctitle\u003e\r\n\r\n \u003c/title\u003e\u003c/head\u003e\r\n\u003cbody\u003e\r\n \u003cform name=\"form1\" method=\"post\" action=\"default.aspx\" id=\"form1\"\u003e\r\n\u003cinput type=\"hidden\" name=\"__VIEWSTATE\" id=\"__VIEWSTATE\" value=\"/wEPDwULLTE2MTY2ODcyMjlkZIrWmIrloRWErn6elOCzG8cGl8eN\" /\u003e\r\n\r\n \u003cdiv id=\"calculator2\"\u003e\r\n \u003cinput id=\"txtArg1\" /\u003e\r\n \u003cspan id=\"lblSign\"\u003e+\u003c/span\u003e\r\n \u003cinput id=\"txtArg2\" /\u003e\r\n \u003ca id=\"lbResult\"\u003e=\u003c/a\u003e\r\n \u003cspan id=\"result\" /\u003e\r\n \u003cspan id=\"error\" /\u003e\r\n \u003c/div\u003e\r\n \u003c/form\u003e\r\n\u003c/body\u003e\r\n\u003c/html\u003e\r\n" ); This is the idea of JSONP – the JSONP call will return a particular JavaScript code (function call) which now can be executed in the browser context. The HTML content of the control is passed as a parameter of the function, and is not treated as something downloaded from a different host, so we can easily “embed” it into a mashup page. jsonp1249181681232 is the name of the JavaScript function automatically generated by the jQuery. This function will be called at the end of the JSONP call, and will pass the actual result (HTML) into our callback, which implements "embedding" the HTML code of the widget into a host page (see next paragraph). jsonp1249181681232 //JavaScript: loadhtml: function(container, urlraw, callback) { var urlselector = (urlraw).split(" ", 1); var url = urlselector[0]; var selector = urlraw.substring(urlraw.indexOf(' ') + 1, urlraw.length); private.container = container; private.callback = callback; private.jsonpcall('Service.ashx', ['downloadurl', escape(url)], function(msg) { // gets the contents of the Html in the 'msg' // todo: apply selector private.container.html(msg); if ($.isFunction(private.callback)) { private.callback(); } }); }, The callback that will be called by the auto generated function (jsonp1249181681232) will finalize embedding HTML code of our widget into the mashup page: //JavaScript: function(msg) { // gets the contents of the Html in the 'msg' // todo: apply selector private.container.html(msg); } where the container is the DIV from the mashup page where the control is going to be embedded. The loadhtml function is used like this: loadhtml loadhtml(container, '', this.Calculator2_"en-us">Init); Basically, we’re passing the URL of our widget into this function, and the function will embed the proper JSONP call, and once the download and embedding is completed, it'll call the Calculator2_Init function, which will initialize our widget. Calculator2_Init //JavaScript: // wire widget after it's loaded Calculator2_Init: function() { // this function is OnClick event for the link $('a#lbResult').click(function() { var btn = $(this); var widget = $('div#calculator2'); widget.find('*').addClass("processing"); widget.find("span#error").html(""); var arg1 = widget.find('input#txtArg1')[0].value; var arg2 = widget.find('input#txtArg2')[0].value; private.jsonpcall("Calculator2/Service.svc/Sum", ["arg1", arg1, "arg2", arg2], function(result) { if (result.Error == null) { widget.find("span#result").html(result.Value); } else { widget.find("span#result").html("Error"); widget.find("span#error").html(result.Error); } widget.find('*').removeClass("processing"); }); return false; }); // initializing the widget. // nothing for now. } Basically, what’s happening here is we assign a click event for our lbResult element. The event will be fired when a user clicks the link. The event obtains the values from the input elements (arg1 and arg2), and then makes a WCF JSONP call to our Calculator2 WCF service (invokes its Sum function). click lbResult arg1 arg2 Calculator2 Sum The most important piece of code here is: //JavaScript: private.jsoncall("Calculator2/Service.svc/Sum", ["arg1", arg1, "arg2", arg2], <OnComplete>); This function makes a call to the WCF JSONP service Calculator2/Service.svc, invokes the Sum function, and passes two arguments (arg1 and arg2). The Sum function is implemented on the back end. It will perform the calculation, and once the calculation is completed, we assign the resulting value into the SPAN result: result //JavaScript: widget.find("span#result").html(result.Value); //C#: [ServiceContract] public interface ICalculator2 { [OperationContract] [WebGet(ResponseFormat = WebMessageFormat.Json)] [JSONPBehavior(callback = "method")] Result Sum(string arg1, string arg2); } Result //C#: [DataContract] public class Result { public Result() { } [DataMember] public string Error; [DataMember] public string Value; } Although we could’ve returned just the number (int or string value), this implementation passes the possible service’s exceptions back to the UI. int string //C#: // Service Implementation public class Calculator2 : ICalculator2 { public Calculator2() { } public Result Sum(string arg1, string arg2) { Result result = new Result(); try { int iarg1 = 0; int iarg2 = 0; int.TryParse(arg1, out iarg1); int.TryParse(arg2, out iarg2); result.Value = (iarg1 + iarg2).ToString(); } catch (Exception ex) { result.Error = ex.Message; } return result; } } <!-- web.config: --> <!-- WCF configuration >>> --> <system.serviceModel> <!-- WCF services --> <services> <service name="Widgets.Calculator2"> <endpoint address="" binding="customBinding" bindingConfiguration="jsonpBinding" behaviorConfiguration="Calculator2_Behavior" contract="Widgets.ICalculator2"/> </service> </services> <behaviors> <endpointBehaviors> <behavior name="Calculator2_Behavior"> <webHttp/> </behavior> </endpointBehaviors> </behaviors> <bindings> <customBinding> <binding name="jsonpBinding"> <jsonpMessageEncoding/> <httpTransport manualAddressing="true"/> </binding> </customBinding> </bindings> <extensions> <bindingElementExtensions> <add name="jsonpMessageEncoding" type="Microsoft.Jsonp.JsonpBindingExtension, Widgets, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/> </bindingElementExtensions> </extensions> </system.serviceModel> <!-- <<< WCF configuration --> This service uses custom binding (jsonpBinding). See the implementation for this binding in the companion archive. I took the implementation for the custom JSONP Binding from the WCFWFCardSpace examples from Microsoft (see the Microsoft.Jsonp.JsonpBindingExtension class). jsonpBinding WCFWFCardSpace Microsoft.Jsonp.JsonpBindingExtension <!-- HTML: --> <div id="calc"></div> //JavaScript: <script type="text/javascript" src=""></script> As with the IFRAME example, you will have to replace localhost with the actual host where the widget is hosted. Implementation of Calculator2 is based on the function we implemented earlier: //JavaScript: // load widget into 'container' from 'host' Calculator2: function(container, host) { private.host = host; private.loadhtml(container, 'http://' + private.host + '/Widgets/Calculator2', private.Calculator2_Init); } Basically, we download our widget using loadhtml, and then we’ll initialize it. This is it! Congratulations! You’ve implemented your first embedded widget using JSONP, WCF, and ASP.NET: This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Could not load type 'Widgets.Calculator'. Source Error: Line 1: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="default.aspx.cs" Line 2: Inherits="Widgets.Calculator" %> using (HttpWebResponse res = (HttpWebResponse)req.GetResponse()) <form> General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/38701/Developing-Widgets-with-ASP-NET-WCF-and-jQuery
CC-MAIN-2015-40
refinedweb
3,120
54.22
I have my own framework with a request handler and everything. I first started off without a modular approach so just one folder controllers where all the application controllers were located. Now I want to convert it to modules, so every module has his own controller, model, views, forms and so on. This is in not problem but the problem starts when using namespaces (php 5.3). Default controllers are in Application\Controllers, modules are in Application\Modules\<modulename>\Controllers I have a few options I'm thinking about, you can tell me how you would do it or/and which option you prefer. Keep in mind that namespaces are used and the used namespaces. I would go for option #1 but i'm looking forward to here more from you. Option #1 I have a moduleloader which do some stuff when you register modules to the application (enable them). I was thinking to just create an array which holds all controllers of a module so if the request has controller login it will check the array for the login controller and than we can manipulate the namespace Application\Modules\<modulename>\Controllers Negative: You can't have duplicate controllers, lets say 2 modules have a login controller, this won't work or the first one will be shown. (the possibility that this will happen is small but it exists) Option #2 My urls will look like this /modulename/controller/action/params so we can fetch the modulename from the url. Negative: Long urls and I have to create a lot of rewrite rules to create "beautiful" urls, because I want the urls as short as possible. Option #3 It's like option #2 but we will have a request_map array which has key url and value the full controller class (namespace + controller) Negative: Not so dynamic, we have to add them ourselves to the array. Modular applications (how would you do it)Page 1 of 1 0 Replies - 10193 Views - Last Post: 24 September 2012 - 11:28 AM #1 Modular applications (how would you do it) Posted 24 September 2012 - 11:28 AM Page 1 of 1
http://www.dreamincode.net/forums/topic/293069-modular-applications-how-would-you-do-it/page__pid__1708570__st__0
CC-MAIN-2016-07
refinedweb
357
53.65
[Date Index] [Thread Index] [Author Index] Re: Re: word problem Just finishing up writing a thesis, in WordPerfect, with alot a graphs. My solution to this problem was to export all my graphs as a wmf file and then import them into WordPerfect. I had no problem resizing them once they were in the document. I also found that to had to change the font size in Mathematica to 9 pt so that once they were resized, they lettering in the graphs looked the same size as the text in my document did. On a side note, to wanted to get my Mathematica notebooks in as Apendices. To do this I had mathematica convert them into pdfs and then I copied the pages with Adobe's image tool and pasted them in as pictures. Perhaps is a better way to do this, but it works. mary beth Quoting Jens-Peer Kuska <kuska at informatik.uni-leipzig.de>: >. > > > --
http://forums.wolfram.com/mathgroup/archive/2006/Aug/msg00519.html
CC-MAIN-2016-50
refinedweb
157
69.11
OPENMP is a directory of C++ programs which illustrate the use of the OpenMP application program interface for carrying out parallel computations in a shared memory environment. The directives allow the user to mark areas of the code, such as do, while or for loops, which are suitable for parallel processing. The directives appear as a special kind of comment, so the program can be compiled and run in serial mode. However, the user can tell the compiler to "notice" the special directives, in which case a version of the program will be created that runs in parallel. Thus the same program can easily be run in serial or parallel mode on a given computer, or run on a computer that does not have OpenMP at all. OpenMP is suitable for a shared memory parallel system, that is, a situation in which there is a single memory space, and multiple processors. If memory is shared, then typically the number of processors will be small, and they will all be on the same physical machine. By contrast, in a distributed memory system, items of data are closely associated with a particular processor. There may be a very large number of processors, and they may be more loosely coupled and even on different machines. Such a system will need to be handled with MPI or some other message passing interface. OpenMP descended in part from the old Cray microtasking directives, so if you've lived long enough to remember those, you will recognize some features. OpenMP includes a number of functions whose type must be declared in any program that uses them. A user program calling OpenMP must have the statement # include <omp.h> OpenMP allows you to "request" any number of threads of execution. This is a request, and it's not always a wise request. If your system has four processors available, and they're not busy doing other things, or serving other users, maybe 4 threads is what you want. But you can't guarantee you'll get the undivided use of those processors. Moreover, if you run the same program using 1 thread and 4 threads, you may find that using 4 threads slows you down, either because you don't actually have 4 processors, (so the system has the overhead of pretending to run in parallel), or because the processors you have are also busy doing other things. For this reason, it's wise to run the program at least once in single thread mode, so you have a benchmark against which to measure the speedup you got (or didn't get!) versus the speedup you hoped for. The compiler you use must recognize the OpenMP directives in order to produce code that will run in parallel. Here are some of the compilers available that support OpenMP: The computer code and data files described and made available on this web page are distributed under the GNU LGPL license. OPENMP examples the matrix multiplication problem y=A*x, with and without parallelization by OpenMP. OPENMP_STUBS, a C++ library which implements a "stub" version of OpenMP, so that an OpenMP program can be compiled, linked and executed on a system that does not have OpenMP installed.. PTHREADS, C programs which illustrate the use of the POSIX thread library to carry out parallel program. COMPUTE_PI shows how information can be shared. Several processors cooperate to estimate the value of pi. DOT_PRODUCT compares the computation of a vector dot product in sequential mode, and using OpenMP. Typically, the overhead of using parallel processing outweighs the advantage for small vector sizes N. The code demonstrates this fact by using a number of values of N, and by running both sequential and OpenMP versions of the calculation. HELMHOLTZ is a program that solves the Helmholtz equation on a rectangular grid, using Jacobi iteration with overrelaxation; MXM is a simple exercise in timing the computation of a matrix-matrix product. You can go up one level to the C++ source codes.
http://people.sc.fsu.edu/~jburkardt/cpp_src/openmp/openmp.html
CC-MAIN-2015-11
refinedweb
667
59.13
This is the fourth installment in my series of posts about extension methods. You can find links to the rest of the series here. Today I’m going to talk about extension methods and late binding. Essentially there isn’t much to say about it, other than the fact that we don’t support late bound execution of extension methods. For the most part this isn’t a big deal as one of the primary benefits of extension methods is its interaction with intellisence which doesn’t work in late bound scenarios anyways. Unfortunately, however, there is one big side effect of this decision that you need to be aware of when authoring extension methods. Mainly, we do not allow extension methods to be called off of any expression that is statically typed as “Object”. This was necessary to prevent any existing late bound code you may have written from being broken by extension methods. Too see why, consider the following example: Class C1 Public Sub Method1() Console.WriteLine(“Running c1.Method1”) End Sub End Class Class C2 Public Sub Method1() Console.WriteLine(“Running c2.Method1”) End Sub End Class Module M1 Public Sub Main() Dim x As Object = Nothing If (SomeCondition()) Then x = New C1 Else x = New C2 End If x.Method1() End Sub End Module Here we have a program which uses late binding in order to invoke the method “Method1” on an object that is either of type “c1” or of type “c2”. It does this because the static type of the variable “x” is declared as object, which causes the compiler to resolve any calls to “unknown” methods as late bound calls that will be resolved at runtime based on the dynamic type of the object stored in “x”. Extension methods, however, are always fully resolved at compile time. As a result, if we were to add an extension method defined on Object, like so: <Extension()> _ Public Sub Method1(ByVal x As Object) Console.WriteLine(“Running Extension method m1.Method1”) End Sub Then the act of simply importing the namespace containing the method would cause the formerly late bound method call to Method1 to be transformed into an early bound call to the extension method. This would not be good, as it would silently change the meaning of the program, which would have the potential of making extension methods very, very dangerous. In fact, we explicitly disallow this in early bound scenarios by always having instance methods defined on a type shadow any extension methods defined on it with the same name. This enables you to import extension methods into your existing code without having to worry about things blowing up in your face. In the case of late bound calls, however, we can’t use the same shadowing semantics that we do with early bound method calls because we don’t know the actual type of the object we will be calling the method on, so we don’t have a list of methods we can check against at compile time. As a result, the only way we could prevent extension methods from completely breaking existing late bound code was to prevent them from being used on anything typed as object. This has the effect of slightly changing the meaning of an extension method defined on object. Consider the following non-object extension method: <Extension()> _ Public Sub Method2(ByVal x As System.Windows.Forms.Control) End Sub Here the declaration of “Method2” implies that the method is an extension method applicable to the type System.Windows.Froms.Control and any class that derives from it, such as TextBox or DataGridView. In the case of “Method1”, however, the method declaration implies that it is applicable to “all types except object”. This is a little unfortunate, as it is an inconsistency in the language, but we felt it was much better in this case to be safe than it was to be consistent. That’s all I have for today. In my next post I’ll talk about extension methods and generics. PingBack from What about extensions on unconstrained generic type parameters? I recall Paul Vick’s post about resolving extensions on late bound objects and as fantastical as it seemed at the time I can appreciate your decision to drop it. And given the limited interface of Object it would be much effort for little practical benefit to end users. I suppose the only other compromise would be to disallow extentions on type object with Option Strict Off which is inconsistent and once again kinda useless. I know I’ve been kinda emotional lately about the VB9 dream dying (scope reduction) a little in the last few but I think this was a really good call on the part of the VB Team, and wanted to applaud your good judgment, for what my not-always-so-humble opinion’s worth. -Anthony Anthony, Thanks for the feedback. We like to get both good feedback and bad feedback about what we are doing, so don’t ever feel bad about sharing your opion with us. We wouldn’t be able to build great products without it. Also, the unconstrained type parameter case will behave identically to the object case. -Scott Wisniewski Can someone update the VB9 language overview to reflect all the design and scope decisions since it was last written. I would appreciate an up-to-date and accurate picture of what I’m begging my employer to adopt. I second Anthony’s request for an update to the VB9 document. Not to be negative but its long overdue. This is the sixth installment in my series of posts about extension methods. You can find links to the Seems to me that this should have been a compilation error or warning type that could be turned on or off, so that those who do not use late binding can declare extention methods on the object type. In terms of "better safe than sorry", this compilation error could be enabled by default. Yet another arbitrary restriction imposed by the VB.Net language. Having just noticed this on an unconstrained (Of T) extension, I'm surprised the error message doesn't at least hint at to the issue. "Option Strict disallows late binding" doesn't point to the problem. Also there could be an Error Correction Option that just internalises the extension object, i.e. change obj.Ext to Ext(Obj,… The above is for VS2k8; I haven't checked the behaviour of VS2010.
https://blogs.msdn.microsoft.com/vbteam/2007/01/24/extension-methods-and-late-binding-extension-methods-part-4/
CC-MAIN-2017-22
refinedweb
1,088
59.23
An AI character system needs to be aware of its environment such as where the obstacles are, where the enemy is, whether the enemy is visible in the player’s sight, and so on. The quality of our Non-Player Character (NPC’s) AI completely depends on the information it can get from the environment. Nothing breaks the level of immersion in a game like an NPC getting stuck behind a wall. Based on the information the NPC can collect, the AI system can decide which logic to execute in response to that data. If the sensory systems do not provide enough data, or the AI system is unable to properly take action on that data, the agent can begin to glitch, or behave in a way contrary to what the developer, or more importantly the player, would expect. Some games have become infamous for their comically bad AI glitches, and it’s worth a quick internet search to find some videos of AI glitches for a good laugh. In this article, we’ll learn to implement AI behavior using the concept of a sensory system similar to what living entities have. We will learn the basics of sensory systems, along with some of the different sensory systems that exist. You are reading an extract from Unity 2017 Game AI programming – Third Edition, written by Ray Barrera, Aung Sithu Kyaw, Thet Naing Swe. Basic sensory systems Our agent’s sensory systems should believably emulate real-world senses such as vision, sound, and so on, to build a model of its environment, much like we do as humans. Have you ever tried to navigate a room in the dark after shutting off the lights? It gets more and more difficult as you move from your initial position when you turned the lights off because your perspective shifts and you have to rely more and more on your fuzzy memory of the room’s layout. While our senses rely on and take in a constant stream of data to navigate their environment, our agent’s AI is a lot more forgiving, giving us the freedom to examine the environment at predetermined intervals. This allows us to build a more efficient system in which we can focus only on the parts of the environment that are relevant to the agent. The concept of a basic sensory system is that there will be two components, Aspect and Sense. Our AI characters will have senses, such as perception, smell, and touch. These senses will look out for specific aspects such as enemies and bandits. For example, you could have a patrol guard AI with a perception sense that’s looking for other game objects with an enemy aspect, or it could be a zombie entity with a smell sense looking for other entities with an aspect defined as a brain. For our demo, this is basically what we are going to implement—a base interface called Sense that will be implemented by other custom senses. In this article, we’ll implement perspective and touch senses. Perspective is what animals use to see the world around them. If our AI character sees an enemy, we want to be notified so that we can take some action. Likewise with touch, when an enemy gets too close, we want to be able to sense that, almost as if our AI character can hear that the enemy is nearby. Then we’ll write a minimal Aspect class that our senses will be looking for. Cone of sight A raycast is a feature in Unity that allows you to determine which objects are intersected by a line cast from a point in a given direction. While this is a fairly efficient way to handle visual detection in a simple way, it doesn’t accurately model the way vision works for most entities. An alternative to using the line of sight is using a cone-shaped field of vision. As the following figure illustrates, the field of vision is literally modeled using a cone shape. This can be in 2D or 3D, as appropriate for your type of game: The preceding figure illustrates the concept of a cone of sight. In this case, beginning with the source, that is, the agent’s eyes, the cone grows, but becomes less accurate with distance, as represented by the fading color of the cone. The actual implementation of the cone can vary from a basic overlap test to a more complex realistic model, mimicking eyesight. In a simple implementation, it is only necessary to test whether an object overlaps with the cone of sight, ignoring distance or periphery. A complex implementation mimics eyesight more closely; as the cone widens away from the source, the field of vision grows, but the chance of getting to see things toward the edges of the cone diminishes compared to those near the center of the source. Hearing, feeling, and smelling using spheres One very simple yet effective way of modeling sounds, touch, and smell is via the use of spheres. For sounds, for example, we can imagine the center as being the source and the loudness dissipating the farther from the center the listener is. Inversely, the listener can be modeled instead of, or in addition to, the source of the sound. The listener’s hearing is represented by a sphere, and the sounds closest to the listener are more likely to be “heard.” We can modify the size and position of the sphere relative to our agent to accommodate feeling and smelling. The following figure represents our sphere and how our agent fits into the setup: As with sight, the probability of an agent registering the sensory event can be modified, based on the distance from the sensor or as a simple overlap event, where the sensory event is always detected as long as the source overlaps the sphere. Expanding AI through omniscience In a nutshell, omniscience is really just a way to make your AI cheat. While your agent doesn’t necessarily know everything, it simply means that they can know anything. In some ways, this can seem like the antithesis to realism, but often the simplest solution is the best solution. Allowing our agent access to seemingly hidden information about its surroundings or other entities in the game world can be a powerful tool to provide an extra layer of complexity. In games, we tend to model abstract concepts using concrete values. For example, we may represent a player’s health with a numeric value ranging from 0 to 100. Giving our agent access to this type of information allows it to make realistic decisions, even though having access to that information is not realistic. You can also think of omniscience as your agent being able to use the force or sense events in your game world without having to physically experience them. While omniscience is not necessarily a specific pattern or technique, it’s another tool in your toolbox as a game developer to cheat a bit and make your game more interesting by, in essence, bending the rules of AI, and giving your agent data that they may not otherwise have had access to through physical means. Getting creative with sensing While cones, spheres, and lines are among the most basic ways an agent can see, hear, and perceive their environment, they are by no means the only ways to implement these senses. If your game calls for other types of sensing, feel free to combine these patterns. Want to use a cylinder or a sphere to represent a field of vision? Go for it. Want to use boxes to represent the sense of smell? Sniff away! Using the tools at your disposal, come up with creative ways to model sensing in terms relative to your player. Combine different approaches to create unique gameplay mechanics for your games by mixing and matching these concepts. For example, a magic-sensitive but blind creature could completely ignore a character right in front of them until they cast or receive the effect of a magic spell. Maybe certain NPCs can track the player using smell, and walking through a collider marked water can clear the scent from the player so that the NPC can no longer track him. As you progress through the book, you’ll be given all the tools to pull these and many other mechanics off—sensing, decision-making, pathfinding, and so on. As we cover some of these techniques, start thinking about creative twists for your game. Setting up the scene In order to get started with implementing the sensing system, you can jump right into the example provided for this article, or set up the scene yourself, by following these steps: - Let’s create a few barriers to block the line of sight from our AI character to the tank. These will be short but wide cubes grouped under an empty game object called Obstacles. - Add a plane to be used as a floor. - Then, we add a directional light so that we can see what is going on in our scene. As you can see in the example, there is a target 3D model, which we use for our player, and we represent our AI agent using a simple cube. We will also have a Target object to show us where the tank will move to in our scene. For simplicity, our example provides a point light as a child of the Target so that we can easily see our target destination in the game view. Our scene hierarchy will look similar to the following screenshot after you’ve set everything up correctly: Now we will position the tank, the AI character, and walls randomly in our scene. Increase the size of the plane to something that looks good. Fortunately, in this demo, our objects float, so nothing will fall off the plane. Also, be sure to adjust the camera so that we can have a clear view of the following scene: With the essential setup out of the way, we can begin tackling the code for driving the various systems. Setting up the player tank and aspect Our Target object is a simple sphere game object with the mesh render removed so that we end up with only the Sphere Collider. Look at the following code in the Target.cs file: using UnityEngine; public class Target : MonoBehaviour { public Transform targetMarker; void Start (){} void Update () { int button = 0; //Get the point of the hit position when the mouse is being clicked if(Input.GetMouseButtonDown(button)) { Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition); RaycastHit hitInfo; if (Physics.Raycast(ray.origin, ray.direction, out hitInfo)) { Vector3 targetPosition = hitInfo.point; targetMarker.position = targetPosition; } } } } You’ll notice we left in an empty Start method in the code. While there is a cost in having empty Start, Update, and other MonoBehaviour events that don’t do anything, we can sometimes choose to leave the Start method in during development, so that the component shows an enable/disable toggle in the inspector. Attach this script to our Target object, which is what we assigned in the inspector to the targetMarker variable. The script detects the mouse click event and then, using a raycast, it detects the mouse click point on the plane in the 3D space. After that, it updates the Target object to that position in the world space in the scene. A raycast is a feature of the Unity Physics API that shoots a virtual ray from a given origin towards a given direction, and returns data on any colliders hit along the way. Implementing the player tank Our player tank is the simple tank model with a kinematic rigid body component attached. The rigid body component is needed in order to generate trigger events whenever we do collision detection with any AI characters. The first thing we need to do is to assign the tag Player to our tank. The isKinematic flag in Unity’s Rigidbody component makes it so that external forces are ignored, so that you can control the Rigidbody entirely from code or from an animation, while still having access to the Rigidbody API. The tank is controlled by the PlayerTank script, which we will create in a moment. This script retrieves the target position on the map and updates its destination point and the direction accordingly. The code in the PlayerTank.cs file is as follows: using UnityEngine; public class PlayerTank : MonoBehaviour { public Transform targetTransform; public float targetDistanceTolerance = 3.0f; private float movementSpeed; private float rotationSpeed; // Use this for initialization void Start () { movementSpeed = 10.0f; rotationSpeed = 2.0f; } // Update is called once per frame void Update () { if (Vector3.Distance(transform.position, targetTransform.position) < targetDistanceTolerance) { return; } Vector3 targetPosition = targetTransform.position; targetPosition.y = transform.position.y; Vector3 direction = targetPosition - transform.position; Quaternion tarRot = Quaternion.LookRotation(direction); transform.rotation = Quaternion.Slerp(transform.rotation, tarRot, rotationSpeed * Time.deltaTime); transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime)); } } The preceding screenshot shows us a snapshot of our script in the inspector once applied to our tank. This script queries the position of the Target object on the map and updates its destination point and the direction accordingly. After we assign this script to our tank, be sure to assign our Target object to the targetTransform variable. Implementing the Aspect class Next, let’s take a look at the Aspect.cs class. Aspect is a very simple class with just one public enum of type AspectTypes called aspectType. That’s all of the variables we need in this component. Whenever our AI character senses something, we’ll check the aspectType to see whether it’s the aspect that the AI has been looking for. The code in the Aspect.cs file looks like this: using UnityEngine; public class Aspect : MonoBehaviour { public enum AspectTypes { PLAYER, ENEMY, } public AspectTypes aspectType; } Attach this aspect script to our player tank and set the aspectType to PLAYER, as shown in the following screenshot: Creating an AI character Our NPC will be roaming around the scene in a random direction. It’ll have the following two senses: - The perspective sense will check whether the tank aspect is within a set visible range and distance - The touch sense will detect if the enemy aspect has collided with its box collider, which we’ll be adding to the tank in a later step Because our player tank will have the PLAYER aspect type, the NPC will be looking for any aspectType not equal to its own. The code in the Wander.cs file is as follows: using UnityEngine; public class Wander : MonoBehaviour { private Vector3 targetPosition; private float movementSpeed = 5.0f; private float rotationSpeed = 2.0f; private float targetPositionTolerance = 3.0f; private float minX; private float maxX; private float minZ; private float maxZ; void Start() { minX = -45.0f; maxX = 45.0f; minZ = -45.0f; maxZ = 45.0f; //Get Wander Position GetNextPosition(); } void Update() { if (Vector3.Distance(targetPosition, transform.position) <= targetPositionTolerance) { GetNextPosition(); } Quaternion targetRotation = Quaternion.LookRotation(targetPosition - transform.position); transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, rotationSpeed * Time.deltaTime); transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime)); } void GetNextPosition() { targetPosition = new Vector3(Random.Range(minX, maxX), 0.5f, Random.Range(minZ, maxZ)); } } The Wander script generates a new random position in a specified range whenever the AI character reaches its current destination point. The Update method will then rotate our enemy and move it toward this new destination. Attach this script to our AI character so that it can move around in the scene. The Wander script is rather simplistic. Using the Sense class The Sense class is the interface of our sensory system that the other custom senses can implement. It defines two virtual methods, Initialize and UpdateSense, which will be implemented in custom senses, and are executed from the Start and Update methods, respectively. Virtual methods are methods that can be overridden using the override modifier in derived classes. Unlike abstract classes, virtual classes do not require that you override them. The code in the Sense.cs file looks like this: using UnityEngine; public class Sense : MonoBehaviour { public bool enableDebug = true; public Aspect.AspectTypes aspectName = Aspect.AspectTypes.ENEMY; public float detectionRate = 1.0f; protected float elapsedTime = 0.0f; protected virtual void Initialize() { } protected virtual void UpdateSense() { } // Use this for initialization void Start () { elapsedTime = 0.0f; Initialize(); } // Update is called once per frame void Update () { UpdateSense(); } } The basic properties include its detection rate to execute the sensing operation, as well as the name of the aspect it should look for. This script will not be attached to any of our objects since we’ll be deriving from it for our actual senses. Giving a little perspective The perspective sense will detect whether a specific aspect is within its field of view and visible distance. If it sees anything, it will take the specified action, which in this case is to print a message to the console. The code in the Perspective.cs file looks like this: using UnityEngine; public class Perspective : Sense { public int fieldOfView = 45; public int viewDistance = 100; private Transform playerTransform; private Vector3 rayDirection; protected override void Initialize() { playerTransform = GameObject.FindGameObjectWithTag("Player").transform; } protected override void UpdateSense() { elapsedTime += Time.deltaTime; if (elapsedTime >= detectionRate) { DetectAspect(); } } //Detect perspective field of view for the AI Character void DetectAspect() { RaycastHit hit; rayDirection = playerTransform.position - transform.position; if ((Vector3.Angle(rayDirection, transform.forward)) < fieldOfView) { // Detect if player is within the field of view if (Physics.Raycast(transform.position, rayDirection, out hit, viewDistance)) { Aspect aspect = hit.collider.GetComponent<Aspect>(); if (aspect != null) { //Check the aspect if (aspect.aspectType != aspectName) { print("Enemy Detected"); } } } } } We need to implement the Initialize and UpdateSense methods that will be called from the Start and Update methods of the parent Sense class, respectively. In the DetectAspect method, we first check the angle between the player and the AI’s current direction. If it’s in the field of view range, we shoot a ray in the direction that the player tank is located. The ray length is the value of the visible distance property. The Raycast method will return when it first hits another object. This way, even if the player is in the visible range, the AI character will not be able to see if it’s hidden behind the wall. We then check for an Aspect component, and it will return true only if the object that was hit has an Aspect component and its aspectType is different from its own. The OnDrawGizmos method draws lines based on the perspective field of view angle and viewing distance so that we can see the AI character’s line of sight in the editor window during play testing. Attach this script to our AI character and be sure that the aspect type is set to ENEMY. This method can be illustrated as follows: void OnDrawGizmos() { if (playerTransform == null) { return; } Debug.DrawLine(transform.position, playerTransform.position, Color.red); Vector3 frontRayPoint = transform.position + (transform.forward * viewDistance); //Approximate perspective visualization Vector3 leftRayPoint = frontRayPoint; leftRayPoint.x += fieldOfView * 0.5f; Vector3 rightRayPoint = frontRayPoint; rightRayPoint.x -= fieldOfView * 0.5f; Debug.DrawLine(transform.position, frontRayPoint, Color.green); Debug.DrawLine(transform.position, leftRayPoint, Color.green); Debug.DrawLine(transform.position, rightRayPoint, Color.green); } } Touching is believing The next sense we’ll be implementing is Touch.cs, which triggers when the player tank entity is within a certain area near the AI entity. Our AI character has a box collider component and its IsTrigger flag is on. We need to implement the OnTriggerEnter event, which will be called whenever another collider enters the collision area of this game object’s collider. Since our tank entity also has a collider and rigid body components, collision events will be raised as soon as the colliders of the AI character and player tank collide. Unity provides two other trigger events besides OnTriggerEnter: OnTriggerExit and OnTriggerStay. Use these to detect when a collider leaves a trigger, and to fire off every frame that a collider is inside the trigger, respectively. The code in the Touch.cs file is as follows: using UnityEngine; public class Touch : Sense { void OnTriggerEnter(Collider other) { Aspect aspect = other.GetComponent<Aspect>(); if (aspect != null) { //Check the aspect if (aspect.aspectType != aspectName) { print("Enemy Touch Detected"); } } } } Our sample NPC and tank have BoxCollider components on them already. The NPC has its sensor collider set to IsTrigger = true . If you’re setting up the scene on your own, make sure you add the BoxCollider component yourself, and that it covers a wide enough area to trigger easily for testing purposes. Our trigger can be seen in the following screenshot: The previous screenshot shows the box collider on our enemy AI that we’ll use to trigger the touch sense event. In the following screenshot, we can see how our AI character is set up: For demo purposes, we just print out that the enemy aspect has been detected by the touch sense, but in your own games, you can implement any events and logic that you want. Testing the results Hit play in the Unity editor and move the player tank near the wandering AI NPC by clicking on the ground to direct the tank to move to the clicked location. You should see the Enemy touch detected message in the console log window whenever our AI character gets close to our player tank: The previous screenshot shows an AI agent with touch and perspective senses looking for another aspect. Move the player tank in front of the NPC, and you’ll get the Enemy detected message. If you go to the editor view while running the game, you should see the debug lines being rendered. This is because of the OnDrawGizmos method implemented in the perspective Sense class. To summarize, we introduced the concept of using sensors and implemented two distinct senses—perspective and touch—for our AI character. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming – Third Edition, to explore the brand-new features in Unity 2017. Read Next: How to use arrays, lists, and dictionaries in Unity for 3D game development How to create non-player Characters (NPC) with Unity 2018
https://hub.packtpub.com/ai-unity-game-developers-emulate-real-world-senses/
CC-MAIN-2021-21
refinedweb
3,695
52.8
tag:blogger.com,1999:blog-48820894957082560112011-11-08T13:04:26.668-08:00Marsyas Developer Bloggtzan Marsyas with Python bindings on Ubuntu 11.04 Natty Narwhal - 32 bits<div dir="ltr" style="text-align: left;" trbidi="on"><br /> Combining <i>Marsyas.</i> Unfortunately getting everything installed and setup for the first time takes some effort. <br /> <br /> I recently did an installation on a fresh copy of <i>Ubuntu 11.04 - Natty Narwhal</i> - 32 bits and in this blog post I will share the steps I had to take. As usual the exact details change over time so the instructions are slightly different from previous versions of Ubuntu. These instructions can also be found in the <i>Marsyas User Manual </i>as part of the installation instructions for specific systems. <br /> <br />. <br /> <br /> <blockquote>$ sudo apt-get install subversion <br /> $ sudo apt-get install cmake <br /> $ sudo apt-get install cmake-curses-gui <br /> $ sudo apt-get install libasound2-dev</blockquote><br /> Now we are ready to get the latest Marsyas from svn, configure and compile. <br /> <br /> <blockquote>$ svn co marsyas <br /> $ cd marsyas <br /> $ mkdir build <br /> $ cd build <br /> $ ccmake ../src <br /> (Press [c] to continue and [g] to generate the Makefile)<br /> $ make -j 3 (or higher if you have more cores) </blockquote><br /> <br /> When the compilation is finished you can check that the command-line Marsyas tools were compiled. <br /> <br /> <blockquote>$ cd bin<br /> $ ./helloWorld <br /> (Press ctrl-c to stop the sine wave from playing) <br /> $ cd .. </blockquote><br /> If everything worked you will hear a sine wave playing on your speakers. <br /> <br /> Now we are ready to get the Python/SWIG bindings going. <br /> <br /> <blockquote>$ sudo apt-get install swig <br /> $ sudo apt-get install python-dev <br /> (the python headers) <br /> $ sudo apt-get install python-matplotlib <br /> $ sudo apt-get install ipython </blockquote><br /> Now we need to reconfigure/compile Marsyas to use Swig to create the python bindings. <br /> <br /> <blockquote>$ ccmake ../src <br /> (enable the WITH_SWIG option) <br /> $ make -j 3 <br /> $ sudo make install <br /> (install the Marsyas python bindings so that Python can find them) <br /> $ sudo ldconfig /usr/local/lib <br /> (add /usr/local/lib to the path searched for libraries) </blockquote><br /> If everything has worked you can now combine Marsyas and matplotlib. To check try: <br /> <br /> <blockquote>$ cd src/marsyas_python <br /> $ python windowing.py <br /> $ ipython --pylab windowing.py </blockquote><br /> <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="" width="400" /></a></div><br /> <br /> After running these commands you should be able to see a nice generated figure looking like the image above this paragraph. If you look at the code of <i>windowing.py</i> you will see that the comlputation of the figure data is done through <i>Marsyas</i>. All you need to do is <i>import marsyas</i> at the top of your <i>Python</i> source code. In a future post I will show some utilities that making plotting data from <i>Marsyas</i> networks easy and save you time. <br /> <br /> <br /> <br /> <br /> <br /> <br /> </div><div class="blogger-post-footer"><img width='1' height='1' src='' alt='' /></div><img src="" height="1" width="1" alt=""/>gtzan a network using the Swig/Python bindings<div dir="ltr" style="text-align: left;" trbidi="on"><br /> The. <br /> <br /> <pre class="prettyprint">net = ["Series/net", ["SoundFileSource/src", ["Fanout/fanout", ["Filter/f1", "Filter/f2"]], "Gain/gain", "SoundFileSink/dest"]] </pre><br /> It is easy to convert this representation (basically a nested list of strings) into various forms. For example using the json module it is straightforward to convert it to json. <br /> <br /> <pre class="prettyprint">print json.dumps(net) </pre><br /> To get the actual network we need to write a function that converts this representation <br /> to the actual Marsyas commands to create the network. Then we can do something like: <br /> <br /> <pre class="prettyprint">msys = create(net) print msys.toStringShort() # typically after creating the network you would # update a couple of controls and then start ticking </pre><br /> Writing such a function in C++ is not trivial but thanks to a little bit of functional magic it is relatively straightforward to write in Python. Here is the create function in Python: <br /> <br /> <pre class="prettyprint"># </pre><br />. <br /> <br /> I hope this post motivates you to try out the Python bindings of Marsyas as well as explore the wonderful ideas of functional programming. Running code can be found under the src/marsyas_python directory of the svn trunk. <br /> <br /> <br /> <br /> <br /> </div><div class="blogger-post-footer"><img width='1' height='1' src='' alt='' /></div><img src="" height="1" width="1" alt=""/>gtzan and Mean Marsyas by Graham Percival<div dir="ltr" style="text-align: left;" trbidi="on">Marsyas can (sometimes with some modifications) be used to write very efficient code that consumes little resources. It has been used in embedded systems and some mutant versions are even running on guitar effect pedals. The following text was provided by Graham Percival and describes how to avoid the startup cost of the MarSystemManager (a factory for creating MarSystems at run-time). This is also important in web application in which the same processing might be initiated every time there is a request and the startup cost needs to be minimized. This initial startup cost is typically not improtant in batch processing applications as it only takes place at the start of processing.<br /> <br /> We all know that "premature optimization is the root of all evil"and the old "1. make it work, 2. make it right, 3. make it fast". But sometimes -- very occasionally, but sometimes -- optimization is not premature. Or else you just feel like being naughty. :) On a more serious note, if you are trying to use Marsyas on a system with limited resources such as an embedded system or mobile phone, it may necessary to optimize before deployment. Most Marsyas examples begin with:<br /> <br /> <pre wrap="">MarSystemManager mng; </pre><pre wrap=""><span class="Apple-style-span" style="font-family: Times, 'Times New Roman', serif;">This initializes one copy of every MarSystem in Marsyas. This "caching" can save resources if you need to create many copies of certain MarSystems, but if you only need a few MarSystems, it means that your program needs to create (and store) a bunch of unnecessary data. Here's a simple example, contrasting the MarSystemManager way to the "lean and mean" way: you can create individual MarSystems as needed, avoiding the MarSystemManager entirely. Note that you need to include the headers you need yourself! The "lean and mean" technique is particularly useful if you want to statically link your application. You can easily contrast the two methods by commenting out the #define.</span> ------------ #define LEAN #ifndef LEAN #include "marsyas/MarSystemManager.h" #else #include "marsyas/Series.h" #include "marsyas/SoundFileSource.h" #include "marsyas/Rms.h" #include "marsyas/Annotator.h" #include "marsyas/WekaSink.h" #endif using namespace Marsyas; int main() { #ifndef LEAN MarSystemManager mng; MarSystem *net = mng.create("Series", "extract"); net->addMarSystem(mng.create("SoundFileSource", "src")); net->addMarSystem(mng.create("Rms", "rms")); net->addMarSystem(mng.create("Annotator", "annotate")); net->addMarSystem(mng.create("WekaSink", "dest")); #else MarSystem *net = new Series("extract"); net->addMarSystem( new SoundFileSource("src")); net->addMarSystem( new Rms("rms")); net->addMarSystem( new Annotator("annotate")); net->addMarSystem( new WekaSink("dest")); #endif // update controls and run the process loop </pre><pre wrap="">} </pre></div><div class="blogger-post-footer"><img width='1' height='1' src='' alt='' /></div><img src="" height="1" width="1" alt=""/>gtzan in Marsyas<div dir="ltr" style="text-align: left;" trbidi="on"><div style="text-align: justify;">In this post I discuss the use of <b>logging </b>in <i><b>Marsyas </b></i>as an alternative to temporary print statements and/or debugging. Most developers are marginally aware of the existing facilities and I want to make a push to make them more widely used. It is my hope that after reading this post code in <i>MarSystems</i> like : </div><div style="text-align: justify;"></div><ul style="text-align: left;"><li> cout << type_ << "/" << name_ << "/mrs_real/ will be replaced with: </div><ul style="text-align: left;"><li> MRSDIAG(type_ << "/" << name_ << "/mrs_real/Notice that the logging macros fully support the C++ stream syntax and therefore can include much richer information than just simply a text message (for example you could output a realvec or a counter variable). </div><div style="text-align: justify;"><br /> </div><div style="text-align: justify;"><i><b>Marsyas</b></i> <i>SoundFileSource</i> <i>MarSystem</i> outputs audio buffers with zeroes if the provided file-name can not be opened rather than doing something more drastic like exiting which could be a desired behavior in batch processing of files. </div><div style="text-align: justify;"> </div><div style="text-align: justify;">Even though a lot can be accomplished by using and combining the existing <i>MarSystems</i> sooner or later most developers have to write their own <i>MarSystems </i>to extend the provided functionality. A frequent process used by the current developers (including me) is to write the new <i>MarSystem</i>, add various <b>cout </b>statements to help understand whether it is working or not and when everything looks good remove or comment out the <b>cout</b> statements from the source code. That way the resulting MarSystem does not produce any output that would clutter the screen as well as possibly confuse things like web-servers. Later if something goes wrong the <b>cout</b> statements are re-introduced and the cycle continues. Because of the heavy computational requirements of audio processing such print statements can have a significant slow down effect. </div><div style="text-align: justify;"><br /> </div><div style="text-align: justify;". </div><div style="text-align: justify;"><br /> </div><div style="text-align: justify;">Logging is an important technique for better understanding the behavior of a program during run time. </div><div style="text-align: justify;">It consists of recording information about the program while it is running at different levels of detail and controlling where that information is stored. The <i><b>Marsyas </b></i. </div><div style="text-align: justify;"><br /> </div><div style="text-align: justify;">There are several different types of log messages supported. They are: </div><ol style="text-align: left;"><li><b>MRSMSG</b>:). </li> <li><b>MRSWARN</b>: Warnings provide information about things that might be a cause of concern but ignoring them will not affect the operation of the system. For example if a sound file can not be opened then a warning message is logged. </li> <li><b>MRSERR: </b>Errors are more severe than warnings and indicate that something serious is going wrong. In certain cases they are followed by exiting the program. </li> <li><b>MRSDIAG: <)</li> <li><b>MRSDEBUG</b>: Debug messages are specific and detailed and trying to help with debugging the code. They are similar to diagnostic messages but more about what went wrong rather than what is going right. </li> </ol><div style="text-align: justify;">Typically Marsyas is configured to display only the first 3 messages types (messages, warnings and errors) in Release and all of them in Debug. By appropriately configuring CMake it is possible to have logging to any combination of standard output, standard input and log file. </div><div style="text-align: justify;"><br /> </div><br /> </div><div class="blogger-post-footer"><img width='1' height='1' src='' alt='' /></div><img src="" height="1" width="1" alt=""/>gtzan is 10 years old - time to start a blog<div dir="ltr" style="text-align: left;" trbidi="on"><br /> Marsyas (<a href=""></a>) is a free software framework for audio processing with specific emphasis on music analysis, synthesis and retrieval. has been under development for over 10 years now. It has been used for a variety of projects in academia and industry. The documentation although far from complete has a lot of information and there is a lot of source code to explore. Although the Marsyas developer community is small and the mailing lists provide a good forum for answer questions in the past 2 or 3 years I have realized that Marsyas has grown to the point that many developers are not aware of all the available functionality. My goal for this blog is to highlight different aspects of working with the Marsyas. The main target audience is Marsyas developers and users but anyone interested in audio and music processing software might find it interesting. I plan to post about once every two weeks and I might increase the frequency if there is enough interest. <br /> <br /> </div><div class="blogger-post-footer"><img width='1' height='1' src='' alt='' /></div><img src="" height="1" width="1" alt=""/>gtzan
http://feeds.feedburner.com/MarsyasDeveloperBlog
CC-MAIN-2018-47
refinedweb
2,170
52.19
Hail 0.2 Hail is a library built on Spark for analyzing large genomic datasets. Hail 0.2 is integrated into Databricks Runtime HLS to simplify and scale your genomic analyses. Important Hail 0.2 integration requires Databricks Runtime HLS, which is in Beta. Properties of the Databricks environment, such as which Python packages are available by default, may change. Pricing is also subject to change before general availability. Create a Hail cluster To create a cluster with Hail installed: In the Custom Spark Version field, paste in the version key for Databricks Runtime HLS. Set the following environment variable: ENABLE_HAIL=true This environment variable causes the cluster to launch with Hail 0.2, its dependencies, and Python 3.6 installed. Use Hail in a notebook For the most part, Hail 0.2 code in Databricks works identically to the Hail documentation. However, there are a few modifications that are necessary for the Databricks environment. Initialization When initializing Hail, pass in the pre-created SparkContext and mark the initialization as idempotent. This setting enables multiple Databricks notebooks to use the same Hail context. import hail as hl hl.init(sc, idempotent=True) Plotting Hail uses the Bokeh library to create plots. The show function built into Bokeh does not work in Databricks. To display a Bokeh plot generated by Hail, you can run a command like: from bokeh.embed import components, file_html from bokeh.resources import CDN plot = hl.plot.histogram(mt.DP, range=(0,30), bins=30, title='DP Histogram', legend='DP') html = file_html(plot, CDN, "Chart") displayHTML(html) See Bokeh in Python Notebooks for more information.
https://docs.databricks.com/applications/genomics/hail.html
CC-MAIN-2019-22
refinedweb
268
59.7
Originally published here Github contribution graph shows your repository contributions over the past year. A filled-up contribution graph is not only pleasing to the eye but points towards your hard work, too(unless if you have hacked it). The graph, though pretty, also displays considerable information regarding your performance. However, if you look closely, it is just a heatmap displaying some time series data. Therefore, as a weekend activity, I tried to replicate the graph for some basic time-series data, sharing the process with you through this article. Dataset and some preprocessing The dataset that I’m going to use in this article comes from the Tabular Playground Series(TPS) competitions on Kaggle. The TPS competitions are month-long competitions launched on the 1st of every month. I’ll be using the dataset from the TPS — July edition competition. The dataset is a time series based data where the task is to predict the values of air pollution measurements over time, based on basic weather information (temperature and humidity) and the input values of 5 sensors. Let’s import the basic libraries and parse the dataset in pandas. import pandas as pd import numpy as np import datetime as dt from datetime import datetimedata = pd.read_csv(‘train.csv’, parse_dates=[‘date_time’]) data.head() This is a good dataset decent enough for our purpose. Let’s get to work. Creating a basic heat map using the Seaborn library Seaborn is a statistical data visualization library in Python. It is based on matplotlib but has some great default themes and plotting options. Creating a heatmap technically is essentially replacing the numbers with colors. To be more precise, it means to plot the data as a color-encoded matrix. Let’s see how we can achieve this via code. but before that, we will have to convert our data into the desired format #Importing the seaborn library along with other dependencies import seaborn as sns import matplotlib.pyplot as plt import datetime as dt from datetime import datetime # Creating new features from the data data['year'] = data.date_time.dt.year data['month'] = data.date_time.dt.month data['Weekday'] = data.date_time.dt.day_name() Subsetting data to include only the year 2010 and then discarding all columns except the month, weekday and deg_C. We’ll then pivot the dataset to get a matrix-like structure data_2010 = data[data['year'] == 2010] data_2010 = data_2010[['month','Weekday','deg_C']] pivoted_data = pd.pivot_table(train_2010, values='deg_C', index=['Weekday'] , columns=['month'], aggfunc=np.mean) Since our dataset is already available in the form of a matrix, plotting a heatmap with seaborn is just a piece of cake now. plt.figure(figsize = (16,6)) sns.heatmap(pivoted_data, linewidths=5, cmap='YlGn',linecolor='white', square=True) The heatmap displayed the average temperature(degree Celcius) in 2010. We can clearly see that July was the hottest month of that year. To emulate Github’s contribution plot, some parameters have been used: - pivoted_data: The dataset used - linewidths: the width of the lines that divide each cell. - line color: The color of the lines dividing the cell - square: to ensure each cell is square-shaped This was a good attempt, but there is still scope for improvement. We aren’t still near Github’s contribution plot. Let’s give it another try with another library. Creating Calendar heatmaps using calmap Instead of tinkering with seaborn, there is a dedicated library available in Python called calmap. It creates beautiful calendar heatmaps from time-series data on the lines of Github’s contribution plot and that too in a single line of code. #Installing and importing the calmap library pip install calmap#import the library import calmap We’ll use the same dataset as used in the previous section and use the yearplot() method to the plot. #Setting the date_time column as the index data = data.set_index('date_time')#plotting the calender heatmap for the year 2010 plt.figure(figsize=(20,10)) calmap.yearplot(data['deg_C'], cmap='YlGn', fillcolor='lightgrey',daylabels='MTWTFSS',dayticks=[0, 2, 4, 6], linewidth=2) Above, we have customized the color, linewidth and the fillcolor i.e., the color to use for days without data. You can set these values as per your requirements. More information can be obtained from the documentation. It is also possible to plot all years as subplots into one figure using the calendarplot() method. fig_kws=dict(figsize=(20, 10) As you can see, there isn’t much data for 2011, but I’m sure you have got the idea. Wrap up Heatmaps are useful visualization tools and help convey a pattern by giving a perspective of depth using colors. It helps visualize the concentration of values between two dimensions of a matrix that is more obvious to the human eye than mere numbers. This article shows you how to brighten up and jazz up your heatmaps and have fun while creating them.
https://parulpandey.com/2021/08/04/create-githubs-style-contributions-plot-for-your-time-series-data/?shared=email&msg=fail
CC-MAIN-2021-43
refinedweb
813
56.15
<rdar://problem/4463713> Implement maxlength for new text fields (6987) Created attachment 6821 [details] patch for maxlength + if (evt->type() == khtmlInsertTextEvent && evt->isInsertTextEvent()) { What's the difference between these two checks? + // Make sure that the text to be inserted will not violate the maxLength + unsigned ml = maxLength(); + if (ml <= 0 || ml > 1024) + ml = 1024; Apply the restriction on maxlength once, when it's set, instead of every time it's used. + unsigned currentLength = value().length(); + String text = static_cast<InsertTextEventImpl *>(evt)->text(); + int selectionLength = getDocument()->frame()->selection().toString().length(); + + // Truncate the inserted text if necessary + if (currentLength + text.length() - selectionLength > ml) { + if (currentLength > ml) + text = String(""); When will currentLength ever be > ml? You could assert(currentLength <= ml). - DOMString value = input->value(); + DOMString value = input->value().copy(); Use String, DOMString is a synonym allowed for backward compatibility, but should not be used in anything new. The NodeStyles computed during the test rendering will become invalid if the InsertTextEvent handler changes the value of the string. You need to send the event earlier, where you do m_text = plainText(range);, and do another test rendering if the event handler forces you to create a new fragment. + if (startNode && startNode->rootEditableElement() && startNode->rootEditableElement()->isAnonymousDiv()) { + // Send InsertTextEvent and update text if necessary + RefPtr<EventImpl> evt = new InsertTextEventImpl(newText); + HTMLAnonymousDivElementImpl *div = static_cast<HTMLAnonymousDivElementImpl*>(startNode->rootEditableElement()); + div->shadowParentNode()->dispatchEvent(evt, exceptionCode, true); + } Why must you check for isAnonymousDiv()? Couldn't you dispatch the InsertTextEvents to all rootEditable elements, and let normal divs disregard the event, and make anonymousDivs send the event to their shadow parent? I don't think that the editing code should know about anonymous divs and shadow parents, it should only know about sending InsertTextEvents and the possibility that whoever handles that event will change the text to be inserted. r- because of the test rendering bit. Comment on attachment 6821 [details] patch for maxlength see my comments above Created attachment 6834 [details] new patch to address Justin's comments Justin and I discussed this, and I think this new patch addresses his comments Darin and I talked about how the khtmlInsertTextEvent that I'm using after the text inserted, can just be a normal EventImpl. And I'd like to give it a better name. Comment on attachment 6834 [details] new patch to address Justin's comments New top-level classes should be in separate files, even though we didn't split render_block.cpp yet. So HTMLAnonymousDivElementImpl should be in its own file as should InsertTextEventImpl. We might be able to come up with a better name for the HTMLAnonymousDivElementImpl class too. Since its implementation has a special case for the case where its parent has a RenderTextField for a renderer, it should probably have a name and location specific to its use instead of pretending to be more general purpose. For example, it could go in the rendering directory even though it's a subclass of a DOM element class, since it's used as part of form element rendering. HTMLAnonymousDivElementImpl::defaultEventHandler checks if the parent's renderer is a text field, but it doesn't check if its parent is an HTMLInputElementImpl -- instead it assumes that. That should be checked too. As we discussed, khtmlInsertTextEvent does not need to be an InsertTextEventImpl. There's no need to add an unused isAnonymousDiv() function. + HTMLAnonymousDivElementImpl(DocumentImpl *doc, NodeImpl* shadowParent = 0); Should just be DocumentImpl*. No need to give a name. + virtual void defaultEventHandler(EventImpl *evt); Similarly, should just be EventImpl*, no need for "evt". In HTMLInputElementImpl, I'd like to see the enforcement of range from 0 to 1024 be when m_maxLen is set, not in the maxLength() accessor. + return (m_div->textContent()); No need for parentheses on a return statement like this. macro(khtmlHorizontalmousewheel) \ + macro(khtmlBeforeInsertText) \ + macro(khtmlInsertText) \ macro(khtmlMove) \ The other events are in alphabetical order, so you should move BeforeInsertText up. + InsertTextEventImpl(const AtomicString &); + InsertTextEventImpl(const AtomicString &, String); No spaces before the & characters. + virtual void setText(String); Why is this function virtual? Also, it should take a const String&, not a String. We might want to consider a version of the BeforeInsertText event that sends a DOM fragment through instead of a string -- it seems a bit limiting to have this paste preflighting be specific to plain text pasting -- but perhaps the plain text one is easier to implement for now. You should undo the change to ReplaceSelectionCommand.h. The code in HTMLAnonymousDivElementImpl::defaultEventHandler calls subtreeHasChanged *before* calling the default handler. That seems like it's too early -- will the insertion be done at that point? If this event is fired only after text is inserted then perhaps it should be named InsertedText, TextInserted, or AfterInsertText? I'm about to upload a patch that addresses most of Darin's comments. At one point, I had a version of the new event that stored a DocumentImpl. I switched over to storing just the text to make it easier to get a working version of this up and running. I think its a good enhancement to make for the future. subtreeHasChanged only gets called after the editing command has been applied, and the text has already been inserted. I've renamed the event to khtmlTextInsertedEvent to make more sense. Also, Justin and I discussed changing subtreeHasChanged so that instead of always updating the InputElement's value, we could just invalidate it, and lazily update the value. I didn't do that for this patch, but I'm planning on doing it soon. Created attachment 6878 [details] patch Also, Hyatt and I talked about whether the new element class (now HTMLTextFieldInnerElementImpl) should move to the rendering directory, and decided to leave it in the html dir for now. We also talked about ways to eliminate the need for this class all together- by adding the ability to have default event listeners. Actually, I meant to add a fixme in HTMLTextFieldInnerElementImpl.cpp about this. I will do that in my local tree before I commit. Comment on attachment 6878 [details] patch One major difference between this maxLength implementation and the one I did in KWQTextField is that this one limits you to a certain number of UTF-16 characters. But the one in KWQTextField limits you to a certain number of "composed character sequences". That means that an e with an umlaut over it counts as 1 character even though it can be two Unicode characters in a row (the e followed by the non-spacing umlaut) and a single Japanese character that is outside the "BMP" that requires two UTF-16 codes (a "surrogate pair") to encode also counts as a single character. The code that deals with this in KWQTextField is _KWQ_numComposedCharacterSequences and _KWQ_truncateToNumComposedCharacterSequences:. We will need to replicate this, although I guess it's fine not to at first, but I'd like to see another bug about that. To tell if a character is half of a surrogate pair, you use macros in <unicode/utf16.h>, such as U16_LENGTH or U16_IS_LEAD. To tell if the character is going to combine with the one before it is more difficult. There's code in CoreFoundation that does this analysis and I presume there's some way to do it with ICU, but I don't know what that is. In addition to determining such things, code will have to be careful not to do math on the length of strings, since composing means that "length of A plus length of B" is not necessarily the same as "length of A plus B". HTMLTextFieldInnerElementImpl does not need an explicit destructor. The compiler-generated destructor will do the job. So don't declare or define a destructor. +void HTMLTextFieldInnerElementImpl::defaultEventHandler(EventImpl *evt) The above should have the * next to the EventImpl. The various files that say: \ No newline at end of file should be fixed so they have a newline at the end of the file. RenderTextField.h should not include HTMLTextFieldInnerElementImpl.h. To compile, all it needs is a forward declaration of the class. The .cpp file can include the complete header. if (m_div) { - m_div->removeEventListener(DOMCharacterDataModifiedEvent, m_mutationListener.get(), false); m_div->detach(); } Should remove the braces from the above. + m_div = new HTMLTextFieldInnerElementImpl(document(), element()); node() is more efficient than element() in cases like the above. There's no need to include "PlatformString.h" in BeforeTextInsertedEventImpl.h -- dom2_eventsimpl.h contains classes with String in them, and because of that it has the class defined already. + BeforeTextInsertedEventImpl(String&); That should be "const String&". In the ReplaceSelectionCommand code to send BeforeTextInsertedEvent, I notice that you compute the text before you check if there's a frame. That's unnecessary and a little bit unclear. I suggest computing the range and the text right when you are about to use it. I'd also like to see you declare "ec" as ExceptionCode instead of int for clarity and name it ec in both places you use it instead of calling it exceptionCode in some places. These issues are minor ones. r=me filed to address the composed character sequence issue.
https://bugs.webkit.org/show_bug.cgi?id=6987
CC-MAIN-2019-13
refinedweb
1,501
54.52
Implementing a state management in modern web Angular applications can be tricky. There are many libraries, such Ngrx, ngxs, Akita, which can be integrated to manage stores but, these are strongly opinionated and have impact on the architecture of the solution. If we omit the concept displayed by Jonas Bandi in his interesting article, a common alternative to not using 3rd party libraries, is the development of custom stores with RxJS. In both cases, libraries or custom, RxJS is used 🤷. Even though RxJS is a wonderful piece of technology, is nowadays a de facto standard when it comes to Angular development, and installed per default with almost any starter kits, it can be still opted-out. That’s why I was interested to get to know if it would be possible to develop an Angular application using a modern state management but, without using RxJS. Goals To narrow the goals of the experiment, these are those I was looking to test: - Can a property be bind and updated in a template without having to write extra code or trigger the change detection, as it would be solved with an observable? - Can store’s values be accessed in different routes? - Can store’s values be retrieved in child components? - Can store’s values be used in providers? - Is it easy to integrate it in unit tests? Let’s try to answer these questions, but first, let setup another kind of state management. Stencil Store The @stencil/store is a lightweight shared state library by the StencilJS core team. It implements a simple key/value map that efficiently re-renders components when necessary. I use it in our web open source editor for presentations, DeckDeckGo, and I have to admit, I kind of have a crush for this lightweight store. It is so bare minimum simple, and effective, I obviously selected it to perform my experiment. Even though it would work out of the box with Angular, note that I had to create a fork. The Webpack’s build was complaining about it and, since we do not need this requirement in case of an Angular usage, I just removed it. If I, or anyone, would use it for a real application, the library dependency could be patched easily I am guessing. Source Code Before going further, note that this experiment’s source code is available on GitHub. Setup To set up for such a store for an application, we can create a new TypeScript file, such as clicks.store.ts , and use the createStore function exposed by the Stencil Store. import {createStore} from '@stencil/store'; const { state } = createStore({ clicks: 0 }); export default {state}; That’s it. It is the minimum to expose a global clicks store for an app. Because I was eager to give a try to the few other features of the store, I also added the usage of the functions onChange , to test if property listening to changes would also be re-rendered, and the dispose feature needed for testing purpose. import {createStore} from '@stencil/store'; const { state, onChange, reset, dispose } = createStore({ clicks: 0, count: 0 }); onChange('clicks', value => { state.count = value * 2; }); export default {state, dispose}; Pretty slim in my humble opinion 😉. It is also worth to notice that it is possible to create as many stores as we would need. #1: Property Binding And Re-render I tried different ways to use the properties of the store in the templates and, figured out that the easiest way was to bind the state with a component’s variable. import { Component } from '@angular/core'; import store from '../../stores/clicks.store'; @Component({ selector: 'app-page1', templateUrl: './page1.component.html', styleUrls: ['./page1.component.css'] }) export class Page1Component { state$$ = store.state; } It can then be use in a template to display the values of the store. <p>Clicks: {{state$$.clicks}}</p> <p>Count: {{state$$.count}}</p> Does it get re-rendered, when the store changes? To try out this hypothesis, I added a function to the component, which increments the clicks. inc() { store.state.clicks++; } Therefore, if everything works as expected, each time I would call the above function, the clicks should be incremented and displayed and. Because I registered an onChange on such property, the count should be actualized with twice the value. Success ✅ It exactly behaves as expected. Store properties are modified and, the layout is re-rendered. In addition, I did not have to implement any custom change detection calls or what so ever. #2: Routes The second question I was looking to answer, was related to sharing data between routes. To answer it, I created another page component, added it to the routing and used the store exactly in the same way as previously. import { Component } from '@angular/core'; import store from '../../stores/clicks.store'; @Component({ selector: 'app-page2', template: `<h1>Page 2</h1> <p>Clicks: {{state$$.clicks}}</p> <p>Count: {{state$$.count}}</p>` }) export class Page2Component { state$$ = store.state; } If this would work out, once I would navigate, I would find the exact same value in each page without having to implement anything else respectively without the need to pass values between routes. Success ✅ Indeed, stores data can be shared between routes. #3: Components Likewise, instead of routes, are data accessible from a component? To test this hypothesis, I refactored the page2 to move the code to a separate component card . import { Component } from '@angular/core'; import store from '../../stores/clicks.store'; @Component({ selector: 'app-card', template: `<p>Clicks: {{state$$.clicks}}</p> <p>Count: {{state$$.count}}</p>`, styleUrls: ['./card.component.css'] }) export class CardComponent { state$$ = store.state; } I then used it in page2 . Note that doing so, this component, page, does not have to include the store anymore. import { Component } from '@angular/core'; @Component({ selector: 'app-page2', template: `<h1>Page 2</h1> <app-card></app-card>` }) export class Page2Component { } As for previous test, this would be validated, if values would be displayed and updated even if use in a child component. Success ✅ As previously, it works as expected. #4: Services I was asking my self if data could also be used in providers , therefore I added a service to test this specific question. import { Injectable } from '@angular/core'; import store from '../stores/clicks.store'; @Injectable({ providedIn: 'root' }) export class AlertService { show() { alert(`Count: ${store.state.count}`); } } If I call the service’s function, an alert should be triggered and the current count value of the store should be displayed. Success ✅ Providers have access to the store. #5: Test In addition to the runtime, I was also curious about the integration in unit tests. Probably even more than the integration in applications, the usage of stores and RxJS in tests can be also tricky. Therefore, I created a test which should increment the clicks and validate that the value has, well, been incremented. import { ComponentFixture, TestBed } from '@angular/core/testing'; import { Page1Component } from './page1.component'; import store from '../../stores/clicks.store'; describe('Page1Component', () => { let component: Page1Component; let fixture: ComponentFixture<Page1Component>; beforeEach(async () => { await TestBed.configureTestingModule({ declarations: [ Page1Component ] }) .compileComponents(); }); beforeEach(() => { fixture = TestBed.createComponent(Page1Component); component = fixture.componentInstance; fixture.detectChanges(); }); beforeEach(() => { store.dispose(); }); it('should create', () => { expect(component).toBeTruthy(); }); it('should increment', () => { component.inc(); fixture.detectChanges(); const paragraph = fixture.nativeElement.querySelector('p:first-of-type'); expect(paragraph.textContent).toEqual('Clicks: 1'); }); }); If this would be correct, the test should pass. Success ✅ It is possible to use the store in unit tests and thus, without any particular headache. It works in tests the same manner as it work when used in the application. Summary All hypothesis, re-rendering data, accessing these and testing the store were a success ✅. Considerations The scope of this experiment was to some extension, limited and, it might need a bit more analysis before being applied to a real life application. I think in particular at the following questions: - Would it be possible to scope the store, not to the root, but to a particular module? Even though providers provided in rootare often used, I think, it would be a nice add-on. - How does the rendering performs with a lot of nodes contained in the store? My spontaneous guess is that it behaves exactly like it would behave with or without any other stores but, it is probably worth a try to go a step further and try to render a lot of information. - What’s the cost of the Stencil Store in comparison to any other libraries based on RxJS or RxJS itself. If I would have to bet right now, I would bet on the fact that the Stencil Store is maybe the lightest. According bundlephobia, it costs only 899 bytes (minified + gzipped) 🤯. - Stencil is server side rendering (SSR) and pre-rendering compatible. Therefore, as the store has been developed in first place for such technology, I am guessing that it would be also the case with Angular. However, this would have to be tested too. If you are interested in these questions, let me know. I would love to hear from you, to get your feedbacks and would be happy to continue the experiment 😃. Take Away Honestly? I am that close to find a new idea of application, just to try out concretely the Stencil Store in a modern web Angular application. There is often no better way to experiment, than developing a real application. To infinity and beyond! David Reach me out on Twitter and, why not, give a try to DeckDeckGo for your next presentations! It deploys your decks online as Progressive Web Apps and can even push your slides’ source code to GitHub. Discussion I'm not sure that you don't get the same result if you just use simple plain objects. re-rendering works for sure. the only feature missing is the onChange event. but including stencil's reactivity into angular makes not much sense (correct me if I'm wrong) if you "just" add a simple plain objects in a provider/service for example, it re-renders only if you takes care of triggering the change detection "manually", what I was looking to avoid. but of course I understand your point. I could have use another store, like Zustand or else. I picked the stencil one because it is lightweight and bare simple. "if you takes care of triggering" - how come does stencil take care? well, first thing I should probably mention is even though it is called "stencil store" there is no "stencil / web components" inside. as I said in another comment, if I would have to bet, I would say that my spontaneous understanding is that the store is using a proxy and that's why the change detection is noticing the modifications. But, I can't be totally uncorrect. If anyone want to give a better explanation, would like to hear it too. I did check the source of stencil store before my first comment, and I found only stencil specific updates reacting on store changes. But being a proxy wouldn't trigger angular CD either. I'm still sure about my original statement, stencil store work only because the same reason a plain js object would. Really great write up! 👍 You piqued my interest in how the stencil store notifies the component that it has changed Thank you Simon 🙏. I am curious about it too I have to say. Spontaneously I tend to think that changes are notified because the store is a proxy, but I can be fully wrong. Ok, please do one more experiment for us: on page2 create a button to change the store, set the card component to onPush, and I bet there will be no update of the numbers in the card... Indeed did that experiment earlier this evening, if all components' change detection are set to OnPush and the components are away from the click event, then it wasn't automatically re-rendered. For such case I would have to hook on onChances and triggers or mark the change detection. Afterwards I improved this test by using the store directly in the template instead of the state. The first value pushed in the store did not trigger the re-rendering but any following update to the store were actually detected as changes.
https://dev.to/daviddalbusco/angular-state-management-without-rxjs-an-experiment-3o0j
CC-MAIN-2021-04
refinedweb
2,036
64.3
·1 min de lecture Functional Programming Patterns — A JavaScript journey #10 Few months ago, Kyle Shevlin did a great course on egghead, followed by a series of articles, about some basic concepts of functional programming in JavaScript. As an adept of this very popular paradigm, I found that even very basics, I'm not using all of them on a daily basis. So, here is a quick recap of how cool those patterns are and how they can help you improve your functional code. High order function “The high order function (HOF) is a function that accepts a function as an argument and returns a new function 😅” Just read the example, it will directly make more sense to you. // Our high order function const withCountLog = fn => { let count = 0; // Returns a new function return (...args) => { console.log(`Called ${++count} times`); return fn(...args); } } const add = (x, y) => x + y; const countedAdd = withCountLog(add); // binding countedAdd(52, 4); // Called 1 times countedAdd(63, 5); // Called 2 times countedAdd(74, 6); // Called 3 times Curried function “A curried function is a higher-order function that returns a series of functions each accepting only one argument and only evaluating once we receive our final argument.” const add = x => y => z => x + y + z; console.log(add(1)(2)(3)); // 6 OK this example makes no sense at all. Now, if you start using it for splitting your logic into something simpler and more maintainable, it starts to make a lot of sense: // Our curried function const getFromAPI = baseURL => endpoint => cb => fetch(`${baseURL}${endpoint}`) .then(res => res.json()) .then(data => cb(data)) .catch(err => new Error(err)); // Main level const getGithub = getFromAPI('<>'); // Sub levels const getGithubUsers = getGithub('/users'); const getGithubRepos = getGithub('/repositories'); getGithubUsers(data => console.log(data.map(user => user.login))); Finally, the order of arguments are very important. Here we have baseURL => endpoint => cbfor the reason illustrated above. Always remind yourself this simple rule: the order is from most specific to least specific argument. Composition This one can have its own article or even a series of articles, but I'm sure there are a lot of great resources out there. Basically, it's a function used to compose other functions in a specific order to return to desired output . // Our composition helper const compose = (...fns) => x => fns.reduce((acc, fn) => fn(acc), x); const lower = str => str.toLowerCase(); const sanitize = str => str.replace(/[^a-z0-9 -]/g, ''); const clean = str => str.replace(/\\\\s+/gm, '-'); const slugify = compose( lower, sanitize, clean, ); console.log(slugify('I love $$$ noodles')); // i-love-noodles This way is much cleaner and more readable than something like clean(sanitize(lower('My string')));, especially with more complex structure. The order is also very important; in this example, put sanitizebefore lowerand your Iwill simply disappear. You can also use the kind-of .mapapproach: // Our refactored composition helper const compose = x => ({ map: f => compose(f(x)), end: () => x, }); const lower = str => str.toLowerCase(); const sanitize = str => str.replace(/[^a-z0-9 -]/g, ''); const clean = str => str.replace(/\\\\s+/gm, '-'); const slugify = str => compose(str) .map(lower) .map(sanitize) .map(clean) .end(); console.log(slugify('I love $$$ noodles')); // i-love-noodles Conclusion As I said, simple concepts, but very helpful to produce any kind of functional code. It will also help you to maintain consistency across your code base if you choose this great paradigm. Partager Mise à jour le: 6 May 2021
https://antistatique.net/en/blog/functional-programming-patterns-a-javascript-journey-10
CC-MAIN-2021-49
refinedweb
570
55.64
I've written many AS3 lexers secretly, but here's one in pure AS3. Quick show: const hey: Ãufin = NothingEverDies, yeah Some sample errors: \\u0000 .999f Would it not be simpler to take the Java sources of ASC and port those to AS3 ? also, because a compiler is in general a command-line toolsI would suggest to use Redtamarin @zwetan ASC is very hackish and its AST depends in ASC semantics, what complicates the learning curves. About RedTamarin, maybe it could be a good idea. If I execute a SWF, then I'm able to have normal package dependency. Maybe I need to strip SWF content and only keep DoABC content, so RedTamarin is able to execute it instead of running ASC, which doesn't handle packages flexiblily. E/: Nice. RedTamarin parses SWF DoABC and DoABC2 already! Ok, I'm done with the bullshit despite that you keep posting without giving a shit on this forumyour comment above is what really get me pissed off you're judging ASC source code while on your side you seems to be 12yo, no experience in programming,and never wrote a compiler before so here a bit of advice you arrogant prick, have a bit more respect for the work of other more experienced developers I could go in length explaining this and that but it seems it is over your head, so whatever OK so I gonna gives more context to all that the user @Hydp keep posting on this forum - without selecting a category - use many different account, for ex: @hydroper @SuperAnimexKai - post stuff and then delete them afterward and let be clear, I'm not "losing it" over a couple post without categories,this occurred many many times under different account, one such account was warnedand blocked after continuing such behaviour that was I meant by "despite that you keep posting without giving a shit on this forum" Now about being pissed off@Hydp is obviously new to programming, and that's OK I would not blame anyone for thatbut ... when you're new to stuff you don't come and shit on other developers workespecially when you pick quite advanced subject that are apparently above your head so yeah when I see @Hydp commenting "ASC is very hackish"I'm pretty sure he does not know what he is talking about so here I gonna give a couple of advices, instead of shitting on developers work who probably have decades of programming experience over you, just say that you don't know or you don't understand and you need help don't pretend you are "above it" because it does feel a lot like "you are full of it" (and yes I mean shit) and to be complete, the final comment"I could go in length explaining this and that but it seems it is over your head, so whatever" it is a reaction about you not having a clue what you're talking about in details About RedTamarin, maybe it could be a good idea. If I execute a SWF, then I'm able to have normal package dependency. executing a SWF is not related to have package dependency which is not related to compile source code to byte code Maybe I need to strip SWF content and only keep DoABC content, so RedTamarin is able to execute it instead of running ASC, which doesn't handle packages flexiblily. you are confusing everything Redtamarin is a runtime that does interpret bytecode either from ABC or SWFand to some extend can also interpret raw source code from AS file But ASC is the opposite, it is a compiler, it takes a set of symbols (source code) and convert it to another set of symbols (bytecode) Simply put when I suggest to use Redtamarin is to be able to program in AS3but still get a command-line executable and somewhat similar API to Javato read/write file for example And I would have gladly explained stuff and nudge you in the right direction like I did numerous times before,but you started with that ASC remark ... I'm not entirely new to programming, I started in 3.1 years ago (december of 2014), and my first language was ActionScript 2 or Lua (with which I could script Transformice modules). I've many different things to do, but, well, this project of building a own compiler (but truly, I was just trying to write the compliant parser firstly), starting after looking the luaparse parser, was with the objective of ensuring I've a consistent base for my projects. Now, right now, right in that day, I'm with much headache and am still unsure on whether I'll continue with this journey. I'm done with the lexer: I worked hard for having my own lexer. When I restart something, I preferable do it from the scratch. And there were many different ways I did that lexer. But in the end I must confess I wasted good time doing that, between 2016 and 2018, and this affected what I used to do. Note that by consistent base I meant something that does good things directly for me, which is very portable and modern. Basically I knew ASC recently because I always used Macromedia/Flash Professional (and cracked by someone who cracks Adobe applications). ASC is almost good because it's done in Java, but it's not like MXMLC, which solves package definitions before full AS3 AOT/strict verification. Then I could just use MXMLC, but I found it quite inelegant... But right now in this day I'm thinking differently. I know, I said If I execute a SWF, then I'm able to have normal package dependency. because SWF is the unique output format of MXMLC. (There's also SWC, but it's a ZIP.) You didn't understand what I said. By instead of running ASC, I meant I'd not do a command such as redshell myInput.as, which invokes ASC automatically (AFAIK). redshell myInput.as I have many questions. So what happens if someone is able to create a Lexer? You can compile AS3 source code into byte code or abstract syntax tree or show errors in AS3 source code? I'm guessing an AS3 lexer would be good if you do not want to depend on a specific operating system and then you don't need mxmlc? But MXMLC is a compiler. If the lexer is AS3 then you can put it on the web or desktop or mobile? Sorry, I wasn't totally clear in the topic, but a lexer is basically a lexical scanner: the lexer transforms program source into tokens eventually. The lexer doesn't generate AST or bytecode, it just derives informations that a parser would normally need, which are really tokens. The lexer I posted in GitHub uses a singleton TokenFeed class for updating token informations, such as the token kind (from the Token enumeration) and values (double, String and Booolean). This last lexer I made is a bit more easy to extend. Instead of calculating numeric literals manually I decided to just use parseInt as it won't be a performance worry for now. (Wait, I forgot about parseFloat too.). Also, in earlier lexers I used to do punctuator scan in a hierarchic derived way. This lexer is okay for usage, and is multi-lingual, but I didn't implement messages for languages other than English until now (Portuguese I know, but I think English is more preferred than it). It's basically complete: it follows ECMA-262 rules. Well, like I explained above, the lexer just transforms raw AS3 source into tokens, so it won't serve as a MXMLC compiler alternatiive, but it can be used to prototype a parser, then prototype a compiler in AS3, which can be compiled with MXMLC and then bootstrapped if preferred. This lexer can be used as syntax highlighter, but it's neccessary to scan tokens correctly. Its main method is lex(), which scans a single token, but... there are plus three methods: scanRegexpLiteral(), XMLTagMode::lex() and XMLContentMode::lex(). These XML scan modes are for E4X lexical grammar, whlist scanRegexpLiteral() must be called when you find a slash punctuator (division operator such as / or /=) in an expression context... import hydroper.asparse.* import hydroper.asparse.lexer.* const token: TokenFeed = TokenFeed.token; myLexer.lex() if ((token.kind === Token.Slash) || (token.kind === Token.SlashAssign)) { myLexer.scanRegexpLiteral() // Now, token.kind === Token.RegexpLiteral } The XML modes concept are exactly the InputElementXMLTag and InputElementXMLContent goals of the ECMA-357 standard (2nd edition), the E4X language. My lexer isn't oriented to syntax highlighting, even due to the manner it pushes ignored comments, but you have full access to line and start/end column (truly start/end indices) informations. I think it just needs a update to handle dynamic changes.
https://discuss.as3lang.org/t/pure-actionscript-3-0-lexer/1313
CC-MAIN-2018-51
refinedweb
1,483
55.27
RE: [RCGS-list]: RE: [NUTS] Of stash notes and contact information... Expand Messages - I just now noticed this runaway train wreck of a thread. Geez people. Take a chill pill. Long ago I stopped in at the Wheatland Fire Station (which has a cache out front) to secure a permit for the large tents that we will have at the event. When I first introduced myself, the Fire Chief commented that "just the other day I was talking with the Police Chief about your event." He didn't specify what, exactly, they were talking about, but it became obvious they were both well-informed about us already, and are both excited to have this event take place and pump money into their economy. I didn't hear any of the fear or concern that Ed is alluding to. Other meetings are scheduled and planned. We are well in control. Lil Devil On Tuesday, April 29, 2008, you wrote: > > In Chico, you are required to get permits from the city for hosting an > event like this (even if it's just a notification of the event, at a > minimum). You are looking at parking issues, traffic issues, etc. > With time so short, you really do need to contact city officials now and > let them know there is going to be an influx of several thousand people, > how many you are expecting, and the nature of the event. > DO NOT wait until a week before and drop the bomb on Wheatland about this > event. The city should have been notified as soon as the event was > planned, > and kept updated thru the entire process. > I'm not sure what you mean by you (personally, or the committee?) are "not > in a position to ask permission, but paying a courtesy visit to inform > them...". You absolutely DO need to ask permission of the city to host an > event of this size. > This could be bad, bad publicity for Groundspeak if you try to host this > without the city's blessing and get shut down the day of the event. Who is > gonna take the heat from the city officials when the city government shows > up out of the loop and pissed? > This is a great timefor geocaching to make a good impression on the > city...lets not ruin it by doing it wrong. > Ed Nelson > > > > > > > -----Original Message----- > From: nuts_@yahoogroups.com [mailto:nuts_@yahoogroups.com]On Behalf Of > Gary > Hobgood > Sent: Tuesday, April 29, 2008 5:00 PM > To: RCGS-list@...; nuts_@yahoogroups.com > Subject: Re: [RCGS-list]: RE: [NUTS] Of stash notes and contact > information... > > The GW-6 planning committee has long recognized the importance of > contacting the Wheatland Police Department. Hynr and I are planning to > make an "in person" contact with the Wheatland officials. The timing > of > the contact as not yet been determine. We are not in position of > asking > for permission of the Wheatland officials but paying a courtesy visit > to > inform them of significant influx of "odd behavior" during a three day > period in the month of May. Given what I am hearing about the outgoing > Police Chief's impression of geocaching, it may be a tough sell. > > Gary > (Gary&Vicky) > > ----- Original Message ---- > From: Johnny Vegas <kdm4662@...> > To: RCGS-list@... > Sent: Tuesday, April 29, 2008 4:30:31 PM > Subject: RE: [RCGS-list]: RE: [NUTS] Of stash notes and contact > information... > > "No doubt we should still make sure the city officials (Whomever they > may > be) are notified." > Why? It is a legal function. > > Charles & Stephanie <Irvinehome@...> wrote: > I read the Wheatland newspaper this afternoon and say that the current > Chief of Police is retiring as of 21May08. > No doubt we should still make sure the city officials (Whomever they > may > be) are notified. > > Has anyone gotten any more information? > Perhaps from the committee for GW VI??? > > Chaz > > From: Joy Bellman [mailto:joy.bellman@...] > Sent: Tuesday, April 29, 2008 9:15 AM > To: RCGS-list@... > Subject: Re: [RCGS-list]: RE: [NUTS] Of stash notes and contact > information... > > > How do you adopt a section of highway maybe if we do that as a group > maybe > if in Wheatland ther eis a sign like the post on the highways that > says > this section is adopted by... And it where to say River City > Geocaching... > then they can see Geocaching? Just an idea. > > Candi > > On Tue, Apr 29, 2008 at 6:35 AM, Jenn Oates <wildoates56@...> > wrote: > Exactly. > > > > > Heiner Lieth <Heiner.Lieth@sbcglobal. net> wrote: > > I am curios if there is anything other than rumor and speculation to > this. > > Local law enforcement all over the state is going to see budget crises > coming their way this year. GW6 is pumping tens of thousands of > dollars > into the Wheatland economy. Anything that infuses cash into the local > economy is going to be looked on favorably by any government official. > Especially when the individuals in question who are sober, family > folks. My > suggestion is to adhere to the speed limit as you come or go to the > event. > Any bad experience in this department is purely economic common sense, > not > a hatred of geocaching. > > My guess (and that is what this is) is that silly caches, placed > without > proper clearance, skirting the Groundspeak rules (such as directly on > public objects such as bells) are going to generate some concern by > both > geocachers and local law enforcement alike. If that is all they know > about > geocaching, then you cannot blame them for thinking that geocachers > are a > bunch or renegades. Maybe we should do something that attracts some > positive attention for geocaching in Wheatland (e.g. a CITO event). > There > are a few caches on the way to Wheatland where the roadside is a real > dump; > pretty disgraceful, really. > > Bishop's Pumpkin Ranch is a major economic force in Wheatland. They > have > this sort of traffic every weekend in the fall. So the GW6 event > itself is > not going to do anything that does not happen routinely in Wheatland > in the > fall. On top of that, we will be containing the event at the Ranch. > The > amount of activities and over-all fun level are going to be > extraordinary. > There is no reason for anyone to leave the Ranch except to go home in > the > afternoon. Again: just like the pumpkin business in the fall. > > Heiner > > At 12:59 PM 4/28/2008, Johnny Vegas wrote: > > I got the same information from some cacher from yuba city. The local > LEOa > in wheatland do not like geocachers or geocaching. > I will be intersting to see what happens with this eventGC1A6A6, this > I am > sure will get the attention of the local LEOs when they see people > milling > around in the street because they can not get into the event. > > Telling the local police this time could result in the event being > stopped > by the police chief. > > Charles & Stephanie <Irvinehome@...> wrote: > As a side note the Chief of Police for Wheatland isn't too keen on > caches > in the first place...<?xml:namespace prefix = o ns = > "urn:schemas-microsoft- > com:office:office" /> > If he hasn't been notified as to GWVI by now we're in for a storm. > > > > > From: nuts_@yahoogroups.com [ mailto:nuts_@yahoogroups.com] On Behalf > Of > geospyder > Sent: Monday, April 28, 2008 9:08 AM > To: nuts_@yahoogroups.com > Subject: RE: [NUTS] Of stash notes and contact information... > > > Most likely already done but just in case it slipped people's > minds... > > > Have all the local law enforcement agencies in and around the > Wheatland > area been updated about the influx of suspicious people sneaking > around? > > > Jim (geospyder) > ========================================== You received this email > because > you have subscribed to the RCGDS Mailing List. If you wish to > unsubscribe > please send an email to "rcgs-list-request@..." with the word > "unsubscribe" as the subject. > > > > > > > > > Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it > now. > > > > Your message has been successfully submitted and would be delivered to recipients shortly.
https://groups.yahoo.com/neo/groups/nuts_/conversations/topics/5314?xm=1&o=1&m=p&tidx=1
CC-MAIN-2015-22
refinedweb
1,325
72.97
This OpenKODE Core extension provides a 64-bit floating point data type and associated functions. OpenKODE Core provides only a 32-bit floating point type, KDfloat32. It has been found that many applications, particularly ones being ported from the PC/console world down to mobile, use double as a 64-bit floating point type. This extension brings this type and the functions that use it into the OpenKODE Core specification. When this extension is present, its facilities are accessed by including its header file: #include <KD/KHR_float64.h> This extension defines a type, which is intrinsic (i.e. it participates in C’s casting and promotion rules): KD_E_KHR #define KD_E_KHR 2.718281828459045235 The constant e. KD_PI_KHR #define KD_PI_KHR 3.141592653589793239 The constant pi. KD_PI_2_KHR #define KD_PI_2_KHR 1.570796326794896619 pi/2 KD_2PI_KHR #define KD_2PI_KHR 6.283185307179586477 2 times pi KD_LOG2E_KHR #define KD_LOG2E_KHR 1.442695040888963407 Value of log2e. KD_LOG10E_KHR #define KD_LOG10E_KHR 0.4342944819032518276 Value of log10e. KD_LN2_F_KHR #define KD_LN2_KHR 0.6931471805599453094 Value of loge2. KD_LN10_KHR #define KD_LN10_KHR 2.302585092994045684 Value of loge10. KD_PI_4_KHR #define KD_PI_4_KHR 0.7853981633974483096 Value of PI/4. KD_1_PI_KHR #define KD_1_PI_KHR 0.3183098861837906715 Value of 1/PI. KD_2_PI_KHR #define KD_2_PI_KHR 0.6366197723675813431 Value of 2/PI. KD_2_SQRTPI_KHR #define KD_2_SQRTPI_KHR 1.128379167095512574 Value of 2/sqrt(PI). KD_SQRT2_KHR #define KD_SQRT2_KHR 1.414213562373095049 Value of sqrt(2). KD_SQRT1_2_KHR #define KD_SQRT1_2_KHR 0.7071067811865475244 Value of sqrt(1/2). KD_DBL_EPSILON_KHR #define KD_DBL_EPSILON_KHR 2.2204460492503131e-16 Difference between 1 and the smallest KDfloat64 greater than 1. KD_DBL_MAX_KHR #define KD_DBL_MAX_KHR 1.7976931348623157e+308 The largest possible finite KDfloat64. KD_DBL_MIN_KHR #define KD_DBL_MIN_KHR 2.2250738585072014e-308 The smallest possible positive normalized KDfloat64. KD_HUGE_VAL_KHR #define KD_HUGE_VAL_KHR (1.0/0.0) Used as an error value by certain functions; equivalent to KD_INFINITY_KHR. KD_DEG_TO_RAD_KHR #define KD_DEG_TO_RAD_KHR 0.01745329251994329577 Multiply by this number to convert degrees to radians. KD_RAD_TO_DEG_KHR #define KD_RAD_TO_DEG_KHR 57.29577951308232088 Multiply by this number to convert radians to degrees. Get the next argument from a variable argument list. This extension provides KDfloat64KHR versions of all of the OpenKODE Core KDfloat32 math functions. Each has the same name with the final 'f' dropped and with an 'KHR' suffix added, and each has the same semantics except for the parameter and return types becoming KDfloat64KHR. Where the existing KDfloat32 function indicates a range error by returning KD_HUGE_VALF, the equivalent here returns KD_HUGE_VAL_KHR. The OpenKODE Core 1.0.1 specification contains clarifications over OpenKODE Core 1.0 about the results of kdLogf(-0.0) and kdAtan2f(±0.0,±0.0). The same clarifications apply to kdLog and kdAtan2 here when this extension is used with OpenKODE Core 1.0.1. Convert a string to a 64-bit floating point number. This function has the same semantics as kdStrtof, except that it results in a KDfloat64KHR value, and can therefore read a value with the range and accuracy of that type. The OpenKODE Core 1.0.1 specification contains changes over OpenKODE Core 1.0 about kdStrtof, in that the forms it accepts for infinity and NaN are stricter, and it is defined not to accept hexadecimal floating point. The same changes apply to kdStrtodKHR here when this extension is used with OpenKODE Core 1.0.1. The OpenKODE Core 1.0.2 specification contains a change over OpenKODE Core 1.0.1 about kdStrtof, in that it is undefined whether a number that could be represented as a denormal in the return type causes underflow or not. The analogous change applies to kdStrtodKHR here when this extension is used with OpenKODE Core 1.0.2. Convert a 64-bit float to a string. This function has the same semantics as kdFtostr, except that the number to convert is a KDfloat64, 17 significant digits are used, and the exponent is rendered (where applicable) with three digits as long as the first digit is not 0. The maximum length of the result, including its null termination character, is KD_DTOSTR_MAXLEN_KHR. For a non-zero finite number, the “correct” value of the seventeen significant mantissa digits is determined by successively multiplying or dividing the number by 10 until (ignoring the sign) it is in the range [1e16,1e17), and then obtaining an integer by rounding to nearest, with tiebreak case rounded to even. However it is permitted for the output of the function to have mantissa digits one out from this “correct” value. The OpenKODE Core 1.0.1 specification contains changes over OpenKODE Core 1.0 about kdFtostr, in that the results for infinity and NaN are more tightly specified. The same changes apply to kdDtostrKHR here when this extension is used with OpenKODE Core 1.0.1. kdDtostrKHR is intended to provide a subset of snprintf’s functionality regarding double conversion (assuming C locale). The conversion rules above are intended to be equivalent to the snprintf format "%.17g". Note the difference in return value when the buffer is not large enough; [C99] snprintf returns the length the string would have been if the buffer had been long enough, whereas kdDtostrKHR returns -1. Seventeen significant digits are specified because this is the minimum that guarantees that the original value can be recovered by a conversion with kdStrtodKHR. OpenKODE Core 1.0.2 has a clarification that it is undefined whether kdStrtof reports underflow or not for a number that could be represented as a denormal. This version of the KHR_float64 extension specification points out that this change applies analogously to kdStrtodKHR. Fixed leftover “19” missed when number of significant digits of kdDtostrKHR was changed from 19 to 17, and fixed its description of how to obtain the “correct” result. Changed back to be based on OpenKODE Core 1.0 specification, and pointed out cases where changes in OpenKODE Core 1.0.1 implicitly change the semantics of functions in this extension. Allowed KDfloat64KHR representation to have its halves swapped. Clarified accuracy of kdDtostrKHR. Now based on OpenKODE Core 1.0.1 specification. Changed kdDtostrKHR to give 17 digits rather than 19. Clarified accuracy of kdDtostrKHR. Fixed incorrect function name occurrences in kdDtostrKHR. Clarified accuracy of kdDtostrKHR.
https://www.khronos.org/registry/kode/extensions/KHR/float64.html
CC-MAIN-2016-30
refinedweb
1,002
61.12
I see questions regarding the GC on a daily basis. Finalize vs. Dispose is a popular topic. Let's get it in the clear. Finalize Dispose All of us .NET programmers make use of the Garbage Collector (GC) – some without knowing so. You have to familiarize yourself with the internals of the GC if you wish to create scalable components – there is no other option. Even though it does all its bits automagically, you can harness a lot of its power by understanding three basics: Finalize, Dispose, and the Destructor protocol in managed code. I will not be going into the finer details of how the GC does its job, but rather explain how you as a programmer can (and should) optimize your objects for the GC. When an object is instantiated, the GC allocates memory for it on the managed heap. If the class contains a Finalize method, the object is also enlisted in the “finalisation queue”. When this object is no longer needed, its memory will be reclaimed (freed) by the GC. If the object is enlisted in the finalization queue, its Finalize method will be called before discarding of the object. The purpose of the Finalize method is to release any resources (like a database connection, or a handle on a window) that might be in use by your object. Since the GC decides when it is best to clean up objects (and it does a damn fine job in doing so!), you have no way of telling when exactly Finalize will be called. Finalize is also a protected member and can thus not be called explicitly. Does this mean that cleaning up your object is left entirely in the hands of the GC? protected Of course not. For increased performance, it is best to cleanup your unused resources immediately after using them. For instance, as soon as you have retrieved your data through a database connection, the connection should be discarded of since it eats up system resources like memory, which could be better utilized by objects that you do in fact need. For this reason, an object can implement the Dispose method (by implementing the IDisposable interface). Calling the Dispose method on an object does two things. Firstly, it cleans up any. IDisposable Now that you know the reasons for these two methods, let’s see how to implement it. In managed code, you cannot override or even call Finalize – it is implicitly generated (in IL) by the compiler if you have a destructor for your object. In other words, the following: ~MyClass { //Cleanup code here } Translates to the following: protected override void Finalize() { try { //Cleanup of used resources } finally { base.Finalize(); } } As you can see, the method also calls Finalize on its parent type, and the parent type will call Finalize on its parent type – the whole hierarchy of your object is thus cleaned up. It is important to understand that you should only have a destructor for your class if it is really necessary, since calling Finalize, and enlisting objects that implement Finalize in the finalization queue by the GC, has significant performance implications. The Dispose method is publicly callable. (I’ll explain the overload that accepts the boolean parameter later). Here is an example of an object that implements IDisposable: boolean public class MyClass : IDisposable, BaseClass { bool disposed; //Indicates if object has been disposed //Constructor, where your resources might be instantiated public MyClass() { disposed = false; //Other Constructor code here } //Destructor, that would imply a Finalize method as noted above ~MyClass() { //Dispose is called, passing false, so that only //unmanaged resources are cleaned up. Dispose(false); } public void Dispose() { Dispose(true); //Prevent the GC to call Finalize again, since you have already //cleaned up. GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { //Make sure Dispose does not get called more than once, //by checking the disposed field try { if (!this.disposed) { if (disposing) { //Clean up managed resources } //Now clean up unmanaged resources } disposed = true; } finally { base.Dispose(disposing); } } } When you explicitly call Dispose() on your object, both managed- and unmanaged resources will be cleaned up. When the GC cleans your object (instead of you), only unmanaged resources will be cleaned, since the managed resources will be cleaned up by the GC when necessary. Dispose() Don’t reference any managed resources in your Finalize method (destructor), since Finalizers are not called in any particular fashion, and the object you reference may thus be disposed of already. In such a case, your Finalize method will fail. If you *do* reference any managed resources downward in your object hierarchy, those objects will not be finalized with the current GC collection, and performance will suffer. When calling any method on your object, it is necessary to first check if the object has been disposed. So a method in MyClass would look like this: MyClass //In MyClass: public void MyMethod() { if (this.disposed) { throw new ObjectDisposedException(); } //Method code goes here } In a further article, I will dive deeper into the GC, and explain the implications of threading on your Finalize and Dispose methods. (This article is also published on my [Update: Check GC 102 for further notes on programming for Garbage.
http://www.codeproject.com/Articles/6267/GC-101?fid=34269&df=10000&mpp=50&sort=Position&spc=Relaxed&tid=1815541
CC-MAIN-2015-32
refinedweb
864
59.84
Python library to look up timezone from lat / long offline. Improved version of "pytzwhere". Project description This is a fast and lightweight python project to lookup the corresponding timezone for a given lat/lng on earth entirely offline. This project is derived from and has been successfully tested against pytzwhere (github), but aims to provide improved performance and usability. It is also similar to django-geo-timezones The underlying timezone data is based on work done by Eric Muller. Timezones at sea and Antarctica are not yet supported (because somewhat special rules apply there). Dependencies (python, math, struct, os) numpy Optional: Numba and its Requirements This is only for precompiling the time critical algorithms. When you only look up a few points once in a while, the compilation time is probably outweighing the benefits. When using certain_timezone_at() and especially closest_timezone_at() however, I highly recommend using numba (see speed comparison below)! The amount of shortcuts used in the .bin is also only optimized for the use with numba. Installation (install the dependencies) in your terminal simply: pip install timezonefinder (you might need to run this command as administrator) Usage Basics: from timezonefinder import TimezoneFinder tf = TimezoneFinder() for testing if numba is being used: (if the import of the optimized algorithms worked) print(TimezoneFinder.using_numba()) # this is a static method returning True or False fast algorithm: # point = (longitude, latitude) point = (13.358, 52.5061) print( tf.timezone_at(*point) ) # = Europe/Berlin To make sure a point is really inside a timezone (slower): print( tf.certain_timezone_at(*point) ) # = Europe/Berlin To find the closest timezone (slow): # only use this when the point is not inside a polygon! # this checks all the polygons within +-1 degree lng and +-1 degree lat point = (12.773955, 55.578595) print( tf.closest_timezone_at(*point) ) # = Europe/Copenhagens To increase search radius even more (very slow, use numba!): # this checks all the polygons within +-3 degree lng and +-3 degree lat # I recommend only slowly increasing the search radius # keep in mind that x degrees lat are not the same distance apart than x degree lng! print( tf.closest_timezone_at(lng=point[0],lat=point[1],delta_degree=3) ) # = Europe/Copenhagens (to make sure you really got the closest timezone increase the search radius until you get a result. then increase the radius once more and take this result.) Further application: To maximize the chances of getting a result in a Django view it might look like: def find_timezone(request, lat, lng): lat = float(lat) lng = float(lng) try: timezone_name = tf.timezone_at(lng, lat) if timezone_name is None: timezone_name = tf.closest_timezone_at(lng, lat) # maybe even increase the search radius when it is still None except ValueError: # the coordinates were out of bounds # {handle error} # ... do something with timezone_name ... To get an aware datetime object from the timezone name: # first pip install pytz from pytz import timezone, utc from pytz.exceptions import UnknownTimeZoneError # tzinfo has to be None (means naive) naive_datetime = YOUR_NAIVE_DATETIME try: tz = timezone(timezone_name) aware_datetime = naive_datetime.replace(tzinfo=tz) aware_datetime_in_utc = aware_datetime.astimezone(utc) naive_datetime_as_utc_converted_to_tz = tz.localize(naive_datetime) except UnknownTimeZoneError: # ... handle the error ... Using the conversion tool: Make sure you installed the GDAL framework (thats for converting .shp shapefiles into .json) Change to the directory of the timezonefinder package (location of file_converter.py) in your terminal and then: wget # on mac: curl "" -o "tz_world.zip" unzip tz_world ogr2ogr -f GeoJSON -t_srs crs:84 tz_world.json ./world/tz_world.shp rm ./world/ -r rm tz_world.zip Credits to cstich. There has to be a tz_world.json (of approx. 100MB) in the folder together with the file_converter.py now. Then you should run the converter by: python file_converter.py This converts the .json into the needed .bin (overwriting the old version!) and also updates the timezone_names.py. Please note: Neither tests nor the file_converter.py are optimized or really beautiful. Sorry for that. If you have questions just write me (s. section ‘Contact’ below) Comparison to pytzwhere In comparison to pytzwhere I managed to speed up the queries by up to 190 times (depending on the dependencies used, s. test results below). Initialisation time and memory usage are significanlty reduced, while my algorithm yields the same results. In some cases pytzwhere even does not find anything and timezonefinder does, for example when only one timezone is close to the point. Similarities: - results - data being used Differences: - highly decreased memory usage - highly reduced start up time - the data is now stored in a memory friendly 18MB .bin and needed data is directly being read on the fly (instead of reading, converting and KEEPING the 76MB .csv -mostly floats stored as strings!- into memory every time a class is created). - precomputed shortcuts are stored in the .bin to quickly look up which polygons have to be checked (instead of computing and storing the shortcuts on every startup) - introduced proximity algorithm - use of numba for precompilation (reaching the speed of tzwhere with shapely on and having everything preloaded in the memory) test results*: test correctness: Results: [point, target, timezonefinder is correct, tzwhere is correct] (-60.968888, -3.442172) America/Manaus True True (14.1315716, 2.99999) Africa/Douala True True (-106.1706459, 23.7891123) America/Mazatlan True True (33, -84) uninhabited True True (103.7069307, 1.3150701) Asia/Singapore True True (-71.9996885, -52.7868679) America/Santiago True True (-4.8663325, 40.0663485) Europe/Madrid True True (-152.4617352, 62.3415036) America/Anchorage True True (-44.7402611, 70.2989263) America/Godthab True True (12.9125913, 50.8291834) Europe/Berlin True True (37.0720767, 55.74929) Europe/Moscow True True (14.1315716, 0.2350623) Africa/Brazzaville True True testing 10000 realistic points [These tests dont make sense at the moment because tzwhere is still using old data] shapely: OFF (tzwhere) Numba: OFF (timezonefinder) TIMES for 1000 realistic queries: tzwhere: 0:00:17.819268 timezonefinder: 0:00:03.269472 5.45 times faster TIMES for 1000 random queries: tzwhere: 0:00:09.189154 timezonefinder: 0:00:01.748470 5.26 times faster shapely: OFF (tzwhere) Numba: ON (timezonefinder) TIMES for 10000 realistic queries: tzwhere: 0:02:55.985141 timezonefinder: 0:00:00.905828 194.28 times faster TIMES for 10000 random queries: tzwhere: 0:01:29.427567 timezonefinder: 0:00:00.604325 147.98 times faster Startup times: tzwhere: 0:00:08.302153 timezonefinder: 0:00:00.008768 946.87 times faster shapely: ON (tzwhere) Numba: ON (timezonefinder) TIMES for 10000 realistic queries: tzwhere: 0:00:00.845834 timezonefinder: 0:00:00.979515 0.86 times faster TIMES for 10000 random queries: tzwhere: 0:00:01.358131 timezonefinder: 0:00:01.042770 1.3 times faster Startup times: tzwhere: 0:00:13.570615 timezonefinder: 0:00:00.000265 51209.87 times faster * System: MacBookPro 2,4GHz i5 4GB RAM SSD pytzwhere with numpy active **mismatch: pytzwhere finds something and then timezonefinder finds something else ***realistic queries: just points within a timezone (= pytzwhere yields result) ****random queries: random points on earth Speed Impact of Numba TIMES for 1000 realistic queries***: timezone_at(): wo/ numa: 0:00:01.017575 w/ numa: 0:00:00.289854 3.51 times faster certain_timezone_at(): wo/ numa: 0:00:05.445209 w/ numa: 0:00:00.290441 14.92 times faster closest_timezone_at(): (delta_degree=1) wo/ numa: 0:02:32.666238 w/ numa: 0:00:02.688353 40.2 times faster (this is not included in my tests) Known Issues I ran tests for approx. 5M points and this are the mistakes I found: All points in Lesotho are counted to the ‘Africa/Johannesburg’ timezone instead of ‘Africa/Maseru’. I am pretty sure this is because it is completely surrounded by South Africa and in the data the area of Lesotho is not excluded from this timezone. Same for the small usbekish enclaves in Kirgisitan and some points in the Arizona Dessert (some weird rules apply here). Those are mistakes in the data not my algorithms and in order to fix this I would need check for and then separately handle these special cases. This would not only slow down the algorithms, but also make them ugly. This is the first public python project I did, so most certainly there is stuff I missed, things I could have optimized even further etc. That’s why, I would be really glad to get feedback on my code. If you notice that the tz data is outdated, encounter any bugs, have suggestions, criticism, etc. feel free to open an Issue, add Pull Requests on Git or … contact me: python at michelfe dot it License timezonefinder is distributed under the terms of the MIT license (see LICENSE.txt). Changelog Note: not mentioned versions only contain small and irrelevant changes (e.g. in the readme, setup.py…). I am new to all this, so I am often missing small things which are not really new features worth mentioning. 1.5.4 (2016-04-26) - using the newest version (2016b) of the tz_world from - rewrote the file_converter for parsing a .json created from the tz_worlds .shp - had to temporarily fix one polygon manually which had the invalid TZID: ‘America/Monterey’ (should be ‘America/Monterrey’) - had to make tests less strict because tzwhere still used the old data at the time and some results were simply different now 1.5.3 (2016-04-23) - using 32-bit ints now (instead of 64-bit): I calculated that the minimum accuracy (at the equator) is 1cm with the approach I use. Tests passed. - Benefits: 18MB file instead of 35MB, another 10-30% speed boost (depending on your hardware) 1.5.2 (2016-04-20) - added python 2.7.6 support: replaced strings in unpack (unsupported by python 2.7.6 or earlier) with byte strings - timezone names are now loaded from a separate file for better modularity 1.5.1 (2016-04-18) - added python 2.7.8+ support: - Therefore I had to change the tests a little bit (some operations were not supported). This only affects output. I also had to replace one part of the algorithms to prevent overflow in Python 2.7 1.5.0 (2016-04-12) - automatically using optimized algorithms now (when numba is installed) - added TimezoneFinder.using_numba() function to check if the import worked 1.4.0 (2016-04-07) - Added the file_converter.py to the repository: It converts the .csv from pytzwhere to another .csv and this one into the used .bin. - Especially the shortcut computation and the boundary storage in there save a lot of reading and computation time, when deciding which timezone the coordinates are in. It will help to keep the package up to date, even when the timezone data should change in the future. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/timezonefinder/1.5.4/
CC-MAIN-2019-18
refinedweb
1,793
58.99
QtWebEngine I have installed QT version 5.9.4 into virtual machine with win7 and VS12. I have try used QtWebEngine () I have checked and module QtWebEngine and now, it is installed. I have try use QtWebEngine but it is not recognized. Why? I have read various discussion in forums and it seems impossibile with mingw53_32. Does anyone know how to use QtWebEngine? - raven-worx Moderators @elicat QtWebEngine is not available with mingw32. But anyway you installed Qt for VS2012, so why do you want to use it with mingw32?? Obviously thats not possible @raven-worx good morning, I have installed Visual Studio 2017 and QT 5.9.4 on virtual machine whit Win7 and 32bit. So, i have installe in qt only msvc2015 32bit but in the kit is disabled. Excuse me, I'm a newbie, but do not the modules in the kit serve only to compile based on where the programs will be run? When I write the program it should not affect, or am I wrong? Hi, What @raven-worx meant is that you can't mix and match compilers on Windows. If you have a C++ library built with MinGW32 then you have to build everything with MinGW. For Visual Studio it goes even further, the different versions of VS are not compatible one with the other except for VS2017 which is backward compatible with VS2015. In any case, hover the red icon and see what errors are stated there. @SGaist said in QtWebEngine: For Visual Studio it goes even further, the different versions of VS are not compatible one with the other except for VS2017 which is backward compatible with VS2015. In any case, hover the red icon and see what errors are stated there. So does it means that if you install VS2017, you are able to compile with msvc2015 prebuilt component ? @vithom said in QtWebEngine: So does it means that if you install VS2017, you are able to compile with msvc2015 prebuilt component ? I have installed VS2017 and QT 5.9.4 on win10 64bit. after I have configured qt version same this : If I build with X64 is all ok. but if I try to use x86 or win32 I have this error In QT --> Tools --> options I have configured the parameters of compiler : Mabye it's impossible build in 32bit version? This post is deleted! OK, Now I have install on Ubuntu System 32bit Qt Creator and QtWebEngine is recognized. But wen buil the program I have error in : QtWebEngine::initialize(); I must install other component in Linux for build with QT and yours QtWebEngine? What exact errors are you getting ? @SGaist Good Morning, the cod of main is this : #include <QCoreApplication> #include <QtGui/QApplicationStateChangeEvent> #include <QtQml/QQmlApplicationEngine> #include <QtWebEngine/qtwebengineglobal.h> #include <WebClass.h> #include <QtWebChannel/QQmlWebChannel> #include <QtWebChannel/qwebchannel.h> #include <QtWebEngineWidgets/QtWebEngineWidgets> int main(int argc, char *argv[]) { QCoreApplication app(argc, argv); QtWebEngine::initialize(); QQmlApplicationEngine engine; engine.load(QUrl(QStringLiteral("main.qml"))); return app.exec(); } Erros when build : /home/gianfranco/Public/QtApplication/QtWebApplicationUbuntu/main.cpp:17: error: undefined reference to `QtWebEngine::initialize()' I do not understand where the cause is. The statements are recognized - raven-worx Moderators This post is deleted! Maybe a silly question but did you add QT += webengineto you .pro file ? @SGaist Arghh.. you tell me correct.. -I am again beginner, now, when i buil i have this problem. what is still missing? /usr/bin/ld: cannot find -lGL @elicat Use the Search icon at the top of this page to search this forum for lGL(note not for -lGL). You will find lots of this specific question and its solutions. @elicat all is perfect, now it build without errors, but… UFF… but when i run the program it crashed. I lunch with debug and i have this error. The same project run in x64bit with 5.9.4. - SGaist Lifetime Qt Champion Hi, Why are you using a QCoreApplication ?
https://forum.qt.io/topic/91559/qtwebengine/7
CC-MAIN-2019-22
refinedweb
656
66.74
Details Description We seem to return one generic message when either a namespace or diskspace quota is exceeded. We should be clearer. The error should state which quota is exceeded and only report statistics on the exceeded quota. Further namespace quota is probably not clear to most users. We should probably have explanatory text such as: "you have exceeded the maximum allowed number of files". Maybe we should even rename the quota to number of files quota? Finally, the error below would be clearer if it read: Quota exceeded for directory '/user/ciyaliao'. By not using the word "directory" it is not clear in this case if the issue is with a user or a directory. ------ AN EXAMPLE ----- > Hi, > > I encountered an error: > > put: org.apache.hadoop.hdfs.protocol.QuotaExceededException: The quota of > /user/x is exceeded: namespace quota=200000 file count=200001, > diskspace quota=-1 diskspace=-44221467605496 > > It apparently complains that dfs quota is exceeded. But I only use around > 200GB disk space. From the error message, what kind of quota did I actually > exceed? How do I check my usage of this type of quota? > > Thanks, > x Issue Links Activity - All - Work Log - History - Activity - Transitions
https://issues.apache.org/jira/browse/HDFS-76
CC-MAIN-2015-35
refinedweb
198
57.87
Refining Exceptions and Error Codes November 08, 2010 — code, language, magpie I’ve been thinking a lot about error-handling in Magpie, and I wanted to foist a couple of ideas out there to get some feedback. Kinds of Errors When I’m designing something, I tend to start from a few representative examples, and then I see how my idea for a solution handles those cases. Error-handling encompasses a pretty wide range of cases, everything from “you typed the name wrong” to “the machine is on fire”. That space is pretty big, but I think the following touch the most important corners of it. Let me know if I missed something: Programmatic Errors (The Programmer Screwed Up) When I look at code I wrote, most of the error-handling code in it is for handling errors in other code. In other words, it’s code contract stuff: validation code that asserts what arguments should look like and what state objects should be in when you try to do stuff with them. When possible, it’s good to squash those bugs statically so you don’t need code to handle them at all. Magpie does that with null reference bugs, but there are others that are trickier: things like out of bounds array access, or attempting to cast a variable to the wrong type. In C and C++, these errors are usually handled using assert() or similar mechanisms. Java and C# each have a set of standard exception types that are thrown: InvalidOperationException, IllegalArgumentException, etc. Checks for these errors are very common, so they should be lightweight in both in code and in CPU cycles. At the same time, we don’t generally handle these errors, in the sense of recovering at runtime. Instead, we just want to notice the error and scream and shout to the programmer to handle it by fixing the code that causes the error in the first place. Giving diagnostic information like a stack trace is good. Runtime Errors (The Real World Isn’t Perfect) I lump into this fuzzy category errors that can occur at runtime and either can’t be programmatically prevented, or where doing so is as expensive as performing the operation itself. Some examples: - Working with files or the network - Parsing, formatting, or manipulating data These are the kinds of things we typically think of when we think of “errors”. We want to notice when they’ve happened, and we will very likely need to write code to try to recover from them gracefully at runtime. What this category doesn’t imply is any sort of frequency. Whether or not a given error is common is entirely dependent on the application. A blog engine probably considers text parsing a common source of errors. A game app that just loads a single config file with a known format can safely assume they won’t happen. This means we want to have some flexibility regarding how errors in this category are handled. If we force a certain strategy by assuming that some errors are common and some are rare, then we’ll shaft users that aren’t like us. Catastrophic Errors (All Hell Breaks Loose) The last category is errors that are so deep that we probably can’t handle them. These are errors are distinguished by the fact that they interfere with our ability to execute further code: things like stack overflows or running out of memory. Handling Errors There are a bunch of different strategies languages have tried over the years to cope with the inevitable fallibility of mankind. The two I’m most interested for Magpie are return codes (using unions) and exceptions. Those seem to be the workhorses for languages in wide use. (I’m interested in other ideas, but for this post, I just want to look at those two.) Now the question is how can we use those two features to deal with the different categories of errors I listed up there? Handling Programmatic Errors These are probably the easiest to solve (from the language design perspective) because there’s little to do: we don’t plan to handle them in most cases, just notice them. I’m comfortable with the Java and C# model of “throw an exception that isn’t expected to be caught”. Aborting with a stack dump is equally effective, and does the same thing in practice. For example, a method for accessing an item in a collection could look like: def Collection getItemAt(index Int -> Item) if index < 0 or index > count then OutOfBoundsError throw() // do stuff with index... end That is a bit tedious, though. I’d likely refactor that into a separate function, like: def Int checkBounds(count Int ->) if this < 0 or this > count then OutOfBoundsError throw() end def Collection getItemAt(index Int -> Item) index checkBounds(count) // do stuff with index... end Not exactly rocket science, but I think it gets the job done. Let’s skip runtime errors and move on to the other easy one: Handling Catastrophic Errors Catastrophic errors are exceptional in the sense that we’ll rarely be handling them, so exceptions are a good fit here too. In fact, most of these exceptions wouldn’t even be thrown from Magpie code— they’d bubble up from the bowels of the interpreter itself. On the off chance that you do want to catch one, you can use a regular catch block: try // allocate a huge array... catch (err OutOfMemoryError) end Familiar territory. If you’ve used exceptions a lot, you’ve noticed one annoying thing with them is that they’re syntactically cumbersome: you have to create this try block and push everything over a level of indentation. To try to simplify that, I’m batting around an idea that might be clever, or might just be really dumb: treat every block as a try block. The basic idea is that any block can have catch clauses at the end of it, and having them implicitly makes it a try block. That should get code like: def copy(source String, dest String ->) try // copy files... catch (e IOError) // handle error... end end And simplify it to: def copy(source String, dest String ->) // copy files... catch (e IOError) // handle error... end I’ll have to try it out to see if it causes any problems in the grammar, but my hope is it will work OK. I’m curious to see if just making exceptions a little more terse like this will make them more palatable to people who dislike them. If you happen to have an opinion, I’d like to hear it. Handling Runtime Errors Finally, the biggest class of errors. The trick with these is that there’s no easy way to bucket them into “common” and “rare”. If we could, we could just say “use exceptions for the rare ones and return codes for the common ones”. Instead, we’ll need to support both. Here’s my plan. For our example, we’ll consider a simple one: parsing. Let’s say we have a function to parse strings to booleans: def parseBool(text String -> Bool) match text case "true" then true case "false" then false else // ??? what do we do here? end end This can be called like: var b = parseBool("true") Of course, the question is what happens if it fails? Since this may be common, we want it to be easy to handle the failure case. Unions are a good fit for that. We’ll change the function to: def parseBool(text String -> Bool | Nothing) match text case "true" then true case "false" then false else nothing end end Now it will return a boolean value if the parse succeeds, or the special nothing value if it fails. Note that this is not like just returning null: the return type of parseBool is different now. That means you can’t do this anymore: var b = parseBool("true") var notB = b not The not method is a method on booleans, and b isnt’ a boolean, it’s a Bool | Nothing. To treat it like a boolean, you first have to check its type. The canonical way to do that in Magpie is using let: let b = parseBool("true") // in here, b is a Bool var notB = b not // this is fine else // parse failed... end This is great for cases where parsing is likely to fail. It makes sure you always handle the common failure case by giving you a type-check error before the program is run if you don’t check for success first. But what if parsing rarely fails in your program? Do you really want to have to do a cumbersome let block everywhere you call parseBool just because that fails all the time in some other program? In your case, failing to parse is exceptional, so it should throw an exception. That way, you can ignore the cases that aren’t relevant to your problem. I think we can handle that too. We’ll just add a simple method to Object that tests to see if its of an expected type. If not, it will throw, otherwise it will return itself, but statically-typed to the expected type. Like so: def Object expecting[T] let cast = this as[T] then cast else UnexpectedTypeError throw( "Expected type " + T + " but was " + this type, this) end Now, if we have a function that returns a union containing an error, we can translate that to an exception instead like this: // doesn't expect a parse error var b = parseBool("true") expecting[Bool] var notB = b not // ok, since b is a Bool Using this, almost all functions that can have runtime errors will be implemented by returning a union of success and an error code, like: def readFile(path String -> String | IOError) if pathExists(path) then // return contents of file... else IOError new("Could not find " + path) end Then code that uses it can handle the error in place if that makes sense: def printFile(path String ->) var result = readFile(path) let contents = result as[String] then print(contents) else let error = result as[IOError] then print("Error!") end end (Yes, else let is a bit tedious. Better pattern-matching syntax for unions is still in the works.) Meanwhile, code that doesn’t care to handle the error right there can pass the buck: def printFile(path String ->) var contents = readFile(path) expecting[String] print(contents) end Of course, none of this is any real innovation. My goal here is just to round off some of the sharp corners of exceptions and return codes and see if I can make the process of dealing with errors a bit more flexible and readable. Thoughts?
http://journal.stuffwithstuff.com/2010/11/08/refining-exceptions-and-error-codes/
CC-MAIN-2014-42
refinedweb
1,794
67.18
hi,guys i used the std::numeric_limits<double>::max();,but visual C++ 6.0 reported the error C2589: '(' : illegal token on right side of '::' G:\MY program\temp\test1\test1.cpp(12) : error C2143: syntax error : missing ';' before '::' anyone know if visual C++ 6.0 support std::numeric_limits? thanks in advance Last edited by ephemera; March 17th, 2004 at 02:55 AM. Did you include <limits> and specifies the namespace, like "using namespace std;" yes, I #include <limits> and using namespace std and set the project option /GX,but it still failed Last edited by ephemera; March 18th, 2004 at 04:56 AM. I thing its because 'max' is already defined as a macro. try this before your use of std::numeric_limits<double>::max() #ifdef max #undef max #endif and see if it works Greetings, matze yes .it is okay.thanks Forum Rules
http://forums.codeguru.com/showthread.php?287237-about-std-numeric_limits&p=915468
CC-MAIN-2016-18
refinedweb
143
67.55
? Multiplayer game with SOCKET.io, how to add players? jjwallace posted a topic in Phaser 2Hi guys, I have a socket.io server running with node.js and i am trying to add players/sprites to my game. I am trying to figure out the best way to do this. As of now i have been working towards using object arrays but i know this is a better way. socket.on('addPlayer',function(data){ var thisID = data;//myArrayID; if(data.id == myID){ }else{ console.log(thisID); pFish[thisID] = this.add.sprite(); pFish[thisID].x = this.world.centerX; pFish[thisID].y = this.world.centerY; pFish[thisID].anchor.setTo(0.5, 0.5); pMouth[thisID] = this.add.sprite(0, 0, 'spMouth'); pMouth[thisID].anchor.setTo(0.5, 1); pMouth[thisID].animations.add('eat'); pMouth[thisID].animations.play('eat', 30, true); pTail[thisID] = this.add.sprite(0, 0, 'spTail'); pTail[thisID].anchor.setTo(0.5, 0); pTail[thisID].animations.add('swim'); pTail[thisID].animations.play('swim', 30, true); pTail[thisID].y = spMouth.y - 12; pFish[thisID].addChild(spMouth[thisID]); pFish[thisID].addChild(spTail[thisID]); pFish[thisID].scale.setTo(0.2, 0.2); pFish[thisID].angle = 110; } }) So what i really want to do is add a player object. This way i can have all 3 sprites inside the object and i can just change the object. Of course i will need to update each player with the corisponding input comming in from the server. Any good examples of how to do this in a OOP method for Phaser.? Example: BasicGame.Game.prototype = { preload: function () { //this.load.script('io', 'node_modules/socket.io/node_modules/socket.io-client/dist/socket.io.js'); //console.log('NODE LOADED'); }, create: function () { - Hi everyone I named this post "Alternatives to organize code base into different files" because it is a more general than "alternatives to make modular code" or something like that. I like javascript a lot, but it being the ONLY LANGUAGE IN THE WORLD that does not have a way to load/reference a file from within another is what pisses me off most. Every language has it. C and C++ has "include", C# has "using", Java has "import" and php has "require" and "require_once". I bet even X86 assembly may have something like that. Nonetheless, javascript yet don't have it and only God knows if (and when) the ecmascript 6 draft that proposes modules like python's will come to really become standard and come the masses. And WHEN it come, will everyone use updated browsers ??? And, when it comes, people will already be familiar with what they are already familiar, like AMD (requirejs style modules) and browserify (commonjs style modules) That being said, I would like to know your experiences and opinions about how (and why) you divide your code base into several files. It seems most use browserify and others use requirejs. But there are still others that just define and use globals into several files and put several script tags in the html. I made an attempt to create what seemed to me the simplest way to emulate a include/import/using/require in javascript, using xmlhttprequest+eval to "include" synchronously one js file into another: I think the current ecmascript 6 draft propposal for modules is probably the best. But old javascript files meant to just be included in html with a script tag will probably need to be patched to work with the modules system. My solution is just a way to use the "old way" of adding several script tags, without really adding several script tags. I would like to know, then, what you're using to manage large codebases. Are you bundling every js file into one? with what? browserify? requirejs' bundling tool? linux's cat command? If you're not building your code base into one single file, how are you calling one file from anoter? several script tags? AMD (with requirejs or others)? something with commonjs sintax? And at last, independent of bundling your code into one file or not, how the functionality of each file is exposed to other files? by exporting amd style modules? exporting commonjs style modules? defining globals? defining namespaces? The creator of UglifyJS has given some thoughts on this matter wich is really worth reading. He says, for example, that for many years and still today, C programmers are used to define globals using unique names and are happy with it =) (those are not his words, those are my interpretation of his whole blog post) Your experiences and opinions would be really important to me and probably to several other people that may read this thread. Thanks. TypeScript Tutorials at adam-holden.com adamyall posted a topic in Phaser 2Hey guys, Over the past few weeks I have been learning more about Phaser and TypeScript and I have started a blog to share what I have learned and post code that I think will help people better use Phaser with TypeScript. I was waiting to post anything about them here until I had a post with actual code in it and a GitHub repo. (Though I saw today a link to one of the posts was on the front page, thanks Rich!) I use OSX and WebStorm, but you should be able to use the code and examples in pretty much any TypeScript environment. But keep in mind these tutorials use more advanced TypeScript features (AMD modularization etc). Posts: Set Phasers to TypeScript! - Setting up a TypeScript dev environment the hard way (Brackets, Prepros, Terminal), or the easy way (WebStorm) Big Boy TypeScript: AMD Modularization and Dependency Management – Part 1 - A preface to using AMD modularization, with some "guided reading" to explain the concepts. Big Boy TypeScript: AMD Modularization and Dependency Management – Part 2 - Has example code from a Git repo. Shows how to organize your project to use AMD Modularization. Introduces you to the three ways you have to import and reference files as well as how to export classes and modules. More AMD Modularization and Finite State Machines in Phaser - Cleared up some issues with AMD modules from the last blog. Used the TypeState library to show how to easily use Finite State Machines in Phaser! Has example code and a demo! More to come! If you haven't ever used TypeScript, I would recommend doing some reading at the TypeScript website before reading these blogs. Afterwards, do some Phaser stuff with TypeScript.?
https://www.html5gamedevs.com/tags/RequireJS/
CC-MAIN-2019-51
refinedweb
1,073
65.22
Problem Statement For. Sample Test Cases Input: n = 4, edges = [[1, 0], [1, 2], [1, 3]] 0 | 1 / \ 2 3 Output: [1] Input: n = 6, edges = [[0, 3], [1, 3], [2, 3], [4, 3], [5, 4]] 0 1 2 \ | / 3 | 4 | 5 Output: [3, 4] Problem Solution Gather all leaves in a queue first. At each iteration, destroy all leaves, destroy the edges that they form in the graph, then add their neighbors to the queue, iff the neighbor is a leaf. - We initialize a queue of leaf nodes after creating the graph. - How do we know which are leaf nodes? Keep track of in-degree using an array. - Now, for the current queue of leaf nodes, we shall remove them from graph, then remove the edges they’re connected to, then update the in-degree of their neighbors. - After updating the neighbor’s in-degree, see if we can add it into the queue of leaf nodes - Repeat till we’ve 1 or 2 leaf nodes Complexity Analysis Time Complexity: O(n) Because each node will be processed exactly once. They go into the queue of leaves once, and they exit (at most) once. Space Complexity: Graph creation takes O(E) space ie no of edges to be included. Code Implementation #include <bits/stdc++.h> using namespace std; // This class represents a undirected graph using adjacency list class Graph { public: int V; // No. of vertices list<int> *adj; vector<int> degree; Graph(int V); void addEdge(int v, int w); // function to get roots which give minimum height vector<int> rootForMinimumHeight(); }; Graph::Graph(int V) { this->V = V; adj = new list<int>[V]; for (int i = 0; i < V; i++) degree.push_back(0); } // addEdge method adds vertex to adjacency list and increases // degree by 1 void Graph::addEdge(int v, int w) { adj[v].push_back(w); adj[w].push_back(v); degree[v]++; degree[w]++; } // Method to return roots which gives minimum height to tree vector<int> Graph::rootForMinimumHeight() { queue<int> q; // first enqueue all leaf nodes in queue for (int i = 0; i < V; i++) if (degree[i] == 1) q.push(i); // loop untill total vertex remains less than 2 while (V > 2) { for (int i = 0; i < q.size(); i++) { int t = q.front(); q.pop(); V--; // for each neighbour, decrease its degree and // if it become leaf, insert into queue list<int>::iterator j; for ( j = adj[t].begin(); j != adj[t].end(); j++) { degree[*j]--; if (degree[*j] == 1) q.push(*j); } } } // copying the result from queue to result vector vector<int> res; while (!q.empty()) { res.push_back(q.front()); q.pop(); } return res; } // Driver code to test above methods int main() { Graph g(6); g.addEdge(0, 3); g.addEdge(1, 3); g.addEdge(2, 3); g.addEdge(4, 3); g.addEdge(5, 4); vector<int> res = g.rootForMinimumHeight(); for (int i = 0; i < res.size(); i++) cout << res[i] << " "; cout << endl; }
https://prepfortech.in/interview-topics/trees/minimum-height-trees
CC-MAIN-2021-17
refinedweb
487
73.37
I wanted to know why when i insert a simple code in my dev c++ compiler, then i compile and run the code came up with errors, i'll put the code here: #include <iostream.h> int main() { cout<<"hello world!"; return 0; } (this are the errors) i need to know how to fix it? it says 23 C:\My Documents\Dev-Cpp\Templates\main.cpp:1 iostream.h: No such file or directory. C:\My Documents\Dev-Cpp\Templates\main.cpp [Warning] In function `int main()': 7 C:\My Documents\Dev-Cpp\Templates\main.cpp `cout' undeclared (first use this function) (Each undeclared identifier is reported only once for C:\My Documents\Dev-Cpp\Makefile.win [Build Error] [Templates/main.o] Error 1
http://cboard.cprogramming.com/cplusplus-programming/34456-cplusplus-programming.html
CC-MAIN-2014-10
refinedweb
124
60.92
Stanford ML Group, led by Andrew Ng, works on important problems in areas such as healthcare and climate change, using AI. Last year they released a knee MRI dataset consisting of 1,370 knee MRI exams performed at Stanford University Medical Center. Subsequently, the MRNet challenge was also announced. For those wishing to enter the field for AI in medical imaging, we believe that this dataset is just the right one for you. The challenge problem statement is neither too easy nor too difficult. The uniqueness and subtle complexities of the dataset will surely help you explore new thought processes and grow. And don’t forget, we are here to guide you on how to approach the problem at hand. So let’s dive right in!! Contents This post will be covering the topics - Exploring the MRNet dataset - The problem at hand (The Challenge) - Our approach - Model Architecture - Results - An alternative approach Deep Learning to Classify MRIs Interpretation of any kind of MRI is time-intensive and subject to diagnostic error and variability. Therefore automated system for interpreting this type of image data could prioritize high-risk patients and assist clinicians in making diagnoses. Have a look at this article if you are interested in knowing more about using deep learning with MRI scans 🙂 Moreover, a system that produces less false positives than a radiologist is very advantageous because it eliminates the risk of performing unnecessary invasive surgeries. We think that deep learning will soon help radiologists make faster and more accurate diagnoses. The MRNet Dataset The MRNet dataset consists of 1,370 knee MRI exams performed at Stanford University Medical Center. The dataset contains abnormal exams, with ACL tears and meniscal tears. Labels were obtained through manual extraction from clinical reports. The dataset accompanies the publication of the MRNet work here. I. Explaining the dataset The dataset contains MRIs of different people in .npy file format. Each MRI consists of multiple images (or slices). The number of slices has to do with the way MRI is taken of a particular body part. What happens is we pick a cross-section plane, and then move that plane across the body part, taking snapshots at different instances. So in this way, an image consists of different slices. MRNet consists of images with variable slices across three planes, namely axial, coronal, and sagittal. So an image will have dimensions [slices, 256, 256]. There are three folders with the same name as the three planes discussed above, and each image in each of these three folders is a collection of snapshots at different intervals. The labels are present in the correspondingly named .csv file. Each image in each plane has a label of 0 or 1, where 0 means that the MRI showed does not have the disease and 1 means that MRI shown has that disease. II. Uniqueness of Dataset and Splits The exams have been split into three sets - Training set (1,130 exams, 1,088 patients) - Validation set (called tuning set in the paper) (120 exams, 111 patients) - Hidden test set (called the validation set in the paper) (120 exams, 113 patients). To form the validation and tuning sets, stratified random sampling was used to ensure that at least 50 positive examples of each label (abnormal, ACL tear, and meniscal tear) were present in each set. All exams from each patient were put in the same split. To evaluate your model on the hidden test set, you have to submit your model on CodaLab (more details are present on the challenge website). III. Visualizing the data The dataset contains images as shown below There is some awesome work done on visualizing this dataset by Ahmed Besbes. Do check out his work here. The MRNet Challenge We were asked to do binary classification for each disease separately. Instead of predicting the class, we were asked to predict the probability that the MRI is of positive class. We then calculate area under ROC curve for predictions for each disease and then take average to report the average AUC as the final score. Obstacles in our Approach One thing we noticed is that slices are significantly different from one plane to another. Not just this, the number of slices are also different for the same MRI scan across different planes, for eg. an image across axial plane may have dimensions [25, 256, 256], whereas the same MRI has dimension [29, 256, 256] in coronal plane. Also within the same plane, images may differ a lot since they were taken at different timestamps, eg. at one time the plane would have been completely inside the knee, whereas some other time it would have just grazed the knee from above, thereby resulting in very different images within a single plane too. Due to the variable slices problem, multiple MRI scan couldn’t be put in a single batch, so we used a batch of one patient only. Our Approach Initially our plan was to train 9 CNN models – one for each disease across each plane. But then later we decided – why not combine information across three planes to make a prediction for each disease? So we finalised to make a model for each disease that accepts images from all three planes and uses them to predict whether the patient has that particular disease or not. So effectively we are now training 3 CNN models (one for each disease) which is quite less than the 9 CNN models that we were planning on initially. Model Architecture class MRnet(nn.Module): """MRnet uses pretrained resnet50 as a backbone to extract features """ def __init__(self): """This function will be used to initialize the MRnet instance.""" # Initialize nn.Module instance super(MRnet,self).__init__() # Initialize three backbones for three axis # All the three axes will use pretrained AlexNet model # The models will be used for extracting features from # the input images self.axial = models.alexnet(pretrained=True).features self.coronal = models.alexnet(pretrained=True).features self.saggital = models.alexnet(pretrained=True).features # Initialize 2D Adaptive Average Pooling layers # The pooling layers will reduce the size of # feature maps extracted from the previous axes self.pool_axial = nn.AdaptiveAvgPool2d(1) self.pool_coronal = nn.AdaptiveAvgPool2d(1) self.pool_saggital = nn.AdaptiveAvgPool2d(1) # Initialize a sequential neural network with # a single fully connected linear layer # The network will output the probability of # having a particular disease self.fc = nn.Sequential( nn.Linear(in_features=3*256,out_features=1) ) The model is surprisingly simple, we make a class MRNet that inherits from the torch.nn.Module class. In the __init__ method, we define three pretrained alexnet models for each of the three planes namely axial, sagittal and coronal. We use this backbone networks as a feature extractor, that is why we just use the .features of the alexnet and ignore the classification head of the alexnet. Then a AdaptiveAveragePool layer reduces the size of the feature image that we extracted from alexnet.features backbone. Finally we define a fully connected layer fc with input dimension size 3 x 256, and output dimension as 1 (a single neuron) to predict the probability of the patient having a particular disease. Backbone Network Used As discussed above, we used AlexNet network pretrained network as a feature extractor. Please note – it was just a personal preference to use AlexNet, we could have used ResNet as well for backbone. Input So the input we expect are three images in a list i.e. [image1, image2, image3] where each image is a stack of slices across each plane, i.e image1 is stack of slices across the axial plane. If we look at image1, its dimension is of the form [1, slices, 3, 224, 224], the extra 1 in the beginning of the image1 dimension is due to the Data Loader adding a extra dimension to it. Output We output a single logit denoting the probability of the patient having a particular disease. We don’t take sigmoid in the forward method as during calculation of the loss, BCELoss has torch.sigmoid built in. Forward Method def forward(self,x): """ Input is given in the form of `[image1, image2, image3]` where `image1 = [1, slices, 3, 224, 224]`. Note that `1` is due to the dataloader assigning it a single batch. """ # squeeze the first dimension as there # is only one patient in each batch images = [torch.squeeze(img, dim=0) for img in x] # Extract features across each of the three plane # using the three pre-trained AlexNet models defined earlier image1 = self.axial(images[0]) image2 = self.coronal(images[1]) image3 = self.saggital(images[2]) # Convert the image dimesnsions from [slices, 256, 1, 1] to # [slices,256] image1 = self.pool_axial(image1).view(image1.size(0), -1) image2 = self.pool_coronal(image2).view(image2.size(0), -1) image3 = self.pool_saggital(image3).view(image3.size(0), -1) # Find maximum value across slices # This will reduce the dimensions of image to [1,256] # This is done in order to keep only the most prevalent # features for each slice image1 = torch.max(image1,dim=0,keepdim=True)[0] image2 = torch.max(image2,dim=0,keepdim=True)[0] image3 = torch.max(image3,dim=0,keepdim=True)[0] # Stack the 3 images together to create the output # of size [1, 256*3] output = torch.cat([image1,image2,image3], dim=1) # Feed the output to the sequential network created earlier # The network will return a probability of having a specific # disease output = self.fc(output) return output We first squeeze the first dimension of each image as it is redundant. So the current dimension becomes of each image[i] becomes [slices, 3, 224, 224] Then we pass each image through the AlexNet backbones to extract features across each plane. So the dimension of each image currently is [slices, 256, 7, 7] We then take a Average Pool, which converts the dimension of each image to [slices, 256, 1, 1], which we then convert it to [slices, 256] using the .view() function. Now we pick the maximum value across slices, so the dimension of each image now becomes [1, 256]. This step is important in order to handle the variable size of slices in each plane, we only most prevalent features in each slice. We then stack these three images of three planes together to form a final tensor of size [1, 3 * 256] or [1, 768]. We then pass it to the fully connected layer fc that results in the output of size [1, 1]. Data Loader We created a class MRData that inherits and implemented two functions namely __len__ and __getitem__ as required by torch.utils.data.DataLoader. Nothing too complex in __init__ method as well, we just read the required .csv files that contain the filenames for MRIs and their respective labels. We also calculate the weight for the +ve class that we pass to the loss function as will be discussed below in more detail. class MRData(): """This class used to load MRnet dataset from `./images` dir """ def __init__(self,task = 'acl', train = True, transform = None, weights = None): """Initialize the dataset Args: plane : along which plane to load the data task : for which task to load the labels train : whether to load the train or val data transform : which transforms to apply weights (Tensor) : Give wieghted loss to postive class eg. `weights=torch.tensor([2.223])` """ # Define the three planes to use self.planes=['axial', 'coronal', 'sagittal'] # Initialize the records as None self.records = None # an empty dictionary self.image_path={} # If we are in training loop if train: # Read data about patient records self.records = pd.read_csv('./images/train-{}.csv'.format(task),header=None, names=['id', 'label']) for plane in self.planes: # For each plane, specify the image path self.image_path[plane] = './images/train/{}/'.format(plane) else: # If we are in testing loop # don't use any transformation transform = None # Read testing/validation data (patients records) self.records = pd.read_csv('./images/valid-{}.csv'.format(task),header=None, names=['id', 'label']) for plane in self.planes: # Read path of images for each plane self.image_path[plane] = './images/valid/{}/'.format(plane) # Initialize the transformation to apply on images self.transform = transform # Append 0s to the patient record id self.records['id'] = self.records['id'].map( lambda i: '0' * (4 - len(str(i))) + str(i)) # empty dictionary self.paths={} for plane in self.planes: # Get paths of numpy data files for each plane self.paths[plane] = [self.image_path[plane] + filename + '.npy' for filename in self.records['id'].tolist()] # Convert labels from Pandas Series to a list self.labels = self.records['label'].tolist() # Total positive cases pos = sum(self.labels) # Total negative cases neg = len(self.labels) - pos # Find the wieghts of pos and neg classes if weights: self.weights = torch.FloatTensor(weights) else: self.weights = torch.FloatTensor([neg / pos]) print('Number of -ve samples : ', neg) print('Number of +ve samples : ', pos) print('Weights for loss is : ', self.weights) def __len__(self): """Return the total number of images in the dataset.""" return len(self.records) def __getitem__(self, index): """ Returns `(images,labels)` pair where image is a list [imgsPlane1,imgsPlane2,imgsPlane3] and labels is a list [gt,gt,gt] """ img_raw = {} for plane in self.planes: # Load raw image data for each plane img_raw[plane] = np.load(self.paths[plane][index]) # Resize the image loaded in the previous step img_raw[plane] = self._resize_image(img_raw[plane]) label = self.labels[index] # Convert label to 0 and 1 if label == 1: label = torch.FloatTensor([1]) elif label == 0: label = torch.FloatTensor([0]) # Return a list of three images for three planes and the label of the record return [img_raw[plane] for plane in self.planes], label def _resize_image(self, image): """Resize the image to `(3,224,224)` and apply transforms if possible. """ # Resize the image # Calculate extra padding present in the image # which needs to be removed pad = int((image.shape[2] - INPUT_DIM)/2) # This is equivalent to center cropping the image image = image[:,pad:-pad,pad:-pad] # Normalize the image by subtracting it by mean and dividing by standard # deviation image = (image-np.min(image))/(np.max(image)-np.min(image))*MAX_PIXEL_VAL image = (image - MEAN) / STDDEV # If the transformation is not None if self.transform: # Transform the image based on the specified transformation image = self.transform(image) else: # Else, just stack the image with itself in order to match the required # dimensions image = np.stack((image,)*3, axis=1) # Convert the image to a FloatTensor and return it image = torch.FloatTensor(image) return image One thing to note is that before returning we have to resize the images to [224, 224] from [256, 256] across each slice. Also since alexnet backbone accepts images having three color channels, we could just stack the single image three times to overcome this issue however there is a better way. Augmentations to the rescue!! Instead of stacking the same image thrice, why not apply different augmentations to an image and then stack the resulting images together to overcome the 3 color channel problem. In this way, we fix the problem, but also add more diversity to our dataset that will help our model to generalize better. def load_data(task : str): # Define the Augmentation here only augments = Compose([ # Convert the image to Tensor transforms.Lambda(lambda x: torch.Tensor(x)), # Randomly rotate the image with an angle # between -25 degrees to 25 degrees RandomRotate(25), # Randomly translate the image by 11% of # image height and width RandomTranslate([0.11, 0.11]), # Randomly flip the image RandomFlip(), # Change the order of image channels transforms.Lambda(lambda x: x.repeat(3, 1, 1, 1).permute(1, 0, 2, 3)), ]) print('Loading Train Dataset of {} task...'.format(task)) # Load training dataset train_data = MRData(task, train=True, transform=augments) train_loader = data.DataLoader( train_data, batch_size=1, num_workers=11, shuffle=True ) print('Loading Validation Dataset of {} task...'.format(task)) # Load validation dataset val_data = MRData(task, train=False) val_loader = data.DataLoader( val_data, batch_size=1, num_workers=11, shuffle=False ) return train_loader, val_loader, train_data.weights, val_data.weights Some image transformations we apply are randomly rotating the image 25 degrees to left or right. Also, we add a little bit of translational shift as well. We also apply some random flipping of the image upside down. We use the load_data function as shown above to return iterators to train dataset and validation dataset. Loss Function Used Since this is a binary classification problem, Binary Cross Entropy Loss is the way to go. However, since our dataset had some class imbalances, we went for a weighted BCE Loss. We use torch.nn.BCEWithLogitsLoss to calculate the loss. This calls the torch.sigmoid internally which is numerically more stable. That is why it accepts raw logits from the model, hence the name. It also accepts the parameter, pos_weight which is used to positively weight a class while calculating loss. We assigned this parameter as no. of -ve samples/ no. of +ve samples. A thing to note here is that we don’t need a negative weight here as the loss method just gives it a weight of 1.0. Learning Rate (LR) strategy We use a strategy that reduces the learning rate by a factor of 3.0 whenever the Validation Loss plateaus for 3 consecutive epochs, with a threshold of 1e-4. Evaluation Metric Used We use the Area under the the ROC curve to judge the performance of the model for each disease. We then average these AUCs for all three diseases to get a final performance score of the model. If you don’t know what AUC and ROC means, I recommend that you check this article out, it explains these concepts quite lucidly 🙂 Training Loop Below is the code for train loop for one epoch. def _train_model(model, train_loader, epoch, num_epochs, optimizer, criterion, writer, current_lr, log_every=100): # Set to train mode model.train() # Initialize the predicted probabilities y_probs = [] # Initialize the groundtruth labels y_gt = [] # Initialize the loss between the groundtruth label # and the predicted probability losses = [] # Iterate over the training dataset for i, (images, label) in enumerate(train_loader): # Reset the gradient by zeroing it optimizer.zero_grad() # If GPU is available, transfer the images and label # to the GPU if torch.cuda.is_available(): images = [image.cuda() for image in images] label = label.cuda() # Obtain the prediction using the model output = model(images) # Evaluate the loss by comparing the prediction # and groundtruth label loss = criterion(output, label) # Perform a backward propagation loss.backward() # Modify the weights based on the error gradient optimizer.step() # Add current loss to the list of losses loss_value = loss.item() losses.append(loss_value) # Find probabilities from output using sigmoid function probas = torch.sigmoid(output) # Add current groundtruth label to the list of groundtruths y_gt.append(int(label.item())) # Add current probabilities to the list of probabilities y_probs.append(probas.item()) try: # Try finding the area under ROC curve auc = metrics.roc_auc_score(y_gt, y_probs) except: # Use default value of area under ROC curve as 0.5 auc = 0.5 # Add information to the writer about training loss and Area under ROC curve writer.add_scalar('Train/Loss', loss_value, epoch * len(train_loader) + i) writer.add_scalar('Train/AUC', auc, epoch * len(train_loader) + i) if (i % log_every == 0) & (i > 0): # Display the information about average training loss and area under ROC curve print('''[Epoch: {0} / {1} | Batch : {2} / {3} ]| Avg Train Loss {4} | Train AUC : {5} | lr : {6}'''. format( epoch + 1, num_epochs, i, len(train_loader), np.round(np.mean(losses), 4), np.round(auc, 4), current_lr ) ) # Add information to the writer about total epochs and Area under ROC curve writer.add_scalar('Train/AUC_epoch', auc, epoch + i) # Find mean area under ROC curve and training loss train_loss_epoch = np.round(np.mean(losses), 4) train_auc_epoch = np.round(auc, 4) return train_loss_epoch, train_auc_epoch The code for the train loop for one epoch is quite self explanatory, however I would still like to point out a few things. To calculate AUC value, we are using sklearn.metrics.auc_roc_score function. writer is an object of the SummaryWriter class that ships with tensorboard. Evaluation Loop Below is the code that evaluates the model after every epoch. def _evaluate_model(model, val_loader, criterion, epoch, num_epochs, writer, current_lr, log_every=20): """Runs model over val dataset and returns auc and avg val loss""" # Set to eval mode model.eval() # List of probabilities obtained from the model y_probs = [] # List of groundtruth labels y_gt = [] # List of losses obtained losses = [] # Iterate over the validation dataset for i, (images, label) in enumerate(val_loader): # If GPU is available, load the images and label # on GPU if torch.cuda.is_available(): images = [image.cuda() for image in images] label = label.cuda() # Obtain the model output by passing the images as input output = model(images) # Evaluate the loss by comparing the output and groundtruth label loss = criterion(output, label) # Add loss to the list of losses loss_value = loss.item() losses.append(loss_value) # Find probability for each class by applying # sigmoid function on model output probas = torch.sigmoid(output) # Add the groundtruth to the list of groundtruths y_gt.append(int(label.item())) # Add predicted probability to the list y_probs.append(probas.item()) try: # Evaluate area under ROC curve based on the groundtruth label # and predicted probability auc = metrics.roc_auc_score(y_gt, y_probs) except: # Default area under ROC curve auc = 0.5 # Add information to the writer about validation loss and Area under ROC curve writer.add_scalar('Val/Loss', loss_value, epoch * len(val_loader) + i) writer.add_scalar('Val/AUC', auc, epoch * len(val_loader) + i) if (i % log_every == 0) & (i > 0): # Display the information about average validation loss and area under ROC curve print('''[Epoch: {0} / {1} | Batch : {2} / {3} ]| Avg Val Loss {4} | Val AUC : {5} | lr : {6}'''. format( epoch + 1, num_epochs, i, len(val_loader), np.round(np.mean(losses), 4), np.round(auc, 4), current_lr ) ) # Add information to the writer about total epochs and Area under ROC curve writer.add_scalar('Val/AUC_epoch', auc, epoch + i) # Find mean area under ROC curve and validation loss val_loss_epoch = np.round(np.mean(losses), 4) val_auc_epoch = np.round(auc, 4) return val_loss_epoch, val_auc_epoch Most of the things in here are same as train loop. Rest of the code is self explanatory. Our Results With our approach, we were able to get more than decent results achieving an average AUC of 0.90. Given below is our best AUC (on validation set) scores for all the three diseases - ACL = 0.94 - Abnormal = 0.94 - Meniscus = 0.81 The decent amount of increasing AUC is followed by a steady decrease in the validation loss. How to improve upon this? As you can see above for yourselves, we got quite satisfactory results, but there still some unexplored paths that we were curious about. Maybe you guys can try these for us and let us know. - We could have used a different backbone, maybe like Resnet-50 or VGG. - Trying different/more augmentations of the MRI scans. - Training with an SGD optimizer instead of Adam. - Train for more epochs. An Alternate Approach One thing that caught our interest is that why not train a single model for all three diseases, like doing a Multi-Label classification task. So instead of a single neuron at the end, we now have 3 neurons denoting the probability of each class. It should perform theoretically greater than or equal to the model for each disease that we trained above, since classifying one class might help the model to classify other classes as well since backpropogate the loss through all the classes. So test the above claim, we made a single model for all 3 diseases and we will cover this in our next post along with the results 🙂 Conclusion Congratulations on making this far, we know it was a lot to take in, so we will just summarize everything for you guys. - We got to know about the MRNet Challenge Dataset and the task that we had to do in this challenge. - We discussed some differences that this dataset has with the other image classification datasets. - We then trained 3 different models to classify MRI scans for each disease. - We then discussed some possible alternative approaches. - However due to the unique dataset, it wasn’t possible to provide relatable visualizations. Thank you so much for reading this! Until next time [ 1 ] [ 2 ]
https://learnopencv.com/stanford-mrnet-challenge-classifying-knee-mris/
CC-MAIN-2022-21
refinedweb
4,028
55.95
30 April 2012 09:08 [Source: ICIS news] By Clive Ong SINGAPORE (ICIS)--Major sellers of expandable polystyrene (EPS) resins in ?xml:namespace> Offers are being quoted at around $1,700/tonne CFR (cost and freight) NE (northeast) With prices of feedstock SM at above $1,500/tonne CFR China since the second half of April, EPS resins’ makers had to increase prices to maintain adequate margins, market sources said. “Some end-users have accepted the higher prices as demand is likely to continue to strengthen towards the middle of the year after the May Day lull,” said another EPS manufacturer in EPS plants in A seasonal pick-up in activities in the construction sector – a main downstream of EPS – boosted demand in The Chinese markets are closed for public holidays on 30 April to 1 May. “Demand in However, end-users are now starting to find difficulty in passing on the costs of the resins to their end-products. “At current prices, our margins are squeezed as end product prices have not increased,” said an end-user in Over the past decade, EPS prices traded at a low of $672/tonne CFR NE Asia in May 2003 and touched a peak of $1,850/tonne CFR NE Asia in June 2008, ICIS data showed. EPS is made into styro-foam which is used for packaging as well as insulation panels in houses and roads. (
http://www.icis.com/Articles/2012/04/30/9554685/asia-eps-offers-rise-20-30tonne-on-firm-feedstock-demand-weak.html
CC-MAIN-2015-14
refinedweb
235
51.35
Not logged in Log in now Weekly Edition Recent Features Deadline scheduling: coming soon? LWN.net Weekly Edition for November 27, 2013 ACPI for ARM? LWN.net Weekly Edition for November 21, 2013 GNU virtual private Ethernet June 10, 2008 This article was contributed by Diego Pettenò Free Software development is often a fun task for developers, and it is its low barrier to entry (on average) that makes it possible to have so much available software for so many different tasks. This low barrier to entry, though, is also probably the cause of the widely varying quality of the code of these projects. Most of the time, the quality issues one can find are not related to developers' lack of skill, but rather to lack of knowledge of how the tools work, in particular, the compiler. For non-interpreted languages, the compiler is probably the most complex tool developers have to deal with. Because a lot of Free Software is written in C, GCC is often the compiler of choice. Modern compilers are also supposed to do a great job at optimizing the code by taking code, often written with maintainability and readability in mind, and translating it into assembler code with a focus on performance. Code analysis for optimization (which is also used for warnings about the code) has the task of taking a semantic look at the code, rather than syntactic, and identifying various fragments of algorithms that can be replaced with faster code (or with code that uses a smaller memory footprint, if the user desires to do so). This task is a pretty complex one and relies on the compiler knowing about the function called by the code. For instance, the compiler might know when to replace a call to a (local, static) function with its body (inlining) by looking at its size, the number of times it is called, and its content (loops, other calls, variables it uses). This is because the compiler can give a semantic value to the code for a function, and can thus assess the costs and benefits of a particular transformation at the time of its use. I specified above that the compiler knows when to inline a function by looking at its content. Almost all optimizations related to function calls work this way: the compiler, knowing the body of a function, can decide when it's the case to replace a call with its body; when it is possible to completely avoid calling the function at all; and when it is possible to call it just once and thereby avoid multiple calls. This means, though, that these optimization can be applied only to functions that are defined in the same unit wherein they are used. These functions are usually limited to static functions (functions that are not defined as static can often be overridden both at link time and runtime, so the compiler cannot safely assume that what it finds in the unit is what the code will be calling). As this is far from optimal, modern compilers like GCC provide a way for the developer to provide information about the semantics of a function, through the use of attributes attached to declarations of functions and other symbols. These attributes provide information to the compiler on what the function does, even though its body is not available. Consequently, the compiler can optimize at least some of its calls. This article will focus on two particular attributes that GCC makes available to C developers: pure and const, which can declare a function as either pure or constant. The next section will provide a definition of these two kinds of functions, and after that I'll get into an analysis of some common optimizations that can be performed on the calls of these functions. pure const As with all the other function attributes supported by GCC and ICC, the pure and const attributes should be attached to the declarative prototype of the function, so that the compiler know about them when it finds a call to the function even without its definition. For static functions, the attribute can be attached to the definition by putting it between the return type and the name of the function: int extern_pure_function([...]) __attribute__((pure)); int extern_const_function([...]) __attribute__((const)); int __attribute__((pure)) static_pure_function([...]) { [...] } int __attribute__((const)) static_const_function([...]) { [...] } For what concerns the scope of this article, functions can be divided into three categories, from the smallest to the biggest: constant functions, pure functions and the remaining functions can be called normal functions. As you can guess, constant functions are also pure functions, but pure functions cannot be not all pure functions are constant functions. In many ways, constant functions are a special case of pure functions. It is, therefore, best to first define pure functions and how they differ from all the rest of the functions. A pure function is a function with basically no side effect. This means that pure functions return a value that is calculated based on given parameters and global memory, but cannot affect the value of any other global variable. Pure functions cannot reasonably lack a return type (i.e. have a void return type). GCC documentation provides strlen() as an example of a pure function. Indeed, this function takes a pointer as a parameter, and accesses it to find its length. This function reads global memory (the memory pointed to by parameters is not considered a parameter), but does not change it, and the value returned derives from the global memory accessed. strlen() A counter-example of a non-pure function is the strcpy() function. This function takes two pointers as parameters. It accesses the latter to read the source string, and the former to write to the destination string. As I said, the memory areas pointed to by the parameters are not parameters on their own, but are considered global memory and, in that function, global memory is not only accessed for reading, but also for writing. The return value derives directly from the parameters (it is the same as the first parameter), but global memory is affected by the side effect of strcpy(), making it not pure. strcpy() Because the global memory state remains untouched, two calls to the same pure function with the same parameters will have to return the same value. As we'll see, it is a very important assumption that the compiler is allowed to make. A special case of pure functions is constant functions. A pure function that does not access global memory, but only its parameters, is called a constant function. This is because the function, being unrelated to the state of global memory, will always return the same value when given the same parameters. The return value is thus derived directly and exclusively from the values of the parameters given. The way a constant function "consumes" pointers is very different from the way other functions do: it can handle them as both parameter and return value only if they are never dereferenced, for accessing the memory they are referencing would be a global memory access, which breaks the requirements of constant functions. Of course these requirements have to apply not only to the operations in the given function, but also recursively to all the functions it calls. One function can at best be of the same kind of the least restrictive kind of function it calls. So when it calls a normal function it can't be but a normal function itself, if it only calls pure functions it can be either pure or normal, but not constant, and if it only calls constant functions it can be constant. As with inlining, the compiler will be able to decide if a function is pure or constant, in case no attribute is attached to it, only if the function is static (with the exception of special cases for freestanding code and other advanced options). When a function is not static, even if it's local, the compiler will assume that the function can be overridden at link- or run-time so it will not make any assumption based on the body for the definition it may find. Why should developers bother with marking functions pure or constant, though? As I said, these two attributes help the compiler to know some semantic meaning of a function call, so that it can apply higher optimization than to normal functions. There are two main optimizations that can be applied to these kinds of functions: CSE (Common Sub-expression Elimination) and DCE (Dead Code Elimination). We'll soon see in detail, with the help of the compiler itself, what these two consist of. Their names, however, are already rather explicit: CSE is used to avoid duplicating the same code inside a function, usually factoring out the code before branching or storing the results of common operations in temporary variables (registers or stack), while DCE will remove code that would never be executed or that would be executed but never used. These are both optimization that can be implemented in the source code, to an extent, reducing the usefulness of declaring functions pure or constant. On the other hand, as I'll demonstrate, doing so often reduces the readability of the code by obscuring the actual algorithm in favor of making it faster. This does not apply to all cases though, sometimes, doing the optimization "manually", directly in the source code, makes it more readable, and makes the code resemble the output of the compiler more. When talking about optimization, it's quite difficult to visualize the task of the compiler, and the way the code morphs from what you read in the C source code into what the CPU is really going to execute. For this reason, the best way to write about them is to use examples, showing what the compilers generates starting from the source code. Given the way in which GCC works, this is actually quite easy. You just need to enable optimization and append the -S switch to the gcc command line. This switch stops the compiler after the transformation of C source code into assembly, before the result is passed to the assembler program to produce the object file. -S Although I suspect a good fraction of the people reading this article would be comfortable reading IA-32 or x86-64 assembly code, I decided to use the Blackfin [1] assembly language, which should be readable for people who have never studied a particular assembly language. The Blackfin assembler is more symbolic than IA-32: instead of having operations named movl and addq, the operations are identified by their algebraic operators (=, +), while the registers are merely called R1, R2 and so on. movl addq = + R1 R2 Calling conventions are also quite easy to understand: for all the cases we'll look through in the article (at most four parameters, integers or pointers), the parameters are passed through the registers, starting in order from R0. The return value of the function call is also stored in the R0 register. R0 To clarify the examples which will appear later on, let's see how the following C source code is translated by GCC into Blackfin code: int somefunction(int a, int b, int c); void somestringfunction(char *pA, char *pB); int globalvar; void test() { somestringfunction("foo", "bar"); globalvar = somefunction(11, 22, 33); } becomes: .section .rodata .align 4 L$LC$0: .string "foo" .align 4 L$LC$1: .string "bar" .text; .align 4 .global _test; .type _test, STT_FUNC; _test: LINK 12; R0.H = L$LC$0; R0.L = L$LC$0; R1.H = L$LC$1; R1.L = L$LC$1; call _somestringfunction; R0 = 11 (X); R1 = 22 (X); R2 = 33 (X); call _somefunction; P2.H = _globalvar; P2.L = _globalvar; [P2] = R0; UNLINK; rts; .size _test, .-_test Once the parameters are loaded, the function is called almost identically to any other call operation on other architectures; note the prefixed underscore on symbols' names. call Integers, both constant or parameters and variables, are also loaded for calls in the registers. Blackfin doesn't have 32 bit immediate loading, but if the constant to load fits into 16 bits, it can be loaded through sign extension by appending the (X) suffix. (X) When accessing a global memory location, the P2 pointer is set to the address of the memory location... P2 ... and then dereferenced to assign that memory area. Being a RISC architecture, Blackfin does not have direct memory operations. The return value for a function is loaded into the R0 register, and can be accessed from there. The rts command is the return from subroutine, and usually indicates the end of the function, but like the return statement in C, it might appear in any place of the routine. rts return In the following examples, the preambles with declarations and data will be omitted whenever these are not useful to the discussion. Concerning optimization levels, the code will almost always be compiled with at least the first optimization level enabled (-O1). This both because it makes the code cleaner to read (using register-register copy for parameters passing, instead of saving to the stack and then restoring from that) and because we need optimization enabled to see how they are applied. Also, most of the times I'll refer to the fastest alternative. Most of what I say, though, applies also to the smaller alternative when using the -Os optimization level. In any case, the compiler always weighs the cost-to-benefit ratio between the optimized and the unoptimized version, or between different optimized versions. If you want to know the exact route the compiler takes for your code, you can always use the -S switch to find out. One area where DCE is useful is to avoid operations that result in unused data. It's not that uncommon that a variable is defined by an operation, complex or not, and is then never used by the code, either because it is intended for future expansion or because it's a remnant of older code that has been removed or replaced. While the best thing would be to get rid of the definition entirely, users expect the compiler to produce a good result with sloppy code too, and that operation should not be emitted. The DCE pass can remove all the code that has no side effect, when its result is not used. This includes all mathematical operations and functions known to be pure or constant (as neither are allowed to change the global state of the variables). If a function call is not known to be at least pure, it may change the global state, and its call will not be eliminated, as shown in the following code: int someimpurefunction(int a, int b); int somepurefunction(int a, int b) __attribute__((pure)); int testfunction(int a, int b, int c) { int res1 = someimpurefunction(c, b); int res2 = somepurefunction(b, c); int res3 = a + b - c; return a; } Which, once compiled with -O1, [2] produces the following Blackfin assembler: -O1 _testfunction: [--sp] = ( r7:7 ); LINK 12; R7 = R0; R0 = R2; call _someimpurefunction; R0 = R7; UNLINK; ( r7:7 ) = [sp++]; rts; As you can see, the call to the pure function has been eliminated (the res2 variable was not being used), together with the algebraic operation but, the impure function, albeit having its return value discarded, is still called. This is due to the fact that the compiler emits the call, not knowing whether the latter function has side effects on the global memory state or not. res2 This is equivalent to the following code (which produces the same assembler code): int someimpurefunction(int a, int b); int testfunction(int a, int b, int c) { someimpurefunction(c, b); return a; } The. Without discussing legacy code, it is also useful when writing debug code, so that it doesn't look out of place from the use of lots of #ifdef directives. Take for instance the following code: #ifdef #ifdef NDEBUG # define assert_se(x) (x) #else void assert_se(int boolean); #endif char *getsomestring(int i) __attribute__((pure)); int dosomethinginternal(void *ctx, int code, int val); int dosomething(void *ctx, int code, int val) { char *string = getsomestring(code); // returning string might be a sub-string of "something" // like "some" or "so" assert_se(strncmp(string, "something", strlen(string)) == 0); return dosomethinginternal(ctx, code, val); } The assert_se macro has different behavior from the standard assert, as it has side effects, which basically means that the code passed to the assertion is called even though the compiler is told to disable debugging. This is a somewhat common trick, although its effects on readability are debatable. assert_se assert With getsomestring() pure, when compiling without debugging, the DCE will remove the calls to all three functions: getsomestring(), strncmp() and strlen() (the latter two are usually declared as pure by both the C library and by GCC's built-in replacements). This because none of these functions have a side effect, resulting in a very short function: getsomestring() strncmp() _dosomething: LINK 0; UNLINK; jump.l _dosomethinginternal; If our getsomestring() function weren't pure, even though its return value is not going to be used, the compiler would have to emit the call, resulting in rather more complex (albeit still simple, compared with most real-world functions) assembler code: _dosomething: [--sp] = ( r7:5 ); LINK 12; R7 = R0; R0 = R1; R6 = R1; R5 = R2; call _getsomestring; UNLINK; R0 = R7; R1 = R6; R2 = R5; ( r7:5 ) = [sp++]; jump.l _dosomethinginternal; The Common Sub-expression Elimination optimization is one of the most important optimizations performed by the compiler, because it's the one that, for instance, replaces multiple indexed accesses to an array so that the actual memory address is calculated just once. What this optimization does is to find common operations executed on the same operands (even when they are not known at compile-time), decide which ones are more expensive than saving the result in a temporary (register or stack), and then swapping the code around to take the cheapest course. While its uses are quite varied, one of the easiest ways to see the work of the CSE is to look at the code generated when using the ternary if operator. Let's take the following code: if int someimpurefunction(int a); int somepurefunction(int a) __attribute__((pure)); int testfunction(int a, int b, int c, int d) { int res1 = someimpurefunction(a) ? someimpurefunction(a) : b; int res2 = somepurefunction(a) ? somepurefunction(a) : c; int res3 = a+b ? a+b : d; return res1+res2+res3; } The compiler will optimize the code as: _testfunction: [--sp] = ( r7:4 ); LINK 12; R7 = R0; R5 = R1; R4 = R2; call _someimpurefunction; cc =R0==0; if !cc jump L$L$2; R6 = R5; jump.s L$L$4; L$L$2: R0 = R7; call _someimpurefunction; R6 = R0; L$L$4: R0 = R7; call _somepurefunction; R1 = R0; cc =R0==0; if cc R1 =R4; /* movsicc-1b */ R0 = R5 + R7; cc =R0==0; R2 = [FP+36]; if cc R0 =R2; /* movsicc-1b */ R1 = R1 + R6; R0 = R1 + R0; UNLINK; ( r7:4 ) = [sp++]; rts; As you can see, the pure function is called just once, because the two references inside the ternary operator are equivalent, while the other one is called twice. This is because there was no change to global memory known to the compiler between the two calls of the pure function (the function itself couldn't change it – note that the compiler will never take multi-threading into account, even when asking for it explicitly through the -pthread flag), while the non-pure function is allowed to change global memory or use I/O operations. -pthread The equivalent code in C would be something along the following lines (it differs a bit because the compiler will use different registers): int someimpurefunction(int a); int somepurefunction(int a) __attribute__((pure)); int testfunction(int a, int b, int c, int d) { int res1 = someimpurefunction(a) ? someimpurefunction(a) : b; const int tmp1 = somepurefunction(a); int res2 = tmp1 ? tmp1 : c; const int tmp2 = a+b; int res3 = tmp2 ? tmp2 : d; return res1+res2+res3; } The Common Sub-expression Elimination optimization is very useful when writing long and complex mathematical operations. The compiler can find common calculations even though they don't look common to the naked eye, and act on those. Although sometimes you can get away with using multiple constants or variables to carry out temporary operations so that they can be re-used in the following calculations, leaving the formulae entirely explicit is usually more readable, as long as the formulae are not intended to change. Like with other algorithms, there are some advantages to reducing the source code used to calculate the same thing; for instance you can easily make a change directly to the definition of a constant and get the change propagated to all the uses of that constant. On the other hand, this can be quite a problem if the meaning of two calculations is very different (and thus can vary in different ways with the evolution of the code), and just happen to be calculated in the same way at a given time. Another rather useful place where the compiler can further optimize code with CSE, where it wouldn't be so nice or simple to do manually in the source code, is where you deal with static functions that are inlined by the compiler. Let's examine the following code for instance: extern int a; extern int b; static inline int somefunc1(int p) { return (p * 16) + (3 << a); } static inline int somefunc2(int p) { return (p * 16) + (4 << b); } extern int res1; extern int res2; extern int res3; extern int res4; void testfunc(int p1, int p2) { res1 = somefunc1(p1); res2 = somefunc2(p1); res3 = somefunc1(p2); res4 = somefunc2(p2); } In this code, you can find four basic expressions: (p1 * 16), (p2 * 16), (3 << a) and (4 << b). Each of these four expressions is used twice in the somefunc() function. Thanks to the CSE, though, the code will calculate each of them once, even though they cross the function boundary, producing the following code: (p1 * 16) (p2 * 16) (3 << a) (4 << b) somefunc() _testfunc: [--sp] = ( r7:7 ); LINK 0; R0 <<= 4; R1 <<= 4; P2.H = _a; P2.L = _a; R2 = [P2]; R7 = 3 (X); R7 <<= R2; P2.H = _b; P2.L = _b; R2 = [P2]; R3 = 4 (X); R3 <<= R2; R2 = R0 + R7; P2.H = _res1; P2.L = _res1; [P2] = R2; P2.H = _res2; P2.L = _res2; R0 = R0 + R3; [P2] = R0; R7 = R1 + R7; P2.H = _res3; P2.L = _res3; [P2] = R7; R1 = R1 + R3; P2.H = _res4; P2.L = _res4; [P2] = R1; UNLINK; ( r7:7 ) = [sp++]; rts; As you can easily see (the assembly was modified a bit to improve its readability, the compiler re-ordered loads of registers to avoid pipeline stalls, making it harder to see the point), the four expressions are calculated first, and stored respectively in the registers R0, R1, R7 and R3. R7 R3 These kinds of sub-expressions are usually harder to see in the code and also harder to implement. Sometimes they get factored out on their own parameter, but that can be more expensive during execution, depending on the calling conventions of the architecture. As I wrote above, there are some requirements that apply to functions that are declared pure and constant, related to not changing or accessing global memory; not executing I/O operations; and, of course, not calling further impure functions. The reason for this is that the compiler will accept what the user declares the function to be, whatever its body is (as it's usually unknown by the compiler at the call stage). Sometimes, though, it's possible to fool the compiler so that it treats impure functions as pure or even constant functions. Although this is a risky endeavor, as it might truly cause bad code generation by the compiler, it can sometimes be used to force optimization for particular functions. An example of this can be a lookup function that scans through a global table to return a value. While it is accessing global memory, you might want the compiler to promote it to a constant function, rather than simply to a pure one. Let's take for instance the following code: const struct { const char *str; int val; } strings[] = { { "foo", 31 }, { "bar", 34 }, { "baz", -24 } }; const char *lookup(int val) { int i; for(i = 0; i < sizeof(strings)/sizeof(*strings); i++) if ( strings[i].val == val ) return strings[i].str; return NULL; } void testfunction(int val, const char **str, unsigned long *len) { if ( lookup(val) ) { *str = lookup(val); *len = strlen(lookup(val)); } } If the lookup() function is only considered a pure function, as it is, adhering to the rules we talked about at the start of the article, it will be called three times in testfunction(), like this: lookup() testfunction() _testfunction: [--sp] = ( r7:7, p5:4 ); LINK 12; R7 = R0; P5 = R1; P4 = R2; call _lookup; cc =R0==0; if cc jump L$L$17; R0 = R7; call _lookup; [P5] = R0; R0 = R7; call _lookup; call _strlen; [P4] = R0; L$L$17: UNLINK; ( r7:7, p5:4 ) = [sp++]; rts; Instead, we can trick the compiler by declaring the lookup() function as constant (the data it is reading is constant, after all, so at a given parameter it will always return the same result). If we do that, the three calls will have to return the same value, and the compiler will be able to optimize them as a single call: _testfunction: [--sp] = ( p5:4 ); LINK 12; P5 = R1; P4 = R2; call _lookup; cc =R0==0; if cc jump L$L$17; [P5] = R0; call _strlen; [P4] = R0; L$L$17: UNLINK; ( p5:4 ) = [sp++]; rts; In addition to lookup functions on constant tables, this trick is useful with functions which read data from files or other volatile data, and cache it in a memory variable. Take for instance the following function that reads an environment variable: char *get_testval() { static char *cachedval = NULL; if ( cachedval == NULL ) { cachedval = getenv("TESTVAL"); if ( cachedval == NULL ) cachedval = ""; else cachedval = strdup(cachedval); } return cachedval; } This is not truly a constant function, as its return value depends on the environment. Even so, assuming that the environment of the process is left untouched, its return value will never change between calls. Even though it will affect the global state of the program (as the cachedval static variable will be filled in the first time the function is called), it can be assumed to always return the same value. cachedval Tricking the compiler into thinking that a function is constant even though it has to load data through I/O operations, as I said, is risky, as the compiler will think there is no I/O operation going on; on the other hand, this trick might make a difference sometimes, as it allows the expression of functions in more semantic ways, leaving it up to the compiler to optimize the code with temporaries, where needed. One example can be the following code: char *get_testval() { static char *cachedval = NULL; if ( cachedval == NULL ) { cachedval = getenv("TESTVAL"); if ( cachedval == NULL ) cachedval = ""; else cachedval = strdup(cachedval); } return cachedval; } extern int a; extern int b; extern int c; extern int d; static int testfunc1() { if ( strcmp(get_testval(), "FOO") == 0 ) return a; else return b; } static int testfunc2() { if ( strcmp(get_testval(), "BAR") == 0 ) return c; else return d; } int testfunction() { return testfunc1() + testfunc2(); } Considering the above source code, if get_testval() is impure, as the compiler will automatically find it to be, it will be compiled into: get_testval() _testfunction: [--sp] = ( r7:7 ); LINK 12; call _get_testval; R1.H = L$LC$2; R1.L = L$LC$2; call _strcmp; cc =R0==0; if !cc jump L$L$11 (bp); P2.H = _a; P2.L = _a; R7 = [P2]; L$L$13: call _get_testval; R1.H = L$LC$3; R1.L = L$LC$3; call _strcmp; cc =R0==0; if !cc jump L$L$14 (bp); P2.H = _c; P2.L = _c; R0 = [P2]; UNLINK; R0 = R0 + R7; ( r7:7 ) = [sp++]; rts; L$L$11: P2.H = _b; P2.L = _b; R7 = [P2]; jump.s L$L$13; L$L$14: P2.H = _d; P2.L = _d; R0 = [P2]; UNLINK; R0 = R0 + R7; ( r7:7 ) = [sp++]; rts; As you can see, the get_testval() is called twice, even though its result will be identical. If we declare it constant, instead, the code of our test function will be the following: _testfunction: [--sp] = ( r7:6 ); LINK 12; call _get_testval; R1.H = L$LC$2; R1.L = L$LC$2; R7 = R0; call _strcmp; cc =R0==0; if !cc jump L$L$11 (bp); P2.H = _a; P2.L = _a; R6 = [P2]; L$L$13: R1.H = L$LC$3; R0 = R7; R1.L = L$LC$3; call _strcmp; cc =R0==0; if !cc jump L$L$14 (bp); P2.H = _c; P2.L = _c; R0 = [P2]; UNLINK; R0 = R0 + R6; ( r7:6 ) = [sp++]; rts; L$L$11: P2.H = _b; P2.L = _b; R6 = [P2]; jump.s L$L$13; L$L$14: P2.H = _d; P2.L = _d; R0 = [P2]; UNLINK; R0 = R0 + R6; ( r7:6 ) = [sp++]; rts; The CSE pass combines the two calls to get_testval with one. Again, this is one of the optimizations that are harder to achieve by manually changing the source code since the compiler can have a larger view of the use of its value. A common way to handle this is by using global variables, but that might require one more load from the memory, while CSE can take care of keeping the values in registers or on the stack. get_testval After what you have read about pure and constant functions, you might have some concerns about the average use of them. Indeed, in a lot of cases, these two attributes allow the compiler to do something you can easily achieve by writing better code. There are two objectives you have to keep in mind that are related to the use of these (and other) attributes. The first is code readability because sometimes the manually optimized functions are harder to read than what the compiler can produce. The second is allowing the compiler to optimize legacy or external code. While you might not be too concerned with letting legacy code or code written by someone else get away with slower execution, a pragmatic view of the current Free Software world should take into consideration the fact that there are probably thousands lines of code of legacy code around. Some of that code, written with pre-C99 declarations, might be even using libraries that are being developed with their older interface, which could be improved by providing some extra semantic information to the compiler through use of attributes. Also, it's unfortunately true that extensive use of these attributes might be seen by neophytes as an easy solution to let sloppy code run at a decent speed. On the other hand, the same attributes could be used to identify such sloppy code through analysis of the source code. Although GCC does not issue warnings for all of these cases, it already warns for some of them, like unused variables, or statements without effect (both triggered by the DCE). In the future more warnings might be reported if pure and constant functions get misused. In general, like with many other GCC function attributes, their use is tightly related to how programmers perceive their task. Most pragmatic programmers would probably like these tools, while purists will probably dislike the way these attributes help sloppy code to run almost as fast as properly written code. My hopes are that in the future better tools will make good use of these and other attributes on different levels than compilers, like static and dynamic analyzers. [1] The Blackfin architecture is a RISC architecture developed by Analog Devices, supported by both GCC and Binutils (and Linux, but I'm not interested in that here). [2] I have chosen -O1 rather than -O2 because in the latter case the compiler performs extra optimization passes that I do not wish to discuss within the scope of this article. Implications of pure and constant functions Posted Jun 10, 2008 21:31 UTC (Tue) by nix (subscriber, #2304) [Link] Just one point. You say [...] sometimes, doing the optimization "manually", directly in the source code, makes it more readable, and makes the code resemble the output of the compiler more. Possible advantage Posted Jun 11, 2008 0:38 UTC (Wed) by tialaramex (subscriber, #21167) [Link] I'd say, from a little experience that it's an advantage during debugging. The closer the resemblance between source and executable, the more chance you have of understanding what you're seeing in the debugger. If, for example, you used an unnecessary temporary, the debugger cannot show you the value of that temporary. If you call a side-effect free function like strlen() several times the actual code may call it just once, meaning that breaking on entry to strlen() will not do what you expect. I recently deleted some code which read something like as follows... int cache[CACHE_WIDTH]; if (!cache) { log_critical("Could not allocate cache"); } A naive programmer might be quite surprised to see his debugger skip the last three of those lines during single stepping, but in reality not a single byte of code was emitted or executed for them due to a trivial optimisation. Posted Jun 13, 2008 16:17 UTC (Fri) by giraffedata (subscriber, #1954) [Link] I don't really see any advantage of making the code resemble the output of the compiler. Hear, hear. There are two distinct ways to look at a program: 1) instructions to a computer; 2) description of the solution of a computational problem. The primary audience for (1) is a computer; for (2) it's a human. In view (2), a compiler's job is to produce a machine program that computes the solution described by the source code. A lot of programmers like to do that work themselves, but I think that is an inefficient use of brain power (for everyone who works on that code). The closer the resemblance between source and executable, the more chance you have of understanding what you're seeing in the debugger. That's definitely my experience. But there is a middle ground. I write human-oriented code and let the compiler do its job normally. But when I debug at the machine level I add -O0 to the compile options. That's usually described as "don't optimize", but as I consider optimization to be an integral part of the compiler's job, I view it as, "Make the machine program track the source code as closely as possible." Posted Jun 17, 2008 21:38 UTC (Tue) by roelofs (guest, #2599) [Link] "...and watch your bug go away." :-) That seems to happen to me more often than not. Most recently it turned out to be a gcc code-generation bug, 32-bit only, 3.4.x only. (Pretty sure not related to const, though C++ is a different beast, so I'm not 100% certain.) Greg Posted Jun 18, 2008 2:00 UTC (Wed) by giraffedata (subscriber, #1954) [Link] Of course, this is mostly with older compilers. (The new ones are too slow for me). C89 includes blocks you know... Posted Jun 10, 2008 21:36 UTC (Tue) by ballombe (subscriber, #9523) [Link]); .... } } Omitting extra braces is a boon Posted Jun 10, 2008 21:56 UTC (Tue) by jreiser (subscriber, #11027) [Link]). Posted Jun 11, 2008 18:43 UTC (Wed) by bronson (subscriber, #4806) [Link] Doesn't your editor or IDE color the variable declarations? Posted Jun 11, 2008 19:11 UTC (Wed) by jengelh (subscriber, #33263) [Link] That's not the point.. Posted Jun 12, 2008 13:41 UTC (Thu) by IkeTo (subscriber, #2122) [Link] >..) Posted Jun 11, 2008 10:07 UTC (Wed) by Yorick (subscriber, #19241) [Link] No, local (automatic) variables may have non-const initialisers in c89.. Posted Jun 11, 2008 11:31 UTC (Wed) by ballombe (subscriber, #9523) [Link] info gcc 5.18 Non-Constant Initializers ============================== }; /* ... */ } Posted Jun 12, 2008 5:17 UTC (Thu) by eru (subscriber, #2753) [Link]. Posted Jun 12, 2008 10:54 UTC (Thu) by nix (subscriber, #2304) [Link] OK, broken compilers it is. (It's surprising that a language as apparently simple as C can still trip one up with unexpected corners like this after so long.). Interesting reading... Thanks! Posted Jun 11, 2008 0:26 UTC (Wed) by pr1268 (subscriber, #24648) [Link] Thanks to Diego for contributing an interesting and enlightening article. Posted Jun 11, 2008 1:18 UTC (Wed) by MisterIO (guest, #36192) [Link] "As you can guess, constant functions are also pure functions, but pure functions cannot be constant functions." Shouldn't it be : "As you can guess, constant functions are also pure functions, but pure functions can be not constant functions." ? Posted Jun 11, 2008 3:00 UTC (Wed) by rriggs (subscriber, #11598) [Link] Try this for clarity: "but not all pure functions are constant functions." Posted Jun 11, 2008 6:42 UTC (Wed) by nix (subscriber, #2304) [Link] Or "functions declared 'const' are implicitly 'pure', but not vice versa." Posted Jun 11, 2008 1:40 UTC (Wed) by rsidd (subscriber, #2582) [Link] As MisterIO points out, the statement "pure functions cannot be constant functions" is obviously wrong. Another point: in a language like C, surely the pureness of a function depends on all the other functions in a program? For example, you say (accurately, as far as the definition of "pure" goes): "Because the global memory state remains untouched, two calls to the same pure function with the same parameters will have to return the same value." Your example is the strlen() function. But what if some other function has tampered with the contents addressed by your pointer in between the two calls, without modifying the pointer itself? You hint at this issue later when you say of an example: "(The pure function is called just once) because there was no change to global memory known to the compiler between the two calls of the pure function." So the compiler needs to determine whether the memory addressed could have changed. This is not always possible, unless the compiler decides that any intervening non-pure function call could have changed the memory addressed -- a drastic assumption since in practice most such calls are probably harmless. My take is, if you care about pure functions, use Haskell :) Posted Jun 11, 2008 2:21 UTC (Wed) by droundy (subscriber, #4559) [Link] Indeed, the article should have stated that two consecutive calls to the same pure function with the same arguments will give the same result, that's the difference between pure and constant functions (constant functions always return the same for the same input). With regard to your suggestion to just use Haskell, I'd point out that ghc doesn't support CSE. I love Haskell, and it's a great language, but this sort of optimization is not one of its strengths. Laziness interferes with its strengths as a pure functional language in this case. The trouble is that it's hard to determine when using the memory to store the result of a computation is worth paying to avoid recomputing it. For primitive data types the answer is obvious, and we should do CSE, but ghc doesn't perform that analysis, and even if it did, I wouldn't be happy with a CSE pass that only worked on functions returning Int/Double/Bool etc. Of course, dead code elimination comes for free in Haskell, so that does gain us something. But as the article points out, it's really only useful for things like conditional compilation, which is much less of a gain than CSE. Posted Jun 11, 2008 6:45 UTC (Wed) by nix (subscriber, #2304) [Link] A classic example of a common class of functions that can't be optimized over even though you'd think it could be is *printf(). Standard printf() without %n can be freely optimized over, but alas glibc has an extension (user-defined format masks) that means printf() can call arbitrary functions for you. That nobody ever uses this extension is irrelevant: the compiler must assume that you might be. Std C Posted Jun 11, 2008 8:16 UTC (Wed) by cate (subscriber, #1359) [Link] FYI the attribute concept, along with "pure" and "const" sttribute should enter in next C1x standard, but possibly with other name. See Posted Jun 11, 2008 11:44 UTC (Wed) by Yorick (subscriber, #19241) [Link]. Posted Jun 12, 2008 11:57 UTC (Thu) by ekj (subscriber, #1524) [Link] This low barrier to entry, though, is also probably the cause of the widely varying quality of the code of these projects. I hope you're not suggesting that projects where entry is more closed, such as proprietary development do not experience "widely varying quality" of code. Posted Jun 12, 2008 14:26 UTC (Thu) by Flameeyes (subscriber, #51238) [Link] > I hope you're not suggesting that projects where entry is more closed, > such as proprietary development do not experience "widely varying > quality" of code. In my experience, it isn't much varying... it usually is so bad it's not even funny to think of it twice. On the other hand, proprietary software that is written badly is more easily discarded, as it fails to work at all in most cases - unless it's so proprietary you can't switch for a different one. On Free Software, badly written software can easily be patched up so that it works, and is kept in circulation of months and years. I think the problem is in the big numbers, there is probably way more Free Software than proprietary software being developed by amateurs. Posted Jun 12, 2008 20:04 UTC (Thu) by mtorni (subscriber, #3618) [Link] Posted Jun 12, 2008 20:38 UTC (Thu) by quotemstr (subscriber, #45331) [Link] The key here is "check in". If somebody can check a change in, he can wreak all sorts of havoc, much of it even more subtle than function attribute changes. Besides, using function attributes like this to be a sort of post-benchmarking magic that undergoes strict code review. Constant function declaration: is it cheating? Posted Jun 13, 2008 16:31 UTC (Fri) by giraffedata (subscriber, #1954) [Link] Is it really cheating or trickery to declare a function constant even though it accesses the larger environment? Or is it just helping the compiler properly classify the function when it doesn't have enough information to do it itself? All programs make assumptions about their environment. For example, that there isn't another thread running around overwriting the memory in which the program's variables are stored. Where the user doesn't assure that assumed environment, the program's behavior is arbitrary. So isn't it legitimate to make the environmental assumption that a certain file's contents don't change while the program runs? With that assumption, a function whose result depends upon the contents of the file is constant. Posted Jun 14, 2008 6:52 UTC (Sat) by jimparis (subscriber, #38647) [Link] > Is it really cheating or trickery to declare a function constant even though it accesses the larger environment? Or is it just helping the compiler properly classify the function when it doesn't have enough information to do it itself? Yes! Well, maybe it's not cheating, but it's definitely lying, and you'd better start expecting your code to start failing in mysterious ways when you add more stuff to your program or you switch to a new or different compiler. Really, if you want to help the compiler, help it directly, don't try to trick the optimizer into doing what you want. If you're concerned about if (count(foo)) bar(count(foo)); where count() isn't really constant, but in this one case you know for a fact that it the result won't change, just write tmp = count(foo); if (tmp) bar(tmp); > All programs make assumptions about their environment. For example, that there isn't another thread running around overwriting the memory in which the program's variables are stored. There's a difference between valid documented assumptions (that my kernel separates processes properly) and non-guaranteed observations (hey, if I lie to the compiler here then it seems to generate better code). In the first case, a problem is clearly a bug in the kernel and so it would be fixed if it showed up. In the second, it's your own bug when the assumption proves wrong. > So isn't it legitimate to make the environmental assumption that a certain file's contents don't change while the program runs? With that assumption, a function whose result depends upon the contents of the file is constant. Aah, now that's different.. My concern with the article it suggests you might declare a function constant, even when it's not(!), if it helps generate better code. And that's definitely cheating and just asking for problems. Posted Jun 14, 2008 16:34 UTC (Sat) by giraffedata (subscriber, #1954) [Link]. That's all I'm suggesting. The "constant" attribute must always mean the function is constant, not that you want the compiler to treat it as if it were constant. (It's none of your business what the compiler does with the information). The only question is the definition of constant. The article's use of the term "cheating" suggests to me a definition of constant that is exactly what the compiler would determine on its own, e.g. if the function accesses global memory, it's not constant. But the examples given are of cases where the function really is constant under the broader definition. Pure and constant external functions Posted Jun 13, 2008 16:38 UTC (Fri) by giraffedata (subscriber, #1954) [Link] In addition to saving one from having to code every function declaration twice, it would let the compiler detect attributes like pureness that I'm too lazy to declare manually. Alternatively, the compiler might warn when the declaration doesn't have all the apparently applicable attributes. Posted Jun 19, 2008 19:38 UTC (Thu) by olecom (guest, #42886) [Link] > Has anyone thought of having the compiler generate header files? Posted Jun 21, 2008 3:28 UTC (Sat) by giraffedata (subscriber, #1954) [Link] Has anyone thought of having the compiler generate header files? Has anyone thought of having the compiler generate header files? The reference is to a posting about kbuild and dependencies and build issues. I don't see any mention of having the compiler generate header files. Posted Jun 13, 2008 16:44 UTC (Fri) by etienne_lorrain@yahoo.fr (guest, #38022) [Link] Sometimes, I would also like an attribute to say "no content of parameter-pointers" are changed by a function, in fact nothing important is changed in memory by a function. The main example would be printf(), or any function name used to print a debugging string - with variable number of argument that cannot be prefixed by "const" (we cannot compile "void dbgprintf (const char *, const ...);"). Then, if the compiler see some code like: if (error) dbgprintf ("here is the context address %x %s %s\n", &s1, s1, s2); it has anyway to update in memory all content of pointer passed, but it does not have to reload all those values to register after the call. The main problem of those dbgprintf() is that there may be a lot of them, and it would be nice if they did not interfer too much with execution time or other optimisation. True that for x86 there is not enough register to notice the problem, but it is bad to see a lot of unused register on other arch because the compiler has to keep in sync the memory and registers too often. Posted Jun 15, 2008 12:34 UTC (Sun) by nix (subscriber, #2304) [Link] Thanks to rarely-used glibc extensions, printf() and friends must be assumed by optimizers to be able to change *anything*, both because the information could be sent to a custom stream (-> calling arbitrary functions), and because they can contain specifiers for which custom conversion have been defined (-> calling arbitrary functions). Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/285332/
CC-MAIN-2013-48
refinedweb
8,083
56.49
In a previous post, I mentioned using Amazon Linux EC2 to create AWS Lambda compatible packages. While this works, another way to create packages that can run on AWS Lambda is to create them locally via a Docker Amazon Linux image . One downside I’ve found to this method is that sometimes these images are incompatible with some of the system files in the Lambda runtime, but at the time of writing this, I found the docker-lambda project to both create compatible lambda linux images as well as a great way to shorten lambda development cycles by emulating a lambda environment you can invoke locally. To start, here are the instructions to build a Python 3.6 docker lambda image (of course, make sure you have Docker installed): git clone # clone project from git cd docker-lambda/ # go to project directory npm install # install project node.js dependencies cd python3.6/build # go to the python Dockerfile build docker build . # build the image as per instructions in # the Dockerfile (takes time...) docker images # show docker images, note the id of the built image docker tag 32e7f5244861 lambci/python3.6:build # name and tag the built docker image using its id docker run -it lambci/python3.6:build /bin/bash # create a new container based on new image and # run it interactively (/bin/bash command is needed # because CMD ["/bin/bash"] is not included as the # last line in the Dockerfile exit # leave docker container docker ps -a # locate the newly created container from the above # above command, and note the name given to it docker start -i vibrant_heyrovsky # resume interactive session with the container # using the container name found above So, now you have a console to a compatible Amazon Linux shell. To create lambda functions, you basically zip all the relevant files and upload to AWS lambda and after that, you can remotely invoke the required function on Lambda . My current method will be to have two console windows – one is the above console to the docker bash, and another is a console of the host operating system (whatever OS you are running Docker on). This way, you can easily zip the lambda packages in the Docker console, and then copy them from your OS console (and from there upload them to AWS Lambda) Setting up an AWS lambda user ∞ Now that we have a local Lambda-compatible environment, let’s create an actual AWS user that will be used to upload and run the packages that we’ll create in our local Lambda-compatible Docker container. To run the following, make sure you first have the AWS CLI installed on your OS. Let’s create our lambda user using the above CLI. Of course, the assumption is that you already have a credentials file in your .aws directory which enable you to do the next part. If not, you’ll need to create a user with the appropriate privileges from the AWS IAM console, get that user’s aws key id and aws secret, then locally run aws configure and follow the instructions. This will create your initial credentials file. We’ll now create a user that we’ll use for AWS lambda. The information here is based on this excellent simple tutorial with some minor changes to suit this one. # Create a user group 'lambda_group' $ aws iam create-group --group-name lambda_group # Create a user 'lambda_user' $ aws iam create-user --user-name lambda_user # Add our user to the group $ aws iam add-user-to-group --user-name lambda_user --group-name lambda_group # Create a password for this user $ aws iam create-login-profile --user-name lambda_user --password _your_password_here_ # Create a CLI access key for this user $ aws iam create-access-key --user-name lambda_user # Save user's Secret and Access Keys somewhere safe - we'll need them later Now that we have a user, let’s authorise this user to run lambda functions, copy s3 files etc. To do this, we create a policy and grant that policy to the user we just created. For that, create a file with the following json, and name it lambda_policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "iam:*", "lambda:*", "s3:*" ], "Resource": "*" }] } now grant the above policy to our lambda user: aws iam put-user-policy --user-name lambda_user --policy-name lambda_all --policy-document Now, let’s configure our AWS CLI so that we can perform actions as lambda_user $ aws configure --profile lambda_user > AWS Access Key ID [None]: <your key from the above create-access-key command> > AWS Secret Access Key [None]: <your secret from the above create-access-key command> > Default region name [None]: us-east-1 (or whatever region you use) > Default output format [None]: json # AWS stores this information under [lambda_user] at ~/.aws/cretentials file Finally, we need to create a role which is needed when creating a lambda function and determines what actions the lambda function is permitted to perform. To create the role, create a file named basic_lambda_role.json with the following json text: { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "AWS" : "*" }, "Action": "sts:AssumeRole" }] } Then create the role via the CLI: $ aws iam create-role --role-name basic_lambda_role --assume-role-policy-document The above will return the role identifier as an Amazon Resource Name (ARN), for example: arn:aws:iam::716980512849:role/basic_lambda_role . You’ll need this ARN whenever you create a new lambda function so hold on to it. We now have all the ingredients to create, update and invoke AWS Lambda functions. We’ll do that later, but first – let’s get back to creating the code package that is required when creating a lambda function. The code package is just a zip file which contains all your code and its dependencies that are uploaded to lambda when you create or update your lambda function. The next section will explain how to do this. Creating a AWS Lambda code package ∞ We’ll start with creating and invoking a python package that has some dependencies, and then show how to create a package that can run arbitrary executables on AWS Lambda Creating a local Python 3.6 package So now, let’s make a package example that will return the current time in Seoul. To do this, we’ll install a python module named arrow, but we’ll install it in a local directory since we need to package our code with this python module. To do this, open your docker console that is running the lambda compatible environment and: cd /var/task # move to the base lambda directory in the docker image mkdir arrowtest # Create a directory for the lambda package we're going to make cd arrowtest # move in to the directory pip install arrow -t ./ # install the arrow python library in this directory ls # take a look at what has been added next, we’ll create our lambda function which we’ll later invoke. (you might want to install an editor of your choice on the docker console using yum, for example via yum install vim). So, let’s create arrowtest.py : import sys import arrow def lambdafunc(event, context): utc = arrow.utcnow() SeoulTime = utc.to('Asia/Seoul') return "The time in Seoul is: %s" % (SeoulTime.format()) #just for local testing if __name__ == "__main__": print(lambda_func(None, None)) and test that it works locally in the docker shell: python arrowtest.py Ok, so we have the python file with the lambda function, we have the dependencies, now all we need to do is zip the contents of the entire directory and add this zip file as a parameter to the lambda function creation. This would work, however with larger Python libraries, you might want to remove certain files that aren’t being used by you python code and would just waste space on lambda. My rather primitive but effective method for doing this is cloning the complete directory and start removing files that seem pointless until something breaks, and then I put them back and try other things until I’m happy with the size reduction. In the cloned directory, I actually rename directories before removing them as it’s easier to run the script after renaming and rename them back if we see that the directory is needed by the script. Let’s do it for this example: cd .. pwd # should be /var/task cp -r arrowtest arrowtest_clone cd arrowtest_clone ls # let's see what's in here du -hd1 # note how much space each directory takes (1.2MB) Installed python libraries can contain many directories and files of different types. There are python files, binary dynamic libraries (usually with .so extensions) and others. Knowing what these are can help decide what can be removed to make the zipped package more lean. In this example, the directory sizes are a non issue, but other python libraries can get much larger. an example of some stuff I deleted rm -rf *.dist-info rm -rf *.egg-info rm six.py rm -rf dateutil # we're not making use of this - it's just wasting space # test that the script is still working after all we've deleted python arrowtest.py test du -hd1 # we're down to 332K from 1.2MB and the script still works. now, let’s package this directory in a zip file. if you don’t have zip installed on your docker container yet then yum install zip and now after removing unneeded files and dependencies, let’s pack our directory: zip -r package.zip . now that we have the package on the docker container, let’s copy it to our OS from our OS console: docker cp vibrant_heyrovsky:/var/task/arrowtest_clone/package.zip . (replace vibrant_heyrovsky with the name of your docker image). So we have a zipped package that we tested on docker – let’s create a lambda function from this package and invoke it (replace arn:aws:iam::716980512849:role/basic_lambda_role with your own ARN): aws lambda create-function --region us-east-1 --function-name lambdafunc --zip-file fileb://package.zip --role arn:aws:iam::716980512849:role/basic_lambda_role --handler arrowtest.lambdafunc --runtime python3.6 --profile lambda_user and finally, let’s see if we can get AWS lambda to tell us the current time in Seoul: aws lambda invoke --invocation-type RequestResponse --function-name lambdafunc --region us-east-1 --log-type Tail --profile lambda_user out.txt #invoke the function cat out.txt # check the result the file out.txt contains the return value of the called lambda function. Next we’ll see how to update to a new package and how to pass parameters to the lambda function. To be continued…
https://nocurve.com/2018/11/13/aws-lambda-running-python-bundles-and-arbitrary-executables/
CC-MAIN-2021-21
refinedweb
1,777
57.91
A quick job board with React, Meteor and Material UI I recently wrote a mini-tutorial “How to write a job app in 48 lines of code” — and here it is again, but using React JS: Facebook’s javascript love-child. This is slightly longer than 48 lines of code, but mainly because I also integrated a Material UI interface. I actually learned React so I could use this UI library. React is not strictly necessary with the Meteor framework, as Meteor is reactive out-of-the-box — but I like Google’s Material UI style, and it required React, so here we are. Getting Started Ok, so as before let’s just add our packages to a new Meteor project: meteor create projectname meteor add react coffeescript http cosmos:browserify mquandalle:jade meteorhacks:npm Clearly we need the react package, and we also need the browserify and npm packages for the Material UI library. To pull from Github API we need the http package, and I also added coffeescript and jade — because I prefer them to writing html and javascript. Main Template Our main page index.jade is fairly empty — we simply need to add the div which react hooks into — thusly: head title ReactJobs body #App React will render the html to the #App div. Startup.jsx Now let’s create our startup javascript file. Jobs = new Mongo.Collection("jobs"); if (Meteor.isClient) { Meteor.startup(function () { // injectTapEventPlugin(); React.render(<App />, document.getElementById("App")); }); } I created a collection to hold the jobs data, and in Clientside Startup we add base React.render method and React will render the html into the #App div we placed. App.jsx Now onto our app component. This is the largest file in the whole App, weighing in at 63 lines. I place all components seperately in a components folder inside the client folder: client/components. This is quite verbose, so maybe grab a coffee. const { Paper, List, ListItem, ListDivider, Avatar, RaisedButton, AppBar, FlatButton, IconButton, NavigationClose } = mui; These are required to use the Material UI library. They refer to the components we will be using. const ThemeManager = new mui.Styles.ThemeManager(); And here we attach the mui styles to Thememanager. So far, so good. App = React.createClass({ mixins: [ReactMeteorData], Our app is a React class, and we attach our own methods to it. To use React with MeteorData we need to use their Mixin — which is apparently a temporary thing. childContextTypes: { muiTheme: React.PropTypes.object }, getChildContext: function() { return { muiTheme: ThemeManager.getCurrentTheme() }; }, These two methods are required to use MUI. I’m not going to go into details about the MUI library, if you want more info about it, you can check out the link at the top of this post. getMeteorData() { return { jobs: Jobs.find({}).fetch() } }, Here is our method to fetch the Jobs data we will pull from Github jobs API. componentDidMount() { {this.loadJobs()} }, loadJobs() { loadJobs = Meteor.call("loadGithubJobs"); }, When the component loads for the first time we will fetch the jobs list from Github: this could be written in one method, but I like to keep things seperate, so the second method calls the server-side meteor method that pulls from Github API. renderJobs() { return this.data.jobs.map((job) => { return <Job key={job._id} job={job} />; }); }, Using the new Ecmascript, or latest Javascript, functions we can quickly map the data to each instance of our Job component (which we will create shortly) and return a list of react/html components. render() { return ( <div className="wrapper"> <AppBar title="Github Jobs" /> <div className="container"> <List subheader="Latest Github Jobs"> {this.renderJobs()} </List> </div> </div> ); } }); Finally, we render the main App component. In the midst of this you will notice this.renderJobs() which will return the Material List components with our data filled in each one. Job Component Now we need the React Job Component we call in the above script: const { FontIcons, IconButton, Icons, List, ListItem, ListDivider, Avatar } = mui; I’ve been a bit lazy and haven’t deleted the Material components I have not used. I left them there in case I was to expand this app. But you only need to include the components you want to use. // Task component - represents a single todo item Job = React.createClass({ render() { return ( <ListItem primaryText={ this.props.job.title } leftAvatar={ <Avatar src={ this.props.job.company_logo }/> } secondaryText={ this.props.job.location } href={this.props.job.company_url} rightIcon={ <IconButton iconClassName="muidocs-icon-custom-github" tooltip="GitHub" /> } /> ); } }); This React class will render our List Item component and fill it with the data from our collection and is available in “props”. Pull in the Github data from the API On the server, we need the method(s) which will pull the Github jobs data via http calls. I write this in Coffeescript. Meteor.methods loadGithubJobs: -> @unblock() Meteor.http.call "GET", "", (error,result) -> if(error) console.log error if(result) Meteor.call "writeJobs", (result.data) writeJobs: (jobs) -> Jobs.remove({}) Jobs.insert job for job in jobs Here we have two methods: the first one “gets” the jobs list from the API, and, if successful, calls the second method (writeJobs) to push them into our collection. I actually remove the previous Jobs loaded there as a temporary fix for this demo, so we also have the called data and no “old stuff”. I’m not going into the above methods, they are explained in the previous job app post I referenced at the top of this post which you can look into. Some other bits required for Material UI If you want to use MUI you will also have to add a couple of package files. You will need to add the following (for example) to your root package.json file: { "material-ui": "0.10.1", "externalify": "0.1.0" } And in your client/lib folder you will need a couple of files for browserify. Your json options in the file app.browserify.options { "transforms": { "externalify": { "global": true, "external": { "react": "React.require" }}}} And app.browserify.js mui = require('material-ui'); injectTapEventPlugin = require('react-tap-event-plugin'); More info about this at React and Meteor. Ok and that’s it. I’ll admit, this is early stuff — syntax changes, and the React in Meteor mixin, along with the browserify hacks are patches (in my opinion). But, it’s how it works right now. So, anyway, you should have a nice responsive, pretty web app like this running in localhost: And here is a live version If you have any errors — well, I’m not surprised — this stuff is all pretty new and rough. But if it helps you can clone my repo. A word of warning though: React is still very much in its infancy, and I’ve already found some compiling errors once packages were updated — you know how touchy these dependencies can be… You can ping me on twitter @derrybirkett if you really want to. For now, onward! Originally posted on my blog:
https://medium.com/@derrybirkett/how-to-code-a-quick-job-board-with-react-meteor-and-material-ui-d4ab3b619ec3
CC-MAIN-2017-34
refinedweb
1,155
57.37
Red Hat Bugzilla – Bug 160660 Installer, text mode: ImportError: No module named email.Utils Last modified: 2007-11-30 17:11:07 EST +++ This bug was initially created as a clone of Bug #157709 +++ New information: I repeated the steps described bellow with the official FC4 DVD image and got the same result.: Same here: I tried to upgrade FC3 to FC4 using a HD install started from GRUB, using the CD ISOs. I get: ... File "/usr/lib/python2.4/urllib2.py", line 1131, in open_local_file import email.Utils ImportError: No module named email.Utils Anything I can do about that, other than waiting for FC5 or trying my luck with apt or yum? Here's a workaround that appears to work for me (but I didn't get beyond the disk space check; I need to clean up before retrying the upgrade). WARNING: This will work for UPGRADES (from sufficiently recent RHL/Fedora releases) ONLY! mkdir /mnt/sysimage/tmp/fc4installpython2.4 cp -r /usr/lib/python2.4/* /mnt/sysimage/tmp/fc4installpython2.4 nano /mnt/sysimage/tmp/fc4installpython2.4/urllib2.py * remove "import email.Utils" * change "modified = email.Utils.formatdate(stats.st_mtime, usegmt=True)" to "modified = '2000-01-01'" (in order to get rid of the use of email.Utils, which is not available in Anaconda) /mnt/sysimage/bin/mount -- bind /mnt/sysimage/tmp/fc4installpython2.4 /usr/lib/python2.4/ This wierd copying and rebinding hack is needed because the file system is mounted read-only from the ISO. Also make sure you use /mnt/sysimage/bin/mount, because the Busybox builtin mount doesn't understand the --bind option. Someone at RH will have to rebuild a working ISO for a real fix. I should add that all these commands need to be entered on the console you get by pressing Alt+F2, when Anaconda is on the window asking you how to proceed with the GRUB configuration. (The window before is too early, because you need /mnt/sysimage, the one afterwards is too late, because it is already the backtrace.) I can confirm that my workaround allowed me to upgrade FC3 to FC4, but I needed to run the installer 3 times to get the upgrade to complete (it spontaneously rebooted during the installation the first 2 times). But that is probably a separate issue, so I opened another bug report. See bug 160944 for details. I upgraded my installation using network install from public ftp server. It worked in the text mode without any problems. So the bug shows itself only when using iso images stored on the harddrive. If I would know that I would not even bother to get the DVD image via torrent. *** Bug 161178 has been marked as a duplicate of this bug. *** I can confirm that Kevin Kofler's workaround works also for reinstalls with slight modifications. I got the Python trace screen after the Root Password screen. I istalled from hard drive without any other boot media, using kernel and initrd from the isolinux on FC4-i386-DVD.iso. Since I didn't get sysimage, I used "/tmp/fc4installpython2.4" folder instead. I also had to use vi instead of nano. To make the --bind trick I mounted my previous FC2 partition and used FC2's mount. Anaconda didn't crash any more but I got an (unrelated) reboot at the first boot. Too bad FC4 is so buggy. It seems like every time I make a fresh install it gets harder because of new bugs. Fixed in CVS. *** Bug 170683 has been marked as a duplicate of this bug. *** *** Bug 170576 has been marked as a duplicate of this bug. *** *** Bug 174448 has been marked as a duplicate of this bug. *** *** Bug 180760 has been marked as a duplicate of this bug. ***
https://bugzilla.redhat.com/show_bug.cgi?id=160660
CC-MAIN-2018-17
refinedweb
631
74.79
Builds the new DN of an entry. #include "slapi-plugin.h" char * slapi_moddn_get_newdn(Slapi_DN *dn_olddn, char *newrdn, char *newsuperiordn); This function takes the following parameters: The old DN value. The new RDN value. If not NULL, will be the DN of the future superior entry of the new DN, which will be worked out by adding the value in newrdn in front of the content of this parameter. This function returns the new DN for the entry whose previous DN was dn_olddn. This function is used for moddn operations and builds a new DN out of a new RDN and the DN of the new parent. The new DN is worked out by adding the new RDN in newrdn to a parent DN. The parent will be the value in newsuperiordn, if different from NULL, and will otherwise be taken from dn_olddn by removing the old RDN. (The parent of the entry will still be the same as the new DN). You must free the DN returned using slapi_ch_free_string() .
http://docs.oracle.com/cd/E19693-01/819-0996/aaijk/index.html
CC-MAIN-2015-06
refinedweb
169
80.01
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives I just posted a new article for imperative-loving people about why recursive programming is better than imperative programming. Here's the article: Mastering Recursive Programming I thought you all might be interested, and I am interested in any comments, criticisms, additions, subtractions you all have. That being said, I probably shouldn't pick the nits when the message - that recursive programming is a good thing - is a good one and is shown pretty well :-). Unfortunately, Eric did not include applications of the Gamma function in his article (I actually used to be the webmaster for Wolfram Research and thus Mathworld when it was taken over -- Eric is a very awesome guy). If you know of any, let me know. Anyway, this is a series of articles trying to show C programmers that there is more to life than normal imperative programming. It's preceded by articles on higher-order programming, list manipulation, and memory management, and the next one will be on meta-programming. However, it appears that C has a richer functional tradition than I first thought. Eugenio Vilar showed me how ?: can be used to mimic the nature of many functional programs: int Factorial (int n) { return ((n == 1) ? 1 : (n == 0) ? 1 : Factorial(n - 1) * n); } -- Learn to program using Linux assembly language This is abusive, and I'd appreciate it if people not do things of this nature. Sorry, it's my signature I use pretty much everywhere. If that is considered abusive here, I'm sorry and I'll stop, but you are the first person in any message board or newsgroup that complained. Saving the day, however, is the fact that I had mistyped the URL in my original signature, and the link does not go where I intended! At the end of the article you claim that iteration is more intuitive than recursion. I would dispute this. I find recursion far more natural and simple than iteration. Iteration is perceived as intuitive precisely because most learn in their introductions to Computer Science. But if more curricula introduced students to computer science with books like The Little Schemer or even SICP/CTM, it would likely seem the other way around. Otherwise a nice article. Recursion, however, usually utilizes the explicit sturcture of the inductive definition. The only unintuitive thing, really, is that these definitions can acutally be executed and produce the wanted results. We are simply unaccustomed to this level of epxressiveness... But, in my experience, there are some cases (in interactive programs - processes) when you have a choice between mutable state and iteration on one hand and referential transparency, recursion AND control operators on the other hand (think callcc, shift/reset, prompt/control, etc.). In most cases this choice is done for you by the language designer (hint: no control operators in Java). So one goes with state and iteration. The programming in the large issues require further discussion (and were discussed here previously, of course...) Andris pretty much nails it on the head with the reference to mutable states. I'd like to add the following points: for customer in customers: ... only to discover I have just clobbered the previous definition of customer. Recursion is much more intuitive. From dictionary.com: intuition, n. The act or faculty of knowing or sensing without the use of rational processes; immediate cognition. Knowledge gained by the use of this faculty; a perceptive insight. A sense of something not evident or deducible; an impression. Such claims (there are several in this thread) are far-fetched. Recently some individuals made similar claims (i.e., that they found recursion simpler than interation) in comp.lang.prolog. Such people are very much in the minority. Recursion is rarely perceived as intuitive, especially by novices. To claim otherwise is silly and unrealistic. Recursion is taught in college, and usually only in CS courses, and for good reason. I myself have been writing recursive code for over 20 years, since I was a child, and will frequently choose recursive solutions over iterative ones, but I would still never claim that in general recursion is more intuitive than iteration. ...are an attempt to take a larger problem and divide it up into more manageable chunks. The truly intuitive way to do things is neither iterative or recursive - these are higher orders of abstraction. Rather, it's to lay out the solution in a purely sequential manner. From the viewpoint of mathematical thinking, I would think think that recursion is a much better fit. But then it depends what maths you are acquanted with. I personally think some problems are more intuitive using iteration and others are more intuitive using recursion. The recursive function for Towers of Hanoi is rather hard to come up with. But then the iterative technique is also not obvious. The recursive function for Towers of Hanoi is rather hard to come up with. But then the iterative technique is also not obvious. That's the point! I can't even think of an iterative method that obviously works (though it's easy to invent one that works, but whose reason for working isn't totally obvious, just by fumbling about). Given a problem that lends itself quite naturally to a recursive solution (and uses principles no more complex than "if doing X makes my problem simpler then I should do X"), and where an iterative solution is slightly cryptic, they'll simply find no solution rather than the recursive one. Recursion isn't intuitive, it's more intuitive. Heck, Computer Science in general isn't intuitive. Just because we are "in the minority" doesn't mean we're wrong either. And your claim that it is taught only in CS courses is wrong- it's at least taught in Math courses too, where you won't find iteration at all. Um...'iteration' is a piece of mathematical terminology that plenty of people learn from mathematics courses. Any time an algorithm appears in a mathematics course it's likely to be iterative. For example Newton's method will typically be presented with some pseudocode that looks like a while loop containing a variable assignment (sometimes the variable assignment will be in a modified form like x_{n+1}=f(x_n) rather than x:=f(x)). Out of the people I know who studied mathematics, and didn't spend their time programming (so weren't exposed to recursion as a programming style), many more are familiar with iteration than recursion. I don't think it'd enter their heads to write, say, a dot product between two vectors recursively rather than iteratively. If I showed the recursive approach to this to my non-programming mathematical friends they'd probably thing I was being deliberately perverse. How do you read a book? You read the first page, and then you read the rest of the book, right? Recursion rests on the absolutely intuitive idea that the structure of a procedure should follow the structure of the data it consumes (or produces). Inductively defined data leads to recursive procedures. You have to work really hard to avoid it. After all, there are only two ingredients to recursion (each of which is useful on its own, by the way): breaking a problem into smaller subproblems, and recognizing that some problem is similar to a previous problem. If recursion is not intuitive, then it would seem that one or the other of these ingredients is unintuitive. Since every person I know does both of these things (I know, I know, that's not quite dictionary.com's standard for intuition), I don't buy it. Intuitiveness is not transitive in the sense that a solution may involve intuitive ideas and yet itself not be intuitive. People who know how to break problems down into smaller ones and who can recognise that a subproblem is similar to a previous problem will consistently fail to find a recursive solution to the Tower of Hanoi problem. It's pretty easy to test. Just go out and grab some random people who haven't been educated explicitly on the subject of recursion and see how long it takes them to solve the problem recursively. You may find that you can even ask highly technical (non-computer scientist and non-mathematician) people and they still won't think of it until recursion is taught to them. Don't have to write computer programs. They are probably also unable to find a solution to the SICP change problem. And without a formal education, I don't see most people solving integrals and finding a way to solve them 'more intuitive' than another. ...which is that A is intuitive and B is intuitive doesn't imply that something built solely from A and B is intuitive. Maybe you are suggesting that this proposition is true if we restrict ourselves to people who program computers. We never said recursion was intuitive. Recursion isn't intuitive, it's more intuitive. So we have So in summary, neither recursion nor iteration are intuitive?!8-)) Kevin Millikin remarks: How do you read a book? You read the first page, and then you read the rest of the book, right? Kevin, your penance is to ask 20 persons "How do you read a book?" and report back how many respond with your second sentence above. My answer would be "You read page 1, then page 2 and so on until you've read the last page." This is akin to what Chris Rathmann said "[the truly intuitive way to do things i]s to lay out the solution in a purely sequential manner." I also agree with Chris Rathmann when he says "I personally think some problems are more intuitive using iteration and others are more intuitive using recursion." That is, once one understands both iteration and recursion, some algorithms may be more clearly expressed with one or the other. But I don't believe at all that recursion is intuitive to most people. And my experience is that one must study recursion some time (days at least) and understand the underlying mechanism to some degree before being completely comfortable with it. In contrast iteration can be grasped fully in an afternoon. So in summary, neither recursion nor iteration are intuitive?!8-)) Yep, and until people stop taking four year courses to learn how to do one or the other and start doing it without any instruction/learning process at all, I think it'll stay that way. It's also definitely true that some problems are more naturally expressed using iteration. I would simply argue that most are not. It's also definitely true that some problems are more naturally expressed using iteration. I would simply argue that most are not. Are you claiming then that the number of problems that "are more naturally expressed using iteration" is finite? Well, that's not what the study posted here shows. they're opinion is the same as mine, that the intuitive varies between people - that there may be people more suited to thinking recursively than iteratively. In which case finding recursion more intuitive than iteration is just their apprehension of the relative understandability of the two concepts. The idea further down that a purely sequential way of dealing with the problem is the truly intuitive way of dealing with the problem may be correct, then again it may be perceived as just another preferrence. Recursion is definately a lot more elegant in many cases. And for those with the right theoretical training, it's also intuitive. But on a basic level it really is not as intuitive as iteration. Think of a little kid's idea of an algorithm - you have some really dumb human infront of you and you have to tell them how to do something step by step. The intuition is all about giving instructions. 'Keep moving an object from one pile to the other, and adding one to your count, until the pile is empty. Tell me your count' vs 'Move one object onto the other pile, then count the rest. Add one to the answer and tell me.' Answer: but you can't tell me to 'count the rest', you haven't finished telling me how to count yet! It's just not at all intuitive that an instruction to count could be part of the instructions on how to count. Sure, it's for a simpler verison of the problem, but that doesn't help much intuitively. They just see a definition which refers to itself and have to expend a lot of mental effort making sense of that. If they're smart they might end up mentally unrolling the self-referential definition until they convince themselves that it will come to an end at some point, but it's a confusing and roundabout way of phrasing a set of instructions. In short, we're not born with the concept of fixed-point-combinators built into our brains! it takes insight and practise to formulate and appreciate recursive definitions. More so than iteration, I mean. I remember seeing and having trouble with both, but iteration didn't make you think about self-reference ontop of all the other stuff. Another reason recursion can seem hard: it often requires you to generalise your problem somewhat in order to phrase a recursive solution. Lovely for theorists but not as intuitively obvious if you just want to solve a specific instance. An extra step of abstraction/generalisation. Trivial example: 'Do this 10 times' vs 'To do this n times, do it once then do it n-1 times. Now let n = 10'. I would agree with this. I am a fairly novice programmer (Pascal, Java, SAS), and I find iteration more "intuitive" when I am writing code. I believe this is a learned intuition, hoewever, as in most courses, recursion is a one day side trip with a n! example after which it is not mentioned again. After I have written the iterative code, however, I have recently finding myself going back and re-writing it recursively. I find the recursive solutions tend to be much more "intuitive" to read later on. The code tends to be more compact, and easier to follow, and the method tends to have fewer local variables and a simpler control structure. Michael It's already not very unusual to have input syntax different from output one, so why not go further and have different constructs as well? So, you're a fairly novice programmer and already learning to program in the imperative style of Pascal and Java, learning that recursion is bad, difficult and should be avoided anyway because as we all know it causes stack overflows, ain't it? How do you suppose recursion would be intuitive, then, when you take it for granted a stateful model of the world, full of program counters, a few limited control structures for looping and lots of preconceipts about the functional model? As long as Universities teach direct descendants of assembly to students, recursion and higher level programming will never reach a wide audience or be accepted as intuitive practices... I'm surprised to hear you say that. I interview quite a lot of NCG's in my group, and about 50% of them fail to write a simple recursive algorithm to solve a real problem (such as walk a tree). On the other hand, I've never had an interviewee (is that a word?) failing to solve an iteration problem. Unnamed studies quoted by my old university tutor apparently showed that students first introduced to recursion considered iteration to be unintuitive, while those introduced to iteration first held the reverse opinion. That would be what you'd expect, wouldn't it? Yes, but it's nice to have something more than intuition, because not everyone's intuition is the same. Perhaps it was this study? Learning recursion as a concept and as a programming technique (ACM). Two experiments on learning recursion and iteration were carried out. The first studied learning of the mathematical concept of recursion by having subjects compute mathematical functions by analogy to worked out examples. The results suggest that subjects are quite able to induce a computational procedure for both iterative and recursive functions from examples. Furthermore, prior practice with iterative examples does not seem to facilitate subsequent performance on similar recursive problems, nor does prior practice with recursive examples facilitate performance on iterative problems. The second experiment studied novice subjects' comprehension of iterative and recursive Pascal programs. Comprehension of the iterative program was not improved by prior exposure to the recursive version of the program. Comprehension of the recursive version was improved moderately by prior work with the iterative version. GOTO and GOSUB are by far the most natural. :-) Well, I learned INTERCAL first, so COMEFROM is the most natural, clearly. I think the "intuition" batted about in this thread is closely related to what Van Roy and Haridi call "naturalness" in CTM. Problems whose solutions require a mess of explicit state are more "naturally" expressed using iteration, and problems sans that state (often tree-structured) are more naturally expressed using recursion. A practical thing about recursion, is that it forces you to deal consciously with termination conditions. You can blunder along for quite a ways with iteration without understanding or being conscious of why your loops terminate (and whether they terminate under the desired conditions). In this sense, recursion has a real engineering advantage over iteration: your code is more likely to be correct. It's true that most language implementations treat recursion suboptimally, but this situation may improve over time. In Common Lisp, I frequently mix the two (iteration and recursion), depending on how I perceive what I'm wanting to do. I probably wouldn't use iteration so much though if I didn't have LOOP, which is kind of like every for(;;) loop in every language rolled into one. The thing about recursion, as far as I know, is that you sometimes need to visualize a stack, in particular when you're coding something that can easily be tail-call optimized. Maybe some cutting-edge languages can translate things so my problem doesn't arise, but in SICP Scheme, oftentimes the tail-call optimization style looks less natural than the ordinary solution I'd write had I not been aware of this optimization concept. And the difference between recursion and iteration is that the "looping mechanism" is in different places. In some recursions which I find better expressed with iteration, it seems as if the mechanism is mixed unpleasantly with the rest of the code. Infinite loop: Lather, rinse, repeat. Infinite recursion: To seal, moisten, fold, and seal. They seem about equally intuitive to me. :-) ... but i think they are two sides of the same coin. I've been fascinated to watch my children learn math concepts, and do think it may have some bearing on this discussion. My four-year-old has learned can count and add and subtract on her fingers. She can also count collections of other objects, e.g., pennies. She can recognize visually quantities from zero to at least five. (I'd say that's non-recursive and non-iterative). But with more than that she performs the conventional routine of assigning a correspondence of the next "counting number" to each object in turn. I had her count a line of nine pennies. When she was asked which was the "second" penny, she pointed to the one she had identified with "two". That specific penny was swapped with the one in the middle and she was asked again to point to the "second penny". She again chose the same physical penny that was now in the middle position. I had a conversation with my six-year-old in which I tried to convince her that there was "no largest number" because you could always add one to get a larger number, and that "infinity is not a number". She had a hard time getting her head around it, and I can't say it doesn't seem a bit metaphysical to me, too. Although I'm no certainly no expert in cognitive development, I interpret these anecdotes as lending support to the theory that there is some early innate (rhythmic?) capacity that can assist in iterative processes, while grasping something defined recursively (e.g., Peano's axioms) require specific training in symbolic manipulation. "Split it into smaller pieces and solve them" - if you say this to people who are not developers/computer scientists, etc., they will often evince confusion. I know that I did at first hearing. On the other hand, do the first then, then the second thing, then the third... is familiar from everyday life. It does take some thinking and study to understand recursion. That said, recursion offers such an immensely powerful way of thinking about problems that I say: "Who cares if recursion is not intuitive?" "to recur" is the verb form of the noun "recursion", **NOT** "to recurse". Sorry, one of my pet peeves :( "to recur" is the verb form of the noun "recursion", **NOT** "to recurse". I'm afraid that this assertion is on thin ice. Just because "recur" and "recursion" share an etymological origin in the same Latin verb does not mean that they must retain an inviolate relationship in English. To choose a parallel example, "deduce" and "deduct" share the same nominalization, "deduction", and yet have quite distinct uses. "Recurse" is quite established as a back-formation of "recursion" in the computer science sense, and I see no reason to eliminate it in favour of "recur" which is much more general and potentialy ambiguous and confusing in this instance. It's not like its "nucular" or something. ;-) When I was a student, I remember how difficult recursion was in contrast with simple looping. People immediately got the 'FOR I = 1 TO 10' thing, but recursion created very big difficulties for almost all of us. Of course recursion is one of the foundamental concepts, and it should be one of the first things tought, but it's hardly more intuitive than iteration. Surely the best way to do looping is to avoid it altogether? Let container authors provide methods for efficiently iterating over their contents (e.g., via map, fold, filter etc). The most intuitive way to describe reading a book is neither "read the first page, then read the rest of the book", nor "read page 1, then page 2, ..." -- rather, it is simply "read every page": book.map(readPage)! No stinking loops! :-) Incidentally, I hear that "recursion" was used to refer to any time a function called another function (or itself). Eventually it referred to the special case of a function eventually calling itself during its own execution. I actually prefer the old version; I find it much more enlightening.
http://lambda-the-ultimate.org/node/787
CC-MAIN-2019-43
refinedweb
3,846
54.32
Q. C++ program to find whether a number is power of two or not. Here you will find an algorithm and program in C++ programming language to find whether the given number is power of 2 or not. Now first let us understand what power of two means. Explanation : In mathematics, a power of two is a number of the form 2^n where n is an integer. Here we use a mathematical concept of Logarithm, The most simple method for finding whether a number is power of two or not is to simply take the log of the number on base 2 and if you get an integer then the number is the power of 2. For Example : 256, when we take log of 256 to the base 2 we get 8 which is integer, hence 256 is power of 2. Algorithm to find whether a number is power of two or not START Step 1 → Take integer variable num <iostream> using namespace std; bool find_power_of_two(int num) { if(num==0) return false; return (ceil(log2(num)) == floor(log2(num))); } int main() { int num=256; find_power_of_two(num)? cout<< num<<" is the power of two"<< endl: cout<< num<<" is not the power of two"<< endl; return 0; } Output 256 is the power of two.
https://letsfindcourse.com/cplusplus-coding-questions/cplusplus-program-to-find-whether-a-no-is-power-of-two
CC-MAIN-2022-40
refinedweb
214
62.51
go to bug id or search bugs for Description: ------------ There is a bug in php-src/main/rfc1867.c that allows a malicious user to crash php during a multipart/form-data file upload. A large filename causes an integer overflow that leads to a subsequent crash. The problem is that if multiple files are uploaded at the same time and the bug is triggered with one of the later files, all previous temp files will not be deleted and fill up the disk. This could be used for a easy to execute remote denial of service attack. Required php.ini settings: ; post_max_size needs to be at least 2GB + a few additional bytes for the rest of the form (this depends on the exact POC) ; For the POC attached to this bug report, please use 2147483873 or more. post_max_size = 2147483873 ; this could be remotely set with the MAX_FILE_SIZE form variable but in order to keep the POC as simple as possible, I did not do this ; so set upload_max_filesize to 0 upload_max_filesize = 0 ; according to documentation, memory_limit should always be larger than post_max_size so I set it to 4GB to be on the safe side memory_limit = 4GB The security issue exists in SAPI_API SAPI_POST_HANDLER_FUNC(rfc1867_post_handler) in the handling of large filenames: /* is_arr_upload is true when name of file upload field * ends in [.*] * start_arr is set to point to 1st [ */ is_arr_upload = (start_arr = strchr(param,'[')) && (param[strlen(param)-1] == ']'); if (is_arr_upload) { array_len = (int)strlen(start_arr); if (array_index) { efree(array_index); } array_index = estrndup(start_arr + 1, array_len - 2); } If we upload a file with a array-like name that exceeds the maximum positive 32 bit integer, array_len will be set to a negative value. During the subsequent estrndup(), array_len will be converted to a 64 bit integer that is extremely large and the memory allocation will fail. This causes the script to abruptly exit and the already uploaded temporary files are not deleted. Example run of attached POC: root@vagrant:/var/www/html# ls /tmp/php* ls: cannot access '/tmp/php*': No such file or directory root@vagrant:/var/www/html# python multipart_file_name_oom_crash.py [+] Opening connection to 127.0.0.1 on port 80: Done sending payload with size 2147483873 [*] Switching to interactive mode HTTP/1.1 502 Bad Gateway Server: nginx/1.14.0 (Ubuntu) Date: Mon, 18 Nov 2019 05:31:13 GMT Content-Type: text/html Content-Length: 182 Connection: keep-alive <html> <head><title>502 Bad Gateway</title></head> <body bgcolor="white"> <center><h1>502 Bad Gateway</h1></center> <hr><center>nginx/1.14.0 (Ubuntu)</center> </body> </html> $ [*] Closed connection to 127.0.0.1 port 80 root@vagrant:/var/www/html# ls /tmp/php* /tmp/phpbwGfJu root@vagrant:/var/www/html# cat /tmp/phpbwGfJu testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttestroot@vagrant:/var/www/html# Please note that the python POC requires the pwntools library (pip install pwntools). It posts to localhost:80/poc.php - modify the http request if required, the contents of poc.php do not matter. Test script: --------------- #!/usr/bin/env python from pwn import * r = remote("127.0.0.1", 80) boundary = "FOO" def line(s): return "%s\n" %s buf2 = "" buf2 += line("--"+boundary) buf2 += line('Content-Disposition: form-data; name="test"; filename="test"') buf2 += line("") buf2 += line("test"*0x10) buf2 += line("--"+boundary) buf2 += line('Content-Disposition: form-data; name="foo[%s]"; filename="bar"' % ("a"*(0x7FFFFFFF))) buf2 += line("") buf2 += line("a"*0x10) buf2 += line("--"+boundary+"--") buf = "" buf += "POST /poc.php HTTP/1.1\n" buf += "Host: localhost\n" buf += "Content-Type: multipart/form-data; boundary=%s\n" % boundary buf += "Content-Length: %d\n" % len(buf2) buf += "\n" print "sending payload with size %d" % len(buf2) r.send(buf + buf2) r.interactive() Expected result: ---------------- PHP should not crash, created temporary files should be deleted after script finishes Actual result: -------------- PHP crashes, temporary files remain on disk Add a Patch Add a Pull Request Fixed summary and package according to changes made to my other similar submitted bug with ID 78876 Suggested fix: <>. @jr, could you please confirm that the patch fixes the bug? Related To: Bug #78876 I can confirm the patch should fix the bug, thank you! Automatic comment on behalf of cmbecker69@gmx.de Revision:;a=commit;h=1c9bd513ac5c7c1d13d7f0dfa7c16a7ad2ce0f87 Log: Fix #78875: Long filenames cause OOM and temp files are not cleaned Automatic comment on behalf of cmbecker69@gmx.de Revision:;a=commit;h=c71416cba2ad7b596233e3c0da117a90a2e78bbf Log: Fix #78875: Long filenames cause OOM and temp files are not cleaned
https://bugs.php.net/bug.php?id=78875
CC-MAIN-2020-45
refinedweb
738
51.07
In this blog I will show how to create a simple Master-Detail screen in Flex, how to back it up by an application is Grails and how to publish changes via JMS to all Flash clients. Important topics will be binding and remote java-object invocation in Flex and configuring JMS. A Master-Detail view is a view with a master list, showing a collection of items and a detailed view, most often consisting of a form, in which the a single item can be edited. Clicking an item in the master list will display the details in the detail view. The combination of a framework for building RIA's (Flex) and a Java based dynamic framework for building services (Grails) seems very promising. I think the high productivity that can easily be achieved by this combination will make it a very attractive choice for your next big project. In this blog we will implement a "double" master detail, as can be seen in the image above. A person may have multiple addresses. The list is an address book and the view displays a number of attributes of the people in the address book (first name, last name and city and country of their first address). If an item is selected, it is displayed in the adjacent form and the list in that form shows all addresses of that person. If an address is clicked its details are shown in the form below. In flex we will use an ArrayCollection to contain the list and this list will be displayed using a DataGrid. <mx:Application xmlns: <mx:DataGrid <mx:columns> <mx:DataGridColumn <mx:DataGridColumn <mx:DataGridColumn <mx:DataGridColumn </mx:columns> </mx:DataGrid > ... </mx:Application> The mechanisme that is used to get the data from the addressBook into the DataGrid is called "binding". This is done by using curly brackets around the expression, dataProvider="{addressBook}". Binding will make sure that if a change is made in the referenced object (in this case the addressBook or its content) the changes are reflected in the referrer (in this case the DataGrid). Binding works only in one direction! When the value in the referrer (DataGrid) is changed, the bound value (the addressBook) will not be updated. Some components (like the DataGrid) add this functionality and will update the underlying model. For this tutorial we will use the datagrid in editable="false" mode, since we want to edit the data by using the form. The columns are filled by looking up the properties specified as dataField on the objects in the addressBook. But what are these objects? The are actionscript objects of the Person type. package { import mx.collections.ArrayCollection; [Bindable] public class Person { public var firstName:String; public var lastName:String; public var addresses: ArrayCollection = new ArrayCollection(); public function get firstAddressCity():String { if (addresses.length == 0) { return ""; } return addresses[0].city; } public function get firstAddressCountry():String { if (addresses.length == 0) { return ""; } return addresses[0].country; } } } This class defines two simple properties and a collection. In addition it defines two getters that are also bound to in the DataGrid view. It is important to mark this class as [Bindable], so all properties of the class can be used in bind expressions. Now when a row is selected in the list, we need to bind the values in the selected object to fields in a form. Then if the data in the form changes, we want to update the values in the selected item. Therefore we need Double Binding. ... <local:Person <mx:Form> <mx:FormHeading <mx:FormItem <mx:TextInput </mx:FormItem> <mx:FormItem <mx:TextInput </mx:FormItem> </mx:Form> ... Now all we need to do is add some buttons and eventhandlers to the components... ... <mx:DataGrid ... <mx:Button <mx:Button <mx:Button <mx:Script> <![CDATA[ private function doSelect(p:Person): void { selPerson = p; } private function doRemove(p:Person): void { addressBook.removeItemAt(addressBook.getItemIndex(p)); } private function doUpdate(p:Person):void { if (!addressBook.contains(p)) { addressBook.addItem(p); } selPerson = new Person(); } ]]> </mx:Script> ... ... and the master detail view is done! The double binding will keep the fields in the form, the fields in the datagrid and the data in the model in synch. The master-detail for the address works in much the same way and is a little different. A List is used to display the addresses in stead of a DataGrid and in the list the Address.toString() method is invoked to show the addresses. The eventhandling is also slightly different: A new Address is added to the list right away, instead of having an "Update Address" button. Now editing this data is surely fun, but they are not persistently stored nor shared between users. To do that, we'll nee to add some server component and remoting. Remoting from Flex can be done using BlazeDS from Adobe Labs. This is a "Java remoting and web messaging technology that enables developers to easily connect to back-end distributed data and push data in real-time to Adobe® Flex™". So we can write a Web application in Java to communicate to the backend using BlazeDS. In stead of starting to write a web.xml and a Servlet, let's look at Grails for a moment. This is a framework for creating web applications based on Java and Groovy and it has a flex plugin! That looks very promising. First we create a new Grails application (grails create-app addresses) and download the flex-plugin (grails install-plugin flex). This takes some time to download and thus gives us some time to think. What do we really want to do in this Web application? We only need to domain classes (Person and Address) that should be persisted (this is standard with grails) and then have a service that exposes some methods to the flex front end. Saying this out loud takes almost as much time as writing it in Grails: class Person { String firstName String lastName List addresses static hasMany = [addresses:Address] static fetchMode = [addresses:"eager"] } The Person class has the same two simple properties and the list of addresses. The list is declared as a hasMany relationship and the fetchmode is set to eager. We'll need to load the list of addresses for each person that we fetch, since otherwise the client will get LazyInitializationExceptions. class Address { String street; int Number; String postalCode; String city; String country; } The Address class is far simpler. It only contains the simple properties that the actionscript class contains. class AddressBookService { static expose = ['flex-remoting']; def List<Person> findAllPersons() { return Person.createCriteria().listDistinct {}; } def Person get(id) { return Person.get(id); } def void update(Person p) { p.save(); } def void remove(Person p) { p.delete(flush:true); } } The AddressBookService exposes itself as a flex-remoting service. This will make sure the Grails-Flex plugin will register the object with BlazeDS and it is made available to the Flex client. The changes to the Flex code are pretty complex: [RemoteClass(alias="Person")] Here the alias is the Java class name, in this case the Person class in the default package. public var id:*; public var version:*; We don't care about the type of these properties from the ActionScript code, so we'll just use the wild cards. <mx:RemoteObject <mx:method <mx:method <mx:method <mx:method </mx:RemoteObject> <mx:Script> <![CDATA[ private function setAddressBook(list:*):void { addressBook.removeAll(); for each (var p:Person in list) { addressBook.addItem(p); } } ]]> </mx:Script> The destination of the RemoteObject maps directly on the class name of our grails service. The method declarations in the remote object map to its methods and the result event handler will be called asynchronously with the result of the method call when it is called from the Flex code. <mx:Button <mx:Script> <![CDATA[ private function doSelect(p:Person): void { service.get(p.id); } private function doRemove(p:Person): void { service.remove(p); } private function doUpdate(p:Person):void { service.update(p); selPerson = new Person(); } ]]> </mx:Script> Notice that the handling of the result of a call to the service is not defined with the calling code: it is defined in the method declaration of the RemoteObject. Since we already structured the code with these changes in mind, the structural changes are fairly minimal. We only need to change the implementation of these three methods to make it work with the Grails service. To get the application running, the .mxml and .as files need to be copied to the web-app directory of the Grails application. Here the Flex Webtier Compiler can pick up the source code and copile the application on request. if you go to, the interface will show up and it will be connected to the Grails application through BlazeDS. That's really all there is to it! These 6 source files (addresses.mxml, Person.as, Address.as, AddressBookService.groovy, Person.groovy and Address.groovy) form a complete Rich GUI to an online AddressBook. None of the code is really complex and hardly any plumbing is needed. While this is really great, I'm a little annoyed with this "Refresh" button and the fact that the user will only see changes made by others when he clicks this button. Lets see what we can do... Flex can be configured to listen to a JMS destination and Grails has a JMS plugin. We can use this to push a new data set to all clients whenever data is changed on the back-end. If we are going to use JMS, we need a JMS-provider, preferably one that can be embedded in a Grails application. Browsing the internet I found ActiveMQ from Apache, that is open source and can be embedded easily in a Spring application. After installing the JMS Plugin, downloading ActiveMQ and adding the necessary libraries to lib folder of the Grails application, we're all set. <beans xmlns="" xmlns: <bean id="connectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy- <property name="connectionFactory"> <bean class="org.apache.activemq.ActiveMQConnectionFactory"> <property name="brokerURL"> <value>vm://localhost</value> </property> </bean> </property> </bean> </beans> This will create the connection factory and also start an embedded broker. def void update(Person p) { p.save(); sendUpdate(); } def void remove(Person p) { p.delete(flush:true); sendUpdate(); } def private void sendUpdate() { try { sendPubSubJMSMessage("addresses", findAllPersons()); } catch (Exception e) { log.error("Sending updates failed.", e); } } The sendPubSubJMSMessage method (added to all service classes by the Grails JMS plugin) is used to send a message to the "addresses" topic. The message will contain the new list of addresses. <service id="message-service" class="flex.messaging.services.MessageService" messageTypes="flex.messaging.messages.AsyncMessage"> <adapters> <adapter-definition </adapters> <destination id="updatesJmsTopic"> <properties> <jms> <message-type>javax.jms.ObjectMessage</message-type> <connection-factory>ConnectionFactory</connection-factory> <destination-jndi-name>addresses<.addresses</name> <value>addresses</value> </property> </initial-context-environment> </jms> </properties> </destination> </service> Configuring the JNDI context for Flex is actually quite tricky. The org.apache.activemq.jndi.ActiveMQInitialContextFactory is a very basic JNDI context and it uses properties to register queues and topics. Normally the properties would reside in a jndi.properties file. Here, these properties are specified in the service-config.xml itself. The property topic.addressesaddresses to register the topic with the physical name "addresses" (the property-value) in JNDI as "addresses" (the property-key-suffix). This is really an ActiveMQ JNDI oddity. <mx:Application ... ... <mx:Consumer ... The Consumer is configured to listen to the "updatesJmsTopic", that is configured in the services-config.xml. Whenever a message comes in, its body is processed by the setAddressBook method, that will copy all the data to the ArrayCollection to be displayed in the DataGrid. Don't forget to subscribe the consumer at start up. We'll also load the initial data at start up. This completes our RIA application. Updates will now be send through JMS to all Flex clients. Creating a RIA with Flex and Grails is relatively simple. Developers can really focus on developing the GUI or the server side business logic. Most plumbing can be left to the sensible defaults and if not, changes can be made using "standard" frameworks like Spring or Hibernate. Whenever an advanced feature is necessary, developers can lean on their knowledge on Java or frameworks like Sping and Hibernate to quickly intergrate the functionality into the Grails application. I think these features make the Flex/Grails combo a very suitable candidate for developing RIAs on projects of all sizes. A few remarks: The sources of this application can be downloaded: The package with the complete grails application with the plugins was almost 60 MB, so to test the application, you'll have to create a Grails application and add the plugins manually. Very good article! As the creator of the Grails plugin it’s nice to see it’s usage! Note that in the future I hope to extend the plugin with some scaffolding generation as well. Cheers, Marcel This looks very handy. Thanks for your work. Incidentally, BlazeDS includes ActiveMQ, so it seems like you should not need a separate install of ActiveMQ. great tutorial!!! Great tutorial. Always nice to see practical examples. I use Flex with Python/Zope for the most part, so the Grails exposure was a plus as well. I noticed that the BlazeDS docs on Adobe Open Source indicate that “Real-time server push over standard HTTP” based on AMF is a part of the open source product. Is this a new development, maybe when Flex 3 was launched? Maybe this is the includion of ActiveMQ mentioned by Will Rogers above? Just curious as how this might change the tutorial. Thanks again! Kinda works once you go through the painful process of debugging without debuggers… Several things would be worth mentioning like, for example, the declaration of the remote object ( ) in the mxml file which must come *before* using it, meaning at the top of the file. If you don’t, you’ll spend hours trying to figure out what’s wrong because of course, no error will pop up. Thx, Rollo Maarten, It’s been 5 months since you posted this very helpful piece. Have you gone further since then in our thoughts about integrating Grails and Flex. Have you found a way to eliminate the JMS to have the Flex changes be reflected back into the Grails underlying data? Ron HI, this is a really helpful article as we all know, but what about using this technologies in a production project. I have no problems whit grails nor flex nor even blazeDS but i have my doubts about the flex plugin. As you can see in the plugin page “It’s not sure if the plugin works in production mode as it is only tested in development mode”. what advise can i get from you. did i say thank you a lot? Thank you for tutorial. It certainly helps to understand how Flex remoting works. I downloaded sources and noticed that it is slightly different from the one in the text. Groovy to Flex binding is not quite clear. How does it marshal groovy domain objects to flex [bindable] ones. Is it all handled by groovy plug-in? It’s a really a good tutorial! There is a new plug-in available ActiveMQ (it’s in develop mode now, I suppose it will not die) It’ll be great to update this tutorial. Before installing jms run command: grails install-plugin activemq There are some errors in addresses.mxml Contain space char. [...] Flex with Grails/BlazeDS [...] Has anyone tried this with Livecycle Data Services (DS) instead of BlaseDS? Should be about the same but with support for RTMPChannel. The Sources “Downloads” are not working. Thanks. What would be the steps in making an AIR App?
http://blog.xebia.com/2008/02/20/tutorial-master-detail-screen-in-flex-backed-up-by-grails-application/
crawl-002
refinedweb
2,627
56.86
File FTP FILE UPload - JSP-Servlet FTP FILE UPload Hi sir, i am doing the file upload via ftp. the program compiled well . but while executing i am getting the below errors... response. Please help me to this solution .Its Urgent. Thank U an error. I am not able to understand that error. Please help me and clear... ----------------------------------------------------------------------------- Display file upload form to the user < How display a Image on Servlet from file upload - JSP-Servlet How display a Image on Servlet from file upload Dear Sir, My requirement is I want to display a Image on Servlet from File Upload... to solve that problem from a long time But Issue are not resolved. Can you help me File upload in JSP File upload in JSP hi! In my previous interview i got two questions which i could not answer regarding file upload and annotations. I want to know which is the best method can be used for file upload. Whether moving them and Retrive files File Upload and Retrive files Can any body help me am getting an error in uploading file. into mysql database.... thank's in advance file upload error - JSP-Servlet file upload error Iam doing jsp project. File uploading is one part... ----------- file upload example... file while checkin file size. I am not able to understand this problem file upload error - JSP-Servlet file upload error Iam doing jsp project. File uploading is one part..., can you send me all file of uploading code. Because here all file uploading... file while checkin file size. I am not able to understand this problem zip file upload in php - WebSevices know how to upload. plase help me...zip file upload in php This is not a normal upload. we know the code for normal upload. i need the zip file upload code in php. how File Upload Tutorial With Examples In JSP by Using JSP This tutorial will help you to understand how you can upload a file... File Upload Tutorial With Examples In JSP  ... with image This tutorial will help you to understand how you can upload... interface.How to work with it? I am totally new to spring can somebody help me.  ...; fileuploadform.jsp file Upload a file please Please upload a file pls help me sir its urgent pls help me sir its urgent thanks for reply, but i am getting this error pls help me its urgent type Exception report description The server encountered an internal error () that prevented it from file upload file upload how to recieve the uploaded file on the next jsp page for modification if its uploaded to the previous page using a from form based file upload using servlets form based file upload using servlets Hai all, I wrote a program to upload a file into specified directory and store the file path and username... any one help me Thanks&Regards P.SriDivya file upload using JSP file upload using JSP I have created a form to upload a file in a html page, now i want to get the path of the file in a jsp page so what code...="java" %> <HTML> <HEAD><TITLE>Display file upload form please help me please help me Dear sir, I have a problem. How to write JSP coding... before. This name list should get from the database. Please help me. By the way, I'm using access database and jsp code. Thank you file upload in spring - Spring file upload in spring Is it necessary to use Multipart support in spring framework for file uploads.If not please let me know the other ways to do sir i,have a another assignment for these question plz help me sir - JavaMail sir i,have a another assignment for these question plz help me sir  ... to both. plz help me sir... that randomly generates Rupee and Dollar objects and write them into a file using object help me help me HI. Please help me for doing project. i want to send control from one jsp page to 2 jsp pages... is it possible? if possible how to File Upload is working in FireFox & Chrome but not in IE7 using java & jquery bytes.I am trying to upload images. can somebody please help me...I am stuck...File Upload is working in FireFox & Chrome but not in IE7 using java & jquery Hi, I have a form which is called on my button click "Upload a file file upload in jsp - Java Beginners file upload in jsp how to upload a file using jsp. my operating...: Display file upload form to the user UPLOAD THE FILE Choose the file To Upload: 2 help me - JSP-Servlet help me how to open one compiled html file by clicking one button from jsp servlet file upload servlet file upload please tell me the complete code to upload a file on localhost using servlet Ajax file upload Ajax file upload I am developing a application for image upload using ajax and servlet. The image should be converted in byte[] and must be saved... on the jsp page. Page shouldn't get refreshed file upload download - Java Beginners file upload download how to upload and download files from one system to another using java.io.* and java.net.* only please send me code help me help me Dear sir/medam i would like to know how to use the java string class method my programm have four button,(enter the text) that first... the order, Fourth button is Find length of text. and there are panels. Please help me java file upload in struts - Struts java file upload in struts i need code for upload and download file using struts in flex.plese help me Hi Friend, Please visit the following links: http problem in writing coding. Please help me. problem in writing coding. Please help me. Hi sir, my name is Logeswaran. I have problem in writing JSP coding for my university assigments... like? I'm really stuck on this. Please help me. By the way, I'm using Access more doubts sir. - Java Beginners own browser.Hope you will help me out.And also sir i need the progressbar... exoplorer such as search bar and some more buttons.Sir help me out...more doubts sir. Hello sir, Sir i have executed your code please help me. please help me. How to move the edits.jsp in below link? please help me. please help me. Please send me the validation of this below link. the link is Thanks Trinath plz help me!!!!!!!! - JSP-Servlet plz help me!!!!!!!! i`ve set the environment varaibles for tomcat... there are compilation errors.. plz do help me. make sure that you did... html file,.java file and xml file.. under which directory or folder i`ve to save please help me. please help me. I have a jsp page under that i add a list box under i get the countries through my database. so how can i do Please help me. Please help me. Hi i am trinath in below there is a url.In that url there is a code of edit a jsp page.I understand that code but only one thing i...:// Please help me Please help me Hi Sir, please send me the code for the following progrems... 1) all sets are integer type: input: set1={10,20,30,40} set2={15,25,35} output: union={10,15,20,25,30,35,40} 2) input: "Hi what in solving this error help me in solving this error Hello, i have installed tomcat 6.0, i...()); %> java file import java.io.IOException; import java.io.PrintWriter...,response); response.sendRedirect("/Simple/jsp/login.jsp"); }else plz help me find a program plz help me find a program plz help..i want a source code in jsp for order processing hii sir tell me plz help me on jstl- <c:url> - JSP-Servlet help me on jstl c:url what is jstl in java CAN U HELP ME TO CODE IN JSP FOR ONLINE VOTING SYSTEM CAN U HELP ME TO CODE IN JSP FOR ONLINE VOTING SYSTEM can u help me to code in jsp for online voting system help me help me i have done as u directed but nothing happens gud to me..... . i am using netbeans is that matter at all help me and take me out from trouble where to store jfree api to use it for j2ee based project help me help me please send me the java code to count the number of similar words in given string and replace that word with new one.. what are the component needed by grid computing, cloud computing and ubiquitous computing? discuss about the security of above computing. difference between that 3 computing Help me Help me Hi, LWUIT is working in eclipse j2me for Symbian OS help me.. help me.. Write a program that inputs four words and them displays all possible permutations of the words. So, for example, if the words mad, dog, bites and man are entered, then the following are output : man bites mad dog mad help me... help me... Write a program that inputs four words and them displays all possible permutations of the words. So, for example, if the words mad, dog, bites and man are entered, then the following are output : man bites mad dog Help Me Help Me What is the source code for Sample example for Mortgage Calculator in J2ME language for developing Symbian Need help Need help Dear sir, I have a problem. How to write JSP coding.... This name list should get from the database. Please help me. By the way, I'm using access database and jsp code. Thank
http://www.roseindia.net/tutorialhelp/comment/87408
CC-MAIN-2015-11
refinedweb
1,626
83.25
When I select multiple items from the project view, if one of them is a custom asset (class derived from ScriptableObject), the selection list looks like this: Question: What needs to be done so that the "Mono Behavior" listing, instead shows my custom (descendant-of-ScriptableObject) Class name? What does Monobehavior have to do with scriptableObject anyway? That they both derive from Object, is the only link I see. Alternatively, and more generally, I'd like to specify a string as a "type" name for each custom asset type I've made. Is there some virtual function I should override for to make this work right? ("Name" member of Object is close, but that's for each instance, not the class itself.) I have tried creating an Editor and project Icon(by adding a file: gizmos\assetsclassname icon.png) for the asset type. Though the editor and icon both work properly in other ways, they do NOT effect the name of the asset type in the shown selection list. Is there an editor function to override that can do this? Possibly relevant: I AM using EditorApplication.projectWindowItemOnGUI += to specify a function that draws a special icon for my custom assets. Update: Limited tests show this does not seem to be related. EditorApplication.projectWindowItemOnGUI += Here is an example of a selected scriptableObject asset's class, if relevant. public class SimpleLineList : ScriptableObject { public Vector3[] lineList; [MenuItem("GameObject/Create SimpleLineList")] static void CreatBlankLineList() { AssetDatabase.CreateAsset(SimpleLineList.CreateInstance<SimpleLineList>(), "Assets/newSimpleLineList. How to highlight or select an asset in project window from editor script? 2 Answers Getting assets / sub folders inside a folder 1 Answer Import Unity package / Asset by script with selection 0 Answers How to obtain selected files in the Project-window? 1 Answer Getting all assets of the specified type 5 Answers EnterpriseSocial Q&A
https://answers.unity.com/questions/1215117/multiple-object-selection-in-project-view-how-to-s.html
CC-MAIN-2022-21
refinedweb
304
55.44
Construct an object. Sets the initial values to inherited class members. This is done by calling inherited function: init(sb) Parameters. Return Value. None (constructor). Example. // istream constructor #include <iostream> #include <fstream> using namespace std; int main () { filebuf fb; fb.open ("test.txt",ios::in); istream is(&fb); cout << char(is.get()); fb.close(); return 0; } This code uses a filebuf object (derived from streambuf) to open the file test.txt. The buffer is passed as parameter to the istream constructor, associating it to the stream. You will seldom use istream objects directly like in the previous example, you will rather use objects of derived classes like ifstream, istringstream or your own ones. Basic template member declaration ( basic_istream<charT,traits> ): See also. ios::init istream class
http://www.kev.pulo.com.au/pp/RESOURCES/cplusplus/ref/iostream/istream/istream.html
CC-MAIN-2018-05
refinedweb
126
60.01
Introduction: ESC Programming on Arduino (Hobbyking ESC) Hello Community,. So let's go... Step 1: Getting ESC Information You really should remark the Amperevalue of your ESC. This tutorial is only tested on 20 AMP ESC :. Source: German: Step 2: Connection to Arduino I tried it with an arduino uno R3. I think it's also possible with an e.g. arduino Duemilanove or Mega.: Step 3: Upload Sketch Just copy and paste this Code to your IDE: /* Coded by Marjan Olesch Sketch from Insctructables.com Open source - do what you want with this code! */ #include <Servo.h> int value = 0; // set values you need to zero Servo firstESC, secondESC; //Create as much as Servoobject you want. You can controll 2 or more Servos at the same time void 4: Understanding and Programming an ESC'. it means the highest Signal the ESC can receive * You will hear the sounds which are described on the picture ( Source: Manual:) The Hobbyking ESC's can receive a Signal between 700 and 2000 us(microseconds). 700 means throttle at lowest position and 2000 on the highest Position. If you want to know what exactly you du, when you pick a menu visit the manual. Example: - Write 2000 us - Wait until D-D-D-D to chose lipo as batterytype - When it apperas, write at the third 'D' 700 in your Serial ( short delay, thats why you have to send it at the third 'D') - ESC will make a sound, and the Option is chosed. I Hope i could help you with this tutorial. 1 Person Made This Project! - joaopinela made it! Recommendations 55 Comments Question 2 years ago Thanks for the great page! Do you know if this works with TURNIGY MULTISTAR 40A BLHELI-S REV 16 ESC 2~4S V3 (OPTO)? Question 2 years ago on Step 4 Hello, Thank you for this tutorial, it is extremely helpful! I was trying to create a similar circuit myself whereby I attach a 40A ESC UBEC 4A, to a 300W brushless motor, being powered by a 6s Lipo battery, controller by my Arduino Mega. I cannot seem to find the specifications defining the signal range for my esc. When I try callibrating it with a throttle I am able to go through the initialization of sequential beeps (AAAA BBBB etc), however when I try it through sending values via the serial monitor, I only get one or two beeps for each value I send, and it does not seem to initialize. Any ideas how I can fix this issue? 4 years ago Here is my two cents. Should work for any ESC. You may need to play with delay times. Choose your own speed as you wish, between HI and LO. /*** ***/ #include <Servo.h> Servo esc; void setup() { esc.attach(9); delay(5000); esc.write(179); // HI delay(5000); esc.write(1); // LO delay(5000); esc.write(90); // MID delay(10000); esc.write(120); // SPEED } void loop() { } Reply 2 years ago I am very interested in running 1 motor through an ESC via an Arduino Uno. And I just need to run it through the usb port. I don't know what to do about the code though. Even if I write it out in the editor, what should I expect -how do I just turn it on/off Question 2 years ago on Step 2 Attached images; The set up as a whole: FIRST: I have it set up in 'a' way that will allow this to function/interact correctly. I do not need an accelerometer and I am wanting to turn it on/off-start/stop and adjust speeds (for all intense and purposes) through my desktop via the USB port. So I want to be able to do all of this through the Arduino create agent editor thing on my desktop. - 6s LiPo battery - 6,000mAh - 80A ESC - "Mystery" brand - Drone motor - U7 T-Motor brand, 420KV, 3-8s LiPo battery NEXT: So even if/when someone is so nice enough to show me the code in such a way I can just copy and paste into the editor - I really don't know what to do next. How do I get it to go? I'll have to adjust numbers in certain sections of the code in editor. I'll need some help with this, e.g. this number means this - that means that and adjusts the speed accordingly, etc. But basically; How do I turn this set up on and off in the editor controlled from my desktop. Thanks AND please let me know what I can clarify 3 years ago An Arduino ESC Library exists on Github . It takes many other issues into consideration. I do admire your effort though! Reply 3 years ago I was using the github code and it was working fine, but only for pin 9 and 10 on the arduino NANO the other pins that do have pwm do not work. Someone said it might be that pin 9 and 10 run on 980 hertz and the other pins only at 490 hertz. Is that the case and is it fixable? Question 3 years ago Oh thanks for explained. In my case, my pwm output from pid calculation is responsive but when it write to esc ,the bldc response is slow(its like have some delay on it) so my quad can't stable in any pid constants that i tune.. Please help how to make my esc reponse fast according to yaw and pitch angle with no delay ?( thanks. 4 years ago hello, love this tutoral, and I'm using it to power 4 esc/motors. I've pretty much just quadrupled your send routines: firstESC.writeMicroseconds(value); secondESC.writeMicroseconds(value); thirdESC.writeMicroseconds(value); fourthESC.writeMicroseconds(value); Running the motors without props they run fine for as long as I need, but I put some small 6" props on for testing, and after about 10 seconds at least one motor slows and stops. Eventually two go, then three. I'm running 2300kva, I would have thought these motors could have handled it. Is there something I'm missing? Reply 4 years ago Did you check the max amperage your ESC can provide vs the rating for the engine? Are your engines overheating because of too much power? Reply 3 years ago Sorry I honestly didnt see this reply. It was just the power supplies I was using couldnt give amps fast enough. I eventually just bought a large capacity lipo and charger off ebay. Thanks for the reply. Reply 3 years ago No worries, thanks for getting back to me :-) LiPo will also be the long term solution, but for now, using a power supply for the algorythm tests with my students.. Thanks again for the the ESC instructions, they were very useful! Reply 4 years ago It turned out to be power supply, I was using the wrong sort. Stuck a LiPo on it and everything is good. I forgot I posted this otherwise I would have updated it, thanks for the reply though. Reply 3 years ago Hey! What sort of Power Supply? Reply 3 years ago hmm this was a while ago now. I was using a pc power supply, but I couldnt get the motors to run at full speed for long before they eventually cause the power supply to fail. I eventually just bought a proper lipo and charger off ebay, I think I got change out of $50 for both. I havent done much on this for a year or so because I've moved and no longer have the space, but I need to get bigger motors. The ones I have ALMOST lift it off the ground but the airframe is just under 1.4kg. I am also trying to lighten the load. Question 3 years ago Hello, is that all kind of ESC have the same signal value between 700 and 2000? 4 years ago The link doesn't work. It just says error 404 when the page goes to hobbyking. Is there another link? Reply 3 years ago I know this is very late but for any new people coming here, I found the link here 5 years ago Found how to calibrate traxxas XL 2.5. Hope this helps: #include <Servo.h> #define MAX_SIGNAL 2300 #define NETRUAL 1400 #define MIN_SIGNAL 400 #define MOTOR_PIN 9 Servo motor; void setup() { //open serial monitor Serial.begin(9600); Serial.println("Uno online. Software launch sucessful"); delay(100); Serial.println("Calabration set for TRAXXAS XL 2.5. Please wait for further instruction."); delay(100); Serial.println("Begin calibration with ESC powered off and LiPo attached. Connect control wire and grnd to Arduino. Press any key when complete"); while (!Serial.available()); Serial.read(); motor.attach(MOTOR_PIN); Serial.println("Output at NEUTRAL. Please press and hold ESC calibration button. Light shall flash green then red. Release the button"); delay(1000); Serial.println("Wait for ESC to blink red once. Then press any key"); while (!Serial.available()); Serial.read(); Serial.println("Now outputting maximum output."); motor.writeMicroseconds(MAX_SIGNAL); delay(1000); Serial.println("Wait for ESC to blink red twice. Then press any key"); while (!Serial.available()); Serial.read(); Serial.println("Sending minimum output"); motor.writeMicroseconds(MIN_SIGNAL); delay(1000); Serial.println("ESC should blink green once. If not, calbration has failed. Please atempt agian"); } void loop() { } Reply 3 years ago I tried it on my XL5 esc but it didn't work. Could you please tell me if there are any modifications I'd have to make to the code? Thanks!
https://www.instructables.com/ESC-Programming-on-Arduino-Hobbyking-ESC/
CC-MAIN-2021-31
refinedweb
1,599
74.49
How do I create a custom portal for Tableau Online?varun pillai Jan 10, 2019 4:34 PM I am trying to connect to Tableau Online and embed the workbook on my website. I am able to use TableauServerClient to connect to Tableau Online, however I am not able to get a working ticket to display the VIZ on the page. The code works for workbooks on Tableau Public since it doesn't require any authentication. But for Tableau Online, both the TSC and the REST API both produce tokens which expire and cannot be used. Here is my code: Here is my HTML: <html> <head> <script type='text/javascript' src=""></script> <script type="text/javascript"> var viz; window.onload=function(){ //when the page has loaded, load the viz var vizDiv = document.getElementById('myViz'); var vizURL = {{vizURL}} var options = { width: '640px', height: '700px', hideToolbar: true, hideTabs: true }; viz = new tableau.Viz(vizDiv, vizURL, options) }; </script> </head> <body> <div> <div id="myViz"></div> </div> </body> </html> Here is my python code: from flask import Flask, render_template, json, request import tableauserverclient as TSC import requests import xml.etree.ElementTree as ET app = Flask(__name__) @app.route("/") def index(): # using TSC tableau_auth = TSC.TableauAuth('username', 'password', site_id="mysite") server = TSC.Server('') server.auth.sign_in(tableau_auth) authtoken = server.auth_token # Using traditional rest api\r\n <site contentUrl=\"mysite\" />\r\n </credentials>\r\n</tsRequest>" headers = { 'Content-Type': "text/xml", 'Cache-Control': "no-cache" } response = requests.request("POST", url, data=payload, headers=headers) tree = ET.fromstring(response.text) for elem in tree.iter('{}credentials'): print(elem.attrib['token']) authtoken = elem.attrib['token'] vizURL = "" + "/trusted/" + authtoken + "/t/mysite/views/myview/myview?embed:=y"; return render_template('test.html', vizURL=vizURL) if __name__ == "__main__": app.run(debug=True, threaded=True) Can you please help me with this issue soon. 1. Re: How do I create a custom portal for Tableau Online?Jeff D Jan 10, 2019 8:59 PM (in response to varun pillai) Hi Varun, for embedding vizzes, take a look at the javascript API: Tableau JavaScript API - Tableau There's a section on authentication. (And you're correct, the REST API authentication won't work for this.) 2. Re: How do I create a custom portal for Tableau Online?varun pillai Jan 11, 2019 6:30 AM (in response to varun pillai) Hi Jeff As you might have noticed, my code does use the same instructions as provided on the link you provided. The issue is that the code works for normal embedded code as well as workbooks published on Tableau Public which require no authentication. However, since I am trying to programmatically login via the API/TSC, the ticket doesn't stay alive for the entirety and expires before it can fetch the workbook for the viz. I need a mechanism which will allow me to keep the ticket alive so that the embedding can work. Thanks Varun 3. Re: How do I create a custom portal for Tableau Online?Jeff D Jan 11, 2019 10:21 AM (in response to varun pillai) Varun, take a look at the authentication section for the Javascript API: Tableau JavaScript API Concepts--Authentication - Tableau . The problem is not that the ticket is expiring; the problem is that you need a different kind of ticket, not available through the REST API (since TSC uses the REST API, it has the same issue). 4. Re: How do I create a custom portal for Tableau Online?varun pillai Jan 11, 2019 11:13 AM (in response to Jeff D) So essentially the documentation says that Tableau Online doesn't support Trusted Authentication. What other option do I have to achieve this? Since Tableau Online does authenticate users to display workbooks on their side. It also has to call some REST API or generate tokens to get to the workbooks. How can I access their APIs? 5. Re: How do I create a custom portal for Tableau Online?Jeff D Jan 11, 2019 1:33 PM (in response to varun pillai) So essentially the documentation says that Tableau Online doesn't support Trusted Authentication. What other option do I have to achieve this? Here's the doc for Tableau Online Authentication: Authentication - Tableau It also has to call some REST API or generate tokens to get to the workbooks. How can I access their APIs? I don't believe that this API is documented (or officially supported).
https://community.tableau.com/message/866078
CC-MAIN-2020-24
refinedweb
735
65.32
The Groovy community has been asking for us to ensure it’s an officially supported language once again, and we’ve been listening. With the recent release of Elasticsearch 1.4.1, we are also excited to announce the release of the official Elasticsearch Groovy client (1.4.1), which is fully compatible with Elasticsearch 1.4.1. Starting today, the Groovy client becomes an officially supported client hosted on our elasticsearch-groovy GitHub repository. The old, long-defunct Groovy client has been completely rewritten for this release in order to guarantee 100% Java client compatiblity. Unlike before, any feature that exists in the Java client is now guaranteed to exist in the Groovy client. New Features - 100% Compatibility with the Java client - Simplified Closureto Mapconversion The More You Know The new and improved Groovy client does away with the old, Groovy-friendly variants of Java client objects in favor of using Groovy extension modules. Groovy extension modules provide the excellent ability to add new methods to existing classes and the Groovy client takes full advantage of them. Thanks to the use of the extension modules, all Java client examples are 100% compatible with the Groovy client, which means that any Groovy project can transition to using the Groovy client where and when it is convenient without having to make hard decisions about features. Naturally, any new Groovy code can be written to take full advantage of the new client immediately. What does 100% compatibility mean? Having 100% compatibility means that you can and will use the same code as the Java client, often with added Groovy-isms as opposed to custom-written Groovy equivalents (ye olde GClient will be missed … but hopefully forgotten). From development of the Groovy client’s perspective, this means that there is less code to write and test, and superior usability. And, from your perspective, it hopefully means less new code to learn and no worrying about missing any functionality as future versions are released. // No Groovy imports! import org.elasticsearch.action.search.SearchResponse import org.elasticsearch.client.Client import static org.elasticsearch.node.NodeBuilder.nodeBuilder // Create a client node using a mix of the Java NodeBuilder and Groovy extensions Client client = nodeBuilder().client(true).settings { cluster { name = "my-cluster-name" } arbitrary { setting = "arbitraryValue" } }.node().client // Perform a search on your cluster using a Closure! SearchResponse response = client.search { indices "index1", "index2" types "type1", "type2" source { query { match_all { } } } }.actionGet() Making Life Groovy-er As with earlier incarnations of the Groovy client, support for treating a Groovy Closure as data — instead of just as a block of code — is a core concept. This allows you to do things that are sometimes verbose when using the Java client by using a Groovy Closure. In your Java code, you might see something akin to: import static org.elasticsearch.common.xcontent.XContentFactory.*; XContentBuilder builder = jsonBuilder() .startObject() .field("user", "kimchy") .field("postDate", new Date()) .field("message", "trying out Elasticsearch") .endObject() String json = builder.string() However, if you start to use the Groovy client, then you could replace this with an equivalent Closure: String json = { user = "kimchy" postDate = new Date() message = "trying out Elasticsearch" }.asJsonString() No code import. Just Groovy magic! And thanks to this feature, the Groovy client makes full use of passing a Closure into your Elasticsearch Client requests as well: import org.elasticsearch.action.ListenableActionFuture def username = "kimchy" ListenableActionFuture<IndexResponse> responseFuture = client.index { index "my-index" type "my-type" id "my-id" source { user = username postDate = new Date() message = "Trying out Elasticsearch Groovy" nested { nested_object { some_int = 123 some_double = 4.56 some_object_list = [{ key = "Closures" }, { key = "Beats" }, { key = "Bears" }] } favorites = Integer.MAX_VALUE } } } This enables very powerful flows for submitting requests to your Elasticsearch cluster(s) by allowing you to use a Closure instead of a Map or String to pass around request bodies because the full flexibility of each Closure is expanded to be handled as data. It creates some of the easiest to read and reuse conversion code around (and no one’s ever going to keep it down). Finding Closure Most of the flexibility added by the Groovy client comes from extension methods applied to the Closure class itself and the internal usages of it. Other than the convenient-for-debugging closure.asJsonString(), you can easily convert your Closure into a Map<String, Object> using closure.asMap() (this kind of thing is done for you in the Groovy client’s methods that accept a Closure): Map<String, Object> map = { user = "kimchy" postDate = new Date() nested { value = 1.23 } }.asMap() Feedback The release of the Groovy client was because of user comments and requests. If you have a future feature request or find a bug, please let us know by opening an issue on GitHub or if you just want to leave us some good ol’ fashion, 140-character feedback, find us on Twitter (@elasticsearch)!
https://www.elastic.co/fr/blog/making-elasticsearch-groovy-er
CC-MAIN-2019-30
refinedweb
806
54.73
"SnowFlake vs SNOW_FLAKE" Summary The FIDL specification and front-end compiler currently considers two identifiers to be distinct based on simple string comparison. This proposes a new algorithm that takes into account the transformations that bindings generators make. Motivation Language binding generators transform identifiers to comply with target language constraints and style that map several FIDL identifiers to a single target language identifier. This could cause unexpected conflicts that aren't visible until particular languages are targeted. Design This proposes introducing a constraint on FIDL identifiers that no existing libraries violate. It doesn't change the FIDL language, IR (yet [1]), bindings, style guide or rubric. In practice, identifiers consist of a series of words that are joined together. The common approaches for joining words are CamelCase, where a transition from lower to upper case is a word boundary, and snake_case, where one or many underscores ( _) are used to separate words. Identifiers should be transformed to a canonical form for comparison. This will be a lower_snake_case form, preserving the word separation in the original form. Words are broken (1) where there are underscores, (2) on transitions from lower-case or digit to upper-case, and (3) before transitions from upper-case to lower-case. In FIDL, identifiers must be used in their original form. So if a type is named FooBar, attempting to refer to it as foo_bar is an error. There is a simple algorithm to carry out this transformation, here in Python: [2] def canonical(identifier): last = '_' out = '' for i, c in enumerate(identifier): is_next_lower = i + 1 < len(identifier) and identifier[i+1].islower() if c == '_': if last != '_': out += '_' elif (((last.islower() or last.isdigit()) and c.isupper()) or (last != '_' and c.isupper() and is_next_lower)): out += '_' + c.lower() else: out += c.lower() last = c return out Some examples, with their possible translation in various target languages: Implementation strategy The front-end compiler will be updated to check that each new identifier's canonical form does not conflict with any other identifier's canonical form. The next version of the FIDL IR should be organized around canonical names rather than original names, but the original name will be available as a field on declarations. If we can eliminate the use of unmodified names in generated bindings then the original names can be dropped from the IR. Ergonomics This codifies constraints on the FIDL language that exist in practice. Documentation and examples The FIDL language documentation would be updated to describe this constraint. It would be expanded to include much of what's in the Design section above. Because this proposal simply encodes existing practice, examples and tutorials won't be affected. Backwards compatibility Any existing FIDL libraries that would fall afoul of this change violate our style guides and won't work with many language bindings. This does not change the form of identifier that is used to calculate ordinals. Performance This imposes a negligible cost to the front-end compiler. Security No impact. Testing There will be extensive tests for the canonicalization algorithm implementation in fidlc. There will also be fidlc tests to ensure that errors are caught when conflicting identifiers are declared and to make sure that the original names must be used to refer to declarations. Drawbacks, alternatives, and unknowns One option is to do nothing. Generally we catch these issues as build failures in non-C++ generated bindings. As Rust is used more in fuchsia.git, the chance of conflicts slipping through to other petals is lessened. And these issues are already pretty rare. The canonicalization algorithm is simple but has one unfortunate failure case — mixed alphanumeric words in UPPER_SNAKE_CASE identifiers might be broken. For example H264_ENCODER → h264_encoder but A2DP_PROFILE → a2_dp_profile. This is because the algorithm treats digits as lower-case letters. We have to break on digit-to-letter transitions because H264Encoder should canonicalize as h264_encoder. Identifiers with no lower-case letters could be special cased — only breaking on underscores — but that adds complexity to the algorithm and perhaps to the mental model. The canonical form could be expressed as a list of words rather than a lower_camel_case string. They're equivalent and in practice it's simpler to manage them as a string. We could use identifiers' canonical form when generating ordinals. That would make this a breaking change for no obvious benefit. If there is an ordinal-breaking flag day in the future then we could consider that change then. Initial rejection, and second review During its first review on 4/18/2019, this FTP was rejected with the following rationale. - Two opposing views on solving this class of problems. - Work to model target languages' constraints to maintain as much flexibility in FIDL as possible, even if that is different than the recommended style. That's the approach taken by this FTP. - Pros: Keeps flexibility for eventual uses of FIDL beyond Fuchsia, more pure from a programming language standpoint. - Cons: Scoping rules are more complex, style is not enforced, but encouraged (through linting for instance). Could lead to APIs built by partners that do not conform to the Fuchsia style guide we want (since they are not required to run, or adhere to linting). - Enforce style constraints directly in the language, which eliminates the class of problem. - Pros: Style is enforced, developers are told how things ought to be, or it doesn't compile. - Cons: ingrains stylistic choices in the language definition, higher hill to climb for novice developers using FIDL. - → We rejected the proposal, and instead prefer an approach that directly enforces style in the language. - → Next step here is a formal proposal to make this happen, and clarifies all aspects of this (e.g., should uint8be Uint8, vector<T>be Vector<T>?) It was decided to overturn this decision based on the following observations: The kernel API is now described FIDL. This pushed us to re-opened the identifier uniqueness issue last October, and we essentially overturned the decision, allowing both a 'C-style' and 'FIDL-style' to co-exist. This is checked by the fidl linter today. We have seen other use cases pushing FIDL to generalize the transport, and with it possibly also local style rules to better support these domains. Identifier clashes continue to be an issue, and modeling target language constraints is a well identified gap in the FIDL toolchain. Prior art and references In proto3 similar rules are applied to generate a lowerCamelCase name for JSON encoding. Footnote1 until a new version of the IR schema, which would likely carry names with additional structure, rather than the fully-qualified name as it exists today. Footnote2 This algorithm was modified on 2020-06-03, after the FTP was accepted, in order to more closely match the existing behavior of FIDL backends.
https://fuchsia.dev/fuchsia-src/contribute/governance/rfcs/0040_identifier_uniqueness
CC-MAIN-2022-21
refinedweb
1,132
56.05
In this post, I explore the differences between the unittest boolean assert methods assertTrue and assertFalse and the assertIs identity assertion. Definitions Here’s what the unittest module documentation currently notes about assertTrue and assertFalse, with the appropriate code highlighted: assertTrue(expr, msg=None) assertFalse(expr, msg=None) Test that expr is true (or false). Note that this is equivalent tobool(expr) is True and not toexpr is True (use assertIs(expr, True) for the latter). Mozilla Developer Network defines truthy as: A value that translates to true when evaluated in a Boolean context. In Python this is equivalent to: bool(expr) is True Which exactly matches what assertTrue is testing for. Therefore the documentation already indicates assertTrue is truthy and assertFalse is falsy. These assertion methods are creating a bool from the received value and then evaluating it. It also suggests that we really shouldn’t use assertTrue or assertFalse for very much at all. What does this mean in practice? Let’s use a very simple example - a function called always_true that returns True. We’ll write the tests for it and then make changes to the code and see how the tests perform. Starting with the tests, we’ll have two tests. One is “loose”, using assertTrue to test for a truthy value. The other is “strict” using assertIs as recommended by the documentation: import unittest from func import always_true class TestAlwaysTrue(unittest.TestCase): def test_assertTrue(self): """ always_true returns a truthy value """ result = always_true() self.assertTrue(result) def test_assertIs(self): """ always_true returns True """ result = always_true() self.assertIs(result, True) Here’s the code for our simple function in func.py: def always_true(): """ I'm always True. Returns: bool: True """ return True When run, everything passes: always_true returns True ... ok always_true returns a truthy value ... ok ---------------------------------------------------------------------- Ran 2 tests in 0.004s OK Happy days! Now, “someone” changes always_true to the following: def always_true(): """ I'm always True. Returns: bool: True """ return 'True' Instead of returning True (boolean), it’s now returning string 'True'. (Of course this “someone” hasn’t updated the docstring - we’ll raise a ticket later.) This time the result is not so happy:) Only one test failed! This means assertTrue gave us a false-positive. It passed when it shouldn’t have. It’s lucky we wrote the second test with assertIs. Therefore, just as we learned from the manual, to keep the functionality of always_true pinned tightly the stricter assertIs should be used rather than assertTrue. Use assertion helpers Writing out assertIs to test for True and False values is not too lengthy. However, if you have a project in which you often need to check that values are exactly True or exactly False, then you can make yourself the assertIsTrue and assertIsFalse assertion helpers. This doesn’t save a particularly large amount of code, but it does improve readability in my opinion. def assertIsTrue(self, value): self.assertIs(value, True) def assertIsFalse(self, value): self.assertIs(value, False) Summary In general, my recommendation is to keep tests as tight as possible. If you mean to test for the exact value True or False, then follow the documentation and use assertIs. Do not use assertTrue or assertFalse unless you really have to. If you are looking at a function that can return various types, for example, sometimes bool sometimes int, then consider refactoring. This is a code smell and in Python, that False value for an error would probably be better raised as an exception. In addition, if you really need to assert the return value from a function under test is truthy, there might be a second code smell - is your code correctly encapsulated? If assertTrue and assertFalse are asserting that function return values will trigger if statements correctly, then it might be worth sense-checking you’ve encapsulated everything you intended in the appropriate place. Maybe those if statements should be encapsulated within the function under test. Happy testing!
https://jamescooke.info/python-unittest-asserttrue-is-truthy-assertfalse-is-falsy.html
CC-MAIN-2021-17
refinedweb
658
56.86
/* uncomment next line and myreader.cpp will compile, but arduino complains about redefinitions. without it, myreader.cpp says it can't find CallBackInterface or PCintPort.*///#include <ooPinChangeInt.h>#include "myreader.h"MyReader p(2);void setup(){ Serial.begin(9600);}void loop(){ delay(500);} #ifndef __MYREADER_H__#define __MYREADER_H__#include <Arduino.h>#include <ooPinChangeInt.h>#include <cbiface.h>class MyReader : public CallBackInterface{public: unsigned char count; unsigned long time; unsigned long ticksperminute; uint8_t pin; unsigned long total_ticks; MyReader(uint8_t _pin); void cbmethod();private: void init ();};#endif #include <limits.h>#include "myreader.h"#include <ooPinChangeInt.h>//#include <cbiface.h>//#include <Arduino.h>inline unsigned long elapsed_time(unsigned long now, unsigned long then) { if (now < then) { /* number wrapped around; do some math */ return ULONG_MAX - then + now + 1; } else { /* straight-forward difference */ return now - then; }}#define MAX_TICKS_PER_CALC 255#define TICKS_PER_CALC 10static const double coeff = 0.10;MyReader::MyReader (uint8_t _pin): pin(_pin){ init();};void MyReader::cbmethod() { unsigned long now; unsigned long delta; unsigned long tpm; count++; total_ticks++; if(count >= TICKS_PER_CALC) { now = micros(); if (now == time) return; tpm = (unsigned long)( TICKS_PER_CALC / (double) (elapsed_time(now, time)) * 1000000 * 60 ); ticksperminute = tpm * coeff + ticksperminute * (1 - coeff); count=0; time = micros(); }}void MyReader::init () { ticksperminute=0; pinMode(pin, INPUT); digitalWrite(pin, LOW); PCintPort::attachInterrupt(pin, this, RISING); time = micros(); total_ticks = 0;} As an aside, I am finding Arduino's build system to be rather cumbersome and very fragile. You have included Arduino.h in the wrong places, perhaps.And if your myReader.cpp includes myReader.h, which includes the ooPinChange.h, then it doesn't need to include ooPinChange.h again, particularly in a different order. Your files are copied to a temporary directory. The #include directives inform the copying process which ones to copy. If your main sketch does not include a .h file it isn't copied. Annoying perhaps, but that's the way it works.So your main sketch has to include files which it may not personally use. So why, every time I recompile my sketch, does it take ages, and recompiles a whole bunch of stuff I never use, never edit, afaik never reference, and get a whole bunch of compiler warnings ? What version of the IDE are you using, on what operating system? From the sounds of it, you aren't using a recent version.Or, are you not using the IDE at all? Are you using some other method to compile your code? Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=146205.0;prev_next=next
CC-MAIN-2016-18
refinedweb
435
58.99
symfony 2.0 performance tests #1 Posted 23 February 2010 - 03:07 PM According to them symfony 2.0 is 75% faster than yii 1.1.1 First, is Yii 1.1.1 out... ? Second, how is that even possible ? here is the link to those tests : Sounds like a revenge to me. #2 Posted 23 February 2010 - 03:37 PM Note that it says "50% faster than Yii 1.1.1 - for production mode" (that's what counts). They tested the trunk version, where it says in the changelog "1.1.1 - to be released". Other than that I wonder why you guys always freak out when you see frameworks claiming to be faster than Yii? In this case I guess Smyfony just has a slim core with less "magic" involved. Also possible that they gain some speed from the usage of php namespaces, don't know. #3 Posted 23 February 2010 - 03:59 PM. My guess is that with the above two aspects fixed, the gap won't be as big as shown in their results. I am not sure if the benchmark makes a lot of sense because it seems to me it focuses too much on the routing part (10 URL rules are used). A typical yii application should use less than 5 rules if most of its URLs are regular. At this moment, improving the performance is not the top priority for yii because we haven't heard any user complaining about yii's performance. We will continue focusing more on the ease of use, features and documentation in the near future. When time comes for us to develop yii 2.0 (on PHP 5.3), we will revisit this. #4 Posted 23 February 2010 - 05:55 PM I am more than happy with yii performances and I am also glad to say that we have chosen Yii for our next french large scale project in my company. #5 Posted 23 February 2010 - 07:01 PM If we've started to discuss Symfony 2, I'd like to add my two cents and maybe to discuss it a bit. I really like how routing is made. Parameters are bound by name and are passed to controller actions via named method parameters. Still don't know how it's done, but it looks very clean. If I remember correctly, Django does it the same way. File structure, logger, class autoloading, bundles (except that Yii core is not divided too much) are very similar (conceptually) to what is in Yii. Enjoying Yii? Star us at github: 1.1 and 2.0. #6 Posted 23 February 2010 - 07:50 PM #7 Posted 24 February 2010 - 04:39 AM Enjoying Yii? Star us at github: 1.1 and 2.0. #8 Posted 24 February 2010 - 07:54 PM PHP 5.3 really isn't supported by very many shared hosts yet unfortunately. At least not the ones I tried.. Looks similar to yii: I think having the action method named indexAction is worse than actionIndex.. don't know why they do that. Look how much typing was required for their simple route: seems they got a few good ideas going tho #9 Posted 25 February 2010 - 03:26 AM jonah, on 24 February 2010 - 07:54 PM, said: Slightly OT: Your last link reminded me, why i ran far far away from Symfony some years ago: YML. I hated having to learn another syntax to create dynamic PHP code (at least that's how i understood). Why not use PHP for that in the first place? That's a big plus for Yii: I like that everything's pure PHP (except maybe js for the client, but that's a different story). #10 Posted 25 February 2010 - 04:30 AM qiang, on 23 February 2010 - 03:59 PM, said:. Moreover, the fact Yii is the fastest after their own framework, is pretty good. #11 Posted 25 February 2010 - 05:23 AM Well, I don't miss named parameters very much as well. It's just another code style and it looks a bit cleaner. Same about indexAction vs actionIndex. It's just another style. PHP 5.3 is not a big problem nowadays. There are many virtual private server hosting services and they are cheap. ekerazha According to Symfony 2 tests (that are not really fair), Yii is not faster. Mike I can agree about yaml. Too easy to make a typo. In Symfony 2 they do have a good alternative to yaml — XML. There is validation and code completion. As for PHP configs… well they are too much complicated in Symfony 2. Enjoying Yii? Star us at github: 1.1 and 2.0. #12 Posted 25 February 2010 - 06:25 AM Well, even if Symfony would run 100 times faster than YII, I wouldn't use it again (after using it for two years). I think it's overcomplicated. And yes...I think YML and config files sucks too... And btw, we don't need to blame other frameworks... Developers are free to choose what they want, and for us, YII is our choice (at least until we choose other framework that fits better to our needs)... YII doesn't need evangelists. Let's show the world what YII can do developing sites with it... Free as in "beer"... #13 Posted 25 February 2010 - 06:30 AM Noone blames Symfony 2. Fabien did a really good job compared to Symfony 1 and there are some innovations too. We're just discussing good parts of Symfony 2 and trying to imagine if they will fit Yii. Enjoying Yii? Star us at github: 1.1 and 2.0. #14 Posted 25 February 2010 - 08:06 AM samdark, on 25 February 2010 - 05:23 AM, said: According to Symfony 2 tests (that are not really fair), Yii is not faster. I've said it's the fastest after Symfony 2, not that it's faster than Symfony 2. #15 Posted 25 February 2010 - 07:30 PM php: foreach(array('cat', 'dog', 'cow') as $animal) echo $animal."\n"; python: [(animal, print(animal)) for animal in ['cat', 'dog', 'cow']] ruby: ['cat', 'dog', 'cow'].each {|animal| puts animal} You say Tomato, I say Tomato. #16 Posted 17 March 2010 - 06:04 AM #17 Posted 17 March 2010 - 09:39 AM IMHO, the real questions are : 1. Can I produce the application I need without re-inventing the wheel & without a mountain of redundant code sitting around the place. 2. Can I do this *quickly*. 3. Does my real-world application run fast enough to satisfy my prospective user group without upgrading my server hardware. 4. Is the community for the framework I have chosen populated by smart people who are happy to help others (i.e. me). Obviously, for Yii the answer to all 4 questions is a resounding YES (Oh, and yes, YAML sucks ...) #18 Posted 30 March 2010 - 09:47 PM #19 Posted 31 March 2010 - 10:50 AM #20 Posted 29 April 2010 - 05:40 AM
http://www.yiiframework.com/forum/index.php/topic/7474-symfony-2-0-performance-tests/
CC-MAIN-2016-40
refinedweb
1,175
83.46
You can subscribe to this list here. Showing 1 results of 1 hello, while running this program (from the README file, into the package = pyusb-0.3.5).. #____program____ import usb # import the usb module bus =3D usb.busses() # get a list of all available busses dev =3D bus[4].devices[0] # choose the first device on the first bus handle =3D dev.open() # open the device for alt in dev.configurations[0].interfaces[0]: print alt # look at the alternate settings. handle.setConfiguration(0) # choose the first configuration handle.claimInterface(0) # choose the first interface ### Use the device here. ### handle.releaseInterface() # also called automatically on __del__ #____end____ ..i got the following error: Traceback (most recent call last): File "progreadme.py", line 13, in ? handle.setConfiguration(0) # choose the first configuration usb.USBError: could not set config 0: Device or resource busy every bus number i fill in, i got the error, and if i comment the error = line, error in the next line: still "device or resource busy". i have to run it as a Super User, or i'm told "operation not permitted", = but i think that's normal. can someone help me? i'm using debian 4.0 etch thank you -and sorry for my bad english- stefano
http://sourceforge.net/p/pyusb/mailman/pyusb-users/?viewmonth=200707
CC-MAIN-2014-23
refinedweb
211
61.02
6.6. Gluon Implementation in Recurrent Neural Networks¶ @TODO(smolix/astonzhang): the data set was just changed from lyrics to time machine, so descriptions/hyperparameters have to change. This section will use Gluon to implement a language model based on a recurrent neural network. First, we read the Jay Chou album lyrics data set. In [1]: import sys sys.path.insert(0, '..') import gluonbook as gb import math from mxnet import autograd, gluon, init, nd from mxnet.gluon import loss as gloss, nn, rnn import time (corpus_indices, char_to_idx, idx_to_char, vocab_size) = gb.load_data_time_machine() 6.6.1. Define the Model¶ Gluon’s rnn module provides a recurrent neural network implementation. Next, we construct the recurrent neural network layer rnn_layer with a single hidden layer and 256 hidden units, and initialize the weights. In [2]: num_hiddens = 256 rnn_layer = rnn.RNN(num_hiddens) rnn_layer.initialize() Then, we call the rnn_layer’s member function begin_state to return hidden state list for initialization. It has an element of the shape (number of hidden layers, batch size, number of hidden units). In [3]: batch_size = 2 state = rnn_layer.begin_state(batch_size=batch_size) state[0].shape Out[3]: (1, 2, 256) Unlike the recurrent neural network implemented in the previous section, the input shape of rnn_layer here is (time step, batch size, number of inputs). Here, the number of inputs is the one-hot vector length (the dictionary size). In addition, as an rnn.RNN instance in Gluon, rnn_layer returns the output and hidden state after forward computation. The output refers to the hidden states that the hidden layer computes and outputs at various time steps, which are usually used as input for subsequent output layers. We should emphasize that the “output” itself does not involve the computation of the output layer, and its shape is (time step, batch size, number of hidden units). While the hidden state returned by the rnn.RNN instance in the forward computation refers to the hidden state of the hidden layer available at the last time step that can be used to initialize the next time step: when there are multiple layers in the hidden layer, the hidden state of each layer is recorded in this variable. For recurrent neural networks such as long short-term memory networks, the variable also contains other information. We will introduce long short-term memory and deep recurrent neural networks in the later sections of this chapter. In [4]: num_steps = 35 X = nd.random.uniform(shape=(num_steps, batch_size, vocab_size)) Y, state_new = rnn_layer(X, state) Y.shape, len(state_new), state_new[0].shape Out[4]: ((35, 2, 256), 1, (1, 2, 256)) Next, we inherit the Block class to define a complete recurrent neural network. It first uses one-hot vector to represent input data and enter it into the rnn_layer. This, it uses the fully connected output layer to obtain the output. The number of outputs is equal to the dictionary size vocab_size. In [5]: # This class has been saved in the gluonbook) 6.6.2. Model Training¶ As in the previous section, a prediction function is defined below. The implementation here differs from the previous one in the function interfaces for forward computation and hidden state initialization. In [6]: # This function is saved in the gluonbook package for future use. def predict_rnn_gluon(prefix, num_chars, model, vocab_size, ctx, idx_to_char, char_to_idx): # Use model's member function to initialize the hidden state. state = model.begin_state(batch_size=1, ctx=ctx) output = [char_to_idx[prefix[0]]] for t in range(num_chars + len(prefix) - 1): X = nd.array([output[-1]], ctx=ctx).reshape((1, 1)) (Y, state) = model(X, state) # Forward computation does not require incoming model parameters. if t < len(prefix) - 1: output.append(char_to_idx[prefix[t + 1]]) else: output.append(int(Y.argmax(axis=1).asscalar())) return ''.join([idx_to_char[i] for i in output]) Let us make one predication using a model with weights that are random values. In [7]: ctx = gb.try_gpu() model = RNNModel(rnn_layer, vocab_size) model.initialize(force_reinit=True, ctx=ctx) predict_rnn_gluon('traveller', 10, model, vocab_size, ctx, idx_to_char, char_to_idx) Out[7]: 'travellerdsidsidsgn' Next, implement the training function. Its algorithm is the same as in the previous section, but only random sampling is used here to read the data. In [8]: # This function is saved in the gluonbook package for future use. def train_and_predict_rnn_gluon(model, num_hiddens, vocab_size, ctx, corpus_indices, idx_to_char, char_to_idx, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes): loss = gloss.SoftmaxCrossEntropyLoss() model.initialize(ctx=ctx, force_reinit=True, init=init.Normal(0.01)) trainer = gluon.Trainer(model.collect_params(), 'sgd', {'learning_rate': lr, 'momentum': 0, 'wd': 0}) for epoch in range(num_epochs): loss_sum, start = 0.0, time.time() data_iter = gb.data_iter_consecutive( corpus_indices, batch_size, num_steps, ctx) state = model.begin_state(batch_size=batch_size, ctx=ctx) for t, (X, Y) in enumerate(data_iter): for s in state: s.detach() with autograd.record(): (output, state) = model(X, state) y = Y.T.reshape((-1,)) l = loss(output, y).mean() l.backward() # Clip the gradient. params = [p.data() for p in model.collect_params().values()] gb.grad_clipping(params, clipping_theta, ctx) trainer.step(1) # Since the error has already taken the mean, the gradient does not need to be averaged. loss_sum += l.asscalar() if (epoch + 1) % pred_period == 0: print('epoch %d, perplexity %f, time %.2f sec' % ( epoch + 1, math.exp(loss_sum / (t + 1)), time.time() - start)) for prefix in prefixes: print(' -', predict_rnn_gluon( prefix, pred_len, model, vocab_size, ctx, idx_to_char, char_to_idx)) Train the model using the same hyper-parameters as in the previous experiments. In [9]: num_epochs, batch_size, lr, clipping_theta = 200, 32, 1e2, 1e-2 pred_period, pred_len, prefixes = 50, 50, ['traveller', 'time traveller'] train_and_predict_rnn_gluon(model, num_hiddens, vocab_size, ctx, corpus_indices, idx_to_char, char_to_idx, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) epoch 50, perplexity 4.251001, time 0.17 sec - traveller that ne move a moment of the time traveller that - time traveller that ne move a moment of the time traveller that epoch 100, perplexity 2.018702, time 0.17 sec - traveller that is all the ling of the time traveller, 'in a - time traveller some of has of the time traveller some of the ind epoch 150, perplexity 1.483906, time 0.17 sec - traveller man thought real existences, whichness, 'and his - time traveller che has getting all real bus in ony directile of epoch 200, perplexity 1.317007, time 0.17 sec - traveller came back and the sulby they think that spally re - time traveller. 'you can show blay from the present momention.' 6.6.3. Summary¶ - Gluon’s rnnmodule provides an implementation at the recurrent neural network layer. - Gluon’s nn.RNNinstance returns the output and hidden state after forward computation. This forward computation does not involve output layer computation. 6.6.4. Problems¶ - Compare the implementation with the previous section. Does Gluon’s implementation run faster? If you observe a significant difference, try to find the reason.
http://gluon.ai/chapter_recurrent-neural-networks/rnn-gluon.html
CC-MAIN-2019-04
refinedweb
1,124
50.23
Fill values X from values Y I'm trying to create an algorithm to fill some values X (e.g. 10,000, 50,000, 100,000) from another set of values Y (e.g. 2500, 5000, 10,000, 42,500, 27,500). Aim: To get the highest total filling of values X. Rules: Can't repeat any value Y. Any value X must be entirely filled to count. I've tried to knapsack this but it doesn't work very well because it creates huge value arrays. Any ideas? Edit for more clarity: An array of values X (ValueX), and an array of values Y (ValueY). Fill each individual value from ValueX, using any combination of values from ValueY. Once a value from ValueY has been used, it cannot be reused. Example: Fill ValueX[0] (10,000) You could use ValueY[2] (10,000) and that would fill it completely. However now ValueY[2] cannot be reused for future filling of any ValueX. If you then tried to fill ValueX[1] (50,000), you could use ValueY[3] (42,500), ValueY[1] (5000) and ValueY[0] (2500), to get a total of 50,000. Now those values(3, 1, 0) from ValueY have also been used. See also questions close to this topic - Something wrong about ExecutorService#submit public class CASTest { private static final int ii = 10; public static AtomicInteger ai = new AtomicInteger(); //public static CountDownLatch latch = new CountDownLatch(ii); public static void main(String[] args) { DemoRunnable dr = new DemoRunnable(); List<Future<AtomicInteger>> list = new ArrayList<Future<AtomicInteger>>(ii); ExecutorService es = Executors.newCachedThreadPool(); for (int i = 0; i < ii; i++) { list.add(es.submit(new Callable<AtomicInteger>() { @Override public AtomicInteger call() throws Exception { try { Thread.sleep(20); ai.incrementAndGet(); } catch (InterruptedException e) { e.printStackTrace(); } return ai; } })); } for (int i = 0; i < list.size(); i++) { try { System.out.println(list.get(i).get()); } catch (InterruptedException e) { e.printStackTrace(); } catch (ExecutionException e) { e.printStackTrace(); } finally { es.shutdown(); } } } } and the console print I wanna the console print like this " 1 2 3 4 ... 10 " in order but why it acting like this? - How to stop child class to stop executing further line of code, if Parent Activity starts an intent I have a base class that checks if app is launched from history, and no processes of app are available, then redirect user to MainActivity. This is my base class code; public abstract class BaseActivity extends AppCompatActivity { protected void onCreate(Bundle savedInstanceState) { if(MainActivity.instance == null){ //instance is a static variable in MainActivity Intent i= new Intent(this, MainActivity.class); startActivity(i); this.finish(); return; }else{ setContentView(getLayoutRes()); } } //Child app override this method, and provides their layout. protected abstract int getLayoutRes(); Child Activity public class MapActivity extends BaseActivity{ @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); //1000 line of code } @Override protected int getLayoutRes() { return R.layout.activity_map; } Now what it does when user launches the app from history, and no processes of the app are available, then it reads the check of BaseActivityand tries to start the MainActivityintent, and does not set the layout, but the problem is that it does not stop there, it goes to MapActivity- onCreate()method and start executing the other lines of code, as i have not set layout, it crashes while trying to read findViewById(), it also fails at other static variables. So the whole point of redirecting user to MainActivitywhen the process is killed is not fullfiled. I need to know how to stop executing code of child class when BaseClassstart the intent - Make two thread plus an integer alternately I write the code below in order to plus an integer nwith two threads alternately until nreach to LIMIT. Sometimes it works well, but in most of time it doesn't, and both t1& t2are waiting. I'm a beginner for Java concurrent programming. Could anyone point out my bug? Thanks in advance :-) import java.util.concurrent.locks.Condition; import java.util.concurrent.locks.ReentrantLock; public class TwoThreads { private final int LIMIT = 500; private volatile int n = 0; public void runWithReentrantLock() { final ReentrantLock lock = new ReentrantLock(); final Condition t1CanRun = lock.newCondition(); final Condition t2CanRun = lock.newCondition(); Thread t1 = new Thread(new Runnable() { @Override public void run () { while(n < LIMIT) { lock.lock(); try { t1CanRun.await(); if (n < LIMIT) { n++; System.out.println(Thread.currentThread().getName() + "-->" + n); } t2CanRun.signal(); }catch (InterruptedException e) { e.printStackTrace(); } finally { lock.unlock(); } } } }, "t1"); Thread t2 = new Thread(new Runnable() { @Override public void run() { while(n < LIMIT) { lock.lock(); try { t2CanRun.await(); if(n < LIMIT) { n++; System.out.println(Thread.currentThread().getName() + "-->" + n); } t1CanRun.signal(); } catch (InterruptedException e) { e.printStackTrace(); } finally { lock.unlock(); } } } }, "t2"); t1.start(); t2.start(); lock.lock(); t1CanRun.signal(); lock.unlock(); } public static void main(String[] args) { TwoThreads instance = new TwoThreads(); instance.runWithReentrantLock(); } } the thread dump when result is bad -]] - Python - Tricky combination I am looking for a way to combine one element of each list below at a time, with the subsequent iteration being only able to grab among not taken elements yet. For instance: I have three lists. These lists would always be composed of two elements. lst_1 = ['AB','CD'] lst_2 = [10,10] lst_3 = [30,60] I would like to iterate two times. The first time, it selects out = [(['AB',10,30],['CD',10,60])] meaning two combinations of one element of each list, element that once selected can't be selected again. Please note that when I say "element that once selected can't be selected again", it is based on it's position in the list. For instance, lst_2 as twice 10. However, I select them twice because I am not looking at the value but at its position. Second iteration: out = (['AB',10,60],['CD',10,30])] meaning two combination of one element of each list, combination already selected previously not being able to be selected: 'AB' for instance, could only be combined with 10 (lst_2) and 60 (lst_3), given that on the first iteration it took 30 as it's combination with the third list. Thus far, I was able to tackle the first part (the first iteration): test = [lst_1,lst_2,lst_3] unzipped = list(zip(*test)) - plain javascript - function to get set of pairs from two arrays given two arrays (of unknown size) like so: var first = ['A1','A2','A3']; var second= ['B1','B2']; I want to get set of all possible pairs from those arrays. In above example proper result should look like this: var result = [['A1B1','A2B2'], ['A1B1','A3B2'],['A1B2','A2B1'],['A1B2','A3B1'],['A2B1','A3B2'],['A2B2','A3B1']]; I tried to come up with solution on my own, but I am stuck in increasingly bigger numer of nested loops... EDIT: Sorry, perhaps I wasn't that clear when forming my question. Maybe it would be easier on an example - let's assume that first array contains names of truck drivers, and second array contains cities. Each driver can only drive to one city, and each city can only be visited by single driver. In this scenario expected output of the function would be a complete list of possible combinations (driver + city). Unknown lengths of input arrays obviously means that there may be drivers that won't drive anywhere, and there might be cities which won't get visited by any driver. I hope now my question is more clear now. EDIT2: I don't think this is a duplicate of Finding all possible value combinations between two arrays since accepted answer in that question works only (if I understand it correctly) for second array of length 2. - How to become a software engineer that every big company is looking for? I am a student of bachelor of science and technology, I have hit many walls, I did MEAN Stack programming for an year as well as Machine learning and deep learning, along with my B.tech studies. Now I am in the 6th semester of my college and have realised that companies require people who know everything about how a computer works as well as programming logic... So I have started brushing up my Algorithms and data structure. Can you help me tell some sources you did study from? - get checkbox state from dynamic element I'm using the YouTube API to populate a list of cards to my app using an each function. These cards are placed into a variable, then appended to a playlist container. Within the individual cards is a switch to "favorite" a video. The switch, along with rest of the card is populated dynamically, so I'm trying to use the "on" "change", instead of ".change" "function". From what I've ready the on method is what you use to manipulate dynamic elements. The challenge I'm running into is that I cannot seem to get the correct state of the switch(checkbox), and I'm always returning the first if statement no matter what I'm doing. Can anyone spot what I'm doing wrong here? The plan is to push to firebase if a user clicks to "favorite" a video, but I cannot get this function working. Thanks in advance! :D $(document).on("change", ".switch", function () { if (this.checked) { console.log("Checked"); } else { console.log("NOT Checked"); } }); - Knapsack Algorithm Code I am trying to code the algorithm for the knapsack problem in C++. I can't get this to work. I am getting max sum equals 0 for any input. I think the problem relies in the nested for-loop. Can someone point out what went wrong and how to fix the code ?. I would like to know also what you think about my implementation. Thanks for the help. #include <iostream> #include <vector> #include <set> #include <utility> using namespace std; using knapsack_pair = pair<int,int>; // pair<value,weight> using matrix_value_type = pair<int, set<knapsack_pair>>; // pair<max, set of elements> matrix_value_type max(matrix_value_type first_value, matrix_value_type second_value){ if(first_value.first >= second_value.first) return first_value; else return second_value; } int main(){ int number_of_values; int knapsack_size; cout << "How many values are you going to enter?" << endl; cin >> number_of_values; cout << "what's the size of the knapsack?" << endl; cin >> knapsack_size; int counter = 0; vector<knapsack_pair> knapsack_vector; while(counter < number_of_values){ knapsack_pair new_pair; cout << "insert the value of the element" << endl; cin >> new_pair.first; cout << "insert the weight of the element" << endl; cin >> new_pair.second; knapsack_vector.push_back(new_pair); ++counter; } matrix_value_type matrix[number_of_values + 1][knapsack_size + 1]; for(int i = 0; i < knapsack_size + 1; ++i){ set<knapsack_pair> empty_set; matrix_value_type value(0,empty_set); matrix[0][i] = value; } for(int x = 1; x < number_of_values + 1; ++x){ knapsack_pair current_pair = knapsack_vector.at(x-1); for(int y = 0; y < knapsack_size + 1; ++y){ matrix_value_type first_value = matrix[x-1][y]; matrix_value_type second_value; int weight_s_pair = y - current_pair.second; if(weight_s_pair >= 0){ second_value = matrix[x-1][weight_s_pair]; second_value.first = second_value.first + current_pair.first; second_value.second.insert(current_pair); } else second_value.first = -999; matrix[x][y] = max(first_value,second_value); } } matrix_value_type result = matrix[number_of_values][knapsack_size]; cout << "the max value is: " << result.first << endl; cout << "the elements in the knapsack are: " << endl; for(auto& it : result.second) cout << "element of value: " << it.first << " and weight: " << it.second; } - Recursive Knapsack in R I wrote the following code in Rfor recursive solution of knapsack problem, but the output is NULL. Please let me know where I am making errors. k.capacity<- 8 k.value<- c(15,10,9,5) k.weight<- c(1,5,3,4) k.sol<- for (i in 1:4) { if(k.weight[i]>k.capacity) return(k.weight[i-1]) else(max(k.value[i]+k.value[i-1]&&(k.capacity-k.weight[i-1])|| k.value[i-1] && k.weight[i])) } - Complexity Analysis Unbounded Knapsack-like (profit) I found myself unable to reach a runtime Big-O boundary expression for the following python code: def profit3(values_lst, size): n = len(values_lst) return profit_rec(values_lst, n, size) def profit_rec(values_lst, i, size): if size==0 or i==0: return 0 if size==1: return values_lst[0] else: return max(values_lst[i-1]+profit_rec(values_lst,min(i, size-i),size-i), profit_rec(values_lst, i-1, size)) Code is intended to give a land owner (land size) the maximum value he can get on his property if he divides the property into smaller areas where values_lst contains the value of each size (index 0 for size 1, index 1 for size 2, etc.) This is quite similar to the unbounded knapsack problem as size is akin to weight, and we can use the same size again and again within our problem. However, I did not manage to find a good source that I will understand fully why the complexity of the unbounded knapsack problem is O(nW), which in the profit case, would translate to O(n^2)? .. and my runtime analysis shows that O(n^2)is too small for this solution, albeit, profit3()does not apply memoization, which might be the crux of the matter. Regardless, I'd like to be able to find the Big-Oh of the current problem and would appreciate some help.
http://codegur.com/48248830/fill-values-x-from-values-y
CC-MAIN-2018-05
refinedweb
2,161
55.84
How to prevent carrierwave from adding the filename string to the mounted column when file is not uploaded? I am using Rails and ActiveRecord. I have carrierwave mounted on one of the columns(:logo) of a model(Listing). My default filename is "disp_logo". Let's say I just do Listing.create! In this case, I haven't really uploaded any file. I did not do Listing.logo=<some file> or Listing.remote_logo_url=<some url>. But, carrierwave still inserts the string "disp_logo" in the :logo column. Why does it do that? How can I prevent carrierwave from doing so? My uploader class has the following methods: def store_dir "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}" end def filename "disp_logo" end That's what I mean by default filename. I want the column :logo to have NULL if image is not uploaded. Instead it has "disp_logo". Answers It would seem you created your migration with "disp_logo" as the default value for your column. You should set the default image in your uploader instead, like this: # Provide a default URL as a default if there hasn't been a file uploaded: def default_url "/" + [version_name, "disp_logo.jpg"].compact.join('_') end Or if you use rails 3.1 and the assets pipeline: # Include the Sprokets helpers for Rails 3.1+ asset pipeline compatibility: include Sprockets::Helpers::RailsHelper include Sprockets::Helpers::IsolatedHelper # Provide a default URL as a default if there hasn't been a file uploaded: def default_url asset_path [version_name, "disp_logo.jpg"].compact.join('_') end Edit: Modify your filename method as follows: def filename "something.jpg" unless original_filename.nil? end Need Your Help Gotomeeting php api(oauth) implementation I can't get images to show up in Eclipse java eclipse image embedded-resourceI am fairly new to using Swing and I thought that in Eclipse all I would have to do is put an image file into the SRC directory and the program would have access to it. Of course this is not the c...
http://unixresources.net/faq/8346538.shtml
CC-MAIN-2019-13
refinedweb
333
50.53
As per GoF definition, flyweight design pattern enables use sharing of objects to support large numbers of fine-grained objects efficiently. A flyweight is a shared object that can be used in multiple contexts simultaneously. The flyweight acts as an independent object in each context. 1. When to use flyweight design pattern We can use flyweight pattern in following scenarios: - When we need a large number of similar objects that are unique in terms of only a few parameters and most of the stuffs are common in general. - We need to control the memory consumption by large number of objects – by creating fewer objects and sharing them across. 2. Extrinsic and intrinsic attributes A flyweight objects essentially has two kind of attributes – intrinsic and extrinsic. An intrinsic state attribute is stored/shared in the flyweight object, and it is independent of flyweight’s context. As the best practice, we should make intrinsic states immutable. An extrinsic state varies with flyweight’s context, which is why they cannot be shared. Client objects maintain the extrinsic state, and they need to pass this to a flyweight object during object creation. 3. Real world example of flyweight pattern - Suppose we have a pen which can exist with/without refill. A refill can be of any color thus a pen can be used to create drawings having N number of colors. Here Pencan be flyweight object with refillas extrinsic attribute. All other attributes such as pen body, pointer etc. can be intrinsic attributes which will be common to all pens. A pen will be distinguished by its refill color only, nothing else. All application modules which need to access a red pen – can use the same instance of red pen (shared object). Only when a different color pen is needed, application module will ask for another pen from flyweight factory. - In programming, we can see java.lang.String constants as flyweight objects. All strings are stored in string pool and if we need a string with certain content then runtime return the reference to already existing string constant from the pool – if available. - In browsers, we can use an image in multiple places in a webpage. Browsers will load the image only one time, and for other times browsers will reuse the image from cache. Now image is same but used in multiple places. It’s URL is intrinsic attribute because it’s fixed and shareable. Images position coordinates, height and width are extrinsic attributes which vary according to place (context) where they have to be rendered. 4. Flyweight design pattern example In given example, we are building a Paint Brush application where client can use brushes on three types – THICK, THIN and MEDIUM. All the thick (thin or medium) brush will draw the content in exact similar fashion – only the content color will be different. public interface Pen { public void setColor(String color); public void draw(String content); } public enum BrushSize { THIN, MEDIUM, THICK } public class ThickPen implements Pen { final BrushSize brushSize = BrushSize.THICK; //intrinsic state - shareable private String color = null; //extrinsic state - supplied by client public void setColor(String color) { this.color = color; } @Override public void draw(String content) { System.out.println("Drawing THICK content in color : " + color); } } public class ThinPen implements Pen { final BrushSize brushSize = BrushSize.THIN; private String color = null; public void setColor(String color) { this.color = color; } @Override public void draw(String content) { System.out.println("Drawing THIN content in color : " + color); } } public class MediumPen implements Pen { final BrushSize brushSize = BrushSize.MEDIUM; private String color = null; public void setColor(String color) { this.color = color; } @Override public void draw(String content) { System.out.println("Drawing MEDIUM content in color : " + color); } } Here brush color is extrinsic attribute which will be supplied by client, else everything will remain same for the Pen. So essentially, we will create a pen of certain size only when the color is different. Once another client or context need that pen size and color, we will reuse it. import java.util.HashMap; public class PenFactory { private static final HashMap<String, Pen> pensMap = new HashMap<>(); public static Pen getThickPen(String color) { String key = color + "-THICK"; Pen pen = pensMap.get(key); if(pen != null) { return pen; } else { pen = new ThickPen(); pen.setColor(color); pensMap.put(key, pen); } return pen; } public static Pen getThinPen(String color) { String key = color + "-THIN"; Pen pen = pensMap.get(key); if(pen != null) { return pen; } else { pen = new ThinPen(); pen.setColor(color); pensMap.put(key, pen); } return pen; } public static Pen getMediumPen(String color) { String key = color + "-MEDIUM"; Pen pen = pensMap.get(key); if(pen != null) { return pen; } else { pen = new MediumPen(); pen.setColor(color); pensMap.put(key, pen); } return pen; } } Let’s test the flyweight pen objects using a client. The client here creates three THIN pens, but in runtime their is only one pen object of thin type and it’s shared with all three invocations. public class PaintBrushClient { public static void main(String[] args) { Pen yellowThinPen1 = PenFactory.getThickPen("YELLOW"); //created new pen yellowThinPen1.draw("Hello World !!"); Pen yellowThinPen2 = PenFactory.getThickPen("YELLOW"); //pen is shared yellowThinPen2.draw("Hello World !!"); Pen blueThinPen = PenFactory.getThickPen("BLUE"); //created new pen blueThinPen.draw("Hello World !!"); System.out.println(yellowThinPen1.hashCode()); System.out.println(yellowThinPen2.hashCode()); System.out.println(blueThinPen.hashCode()); } } Program output. Drawing THICK content in color : YELLOW Drawing THICK content in color : YELLOW Drawing THICK content in color : BLUE 2018699554 //same object 2018699554 //same object 1311053135 5. FAQs 5.1. Difference between singleton pattern and flyweight pattern The singleton pattern helps we maintain only one object in the system. In other words, once the required object is created, we cannot create more. We need to reuse the existing object in all parts of the application. The flyweight pattern is used when we have to create large number of similar objects which are different based on client provided extrinsic attribute. 5.2. Effect of concurrency on flyweights Similar to singleton pattern, if we create flyweight objects in concurrent environment, we may end up having multiple instances of same flyweight object which is not desirable. To fix this, we need to use double checked locking as used in singleton pattern while creating flyweights. 5.3. Benefits of flyweight design pattern Using flyweights, we can – - reduce memory consumption of heavy objects that can be controlled identically. - reduce the total number of “complete but similar objects” in the system. - provide a centralized mechanism to control the states of many “virtual” objects. 5.4. Is intrinsic and extrinsic data shareable? The intrinsic data is shareable as it is common to all contexts. The extrinsic data is not shared. Client need to pass the information (states) to the flyweights which is unique to it’s context. 5.5. Challenges of flyweight pattern - We need to take the time to configure these flyweights. The design time and skills can be overhead, initially. - To create flyweights, we extract a common template class from the existing objects. This additional layer of programming can be tricky and sometimes hard to debug and maintain. - The flyweight pattern is often combined with singleton factory implementation and to guard the singularity, additional cost is required. Drop me your questions related to flyweight pattern in comments. Happy Learning !! 4 thoughts on “Flyweight Design Pattern” Hii lokesh we can make extrinsic attribute immutable. In multi threading environment every thread will be sure same extrinsic object they are working with no other threads modifies it. Private final String color; MediumPen (String color){ this.color=color; } Using above approach instead of setter metter for setting extrinsic attribute. Hi there, The issue I am facing is that you need to know a compile-time the name of all the flyweights objects – there are cases that that decision is taken at runtime. What would you suggest here? generating and compiling classes at runtime? hi Lokesh, Thanks for your time to create this post. Suppose I have created a system using the exact logic you provided. Suppose its a multi threaded app, Thread1: T1 requested YELLOW pen object and factory returns it successfuly. Thread2: T2 requested YELLOW pen object(but T1 has not completed its work) and factory returned the object which is being used by T1. My Solution: If I synchronize the below method that should solve my concurrency problems right? @Override public void draw(String content) { System.out.println(“Drawing THICK content in color : ” + color); } That makes sense.
https://howtodoinjava.com/design-patterns/structural/flyweight-design-pattern/
CC-MAIN-2022-27
refinedweb
1,394
56.76
Closed Bug 1001159 Opened 6 years ago Closed 6 years ago Simplify Cell Iter Impl and Arena::finalize() Categories (Core :: JavaScript: GC, defect) Tracking () mozilla32 People (Reporter: njn, Assigned: njn) Details Attachments (6 files, 3 obsolete files) CellIterImpl and its subclasses are hard to read because there are two modes: - Iterating over all cells in a Zone - Iterating over all cells in an Arena There are two init() functions, one for each mode. Furthermore, I think there's a subtle bug in the single-Arena mode which could cause use to iterate one cell into the subsequent arena. Imagine you want to iterate over an empty arena. In the current code, you'll call next() from init(ArenaHeader*). The first time around the loop you'll have the not-real initial values for |span| and |thing|, so you'll hit the aiter.get() case, which is fine -- this gets the passed-in arena. But if this arena is empty, the second time around the loop you'll again skip past the first two conditions and hit the aiter.done() case. If there's a subsequent arena, you'll enter it -- even though we're supposed to be iterating only through one arena! There's an argument-less aiter.init() call in init(ArenaHeader*) which usually prevents iterating into that next arena, but it occurs *after* that first next() call. I added some diagnostic code and we don't seem to be hitting this case in practice. Perhaps it's just luck, or perhaps there are other structural constraints preventing it. Either way, it gives me the heebie-jeebies and I want to fix it. This patch separates CellIterImpl in two, to distinguish the two modes. Specifically: - CellIterImpl becomes ZoneCellIterImpl and ArenaIterImpl (and each class gets one of the init() functions). - CellIterUnderGC becomes ZoneCellIterUnderGC and ArenaIterUnderGC (ditto). - CellIter becomes ZoneCellIter (no ArenaIter is needed). This patch is very mechanical, and involves some code duplication. The subsequent patches will improve on that. This patch merges ZoneCellIterImpl::{initSpan,init}() and ArenaCellIterImpl::{ArenaCellIterImpl,initSpan,init}(). initSpan() wasn't a good name for that function anyway. This patch removes the now-unnecessary ArenaIter from ArenaCellIterImpl. This fixes the potential bug mentioned in comment 0 -- if the passed-in arena is entirely empty, we now just set |cell| to null and so done() will succeed. It also gets rid of one initAsEmpty() call, which is good, because I want to eliminate that function -- it's horrible. I have other ideas to clean up these iterator classes further, but making the zone vs. arena split clearer and fixing the potential next-arena defect is probably enough for this bug. (In reply to Nicholas Nethercote [:njn] from comment #0) > CellIterImpl and its subclasses are hard to read because there are two modes: > - Iterating over all cells in a Zone > - Iterating over all cells in an Arena > There are two init() functions, one for each mode. +1. Also the Zone version does not really iterate across a Zone as much as across an Allocator (the Zone itself is just used for some asserts); I have patches in progress that make use of that simplification. > Also the Zone version does not really iterate across a Zone as much as > across an Allocator (the Zone itself is just used for some asserts); I have > patches in progress that make use of that simplification. Oh, interesting. I didn't even know about the Allocator class. Please feel free to provide additional review for these patches! This seems nice, but I don't like the extra duplication, especially for such tricky code. Ideally we could keep an instance of ArenaCellIterImpl as a member of ZoneCellIterImpl and call some sort of reset method on it every time we want to move to a new arena. That reset method could be private and friended to ZoneCellIterImpl maybe. Do you think you could try to do that? The reason the problem you mentioned in comment 0 doesn't happen is that we never iterate over empty arenas. The sweeping code immediately removes empty arenas from the usual arena lists. I admit that we should at least have an assertion or something about that though. This is almost the same as the previous version. Again, it's just the mechanical splitting; follow-ups will clean this up. Attachment #8414924 - Flags: review?(wmccloskey) Attachment #8412164 - Attachment is obsolete: true This patch simplifies ArenaCellIterImpl greatly: no ArenaIter required, no loops required. Attachment #8414925 - Flags: review?(wmccloskey) This patch layers ZoneCellIterImpl on top of ArenaCellIterImpl, simplifying it greatly. Attachment #8414926 - Flags: review?(wmccloskey) Arena::finalize() is complex. In particular, the logic underlying the creation of the new free list is spread throughout. This patch simplifies things. - The logic underlying the creation of the new list is more concentrated. In particular, none of it is in the |thing == nextFree.first| branch; this will allow this loop to be later re-written in terms of an single-arena cell iterator. - There are some extra assertions to make sure the computation of |nfree| is sensible. - It moves the arena-is-entirely-free case after the full (DEBUG-only checking) -- there's no point in excluding it from that checking. Attachment #8414929 - Flags: review?(wmccloskey) There are two things I'm uncertain about in this patch. - I had to comment out the assertion in ArenaCellIterImpl::init(). I don't entirely understand what this assertion does. - I'm using ArenaCellIterImpl directly in Arena::finalize(), which feels gross. One possibility for solving both problems is to implement a new iterator on top of ArenaCellIterImpl (ArenaCellIter?) and move the assertion into ArenaCellIterUnderGC. Attachment #8414931 - Flags: review?(wmccloskey) Attachment #8412166 - Attachment is obsolete: true Attachment #8412170 - Attachment is obsolete: true Summary: Split CellIterImpl in two → Simplify CellIterImpl and Arena::finalize() BTW, I'm not sure what the "UnderGC" suffix means. Does that mean it's happening during a GC? Try push looks good: Comment on attachment 8414925 [details] [diff] [review] (part 2) - Rewrite ArenaCellIterImpl Review of attachment 8414925 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/src/jsgcinlines.h @@ +165,5 @@ > > + // Upon entry, |thing| points to any thing (free or used) and finds the > + // first used thing, which may be |thing|. > + void moveForwardIfFree() { > + MOZ_ASSERT(!done()); Please use JS_ASSERT here and everywhere else. @@ +179,5 @@ > > public: > + ArenaCellIterImpl() {} > + > + void init(ArenaHeader *aheader) { Can we move this code to the constructor? The reason we had the init routines before was to share common code between the zone and arena constructors. Now that we only have one case, I think we can just move everything to the constructor (unless something changes in later patches...). @@ +180,5 @@ > public: > + ArenaCellIterImpl() {} > + > + void init(ArenaHeader *aheader) { > + MOZ_ASSERT(aheader); Please just remove this one. We'll safely crash on the next line if we get null. Attachment #8414925 - Flags: review?(wmccloskey) → review+ Comment on attachment 8414926 [details] [diff] [review] (part 3) - Rewrite ZoneCellIterImpl Review of attachment 8414926 [details] [diff] [review]: ----------------------------------------------------------------- Oh, I see. You're using init() as the reset method for this patch. Please disregard my remark from before. Although it would be nice if we could avoid recomputing thingSize for every arena. ::: js/src/jsgcinlines.h @@ +226,5 @@ > > class ZoneCellIterImpl > { > + ArenaIter aIter; > + ArenaCellIterImpl acIter; How about calling these arenaIter and cellIter? Attachment #8414926 - Flags: review?(wmccloskey) → review+ FWIW, more changes to CellIterUnderGC (to support PJS garbage collection) are coming, currently at the bottom of this patch: Basically it bypasses the Zone because I have Allocators that are not attached to Zones. Comment on attachment 8414929 [details] [diff] [review] (part 5) - Simple FreeSpan computation in Arena::finalize() Review of attachment 8414929 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/src/jsgc.cpp @@ +499,5 @@ > nfree += (newListTail->last + 1 - newListTail->first) / thingSize; > JS_ASSERT(nfree + nmarked == thingsPerArena(thingSize)); > #endif > + > + if (nmarked == 0) { Not sure why you moved this down. It just causes us to do a little extra work. Did you want to get the assertions to run? In any case, I think I'd rather it stay where it is. Asserting here that |firstThingOrSuccessorOfLastMarkedThing == thingsStart(thingKind)| covers pretty much everything we might be concerned about, I think. Attachment #8414929 - Flags: review?(wmccloskey) → review+ Comment on attachment 8414931 [details] [diff] [review] (part 6) - Used ArenaCellIterImpl in Arena::finalize() Review of attachment 8414931 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/src/jsgc.cpp @@ +448,4 @@ > uintptr_t lastByte = thingsEnd() - 1; > > + ArenaCellIterImpl i; > + i.init(&aheader); It would be nice for ArenaCellIterImpl (or ArenaCellIterUnderFinalize, as in the next comment) to have a constructor taking an ArenaHeader*. Then we could move the def of i to the for loop below: for (ArenaCellIterUnderFinalize i(&aheader); !i.done(); i.next()) { ... } I think that would be nice. ::: js/src/jsgcinlines.h @@ +183,5 @@ > > void init(ArenaHeader *aheader) { > MOZ_ASSERT(aheader); > AllocKind kind = aheader->getAllocKind(); > + //MOZ_ASSERT(aheader->zone->allocator.arenas.isSynchronizedFreeList(kind)); Let's just expose a method initUnsynchronized or something that doesn't run this assertion. Then we'd also have an init() method that would do the assert and call initUnsynchronized. Then let's have two subclasses: ArenaCellIterUnderGC: constructor calls init ArenaCellIterUnderFinalize: constructor calls initUnsynchronized and use the latter from Arena::finalize. The main thing I want to ensure is that ZoneCellIterImpl runs the assertion on every arena that it gets. Attachment #8414931 - Flags: review?(wmccloskey) → review+ Thanks Nick. These changes are fantastic. The UnderGC stuff is indeed intended to be run during GC. They bypass some code that's not needed during GC: waiting for background finalization, doing a minor GC, and synchronizing the free lists. It probably makes sense to explain freelist synchronization. Normally, we store the first FreeSpan for an Arena in its ArenaHeader. However, if we're allocating into an Arena, then we store the FreeSpan in the Zone (well, the Allocator for the Zone). (We could have the allocator load the current Arena being allocated into and then read the FreeSpan from that, but it would cost us an extra load.) We don't keep the ArenaHeader's FreeSpan up to date while we're allocating into that Arena. But that means that whenever we want to iterate over Arenas, we need to copy the FreeSpan for the Arena being allocated into back to its ArenaHeader. The CellIter class automatically copies the FreeList back to the ArenaHeader in its constructor. We don't do that for CellIterUnderGC. It just asserts that things are already synchronized because the GC is supposed to take care of that. However, Arena::finalize is not exactly part of the GC. It can run on the background sweeping thread at the same time that same time that the mutator is allocating. So that's why that assertion was failing. > The main thing I want to ensure is that ZoneCellIterImpl runs the assertion on every arena > that it gets. I guess this actually doesn't make much sense since the assertion only looks at the current arena being allocated into, which should be the same for every Arena of a given thingKind in the Zone. So just do whatever makes sense to you here. (In reply to Lars T Hansen [:lth] from comment #18) > FWIW, more changes to CellIterUnderGC (to support PJS garbage collection) > are coming, currently at the bottom of this patch: > > > > Basically it bypasses the Zone because I have Allocators that are not > attached to Zones. I *think* my changes won't cause major conflicts with you -- you'll need to do some rebasing, but you're mostly adding new init()/constructor functions, and that should also be doable with my split design. Another good try run, testing the patches with comments addressed: I landed these in two tranches of three patches each, because it might help with identifying perf regressions. Status: ASSIGNED → RESOLVED Closed: 6 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla32
https://bugzilla.mozilla.org/show_bug.cgi?id=1001159
CC-MAIN-2019-43
refinedweb
1,947
56.25
I have some sample code that iterates through two different ranges of numbers successfully, but I want to add functionality to it so that it moves through the chained ranges randomly like so: import itertools import random for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): print f Traceback (most recent call last): File "<pyshell#12>", line 1, in <module> for f in random.sample(itertools.chain(range(30, 54), range(1, 24)), 48): File "G:\Python27\lib\random.py", line 321, in sample n = len(population) TypeError: object of type 'itertools.chain' has no len() A quick fix would be as follows: for f in random.sample(list(itertools.chain(range(30, 54), range(1, 24))), 48): The problem with your code is that to sample from some iterable randomly, you need to know its length first, but itertools.chain is an iterable that provides only the __iter__ method and no __len__. Basically, to do random.choice or random.sample or anything that involves choosing elements at random, you'll need a sequence or a set, which means that sequence should be finite. Iterables that don't provide the __len__ method are considered infinite as you'll never know how many elements will be produced until the iterable's exhausted, if at all.
https://codedump.io/share/jj1XxRWu0xL0/1/random-and-itertools
CC-MAIN-2017-39
refinedweb
219
62.78
Red Hat Bugzilla – Bug 480103 Review Request: bnIRC - An ncurses based IRC client and modular IRC framework.(Need Sponsorship. First time Packager) Last modified: 2012-06-29 18:31:12 EDT Spec URL: SRPM URL: Description: Hi. This is the first time I'm trying to contribute an RPM to Fedora and would love for someone to sponsor me and review my rpm. All criticism is welcome :). bnIRC is a ncurses based IRC client as well as a modular IRC framework. It can easily be extended through the creation of plugins written in Python. Just some quick comments on your spec file. - There is no need for '%define name bnIRC' and '%define version 1.1.1' because 'Name:' and 'Version:' can be used as macros later. - Source0: should point to the upstream location of the tarball. - 'BuildRoot:' please use on of the examples in the guidelines - Your %description is too long. Didn't rpmlint complain about this? - Please preserve the time stamps in your %install section if possible make install DESTDIR=%{buildroot} INSTALL="install -p" - You are using '%post -p /sbin/ldconfig' and '%postun -p /sbin/ldconfig'. Aren't 'Requires(post): /sbin/ldconfig' and 'Requires(postun): /sbin/ldconfig' missing? - Please use one of the formating style from the guidelines for your %changelog entry Thanks for your input. Much appreciated. I went ahead and made the appropriate changes and uploaded a new set of rpm's. I have to admit, the %changelog thing took me a while to figure out. I kept staring at it and staring at it, but not seeing what's wrong. I then noticed that I was missing the version/release number :) I had a look through the Fedora Guidelines for the %post ldconfig sections but didn't see any reference to 'Requires(post): /sbin/ldconfig. I then did a quick google search and came across the following bugzilla entry which mentions that '%post -p /sbin/ldconfig' notation automatically mentions 'Requires(post): /sbin/ldconfig'. So I left 'Requires(post):' out. If it's really needed, please let me know and I'll add it in. When running rpmlint against the RPM, I do have errors. They seem to have something to do with ownership of directories. Here is a sample: bnIRC.i386: E: standard-dir-owned-by-package /usr/share Not quite sure what I can do differently in my spec file to fix this. I can only assume I would have to make changes to my %files section and/or %defattr, but I have no idea what to change. Once again, any and all feedback/guidance is really appreciated. Thanks. Forgot to provide the new links. SRC RPM: SPEC FILE: I will do a full review soon but be aware I can't sponsor you. Created attachment 329407 [details] rpmlint output There are still some issues. - From my point of view, the name should be bnirc.spec - One line per BR would be nice - The %file section needs some work - duplicates - ownership - You need to make a devel subpackage - *.la files must be deleted The rpmlint output [fab@laptop024 i386]$ rpmlint bnIRC* bnIRC.i386: W: devel-file-in-non-devel-package /usr/src/debug/bnIRC-1.1.1/plugins/server_strings/server_strings.c .... .... .... bnIRC.i386: W: unstripped-binary-or-object /usr/lib/debug/usr/bin/bnirc.debug bnIRC.i386: E: statically-linked-binary /usr/lib/debug/usr/bin/bnirc.debug 2 packages and 0 specfiles checked; 22 errors, 75 warnings. see attachment for full details > Aren't 'Requires(post): /sbin/ldconfig' and 'Requires(postun): > /sbin/ldconfig' missing? No, they are automatic if /sbin/ldconfig is set as scriptlet processor via option -p. [...] * The entire /usr tree is mispackaged: Package must not include directories /usr/include /usr/lib /usr/lib/debug /usr/share /usr/share/man /usr/share/man/man1 /usr/src /usr/src/debug and no files below /usr/src and /usr/lib/debug either. Where files below /usr/include and /usr/lib are needed (in the -devel subpackage), prefix the paths with %_includedir and %_libdir. Use %_mandir as prefix for files below /usr/share/man * It must not include /usr/lib/debug/ as those files are automatically put into the -debuginfo subpackage. * It must not include /usr/share and not anything in /usr/src either, which is another side-effect of using %_prefix/* as a bad catch-all for all files below /usr * Including static libs as plugins makes no sense. It likely loads the *.so or *.so.0 files. Perhaps the *.la, but not the *.a libs. * The %doc file "INSTALL" is irrelevant to your package users. * The %doc file "NEWS" is empty. You can remove it for now and add a guard in %prep which exists if NEWS is larger than zero. Then you can include it. * rpmlint also reports an executable .spec file. * Including the "config.h" autoheader file in the public API is dangerous. Values in it bear the risk of conflicting with any API-user that uses an own config.h file. Alright. Finally found some free time to work on this again. I have split the rpm into two packages, the rpm and a devel rpm. The links are below. Please have a look and let me know what else needs to be changed. Thanks again for all suggestions/feedback. Spec URL: RPM URL: SRPM URL: DEVEL RPM URL: > -License: GPLv2+ > +License: GPLv2 Confirmed. The source files explicitly say "LICENSE: GPL Version 2". > +%package devel > +Group: Applications/Internet That sounds wrong for the bnIRC-devel package. More likely the group is "Development/Libraries". Even if the package contained just a plugin API, there would not be a more accurate RPM Group. > -%post -p /sbin/ldconfig > - > -%postun -p /sbin/ldconfig Deleting them is not right. The previous .spec file was correct. Put them back. > -%{_datadir}/%{name}-%{version} > +%{_datadir}/%{name}-%{version}/* With this change, the directory is not included. Please revert. > +%files devel > +%defattr(-,root,root,-) > +%{_prefix}/src/* Don't include %_prefix/src. These are included in the automatically generated -debuginfo package. If that doesn't work for you, install the "redhat-rpm-config". > +%{_libdir}/bnIRC/* These are the application's plugins. They belong into the main package. > +%{_libdir}/libbnirc.a > +%{_libdir}/libbnirc.la These are not needed and must not be included. You can %exclude it or remove it in the %install section. > +%{_libdir}/libbnirc.so This is the softlink that really belongs into the -devel package. It is needed when compiling/linking with -lbnirc > +%{_libdir}/libbnirc.so.0 > +%{_libdir}/libbnirc.so.0.0.0 These two belong into the main application package. Your %changelog doesn't comment on several of the spec changes between release 3 and 5. It is good practise to document and explain non-trivial modifications. Alright, I made some changes based on the feedback and they're all available at the links below SPEC: RPM: SRPM: DEVEL RPM: eed9f0123b0695c63072eeeb37a66114 bnIRC-1.1.1-6.fc10.i386.rpm a836f791a84132e0cdc280ddf7ea8867 bnIRC-1.1.1-6.fc10.src.rpm 8f22430b1299a368a1ebc93ef75bbeb0 bnIRC-devel-1.1.1-6.fc10.i386.rpm Now that I added some of the libs to the main package I get warnings when I run rpmlint. Since they're plugins, they do belong in the main package like mentioned in the previous comment made by Michael Schwendt. Not sure if I need to do something differently to get rid of the warnings or if they can be left alone. Any and all comments are welcome. thanks. * The plugin loader evaluates the libtool .la files and dlopen()s the library with the file name found in the "dlname=" parameter, e.g. libdcc.so.0, which in turn is a symlink to libdcc.so.0.0.0 The statically linked plugins 'lib*.a' are not needed as they cannot be loaded at run-time. The plugin symlinks 'lib*.so' are not needed either. The program could be patched to simply name the plugins 'lib*.so' and dlopen() them directly instead of looking at the .la files. * Please look at "rpm --query --provides bnIRC". Currently, the plugin libraries produce several automatic SONAME Provides, which bear the risk of causing conflicts with other packages during dependency resolving: libctcp.so.0 libdcc.so.0 libdebug.so.0 libhello.so.0 libio_ncurses.so.0 libirc_input.so.0 libpython.so.0 librserver.so.0 libserver_strings.so.0 This is a blocker, even if one could show that no other Fedora package currently provides libraries with the same SONAMEs. I haven't tried that, but I could imagine packages such as "libdcc", "libctcp", "librserver", for example, with similar library sonames. The package also contains automatic "Requires" for the same library SONAMEs. The least thing that could be done is to filter these self-Provides and self-Requires out. Various docs exist, in the Wiki and on Google, but disabling rpmbuild's internal dependency generator is dangerous, and one must carefully examine the results. It would be good, if upstream could use a unique namespace for these plugins, e.g. like libbnirc_plugin_foo.so.0 > %post devel -p /sbin/ldconfig > %postun devel -p /sbin/ldconfig These are a no-op and can be deleted. The scriptlets in the main pkg are the ones that are correct and needed. > %{_libdir}/bnIRC/* Directory %{_libdir}/bnIRC is not included. Thanks for the quick reply. I'll make sure to remove the post and postun sections from the devel package. I'll also remove the asterisk from the %{_libdir}/bnIRC/* line. You say that the library names are still a major blocker for this package. I just want to make sure I understand your suggestion. If I were to talk to upstream and have them rename the libraries, that is all that would be needed for me to get this package approved? Or would I still have to jump through some hoops to have the libraries accepted? Please let me know and I'll try talking to upstream about your suggestions. Thanks again. If all the plugin libraries (and specifically their library SONAME values) were renamed to put them into a namespace that is much more specific to this application, that would make it unnecessary to filter rpmbuild's automatic Provides/Requires. The risk that any other library package would introduce a shared library with a SONAME like libbnirc_plugin_SOMETHING.so.0 would be very low. And as such I would approve that as a valid work-around. [...] Here's a run-time error: RegisterTab called added python tab hook Traceback (most recent call last): File "/usr/share/bnIRC-1.1.1/scripts/toc.py", line 15, in <module> import whrandom ImportError : No module named whrandom script error! So as I wait for upstream to make the necessary changes, I have a quick question about package naming. Since upstream is making changes to the code, they will most likely release it as 1.1.2. How should I deal with this in my rpm? Right now my rpm is called bnIRC-1.1.1-6.fc10.src.rpm. Once the newer version of source is out, would I name my rpm bnIRC-1.1.2-7.fc10.src.rpm, basically changing the software version number and incrementing the release number by 1, or do I have to start the release numbering from scratch since it's a new upstream version number? So the new name should be bnIRC-1.1.2-1.fc10.src.rpm. Just trying to prepare for when upstream makes the required changes. thanks. If you increase the %version, you can and should reset %release to 1: [Basically, package 1.1.2-1.fc10 means "the 1st release/build of version 1.1.2 for Fedora 10"...] What is the status of this bug? Currently waiting for the dev of this program to make some changes to the names of certain libraries. I have confirmation from the dev that he's working on it but life has been getting in the way and delaying the changes. As soon as I have a new version of the software, I'll make a new RPM and upload it for review. Good news. Upstream made requested changes(renaming libraries) so I had a chance to create new packages. Please review and let me know if there is anything else that needs changing. SPEC: RPM: SRPM: DEVEL: 830c2a3d2ac694ac23900f35805e8ff4 bnIRC-1.1.2-1.fc10.i386.rpm 050865e2fcf07c2bc9c8e210392231fe bnIRC-1.1.2-1.fc10.src.rpm cfb5e3af3f1f2f5403c1b4ba0381e68b bnIRC-devel-1.1.1-2.fc10.i386.rpm * It fails to build on Fedora 11: | channel.c:146: warning: conflicting types for built-in function 'log' | user.c: In function 'users_in_channel': | user.c:299: warning: passing argument 4 of 'qsort' from incompatible pointer type /usr/include/stdlib.h:710: note: expected '__compar_fn_t' but argument is of type 'int (*)(void *, void *)' | regex.c:34: error: static declaration of 'strndup' follows non-static declaration | make[1]: *** [regex.lo] Error 1 Indeed, regex.c includes <string.h> and declares its own one just a few lines further down in the file. * It doesn't adhere to the compiler flags guidelines: The flags you should see in the build log are those printed by "rpm --eval %{optflags}". Since %configure exports them (see "rpm --eval %configure"), but the bnirc source tarball doesn't accept the variables passed in from the outside, it may be necessary to apply a patch. * Issues pointed out in bottom of comment 10 are not fixed yet. hmm...I was sure I fixed the suggestions about ldconfig and the libdir. Must have got my spec files confused. Guess that's what happens when you don't use version control :/ I'll check with upstream to see what can be done about the compiler flags and I'll start testing in F11. Thanks for the review. Any update? Waiting to hear back from upstream about possible fixes/changes. Will definitely update the bug when I hear something new. Well, it's been well over a year now. Should this just be closed now? In any case it does not build properly on current rawhide: regex.c:34:14: error: static declaration of 'strndup' follows non-static declaration so I'll mark this as not building. Please clear the whiteboard if providing a version that builds. tibbs it's been 6 months, I am going to clear FE-NEEDSPONSOR (and the whiteboard) and add FE-DEADREVIEW and close this ticket.
https://bugzilla.redhat.com/show_bug.cgi?id=480103
CC-MAIN-2017-43
refinedweb
2,391
67.25
My. Having said that, I am in favour of this patch. I think we need to be cautious about adding magic properties because they have a slight negative implication, but where they have a clear benefit we should adopt them because the benefit outweighs the risk. There are other solutions to the namespace problem, anyway. We could have a reserved prefix like "__ant." that wasn't allowed for user-defined properties. Or we could have a special character that was illegal in user-defined properties and used in all system-defined ones. Your example about "dynamic" properties is interesting. I've always been in favoring of making properties immutable not by value but by meaning. When the changes were being made in the code to make properties immutable, I got as far as convincing people to keep the DSTAMP, TSTAMP, and TODAY properties "dynamic", so that they always represented the time when the <tstamp /> task was run, even if it was run many times during a build.. Domin}... > > --DD > > On Fri, May 30, 2008 at 4:21 AM, <Jan.Materne@rzf.fin-nrw.de> wrote: > >>> >> >> >> --------------------------------------------------------------------- >>
http://mail-archives.apache.org/mod_mbox/ant-dev/200806.mbox/%3C48444B9A.70202@callenish.com%3E
CC-MAIN-2016-30
refinedweb
185
62.27
NAME Create a resource object. SYNOPSIS #include <zircon/syscalls.h> zx_status_t zx_resource_create(zx_handle_t parent_rsrc, uint32_t options, uint64_t base, size_t size, const char* name, size_t name_size, zx_handle_t* resource_out); DESCRIPTION zx_resource_create() creates a resource object for use with other DDK syscalls. Resources are typically handed out to bus drivers and rarely need to be interacted with directly by drivers using driver protocols. Resource objects grant access to an address space range starting at base up to but not including base + size. Two special values for kind exist: ZX_RSRC_KIND_ROOT and ZX_RSRC_KIND_HYPERVISOR. These resources have no range associated with them and are used as a privilege check. parent_rsrc must be a handle to a resource of kind ZX_RSRC_KIND_ROOT, or a resource that matches the requested kind and contains [base, base+size*] in its range. options must specify which kind of resource to create and may contain optional flags. Valid kinds of resources are ZX_RSRC_KIND_MMIO, ZX_RSRC_KIND_IRQ, ZX_RSRC_KIND_IOPORT (x86 only), ZX_RSRC_KIND_ROOT, ZX_RSRC_KIND_HYPERVISOR, ZX_RSRC_KIND_VMEX, and ZX_RSRC_KIND_SMC (ARM only). ZX_RSRC_KIND_ROOT, ZX_RSRC_KIND_HYPERVISOR, and ZX_RSRC_KIND_VMEX must be paired with zero values for base and size, as they do not use an address space range. At this time the only optional flag is ZX_RSRC_FLAG_EXCLUSIVE. If ZX_RSRC_FLAG_EXCLUSIVE is provided then the syscall will attempt to exclusively reserve the requested address space region, preventing other resources creation from overlapping with it as long as it exists. name and name_size are optional and truncated to ZX_MAX_NAME_LENGTH - 1. This name is provided for debugging / tool use only and is not used by the kernel. On success, a valid resource handle is returned in resource_out. RETURN VALUE zx_resource_create() returns ZX_OK on success. In the event of failure, a negative error value is returned. The returned handle will have ZX_RIGHT_TRANSFER (allowing it to be sent to another process via zx_channel_write()), ZX_RIGHT_DUPLICATE (allowing the handle to be duplicated), ZX_RIGHT_INSPECT (to allow inspection of the object with zx_object_get_info() and ZX_RIGHT_WRITE which is checked by zx_resource_create() itself. RIGHTS parent_rsrc must be of type ZX_OBJ_TYPE_RESOURCE and have ZX_RIGHT_WRITE. ERRORS ZX_ERR_BAD_HANDLE the parent_rsrc handle is invalid. ZX_ERR_WRONG_TYPE the parent_rsrc handle is not a resource handle. ZX_ERR_ACCESS_DENIED The parent_rsrc handle is not a resource of either kind or ZX_RSRC_KIND_ROOT. ZX_ERR_INVALID_ARGS options contains an invalid kind or flag combination, name is an invalid pointer, or the kind specified is one of ZX_RSRC_KIND_ROOT or ZX_RSRC_KIND_HYPERVISOR but base and size are not 0. ZX_ERR_NO_MEMORY Failure due to lack of memory. There is no good way for userspace to handle this (unlikely) error. In a future build this error will no longer occur.
https://fuchsia.dev/fuchsia-src/reference/syscalls/resource_create
CC-MAIN-2020-29
refinedweb
416
55.95
16 February 2009 16:56 [Source: ICIS news] By Nigel Davis ?xml:namespace> The broad economic news gets worse by the day. And the That steep fall in industrial production will hit chemicals hard and few sectors will be immune. European majors will report over the next few weeks. The stories are likely to be similar - depressed European demand and a distinct drop in demand in emerging markets. Production plants were run at much reduced rates or idled towards the end of the fourth quarter. December was worse than expected in November. BASF CEO Jurgen Hambrecht reportedly described the chemicals outlook as “pitch black” in an interview published in the German press last week. Only agrochemicals and the life sciences offer anything approaching real immunity from recession, other segments are only more or less recession-proof. Industrial gases producer, Air Liquide, had a reasonably good story to tell on Monday – and the share price responded positively. The company did not reveal fourth quarter profits but said fourth quarter sales had risen – by 10.9% in Europe, 2.1% in North America and 4.1% in The industrial gases and engineering group was hit by the downturn in its large industries segment in North America and in speciality gases, which are sold into the electronics industry in Air Liquide has described the decline in gases demand in some cyclical sectors as “brutal” and in one of its scenarios for 2009 envisages a downturn in gases sales to cyclical industries of 30%. If there is a partial recovery in the second half the negative impact from cyclicals could be as much as 10%. The company's balanced gases services portfolio, however, should ensure that sales grow even in this extremely difficult year. Unfortunately, a balance in chemicals generally cannot guarantee much in the current environment. So many chemical end-use industries are hobbled in this recession, the impact is geographically widespread too. In a note to clients, Credit Suisse warned that the results from the European chemicals sector companies it covers are likely to be poor and potentially worse than consensus forecasts in many areas. Having said that, however, low expectations from investors and the tone of outlook statements are likely to rule the day as far as stock prices are concerned. Companies might be expected to be conservative in the extreme in their outlooks given the changing economic news and low business visibility. Downstream or specialty chemicals makers will be benefitting from lower feedstock costs volume demand will have been key. There is some evidence, but not necessarily among the group of companies left still to report, of some stock-rebuilding. Any reinforcement of the trend could indicate that some markets have seen bottom. That does not necessarily mean, however, that an upturn is imminent. Product markets can be expected to bump along the bottom of the downturn for some time. As the major
http://www.icis.com/Articles/2009/02/16/9193119/insight-next-batch-of-results-could-be-worse-than-expected.html
CC-MAIN-2014-10
refinedweb
483
53.61
Introduction Azure DocumentDB is NoSQL solution provided by Microsoft on the cloud. It falls under the ‘Document’ category. Data is stored in a JSON format. DocumentDB refers to the database entities as ‘resources’. To begin with, let’s understand the concept of a ‘resource’ in DocumentDB. One of the basic resource is a “Database account” that can have multiple capacity units that are a set of databases and blob storage. After creating a database account, the next step is to create a ‘Database’ similar to a namespace, as a logical container. A Database can have a set of ‘Collections’ that is a container for storing Documents. Documents are JSON data that represents a record. In addition to these, we also have ‘Users’ and ‘Permissions’. Users are namespaces to assign permissions and Permissions are authorized tokens to access a resource by a user. Figure 1 explains the structure of a DocumentDB account. Figure 1: The structure of a DocumentDB account The resources are further classified as a ‘system resource’ and ‘user-defined resource’. From the diagram shown in Figure 1, resources like ‘DocumentDB account’, ‘Databases’, ‘Users’,’permissions’, ‘collections’, ‘stored procedures’, ‘triggers’, and ‘UDFs’ are all system resources and they have fixed schemas. However, documents and attachments are ‘user ‘New’ button and select DocumentDB. Figure 2: Starting a new DocumentDB project Fill in ‘ID’ and the ‘Location’ and click Create; this will create a DocumentDB account. Once the account is created, it starts appearing as a tile on the home page. Click the tile to see the details. Figure 3: Viewing details Figure 4: Viewing the Keys = "[email protected]", Address = "Delhi, India" }; 2. Student Priya = new Student() { EmployeeNo = "4", Name = "Priya R", Email = "[email protected]",, ‘id’ is added to the ‘Student’ class and it’s not explicitly set in the code. We also create an ‘UpdateDocument‘ ‘UpdateDocument‘ ‘Document.
https://www.developer.com/database/using-an-azure-nosql-solution-documentdb/
CC-MAIN-2022-40
refinedweb
306
56.86
Issues ZF-4460: Resource objects passed in to ACL query methods are not passed through to registered assert()'s Description Hi, When querying an ACL, the resource objects that are passed into the query methods are not actually passed through to any asserts that are defined. Instead, their resourceId() is looked up and then the object passed in at ACL construction time is used instead. This prevents some advanced use of the assert() system. Here is a detailed description: I have classes defined in my system which implement Zend_Acl_Resource_Interface. In my case I always implement my getResourceId() as a very simple "return CLASS". I'll use an example of Articles to try and put this into context. Say I have following setup: class Article implements Zend_Acl_Resource_Interface. { public function getResourceId() { return __CLASS__; } public $author; public function __construct($articleId) { // Load article by id and populate $author. } } (Obviously it does other stuff too!!) In order to restrict editing to only the author of the article, I could implement the following Acl_Assert: class Own_Article_Assert implements Zend_Acl_Assert_Interface { public function assert( Zend_Acl $acl, Zend_Acl_Role_Interface $role = null, Zend_Acl_Resource_Interface $resource = null, $privilege = null) { if (!($resource instanceof Article)) return false; // We now know $resource is of the class "Article" $auth = Zend_Auth::getInstance(); if (!$auth->hasIdentity()) return false; return ($auth->getIdentity()->username == $resouce->author); } } Then I had an ACL something like... $acl = new Zend_Acl; $acl->addRole(new Zend_Acl_Role('User')); $acl->add(new Zend_Acl_Resource('Article')); $acl->allow('User', 'Article' 'view'); $acl->allow('User', 'Article' 'edit', new Own_Article_Assert); This should allow a user to view any article, but only allow the original author to edit it. I've used the generic Zend_Acl_Resource() object to add the resource into the Acl as I cannot configure my Acl with the real article object without instantiating it (and thus loading an article from the DB). This seems like a reasonable thing to do. Allow me to elaborate further. Let's say I then have some code: $art1 = new Article(1); // An article by current user $art2 = new Article(2); // An article by someone else echo 'View 1: '.($acl->isAllowed('User', $art1, 'view') ? 'ACK' : 'NAK')." "; echo 'View 2: '.($acl->isAllowed('User', $art2, 'view') ? 'ACK' : 'NAK')." "; echo 'Edit 1: '.($acl->isAllowed('User', $art1, 'edit') ? 'ACK' : 'NAK')." "; echo 'Edit 2: '.($acl->isAllowed('User', $art2, 'edit') ? 'ACK' : 'NAK')." "; What you'd expect here is: {{ACK, ACK, ACK, NAK.}} But what actually happens is: {{ACK, ACK, NAK, NAK.}} (sounds like Mars Attacks.... ;)) This is because the article object itself never gets passed through to the assert method and thus it fails at the "instanceof" check. This is due to the fact that the object you pass in to isAllowed is actually internally replaced by a the generic object added during ACL construction. This is done in Acl.php's get() function. (see the variable instance storage: $this->_resources[$resourceId]['instance']). I can fix this trivially with a small patch (which I'll attach) that preserves the object passed in if it is given and only uses the generic object if a string is used as the parameter. With this patch applied, my original Article object makes it through to the assert() and I can perform the necessary tests. I really hope this is a bug as, to me, I really don't see the point in using "objects" in the Acl system if the objects themselves are not going to be preserved and passed about accordingly. If the objects just represent generic strings then why not just use generic strings in the first place. Being able to pass a specific object to an assert method is what makes it powerful/useful. I hope this is understandable. If you would like me to provide code I can do. EDIT I cannot for the life of me work out how to attach a patch to this issue, so I'll just have to post it inline here... am I being thick or is there really no way to add patches etc? Seems pretty counter productive if there isn't a way.... Index: Acl.php =================================================================== --- Acl.php (revision 30213) +++ Acl.php (working copy) @@ -288,8 +288,10 @@ { if ($resource instanceof Zend_Acl_Resource_Interface) { $resourceId = $resource->getResourceId(); + $lookup = false; } else { $resourceId = (string) $resource; + $lookup = true; } if (!$this->has($resource)) { @@ -297,7 +299,7 @@ throw new Zend_Acl_Exception("Resource '$resourceId' not found"); } - return $this->_resources[$resourceId]['instance']; + return $lookup ? $this->_resources[$resourceId]['instance'] : $resource; } /** Posted by Colin Guthrie (coling) on 2008-10-15T04:26:49.000+0000 There are similarities in these two bug reports with regards to the actual object that is passed to the assert(). Overall the usefulness of asserts are IMO restricted by these two issues. Posted by Colin Guthrie (coling) on 2008-10-15T04:31:39.000+0000 Again these are similar bugs. With regards to Resources, I feel that my fix in ZF-4460 is sufficient, I have not looked specifically at Roles. I suspect my fix applies better as the code in Zend_Acl has moved on. Posted by Colin Guthrie (coling) on 2008-10-15T04:32:47.000+0000 Sorry I chose the wrong kind of link :s Again these are similar bugs. With regards to Resources, I feel that my fix in ZF-4460 is sufficient, I have not looked specifically at Roles. I suspect my fix applies better as the code in Zend_Acl has moved on. Posted by Roger Hunwicks (rhunwicks) on 2008-10-24T07:46:50.000+0000 Colin, thanks for submitting such a clear write-up of this bug. I think this bug is extremely important, as it is impossible to write non-trivial assertions if you can get access to the methods of the resource you are considering. My use case is exactly the same as yours (a requirement to restrict people to records that belong to them), and I suspect that this is a very common scenario. As an alternative to changing isAllowed(), we are overriding get() by telling it to only return the resource from the registry if the $resource parameter doesn't implement Zend_Acl_Resource_Interface: Posted by Roger Hunwicks (rhunwicks) on 2008-10-24T07:48:21.000+0000 Sorry, critical spelling error above! I think this bug is extremely important, as it is impossible to write non-trivial assertions if you CAN'T get access to the methods of the resource you are considering. Posted by Roger Hunwicks (rhunwicks) on 2008-10-24T07:52:05.000+0000 Finally, I think that whichever fix is applied, it should also be applied to Zend_Acl_Role_Registry->get() so that the code is consistent. Posted by Colin Guthrie (coling) on 2008-10-24T08:17:28.000+0000 Yeah the solution I had was a little messy with the inline if statement etc.. I think this is the optimal version for neatness/compactness/clarity: Posted by Colin Guthrie (coling) on 2008-10-24T08:19:08.000+0000 Erm, scratch that last comment. I see the by looking up the instance before checking ->has() is clearly insane. I'll stop to stop smoking so much crack. :s Posted by Wil Sinclair (wil) on 2009-01-14T13:31:40.000+0000 Assigning to Ralph to get closure on this issues. Posted by Stefan Gehrig (sgehrig) on 2009-06-15T01:54:28.000+0000 Anything new on this one? As some other posters wrote this is some major blocker when working with assertions. Posted by Ralph Schindler (ralph) on 2009-07-30T19:03:04.000+0000 A fix is in place in trunk at r17317 for ZF-1721 and ZF-1722 which I think might address this issue, please test. Thanks! Posted by Colin Guthrie (coling) on 2009-08-17T08:57:34.000+0000 Hi Ralph, I'd like to confirm this has indeed fixed my test case. That said, when the get() method is called and a resource is passed in, that same resource object will not be passed out again. Is this the intended behaviour for a public method? See my inline patch at the end of the description of this topic which fixes the get() method.. The resource object passed in to the get() method is preserved if it is already a resource. In the current 1.9.1 implementation, if a resource object is passed in, the resource object with the same identifier registered in the ACL is what is returned, not the original resource. This is arguably not needed if this is just the intended behaviour of this public function (and changing it is admittedly an API breakage), but perhaps this should be rethought for 2.0 and changed? I'd welcome you're opinion on the matter. :) Thanks for the fix. Posted by Ralph Schindler (ralph) on 2009-08-17T09:07:23.000+0000 I originally tried going the same route, but it gets you to (like you said) a place where there is bc breakage. Even if its not intended behavior, its what we have. I think there are loads of refactoring we can do for 2.0 time. First is that we should tree ACL's more like Trees and nodes, thus utilizing RecursiveIterator and RecursiveIteratorIterator. There should also be a focus on explict roles/resources and types of resources/roles, both use cases being supported. This means using strings for types, and exact objects for explicit role/resource registration and checking. If you have other thoughts, I suggest you start a Zend_Acl 2.0 page on the ZFDEV wiki :) -ralph
http://framework.zend.com/issues/browse/ZF-4460?focusedCommentId=33528&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2014-52
refinedweb
1,565
65.52
Parent-Child component: Meaning An angular application is made up of components and often components are nested inside one-another. Example, suppose you have a table of rows on a page. In this simple structure we can have 2 components: A table component which represents the entire table and a component which represents a single row of the table. If we have this kind of structure, then table will contain row component and thus table component will be the Parent component and row component will be the Child component. In simple words, when one component contains another component, then the components are said to be nested. When the components are nested, the components which contains other component is called Parent component and the contained component is called Child component. How to pass data from Parent to Child Component There are two methods to pass values from parent component to child component, one is using input properties and another is by utilizing a service. Both of these are explained in detail below. Method 1: Using Input properties An input property is a field(or member) of the component class which is annotated with @Input() decorator. Its main purpose is to receive a value sent from the parent component. Example, consider the below component class referred as ChildComponent which has an input property named message as shown. import { Component, OnInit, Input } from ‘@angular/core’; @Component({ selector:’child‘, templateUrl:’./child.component.html‘, styleUrls: [‘./child.component.css‘] }) export class ChildComponent { @Input() message: string; } HTML template for this component is given below. This HTML template displays the value of its property which is an input property. Message sent from Parent is : {{messageFromParent}} Notice the use of {{ and }} around the property name. This is called interpolation and it displays the value of the property. Now let us define the Parent component for the above component. import { Component, OnInit } from ‘@angular/core’; @Component({ selector: ‘parent‘, templateUrl: ‘./parent.component.html‘, styleUrls: [‘./parent.component.css‘] }) export class ParentComponent implements OnInit { messageToChild: string ngOnInit() { this.messageToChild = “How are you?”; } } Parent component has one property which contains the message to be sent to the child or the value which will be sent to the input property of the child. HTML template of the parent will look like Notice how the value of input property of child component is set from the parent component. Input property of the child will be on left side while property of the parent will be at the right side. When this application is deployed and run, it shows the following output on the browser. Calling child with message Message sent from Parent is : How are you? Look, the value sent by parent is successfully received by the child. Method 2: Using a Service A service is a special class which is generally used to fetch data from server and to share data between components. A service is not tied to any component or module but it can be used by multiple components. If you are familiar with any programming language that uses classes such as C++, Java or C#, then a service may be compared to a utility class containing static methods which can be used by multiple classes. A service in angular is annotated with @Injectable annotation and it is directly injected into a component. If you want to learn about an angular service in depth, then refer this link. Below is a service which contains a field to store a string and methods to provide value to this field and to read value of this field. import { Injectable } from ‘@angular/core’; // field which stores a string private message:string; } Have a look at the modified parent and child components. Both parent and child components utilize the same service which is injected into them via their constructors. Parent calls the setMessage method of the service to set the message it wants to send to child in the message field of the service and child calls the readMessage method of the service to read the value of message field. Note that child now does not contain any input property. Typescript for Parent component import { Component, OnInit } from ‘@angular/core’; @Component({ selector: ‘parent‘, templateUrl: ‘./parent.component.html‘, styleUrls: [‘./parent.component.css‘] }) export class ParentComponent { private service; // service injected constructor(codippaService: CodippaService){ this.service=codippaService; this.sendMessage(); } } Typescript for Child component import { Component } from ‘@angular/core’; @Component({ selector:’child‘, templateUrl:’./child.component.html‘, styleUrls: [‘./child.component.css‘] }) export class ChildComponent { private message:string; // service injected } // call service method to read message } Modified HTML template for parent is given below. Parent now simply calls the child component. HTML template for child remains the same as earlier. Output on the browser is which is same as earlier. Calling child with message Message sent from Parent is : How are you? Again, the value sent by the parent is successfully received by the child. Comparing Both Methods When comparing both the above methods, as which to use when, then there is a simple logic. If the components which want to share data exhibit a parent-child relation, then you should go with method 1, that is, using input properties and not with method 2. This is because a service is primarily designed to contain methods that are common among components and to perform utility tasks while input properties are mainly designed to receive value from parent components. Method 2 should be used when there is no parent-child relationship between components sharing data or the components are unrelated to each other. If they possess a relation, then go with input properties. Also, method 1 can only be used to pass data from parent to child while method 2 can also be used to pass data either way(both from parent to child and from child to parent).
https://codippa.com/passing-data-from-parent-to-child-component-in-angular/
CC-MAIN-2019-04
refinedweb
965
53
I'm working on a python script that starts several processes and database connections. Every now and then I want to kill the script with a Ctrl+C signal, and I'd like to do some cleanup. In Perl I'd do this: $SIG{'INT'} = 'exit_gracefully'; sub exit_gracefully { print "Caught ^C \n"; exit (0); } How do I do the analogue of this in Python? Register your handler with signal.signal like this: #!/usr/bin/env python import signal import sys def signal_handler(sig, frame): print('You pressed Ctrl+C!') sys.exit(0) signal.signal(signal.SIGINT, signal_handler) print('Press Ctrl+C') signal.pause() More documentation on signal can be found here. You can treat it like an exception (KeyboardInterrupt), like any other. Make a new file and run it from your shell with the following contents to see what I mean: import time, sys x = 1 while True: try: print x time.sleep(.3) x += 1 except KeyboardInterrupt: print "Bye" sys.exit()
https://pythonpedia.com/en/knowledge-base/1112343/how-do-i-capture-sigint-in-python-
CC-MAIN-2020-16
refinedweb
162
67.04
Jay Taylor's notesback to listing index Nex7's Blog: ZFS: Read Me 1st[web search] Thursday, March 21, 2013 ZFS: Read Me 1st Things Nobody Told You About ZFS Foreword Latest update 9/12/2013 - Hot Spare, 4K Sector and ARC/L2ARC sections edited, note on ZFS Destroy section, minor edit to Compression section. There are a couple of things about ZFS itself that are often skipped over or missed by users/administrators. Many deploy home or business production systems without even being aware of these gotchya's and architectural issues. Don't be one of those people! I do not want you to read this and think "ugh, forget ZFS". Every other filesystem I'm aware of has many and more issues than ZFS - going another route than ZFS because of perceived or actual issues with ZFS is like jumping into the hungry shark tank with a bleeding leg wound, instead of the goldfish tank, because the goldfish tank smelled a little fishy! Not a smart move. ZFS is one of the most powerful, flexible, and robust filesystems (and I use that word loosely, as ZFS is much more than just a filesystem, incorporating many elements of what is traditionally called a volume manager as well) available today. On top of that it's open source and free (as in beer) in some cases, so there's a lot there to love. However, like every other man-made creation ever dreamed up, it has its own share of caveats, gotchya's, hidden "features" and so on. The sorts of things that an administrator should be aware of before they lead to a 3 AM phone call! Due to its relative newness in the world (as compared to venerable filesystems like NTFS, ext2/3/4, and so on), and its very different architecture, yet very similar nomenclature, certain things can be ignored or assumed by potential adopters of ZFS that can lead to costly issues and lots of stress later. I make various statements in here that might be difficult to understand or that you disagree with - and often without wholly explaining why I've directed the way I have. I will endeavor to produce articles explaining them and update this blog with links to them, as time allows. In the interim, please understand that I've been on literally 1000's of large ZFS deployments in the last 2+ years, often called in when they were broken, and much of what I say is backed up by quite a bit of experience. This article is also often used, cited, reviewed, and so on by many of my fellow ZFS support personnel, so it gets around and mistakes in it get back to me eventually. I can be wrong - but especially if you're new to ZFS, you're going to be better served not assuming I am. :) 1. Virtual Devices Determine IOPS 2. Deduplication Is Not Free. 3. Snapshots Are Not BackupsThis is critically important to understand. ZFS has redundancy levels from mirrors and raidz. It has checksums and scrubs to help catch bit rot. It has snapshots to take lightweight point-in-time captures of data to let you roll back or grab older versions of files. It has all of these things to help protect your data. And one 'zfs destroy' by a disgruntled employee, one fire in your datacenter, one random chance of bad luck that causes a whole backplane, JBOD, or a number of disks to die at once, one faulty HBA, one hacker, one virus, etc, etc, etc -- and poof, your pool is gone. I've seen it. Lots of times. MAKE BACKUPS. 4. ZFS Destroy Can Be Painful Something often waxed over or not discussed about ZFS is how it presently handles destroy tasks. This is specific to the "zfs destroy" command, be it used on a zvol, filesystem, clone or snapshot. This does not apply to deleting files within a ZFS filesystem (unless that file is very large - for instance, if a single file is all that a whole filesystem contains) or on the filesystem formatted onto a zvol, etc. It also does not apply to "zpool destroy". ZFS destroy tasks are potential downtime causers, when not properly understood and treated with the respect they deserve. Many a SAN has suffered impacted performance or full service outages due to a "zfs destroy" in the middle of the day on just a couple of terabytes (no big deal, right?) of data. The truth is a "zfs destroy" is going to go touch many of the metadata blocks related to the object(s) being destroyed. Depending on the block size of the destroy target(s), the number of metadata blocks that have to be touched can quickly reach into the millions, even the hundreds of millions. If a destroy needs to touch 100 million blocks, and the zpool's IOPS potential is 10,000, how long will that zfs destroy take? Somewhere around 2 1/2 hours! That's a good scenario - ask any long-time ZFS support person or administrator and they'll tell you horror stories about day long, even week long "zfs destroy" commands. There's eventual work that can be done to make this less painful (a major one is in the works right now) and there's a few things that can be done to mitigate it, but at the end of the day, always check the actual used disk size of something you're about to destroy and potentially hold off on that destroy if it's significant. How big is too big? That is a factor of block size, pool IOPS potential, extenuating circumstances (current I/O workload of the pool, deduplication on or off, a few other things). 5. RAID Cards vs HBA's 6. SATA vs SAS 7. Compression Is Good (Even When It Isn't) 8. RAIDZ - Even/Odd Disk Counts 9. Pool Design Rules -) (5 is a typical average). - For raidz2, do not use less than 6 disks, nor more than 10 disks in each vdev (8 is a typical average). - For raidz3, do not use less than 7 disks, nor more than 15 disks in each vdev (13 & 15 are typical average). - Mirrors trump raidz almost every time. Far higher IOPS potential from a mirror pool than any raidz pool, given equal number of drives. Only downside is redundancy - raidz2/3 are safer, but much slower. Only way that doesn't trade off performance for safety is 3-way mirrors, but it sacrifices a ton of space (but I have seen customers do this - if your environment demands it, the cost may be worth it). - For >= 3TB size disks, 3-way mirrors begin to become more and more compelling. - Never mix disk sizes (within a few %, of course) or speeds (RPM) within a single vdev. - Never mix disk sizes (within a few %, of course) or speeds (RPM) within a zpool, except for l2arc & zil devices. - Never mix redundancy types for data vdevs in a zpool (no raidz1 vdev and 2 raidz2 vdevs, for example) - Never mix disk counts on data vdevs within a zpool (if the first data vdev is 6 disks, all data vdevs should be 6 disks). - If you have multiple JBOD's, try to spread each vdev out so that the minimum number of disks are in each JBOD. If you do this with enough JBOD's for your chosen redundancy level, you can even end up with no SPOF (Single Point of Failure) in the form of JBOD, and if the JBOD's themselves are spread out amongst sufficient HBA's, you can even remove HBA's as a SPOF. 10. 4KB Sector Disks There are a number of in-the-wild devices that are 4KB sector size instead of the old 512-byte sector size. ZFS handles this just fine if it knows the disk is 4K sector size. The problem is a number of these devices are lying to the OS about their sector size, claiming it is 512-byte (in order to be compatible with ancient Operating Systems like Windows 95); this will cause significant performance issues if not dealt with at zpool creation time. 11. ZFS Has No "Restripe" 12. Hot Spares For a bit of clarification, the main reasoning behind this has to do with the present method hot spares are handled by ZFS & Solaris FMA and so on - the whole environment involved in identifying a failed drive and choosing to replace it is far too simplistic to be useful in many situations. For instance, if you create a pool that is designed to have no SPOF in terms of JBOD's and HBA's, and even go so far as to put hot spares in each JBOD, the code presently in illumos (9/12/2013) has nothing in it to understand you did this, and it's going to be sheer chance if a disk dies and it picks the hot spare in the same JBOD to resilver to. It is more likely it just picks the first hot spare in the spares list, which is probably in a different JBOD, and now your pool has a SPOF. Further, it isn't intelligent enough to understand things like catastrophic loss -- say you again have a pool setup where the HBA's and JBOD's are set up for no SPOF, and you lose an HBA and the JBOD connected to it - you had 40 drives in mirrors, and now you are only seeing half of each mirror -- but you also have a few hot spares in that JBOD, say 2. Now, obviously, picking 2 random mirrors and starting to resilver them from the hot spares still visible is silly - you lost a whole JBOD, all your mirrors have gone to single drive, and the only logical solution is getting the other JBOD back on (or if it somehow went nuts, a whole new JBOD full of drives and attach them to the existing mirrors). Resilvering 2 of your 20 mirror vdevs to hot spares in the still-visible JBOD is just a waste of time at best, and dangerous at worst, and it's GOING to do it. What I tend to tell customers when the hot spare discussion comes up is actually to start with a question. The multi-part question is this: how many hours could possibly pass before your team is able to remotely login to the SAN after receiving an alert that there's been a disk loss event, and how many hours could possibly pass before your team is able to physically arrive to replace a disk after receiving an alert that there's been a disk loss event? The idea, of course, is to determine if hot spares are seemingly required, or if warm spares would do, or if cold spares are acceptable. Here's the ruleset in my head that I use after they tell me the answers to that question (and obviously, this is just my opinion on the numbers to use): - Under 24 hours for remote access, but physical access or lack of disks could mean physical replacement takes longer - Warm spares - Under 24 hours for remote access, and physical access with replacement disks is available by that point as well - Pool is 2-way mirror or raidz1 vdevs - Warm spares - Pool is >2-way mirror or raidz2-3 vdevs - Cold spares - Over 24 hours for remote or physical access - Hot spares start to become a potential risk worth taking, but serious discussion about best practices and risks has to be had - often is it's 48-72 hours as the timeline, warm or cold spares may still make sense depending on pool layout; > 72 hours to replace is generally where hot spares become something of a requirement to cover those situations where they help, but at that point a discussion needs to be had on customer environment that there's a > 72 hour window where a replacement disk isn't available 13. ZFS Is Not A Clustered Filesystem 14. To ZIL, Or Not To ZIL So with that explained, the real question is, do you need to direct those writes to a separate device from the pool data disks or not? In general, you do if one or more of the intended use-cases of the storage server are very write latency sensitive, or if the total combined IOPS requirement of the clients is approaching say 30% of the raw pool IOPS potential of the zpool. In such scenarios, the addition of a log vdev can have an immediate and noticeable positive performance impact. If neither of those is true, it is likely you can just skip a log device and be perfectly happy. Most home systems, for example, have no need of a log device and won't miss not having it. Many small office environments using ZFS as a simple file store will also not require it. Larger enterprises or latency-sensitive storage will generally require fast log devices. 15. ARC and L2ARC One of ZFS' strongest performance features is its intelligent caching mechanisms. The primary cache, stored in RAM, is the ARC (Adaptive Replacement Cache). The secondary cache, typically stored on fast media like SSD's, is the L2ARC (second level ARC). Basic rule of thumb in almost all scenarios is don't worry about L2ARC, and instead just put as much RAM into the system as you can, within financial realities. ZFS loves RAM, and it will use it - there is a point of diminishing returns depending on how big the total working set size really is for your dataset(s), but in almost all cases, more RAM is good. If your use-case does lend itself to a situation where RAM will be insufficient and L2ARC is going to end up being necessary, there are rules about how much addressable L2ARC one can have based on how much ARC (RAM) one has. 16. Just Because You Can, Doesn't Mean You Should It is very rare for a company to need 1 PB of space in one filesystem, even if it does need 1 PB in total space. Find a logical separation and build to meet it, not go crazy and try to build a single 1 PB zpool. ZFS may let you, but various hardware constraints will inevitably doom this attempt or create an environment that works, but could have worked far better at the same or even lower cost. Learn from Google, Facebook, Amazon, Yahoo and every other company with a huge server deployment -- they learned to scale out, with lots of smaller systems, because scaling up with giant systems not only becomes astronomically expensive, it quickly ends up being a negative ROI versus scaling out. 17. Crap In, Crap Out 54 comments: This is great! When you have a slog, how do you decide pool spindle count to maximize the use of the slog. I have always used mirrors, but my math says to take advantage of a high performance slog, that I would want lots of spindles.Reply My slogs do 900MB/sec, therefore don't I want a pool that does 900MB/sec, which is 20+ vdevs. That answer is really pretty specific on the workload of the pool itself. Much of the time, the slog devices are there to speed up the pool by offloading the ZIL traffic - and as an added benefit, reducing write latency from a client perspective.Reply I almost always am looking at slog devices from an IOPS perspective first and foremost, and a throughput potential as a distant or even non-existent second (depends on the environment). Often a pool that can do 2.4 GB/s in a large-block sequential workload can't do anywhere near that at 4K random read/write request sizes (indeed, that's some 620,000 IOPS) -- and the client is doing exactly those, so suddenly all the interest is in IOPS and little time is spent worrying about throughput. In a pure throughput workload, things can and should be a bit different. And in ZFS, they are. For instance, ZFS has built-in mechanics for negating normal ZIL workflow if the incoming data is a large-block streaming workload. It can opt to send the data straight to the disk, bypassing any slog device (well, bypassing the ZIL entirely, really, and thus the slog device). There could be a whole post at some point on the varying conditions and how ZFS deals with each, I think. You've got 'logbias' on datasets (good writeup here: ). And even on latency, there's some code to deal with limits, I believe. Take a look at: , or the ZFS On Linux guys (dechamps, specifically) has a pretty good write-up on this at . I liked the oracle article the best, thanks for the feedback. My scenario is different then theirs however. My specific workload is a VDI implemnation with 80/20 r/w bias. I cannot seem to get a diskpool to get to performance levels to match the hardware I think. I have 22 spindle 10K mirrored pools with a ram based slog. The slog is rated at 90K iops and 900MB/sec.Reply Wouldn't zpool iostat show under ideal conditions 22 * 50MB/s = 1100 MB/sec or near there? Best I can get is 300 MB/sec. I am just trying to explain the gap. Zpool iostat shows peaks of 42K iops which is great, but never very high MB/sec. When the system is not busy, I would think that a file copy would reach the speed of the slog at least or at least double what the readings are at 300MB/sec. Nobody seems to use zpool iostat for performance data. iostat seems to be the tool of choice, but I don't have that data compiled over time like I do for zpool iostat. So would taking my 22 spindle 10k mirrored pool to a 44 spindle mirrored pool, which a little bigger than what oracle pushed at spec.org here: I should see my numbers go up closer to the limits of my slog right? So, 'rated at' and 'capable of' are always two different things. However, more importantly, 'capable of when used as a ZFS log device' is a whole new ballgame.Reply Manufacturers tend to provide numbers that show them in the most favorable light -- and even third-party analysis websites focus on typical use-cases: database, file transfer, those sorts of things. ZIL log device traffic is something every device fears - synchronous single-thread I/O. Your device may be capable of 90,000 IOPS @ 4K block size with 8, 16, 32, or more threads.. and anywhere from 4 to 64 threads is likely what both they and third-party websites run tests at -- but what can it do at 1 thread, at the average block size of your pool datasets? Because that's what a log device will be asked to do. :) As for SPECfs - I tend to, well, ignore that benchmark entirely. What it is testing isn't particularly real-world applicable, especially since vendors tend to game the system. For instance, you mention 44 spindle mirrors - no, in that test, the Oracle system had *280* drives, which they split up into 4 pools, each containing 4 filesystems, which were then tested in aggregate I believe. I also believe the data amount tested was significantly less than the pool size, and various other tunings were likely done as well. This picture gives some idea as to how big that system was: Even pretending you had the specific tunings, and ignoring for a moment its not particularly fair to just 'divide down' to get an idea for what a smaller system could do, doing so puts your 22 spindle 10K mirrored pool at about 14K iops, on the same benchmark. I generally want to see both iostat and zpool iostat; they're very different in what they're reporting, as they're reporting on different layers. Sometimes the combination of both gives hints that one or the other would not alone provide. I suspect with a 'VDI' implementation you're probably running 4-32K block size, and at that, I'd be happy with a peak of 42K iops out of 22 10K disks.. indeed, that's way past what you should realistically expect out of the drives, most of that 42K is coming out of ARC and an 80%+ read workload. Were I just gut feeling, I'd suspect you to get much less at times. This sort of performance work is time-consuming and involves a ton of variables. However, it is important to note that the log device is not some sort of write cache -- that's your RAM. The log device's job is to take the ZIL workload off the data pool. The performance benefit of that is purely in that the pool devices now have all those I/O they were spending on ZIL back. If there's any further benefit, its just that luck of the draw that the incoming writes were 'redundant' (they were writing some % of the same blocks multiple times within a txg, allowing ZFS to effectively ignore all but the last write that comes in when it lays it out on the spinning media). The pain that spinning disks feel from ZIL traffic cannot be understated. However, the streaming small-block performance of the spinning media minus the serious pain of interjecting the random read that gets past the ARC is, at the end of the day, the actual write performance the pool is capable of -- not what the log device can do at all. In super streaming workloads, sometimes, the log devices end up being the bottleneck. However, in almost all VM/VDI deployments I've seen, the log device is not your bottleneck - your drives are. :) Therefore, is going from a 22 disk mirror to a 44 disk mirror bad? How may vdevs are too many vdevs? The spec test, which I get was tuned, 280 disks, 2 controllers, leaves 140 disks per controller. 4 slogs, mean 4 mirrored pools, therefore they used 35 spindles. But you say that lots of spindles is bad.Reply The slog I have is a STEC ZeusRAM. I discovered them in the Nexenta setup from VMworld 2011 (I have a diagram of it also), which is what I have been trying to replicate ever since. Since I have 100 of these 10K drives and JBODS to go with them, I am trying to figure out how to get the best out of them for a VDI deployment. So far I have only tried 22 spindles and I was thinking 44 would be better. Lots of $$ in equipment and consultants plus gobs and gobs of wasted time still has me scratching my head. No no, more spindles is usually better up to a point. I don't start to worry about spindle counts until it is up into the 100's. However, remember the Oracle box got the 200K+ IOPS only from 280 spindles - at 44, you're at a small fraction of that.Reply Your box will perform twice as well as your 22-disk mirror system does, assuming no part in the system hits a bottleneck (which is going to happen if you've insufficient RAM, CPU, network, etc), and it is properly tuned(!). I would not expect, on a properly tuned system, in an 8-32K average block size VDI-workload, for 44 drives in mirror pool to be able to outperform a single STEC ZeusRAM (eg: I wouldn't expect it to be your bottleneck, from an IOPS perspective). I would expect the ZeusRAM to bottleneck you on a throughput test - or if your average blocksize is 32K or greater (getting ever more likely up to 128K). Its IOPS potential is not 90,000 at 4K, nor at 8K, 32K, or 128K (each of which is worse than the previous), because ZIL traffic is single-threaded, unlike most benchmarks you'd cite when saying how fast a device is. I love ZeusRAM, and I recommend them on every VM/VDI deployment I'm involved with and commend you on their use; but while they are in fact the very best device you could possibly use, it is not like they can't limit you, they are not of unlimited power. Still, again, if you're at 16K or under average block size, I'd suspect your pool (22 or 44 drives) to run out of IOPS, first. What block size are you using? What protocol (iSCSI, NFS)? Is this a NexentaStor-licensed system, or a home grown (and if so, what O/S & version)? That will matter in terms of where you can go for performance tuning assistance - because it needs some, unless you've already started down that path? I'm unaware of a single ZFS-capable O/S whose default tuneables for ZFS will well suit a high-IOPS VDI workload. The spinning disks are very likely underutilized. I am running Solaris 11 because the Nexenta resellers I reached out to were too busy to get back with me I guess because they never did. So I just started buying what made sense to me. If any of you out there are reading this.. look at what you missed. Sorry I wasn't interesting enough! I have 100 SAS 10K spindles, 2 Stec's, 2 DDRDrives, 2 256GB w/10Gbe Servers. Tried a 60 disk pool but someone told me it was too big, so now I have 22. Your nugget of vdev's are for I/O was worth the price of admission. I learned this, you can never have enough RAM, ever. All in all its been such a letdown because of the $$ spent and the results achieved.Reply Sorry they never got back to you. Doubly so since that precludes the option of contacting Nexenta to do a performance tuning engagement. :(Reply Also sorry the performance has seemed underwhelming - this is one of the current problems with ZFS go-it-on-your-own, is that there's just such a dearth of good information out there on sizing, tuning, performance gotchya's, etc - and the out of box ZFS experience at scale is quite bad. What information does exist is often in mailing lists, hidden amongst a lot of other, bad advice. I'm hoping to try to fix that as best I can with blog entries on here, but time I have to spend on this is erratic, and some of these topics are nearly impossible to address fully in a few paragraphs on a blog post, I'm afraid. 60 disk is most assuredly not 'too big'. Average Nexenta deployment these days I'd say is probably around 96 disks per pool, or somewhere thereabouts. If you don't mind people poking around on the box via SSH (and it is in a place where that's possible), email me (nexseven@gmail.com) to work out login details, and I can try to find some off time to take a peek at it. I dropped you a note last weekend, but maybe your on spring break like I have been. I was thinking of just adding another jbod of 24 disks to the exiting pool, creating new zfs datasets, then copy the exiting data to them to spread it around the new disks. Go from 22 to 44 spindles. The whole operation should only take a few hours. Currently when I do zpool iostat I see maybe 1-2k ops/s with a high of 4k. What I don't like is the time to clone VM's, the max MB/s I get is around 500-550 and doubling the spindles would double that .. correct?Reply Also.. how many minutes/seconds should a RSF1 with 96 disks take to fail over? I am curious what I would expect. Oops, email lost in the clutter. I've responded.Reply It would very likely double your IOPS count, but not potentially double your throughput count, since there's more bottleneck concerns to consider there. I assume you're using NFS -- you might (and it IS beta, so bear that in mind) be interested in this: - we're in beta on the NFS VAAI plugin. I say that because you mentioned tasks like VM cloning and such, and NFS VAAI support could have a serious impact on certain VM image manipulation tasks in VMware when backed by NexentaStor. Possibly worth looking at (though again -- beta, probably not good for production, yet). The goal of RSF-1 is to fail over in the shortest safe time possible. I've seen failovers take under 20 seconds. That said, I've also seen them take over 4 minutes (which isn't bad when you put it in context -- at my last job, my Sun 7410 took *15 minutes* to fail over). There's a number of factors involved. Number of disks is one, number of datasets (zvols & filesystems) is another. In general I recommend people expect 60-120 seconds, which is why I have the blog post up on VM Timeouts and suggest at least 180 second timeout values everywhere (personally I use higher than even that, as I see no reason to go read-only when I know the SAN will come back *some day*). What about Zpool fragmentation? That seems to be another issue with ZFS that you don't see much discussion about. As your pools get older, they tend to get slower and slower because of the fragmentation, and in the case of a root filesystem on a zpool, that can even mean that you can't create a new swap device or dump device because there is no contiguous space left. Zpools really need a defrag utility. Today the only solution is to create a new pool and migrate all your data to it.Reply A related issue is that there are no tools to even easily check the pool fragmentation. Locally, we estimate the fragmentation based on the output of "zdb -mm", but even that falls down when you have zpools that are using an "alternate root" (for example in a zone in a cluster). "zpool list" sees those pools fine, but zdb does not. Are you aware of any work being done on solutions to those issues?Replies BK: Fragmentation does remain a long-term problem of ZFS pools. The only real answer at the moment is to move the data around -- eg: zfs send|zfs recv it to another pool, then wipe out the original pool and recreate, then send back. The 'proper' fix for ZFS fragmentation is known -- it is generally referred to as 'block pointer rewrite', or BPR for short. I am not presently aware of anyone actively working on this functionality, I'm afraid. For most pools, especially ones kept under 50-60% utilization that are mostly-read, it could be years before fragmentation becomes a significant issue. Hopefully by then, a new version of ZFS will have come along with BPR in it. Ok I have a quick question and I'll include specific info below the actual question just in case you need it: I have a home "all-in-one" ESX/OpenIndiana ZFS machine. Right now I have a 8 disk RAIDz2 array with 8 2TB drives. 2 Samsung, 2 WD, 2 Seagate and 2 Hitachi drives (just worked out that way). The two Hitachi drives are 7200RPM the rest are 5400-5900RPM drives. 6 of them are 4k and I think the two hitachi's are "regular" 512byte drives. I want to know if I'm making a terrible mistake mixing these drives? I don't mind "loosing" the performance of those 7200RPM drives over the 5400 ones I just don't want data risk due to that. I could probably find someone who would happily trade those 7200RPMs for 5400s but if I can leave it as is I would prefer that.Reply Second, I have pulled a spare 2TB 5400rpm WD green from a external case and was going to put it in as a hotspare. Would I be better off just rebuilding the array as a z3 instead (or tossing it into the z2 array for a "free" 2TB?) Or leaving it in a box and keeping it for when something dies? (BTW) This question *might* have a different answer after you read my specs below. Specs: Supermicro dual CPU MB (2x L5520 Xeons) with 48GB of ECC registered samsung ram. 4 PCI-E 8x slots, 3 PCI-E 8x (4x electrical) slots. 1 LSI 3081E-R 3gb/s HBA, 1 M1015 6gb/s HBA, 2 (incoming, not installed yet) 3801E 3gb/s HBAs (for the drives soon to go into my DIY DAS), 1x Mellanox DDR Infiniband card. Drives: the 8 2TB drives previously mentioned in a z2 pool and 8 300GB raptor 10kRPM in a 8 disk RAID10 array for putting VMs on (overkill honestly, but fun). Right now its one pool on one card, one pool on the other. SOFTWARE: ESXi 4.1U3, Open Indiana VM with 24GB of ram handed over from ESXi. My future plans were to use 5 1TB drives I had laying around to create another pool. My pool ideas were raidz1 with all 5, raidz2 with all 5, or RAID10 with 6 (and locate/buy a 6th 1TB drive). Given that I was going to have this second pool, possibly setup with RAIDz1 I was seriously considering using that 9th 2TB drive as a global hotspare so it could be pulled into either pool (even the pool with the 1TB drives right?). And even more bizzare an idea could I also flag that 2TB drive to be used as a spare for the RAID10 raptors? Obviously it would impact performance if it got "called-to-duty" but it would protect the array until I could source a new 10k drive right? If thats a really stupid idea then I'll just order another 10k drive this weekend and toss it in as a spare for the mirror set, or if you prefer, sit on it in a box and swap it in when something actually dies (which saves power so I'm ok with it). Right now the data that is considered "vital" and business important (my wifes small business) is sitting on the 8 disk Z2 array, and two different physical locations elsewhere in the house, *and* backed up on tape. Regardless how we setup the ZFS pools all important data will be residing on the ZFS box *AND* *TWO SEPARATE* other locations/machines/drives (IE, it will always be on 3 separate machines on at least 3 separate drives. The array with the 1TB drives will be housing TV shows and Movies that can be lost without many tears so while I'd prefer not to lose it, its not *vital* like the array with the 2TB disks. I would be open to changing the *vital* array/pool to a RAID10 or RAIDz3 if you believe its worthwhile considering my requirements. Thanks for any help you feel like giving! :) -BenReplies Let me see if I can get through all this! 1) Mixing RPM's -- generally speaking, don't do it. You'll only go as fast as the slowest disk in the pool. If you don't mind that as well as the occasional performance hiccup as well, maybe that's fine. I suspect it would perform worse than you might even expect - it isn't a situation I've spent any time investigating, and it isn't something ZFS has specifically set out to work well at. 2) Mixing 4K & non-4K disks -- I've not spent any real time thinking this one out or seen it, but in general I'd also suggest not doing it if you can avoid it. It would also be pretty important that all the disks were appearing the same, even if some technically aren't the same. However, of note -- it sounds like what you actually have are 4K disks that REPORT as 512. There are a few of these running around, and they lead to really terrible performance. You can Google around (or maybe I'll see about writing up an entry for this at some point) about zpool 'ashift'. 3) I'd throw the extra disk into the existing z2, making it a z3, if its uptime is vital. You'll find z2/z3 slightly more failure-resistant than the same disks in a raid10, so I don't recommend that, at least. You could in theory use it as a global spare (though I don't like hotspares, just warm or cold spares), though putting it into service paired to a 10K disk would indeed lead to very odd performance issues and wouldn't generally be something I'd recommend (I tend to be conservative and cautious when it comes to storage, though). Andrew,Reply I'm curious about your following rules. Could you be so kind as to point me to some further information on these? Also shouldn't the number of data disks always be 2**n, meaning that raidz2 should start at 6 not 5?). For raidz2, do not use less than 5 disks, nor more than 10 disks in each vdev. For raidz3, do not use less than 7 disks, nor more than 15 disks in each vdev. IanReplies I need a prize if you find a typo. You are correct, I've updated it to 6. The primary source for that information is internal experience (1000's of live zpools), that is not knowledge I picked up from websites or blog posts. The 'do not use less' rule is fairly obvious - it's silly; why would you use less than 7 disks in a raidz3 (at just 5, you're left with more parity than data, and should probably have gone with raidz2 at 6 disks). The 'more' rule logic is around the nightmare scenario of 'lose another disk while resilvering from an initial loss'. You do not want this to happen to you, and keeping the number of disks per vdev low is how you mitigate it. I will actually bend these rules based on disk type, JBOD layout, known workload, etc. I'll be more conservative in environments where the number of JBOD's is low, the workload is high (thus making resilvers take longer as they compete for I/O), or the chosen disk type is very large (3+ TB) since they take forever to resilver, or a vendor or model of disk I'm less confident in, since I'll then expect more failures). I'll be less conservative and even go out of my own ranges if it's a very strong environment with no SPOF on the JBOD's, good disks that are not more than 2 TB in size, and the workload is light or confined to only certain periods of the day, etc. When making this decision, it is also important to be cognizant of IOPS requirements - your environment may be one that would otherwise be OK to lean towards the high end of these ranges, but you have an IOPS requirement that precludes it, and requires you go with smaller vdevs to hit the IOPS needs. Let me know if that didn't cover anything you were curious about. 'vdevs IOP potential is equal to that of the slowest disk in the set - a vdev of 100 disks will have the IO potential of a single disk.'Reply is this how it works in traditional RAID6 (if we are talking raidz2) hardware arrays too? I'm a bit concerned as my current volumes are comprised of 12 vdevs of 6 disks each (raidz2), which if I understand correctly means i'm really only seeing about 12 disks worth of write IOPs. Which would explain why it doesn't seem that fantastic till we put a pair of Averes in front of it.Replies No, traditional RAID5/6 arrays tend to have IOPS potential roughly equivalent to some % of the number of data drives - parity drives. This is one of the largest performance differences between a traditional hardware RAID card and ZFS-based RAID -- when doing parity RAID on lots of disks, the traditional hardware RAID card has significantly higher raw IOPS potential. ZFS mirror versus hardware RAID10 is a reasonable comparison, performance wise, but ZFS will win no wars versus traditional RAID5/6/50/60. Then again, it also won't lose your data, and isn't subject to the raid write hole problem. :) I often have to remind people that ZFS wasn't designed for performance. It's fairly clear from the documentation, the initial communication from Sun team, and the source, that ZFS was designed with data integrity as the primary goal, followed I'd say by ease of administration and simplification of traditionally annoying storage stuff (like adding drives, etc) -- /performance/ was a distant second or third or even fourth priority. Things like ARC and the fact that ZFS is just newer than many really ancient filesystems gives people this mistaken impression that it's a speed demon -- it isn't. It never will be. If your use-case is mostly-read and reasonably cacheable, ARC/L2ARC utilization can make a ZFS filesystem outperform alternatives, but it's doing so by way of the caching layer, not because the underlying IOPS potential is higher (that's rarely the case). If your use-case isn't that, then the only reason you'd go ZFS is for the data integrity first and foremost, and also possibly for the features (snapshots, cloning, compression, gigantic potential namespace, etc); not because you couldn't find a better performing alternative. Hi Andrew,Reply I am a "normal" home user with a "normal" home media server and after reading (and some testing with virtualbox) been considering moving to zfs (freeBSD or solaris) from windows 7. Most likely going to try Esxi (no experience on this either but I have no problems learning) to run one VM for file server (zfs) and another for Plex media server. Specs for my "server": Currently running windows 7, Intel motherboard (DP67BG if I remember correctly), i7 2600k and 16 GB of DDR3 (non ECC) ram, one 500 GB HD for the OS and 8 3TB (sata, non enterprise class) HDs for data (bought 4 then the other 4 later). The 8 data disks are on a raid 6 hardware array (adaptec 6805) with around 9 TB of used space. 90% of that space are movies in mkv format (I rip all my blurays with makemkv so I have 20-30 gb files) and 10% of random files (backup of family photos and stuff from main box, that I back up to another 2 different HDs). Main purpose of my "media server" is Plex, serving 2 HTPCs and some devices (iPads). I want to move from one windows 7 box to Esxi with VMs to have storage and Plex on different VMs and optionally a third VM for misc stuff (like encoding video/testing). Everytime I install/update something I have to reboot the windows box and if anyone is watching a movie has to wait for it to get back online. Apart from a learning experience, would zfs (solaris or freeBSD) be better or am I just fine and should just try to use Esxi with windows VMs?? Would a zraid2 be better than the hw raid6 array I currently have (for my use)? My plan is one VM for zfs (still don't know what to install here, solaris, freeBSD, nexenta, etc.), one for Plex media server (windows or linux) and one windows VM for misc stuff. Thanks a lot for any feedback.Replies I can't wait until we're "normal" home users, Simon. Pretty sure we're not, at the moment. :) I could easily run over the 4,096 character limit trying to advise you here. The tl;dr version would be: No, not unless you've got a backup of the data or a place to put it while migrating, and preferably only if you're willing to change boxes to something with ECC RAM in the process (that in and of itself is not a deal breaker) and definitely to an HBA instead of a RAID card. So you're definitely migrating data. It's potentially a lot of work for some data integrity gains. If you're not planning to use any ZFS features (which your present list of requirements doesn't seem to indicate you would -- you mention nothing that sounds like zfs snapshots, clones, rollback and so on would /really/ improve your life, those data integrity gains may not be worth the move (definitely not if not also going to ECC RAM & an HBA). Moving off Windows to a *nix derivative for the storage portion is very sane. Separating the storage to its own box or VM is reasonably sane. The level of effort to get you there safely on ZFS would almost necessitate buying a whole new server. As for choice, if you do decide to go through with a migration to a new box and ZFS, in order of what I feel to be the best options at the moment (9/21/2013): If you prefer command line administration: 1. OmniOS 2. FreeBSD 9.1 (or, really, wait for 10!) If you prefer UI administration: 1. OmniOS with free version of napp-it if over 18 TB of space required 2. NexentaStor Community Edition if under 18 TB of space required 2. FreeNAS I can't currently recommend you use anything sporting ZFS On Linux. Lack of fault management, lack of dtrace/mdb, few other niggling things keep it off my list for now. > I can't wait until we're "normal" home users, Simon.Reply I just stumbled upon this looking for advice how to install a global hot spare on Solaris 11. There is conflicting information about this capability and after reading your post, I am thinking that one or two warm spares might be more appropriate. My setup is a Solaris 11 home server, doing multiple duties for the family as a Sun Ray server, virtual machine host (so all five family members can have as many instances of Windows, Linux or whatever) and media server. Thin clients are scattered around the house and in many rooms,. Rooms each have a 24 port switch with gigabit fiber backhauled to the rack. The server is HP DL585 G2 with 4x dual core Opterons and 60GB of RAM, two fibre HBAs each with two ports, connected to two Brocade switches in a way that any HBA, cable or switch can fail without losing a path to the disks. Disks are 500GB in four arrays of 11 disks, each with dual paths. Your notes on backup are spot on. Right now the main pool consists of 3 vdevs, each containing 8 disks in RAIDZ2 (6+2), allowing any two disk failures before the array becomes critical. The remaining 20 disks are a backup pool as a single RAIDZ3 vdev (17+3). Snapshots are synchronized to the backup pool every 8 hours using the zrep script. The disks are not terribly reliable with one failing every week or three. I have about 50 cold spares, so the loss of a spindle is not an issue, but I often cannot get time to make the replacement too quickly. I was thinking that it made sense to reduce the size of the backup pool and allocate two global hot spares, so that any failure would rebuild automatically and give me time to respond. Your post brought back scary memories of a single array going offline, causing ZFS to scramble to build hot spares and declaring the whole pool invalid. I think I will take your advice and simply allocate one or two warm spares.Replies Yeah - I stress hard not to do hot spares, and never feel quite as good about builds that the client ends up demanding it in, claiming it is required or because they actually meet my criteria for using them (I can never feel very good about a SAN that nobody will even be able to remotely login to upon notification of a failure for over 72 hours). Glad I could remind you of a scary memory, I suppose. :) PS: Nice home setup! Andrew, here is another reason to be careful with ZFS. I have a less reliable spare array and many unused fast 73GB disks. I also wanted to investigate how L2ARC would impact performance, or even if ZFS would populate L2ARC storage. No problem, power up the spare array, put in some disks and add them as cache.Reply bash-4.1$ sudo zpool add tank c0t20000011C692521Bd0 vdev verification failed: use -f to override the following errors: /dev/dsk/c0t20000011C692521Bd0s0 is part of exported or potentially active ZFS pool slow. Please see zpool(1M). Unable to build pool from specified devices: device already in use Oh yeah, those were part of an old pool. No problem, override. bash-4.1$ sudo zpool add -f tank c0t20000011C692521Bd0 Did you catch the error? My array now looks like this: capacity operations bandwidth pool alloc free read write read write ------------------------- ----- ----- ----- ----- ----- ----- tank 4.77T 6.18T 0 0 63.9K 0 raidz2 1.59T 2.04T 0 0 0 0 c0t20000011C61A75FFd0 - - 0 0 0 0 c0t20000011C619D560d0 - - 0 0 0 0 c0t20000011C619A481d0 - - 0 0 0 0 c0t20000011C619DBDCd0 - - 0 0 0 0 c0t20000014C3D47348d0 - - 0 0 0 0 c0t20000011C619D695d0 - - 0 0 0 0 c0t20000011C619D742d0 - - 0 0 0 0 c0t20000011C619A4ADd0 - - 0 0 0 0 raidz2 1.59T 2.04T 0 0 63.9K 0 c0t20000011C619D657d0 - - 0 0 10.7K 0 c0t20000011C61A75A6d0 - - 0 0 10.7K 0 c0t20000011C619D4ECd0 - - 0 0 10.7K 0 c0t20000011C619A043d0 - - 0 0 10.5K 0 c0t20000011C619D669d0 - - 0 0 10.5K 0 c0t20000011C61A7F9Cd0 - - 0 0 0 0 c0t20000011C619D6C5d0 - - 0 0 0 0 c0t20000011C619D220d0 - - 0 0 10.7K 0 raidz2 1.59T 2.04T 0 0 0 0 c0t20000011C619DCD3d0 - - 0 0 0 0 c0t20000011C619D7FCd0 - - 0 0 0 0 c0t20000011C619D646d0 - - 0 0 0 0 c0t20000011C619A41Fd0 - - 0 0 0 0 c0t20000011C6199E5Ed0 - - 0 0 0 0 c0t20000011C619D43Fd0 - - 0 0 0 0 c0t20000011C61A7F82d0 - - 0 0 0 0 c0t20000011C619D636d0 - - 0 0 0 0 c0t20000011C692521Bd0 33.8M 68.0G 0 0 0 0 cache - - - - - - c0t20000011C615FDBAd0 0 68.4G 0 0 0 0 c0t20000011C6924F09d0 0 68.4G 0 0 0 0 c0t20000011C6C2163Cd0 0 68.4G 0 0 0 0 c0t20000011C6C2C468d0 0 68.4G 0 0 0 0 c0t20000011C6C2C4B8d0 1.15M 68.4G 0 0 0 0 ------------------------- ----- ----- ----- ----- ----- ----- So now, my entire pool is critical due to a a single vdev located on an unreliable array. My only hope is to mirror it (which I have done) and pray that the spare array stays alive until I can rebuild the ENTIRE POOL from a backup. That's rather lame.Replies Ouch. Yes. This is much like forgetting what your current path is and rm -rf'ing. :( This is the sort of thing that generally prompts the creation of 'safe' tools to use in lieu of the underlying 'unsafe' tools. I say this somewhat tongue in cheek, since my employer makes one of those 'safe' tools and yet I'm fairly sure it would have still let you do this in the interface (though I'll make a point of bringing it up to our development staff to add some logic to keep you from doing so without a warning). Yes, not very fun. The underlying issue (besides me not realizing my mistake sooner) was that the -f override silenced the warning I had seen, and most critically, the warning I had not seen yet (and would never see). Andrew - could you expand on what tragedy might happen if you mixed disk sizes, speeds, redundancy types?Reply I'm thinking of expanding a pool that so far has only one vdev. RAID_Z2[ 10 x 600G 10k ] + SLOG + Hot-Spare (before I found your blog) Proposed 2nd vdev = RAID_Z3[ 11 x 1TB 7.2k ] + SLOG Single 24-slot 2.5in JBOD chassis. SLOG devices are STEC s840z. NexentaStor with the RSF-1 H-A plugin. Thanks for any responseReplies Performance, mostly, including a few performance corner cases you'd be hard-pressed to actually hit in a homogeneous pool. Before I answer generically, let me state that as a NexentaStor user, if you have a license key with an active support contract, be aware that Nexenta Support does not support the use of heterogeneous pools. Contact them for more information. If you were to add an 11x 1-TB disk raidz3 vdev to an existing pool comprised of a single 10x 600-GB raidz2 vdev, you'd be effectively adding a larger, slower vdev that is also at start less utilized. First, this will make ZFS 'prefer', it as it's emptier (which shouldn't be read as completely ignoring the other vdev for purposes of writes, but it is going to push more % of the writes to the new vdev). Second, it's larger, so it will prefer it even longer. Third, it's slower, and this 'preference' is not synonymous with 'all on new vdev'. So at the end of the day, you've added another vdev which should have almost doubled your write performance, but instead it won't double it, it in fact will probably only increase it by 20-50%, because not only is every write only as fast as the slowest vdev involved in the write (and now you've got a 7200 RPM vdev in there), but it's going to write a larger majority of the new data onto that slower vdev for awhile, as well. Even if you rewrite data often enough that you eventually 'normalize', it will still end up only improving your pool's write IOPS by less than double the original speed, as the new vdev isn't as fast as the old one. I feel compelled to point out, though, that the part about normalizing and preferring the new vdev is going to happen regardless of similarity in the vdevs - that's one of the reasons I like to explain this early if I get the chance, so people know what to expect when it comes to 'expanding' a pool (it expands the space, but you can't expect it to expand the performance nearly as linearly, especially if you don't rewrite existing data that often). If all you're concerned about is more space, and you have no performance problems, you might be OK, but if you presently have a system that is nearing its maximum performance whatsoever, adding this vdev is likely to end up tanking you in the end, if adding capacity means you also add client demand at the same rate. The new space (and the old space) won't respond as quickly on a 'speed per GB' basis as it did pre-addition, so if you had 20 clients before and you add 30 more (as you're adding more than double the original space) for a total of 50 clients, there's every expectation the pool will fall over, performance wise. Hopefully that makes sense. Can you tell me more about this please?Reply 15. ARC and L2ARC (9/12/2013) There are presently issues related to memory handling and the ARC that have me strongly suggesting you physically limit RAM in any ZFS-based SAN to 128 GB. Go to > 128 GB at your own peril (it might work fine for you, or might cause you some serious headaches). Once resolved, I will remove this note. We have 384GB of RAM and on one system I notice that the disk pool goes to 100% over time (3 days) but then if I export and an re-import it we are good for another couple of days. We are running Solaris x86 11.1 and specifically SRU 0.5.11-0.175.1.7.0.5.0. Later SRUs exhibit the same problem. Any ideas much appreciated! EffremReplies In the production ZFS Appliances from Oracle you can purchase them with up to 1TB of RAM per controller so they must not agree with not going over 128GB the 7420s we use have 512GB per controller. The real issue with more RAM is ARC evictions when you delete large things. For the most part this can be alleviated by adjusting zfs:zfs_arc_shrink_shift. I have several ZFS systems with 256GB ram and have this setting in /etc/system: set zfs:zfs_arc_shrink_shift=12 It does not fix the case of deleting a single large file. In my case that means any file over a few TB needs a bit of timing consideration and temporarily disabling RSF-1. In my environment that has been only twice in the past 3 years. It has not been an issue otherwise. I will not hesitate to build with 1TB or more RAM if I find it necessary. The arc shrink shift will have to be adjusted accordingly. Oracle's latest offering has 1.5TB of RAM. They have latched on to how ZFS keeps getting faster with more RAM. For the most part I would say the rule about limiting to 128GB of RAM has been blown away. However, ZFS needs some enhancement to take better advantage of modern SSDs for L2ARC. The default tuning no longer makes any sense and writing needs to be made more parallel to take better advantage of multiple SSDs. -Chip. Hello,Reply I was wondering you could clarify this point for me: For raidz2, do not use less than 6 disks, nor more than 10 disks in each vdev (8 is a typical average). I am doing a home NAS for my media and was considering doing a raidz2 pool with 4x2.0TB WD Reds. Why would I want to use a minimum of six as opposed to four? It is a mini-ATX case so space is tight and I really wanted to add a second RAID0 group for a hot backup location. I would have to sacrifice this to get six drives in my raidz2 pool. Can you elaborate? Thank you for this guide as well.Replies For home use, 4 disks is fine. For enterprise use, follow the recommendations in this guide, I'm considering building a 24 disk storage box for home use (insane hobby). Although not recommended for production use, would there be a significant downside to just go with 2x12 disk RAIDZ2 vdevs in one pool?Reply. If you go with 3x8, at the cost of two disks for parity, you'll increase your write IOPS by 50%. The recommendations Andrew gives are a balance between speed, space, and redundancy.Reply I'm building out a 12-bay solution that will mostly be used to house VMWare virtual machines. My plan right now is to have 64G of ECC RAM, a 128G ZIL SSD drive, 2x240G SSD in a mirror pool for applications that need extra performance and 8x4TB WD Red setup as a mirrored pool for the main storage. Anything in particular I need to watch out for? The majority of the servers will connect to the storage via a 4G fiber channel switch, but there will also be connections via regular 1G ethernet. If I understand the math correctly, my theoretical max throughput for the main storage would be 4 x the throughput of a single WD Red disk, so appx. 600MB/s, right?Reply HIReply The article and following comments has given more insight to my knowledge with regard to ZFS. I am trying to build a storage of about 100TB usable with the following configuration. Please let me know if any precaution to be taken in terms of performance. The requirement is for an NFS storage for mailstore for around 100000 Mail users. Mailing solution will be based on Postfix + Dovecot. 3TB NLSAS or Enterprise SATA HDDs x 55Nos, configuring Raidz3. I will be using a server class hardware system (SuperMicro or Intel Server System) with Dual 6 Core Xeon CPU and can have around 128GB RAM. Do you recommend more RAM or do I need to invest in SSDs for ZIL or L2ARC. Kindly help with any precautions that I may to take take before procuring this infrastructure Thanks in advance. I have 2 Linux ROCKS clusters currently with hardware RAID 24 bay SATA drive systems. I intend to replace the 3TB SATA drives with 4TB SAS, and install JBOD HBA's to move from XFS to ZFS. I suspect that storage capacity and redundancy will be prized over outright performance. Can anyone suggest a starting point for the ZFS setup? I will likely start with 128GB RAM, and I need to investigate the particulars of the backplane in these Supermicro boxes. Will we likely take a big performance hit using what I presume is a 1x3 expander? Should I be looking at replacing the backplane and using three 8 channel HBA's?Reply Hi,Reply first of all, thanks to Andrew for this great article and to the people who commented on it. We have used Nexenta for a year now and this has given a great deal of valuable information. I've got one question about recordize. For our first pools we used 128k recordsize because we were told it was the default value and most suitable for many cases. We trusted the techies from Nexenta who gave that advice, until we started to have experience some bumps in the road with our production pools. One of the things I tested was different recordsizes. Our use case is Xen Community edition accessing Nexenta 3.x through NFS. In ZFS we store .img files, one for each vm. So, I did a lot of testing with iozone and my conclusion was that if you align NFS mountpoint and ZFS recordsize to either 4k or 8k, you get the best possible performance of all the possible combinations which go from 4k to 128k on both sides. I also used Dtrace to get as much information as possible from the pool, and I saw that more than 90% of the requests to the ARC are either 4k or 8k blocks, no matter what blocksize on Linux or recordsize on ZFS you use, you always get the same kind of requests from Xen. I'm telling you this because I've seen many articles and posts in forums about this which say the contrary, that you should use recordsizes of 32k or bigger, or even stick to the default 128k. I would like to know if anyone has ever done this kind of tests and what they got. Why are my results so different than the recommended values? I have no graphs or anything "nice-looking" to show you, just a text file with all the results, but if anyone is interested in my findings I am more than willing to publish it somewhere in a human-readable way. Thanks.Replies Jordi: I recently testing a nuymber of block sizes using the free NexentaStor 4,0 release. The winner was 32k but this was with a Linux box. Windows still uses a native 4k buffer, so depending on your mix of wht you run (Linux and Windows) it may take a compromise (8k?). The duffer flushing mechanism used by Nexenta is set to be 32k so an engineer I spoke with there recommended 32k for everything. --Tobias minor correction: "without further adieu" should be "without further ado"Reply Pankaj International manufactures and supplies wide variety of best quality pole line hardware, high tensile fasteners, fence fittings like 2h nuts, b7 studs, hex nuts, hex bolts and many other line construction hardware products.Reply High tensile fasteners india Greetings! I enjoyed your article. Thank you.Reply Background: My FreeNAS servers are currently pretty anemic Dell r410s (only 8GB RAM, 2x E5506), with LSI SAS2308 based HBAs. I have 4 norco ds24E SAS expander jbods 2 on each r410. Each ds24E set is configured as 1 with 24 (Hitachi Deskstar 7k1000 (HDS721075KLA330)) and the other with 24 (ST4000VN000). Each box is a pool of 4 6-disk raidz2 vdevs. Because of volume of data, the backup solution is periodic rsync across r410s. Where periodic ends up being whenever I add a movie/show to my library. In any case, the smaller ds24E houses the plexmediaserver jail and transcoded full quality .mkvs and the larger one has all of the 1:1 .isos. It's been a long, _LONG_, process to rip my entire library. Some questions: What is the authoritative way to tell what block size is, or _should_ be, being reported to the OS? Using zdb I can see ashift is set to 12 (4k) for _both_ of my pools. Using diskinfo, I see 512B sectors for all 48 disks. Clearly that's not right. However, when using smartctl, I find 'Sector Sizes: 512 bytes logical/physical' for the Hitachi drives and 'Sector Sizes: 512 bytes logical, 4096 bytes physical' for the Seagate drives. Is smartctl an/the authoritative source for this information? Are there issues with using 4k ashift value on true 512B hard drives? I'm also wondering if FreeNAS is just pinning the setting to 12? Because the Dells are too anemic with multi-stream transcodes occurring and I just upgraded to a Haswell-E system, I had designs on making my Sandybridge-E system a replacement for one of the r410s. It is not anemic and has a decent amount of RAM (64GB). Although it is not ECC. Other benefits include, multiple PCI-E slots. The r410s only have the one slot. 50GB .isos are slow going over Gbe so I picked up an xs712t, and two intel x540-t2 NICs. Right there I can't use the r410s outside of the 2Gbe lagg. So, it would seem that moving to the Sandybridge-E box has a lot going for it. I don't/would not plan on duplicating it so it would be a SPOF, but I would be fine with using an r410 in a pinch. Is this a reasonable plan? I also have just acquired an Areca ARC1883ix-16 and an 8028-24 to convert an additional ds24e to 12Gb. Yeah yeah, 6GB sata drives != 12Gb SAS. There should still be some perf benefit to be had. I was thinking I would try the hw RAID solution in a FreeNAS box. And then I was wondering how that was going to really work with zfs. And _that's_ how I came across your article. Given that, what is the downfall if the hw raid card were exporting 4 (raid 6) devices and those were put into a raidz1 pool? Beyond 6 extra disks worth of unuseable space? I understand you mentioned raidz1 <= 1TB and preferably 750GB. And it may well be asinine, but I'm curious as to if it was just guarding against the additional, potentially imminent, failures occurring whilst attempting to resilver a spare? Thanks in advance..Reply Bolts and nuts. Just a quick note on the content here. It's a real breath of fresh air after digging through the - well, frankly snotty tone at the forums.freenas.org site.Reply The biggest issue for neophytes with ZFS systems is the number of ways you can lose all your data, not some files or a disk. That gets stated a lot in the online content about ZFS, but it's only mentioned as an explanation of why to do or not do something. I think that's inverted: ZFS systems need a big, red, blinking warning that there are subtle ways to lose your entire pool of data if you do it wrong, and then the list of what to do or not do needs to follow. As so well stated here, ZFS is fundamentally different from other file systems and has different pitfalls. I have two identical systems with 12 drives each. Dual L5640, 32GB, 10gb ethernet.Reply Initially they were set up as a HA pair, BSD 9.2, using HAST & CARP. They performed the initial synchronization in 8 hours, at over 1000 MBs, which lead me to believe that HAST works very well. Then a ZFS pool was created, using 2 virdev of 6 drives raidz2 Two ZFS volumes were created and used for ISCSI The best write speed initially was less than 150MBs and diminished substantially within a month. The HA was taken apart, Each was reconfigured as a standalone NAS, with memory increased to 64GB. They each write ~170MBs and has been consistent for over a month. I decided to experiment with NAS-1 and set up ZFS using several layouts, none of which made much difference, until I changed the ISCSI from a ZFS Volume to a raw file on ZFS. Doing so increased the write speed to ~275MBs. (+100MBs) Do you know why using a raw file vs a ZFS volume would make such a difference? Is there a better way to deliver the disk space to a XenCenter group? (Footnote - XenCenter 6.2 apparently does not recommend NFS v4 and our distribution had bugs with NFS v3, so we opted for iSCSI. Thank you, Steve a good articlestorage serverReply What, in zfs terms, is a "warm spare"?Reply It means, the drive is in server, but not in any of disk pools.Reply I'm building two NAS4FREE boxes, one with two new 2TB drives and the other just to play with using whatever I've got laying around. I'm building the fun one first, for experience. What I've got laying around are two 500GB drives, one 300GB drive, and one 200GB drive. It it possible to first combine the 300 and the 200 into a 500GB vdev and then combine that and the other two 500GB drives into a RAID? Keep in mind this isn't for anything serious or critical, I'm just messing around with left over parts I have lying around. NAS4FREE itself will reside on yet another disk, these are just for data.ReplyReplies Rick: Yes and no. Without getting fancy, you can't add the 200 and 300GB drives to create a single vdev of 500GB effective size. However, you can simply add each drive by itself as a basic vdev, and you do get all 500GB of combined space. Doing this however is effectively raid0 and if any of those drives fail, you lose all the data on all the drives. A fancy way to do what you have in mind, is to create on pool with the 200 and 300 added as basic vdevs, then create a zvol from that, and then create a 2nd pool adding each 500gb drive and the zvol from the first pool. This would end up with a pool of 3 vdevs each 500GB in size, but is in no way functionally different than the 1st example of 4 basic vdevs in a single pool.
https://jaytaylor.com/notes/node/1456678553000.html
CC-MAIN-2021-39
refinedweb
11,910
67.59
dolphinscheduler-operator feature - deployment the master ,worker moudle - scale the pods numbers with one commond - update the master,worker version quickly (not include the sql) Project Status Project status: ‘alpha1’ Current API version: v1alpha1 Prerequisites go version : go1.17.6 minikube version: v1.25.1 kubebuilder version: 3.3.0 kubectl version: 1.23.1 create namespace ds kubectl create namespace ds install postgres (not required) if had no postgressql ,you can turn into config/ds/ and run “kubectl apply -f postgreSQL/” ,but you need to replace your local document to hostPath.path in postgres-pv.yaml first connect to postgressql and run the sql script in dolphinscheduler/dolphinscheduler-dao/resources/sql record the deployment ip eg: 172.17.0.3 install zookeeper(not required) if had no zookeeper ,the doployment file is in config/ds/zookeeper ,run “kubectl apply -f zookeeper/” and record the ip ,eg :172.17.0.4 create pv and pvc (not required) if you had pv and pvc ,you can config it in config/sameples or you can create it with config/ds/ds-pv.yaml and config/configmap/ds-pvc.yaml .notice to replace your local document address in hostPath.path in ds-pv.yaml and you can mount the lib in dolphinscheduler /opt/soft in config/samples/ds_v1alpha1_dsworker.yaml with paramter named lib_pvc_name mount the logs in /opt/dolphinscheduler/logs with the paramters named log_pvc_name with pvcname how to test replace the database config and zookeeper config paramters in config/samples/*.yaml replace the nodeport in config/samples/ds_v1alpha1_api.yaml in current project run “make build && make manifests && make install && make run” cd to config/samples first run “kubectl apply -f ds_v1alpha1_dsalert.yaml “ then run “kubectl apply -f ds_v1alpha1_api.yaml -f ds_v1alpha1_dsmaster.yaml -f ds_v1alpha1_dsworker.yaml “
https://golangexample.com/apache-dolphinscheduler-with-golang/
CC-MAIN-2022-33
refinedweb
293
58.79
If you want to check the Python version on a Linux machine, it’s really not that hard. Python is a great programming language to learn. It has simple syntax and yet it is also very powerful. Having the latest version of the Python interpreter is a good idea. You don’t want to be using an old, out of date version, especially one that is deprecated. How to Check Your Python Version So, let’s get to the nitty gritty. If you are running the Python interpreter on a Linux machine, here is what you need to do to check the Python version. Some of these commands may also work on Windows machines using the Windows command line. Command Line Python Version Check Login as root, and from the command line, type: python --version (That is two dashes before the word “version.”) If you are using Ubuntu, and can’t login as root, use the sudo command ahead of the text as follows: sudo python --version The shortcut to this is to use -V instead of --version, such as: python -V Or, in Ubuntu: sudo python -V You might also try a lowercase v, because it is by the way, an almost universal command on Linux to check the version of software running on the server. However, with Python, the uppercase V is what is in the user manual. This can be confusing because so many other programs use a lowercase v. For example, you can use the same command with php: php -v Or, in Ubuntu: sudo php -v However, sometimes, the name of the software is not so obvious. With the Apache web server, you would think that the command might be “apache -v” but this is wrong. To find the version of Apache running on your server, type this in instead: httpd -v Or, in Ubuntu: sudo httpd -v HTTPD stands for “HTTP Daemon,” which makes a bit more sense – this is the daemon (or server) that uses the HTTP protocol to deliver web pages. At any rate, if you can remember the -v, and the lowercase version is not working, you now know to try an uppercase -V for checking the Python version. You can also just type in “python” at the command line (or “sudo python” in Ubuntu). You will get not only the version you are running, but a lot more info, such as: Python 2.7.5 (default, Jun 17 2014, 18:11:42) [GCC 4.8.2 20140120 (Red Hat 4.8.2-16)] on linux2 Type “help”, “copyright”, “credits” or “license” for more information. In some respects, this is just faster and easier, and gives you a lot more information about the entire platform. Checking the Python Version from a Control Panel Surprisingly, when you search for ways to check the Python version from an administrative interface or control panel such as WHM or Cpanel, not much comes up. Your best bet is to use the search function from within the control panel to look up “Python.” From there, you might be able to see which version of Python is installed and running. Checking the Version of Python from Within a Python Script You may want to check the version of Python from within a Python script you are writing. Some of the reasons may be that you want to ensure compatibility with the Python interpreter, especially if the script is going to be run on different servers. There is no one solution to this, as you can find various examples and ways to do this from within Python. It will also depend on which version of Python you are writing your programming script for. You can check sites such as Stack Overflow, where developers can get into long and sometimes convoluted discussions on how best to do certain things with code. In this Stack Overflow discussion about checking the Python version, you can find this potential solution for looking at the version from within a Python program: # Method 1: import sys assert(sys.version_info >= (2,6)) ?# Method 2: import platform from distutils.version import StrictVersion assert(StrictVersion(platform.python_version()) >= "2.6") ?Note that this answer does not get as many votes as some of the solutions above. However, this answer was posted on November 16, 2017, making it a much more recent answer than the highest voted answer, which was originally posted in 2009. This is why it is important to check the dates on Stack Overflow answers, because what is voted up the most may be due to its longevity on the website, when a more current answer with fewer votes may be the answer that works. A Little Bit More About Python Versions Python, like any living programming language, changes over time. As the team who maintains the code improves it and optimizes it, the syntax of the language may be modified. Usually, these changes are not so overwhelming that you couldn’t understand a Python program from a different version. However, these changes could break your program or make it not work as expected. Compatibility Between Python Versions Python, like PHP, is an interpreted programming language. This means that Python is not compiled into an executable program before it is run. Instead, it is “compiled on the fly” via the Python interpreter. Thus, the program will work until it hits up against some code that isn’t in compliance with the Python version the interpreter is using. Then, the program could hiccup, mess things up, or stop working entirely. Usually, if you are using a Python version that is the same major version, but the interpreter has a different minor version, the hiccups will hopefully be minimal. However, if you are trying to run a Python program where the major version is different from the major version of the interpreter, it is often totally incompatible. You especially have to pay attention any time a release is labeled as “backwards incompatible.” This means that it is totally not going to work with earlier versions of the software. Python 3.0 is backwards incompatible with Python 2.0. However, some of the features of the 3.0 version of Python have been “backported” to Python versions 2.6 and 2.7 to make the code more compatible. Major Versions vs. Minor Versions of Python As with most programming languages and software applications, Python uses a numerical convention to distinguish between major and minor releases. The major version is the number before the first period. The minor version is the number after the period, with an update to that minor version following the second period: MajorVersionNumber.MinorVerson.MinorUpdate Thus, 3.2.5 is major version 3, minor version 2.5. Also, a “zero” indicates the general major version. All variations of Python 3 (Python 3.1.0 or 3.5.1) are part of Python 3.0. All variations of Python 2 are part of Python 2.0, etc. Here is a list of all Python versions: Python Beta Version An initial version of Python was released in 1991. This was 0.9.0 and not a full number 1, as it was not ready for prime time yet. These versions were released on the following schedule: ?Python 1.0 The first official version of Python was launched in 1994. ??Python 2.0 Released in 2000, Python 2.0 not only offered a whole host of new features, but the new version was transitioned to a community-based, collaborative open source language. ?Python 3.0 Python 3.0 is also known as "Python 3000" or "Py3K," and it is the current major version of Python. ?Don’t Forget to Check Python Version Checking the Python version is easy, just remember to do this once in a while – perhaps by placing a reminder on your calendar. You will need to keep the Python interpreter up to date to benefit from the latest features of the ever-evolving and constantly improving Python coding language.
https://www.liberiangeek.net/2019/04/check-python-version/
CC-MAIN-2022-40
refinedweb
1,333
71.24
. I don't think u need the object() Also, I believe the py plugin which allows you to use picking data in python might solve this problem for you, Well if I try just Bob = System.Create it gives me a NoneType error when I try to set the angle. No error on the actual creation, heh... I'd seen the object picking fix, but I don't think that will help me in this particular case, since everything will be handled within the python. Is there some way to do this? I was annoyed that the pyfix plugin didn't work for this, so I made you another one: (plug and example cap included) The normal return value of System.Create is None, which is python's null value. All of my past attempts to store a direct, persistent reference to a single instance of a multi-instance Construct object have failed. Many of the python methods for Construct objects don't work properly, if they are even there, and work at all. It definitely needs some work. As far as the index number goes, they are practically scrambled every time an instance is destroyed, so they can't be counted on to point to the same instance under all circumstances. Here's the output of a quick test with my PyShell.cap: >>> ls = [] for i in range(5): for i in range(5): ls.append(System.CreateByName('Sprite',1,i*36+16,16)) ls ls [None, None, None, None, None] for i, s in enumerate(Sprite): for i, s in enumerate(Sprite): print i, s.uid 0 1 1 3 2 4 3 5 4 6 5 7 Sprite[2].uid Sprite[2].uid 4 Sprite[2].Destroy() > for i, s in enumerate(Sprite): Sprite[2].Destroy() > for i, s in enumerate(Sprite): 2 7 7[/code:4py0sf3d] I'm afraid that this affects your above plugin's function as well, Lucid, though I really like the idea. Not saying that there isn't a better way that I haven't found, but the only way I know of to reliably reference an instance is to store it's uid, as you might do with events. Then, you can use something like so: def rotateInstance(obj, uid, deg): for i in obj: if i.uid = uid: i.angle += deg[/code:4py0sf3d] ... which kind of sucks. I keep hoping that I stumble across a clever way to eliminate that loop. That said, I still find Python quite useful for many things in Construct, but you may need to interface it with normal events for some things. Construct's function object can help with that. I'm afraid that this affects your above plugin's function as well, Lucid, though I really like the idea. if he only needs store the objects python index temporarily this should work fine, which I assumed was the case given his bob example however, if he needs to store many instances long term, you can already manipulate large arrays of objects using python with s also, btw arsonide, if you just want to create the object and assign it all the various dimension, position, information etc, I could just add that to the create function, so you don't have to worry about storing instance data at all if he only needs store the objects python index temporarily this should work fine, which I assumed was the case given his bob example however, if he needs to store many instances long term, you can already manipulate large arrays of objects using python with s True, it would work fine as long as you don't use the reference past the point that an instance may be destroyed, which could be never in some cases. This would be great for setting up initial values, as you mentioned. Thanks for the tip on your S plugin. I had been using python for my complicated data structure needs. That comment just opened my eyes to some new possibilities. I'll be checking S out, soon. Seems as though one (or both) of those two solutions pretty much covers it, hopefully.:27wv8267] Develop games in your browser. Powerful, performant & highly capable. What dowry do you require in order for me to marry you, R0J0hound?:1w1dg6o9] Excellent! Any caveats to this? I assume you can do System.Create('Sprite', 1, 0, 0) objRef = SOL.Sprite objRef.x = 300 System.Create('Sprite', 1, 0, 0) objRef = SOL.sprite objRef.x = 600 [/code:1w1dg6o9] Luomu, that will work no problem. Also you can eliminate the "objref" variable and do it directly: System.Create('Sprite', 1, 0, 0) SOL.Sprite.x =300[/code:32mlmfm3] Luomu, that will work no problem. Also you can eliminate the "objref" variable and do it directly: System.Create('Sprite', 1, 0, 0) SOL.Sprite.x =300[/code:2btwhewz] So, can this be used in the fashion of the py fix plugin? Is there a len() operator of any fashion? Is this correct? import random for x in range(10): System.CreateByName("spr", 1, random.randint(10,400), random.randint(10,400)) for i in range(10): spr[i].X = random.randint(10,400) spr[i].Y = random.randint(10,400) [/code:2btwhewz] Just wondering. I couldn't figure out how to use the SOL class for iterating, but the above works (not sure if it's intentional or not, or if that was always functional). SOL stands for "selected object list" and it allows you to access the objects picked via events. When an object is created it becomes the only picked object of that type. Also the created object will not be accessible via "Sprite[index]" until the next frame. Say there are no instances of the "Sprite" object. System.Create('Sprite', 1, 45, 45) Sprite.x=22 #this will cause an index error[/code:25n0pb6b] [code:25n0pb6b]System.Create('Sprite', 1, 45, 45) SOL.Sprite.x=22 #this will work[/code:25n0pb6b] Your example will create 10 sprites and move the first 10 sprites not the created ones. Ideally new objects are modified right after each are created. You can however create multiple objects save their references and modify them later. [code:25n0pb6b]import random newObjs=[] for x in range(10): System.CreateByName("spr", 1, random.randint(10,400), random.randint(10,400)) newObjs.append(SOL.spr[0]) for obj in newObjs: obj.X = random.randint(10,400) obj.Y = random.randint(10,400)[/code:25n0pb6b] [quote:25n0pb6b]Is there a len() operator of any fashion? len(Sprite) returns the number of Sprite objects excluding objects created that frame. len(SOL.Sprite) returns the number of picked Sprite objects. It will return 1 after a sprite object is created. I suppose I don't see how that is a list, though. You're saying I have to make my own list and store the references returned by SOL... so is it a misnomer? Should it be called LastObjectRef instead? A list would be.. a list. Not a single reference after 10 calls of Create*() It would probably be prudent to do a write up on all the functionality of SOL... lest we all sit around trying to figure out it's actual functionality. As it stands, the Lucid's plugin is more useful for picking objects. I must be confused, I thought this was proper built in support for that bandaid fix. As it stands it seems like a wrapper to get the last object created, and not a list at all. Surely I am mistaken? In a nutshell "Sprite" is a list of all the sprite instances. "SOL.Sprite" is a list of all the picked sprite instances. The reason "System.Create()" picks only the new object is because that's what happens in events. Here's an example:
https://www.construct.net/forum/construct-classic/construct-classic-discussion-37/object-manipulation-with-pytho-36712
CC-MAIN-2018-34
refinedweb
1,305
66.64
Intended audience: HTML and XML authors, script developers (PHP, JSP, etc.), schema developers (DTDs, XML Schema, RelaxNG, etc.), and anyone who needs to know how to mark up content with language information when no language applies. How do I use language markup in HTML or XML content when I don't know the language, or the content is non-linguistic? You should always identify the human language of the text, when known, in HTML or a format based on XML, so that applications such as voice browsers, style sheets, and the like can process the text in an appropriate way. In XML-based formats you would usually use the xml:lang attribute, and in HTML the lang attribute. (See Working with language in HTML for details about language tagging in HTML.)? There are two parts to the above question: what to do when the text is non-linguistic, and what to do when the language is undetermined. Use the subtag zxx when the text is known to be not in any language. This would apply for text such as type samples, part numbers, illustrations of binary data, etc. The definition of zxx in the IANA Language Subtag Registry is 'no linguistic content'. For example: <p>Here is a list of part numbers: <span lang="zxx">9RUI34 8XOS12 3TYY85</span>.</p> In HTML, use lang="". If you are using XML and the format you are using supports it, use xml:lang="", otherwise use xml:lang="und". These values indicate that we cannot determine, for one reason or another, what the appropriate language information is, or whether the text is non-linguistic. For example, you might use an empty value for the language attribute if database text is included into a document but the database doesn't provide language information and you can't be reasonably sure what the language is. The effect would be to prevent any language information declared higher up the hierarchy of elements in the document from applying to the included text. However you should only tag text as undetermined if you can't just leave it as is. In practice, this means you should only use this markup if the undetermined text is embedded in content that has already been labeled for language in some way, or if its use at the document level is required by the format you are using. Legacy pages that use XHTML 1.0, and cannot be updated to HTML5 or XHTML5, should use xml:lang="und" if there is a need to express the undefined nature of some text embedded in a document, because xml:lang="" is not allowed. On the very rare occasion when the whole document is in an undefined language it is better to just not declare the default language of the document. xml:lang="" only works if the schema that describes the format of your document allows an empty string as a value of xml:lang. For example, because the XHTML 1.0 DTDs define xml:lang in such a way that an empty string value for the xml:lang attribute is disallowed, you can't use the empty string in XHTML 1.0. For those who are aware of how DTDs and other schemas work: The xml:lang attribute takes NMTOKEN values in the XML schema, so they cannot be empty. In your XML DTD, if possible, declare xml:lang as CDATA so that an empty value is allowed. For XML Schema users, rely on the XML schema document for the XML namespace. Martin Dürst points out that you can redefine the XHTML format within the document to create an XHTML page that validates while using lang="" or xml:lang="". This is not recommended for widespread use, however, because such a document is no longer strictly conforming in the sense of XHTML 1.0. This is a summary of a discussion in a thread on www-international@w3.org, and a later reprise of those ideas to which several people contributed. Related links, Authoring HTML & CSS Related links, Authoring XML
http://www.w3.org/International/questions/qa-no-language?changelang=hu
CC-MAIN-2016-36
refinedweb
674
60.24
scalars and namespace Discussion in 'Perl Misc' started by Jeff Thies, Jun 27, 2003.: - 436 - Ray Gardener - Jun 10, 2004 naming convention for scalars, lists, dictionaries ..., Feb 28, 2005, in forum: Python - Replies: - 8 - Views: - 331 - Paul Boddie - Mar 1, 2005 cannot use bitwise AND-Operator on hex-scalars from ARGV[0]Roland Reichenberg, Oct 14, 2003, in forum: Perl Misc - Replies: - 7 - Views: - 172 - Josef Möllers - Oct 14, 2003 arrays and scalars.radioactiveman, Jul 21, 2004, in forum: Perl Misc - Replies: - 7 - Views: - 116 - Brian McCauley - Jul 27, 2004 Scalars Leaked and Segmentation FaultAMLiapunov, Feb 21, 2006, in forum: Perl Misc - Replies: - 1 - Views: - 125 - zentara - Feb 23, 2006
http://www.thecodingforums.com/threads/scalars-and-namespace.869775/
CC-MAIN-2014-42
refinedweb
110
61.7
Skip navigation links java.lang.Object oracle.javatools.ui.MouseHoverSupport public class MouseHoverSupport Support to fire events if the mouse pointer hovers over a Component for a certain duration. The pointer doesn't have to remain completely still to prompt the firing of a mouseHover MouseEvent, just fairly steady to cater for hand shake. The hovering timer will begin as the mouse moves inside the component, the timer will restart if the mouse moves too far from the original hover point. Once the event has fired, the hovering timer will restart again once the mouse pointer is moved within the component. The timer will stop if the pointer leaves the component bounds. public final int MOUSE_HOVERED public MouseHoverSupport(java.awt.Component comp, int duration, boolean repeat) comp- the component to monitor for hovers duration- the number of milliseconds after a hover event will fire repeat- if true the hover event will be fired every 'duration' that the mouse pointer continues to hover over the comp. If false only one event is fired per hover. public void addMouseHoverListener(MouseHoverListener l) public void removeMouseHoverListener(MouseHoverListener l) Skip navigation links
http://docs.oracle.com/cd/E35521_01/apirefs.111230/e17493/oracle/javatools/ui/MouseHoverSupport.html
CC-MAIN-2017-26
refinedweb
187
53.61
A string that has associated attributes (such as visual style, hyperlinks, or accessibility data) for portions of its text. SDKs - iOS 3.2+ - macOS 10.0+ - tvOS 9.0+ - watchOS 2.0+ Framework - Foundation Overview An NSAttributed object manages character strings and associated sets of attributes (for example, font and kerning) that apply to individual characters or ranges of characters in the string. An association of characters and their attributes is called an attributed string. The cluster’s two public classes, NSAttributed and NSMutable, declare the programmatic interface for read-only attributed strings and modifiable attributed strings, respectively. An attributed string identifies attributes by name, using an NSDictionary object to store a value under the given name. You can assign any attribute name/value pair you wish to a range of characters—it is up to your application to interpret custom attributes (see Attributed String Programming Guide). If you are using attributed strings with the Core Text framework, you can also use the attribute keys defined by that framework. You use attributed strings with any APIs that accept them, such as Core Text. The AppKit and UIKit frameworks also provide a subclass of NSMutable, called NSText, to provide the storage for the extended text-handling system. In iOS 6 and later you can use attributed strings to display formatted text in text views, text fields, and some other controls. Both AppKit and UIKit also define extensions to the basic attributed string interface that allows you to draw their contents in the current graphic context. The default font for NSAttributed objects is Helvetica 12-point, which may differ from the default system font for the platform. Thus, you might want to create new strings with non-default attributes suitable for your application. You can also use the NSParagraph class and its subclass NSMutable to encapsulate the paragraph or ruler attributes used by the NSAttributed classes. Be aware that comparisons of NSAttributed objects using the is method look for exact equality. The comparison includes both a character-by-character string equality check and an equality check of all attributes. Such a comparison is not likely to yield a match if the string has many attributes, such as attachments, lists, and tables, for example. The NSAttributed class is “toll-free bridged” with its Core Foundation counterpart, CFAttributed. See Toll-Free Bridging for more information.
https://developer.apple.com/documentation/foundation/nsattributedstring
CC-MAIN-2017-30
refinedweb
391
54.93
28 July 2012 00:06 [Source: ICIS news] HOUSTON (ICIS)--?xml:namespace> The reduction took the July MEK range to 87-95 cents/lb, as assessed by ICIS, widening the typical range seen in this market. Most of the decline was reflected at the low end of the new range. Reductions of 4-9 cents/lb had been heard during the month, and the market’s largest producer cut pricing at the bottom of that range. However, deeper cuts were confirmed during the month for some customers. Demand continued to be described as flat and balanced with supply. Among feedstock, butane at Mont Belvieu, US MEK suppliers include Shell Chemicals, ExxonMobil and Sas
http://www.icis.com/Articles/2012/07/28/9581878/us-july-mek-prices-weaker-on-soft-market-conditions.html
CC-MAIN-2014-52
refinedweb
113
73.17
Installing PyS60 From OpenSource Python for S60 can be installed both in your phone and in the S60 emulator environment. The instructions on this page assume you want to install the latest testing version of PyS60. Version 1.2.0 has a different installation procedure. You can download it from this page in Forum Nokia. A "Getting Started" guide for that version is included. Determining the S60 version Installable binary PyS60 packages have two version numbers: - version of the PyS60 release (e.g. "1.3.1") - version of the S60 platform the package is intended for (e.g. "2nd Edition Feature Pack 2") Different releases of the S60 platform require different builds of PyS60. You must use a build of PyS60 that is compatible with the S60 version of your phone and your S60 SDK. See this page to determine which version of S60 your phone runs. This visual guide to recognizing the correct version is also nice, but not up to date for 3rd Edition devices. Phone installation For pre-3rd edition phones - Download the latest PythonForS60 and PythonScriptShell package from the SourceForge project page or maemo garage. Make sure you get the correct versions for your S60 version. - Install first the PythonForS60 package and then the PythonScriptShell package. For 3rd edition phones - Download the latest SIS packages of the Python for S60 runtime and the PyS60 Script Shell from the SourceForge project page or maemo garage. - Install first the PythonForS60 package and then the PythonScriptShell package. - The Python script shell for 3rd edition will list anything it finds in a "Python" folder on C: or E: drives in the "Run Script" list. Luckily, you can write the E: drive directly from a Mac or a PC. So here are step-by-step instructions: Via USB cable (PC or Mac) - Put a memory card in the phone - Connect phone to computer with cable. - KEY STEP Select the "Data Connection" mode on phone screen. - On a Mac the contents of the memory card are now mounted. On a PC, Win XP, you need to choose to see the card with Explorer. - KEY STEP You're now in fact on the phone's E: drive. Create a "Python" folder. - Put any .py files in the Python folder you now have, unmount the filesystem cleanly and detach the cable. - Open Python application, and choose Options. "Run Script" command will see the uploaded script. Via Bluetooth (Mac only tested so far) Switch on Bluetooth on the Phone. Launch the "Bluetooth File Exchange" utility application on the Mac. Pair with the phone, and test the Bluetooth connection by pushing over some small text file or sound. Then choose the "Browse Device" command, and browse drive E: on the phone. You can now use the utility to create a "Python" folder on the Phone's E: drive and then to push your .py source files to the phone. Via Bluetooth (NOKIA E70, and maybe other 3rd edition phones. Mandriva Linux/Gnome) Problems are : no "browse device" capability (maybe I don't know how), and the phone don't recognize .py send over bluetooth as a special file. You can not specify neither where the message will be stored, nor what to do whith it. Solution : use a file format which allow you to do that sort of thing. Steps (Mandriva/Gnome using standard Gnome config: nautilus, file-roller, gnome-bluetooth) : 1. on the phone, create a "Python" folder on the memory card (phone's E: drive) with the phone's file manager; 2. on the computer, compress your '.py' file as a ZIP file (nautilus: just right click on the file and choose the 'compress' menu entry; default parameters should be ok); 3. send the zip to your phone (nautilus: right click, choose "send to", then select your phone in the bluetooth part); 4. on the phone, open the message. "ZIP Manager" application automatically handles it: "open"/"copy". Choose "open", select you .py in the uncompressed archive, and use the nokia "option" key to display the contextual menu : use the "extract" menu entry. You can now browse your phone's folders, to save the file in your "Python" folder. That's it (and you can now suppress the bluetooth message). Via Memory Card Connect the miniSD card to a PC (windows or Mac). Create a directory, such as Python or Python/Libs or whatever. Copy python scripts (and libraries) to the directory. Put card in phone. It will show up as e: drive. In python, add the directory to the search path: import sys sys.path.append('e:\python\libs') These scripts can be used as libraries and imported as usual. The miniSD cards can be placed into the miniSD to SD holder and this can be placed in a USB compact flash card reader, or one can get a usb dongle that directly takes a miniSD card (I got one from ATP -- it also came with a 1 GB miniSD card and the USB reader) Direct download You can also directly download the software on your phone and install it without using a PC. The steps are as following: Phones tested with direct download : Nokia E90, N73 1. Download Python runtime for S60. Installation will start automatically 2. Choose if you want to install on the phone memory or memory card 3. Download PythonScriptShell package. Again, intallation will start automatically. 4. Choose the same media (phone or memory card) as for the runtime package. 5. Enjoy Python on your S60! Important note for 3rd edition phone users Certain PyS60 services (such as the location service, which require the Location, ReadUserData and ReadDeviceData capabilities) are capability-dependent, which means that all applications using these services must inherit the capabilities themselves. In other words: if the location.gsm_location() call keeps returning "None" when using the PyS60 Script Shell in a 3rd edition phone, please consider using a SIS signing service such as the Symbian Signed. - Download the SIS packages as described in this Section. - Sign the SIS package of PythonScriptShell for your phone, and install this instead of the (unsigned) package. Please note that, if you've signed the PythonScriptShell package Open Signed Online service, the signed file will only install in the phone with the IMEI code you've specified in that page form. Emulator installation Typically you would want to install the SDK for the same S60 version as your phone runs, but if you want to experiment you can install any version of the SDK you want (as long as PyS60 supports it, that is). - Download and install the S60 C++ SDK from Forum Nokia. Here's a direct link that may or may not still work. - To install the 3rd Edition SDK (and possibly previous versions) you need to have installed Active Perl and have a newish Java Runtime installed as well JRE - After unzipping the downloaded S60 SDK (S60_3rd_Ed_SDK_FP2_Beta_b.zip at this time) run setup.exe in the root directory of the unzipped file. After the core install it asks if you want to install CSL ARM Q1C Toolchain if you don't have it installed. - Download the latest PyS60 SDK ZIP package that matches your S60 C++ SDK version from the SourceForge project page or maemo garage. E.g. if you installed the 3nd Ed FP2 SDK download PythonForS60_1_4_1_SDK_3rdEd.zip. - Unzip the package. It contains another zip package: sdk_files.zip and an uninstaller script. The sdk_files.zip package file structure maps to the Emulator install file structure you previously installed and just needs to be unzipped and placed in the corresponding directory detailed below. - You must not use Cygwin unzip to extract files from sdk_files.zip. Otherwise, trying to open the Python application may result in "System Error (-23)" message. I would suggest you unzip sdk_files.zip into a temp directory using Windows Explorer, then moving the contents to directory that contains the epoc32 directory of your S60 C++ SDK. E.g. for 2nd Ed FP2 you would unzip sdk_files.zip in the directory "c:\symbian\8.0a\s60_2nd_fp2\epoc32", for 3rd Ed FP2 the default directory is "c:\Symbian\9.3\S60_3rd_FP2_Beta\epoc32". - Start the S60 Emulator "All Programs > S60 Developer Tools > 3rd Edition FP2 SDK > C++, Beta > Emulator" - Wait..... It takes a while to start up the first time your firewall may pick up the epoc.exe process asking for server rights as well, that is to be expected. Eventualy you will see the main Phone screen and Icons, you can watch the epoc.exe process bouncing about if you get bored while you wait, don't be tempted to click on the power switch of the phone until all the icons are there, just wait. - Click on the Phone Menu key on the left, use the cursor keys or click on the phone GUI to highlight Installations, click on the center phone button, highlight the Python application, Click Open, you should get a splash screen giving the version. - Click on Options > Run Script c:ball.py, the demo should run! - The sample scripts are located in "C:\Symbian\9.3\S60_3rd_FP2_Beta\epoc32\winscw\c\python" this is where you can put your scripts and they will appear in the Emulator. - You can create a sub directory called "lib" under the python directory and this will get picked up in the search path if you have modules to add. - NOTE: From experience so far, although the Emulator picks up new files without having to restart the whole emulator I have found it best to restart the Python application to make sure changes to existing scripts are picked up. Installation troubleshooting If you run into problems and solve them, please write about it here! "Certificate error. Contact the application supplier" Your phone may not be configured to accept installation of self-signed SIS packages. See this page for details on how to fix it. Note that some operators have disabled the installation of self-signed apps so you may not be able to change those settings. "Check if the following components are installed: Python Runtime and PIPS Library" You need to install the S60 SDK add ons Open C/C++ Plug-ins for S60 3rd Edition. If you do not have the SDK installed, find pips_nokia_1_3_SS.sis on the Internet, for example here. Setting up the Bluetooth Console See PyS60 Bluetooth console. More info If you have any installation tips to share or useful discussion board links, then please add them here! - Information and experimentation in spanish:
http://wiki.opensource.nokia.com/projects/Installing_PyS60
crawl-002
refinedweb
1,748
73.37
Never The Bindings class does what we want within a module or function def scope, but there is no way to make it work within a single expression. In ML-family languages, however, it is natural to create bindings within a single expression: Listing 2: Haskell expression-level name bindings Greg Ewing observed that it is possible to accomplish the same effect using Python's list comprehensions; we can even do it in a way that is nearly as clean as Haskell's syntax: Listing 3: Python 2.0+ expression-level name bindings: Listing 5: "Stepping down" from Python list comprehension We have performed: Listing 6: Efficient stepping down from list comprehension." This works quite similarly to the closures I wrote about in Part 2 --: Listing 8: Currying a Haskell computation Now in Python: Listing 9: Currying a Python computation It is possible to further illustrate the parallel with closures by presenting the same simple tax-calculation program used in Part 2 (this time using curry()): Listing 10: Python curried tax calculations Unlike with closures, we need to curry the arguments in a specific order (left to right). But note that functional also contains an rcurry() class that will start at the other end (right to left). The second taxcalc(50000,0.30,10000). In a different level, however, it makes rather clear the concept that every function can be a function of just one argument -- a rather surprising idea to those new to it. Miscellaneous higher-order functions(). Often,'s look at some of what functional provides. The functions sequential(): Listing 11: Sequential calls to functions (with same args)-in map(), reduce(), filter() functions. But these particular higher-order functions from functional ask Boolean questions about collections of return values. For example: Listing 12: Ask about collections of return values: Listing 13: Creating compositional functions I hope this latest look at higher-order functions will arouse.. Since conceptions without intuitions are empty, and intuitions without conceptions, blind, David Mertz wants a cast sculpture of Milton for his office. Start planning for his birthday. David may be reached at mertz@gnosis.cx; his life pored over at. Suggestions and recommendations on this, past, or future columns are welcome.
http://www.ibm.com/developerworks/opensource/library/l-prog3/index.html
crawl-003
refinedweb
368
50.57
Newb can't get a clean compile New to Arduino. I'm trying to create a serial gateway. I have a genuine UNO with a 8266 WIFI. Hopefully wired correctly. I did it per the MySensors serial gateway video. I've downloaded Arduino IDE ver 1.8.2 and MySensors ver 2.1.1. I can compile the Arduino example blink and it loads and runs fine. When I try to compile GatewaySerial I get errors. I suspect I don't have things in the right place. Hopefully the errors below will be something that is just a common mistake. Any help is greatly appreciated. ================================================= WARNING: Category '' in library UIPEthernet is not valid. Setting to 'Uncategorized' C:\Program Files (x86)\Arduino\MySensors archive\MySensors -2.1.1\MySensors-master\examples\GatewaySerial\GatewaySerial.ino:84:23: fatal error: MySensors.h: No such file or directory #include <MySensors.h> ^ compilation terminated. exit status 1 Error compiling for board Arduino/Genuino Uno. I think you are mixing things up : what are you trying to achieve? What are your components and project? Thanks for responding. I am trying to build a serial gateway to interface with HomeSeer. There will be several additional nodes that will have various sensors such as motion and temp/humidity. I used the serial controller project under the build tab. I have used a UNO instead of a Nano. I am trying to compile the GatewaySerial project in the mysensors 2.1.1. example folder. No changes. When I compile, I get the errors as listed in the first post. @logbuilder probably you are missing some defines. How are you going to connect the sensors? In addition I'm not sure if you can use mysensors gateway with the esp8266 on a UNO In the examples that come with mysensors 2.1.1 there are a couple of 8266 gateways. I can get none of them to compile. Right now I am just trying to get a clean compile. The MySensors.h file is in the MySensors root directory. @logbuilder welcome to the MySensors community. Based on the error in your first post, the Arduino IDE is unable to find the MySensors library. Install the MySensors library using the Library Manager. That will make sure all files end up in the right folders. Instructions are available at Also note that the MySensor esp8266 examples are meant for a standalone esp8266. They do not work on an esp8266 shield. See for an earlier discussion on this topic. Thanks for the input @mfalkvidd In mysensors 2.1.1, there is no library folder. Apparently all the libs were incorporated into the core starting with ver 2.0 Not sure how to do the equivalent to what you say in 2.+. @logbuilder yes and no. The MySensors library should be installed from the library manager. The error you are getting is due to the MySensors library not being correctly. The instructions I linked to are for MySensors 2.+ Some third party libraries (not MySensors itself) was moved to a separate git repository when 2.0 was released. This does not affect the example sketches shipped within the MySensors library. Well,you got me to thinking about this. I guess that loading of the libs is what used to get the .H files in the right place and telling the compiler that they exist. So I decided why not try it even though there is no lib folder. I took the whole mysensors 2.1.1 folder and told the library manager to add it. No errors. I then went into the library manager and it said 2.1.1 was in fact installed. Loaded up GatewaySerial and it compiled fine with board of UNO, Nano and nodeMCU. Loaded it onto a nodeMCU and it is running. In the serial monitor I can see messages from the board. It is indicating some problem with initialization but that is another thread. I really, really thank you for giving me some ideas. Ultimately I'm trying to add some adruino sensor devices to my HomeSeer system. There is a mysensors plugin for HomeSeer. That is a multi step process but getting a cleanly compiled program to run on a wifi enabled device is a major step. Hopefully the nodeMCU will work out as well as it looks. Next step is to get the mysensors plugin installed in HomeSeer and get everybody to talk to each other. @logbuilder great work. Thanks for reporting back....
https://forum.mysensors.org/topic/6555/newb-can-t-get-a-clean-compile
CC-MAIN-2018-47
refinedweb
747
69.99
10 December 2009 20:11 [Source: ICIS news] By Prema Viswanathan DUBAI (ICIS news)--Petrochemical major Saudi Basic Industries Corp (SABIC) intends to eventually pursue the acquisitions route to gain access to technology and markets, the CEO Mohamed Al-Mady said on Thursday. However, there were no immediate plans on the drawing board to make new acquisitions, Al-Mady said. “Early in the game we realised that it is important to have a technological platform to gain market franchise,” said Al-Mady, in an interview with ICIS news on the sidelines of the 4th Gulf Petrochemicals and Chemicals Association (GPCA) forum. In a challenging market situation, there is a pressing need to remain competitive, said the SABIC leader. “There are three legs to competitiveness: one is material inputs, the second is market access and the third is innovation. Material inputs I think we have in ?xml:namespace> Despite some worries about inadequate natural gas availability to meet industry needs, he said he was optimistic that the situation would improve. But to gain market access in a meaningful way, it is only possible to do it through acquisitions, he said. “Innovation too is possible through acquisitions, otherwise we have to wait too long,” he said. Acquisitions also made give the company ready access to quality staff, he added. “We can look at acquisition opportunities if they fit into our overall strategy. And if the price is right.” But right now, he said his hands were full with all the new projects due on stream in This was the main impetus behind SABIC’s acquisitions of the assets of DSM, Huntsman and GE Plastics, he said. The last acquisition saw the company making forays into the speciality chemicals segment, a focus that SABIC wants to strengthen. “Of course, in the past year, we saw both specialities and commodities go down, but we hope in the coming years, engineering plastics will perform better than commodity plastics,” said Al-Mady. Although the poor performance of engineering plastics dragged down SABIC’s bottom line somewhat in 2008, Al-Mady said he had no regrets about the GE Plastics acquisition. “Engineering plastics in general suffered last year due to the slowdown in the automotive, construction and electronics segments," he said. There is improvement now, but demand still hasn’t reached prior-2008 levels. There is a big improvement in the last two quarters. We expect further improvement next year.” The diversification into specialty chemicals - a trend encouraged by the Saudi Arabian government - is best exemplified by the Saudi Kayan project. Although the start-up of two of the units has been delayed to 2012, the remaining 14 plants in the complex are expected to start up from end-2010 through 2011, said Al-Mady. “Many of the products in the complex will be produced for the first time in Once the project takes off, SABIC will have a gamut of products to offer customers, he added. Joint ventures have been another route pursued by SABIC to access technology and markets. The Sharq joint venture between SABIC and a Japanese consortium, which is undergoing a major expansion currently, and the Yanpet complex, a joint venture between SABIC and ExxonMobil, are good examples of successful collaborations within SABIC has also sought to tap into the Chinese market through the The recently announced joint venture with US-based additives and catalyst maker Albemarle marks another kind of diversification. “We will be making a catalyst called TEED (tetraethylethylenediamine) for polyolefin production - it will be utilised here in “We try to produce our inputs whether it’s raw material or catalyst, when we have a critical mass. But the plant in its own right is commercially viable. The catalyst can be used anywhere, However, “In India, we have a compounding unit for polycarbonate, and also a research centre, but no major manufacturing facility,” said Al-Mady. The SABIC chief said he has many contentions with the Indian authorities. “I’m not saying the Indian policy is not transparent, but there is no opportunity. We’re not invited to do investment. Normally, you have to invite major companies to invest in your country, show them what you have, show them what benefits you will get.” He also complained of protectionist measures adopted by the Indian authorities. “Demand [in The Indian government recently imposed anti-dumping duties on polypropylene imports from The Chinese government has also levied anti-dumping duties on “We are talking to the Chinese and they are responding favourably to our discussions on the anti-dumping measures,” Al-Mady said. “In the case of It is important to allow competition between the local producers and those from outside the country, so that the customer benefits, said Al-Mady. “Otherwise the local producers will raise the prices and this will hurt the customer.” In Gulf Cooperation Council (GCC) countries, including “Competition in the market is healthy for everybody. That’s why we are against trade barriers, we want competition. Competition is good for us because it makes us work harder and go for innovation and cost reduction programmes.”
http://www.icis.com/Articles/2009/12/10/9318365/gpca-09-sabic-looks-to-acquisitions-to-access-tech-markets.html
CC-MAIN-2014-10
refinedweb
847
50.06