The to family of functions converts a value from type Source to type Target. The source type is deduced and the target type must be specified; for example, the expression to!int(42.0) converts the number 42 from double to int. The conversion is "safe", i.e., it checks for overflow; to!int(4.2e10) would throw a ConvOverflowException. Overflow checks are inserted only when necessary; e.g., to!double(42) does no checking because any int fits in a double.

Example:

int a = 42;
auto b = to!int(a);       // b is int with value 42
auto c = to!double(3.14); // c is double with value 3.14

Converting among numeric types is a safe way to cast them around. Conversions from floating-point types to integral types allow loss of precision (the fractional part of a floating-point number). The conversion truncates towards zero, the same way a cast would truncate. (To round a floating-point value when casting to an integral, use roundTo.)

Examples:

int a = 420;
auto b = to!long(a);      // same as long b = a;
auto c = to!byte(a / 10); // fine, c = 42
auto d = to!byte(a);      // throws ConvOverflowException
double e = 4.2e6;
auto f = to!int(e);       // f == 4200000
e = -3.14;
auto g = to!uint(e);      // fails: floating-to-integral negative overflow
e = 3.14;
auto h = to!uint(e);      // h = 3
e = 3.99;
h = to!uint(e);           // h = 3
e = -3.99;
f = to!int(e);            // f = -3

Conversions from integral types to floating-point types always succeed, but might lose accuracy. The largest integers with a predecessor representable in floating-point format are 2^24-1 for float, 2^53-1 for double, and 2^64-1 for real (when real is 80-bit, e.g. on Intel machines).

Example:

int a = 16_777_215; // 2^24 - 1, largest proper integer representable as float
assert(to!int(to!float(a)) == a);
assert(to!int(to!float(-a)) == -a);
a += 2;
assert(to!int(to!float(a)) == a); // fails!

Conversions from string to numeric types differ from the C equivalents atoi() and atol() by checking for overflow and not allowing whitespace.
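The checked, truncate-toward-zero behaviour described above is not specific to D. As a rough, hedged illustration (a Python sketch, not the std.conv implementation), a conversion helper with the same semantics as to!int could look like this:

```python
def to_int32(value):
    """Convert a number to a 32-bit signed int, to!int-style:
    truncate toward zero, raise on overflow instead of wrapping."""
    truncated = int(value)  # int() truncates toward zero, like a cast
    if not -2**31 <= truncated <= 2**31 - 1:
        raise OverflowError(f"{value} does not fit in int32")
    return truncated

print(to_int32(42.0))   # 42
print(to_int32(-3.99))  # -3 (truncates toward zero)
# to_int32(4.2e10) raises OverflowError, analogous to ConvOverflowException
```

The key design point mirrored here is that overflow is reported by an exception rather than by silent wraparound, which is what distinguishes to!int from a raw cast.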
For conversion of strings to signed types, the grammar recognized is:

Integer:
    Sign UnsignedInteger
    UnsignedInteger
Sign:
    +
    -

For conversion to unsigned types, the grammar recognized is:

UnsignedInteger:
    DecimalDigit
    DecimalDigit UnsignedInteger

Converting an array to another array type works by converting each element in turn. Associative arrays can be converted to associative arrays as long as keys and values can in turn be converted.

Example:

int[] a = [1, 2, 3];
auto b = to!(float[])(a);
assert(b == [1.0f, 2, 3]);
string str = "1 2 3 4 5 6";
auto numbers = to!(double[])(split(str));
assert(numbers == [1.0, 2, 3, 4, 5, 6]);
int[string] c;
c["a"] = 1;
c["b"] = 2;
auto d = to!(double[wstring])(c);
assert(d["a"w] == 1 && d["b"w] == 2);

Conversions operate transitively, meaning that they work on arrays and associative arrays of any complexity:

int[string][double[int[]]] a;
...
auto b = to!(short[wstring][string[double[]]])(a);

This conversion works because to!short applies to an int, to!wstring applies to a string, to!string applies to a double, and to!(double[]) applies to an int[].
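The element-by-element rule above can be mimicked in any language. Here is a hedged Python sketch (not D, and not how std.conv is implemented) where converting a list or dict simply applies the scalar conversion to every element, so a failure on any single element propagates to the whole conversion:

```python
def convert_all(convert, values):
    # Convert each element in turn, as to!(T[]) does for arrays.
    return [convert(v) for v in values]

def convert_map(key_conv, val_conv, mapping):
    # Keys and values are converted independently, like to!(V[K]).
    return {key_conv(k): val_conv(v) for k, v in mapping.items()}

print(convert_all(float, [1, 2, 3]))          # [1.0, 2.0, 3.0]
print(convert_map(str, float, {1: 1, 2: 2}))  # {'1': 1.0, '2': 2.0}
```

Transitivity falls out naturally: passing `lambda v: convert_all(float, v)` as the value converter handles a dict of lists, just as to! recurses through nested array types.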
The conversion might throw an exception because to!short might fail the range check.

roundTo - Rounded conversion from floating point to integral. Rounded conversions do not work with non-integral target types.

Examples:

assert(roundTo!int(3.14) == 3);
assert(roundTo!int(3.49) == 3);
assert(roundTo!int(3.5) == 4);
assert(roundTo!int(3.999) == 4);
assert(roundTo!int(-3.14) == -3);
assert(roundTo!int(-3.49) == -3);
assert(roundTo!int(-3.5) == -4);
assert(roundTo!int(-3.999) == -4);
assert(roundTo!(const int)(to!(const double)(-3.999)) == -4);

Target parse(Target, Source)(ref Source s) if (isInputRange!Source ...) - Parses a value of type Target from the input range s. The range is taken by reference, so on return s has been advanced past the part of the input that was meaningfully converted.

Examples:

import std.string : munch;
string test = "123 \t 76.14";
auto a = parse!uint(test);
assert(a == 123);
assert(test == " \t 76.14"); // parse bumps string
munch(test, " \t\n\r"); // skip ws
assert(test == "76.14");
auto b = parse!double(test);
assert(b == 76.14);
assert(test == "");

octal - Provides a means to declare a number in base 8. Use octal!177 or octal!"177" for 127 represented in octal (same as 0177 in C).

emplace - Examples:

struct S { int a, b; }
auto p = new void[S.sizeof];
S s;
s.a = 42;
s.b = 43;
auto s1 = emplace!S(p, s);
assert(s1.a == 42 && s1.b == 43);

unsigned - Examples:

immutable int s = 42;
auto u1 = unsigned(s); // not qualified
static assert(is(typeof(u1) == uint));
Unsigned!(typeof(s)) u2 = unsigned(s); // same qualification
static assert(is(typeof(u2) == immutable uint));
immutable u3 = unsigned(s); // explicitly qualified

signed - Examples:

immutable uint u = 42;
auto s1 = signed(u); // not qualified
static assert(is(typeof(s1) == int));
Signed!(typeof(u)) s2 = signed(u); // same qualification
static assert(is(typeof(s2) == immutable int));
immutable s3 = signed(u); // explicitly qualified

template castFrom(From) - A wrapper on top of the built-in cast operator that allows one to restrict casting of the original type of the value. A common issue with using a raw cast is that it may silently continue to compile even if the value's type has changed during
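The roundTo asserts above use round-half-away-from-zero (3.5 becomes 4, -3.5 becomes -4). A short hedged Python sketch of that rule, for comparison — note that Python's built-in round() uses banker's rounding (round(2.5) == 2), so the behaviour has to be implemented explicitly:

```python
import math

def round_to_int(x):
    # Round half away from zero, matching the roundTo examples:
    # 3.5 -> 4, -3.5 -> -4, 3.49 -> 3, -3.49 -> -3.
    if x >= 0:
        return int(math.floor(x + 0.5))
    return int(math.ceil(x - 0.5))

for x in (3.14, 3.49, 3.5, 3.999, -3.14, -3.49, -3.5, -3.999):
    print(x, "->", round_to_int(x))
```

Running it reproduces the whole table of roundTo assertions from the documentation above.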
refactoring, which breaks the initial assumption about the cast.

ref @system auto to(To, T)(auto ref T value);

Returns: the value after the cast, returned by reference if possible.

Examples:

// Regular cast, which has been verified to be legal by the programmer:
{
    long x;
    auto y = cast(int) x;
}
// However this will still compile if 'x' is changed to be a pointer:
{
    long* x;
    auto y = cast(int) x;
}
// castFrom provides a more reliable alternative to casting:
{
    long x;
    auto y = castFrom!long.to!int(x);
}
// Changing the type of 'x' will now issue a compiler error,
// allowing bad casts to be caught before it's too late:
{
    long* x;
    static assert(!__traits(compiles, castFrom!long.to!int(x)));
    // if the cast is still needed, it must be changed to:
    auto y = castFrom!(long*).to!int(x);
}

template hexString(string hexData) if (hexData.isHexLiteral)
template hexString(wstring hexData) if (hexData.isHexLiteral)
template hexString(dstring hexData) if (hexData.isHexLiteral)

Converts a hex literal to a string at compile time.

Examples:

// conversion at compile time
auto string1 = hexString!"304A314B";
assert(string1 == "0J1K");
auto string2 = hexString!"304A314B"w;
assert(string2 == "0J1K"w);
auto string3 = hexString!"304A314B"d;
assert(string3 == "0J1K"d);
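What hexString does at compile time can be demonstrated at run time in a few lines. This hedged Python sketch (an illustration of the transformation, not the D template) decodes the same literal used in the example above:

```python
def hex_to_string(hex_data):
    # Every two hex digits become one character: "30" -> '0', "4A" -> 'J'.
    if len(hex_data) % 2:
        raise ValueError("hex literal needs an even number of digits")
    return bytes.fromhex(hex_data).decode("ascii")

print(hex_to_string("304A314B"))  # "0J1K", as in the hexString example
```

The difference, of course, is that hexString performs this decoding during compilation, so the resulting string costs nothing at run time.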
https://docarchives.dlang.io/v2.068.0/phobos/std_conv.html
Coffeehouse Thread - 10 posts

Forum Read Only: This forum has been made read only by the site admins. No new threads or comments can be added.

file undelete recommendations

Conversation locked: This conversation has been locked by the site admins. No new comments can be made.

skydrive and live sync just replaced a bunch of my code with older files that were on skydrive. Is there a file undelete utility that can be recommended? thanks,

@SteveRichter: I've used Runtime GetDataBack for years. It's ugly as hell, but it works. There are so many of these out there that I haven't had the patience to sit and try them all.

just to understand what I am up against: when a file is replaced with a newer version, is the replaced file recoverable? Or is it only when a file is deleted that it can theoretically be recovered? When I read that the system eventually reuses the deleted space, does it first use all the free space on the drive before overwriting deleted space? As in, an 80GB drive with 20GB of space used would first write to the 60GB of free space before writing over deleted space?

@SteveRichter: As far as I know, you don't reuse the same place when replacing a file. It simply writes somewhere else and changes the pointer to the new one. Doing a delta kind of operation is more like a database, where special software is used. So yes, the data is still somewhere on your HDD. Actually, you should see files in your recycle bin if you are using WL Mesh. I am sure of it because sometimes I rename my IE favorite file and the other machine will have two files, one in the recycle bin and one in the favorites folder. Or you can also activate file versioning on your folder, which will keep track of all versions of files in that particular folder. Not sure whether this works on a Home OS or not, but I know this is a basic Server OS feature.

@magicalclick: AFAIK drives are opportunistic. They will write to wherever is faster.
So, if it's faster to use free space, then it will, and if it's faster to overwrite something, then it will. The drive only thinks in terms of sectors.

where is the recycle bin folder found? I have the drive in a 2nd PC as the E: drive. So I don't see the recycle bin of that drive in windows explorer.

@SteveRichter: there is only one recycle bin. Your E drive's recycled files will show up in your one and only recycle bin. That doesn't mean the files are on C; they are still on E. The recycle bin shows files from multiple drives.

@kettch: I think the OS writes before you replace the old file. I mean, if it failed to write the new file, at least the old file is still there. That's how I would do it if I designed the OS. Meaning replacing a 4GB file would temporarily take 8GB of HDD; until the write is complete, the old file would not be removed. But this is just my assumption.

I'll start with a dig at the OP: Do not use file synchronisation and online file hosting services as a substitute for proper source control. There are free SVN providers out there (even for closed and commercial projects), so there really isn't any excuse.

Remember a disk is split up into blocks and a file spans a number of blocks. On many NTFS drives the block size is 4KB, and that's why the "Size on Disk" value in File Properties is a multiple of 4KB. Now what happens to the actual data in these blocks depends on the operation. I'm not familiar with how your synchronisation software works, but there are two possibilities for what happened when it synchronised.

The comment about databases and special software is irrelevant. There are intelligent ways of performing remote differential synchronisation (i.e. methods that don't involve sending the entire file contents when only a few bytes are different), but Microsoft's pet implementation of this (called "Remote Differential Compression") is very recent and so far has only been implemented in the Windows Server family.
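The "Size on Disk is a multiple of 4KB" point above is just ceiling arithmetic over the block size. A quick hedged Python sketch (assuming a 4KB cluster, which in reality varies per volume):

```python
BLOCK_SIZE = 4096  # common NTFS cluster size; the actual size varies per volume

def size_on_disk(file_size):
    # A file occupies whole blocks, so round up to the next multiple.
    if file_size == 0:
        return 0
    blocks = -(-file_size // BLOCK_SIZE)  # ceiling division
    return blocks * BLOCK_SIZE

print(size_on_disk(1))     # 4096
print(size_on_disk(4096))  # 4096
print(size_on_disk(4097))  # 8192
```

This is also why deleting a file only marks its blocks as free: the data in those blocks survives until some later write happens to claim them, which is what undelete tools rely on.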
I would not be surprised if WL Mesh did trivial file replacement (but again, we don't know if it deletes the original file or overwrites the file's blocks).

Yes, "Previous Versions" (aka Shadow Copies) would have saved the OP's bacon in this case; however, I've found it to be a very unreliable method of maintaining the history of a directory. That said, I understand that the Home Editions of Windows do not expose a GUI for working with Shadow Copies; only the API is present for Windows Backup to use.

Many people confuse "folder" with "directory". A folder is an object in the shell namespace, but a directory is something physically present in the filesystem; generally speaking, all directories are folders, but not all folders are directories. The Recycle Bin is an example of such a folder: it exists only in memory; in reality it 'contains' the contents of the "$Recycle.Bin" directories on all of your mounted drives combined. That said, I don't recommend actually browsing the $Recycle.Bin directories; the data in there is arranged in a special way by the OS.

Anyway, back on topic: I personally use Piriform Recuva for my undelete needs. It's pretty-looking, fast enough, and has worked for me in the past. It might work for the OP, but it depends on how his sync software works. I wish him the best of luck in attempting to recover the data, but do not get your hopes up.

@SteveRichter: In case you didn't see it, right click on your folder and go to Properties. Do you have a Previous Versions tab? -Josh

The only piece of software that worked for me a couple of years back was Active UNDELETE. No other software cracked the case.
https://channel9.msdn.com/Forums/Coffeehouse/file-undelete-recommendations
25 April 2012 23:59 [Source: ICIS news]

LONDON (ICIS)--European April acrylic acid (AA) and acrylate esters prices have been assessed up by €30-100/tonne from the previous month, depending on grade, largely due to ongoing upward pressure from feedstock costs, sources said on Wednesday.

April AA prices have been assessed at €1,940-1,980/tonne ($2,553-2,605/tonne) FD (free delivered) NWE (northwest Europe). Methyl acrylate (Methyl-A) prices for April were assessed at €1,900-1,950/tonne FD NWE, an increase of €40-50/tonne from March. Ethyl acrylate (Ethyl-A) prices were assessed at €1,970-2,040/tonne, an increase of €70-90/tonne from the previous month. Butyl acrylate (Butyl-A) prices were assessed at €1,920-2,020/tonne FD NWE, up by €30-50/tonne from March, while 2-ethylhexyl acrylate (2-EHA) numbers were assessed at €2,190-2,320/tonne, an increase of €90-100/tonne.

Sources agree that the increases were driven by the €50/tonne increase for propylene in April. However, some shortness on 2-ethylhexyl acrylate (2-EHA) and ethyl acrylate (EA) led to larger increases on these grades for April. While increases were expected for April, the more ambitious price targets for sellers were scaled down. With end-use markets such as paints and coatings struggling amid the wider economic challenges in the second quarter, in many cases buyers are simply refusing to move up despite the pressure from feedstock markets.

“This time of year the market should be balanced to tight,” said one buyer. “But even with shutdowns we are not seeing a pick up in activity. There is good availability on all grades.”

Suppliers are still struggling to recoup lost margins. Propylene has moved up by €300/tonne since January, reaching a record high of €1,245/tonne FD NWE in April, while the acrylates sector has been slower to keep pace with these movements. As a result, players expect that producers will curtail production output in the coming weeks because of poor economics.
Two major European suppliers are already said to be running at reduced utilisation rates, while another is shutting down its AA, EA and 2-EHA lines for annual maintenance and a five-year catalyst change (on AA only) in late April until the middle of May. “We will be looking to recoup the propylene increase and more,” said one producer earlier this month. “Our margins are still under pressure.” However, the current speculation in the market is that propylene will come down in May. As a result, buyers are already planning to resist any further upward price movements. “We are not expecting any pick up in demand,” said one trader. “The market [in terms of pricing] will be steady.” ($1 = €0.76)
http://www.icis.com/Articles/2012/04/25/9553711/europe-april-acrylates-prices-move-up-on-feedstock-costs.html
#ifndef NOTES_MERGE_H
#define NOTES_MERGE_H

#include "notes-utils.h"
#include "strbuf.h"

struct commit;
struct object_id;

#define NOTES_MERGE_WORKTREE "NOTES_MERGE_WORKTREE"

enum notes_merge_verbosity {
	NOTES_MERGE_VERBOSITY_DEFAULT = 2,
	NOTES_MERGE_VERBOSITY_MAX = 5
};

struct notes_merge_options {
	const char *local_ref;
	const char *remote_ref;
	struct strbuf commit_msg;
	int verbosity;
	enum notes_merge_strategy strategy;
	unsigned has_worktree:1;
};

void init_notes_merge_options(struct notes_merge_options *o);

/*
 * Merge notes from o->remote_ref into o->local_ref
 *
 * The given notes_tree 'local_tree' must be the notes_tree referenced by the
 * o->local_ref. This is the notes_tree in which the object-level merge is
 * performed.
 *
 * The commits given by the two refs are merged, producing one of the following
 * outcomes:
 *
 * 1. The merge trivially results in an existing commit (e.g. fast-forward or
 *    already-up-to-date). 'local_tree' is untouched, the OID of the result
 *    is written into 'result_oid' and 0 is returned.
 * 2. The merge successfully completes, producing a merge commit. local_tree
 *    contains the updated notes tree, the OID of the resulting commit is
 *    written into 'result_oid', and 1 is returned.
 * 3. The merge results in conflicts. This is similar to #2 in that the
 *    partial merge result (i.e. merge result minus the unmerged entries)
 *    are stored in 'local_tree', and the OID of the resulting commit
 *    (to be amended when the conflicts have been resolved) is written into
 *    'result_oid'. The unmerged entries are written into the
 *    .git/NOTES_MERGE_WORKTREE directory with conflict markers.
 *    -1 is returned.
 *
 * Both o->local_ref and o->remote_ref must be given (non-NULL), but either ref
 * (although not both) may refer to a non-existing notes ref, in which case
 * that notes ref is interpreted as an empty notes tree, and the merge
 * trivially results in what the other ref points to.
 */
int notes_merge(struct notes_merge_options *o,
		struct notes_tree *local_tree,
		struct object_id *result_oid);

/*
 * Finalize conflict resolution from an earlier notes_merge()
 *
 * The given notes tree 'partial_tree' must be the notes_tree corresponding to
 * the given 'partial_commit', the partial result commit created by a previous
 * call to notes_merge().
 *
 * This function will add the (now resolved) notes in .git/NOTES_MERGE_WORKTREE
 * to 'partial_tree', and create a final notes merge commit, the OID of which
 * will be stored in 'result_oid'.
 */
int notes_merge_commit(struct notes_merge_options *o,
		       struct notes_tree *partial_tree,
		       struct commit *partial_commit,
		       struct object_id *result_oid);

/*
 * Abort conflict resolution from an earlier notes_merge()
 *
 * Removes the notes merge worktree in .git/NOTES_MERGE_WORKTREE.
 */
int notes_merge_abort(struct notes_merge_options *o);

#endif
https://sources.debian.org/src/git/1:2.20.1-2+deb10u3/notes-merge.h/
In the previous tutorial, we discussed scrolling long text strings on a character LCD using Arduino. However, it’s also possible to display custom characters on the LCD. These custom characters are user-defined and are stored in the Character Generator RAM (CGRAM) of the LCD module. Different LCD modules have different amounts of Display Data RAM (DDRAM) and CGRAM, which means several custom characters can be stored and displayed on the LCD modules.

Custom characters

The character LCDs support ASCII characters, which serve as a standard set of characters. The patterns for the supported characters are already stored in the memory (CGROM) of the LCD module. For printing these standard characters, the controller simply needs to “pass” the associated data register value to the address counter of the LCD. That data register value is then stored in the DDRAM, and the address counter is updated (increased or decreased depending on the direction of the text). When the LCD has to print a character, it reads the value from the DDRAM, compares it with the CGROM, and prints the character by generating the stored pattern for that character.

Sometimes it’s necessary to print user-defined characters/icons on an LCD. For example, you may be using an LCD module in a communication device and need to print a custom character showing the status of the connectivity. A character LCD may be embedded in a battery-operated device, and it may be necessary to display the level of charging by using a custom icon. Or, depending on the application, it may be ideal to display different custom (user-defined) characters or icons on the LCD.

The LCD modules, apart from the DDRAM, have the CGRAM to store user-defined characters. To generate a custom character/icon, the controller needs to pass the entire character pattern to the LCD module. This character pattern is stored in the CGRAM of the LCD. A 16×2 LCD module typically has enough CGRAM to store patterns for 8 characters/icons.
The pattern for custom characters is defined as a group of 7, 8, or 10 bytes, depending on the number of rows of pixels for each character on the LCD. Each byte represents a row of pixels that form the character. The bits of the byte represent the status of the pixels (i.e. whether the respective pixels will turn on or off). The bit ‘1’ signifies that the pixel will turn on and the bit ‘0’ signifies that the pixel will turn off. Each byte is read from the Least Significant Bit (LSB), and only the first five (or more, depending on the character width) bits represent the status of the pixels. For example, on a 16×2 LCD, each character is printed as 5×8 dots. Therefore, 8 bytes (for 8 rows of pixels) are used to represent a character, and the 5 LSBs of each byte are used to turn the pixels on or off.

As the bytes representing a character are stored in RAM, the custom characters are stored only temporarily on the LCD module. When the power to the LCD module is shut off, the CGRAM data is lost.

CGRAM

Most character LCD modules use an HD44780 controller, but there can be a different LCD controller on a given module. To know which controller is used in an LCD module, refer to its datasheet. Depending on their size, character LCDs have different amounts of DDRAM and CGRAM. For example, a 16×2 LCD has 80 bytes of DDRAM and 64 bytes of CGRAM. As there are 8 rows of pixels for each character, patterns for 8 characters of 5×8 dots can be stored in the CGRAM.

The CGRAM addresses start from 0x40. The first character is stored from address 0x40 to 0x47. This custom character can be printed at the current cursor position by sending the command 0 to the LCD module. The second custom character is stored from address 0x48 to 0x4F. It can be printed at the current cursor position by sending the command 1 to the LCD module. This table lists the CGRAM addresses and commands to print them at the current cursor position.

Generating custom characters

It’s fairly easy to create a custom character.
Simply determine the pixel map of the character and then determine the bytes that should be written to the CGRAM to generate that character/icon. For a 16×2 LCD, which has 5×8-dot characters, the top three MSBs of each byte can be ignored. The top three MSBs can be set to 0 in each byte, while the other bits can be set to 0 or 1 according to the pixels that should turn off or on.

In Embedded C (or any other programming language used to program the target microcontroller), these bytes can be grouped in an array. The bytes should be defined from the top row to the bottom row. If the bytes for any of the bottom rows of pixels are not defined in the array, they’re assumed to be 0x00 by the programming language, which means the pixels on those rows will remain off. Therefore, it is not necessary to explicitly define the bytes for every row, only those for the top rows of pixels generating the character/icon. This table lists some of the custom characters/icons and the respective arrays of bytes required to generate them.

Generating custom characters

To create custom characters, typically the LCD command to set the CGRAM address must first be passed. Next, the LCD command to write data to the CGRAM needs to be passed. After the CGRAM address is selected and the data is written to the CGRAM, the address counter is automatically updated, increased or decreased by one depending on the entry mode. Afterward, all of the bytes defining the character pattern must be written to the CGRAM to generate a custom character/icon.

When using Arduino, it’s even simpler to create a custom character/icon. The LiquidCrystal library on the Arduino platform has a function createChar() that creates custom characters and icons on an LCD. This function takes the position of the custom character and an array of bytes as arguments. The generated custom character can be flashed on the LCD at the current cursor position by using the write() or print() function.
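The byte layout described above is plain bit arithmetic, so it can be checked on a PC before touching the Arduino. This hedged Python sketch (a language-neutral demo, not Arduino code) turns a 5×8 pixel map, drawn as strings of '0' and '1', into the eight CGRAM bytes:

```python
# Each row of a 5x8 character is one byte; only the 5 least
# significant bits carry pixel data ('1' = pixel on).
SMILE = [
    "00000",
    "01010",
    "00000",
    "00000",
    "10001",
    "01110",
    "00000",
    "00000",
]

def rows_to_bytes(rows):
    # int(row, 2) reads the 5-bit pattern; the top 3 bits stay 0.
    return [int(row, 2) for row in rows]

print([hex(b) for b in rows_to_bytes(SMILE)])
```

The resulting list is exactly what the Arduino sketch later in this article writes as, for example, `byte c1[8] = {B00000, B01010, ...};` — the B-prefixed binary literals are the same row patterns.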
The createChar() method

The createChar() function creates a custom character/icon for use on an LCD. Essentially, it writes a user-defined character pattern (a pixel map of the character) to a given CGRAM address. The function has this syntax:

lcd.createChar(num, data)

This function takes two arguments:
1. A number that signifies the position of the custom character.
2. An array of bytes that determines the pixel map of the character.

The position of the character is specified as a number. For example, in the case of a 16×2 LCD, 8 custom characters can be defined and their positions can be 0 to 7. The character with position 0 is stored at CGRAM address 0x40, the character with position 1 is stored at CGRAM address 0x48, and so on. If a position greater than 7 is passed to the function, the modulus of that number by 8 is used as the position. For example, if the position of the character is passed as 8, the position of the character will be 0. The array of bytes passed as the argument to define the character must have bytes defined for all of the top rows of the pixels forming the character. The createChar() method has this source code:

The write() method

The write() function writes a character on an LCD. The character is written (displayed) at the current cursor position, and the cursor is moved right or left according to the direction of text on the LCD. The function has this syntax:

lcd.write(data)

If the data passed is a string (quoted in double or single quotes) of ASCII characters, it is written as such. If it’s a number, the character corresponding to that position in the CGRAM is written on the LCD. The write() method has this source code:

The print() method

The print() method is used to print text to an LCD. If a custom character has to be written on the LCD using the print() method, then the char() function (with the position of the custom character as the argument) must be passed as the argument of the print() function.
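The address arithmetic described above (positions wrap modulo 8, each slot occupies 8 bytes of CGRAM starting at 0x40) can be sketched in a few lines. This is a hypothetical Python model of the behaviour, not the LiquidCrystal library source:

```python
CGRAM_BASE = 0x40  # first CGRAM address on an HD44780-class controller

def cgram_address(position):
    # Positions beyond 7 wrap around, mirroring createChar()'s
    # modulus-8 behaviour described in the text; each character
    # pattern occupies 8 bytes (one per pixel row).
    slot = position % 8
    return CGRAM_BASE + slot * 8

print(hex(cgram_address(0)))  # 0x40
print(hex(cgram_address(1)))  # 0x48
print(hex(cgram_address(8)))  # wraps to slot 0 -> 0x40
```

This also makes clear why only 8 custom characters can exist at once: the 64 bytes of CGRAM hold exactly eight 8-byte slots, and writing a ninth character reuses one of them.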
Here’s a valid example of printing a custom character on an LCD:

lcd.print(char(0));

Recipe: Displaying custom characters on an LCD

In this recipe, we will print some custom characters on a 16×2 LCD.

Components required
1. Arduino UNO x1
2. A 16×2 character LCD x1
3. A 10K pot x1
4. A breadboard

The LCD module is interfaced with the Arduino in 4-bit mode.
- Pin 1 (GND) and pin 16 (LED-) of the LCD module are connected to the ground.
- Pin 2 (VCC) is connected to the VCC.
- Pin 15 (LED+) of the LCD module is connected to the VCC via a small-value resistor.
- Pin 3 (VEE) is connected to the variable terminal of a pot, while the fixed terminals of the pot are connected to the ground and the VCC.
- The R/W pin is connected to the ground because the Arduino will only write data to the LCD module.
- The RS, EN, DB4, DB5, DB6, and DB7 pins of the LCD are connected to pins 13, 11, 7, 6, 5, and 4 of the Arduino UNO, respectively.
- The ground and 5V supply rails of the breadboard are fed from one of the ground pins and the 5V pin of the Arduino UNO, respectively.

Circuit diagram

Arduino sketch

How the project works

The LCD module is connected with the Arduino in 4-bit mode. The LCD is first initialized and the display is cleared to get rid of any garbage values in the DDRAM. The patterns for 16 different custom characters are then generated by defining their pixel maps as arrays. Eight of these custom characters can be displayed on the LCD at one time, because the 64 bytes of CGRAM of the 16×2 LCD can store patterns for only 8 characters at any time. The first 8 custom characters are generated by defining their character maps. The cursor position on the LCD is set from column 0 of line 0 to column 7 of line 0, and each character is displayed one by one. Then, the next 8 custom characters are generated and displayed on line 0 of the LCD at the cursor positions from columns 0 to 7. Thereafter, the embedded program keeps iterating, and the two sets of 8 custom characters keep displaying on the LCD, one after the other.
Programming guide

The LiquidCrystal.h library is imported in the code. Then an object named lcd of the LiquidCrystal class is defined.

#include <LiquidCrystal.h>
//LiquidCrystal lcd(RS, E, D4, D5, D6, D7);
LiquidCrystal lcd(13, 11, 7, 6, 5, 4);

The pixel maps of the 16 different custom characters are defined as array objects of global scope, as follows:

byte c1[8]  = {B00000,B01010,B00000,B00000,B10001,B01110,B00000,}; //Smile-1
byte c2[8]  = {B00000,B01010,B00100,B00100,B00000,B01110,B10001,}; //Smile-2
byte c3[8]  = {B00100,B01010,B10001,B10001,B01010,B00100,}; //Diamond
byte c4[8]  = {B01110,B01010,B11111,B11011,B11111,B01010,B01110,}; //Brick
byte c5[8]  = {B01010,B00100,B00100,B01010,B10001,B00100,B10001,}; //Hour-glass
byte c6[8]  = {B00100,B01010,B11111,B01010,B10101,B11011,B10001,}; //Hut
byte c7[8]  = {B11111,B10001,B10001,B10001,B10001,B10001,B10001,B11111,}; //Rectangle
byte c8[8]  = {B11111,B11101,B11011,B11101,B11111,B10000,B10000,B10000,}; //Flag-1
byte c9[8]  = {B00100,B01110,B11111,B00100,B00100,B00100,B00100,B00100,}; //Up-Arrow
byte c10[8] = {B00100,B00100,B00100,B00100,B00100,B11111,B01110,B00100,}; //Down-Arrow
byte c11[8] = {B00000,B00000,B01010,B10101,B10001,B01010,B00100,B00000,}; //Blank-Heart
byte c12[8] = {B00000,B00000,B01010,B11111,B11111,B01110,B00100,B00000,}; //Full-Heart
byte c13[8] = {B00000,B01010,B00000,B00000,B01110,B10001,B00000,}; //Smile-Sad
byte c14[8] = {B00100,B00100,B00100,B00100,B00100,B00100,B00100,B00100}; //Pole
byte c15[8] = {B00000,B00000,B00000,B00100,B00000,B00000,B00000,B00000}; //Dot
byte c16[8] = {B11111,B10001,B10001,B10001,B11111,B00001,B00001,B00001}; //Flag-2

In the setup() function, the LCD is initialized to the 16×2 size by using the begin() method. The LCD is also cleared once to get rid of any garbage values, as follows:

void setup() {
    lcd.begin(16, 2);
    lcd.clear();
}

In the loop() function, the first 8 custom characters are generated using the createChar() method. The cursor position is set by using the setCursor() method.
Each character is printed on the LCD by using the print() method at the cursor positions of columns 0 to 7 of line 0, one after the other. After printing the 8 characters, a two-second delay is provided using the delay() function. Then the LCD is cleared by using the clear() method.

void loop() {
    lcd.createChar(0, c1); //Creating custom characters in CGRAM
    lcd.createChar(1, c2);
    lcd.createChar(2, c3);
    lcd.createChar(3, c4);
    lcd.createChar(4, c5);
    lcd.createChar(5, c6);
    lcd.createChar(6, c7);
    lcd.createChar(7, c8);
    lcd.setCursor(0, 0);
    lcd.print(char(0));
    lcd.setCursor(1, 0);
    lcd.print(char(1));
    lcd.setCursor(2, 0);
    lcd.print(char(2));
    lcd.setCursor(3, 0);
    lcd.print(char(3));
    lcd.setCursor(4, 0);
    lcd.print(char(4));
    lcd.setCursor(5, 0);
    lcd.print(char(5));
    lcd.setCursor(6, 0);
    lcd.print(char(6));
    lcd.setCursor(7, 0);
    lcd.print(char(7));
    delay(2000);
    lcd.clear();

Similarly, the next set of 8 custom characters is generated and printed on line 0 of the LCD. Note that we have created and displayed the custom characters in the loop() function. So, if the power supply to the LCD module is interrupted for any reason, the custom characters will be regenerated and reprinted on the LCD. If we had generated the custom characters in the setup() function, that code would have run only once. In that case, if the power supply to the LCD module were somehow interrupted, the character patterns for the custom characters would have been lost and garbage values would have been displayed on the LCD. Additionally, we are printing more than 8 custom characters on the LCD even though only 8 custom characters can be stored at a time. This is another reason that we generated the custom characters in the loop() function.

Do it yourself

In this recipe, you learned to generate custom characters and print them on a 16×2 character LCD.
Now think of other possible custom characters and icons, draw their pixel maps (in 5×8 dots), and determine the bytes that should be written to the CGRAM to generate them. You can modify the above Arduino sketch to print your custom characters and icons. You can also try generating character patterns for your regional language or for common icon types, such as a battery status, Wi-Fi, a smiley, or other glyphs. Do so until you are limited by the 5×8-dot matrix!

In the next tutorial, we’ll learn how to interface an LM35 temperature sensor with Arduino.

Demonstration video
https://www.engineersgarage.com/articles-arduino-16x2-character-lcd-generating-custom-characters-icons/
NAME

Set a process as critical to a job.

SYNOPSIS

#include <zircon/syscalls.h>

zx_status_t zx_job_set_critical(zx_handle_t job, uint32_t options, zx_handle_t process);

DESCRIPTION

Sets process as critical to job. When process terminates, job will be terminated as if zx_task_kill() was called on it. The return code used will be ZX_TASK_RETCODE_CRITICAL_PROCESS_KILL.

The job specified must be the parent of process, or an ancestor.

If options is ZX_JOB_CRITICAL_PROCESS_RETCODE_NONZERO, then job will only be terminated if process has a non-zero return code.

RIGHTS

job must have ZX_RIGHT_DESTROY.

process must have ZX_RIGHT_WAIT.

RETURN VALUE

zx_job_set_critical() returns ZX_OK on success. In the event of failure, a negative error value is returned.

ERRORS

ZX_ERR_BAD_HANDLE: job or process is not a valid handle.

ZX_ERR_WRONG_TYPE: job is not a job handle, or process is not a process handle.

ZX_ERR_INVALID_ARGS: options is not 0 or ZX_JOB_CRITICAL_PROCESS_RETCODE_NONZERO, or job is not the parent of process, or an ancestor.

ZX_ERR_ALREADY_BOUND: process has already been set as critical to a job.

ZX_ERR_ACCESS_DENIED: job does not have ZX_RIGHT_DESTROY or process does not have ZX_RIGHT_WAIT.
https://fuchsia.dev/fuchsia-src/reference/syscalls/job_set_critical
Common Mistakes Junior Developers Make When Writing Unit Tests

Over the years I have worked with several teams and had the chance to review a lot of test code. In this post I’m summarizing the most common mistakes that inexperienced developers usually make when writing unit tests. Let’s take a look at the following simple example of a class that collects registration data, validates it, and performs a user registration. Clearly the method is extremely simple, and its purpose is to demonstrate the common mistakes of unit tests, not to provide a fully functional registration example.

public class RegistrationForm {

    private String name, email, pwd, pwdVerification;

    // Setters - Getters are omitted

    public boolean register() {
        validate();
        return doRegister();
    }

    private void validate() {
        check(name, "name");
        check(email, "email");
        check(pwd, "password");
        check(pwdVerification, "password verification");
        if (!email.contains("@")) {
            throw new ValidationException(email + " is not a valid email address.");
        }
        if (!pwd.equals(pwdVerification)) {
            throw new ValidationException("Passwords do not match.");
        }
    }

    private void check(String value, String name) throws ValidationException {
        if (value == null) {
            throw new ValidationException(name + " cannot be empty.");
        }
        if (value.length() == 0) {
            throw new ValidationException(name + " is too short.");
        }
    }

    private boolean doRegister() {
        // Do something with the persistent context
        return true;
    }
}

Here’s a corresponding unit test for the register method that intentionally shows the most common mistakes in unit testing.
Actually, I’ve seen very similar test code many times, so it’s not what I’d call science fiction:

@Test
public void test_register() {
    RegistrationForm form = new RegistrationForm();
    form.setEmail("Al.Pacino@example.com");
    form.setName("Al Pacino");
    form.setPwd("GodFather");
    form.setPwdVerification("GodFather");
    assertNotNull(form.getEmail());
    assertNotNull(form.getName());
    assertNotNull(form.getPwd());
    assertNotNull(form.getPwdVerification());
    form.register();
}

Now, this test will obviously pass, the developer will see the green light, so thumbs up! Let’s move on to the next method. However, this test code has several important issues.

The first, which is in my humble opinion the biggest misuse of unit tests, is that the test code is not adequately testing the register method. Actually, it tests only one out of many possible paths. Are we sure that the method will correctly handle null arguments? How will the method behave if the email doesn’t contain the @ character or the passwords don’t match? Developers tend to write unit tests only for the successful paths, and my experience has shown that most of the bugs discovered in code are not related to the successful paths. A very good rule to remember is that for every method you need N tests, where N equals the cyclomatic complexity of the method plus the cyclomatic complexity of all the private methods it calls.

Next is the name of the test method. For this one I partially blame all these modern IDEs that auto-generate stupid names for test methods like the one in the example. The test method should be named in such a way that it explains to the reader what is going to be tested and under which conditions. In other words, it should describe the path under testing. In our case a better name could be: should_register_when_all_registration_data_are_valid.
In this article you can find several approaches to naming unit tests, but for me the ‘should’ pattern is the closest to human language and the easiest to understand when reading test code. Now let’s get to the meat of the code. There are several assertions, and this violates the rule that each test method should assert one and only one thing. This one asserts the state of four (4) RegistrationForm attributes. This makes the test harder to maintain and read (oh yes, test code should be maintainable and readable just like the source code; remember that for me there’s no distinction between them), and it makes it difficult to understand which part of the test fails. This test code also asserts setters/getters. Is this really necessary? To answer that I will quote Roy Osherove from his famous book, “The Art of Unit Testing”:

“Properties (getters/setters in Java) are good examples of code that usually doesn’t contain any logic, and doesn’t require testing. But watch out: once you add any check inside the property, you’ll want to make sure that logic is being tested.”

In our case there’s no business logic in our setters/getters, so these assertions are completely useless. Moreover, they are wrong, because they don’t even test the correctness of the setter. Imagine that an evil developer changes the code of the getEmail method to always return a constant String instead of the email attribute value. The test will still pass, because it asserts that the getter’s result is not null and never asserts the expected value. So here’s a rule you might want to remember: always try to be as specific as you can when you assert the return value of a method. In other words, try to avoid assertNull and assertNotNull unless you don’t care about the actual return value. The last but not least problem with the test code we’re looking at is that the actual method under test (register) is never asserted. It’s called inside the test method, but we never evaluate its result.
A variation of this anti-pattern is even worse: the method under test is not even invoked in the test case. So just keep in mind that you should not only invoke the method under test but also always assert the expected result, even if it’s just a Boolean value. One might ask: “What about void methods?” Nice question, but this is another discussion, maybe another post. To give you a couple of tips: a void method under test might hide a bad design, or it should be tested using a framework that verifies method invocations (such as Mockito’s verify). As a bonus, here’s a final rule you should remember. Imagine that doRegister is actually implemented and does some real work with an external database. What will happen if a developer who has no database installed in her local environment tries to run the test? Correct! Everything will fail. Make sure that your test will have the same behavior even if it runs from the dumbest terminal that has access only to the code and the JDK. No network, no services, no databases, no file system. Nothing!
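The advice above is language-agnostic. As a hedged illustration, here is the same idea sketched in Python rather than the article's Java; the class, exception, and test names below are invented for this example, showing one focused, well-named test per path and specific assertions on the actual result:

```python
# Hypothetical Python translation of the article's Java example; the class,
# exception, and field names here are invented for illustration.
class ValidationError(Exception):
    pass

class RegistrationForm:
    def __init__(self, name, email, pwd, pwd_verification):
        self.name, self.email = name, email
        self.pwd, self.pwd_verification = pwd, pwd_verification

    def register(self):
        for field, value in (("name", self.name), ("email", self.email),
                             ("password", self.pwd)):
            if not value:
                raise ValidationError(field + " cannot be empty.")
        if "@" not in self.email:
            raise ValidationError(self.email + " is not a valid email.")
        if self.pwd != self.pwd_verification:
            raise ValidationError("Passwords do not match.")
        return True  # doRegister() stubbed out: no database in a unit test

# One focused test per path, named after the behavior it verifies:
def should_register_when_all_registration_data_are_valid():
    form = RegistrationForm("Al Pacino", "al@example.com", "x", "x")
    assert form.register() is True  # assert the actual result, not a getter

def should_reject_registration_when_passwords_do_not_match():
    form = RegistrationForm("Al Pacino", "al@example.com", "x", "y")
    try:
        form.register()
        assert False, "expected a ValidationError"
    except ValidationError as e:
        assert "Passwords do not match." in str(e)  # specific, not just non-null

# A test runner would discover these automatically; here we simply call them.
should_register_when_all_registration_data_are_valid()
should_reject_registration_when_passwords_do_not_match()
```

Note that each test asserts exactly one behavior, and the failure path checks the specific error message rather than merely that something was thrown.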
https://dzone.com/articles/common-mistakes-junior
python tutorial - Python telephonic interview questions - learn python - python programming

python interview questions :241

code:

testfile = "test.txt"
with open(testfile) as f:
    for line in f:
        line = line.rstrip()
        if "#*#*# ham" in line or "#*#*# spam" in line:
            print line

Another nice form is:

if any(substr in line for substr in ("#*#*# ham", "#*#*# spam")):
    print line

python interview questions :242

MySQL database: lua or python - MySQL is a database server; it doesn't have a menu. I think you mean MySQL Workbench, which is a visual database design tool. That option allows you to use Lua scripting or Python scripting to help you with the design of your database; it is unrelated to what happens on the MySQL server.

python interview questions :243

python interview questions :244

python interview questions :245

How to use yield - generator? - You want to define a function with a generator which can iterate the numbers which are divisible by 7 within range(n).

code:

def getNum(n, div):
    for i in range(n):
        if i % div == 0:
            yield i

DIVIDER = 7
RANGE = 50
print [n for n in getNum(RANGE, DIVIDER)]

Output:

[0, 7, 14, 21, 28, 35, 42, 49]

python interview questions :246

- dir() will display the defined symbols. Eg: >>> dir(str) will display the symbols defined on str.

python interview questions :247

Python file ending with .py~? - It's a backup file; many editors save the previous version of your file under the same name with a ~ appended.

python interview questions :248

What style does PyCharm / IntelliJ use for Python docstrings?

python interview questions :249

Tell whether a character is a combining diacritic mark?
- Use the unicodedata module:

import unicodedata
if unicodedata.combining(u'a'):
    print "is combining character"
else:
    print "is not combining"

python interview questions :250

Which cassandra python package to be used? - Pycassa is an older Python driver that is Thrift based, while python-driver is a newer CQL3 driver based on Cassandra's binary protocol. Thrift isn't going away, but it has become a legacy API in Cassandra, so my advice going forward is to use the newer Python driver.

- The Twissandra example application uses the DataStax python-driver to provide an overview of CRUD and using prepared statements, etc.
- As for cql, I haven't had any experience with this one, but the homepage of the project says it all: "This driver has been deprecated. Please use python-driver instead."
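A note on the combining() example above: 'a' is a base letter, so that snippet always takes the else branch. A short sketch with an actual combining character shows both cases (Python 3 syntax here, unlike the Python 2 snippets above):

```python
# Python 3 sketch showing both outcomes of unicodedata.combining().
import unicodedata

def is_combining(ch):
    # combining() returns the canonical combining class: 0 for base
    # characters, non-zero for combining marks.
    return unicodedata.combining(ch) != 0

print(is_combining('a'))       # False: 'a' is a base letter
print(is_combining('\u0301'))  # True: U+0301 COMBINING ACUTE ACCENT
```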
https://www.wikitechy.com/tutorials/python/python-telephonic-interview-questions
As I’m relatively new to the Python ecosystem, some things which may be obvious to many continue to bother me a bit. The last problem I encountered, which blocked me for several days, was the following. In my spare time, I’m developing carp, a Python 3 application to help me manage my different encFS containers. As I want to do clean work and learn good practice, I check my source code with flake8. And I set up internationalization too. If I read the Python documentation about the gettext module, I see that putting the following lines at the top of my application should be sufficient to have gettext working (1):

import gettext
gettext.install("carp", PATH_TO_PO_FILES)

This is responsible for installing the _ function in the global namespace. The problem is, if I do that, flake8 keeps complaining that _ is undeclared. Then, stupidly, I redeclare it this way (2):

import gettext
gettext.install("carp", PATH_TO_PO_FILES)
_ = gettext.gettext

This does not work at all. Even if (1) was working as intended, as soon as I redeclare _ in (2), it stops working. Thus, for now, I have come back to the explicit declaration (3) and it works as expected:

import gettext
gettext.bindtextdomain("carp", PATH_TO_PO_FILES)
gettext.textdomain("carp")
_ = gettext.gettext

But still, I’m wondering why (2) does not work. Did I forget something? Surely I misunderstood something, but I cannot point it out. Comments and explanations, if you have some, are very welcome. (This post will be cross-posted to Stack Overflow; I’ll update it with an answer from there if any good one comes.)
https://etienne.depar.is/a-ecrit/post/2017/02/02/Python-and-gettext-setup
Displaying a Directory Search Custom Task Pane from the 2007 Office Ribbon

Summary: Build a project that combines several 2007 Microsoft Office and Microsoft Visual Studio technologies. These include creating a custom Office Fluent Ribbon and a custom task pane, working with Open XML Format files, and using Microsoft Visual Studio 2005 Tools for Office. (12 printed pages)

Frank Rice, Microsoft Corporation

April 2008

Applies to: Microsoft Office Word 2007, Microsoft Visual Studio 2008, Microsoft Visual Studio 2005 Tools for the 2007 Microsoft Office System, Second Edition

Contents
- Overview
- Creating the Word Search Add-in Project
- Adding a Custom Ribbon to the Project
- Creating the Custom Task Pane
- Adding File Search Functionality to the Task Pane
- Testing the Project
- Conclusion
- Additional Resources

Overview

In this article, you create a custom word search Add-in in Microsoft Office Word 2007 that allows you to specify a search term and directory path, and then searches for that word in the files and sub-directories of the search path. The project demonstrated in this article features a number of technologies. First, you create a custom Microsoft Office Fluent Ribbon (hereafter known as the Ribbon) tab and button in a Word 2007 document. Clicking this button displays a custom task pane that you also create. The task pane contains custom controls that allow you to specify parameters and start the custom search. Each of the files searched is an Open XML Format Word 2007 (.docx) document, which is essentially a zipped package of parts that combine to make a document. The package is opened by using the Open XML application programming interface (API), and the document part is scanned for the occurrence of the search term. If a file is found to contain the search term, its path and filename are displayed in the Word document containing the task pane.
The entire project is created in Microsoft Visual Studio 2005 Tools for Office Second Edition (hereafter known as VSTO SE). The result of the project is to create a tab in Word 2007 as shown in Figure 1.

Creating the Word Search Add-in Project

In the following steps, you create a Word 2007 Add-in in Microsoft Visual Studio.

To create the Word Search Add-in project, create a new Word 2007 Add-in project in Visual Studio, name it WordSearch, and then click OK.

Next, you need to add a reference to the WindowsBase library to the project. This library of methods is used to open and manipulate the Open XML Format files later in this article. On the Project menu, click Show All Files. In the Solution Explorer, right-click the WordSearch node and then click Add Reference. In the Add Reference dialog box, on the .NET tab, scroll down, select WindowsBase, and then click Add. Notice that the WindowsBase reference is added to the References folder.

Next, in the ThisAddIn class, add the following declaration and procedure:

Private ctpSearch As Microsoft.Office.Tools.CustomTaskPane

Public Sub AddSearchTaskPane()
    ctpSearch = Me.CustomTaskPanes.Add(New wordSearchControl(), _
        "File Search Task Pane")
    ctpSearch.DockPosition = _
        Microsoft.Office.Core.MsoCTPDockPosition.msoCTPDockPositionRight
    ctpSearch.Visible = True
End Sub

private Microsoft.Office.Tools.CustomTaskPane ctpSearch;

public void AddSearchTaskPane()
{
    ctpSearch = this.CustomTaskPanes.Add(new wordSearchControl(),
        "File Search Task Pane");
    ctpSearch.DockPosition =
        Microsoft.Office.Core.MsoCTPDockPosition.msoCTPDockPositionRight;
    ctpSearch.Visible = true;
}

As its name implies, this procedure is used to display the task pane. First, you set a reference to a task pane object. In the AddSearchTaskPane procedure, the wordSearchControl custom task pane is added to the collection of task panes and assigned to the variable you defined earlier. The title of the task pane is set as File Search Task Pane. The docked position of the pane is set and the task pane is made visible.

Next, add the procedure that hides the task pane when the button is clicked a second time.
Public Sub RemoveSearchTaskPane()
    If Me.CustomTaskPanes.Count > 0 Then
        Me.CustomTaskPanes.Remove(ctpSearch)
    End If
End Sub

public void RemoveSearchTaskPane()
{
    if (this.CustomTaskPanes.Count > 0)
    {
        this.CustomTaskPanes.Remove(ctpSearch);
    }
}

In this procedure, the count of open task panes is checked and, if any are displayed, the ctpSearch task pane is removed. Finally, add the following procedure, if it doesn't already exist. This procedure returns a reference to a new Ribbon object to Microsoft Office when initialized.

Protected Overrides Function CreateRibbonExtensibilityObject() As Microsoft.Office.Core.IRibbonExtensibility
    Return New Ribbon()
End Function

protected override Microsoft.Office.Core.IRibbonExtensibility CreateRibbonExtensibilityObject()
{
    return new Ribbon();
}

Adding a Custom Ribbon to the Project

In the following steps, you create the custom tab containing a button control. This tab is added to the existing Ribbon in Word 2007 when the Add-in is loaded.

To create the custom Ribbon: In the Solution Explorer, right-click the WordSearch node, point to Add, and then click Add New Item. In the Add New Item dialog box, in the Templates pane, select Ribbon (Visual Designer), and then click Add. The Ribbon1.vb (Ribbon1.cs) node is added to the Solution Explorer and the Ribbon Designer is displayed. In the Ribbon Designer, click the TabAddIns tab. In the Properties pane, change the Label property to Word Search. Notice that the title of the tab is updated in the Ribbon Designer. You see how easy it is to change the Ribbon properties in the Visual Designer. Now, in the following steps, you do the same thing, but this time directly in the XML file that defines the Ribbon. Right-click the Ribbon Designer and then click Export Ribbon to XML. Notice that Ribbon.xml and Ribbon.vb (Ribbon.cs) files are added to the Solution Explorer. Double-click the Ribbon.xml file to display the code window. The XML you see defines the Ribbon thus far.
For example, the label attribute for the <tab> element is set to Word Search just as you manually set that property earlier. To define the other components of the Ribbon, replace the XML with the following code.

<?xml version="1.0" encoding="UTF-8"?>
<customUI onLoad="Ribbon_Load" xmlns="http://schemas.microsoft.com/office/2006/01/customui">
  <ribbon>
    <tabs>
      <tab id="searchTab" label="Word Search" insertAfterMso="TabHome">
        <group id="searchGroup" label="Search">
          <button id="btnTaskPane" label="Display Search Pane" onAction="btnTaskPane_Click" />
        </group>
      </tab>
    </tabs>
  </ribbon>
</customUI>

The button defined by this XML is used to display the search task pane. Also notice in the code the insertAfterMso attribute of the <tab> element. Any attribute that ends in Mso specifies functionality that is built into Microsoft Office. In this instance, the insertAfterMso attribute tells Word 2007 to insert the tab you create after the built-in Home tab. Next, you use the label attribute to add a caption to the button. In addition, the button has an onAction attribute that points to the procedure that is executed when you click the button. These procedures are also known as callback procedures. When the button is clicked, the onAction attribute calls back to Microsoft Office, which then executes the specified procedure. The net result of the XML is to create a tab in the Word 2007 Ribbon that looks similar to that seen in Figure 1.

Figure 1. The Word Search tab in the Word 2007 Ribbon

When you created the Ribbon Designer, notice that the Ribbon.vb (Ribbon.cs) file was also created for you. This file contains the callback and other procedures that you need to make the Ribbon functional. Open the Ribbon.vb (Ribbon.cs) file by right-clicking it in the Solution Explorer and clicking View Code. When you add the Ribbon to your project, a Ribbon object class and a few procedures are already available in the Ribbon's code-behind file.

Public Class Ribbon
    Implements Office.IRibbonExtensibility

public class Ribbon : Office.IRibbonExtensibility

Notice that the Ribbon class implements the Office.IRibbonExtensibility interface.
This interface defines one method named GetCustomUI.

Public Function GetCustomUI(ByVal ribbonID As String) As String Implements Office.IRibbonExtensibility.GetCustomUI
    Return GetResourceText("WordSearch.Ribbon.xml")
End Function

public string GetCustomUI(string ribbonID)
{
    return GetResourceText("WordSearchCS.Ribbon.xml");
}

When the Ribbon is loaded by Microsoft Office, the GetCustomUI method is called and returns the XML that defines the Ribbon components to Office. Now you need to add the callback procedures to the class that give the Ribbon its functionality. In the Ribbon Callbacks block, add the following procedure.

Private WordSearchControlExists As Boolean

Public Sub btnTaskPane_Click(ByVal control As Office.IRibbonControl)
    If Not WordSearchControlExists Then
        Globals.ThisAddIn.AddSearchTaskPane()
    Else
        Globals.ThisAddIn.RemoveSearchTaskPane()
    End If
    WordSearchControlExists = Not WordSearchControlExists
End Sub

bool WordSearchControlExists = false;

public void btnTaskPane_Click(Office.IRibbonControl control)
{
    if (!WordSearchControlExists)
    {
        Globals.ThisAddIn.AddSearchTaskPane();
    }
    else
    {
        Globals.ThisAddIn.RemoveSearchTaskPane();
    }
    WordSearchControlExists = !WordSearchControlExists;
}

This callback procedure is called when the button that you added to the Ribbon earlier is clicked. As stated earlier, its purpose is to display or hide the custom task pane. It does this by checking the state of the WordSearchControlExists Boolean variable. Initially, by default, this variable is set to False. When the procedure is called, Not WordSearchControlExists (!WordSearchControlExists) equates to True, so the AddSearchTaskPane method of the ThisAddIn class is called. This causes the task pane to be displayed. The WordSearchControlExists variable is then set to True. When the button is clicked again, Not WordSearchControlExists (!WordSearchControlExists) now equals False, the RemoveSearchTaskPane procedure is called, and the task pane is hidden.
Creating the Custom Task Pane

In the following steps, you create the search task pane and populate it with labels, textboxes, and a button. The textboxes allow you to specify a directory path and a term to search for. The button initiates the search.

To create the custom task pane: In the Solution Explorer, right-click the WordSearch node, point to Add, and then click Add New Item. In the Add New Item dialog box, select the User Control, name it wordSearchControl.vb (wordSearchControl.cs), and click Add. Next, add the task pane controls. On the View menu, click Toolbox. From the toolbox, add the controls specified in Table 1 to the wordSearchControl Designer and set the properties as shown. The design should look similar to Figure 2.

Table 1. Add these controls to the wordSearchControl control

Figure 2. The wordSearchControl control

Now add the code to the button to make it functional. In the Solution Explorer, right-click the wordSearchControl.vb (wordSearchControl.cs) node, and click View Code. Now, above the Public Class wordSearchControl (public partial class wordSearchControl : UserControl) declaration, add the following namespaces to the existing declarations. These are containers for the various objects and methods used in the project.

Imports System.Xml
Imports System.IO
Imports System.IO.Packaging
Imports System.Collections.Generic
Imports System.Windows.Forms

using System.IO;
using System.IO.Packaging;
using System.Xml;
using System.Windows.Forms;

Next, add the following code after the Public Class wordSearchControl (public partial class wordSearchControl : UserControl) statement.
Private Const docType As String = "*.docx"

Private Sub btnSearch_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnSearch.Click
    Dim sDir As String = txtPath.Text
    Call DirSearch(sDir)
    MessageBox.Show("Search complete.")
End Sub

string docType = "*.docx";

private void btnSearch_Click(object sender, EventArgs e)
{
    string sDir = txtPath.Text;
    DirSearch(sDir);
    MessageBox.Show("Search complete.");
}

First, the document type of the search files is defined. Placing the declaration at the top of the class makes it easier to change the type of document, if desired. The btnSearch_Click procedure is called when you click the Search button on the task pane. It assigns the value representing the search path from the txtPath textbox to a variable, which is then passed to the DirSearch method. Finally, a message box signals that the search has finished.

Adding File Search Functionality to the Task Pane

In the following steps, you modify the code behind the custom task pane controls to give them search functionality.

To add search capability to the task pane controls, in the wordSearchControl.vb (wordSearchControl.cs) code window, add the following statements.

Private alreadyChecked As Boolean = False

Private Sub DirSearch(ByVal sDir As String)
    Dim d As String
    Dim f As String
    Dim searchTerm As String = txtSearchTerm.Text
    Try
        If Not alreadyChecked Then
            'Check all of the files in the path directory first.
            For Each f In Directory.GetFiles(sDir, docType)
                Call GetToDocPart(f, searchTerm)
            Next
            alreadyChecked = True
        End If
        'If there are sub-directories, check those files next.
        For Each d In Directory.GetDirectories(sDir)
            For Each f In Directory.GetFiles(d, docType)
                Call GetToDocPart(f, searchTerm)
            Next
            DirSearch(d)
        Next
    Catch
        MessageBox.Show("There was a problem with the file " & f)
    End Try
End Sub

private bool alreadyChecked = false;

private void DirSearch(string sDir)
{
    string badFile = "";
    string searchTerm = txtSearchTerm.Text;
    try
    {
        if (!alreadyChecked)
        {
            // Check all of the files in the path directory first.
            foreach (string f in Directory.GetFiles(sDir, docType))
            {
                badFile = f;
                GetToDocPart(f, searchTerm);
            }
            alreadyChecked = true;
        }
        // If there are sub-directories, check those files next.
        foreach (string d in Directory.GetDirectories(sDir))
        {
            foreach (string f in Directory.GetFiles(d, docType))
            {
                badFile = f;
                GetToDocPart(f, searchTerm);
            }
            DirSearch(d);
        }
    }
    catch (System.Exception)
    {
        MessageBox.Show("There was a problem with the file " + badFile);
    }
}

In this code, you first declare a Boolean variable, alreadyChecked. This variable ensures that once the root directory has been searched, it is not searched again when the method is called recursively to search any sub-directories. In the DirSearch method, variables are declared that represent the directories and files within the search directory. Next, the contents of the txtSearchTerm textbox are assigned to the searchTerm String variable. Then the Directory.GetFiles method is called with the directory path argument, returning the files at that location. Each file is then passed to the GetToDocPart method along with the search term. Next, Directory.GetDirectories is called to determine whether there are sub-directories of the current directory. If there are, the GetToDocPart method is called again with the files in each sub-directory. However, unlike the previous loop, when control returns from the GetToDocPart procedure, the DirSearch method is called recursively to continue searching through any additional sub-directories.
If there is a problem opening a file in the GetToDocPart method, control is passed back to the DirSearch method and an error message is displayed listing the path and name of the file with the problem. Next, add the following procedure to the wordSearchControl.vb (wordSearchControl.cs) code window.

Private Sub GetToDocPart(ByVal fileName As String, ByVal searchTerm As String)
    ' Given a file name, retrieve the officeDocument part and search
    ' through the part for the occurrence of the search term.
    Const documentRelationshipType As String = _
        "http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument"
    Const wordmlNamespace As String = _
        "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
    Dim myRange As Word.Range = Globals.ThisAddIn.Application.ActiveDocument.Content

    ' If the file is a temp file, ignore it.
    If fileName.IndexOf("~$") > 0 Then
        Return
    End If

    ' Open the package with read/write access.
    Dim myPackage As Package
    myPackage = Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite)
    Using (myPackage)
        Dim relationship As System.IO.Packaging.PackageRelationship
        For Each relationship In myPackage.GetRelationshipsByType(documentRelationshipType)
            Dim documentUri As Uri = PackUriHelper.ResolvePartUri(New Uri("/", UriKind.Relative), relationship.TargetUri)
            Dim documentPart As PackagePart = myPackage.GetPart(documentUri)
            Dim doc As XmlDocument = New XmlDocument()
            doc.Load(documentPart.GetStream())

            ' Manage namespaces to perform Xml XPath queries.
            Dim nt As New NameTable()
            Dim nsManager As New XmlNamespaceManager(nt)
            nsManager.AddNamespace("w", wordmlNamespace)

            ' Specify the XPath expression.
            Dim XPath As String = "//w:document/descendant::w:t"
            Dim nodes As XmlNodeList = doc.SelectNodes(XPath, nsManager)
            Dim result As String = ""
            Dim node As XmlNode

            ' Search each node for the search term.
            For Each node In nodes
                result = node.InnerText + " "
                If result.IndexOf(searchTerm) <> -1 Then
                    myRange.Text = myRange.Text & vbCrLf & fileName
                    Exit For
                End If
            Next
        Next
    End Using
End Sub

private void GetToDocPart(string fileName, string searchTerm)
{
    // Given a file name, retrieve the officeDocument part and search
    // through the part for the occurrence of the search term.
    const string documentRelationshipType =
        "http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument";
    const string wordmlNamespace =
        "http://schemas.openxmlformats.org/wordprocessingml/2006/main";
    Word.Range myRange = Globals.ThisAddIn.Application.ActiveDocument.Content;

    // If the file is a temp file, ignore it.
    if (fileName.IndexOf("~$") > 0)
    {
        return;
    }

    // Open the package with read/write access.
    Package myPackage;
    myPackage = Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite);
    using (myPackage)
    {
        foreach (System.IO.Packaging.PackageRelationship relationship in
            myPackage.GetRelationshipsByType(documentRelationshipType))
        {
            Uri documentUri = PackUriHelper.ResolvePartUri(new Uri("/", UriKind.Relative), relationship.TargetUri);
            PackagePart documentPart = myPackage.GetPart(documentUri);
            XmlDocument doc = new XmlDocument();
            doc.Load(documentPart.GetStream());

            // Manage namespaces to perform Xml XPath queries.
            NameTable nt = new NameTable();
            XmlNamespaceManager nsManager = new XmlNamespaceManager(nt);
            nsManager.AddNamespace("w", wordmlNamespace);

            // Specify the XPath expression.
            string XPath = "//w:document/descendant::w:t";
            XmlNodeList nodes = doc.SelectNodes(XPath, nsManager);
            string result = "";

            // Search each node for the search term.
            foreach (XmlNode node in nodes)
            {
                result = node.InnerText + " ";
                int inDex = result.IndexOf(searchTerm);
                if (inDex != -1)
                {
                    myRange.Text = myRange.Text + "\r\n" + fileName;
                    break;
                }
            }
        }
    }
}

After defining the namespaces that are needed to open the Open XML Format file package, a Range in the Word 2007 document is specified. This is where the search results will be inserted as the procedure runs.
Next, attempting to open temporary documents (back-up documents created when you open a Word 2007 document) will result in an error, so the input document is tested. The next code segment opens the Open XML Format package representing the document, with read and write privileges. Then the document part of the document is retrieved and the content is loaded into an XML document. An XPath query is then run to test for the occurrence of the search term. If the term is found, the path and file name are added to the Range object. And because there is no need to search the document further, the procedure exits the For Each..Next (foreach) loop.

Testing the Project

In the following steps, you build and test the add-in project.

To test the Add-in project: On the Debug menu, click Start Debugging. The project is built and Word 2007 is displayed. On the right side of the Home tab, click the Word Search tab and then click the Display Search Pane button. The Word Search task pane that you created is displayed on the right side of the screen. In the top textbox, type a directory path that you know contains one or more Word 2007 (*.docx) files. In the second textbox, type the term you want to search for in those files. The search term is case-sensitive. Click the Search button to start the search. As the search progresses, for each .docx file found that contains the search term, its directory path and filename are added to the Word document. Close the document once the search is complete.

Conclusion

By building this project, you have seen the marriage of different technologies into a single useful application. These include the Office Fluent Ribbon, custom task panes, and Open XML Format files, as well as project development in Microsoft Visual Studio utilizing the Microsoft Visual Studio Tools for the Office System Second Edition.
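For quick experimentation outside Word, the core of GetToDocPart (open the package, read the document part, scan the w:t text nodes) can be approximated in Python. This is a rough analogue for illustration, not the article's implementation, and the in-memory package below is a minimal stand-in rather than a fully valid .docx:

```python
# Rough Python analogue of GetToDocPart, for experimentation only.
import io
import zipfile
import xml.etree.ElementTree as ET

W_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

def docx_contains(data, term):
    """Return True if word/document.xml in the package contains `term`
    inside any w:t (text) node."""
    with zipfile.ZipFile(io.BytesIO(data)) as pkg:
        doc = ET.fromstring(pkg.read("word/document.xml"))
    # Equivalent in spirit to the article's //w:document/descendant::w:t
    return any(term in (t.text or "") for t in doc.iter("{%s}t" % W_NS))

# Build a minimal stand-in package in memory (not a complete .docx).
xml_body = ('<w:document xmlns:w="%s"><w:body><w:p><w:r>'
            '<w:t>hello Open XML world</w:t>'
            '</w:r></w:p></w:body></w:document>' % W_NS)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("word/document.xml", xml_body)

print(docx_contains(buf.getvalue(), "Open XML"))  # True
```

Like the article's code, the check stops at the first matching text node; unlike the article, the search here is over a single in-memory package rather than a directory tree.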
I encourage you to experiment with the project and add your own features, such as letting the user specify in the task pane the type of documents to search, allowing both .docx and .docm files to be searched, or adding functionality that lets the user interrupt the search whenever desired.

Additional Resources

You can find additional information in the following resources:

- Introducing the Office (2007) Open XML File Formats
- Manipulating Word 2007 Files with the Open XML Format API (Part 1 of 3)
- Working with Files and Folders in Office 2003 Editions
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/cc531345(v=office.12)
- NAME
- DESCRIPTION
- SOURCE CODE STATIC ANALYSIS
- lint, splint
- Coverity
- cpd (cut-and-paste detector)
- gcc warnings
- Warnings of other C compilers
- DEBUGGING
- Poking at Perl
- Using a source-level debugger
- gdb macro support
- Dumping Perl Data Structures
- Patching
- Patching a core module
- Adding a new function to the core
- Writing a test
- Special Make Test Targets
- Running tests by hand
- Common problems when patching Perl source code
- Perl environment problems
- Portability problems
- Problematic System Interfaces
- Security problems
- EXTERNAL TOOLS FOR DEBUGGING PERL
- CONCLUSION
- AUTHOR

NAME

perlhack - How to hack at the Perl internals

DESCRIPTION

Rule 1: Larry is always by definition right about how Perl should behave. This means he has final veto power on the core functionality. Rule 2: Larry is allowed to change his mind about any matter at a later date, regardless of whether he previously invoked Rule 1.

Points to consider when submitting a patch:

- Does the concept match the general goals of Perl?
- Where is the implementation?
- Backwards compatibility.
- Could it be a module instead?
- Is the feature generic enough?
- Does it potentially introduce new bugs? Radical rewrites of large chunks of the Perl interpreter have the potential to introduce new bugs. The smaller and more localized the change, the better.
- Does it preclude other desirable features? A patch is likely to be rejected if it closes off future avenues of development. For instance, a patch that placed a true and final interpretation on prototypes is likely to be rejected because there are still options for the future of prototypes that haven't been addressed.
- Is the implementation robust? Good patches (tight code, complete, correct) stand more chance of going in. Sloppy or incorrect patches might be placed on the back burner until the pumpking has time to fix them, or might be discarded altogether without further notice.
- Is the implementation generic enough to be portable? The worst patches make use of system-specific features.
It's highly unlikely that non-portable additions to the Perl language will be accepted.

- Is the implementation tested?
- Is there enough documentation? Patches without documentation are probably ill-thought out or incomplete. Nothing can be added without documentation, so submitting a patch for the appropriate manpages as well as the source code is always a good idea.
- Is there another way to do it? Larry said "Although the Perl Slogan is There's More Than One Way to Do It, I hesitate to make 10 ways to do something". This is a tricky heuristic to navigate, though--one man's essential addition is another man's pointless cruft.
- Does it create too much work? Work for the pumpking, work for Perl programmers, work for module authors, ... Perl is supposed to be easy.
- Patches speak louder than words.

Keeping in sync

If you're looking for a particular change, or a change that affected a particular set of files, you may find the Perl Repository Browser useful:

- rsync'ing the source tree

Presuming you are in the directory where your perl source resides and you have rsync installed and available, you can "upgrade" to the bleadperl using:

# rsync -avz rsync://public.activestate.com/perl-current/ .

This takes care of updating every single item in the source tree to the latest applied patch level, creating files that are new (to your distribution) and setting date/time stamps of existing files to reflect the bleadperl status. Note that this will not delete any files that were in '.' before the rsync. Once you are sure that the rsync is running correctly, run it with the --delete and the --dry-run options like this:

# rsync -avz --delete --dry-run rsync://public.activestate.com/perl-current/ .

- Using rsync over the LAN

This does not require an inetd entry or a daemon. You must, however, have a working rsh or ssh system. Using ssh is recommended for its security features.

- Using pushing over the NFS
- rsync'ing the patches

The source tree is maintained by the pumpking who applies patches to the files in the tree. These patches are either created by the pumpking himself using diff -c, or are applied from submissions to the perl5-porters list. To fetch the patches:

# rsync -avz rsync://public.activestate.com/perl-current-diffs/ .

This makes sure the latest available patch is downloaded to your patch directory. It's then up to you to apply these patches, using something like:

# last="`cat ../perl-current/.patch`.gz"
# rsync -avz rsync://public.activestate.com/perl-current-diffs/ .
# ...

Why rsync the source tree

- It's easier to rsync the source tree. Since you don't have to apply the patches yourself, you are sure all files in the source tree are in the right state.
- It's more reliable. While both the rsync-able source and patch areas are automatically updated every few minutes, keep in mind that applying patches may sometimes mean careful hand-holding, especially if your version of the patch program does not understand how to deal with new files, files with 8-bit characters, or files without trailing newlines.

Why rsync the patches

- It's easier to rsync the patches ;-)
- It's a good reference. You keep the patches from then on (7583, 7584, ...) and can use them later as a kind of search archive.
- Finding a start point.
- Finding how to fix a bug. If you've found where the function/feature Foo misbehaves, but you don't know how to fix it (but you do know the change you want to make), you can, again, peruse the patches for similar changes and look at how others apply the fix.
- Finding the source of misbehaviour. When you keep in sync with bleadperl, the pumpking would love to see that the community efforts really work.

Working with the source

Because you cannot use the Perforce client, you cannot easily generate diffs against the repository, nor will merges occur when you update via rsync. If you edit a file locally and then rsync against the latest source, changes made in the remote copy will overwrite your local versions!
The best way to deal with this is to maintain a tree of symlinks to the rsync'd source. Then, when you want to edit a file, you remove the symlink, copy the real file into the other tree, and edit it. You can then diff your edited file against the original to generate a patch, and you can safely update the original tree.

Perl's Configure script can generate this tree of symlinks for you. The following example assumes that you have used rsync to pull a copy of the Perl source into the perl-rsync directory. In the directory above that one, you can execute the following commands:

mkdir perl-dev
cd perl-dev
../perl-rsync/Configure -Dmksymlinks -Dusedevel -D"optimize=-g"

This will start the Perl configuration process. After a few prompts, you should see something like this:

Symbolic links are supported.
Checking how to test for symbolic links...
Your builtin 'test -h' may be broken.
Trying external '/usr/bin/test -h'.
You can test for symbolic links with '/usr/bin/test -h'.
Creating the symbolic links...
(First creating the subdirectories...)
(Then creating the symlinks...)

The specifics may vary based on your operating system, of course. After you see this, you can abort the Configure script, and you will see that the directory you are in has a tree of symlinks to the perl-rsync directories and files.

If you plan to do a lot of work with the Perl source, here are some Bourne shell script functions that can make your life easier:

function edit {
    if [ -L $1 ]; then
        mv $1 $1.orig
        cp $1.orig $1
        vi $1
    else
        vi $1
    fi
}

function unedit {
    if [ -L $1.orig ]; then
        rm $1
        mv $1.orig $1
    fi
}

Replace "vi" with your favorite flavor of editor.

Here is another function which will quickly generate a patch for the files which have been edited in your symlink tree:

mkpatchorig() {
    local diffopts
    for f in `find . -name '*.orig' | sed s,^\./,,`
    do
        case `echo $f | sed 's,.orig$,,;s,.*\.,,'` in
            c)   diffopts=-p ;;
            pod) diffopts='-F^=' ;;
            *)   diffopts= ;;
        esac
        diff -du $diffopts $f `echo $f | sed 's,.orig$,,'`
    done
}

This function produces patches which include enough context to make your changes obvious. This makes it easier for the Perl pumpking(s) to review them when you send them to the perl5-porters list, and that means they're more likely to get applied. This function assumes a GNU diff, and may require some tweaking for other diff variants.

Perlbug administration

There is a single remote administrative interface for modifying bug status, category, open issues etc. using the RT bugtracker system, maintained by Robert Spier. Become an administrator, and close any bugs you can get your sticky mitts on:

To email the bug system administrators: "perlbug-admin" <perlbug-admin@perl.org>

Submitting patches

Always submit patches to perl5-porters@perl.org. If you're patching a core module and there's an author listed, send the author a copy (see "Patching a core module"). Even if you're fixing a bug in the 5.8 track, patch against the latest development version rsynced from rsync://public.activestate.com/perl-current/. If changes are accepted, they are applied to the development branch; the 5.8 maintenance pumpking then decides whether to back-port them.

Useful background reading:

- perlguts
- perlxstut and perlxs
- perlapi

Finding Your Way Around

Perl maintenance can be split into a number of areas, and certain people (pumpkins) will have responsibility for each area. These areas sometimes correspond to files or directories in the source kit. Among the areas are:

- Core modules. Modules shipped as part of the Perl core live in the lib/ and ext/ subdirectories: lib/ is for the pure-Perl modules, and ext/ contains the core XS modules.
- Tests. There are tests for nearly all the modules, built-ins and major bits of functionality. Test files all have a .t suffix. Module tests live in the lib/ and ext/ directories next to the module being tested.
Others live in t/. See "Writing a test".

- Documentation. Documentation maintenance includes looking after everything in the pod/ directory (as well as contributing new documentation) and the documentation for the modules in core.
- Configure.
- Interpreter. The interpreter core itself. Depending on a build flag, memory allocation is either your system's malloc or Perl's own malloc. perl_parse is actually a wrapper around S_parse_body, as defined in perl.c, which processes the command line options, sets up any statically linked XS modules, opens the program and calls yyparse to parse it.
- Parsing.
- Optimization. Now the parsing stage is complete, and the finished tree represents the operations that the Perl interpreter needs to perform to execute our program. Next, Perl does a dry run over the tree looking for optimisations: constant expressions such as 3 + 4 will be computed now, and the optimizer will also see if any multiple operations can be replaced with a single one. For instance, to fetch the variable $foo, instead of grabbing the glob *foo and looking at the scalar component, the optimizer fiddles the op tree to use a function which directly looks up the scalar in question. The main optimizer is peep in op.c, and many ops have their own optimizing functions.
- Running. Now we're finally ready to go: we have compiled Perl byte code, and all that's left to do is run it. The actual execution is done by the runops_standard function, which chooses the next op dynamically at run time. The PERL_ASYNC_CHECK makes sure that things like signals can interrupt execution if required. pp_entersub and pp_entertry just push a CxSUB or CxEVAL block struct onto the context stack which contains the address of the op following the sub call or eval. They then return the first op of that sub or eval block, and so execution continues in that sub or block. Later, a pp_leavesub or pp_leavetry op pops the CxSUB or CxEVAL, retrieves the return op from it, and returns it.
- Exception handling. Perl's exception handling (i.e. die etc.) is built on top of the low-level setjmp()/longjmp() C library functions.
These basically provide a way to capture the current PC and SP registers and later restore them; i.e. a longjmp() continues at the point in code where a previous setjmp() was done, with anything further up on the C stack being lost. This is why code should always save values using SAVE_FOO rather than in auto variables. The perl core wraps setjmp() etc. in the macros JMPENV_PUSH and JMPENV_JUMP. The basic rule of perl exceptions is that exit, and die (in the absence of eval), perform a JMPENV_JUMP(2), while die within eval does a JMPENV_JUMP(3). For a 2 return, final cleanup is performed, such as popping stacks and calling CHECK or END blocks. Amongst other things, this is how scope cleanup still occurs during an exit.

If a die can find a CxEVAL block on the context stack, then the stack is popped to that level and the return op in that block is assigned to PL_restartop; then a JMPENV_JUMP(3) is performed. This normally passes control back to the guard. In the case of perl_run and call_sv, a non-null PL_restartop triggers re-entry to the runops loop. This is the normal way that die or croak is handled within an eval.

Some ops, however, are executed within an inner runops loop, such as tie, sort or overload code. One way to handle this safely would be to do a JMPENV_PUSH before executing FETCH in the inner runops loop, but for efficiency reasons, perl in fact just sets a flag, using CATCH_SET(TRUE). The pp_require, pp_entereval and pp_entertry ops check this flag, and if true, they call docatch, which does a JMPENV_PUSH and starts a new inner runops loop; on an exception, docatch compares the JMPENV level of the CxEVAL with PL_top_env and if they differ, just re-throws the exception. In this way any inner loops get popped.

Here's an example.

1: eval { tie @a, 'A' };
2: sub A::TIEARRAY {
3:     eval { die };
4:     die;
5: }

To run this code, perl_run is called, which does a JMPENV_PUSH then enters a runops loop. This loop executes the eval and tie ops on line 1, with the eval pushing a CxEVAL onto the context stack. The pp_tie does a CATCH_SET(TRUE), then starts a second runops loop to execute the body of TIEARRAY. When it executes the entertry op on line 3, CATCH_GET is true, so pp_entertry calls docatch, which does a JMPENV_PUSH and starts a third runops loop to execute the eval body. The die on line 3 pops the CxEVAL off the context stack, sets PL_restartop from it, does a JMPENV_JUMP(3), and control returns to the top docatch.
This then starts another third-level runops level, which executes the nextstate, pushmark and die ops on line 4. At the point that the second die is executed, the JMPENV level recorded in the CxEVAL differs from the current one, so docatch just does a JMPENV_JUMP(3) and the C stack unwinds to perl_run. Because PL_restartop is non-null, run_body starts a new runops loop and execution continues.

Internal Variable Types.

Op Trees.

Stacks.

- Argument stack. POPn gives you the NV (floating point value) of the top SV on the stack: the $x in cos($x). Then we compute the cosine, and push the result back as an NV. The X in XPUSHn means that the stack will be extended if necessary.
- Mark stack. This is roughly how the tied push is implemented; see av_push in av.c (note that the sub only gave us a local copy, not a reference to the global):

6 ENTER;
7 call_method("PUSH", G_SCALAR|G_DISCARD);
8 LEAVE;

ENTER and LEAVE localise the call; how local takes care of that is described in perlcall. We call the PUSH method in scalar context, and we're going to discard its return value. The call_method() function removes the top element of the mark stack, so there is nothing for the caller to clean up.
- Save stack. C doesn't have a concept of local scope, so perl provides one. We've seen that ENTER and LEAVE are used as scoping braces; the save stack implements the C equivalent of, for example:

{
    local $foo = 42;
    ...
}

See "Localising Changes" in perlguts for how to use the save stack.

Millions of Macros.

The .i Targets

You can expand the macros in a foo.c file by saying

make foo.i

which will expand the macros using cpp. Don't be scared by the results.

SOURCE CODE STATIC ANALYSIS

lint, splint

Coverity

Coverity ( ) is a product similar to lint; as a testbed for their product they periodically check several open source projects, and they give open source developers accounts with access to the defect databases.

cpd (cut-and-paste detector) ...

gcc warnings.

Warnings of other C compilers.

DEBUGGING
Poking at Perl

Using a source-level debugger

If the debugging output of -D doesn't help you, it's time to step through perl's execution with a source-level debugger. We'll use gdb for the examples here.

- run [args] Run the program with the given arguments.
- break function_name
- break source.c:xxx Tells the debugger that we'll want to pause execution when we reach either the named function (but see "Internal Functions" in perlguts!) or the given line in the named source file.
- step Steps through the program a line at a time.
- next Steps through the program a line at a time, without descending into functions.
- continue Run until the next breakpoint.
- finish Run until the end of the current function, then stop again.
- 'enter' Pressing Enter repeats the most recent command.

gdb macro support

Dumping Perl Data Structures

Patching

Patching a core module

This works just like patching anything else, with an extra consideration. Many core modules also live on CPAN. If this is so, patch the CPAN version instead of the core and send the patch off to the module maintainer (with a copy to p5p). This will help the module maintainer keep the CPAN version in sync with the core version without constantly scanning p5p. The list of maintainers of core modules is usefully documented in Porting/Maintainers.pl.

Adding a new function to the core

If, as part of a patch to fix a bug, or just because you have an especially good idea, you decide to add a new function to the core, discuss your ideas on p5p well before you start work. It may be that someone else has already attempted to do what you are considering and can give lots of good advice or even provide you with bits of code that they already started (but never finished).

Writing a test

Every module and built-in function has an associated test file (or should...). If you add or change functionality, you have to write a test. If you fix a bug, you have to write a test so that bug never comes back. If you alter the docs, it would be nice to test what the new documentation says. Tests live under the t/ directory:
- t/base/ Testing of the absolute basic functionality of Perl. Things like if, basic file reads and writes, simple regexes, etc. These are run first in the test suite and if any of them fail, something is really broken.
- t/cmd/ These test the basic control structures, if/else, while, subroutines, etc.
- t/comp/ Tests basic issues of how Perl parses and compiles itself.
- t/io/ Tests for built-in IO functions, including command line arguments.
- t/lib/ The old home for the module tests; you shouldn't put anything new in here. There are still some bits and pieces hanging around in here that need to be moved. Perhaps you could move them? Thanks!
- t/op/ Tests for perl's built in functions that don't fit into any of the other directories.
- t/pod/ Tests for POD directives. There are still some tests for the Pod modules hanging around in here that need to be moved out into lib/.
- t/run/ Testing features of how perl actually runs, including exit codes and handling of PERL* environment variables.
- t/uni/ Tests for the core support of Unicode.
- t/win32/ Windows-specific tests.
- t/x2p A test suite for the s2p converter.

When writing a test, note the following:

- t/base t/comp Since we don't know if require works, or even subroutines, use ad hoc tests for these two. Step carefully to avoid using the feature being tested.
- t/cmd t/run t/io t/op Now that basic require() and subroutines are tested, you can use the t/test.pl library which emulates the important features of Test::More while using a minimum of core features. You can also conditionally use certain libraries like Config, but be sure to skip the test gracefully if it's not there.
- t/lib ext lib Now that the core of Perl is tested, Test::More can be used.

Special Make Test Targets

- test.valgrind check.valgrind utest.valgrind ucheck.valgrind (Only in Linux) Run all the tests using the memory leak + naughty memory access tool "valgrind". The log files will be named testname.valgrind.
- test.third check.third utest.third ucheck.third (Only in Tru64) Run all the tests using the memory leak + naughty memory access tool "Third Degree". The log files will be named perl.3log.testname.

Running tests by hand

You can run part of the test suite by hand by using one of the following commands from the t/ directory:

./perl -I../lib TEST list-of.t-files
./perl -I../lib harness list-of.t-files

You can also run an individual test directly, as in ./perl -I../lib path/to/foo.t, except that the harnesses set up some environment variables that may affect the execution of the test:

- PERL_CORE=1 indicates that we're running this test as part of the perl core test suite. This is useful for modules that have a dual life on CPAN.
- PERL_DESTRUCT_LEVEL=2 is set to 2 if it isn't set already (see "PERL_DESTRUCT_LEVEL").

Other environment variables may also influence tests. See the documentation for the Test and Test::Harness modules for more environment variables that affect testing.

Common problems when patching Perl source code

Perl environment problems

- Not compiling with threading.
- Not compiling with -DDEBUGGING. The DEBUGGING define exposes more code to the compiler, therefore more ways for things to go wrong. You should try it.
- Introducing (non-read-only) globals.
- Not exporting your new function. Some platforms (Win32, AIX, VMS, OS/2, to name a few) require any function that is part of the public API (the shared Perl library) to be explicitly marked as exported. See the discussion about embed.pl in perlguts.
- Exporting your new function.

Portability problems

- Casting pointers to integers or casting integers to pointers.
- Casting between function pointers and data pointers. Technically speaking, casting between function pointers and data pointers is unportable and undefined, but practically speaking it seems to work; use the FPTR2DPTR() and DPTR2FPTR() macros. Sometimes you can also play games with unions.
- Assuming sizeof(int) == sizeof(long).
- Assuming one can dereference any type of pointer for any type of data

char *p = ...;
long pony = *p; /* BAD */

Many platforms, quite rightly so, will give you a core dump instead of a pony if the p happens not to be correctly aligned.

- Lvalue casts

(int)*p = ...; /* BAD */

Simply not portable. Get your lvalue to be of the right type, or maybe use temporary variables, or dirty tricks with unions.

- Assuming anything about structs (especially the ones you don't control, like the ones coming from the system headers):

That a certain field exists in a struct.
That no other fields exist besides the ones you know of.
That a field is of certain signedness, sizeof, or type.
That the fields are in a certain order. While C guarantees the ordering specified in the struct definition, between different platforms the definitions might differ.
That the sizeof(struct) or the alignments are the same everywhere. There might be padding bytes between the fields to align the fields - the bytes can be anything. Structs are required to be aligned to the maximum alignment required by the fields, which for native types is usually equivalent to sizeof() of the field.

- Mixing #define and #ifdef

#define BURGLE(x) ... \
#ifdef BURGLE_OLD_STYLE /* BAD */
... do it the old way ... \
#else
... do it the new way ... \
#endif

You cannot portably "stack" cpp directives. For example, in the above you need two separate BURGLE() #defines, one for each #ifdef branch.

- Adding stuff after #endif or #else is not portable; gcc warns about the bad variant (on by default starting from Perl 5.9.4).

- Having a comma after the last element of an enum list

enum color {
    CERULEAN,
    CHARTREUSE,
    CINNABAR, /* BAD */
};

is not portable. Leave out the last comma. Also note that whether enums are implicitly morphable to ints varies between compilers; you might need an explicit (int) cast.

- Using //-comments. That is C99 or C++; stick to /* ... */ comments.

- Mixing declarations and code

void zorklator()
{
    int n = 3;
    set_zorkmids(n); /* BAD */
    int q = 4;

That is C99 or C++. Some C compilers allow that, but you shouldn't.
The gcc option -Wdeclaration-after-statement scans for such problems (on by default starting from Perl 5.9.4).

- Introducing variables inside for()

for (int i = ...; ...; ...) { /* BAD */

That is C99 or C++. While it would indeed be awfully nice to have that also in C89, to limit the scope of the loop variable, alas, we cannot.

- Mixing signed char pointers with unsigned char pointers.
- Macros that have string constants and their arguments as substrings of the string constants.
- Using printf formats for non-basic C types. The %p format really does require a void pointer:

U8* p = ...;
printf("p = %p\n", (void*)p);

The gcc option -Wformat scans for such problems.

- Blindly using variadic macros. gcc has had them for a while with its own syntax, and C99 brought them with a standardized syntax. Don't use the former, and use the latter only if HAS_C99_VARIADIC_MACROS is defined.
- Blindly passing va_list. Not all platforms support passing va_list to further varargs (stdarg) functions. The right thing to do is to copy the va_list using Perl_va_copy() if NEED_VA_COPY is defined.
- Using gcc statement expressions

val = ({...;...;...}); /* BAD */

While a nice extension, it's not portable. The Perl code does admittedly use them if available to gain some extra speed (essentially as a funky form of inlining), but you shouldn't.

- Binding together several statements. Use the macros STMT_START and STMT_END.

STMT_START {
    ...
} STMT_END

- Testing for operating systems or versions when you should be testing for features.

Problematic System Interfaces

- malloc(0), realloc(0), calloc(0, 0) are non-portable. To be portable, allocate at least one byte. (In general you should rarely need to work at this low level, but instead use the various malloc wrappers.)
- snprintf() - the return type is unportable. Use my_snprintf() instead.

Security problems

Last but not least, here are various tips for safer coding.

- Do not use gets(). Or we will publicly ridicule you. Seriously.
- Do not use strcpy() or strcat() or strncpy() or strncat(). Use my_strlcpy() and my_strlcat() instead: they either use the native implementation, or Perl's own implementation (borrowed from the public domain implementation of INN).
- Do not use sprintf() or vsprintf().

EXTERNAL TOOLS FOR DEBUGGING PERL

Rational Software's Purify

To get the most benefit out of Purify, build perl with the following Configure options:

- -Accflags=-DPURIFY Disables Perl's arena memory allocation functions, as well as forcing use of memory allocation functions derived from the system malloc.
- -Doptimize='-g' Adds debugging information so that you see the exact source statements where the problem occurs. Without this flag, all you will see is the source filename of where the error occurred.
- -Uusemymalloc Disables Perl's malloc so that Purify can more closely monitor allocations and leaks. Using Perl's malloc will make Purify report most leaks in the "potential" leaks category.
- -Dusemultiplicity

Purify on Windows NT instruments the Perl binary 'perl.exe' on the fly. There are several options in the makefile you should change to get the most use out of Purify:

- DEFINES.
- USE_MULTI = define Enabling the multiplicity option allows perl to clean up thoroughly when the interpreter shuts down, which reduces the number of bogus leak reports from Purify.
- #PERL_MALLOC = define Disables Perl's malloc so that Purify can more closely monitor allocations and leaks. Using Perl's malloc will make Purify report most leaks in the "potential" leaks category.
- CFG = Debug

Compaq's/Digital's/HP's Third Degree

Third Degree is a tool for memory leak detection and memory access checks. It is one of the many tools in the ATOM toolkit. The toolkit is only available on Tru64 (formerly known as Digital UNIX, formerly known as DEC OSF/1).

PERL_DESTRUCT_LEVEL

Building with -DDEBUG_LEAKING_SCALARS also converts new_SV() from a macro into a real function, so you can use your favourite debugger to discover where those pesky SVs were allocated.
PERL_MEM_LOG

If compiled with -DPERL_MEM_LOG, all Newx() and Renew() allocations and Safefree() calls in the Perl core go through logging functions, which is handy for breakpoint setting. If also compiled with -DPERL_MEM_LOG_STDERR, the allocations and frees are logged to STDERR (or more precisely, to the file descriptor 2) in these logging functions, with the calling source code file and line number (and C function name, if supported by the C compiler). This logging is somewhat similar to -Dm but independent of -DDEBUGGING, and at a higher level (the -Dm is directly at the point of malloc(), while the PERL_MEM_LOG is at the level of New()).

Profiling

Depending on your platform, various profiling tools are available. Useful gprof options include:

- -a Suppress statically defined functions from the profile.
- -b Suppress the verbose descriptions in the profile.
- -e routine Exclude the given routine and its descendants from the profile.
- -f routine Display only the given routine and its descendants in the profile.
- -s Generate a summary file called gmon.sum which then may be given to subsequent gprof runs to accumulate data over several runs.
- -z Display routines that have zero usage.

GCC gcov Profiling

Pixie Profiling

Pixie is a profiling tool available on IRIX and Tru64 (aka Digital UNIX aka DEC OSF/1) platforms. Pixie does its profiling using basic-block counting. In IRIX the following options are available:

- -h Reports the most heavily used lines in descending order of use. Useful for finding the hotspot lines.
- -l Groups lines by procedure, with procedures sorted in descending order of use. Within a procedure, lines are listed in source order. Useful for finding the hotspots of procedures.

In Tru64 the following options are available:

- -p[rocedures] Procedures sorted in descending order by the number of cycles executed in each procedure. Useful for finding the hotspot procedures. (This is the default option.)
- -h[eavy] Lines sorted in descending order by the number of cycles executed in each line. Useful for finding the hotspot lines.
- -i[nvocations] The called procedures are sorted in descending order by number of calls made to the procedures. Useful for finding the most used procedures.
- -l[ines] Grouped by procedure, sorted by cycles executed per procedure. Useful for finding the hotspots of procedures.
- -testcoverage The compiler emitted code for these lines, but the code was unexecuted.
- -z[ero] Unexecuted procedures.

For further information, see your system's manual pages for pixie and prof.

Miscellaneous tricks

CONCLUSION

We've had a brief look around the Perl source, how to maintain quality of the source code, an overview of the stages perl goes through when it's running your code, how to use debuggers to poke at the Perl guts, and finally how to analyse the execution of Perl. To continue from here:

- Subscribe to perl5-porters, follow the patches and try to understand them; don't be afraid to ask if there's a portion you're not clear on - who knows, you may unearth a bug in the patch...
- Keep up to date with the bleeding edge Perl distributions and get familiar with the changes. Try to get an idea of what areas people are working on and the changes they're making.

- The Road goes ever on and on, down from the door where it began. -

If you can do these things, you've started on the long road to Perl porting. Thanks for wanting to help make Perl better - and happy hacking!

AUTHOR

This document was written by Nathan Torkington, and is maintained by the perl5-porters mailing list.
https://metacpan.org/pod/release/NWCLARK/perl-5.8.9-RC2/pod/perlhack.pod
Created on 2006-06-27 21:06 by marienz, last changed 2006-07-29 17:04 by fdrake. This issue is now closed.

There is something weird going on with xml.sax exceptions, probably related to the xml/xmlcore shuffle:

from xml.sax import make_parser, SAXParseException
from StringIO import StringIO

parser = make_parser()
try:
    parser.parse(StringIO('invalid'))
except SAXParseException:
    print 'caught it!'

On python 2.4.3 this prints "caught it!". On python 2.5b1 the exception is not caught, because it is a different exception: an xmlcore.sax._exceptions.SAXParseException. Printing the SAXParseException imported from xml.sax gives "<class 'xml.sax._exceptions.SAXParseException'>". Stumbled on this running the logilab-common (see logilab.org) tests with python 2.5b1, but it seems likely other code will be affected.

Logged In: YES user_id=1326842

This bug is similar to an earlier report. It is caused by absolute imports in xmlcore.sax.expatreader. Patch #1519796 ( ) should fix it.

Logged In: YES user_id=3066

Patch #1519796 does not do anything for this, based on the current trunk. I've attached a diff containing a test case.

Logged In: YES user_id=3066

The patch attached contains the wrong bug number, but it really is for this issue.

Logged In: YES user_id=3066

I've managed to come up with a patch that solves this specific issue, but it really deals with the symptom and not the real problems. While I think the "xmlcore" package was the right idea, I'm not convinced it can be correctly implemented without enormous effort at this time. Given the release schedule, it doesn't make sense to jump through those hoops. The previous hackery that allowed the PyXML distribution to "replace" the standard library version of the "xml" package worked only because there was only one public name for whichever was being used. Moving to the "xmlcore" package proved to be more than that hack could support. I think the right thing to do for Python 2.5 is to revert the changes that added the "xmlcore" package.
Further investigation into a better approach can be made for Python 2.6.

Logged In: YES user_id=33168
I think Martin is of the same opinion. This needs to be resolved soon.

Logged In: YES user_id=3066
The xmlcore package has been reverted as of revision 50941. This problem report no longer applies, but a test has been added as part of the removal of the xmlcore package to ensure this does not re-surface.
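The mismatch reported above can be reproduced in miniature without the xml packages at all: two exception classes that happen to share a name but live in different modules are distinct types, so an `except` clause written against one never catches the other. A minimal sketch — the class names here are stand-ins for the real xml.sax/xmlcore classes, not the actual library code:

```python
# Stand-ins for xml.sax._exceptions.SAXParseException and the
# xmlcore.sax._exceptions.SAXParseException that 2.5b1 actually raised.
class SAXParseException(Exception):
    pass

class XmlcoreSAXParseException(Exception):
    pass

caught = False
try:
    # The parser raises the xmlcore flavour of the exception...
    raise XmlcoreSAXParseException("not well-formed")
except SAXParseException:
    # ...so this handler, written against the xml.sax name, never runs.
    caught = True
except XmlcoreSAXParseException:
    pass

print('caught it!' if caught else 'not caught')  # prints "not caught"
```

This is why reverting to a single public package name (rather than aliasing xml to xmlcore) fixed the report: exception identity in Python is class-object identity, not class-name equality.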
http://bugs.python.org/issue1513611
Feb 18, 2008 11:31 AM|Reddy4All|LINK
Hello, I am new to Visual Studio 2005. Recently we have been upgrading our entire application from .NET Framework 1.1 to 2.0, and the globalization/localization of my application is giving problems. Here is what my application is: there is one web service for header and footer purposes. In that header (web service call) we have a drop down list for language selection. Once the user selects the language in the header, my application needs to localize accordingly. My application contains master pages and content pages. I tried creating base pages and writing a separate class for setting up the thread culture in the master page PreRender event, but no luck. When I change the browser default language the entire site picks up the correct localized resource files, but when I change the dropdown list in the header it is not able to pick up the localized resource file. One more thing: whenever I change the dropdown list of languages in the header, it sets the value in cookies. So how can I solve this issue? Can someone help me on this? How can I localize my static and dynamic content with master pages? I'll appreciate it if you can help me on this. Thanks, Reddy

Feb 18, 2008 11:51 AM|Tok Bek|LINK
Hi, build a "base class" that inherits from System.Web.UI.Page. In the base class, override the Page.InitializeCulture() method and set the CurrentThread culture. Then inherit all your pages (not your master pages) from this base class. Tok

Feb 18, 2008 12:30 PM|Reddy4All|LINK
Hi Tok, thanks for your response. I tried this but it did not work. I implemented the base class, overrode InitializeCulture() and set the current thread culture, but it works only when I change the language of my browser, not when I change the dropdown list language in my header. Thanks

Feb 18, 2008 12:53 PM|Tok Bek|LINK
1. What action does this dropdown cause? Does it raise a server side event? Client side?
2. How do you set the culture? By the cookie? When do you set the cookie?
Can you post some code?

Feb 18, 2008 02:06 PM|Reddy4All|LINK
Once the language is changed from the header dropdown list, then in my master page PreRender event I am setting the following code. Here strLanguage is the value selected from the dropdown:

HttpCookie myCookie = new HttpCookie("myLanguage");
myCookie.Value = strLanguage;
Response.SetCookie(myCookie);

In Global.asax:

void Application_BeginRequest(object sender, EventArgs e)
{
    string strLang = string.Empty;
    HttpCookie myCookie = Request.Cookies["myLanguage"];
    if (myCookie != null && myCookie.Value != null)
        strLang = myCookie.Value;
    System.Threading.Thread.CurrentThread.CurrentUICulture = System.Globalization.CultureInfo.GetCultureInfo(strLang);
    System.Threading.Thread.CurrentThread.CurrentCulture = System.Globalization.CultureInfo.CreateSpecificCulture(strLang);
}

Feb 18, 2008 02:42 PM|Tok Bek|LINK
OK, you have a few problems as I see it. You are setting the cookie after you set the culture: BeginRequest occurs before the PreRender event. Also, InitializeCulture() occurs after BeginRequest (that's why it ignores your settings and takes the browser's). As I recommended before, put your code in the InitializeCulture method and remove it from BeginRequest. Make sure that strLang is in the right format ("en-us", "he-il", etc.).

protected override void InitializeCulture()
{
    base.InitializeCulture();
    // Here you need to add code that:
    // 1. checks whether a cookie exists
    // 2. sets the cookie from the dropdown (if needed)
    // 3. sets strLang
    Thread.CurrentThread.CurrentUICulture = new CultureInfo(strLang);
    Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(strLang);
}

This should work. Good luck, Tok

Feb 18, 2008 03:05 PM|Reddy4All|LINK
Thanks again...
Here is the clsBasePage class code:

using System.Globalization;

namespace LocalizationSample
{
    public class clsBasePage : Page
    {
        public clsBasePage()
        {
        }

        protected override void InitializeCulture()
        {
            // Default to the invariant culture
            string strCulture = string.Empty;
            HttpCookie myAppCookie = Request.Cookies["myLanguage"];
            if (myAppCookie != null && myAppCookie.Value != null)
            {
                strCulture = myAppCookie.Value;
                Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo(strCulture);
                Thread.CurrentThread.CurrentUICulture = CultureInfo.CreateSpecificCulture(strCulture);
            }
            Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo(strCulture);
            Thread.CurrentThread.CurrentUICulture = CultureInfo.CreateSpecificCulture(strCulture);
            base.InitializeCulture();
        }
    }
}

Content pages are inherited from clsBasePage. But when I change the language from the header I am getting the last value in my cookie, i.e. after changing to a new language it is still referring to the old value. How can I set the language code for my master pages' static and dynamic content?

Feb 18, 2008 03:24 PM|Tok Bek|LINK
Again, you need to handle the dropdown request here, in this function - InitializeCulture() - and not in the master page PreRender event. The master page PreRender event occurs after InitializeCulture.

Reddy4All: How can I set the language code for my master pages' static and dynamic content?
Sorry, but I don't understand your question.

Feb 18, 2008 04:39 PM|Tok Bek|LINK
Well, for dynamic content, there is no "magic" [:)]. For each item in your DB, you will need to attach some variable indicating its language, and then query your DB using this variable.

Feb 18, 2008 05:52 PM|Tok Bek|LINK
I can give you guidelines for how I think it should be implemented.
In your DB create a Languages table:

Languages_Table
langID   langName
------   --------
1        English
2        French

Each content item should have a foreign key to the Languages_Table:

Articles_Table
ArticleID   ArticleText         langID
---------   -----------------   ------
17          Some English Text   1
18          Some French Text    2

In your web app maintain a list of the language DB keys/IDs. You can put this code in your global.asax:

protected void Application_Start(object sender, EventArgs e)
{
    Hashtable LangCulturesHT = new Hashtable();
    LangCulturesHT.Add("en-us", 1);
    LangCulturesHT.Add("fr-fr", 2);
    Application["LangDbIds"] = LangCulturesHT;
}

Now when you need to get data from the DB you can use it in this way:

int LangID = (int)((Hashtable)Application["LangDbIds"])[Thread.CurrentThread.CurrentCulture.Name];

"SELECT ArticleID, ArticleText FROM Articles_Table WHERE langID = " + LangID.ToString()

Something like that [:)]

Feb 19, 2008 04:02 PM|Tok Bek|LINK
I guess everything works OK [:)]

15 replies. Last post Feb 19, 2008 04:45 PM by Reddy4All
https://forums.asp.net/t/1221456.aspx?Globalization+with+Masterpages+Issue+
Add {% querystring %} template tag to add, remove, and update querystring parameters

Review Request #9712 — Created March 1, 2018 and submitted

Previously, it was only possible to update a single query parameter with {% querystring_with %}. However, in the name of future proofing, it became necessary to make a new template tag, {% querystring %}, that can take an arbitrary number of key-value pairs with different modes, in the form of:

{% querystring "mode" "key1=value1" "key2=value2" %}

The three different modes are:

"remove" - Will remove keys from the query string.
"append" - Will append values for its given key without overwriting.
"update" - Will add to or replace part of a query string.

Ran unit tests.

Thanks for the patch Mandeep! I've left some mostly-stylistic comments here as it looks pretty good. However, I think it might be better to take a different approach (which I outline in one of the comments). As you fix issues locally, make sure to mark them as fixed here and then, once you've fixed them all, commit and re-post (rbt post -u). Thanks again!

Your summary should be in the imperative mood, i.e., it should read like you are giving a command or order. How about: Update querystring_with to take an arbitrary number of key-value pairs

You have several lines that contain trailing whitespace. You can set up Sublime to trim this on save via Preferences -> Settings -> User and adding the following: "trim_trailing_white_space_on_save": true and "ensure_newline_at_eof_on_save": true

Please wrap your description at 72 columns. You can install AutoWrap (Cmd+Shift+P, PCI (for package control install), AutoWrap) and set the following settings for the Git Commit syntax:

{ "rulers": [52, 72], "auto_wrap": true, "auto_wrap_width": 72 }

(To edit the syntax settings for a specific filetype you can open a new file, do Cmd+Shift+P, and type syntax git commit. Then go to Preferences -> Settings -> Syntax Specific and add those settings.)
Your description mentions RB but this is a change to Djblets. Your future change to RB should refer to those changes, since Djblets has no notion of review requests, etc. How about:

Previously, it was only possible to update a single query parameter with `{% querystring_with %}`. However, it is frequently the case that it would be handy to update the querystring from within the template rather than having to enumerate all possible cases in the view. Now `{% querystring_with %}` can take an arbitrary number of key-value pairs in the form of:

```html+django
{% querystring_with "key1" "value1" "key2" "value2" %}
```

Our description and testing done fields support markdown.

This documentation needs to be updated to indicate that the function can update multiple query parameters.

This function no longer takes these arguments. You will want to update this section.

I did some talking with Christian and I'm not convinced this is the best way to handle this. Instead, we may want to forgo the simple parsing that register.simple_tag gives us and do our own parsing so that we could do: {% querystring_with sorted='1' page='2' foo-bar='baz' %} I can work with you to write a templatetag parser that understands this syntax.

Since attr and value are used immediately, we can move them inline, like so: query[args[i].encode('utf-8')] = args[i + 1].encode('utf-8')

Undo this change.

tmp isn't a very descriptive variable name. How about render_result?

Can you add a trailing comma?

This is only used once so it could be moved inline.

t_dict isn't a very expressive variable name. How about expected_result? However, it can be moved inline (see next comment).

I think it would be fine to do:

self.assertEqual(
    parse_qs(render_result[1:]),
    {
        'bar': ['baz'],
        'foo': ['bar'],
    })

Can you add a trailing comma?

This should also be dedented so that it looks like:

render_result = t.render(Context({
    'request': request,
}))

See comments on previous function about moving things inline.
Checks run (1 failed, 1 succeeded) flake8
Checks run (1 failed, 1 succeeded) flake8
Checks run (1 failed, 1 succeeded) flake8

Your summary should be in the imperative mood, i.e., it should read like a command. For example: Update querystring_with to support an arbitrary number of attributes and values

Docstring summary should fit on a single line.

Missing context (which is a django.template.Context).

*args (tuple)

Documentation should always be sentence-case. How about: Multiple querystring fragments (e.g., "foo=1") that will be used to update the initial querystring.

These should no longer be here.

These tests would be easier to read if the initial state was:

{
    'foo': 'foo',
    'bar': 'bar',
    'baz': 'baz',
    'qux': 'qux',
}

and the templatetag changed them to something else.

We are going to want to make this into a new template tag. The reason being is that an old usage of {% querystring_with "foo" "bar" %} which used to do ?foo=bar now does ?foo&bar (and both are correct usages). So we will want this to do warnings.warn("...", DeprecationWarning) with instructions to use the new template tag (let's call it querystring_with_fragments).

Can you add unit tests for {% querystring_with_fragments "foo" %}
Can you add unit tests for {% querystring_with_fragments "foo=bar=baz" %}
Can you add unit tests for {% querystring_with_fragments "foo=" %}
Can you add unit tests for {% querystring_with_fragments "a=1" "a=2" "a=3" %}? The result should be ?a=1&a=2&a=3.
Can you add unit tests for {% querystring_with_fragments "=foo" %}
Can you add unit tests for {% querystring_with_fragments "foo bar=baz qux" %}
Can you add unit tests for {% querystring_with_fragments "a=b=c=d" "e=f=g=h" %}

Typo: `context`. While technically correct, how about: The Django template rendering context. It is what we use elsewhere.

Undo this indentation.

context

See comment on other templatetag about this.

The description should not be indented, i.e. unicode: The new URL ...
Wrong templatetag :)

This should use the parse_qs method because dict iteration order is not guaranteed.

Mind doing bar: "bar" just to keep in line with other examples?

Mind ordering this foo then bar like above?

Checks run (1 failed, 1 succeeded) flake8
Checks run (1 failed, 1 succeeded) flake8

Your summary should be in the imperative mood (i.e., it should read like a command or order). How about: Add {% querystring %} template tag to add, remove, and update querystring parameters

Can you add unit tests with unicode keys and values with e.g. han characters?

six.moves before six.moves.urllib

Undo

This should be a header of Args and describe what it is in addition to its values. E.g.

Args:
    mode (unicode):
        How the querystring will be modified. This should be one of
        the following values:

        ``"update"``: Replace the values for the specified key(s) in
        the query string.

        ``"append"``: Add new values for the specified key(s) to the
        query string.

        ``"remove"``: Remove the specified key(s) from the query
        string. If no value is provided, all instances of the key
        will be removed.

This just returns the querystring, not the URL.

The < should line up with the c in code-block.

Instead of having this highly nested, we can pull things out into temporaries to make it more readable. How about:

for arg in args:
    parsed = QueryDict(arg)

    for key in parsed:
        key = key.encode('utf-8')
        values = [
            value.encode('utf-8')
            for value in parsed.getlist(key)
        ]
        query.setlist(key, values)

Can we add support to remove specific key-value pairs, in addition to removing the entire set? e.g. "remove" "a=4" would remove "a=4" from ?a=1&a=2&a=3&a=4, leaving a=1&a=2&a=3.

How about: query.pop(arg.encode('utf-8'), None) This does not require the try/except.

You can use the same approach I outlined above to make this more readable.

The tag is no longer called QuerystringWithFragments

Some minor style nitpicks. Otherwise this looks good to me.

The < in <a should be aligned with the c in the code-block above.
This message is no longer correct. It should include the update mode.

Can we format like the following example? This makes it clear it is a string literal and renders as a <code> element in the docs.

``'update'``: ....
``'append'``: ....
``'remove'``: ....

This should be first in the list of args.

This is missing its type.

These all need to be de-dented one space so that the first character lines up with the c in code-block. reStructuredText code blocks are only indented three spaces instead of the usual four that we use. You may want to enable "show indents" in ST3 (View -> Indentation).

The implementation of QueryDict.urlencode actually forces the text to bytes, so we don't need to encode the attrs or values here or below. See: implementation

This is no longer required. Six has moves support for this: from django.utils.six.moves.html_parser import HTMLParser

No blank line after function docstrings. Here and below.

This one doesn't assert that rendered_result.startswith('?').

""" on next line.

"args get" fits on previous line.

This blank line is unnecessary here and in all tests below.

No period at the end of test docstrings (because they print out as <DOCSTRING> ... OK). Here and below.

This fits on a single line.

You seem to be wrapping your docstrings at ~70 rather than 79. Please correct this here and below.

Checks run (1 failed, 1 succeeded) flake8

Again, mostly some formatting nits. I did have a few suggestions on ways to simplify a few things :)

Description has a grammar-o: "There are three different modes are"

This is not indented a multiple of four. (It is missing one space.)

Should not be indented relative to prior line.

Remove this blank line.

This is setting the same list in a loop len(parsed.getlist(attr)) times. This also doesn't need to encode values.
What this should be is: query.setlist(attr, parsed.getlist(attr))

Since we are appending every single entry in args into query, we can simply do:

for arg in args:
    query.update(QueryDict(arg))

QueryDict.update appends to lists instead of overwriting them.

I think this will miss some cases, e.g. {% querystring "remove" "x&y=1&z=" %}. Now, this is an edge case, but we should still handle it well. We know that in the above example it will result in: {'x': [''], 'y': ['1'], 'z': ['']} We can't really distinguish between z= and z in the argument, so we should treat them both the same.

for attr in parsed:
    values = parsed.getlist(attr)

    if values == ['']:
        # An empty value means either `attr` or `attr=` was provided.
        # In either case, remove all values.
        query.pop(attr, None)
    else:
        values = query.getlist(attr)

        for value in parsed.getlist(attr):
            try:
                values.remove(value)
            except ValueError:
                pass

We only need to do getlist once since we are modifying the same list in each iteration of the loop.

Can you surround "foo=1" with double backticks, e.g. ``"foo=1"``

The second for loop needs to be nested in the first, otherwise only the last element in args will be removed from the querystring.

Can you add a unit test that does multiple removes? e.g. {% querystring "remove" "a" "b" "c=1" %}

This needs to be indented to be aligned with the if. It is currently always executing because there is no break in the for loop.

This will actually be a django.template.RequestContext.

This should not be indented.

The key=value should be in quotes, right?

For "mode", is the equivalent always "update"? If so, we can say that explicitly. In fact, we can help further with examples:

warnings.warn(
    '{% querystring_with "%(attr)s" "%(value)s" %} is deprecated ...'
    'Please use {% querystring "update" "%(attr)s=%(value)s" %} instead.'
    % {
        'attr': attr,
        'value': value,
    },
    DeprecationWarning)

Too long for the line. Must fit in 79 characters. It's also missing an ending period.

See the type above.

This can be on the same line.
I think you can just do: query = context['request'].GET.copy() The copy will be mutable.

This is going to set the list of values for every value in the list of values. You should be able to remove the for loop here.

This doesn't seem right. We're overriding to_remove every time, meaning we're only ever getting the list for the last attribute. Was the rest of this meant to be within the for loop? This means to me that we're missing some very crucial unit tests somewhere.

This should raise a TemplateSyntaxError.

This is in the wrong import group.

The trailing period should remain. You only remove this for docstrings for unit tests themselves (as they're outputted to the terminal and shouldn't end in a period).

The tests in this module should be updated to check for warnings as well. Search the codebase for catch_warnings for examples.

Same here.

Needs a trailing period.

Last entries in a dictionary should always have a trailing comma.

I'd recommend just setting self.request in setUp() (not setUpClass(), since it'll be modified) and referencing it in all the tests.

""" on the next line.

"overridden".

Best to explicitly check the resulting value, rather than introducing another parsing/checking step. That just leads to problems, as more things can go wrong, and reviewers/future contributors have no idea what the result is even supposed to be. This applies to all tests.

"overridden" I'd also add "that" before "get".

You can start the template on the first line, like:

t = Template('{% load djblets_utils %}'
             '{% querystring .... %}')

Same for other tests.

""" on the next line.

"overridden"

""" on the next line.

No trailing period.

"existing"

""" on the next line.

"a key fragments" doesn't make sense. Maybe just "a key fragment?" What's a key fragment?

"non-existing"

""" on the next line.
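The three modes under review can be sketched with nothing but the standard library's urllib.parse. This is a stand-alone illustration of the intended semantics only — not the Djblets implementation, which operates on Django's QueryDict:

```python
from urllib.parse import parse_qs, urlencode

def modify_querystring(querystring, mode, *fragments):
    """Illustrate the {% querystring %} modes: update, append, remove."""
    # keep_blank_values=True so a bare "key" or "key=" survives parsing.
    query = parse_qs(querystring, keep_blank_values=True)

    for fragment in fragments:
        parsed = parse_qs(fragment, keep_blank_values=True)

        if mode == 'update':
            # Replace the values for each specified key.
            for key, values in parsed.items():
                query[key] = values
        elif mode == 'append':
            # Add new values without overwriting existing ones.
            for key, values in parsed.items():
                query.setdefault(key, []).extend(values)
        elif mode == 'remove':
            for key, values in parsed.items():
                if values == ['']:
                    # Bare "key" or "key=": drop every value for the key.
                    query.pop(key, None)
                else:
                    # "key=value": drop only the matching values.
                    remaining = [v for v in query.get(key, [])
                                 if v not in values]
                    if remaining:
                        query[key] = remaining
                    else:
                        query.pop(key, None)
        else:
            raise ValueError('unknown mode: %r' % mode)

    return urlencode(query, doseq=True)

print(modify_querystring('a=1&b=2', 'update', 'b=3'))      # a=1&b=3
print(modify_querystring('a=1', 'append', 'a=2'))          # a=1&a=2
print(modify_querystring('a=1&b=2', 'remove', 'b'))        # a=1
print(modify_querystring('a=1&a=2&a=3', 'remove', 'a=2'))  # a=1&a=3
```

Note how the remove branch mirrors the edge case discussed in the review: an empty parsed value list means the whole key is dropped, while an explicit value removes just that key-value pair.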
https://reviews.reviewboard.org/r/9712/
On Tue, 1 Nov 2005, Paul Eggert wrote:

>> "Theodoros V. Kalamatianos" <address@hidden> writes:
>>
>> lseek is valid on e.g. /dev/hda and people would not expect dd to
>> null their data till it reached the desired offset.
>
> True. I guess the algorithm should be to use lseek if possible, and
> to write nulls otherwise. Then ftruncate if possible.

I agree. I also think that ftruncate should happen regardless of the presence of of=. This would allow cases like `true | dd seek=1 >> file' to work properly.

>> Perhaps dd should output null bytes only on FIFOs ?
>
> I'd say it should output nulls if lseek fails for any reason.

Hmm, so all we need is some code in skip() to fall back to iwrite for STDOUT_FILENO.

>> On a related matter, what should dd do in the following case:
>>
>>   $ echo -n AB > f
>>   $ echo -n ab | dd bs=1 seek=1 >> f
>>
>> What should the contents of `f' be ?
>
> Just "ABab". That's a tricky one, since the ">>f" means that stdout
> is in append mode, which means all writes are appended to the end of
> the file regardless of the current seek position. So the "seek=1" is
> ineffective.

Yes, I realised this when I read the whole Std-Info-Man documentation.
Anyway, I _think_ that the following patch implements the desired behaviour:

diff -uNr coreutils-5.92/src/dd.c coreutils-5.92/src/dd.c
--- coreutils-5.92/src/dd.c     2005-10-01 08:54:57.000000000 +0300
+++ coreutils-5.92/src/dd.c     2005-11-02 04:00:34.000000000 +0200
@@ -1139,6 +1139,47 @@
       advance_input_offset (offset);
       return 0;
     }
+  /* Do not seek if offset is zero */
+  else if (offset == 0)
+    return 0;
+  else if (fdesc == STDOUT_FILENO)
+    {
+      memset (buf, 0, blocksize);
+
+      do
+        {
+          /* Try reading first */
+          ssize_t nseek = iread (fdesc, buf, blocksize);
+          if (nseek < 0)
+            {
+              /* Don't stop if the stream is open write-only
+                 Q: What is the proper error handling here ? */
+              if (errno != EBADF) {
+                error (0, errno, _("%s: cannot seek"), quote (file));
+                quit (EXIT_FAILURE);
+              }
+              nseek = 0;
+            }
+
+          /* Use write() for the remaining bytes */
+          if (nseek < blocksize)
+            {
+              nseek = iwrite (fdesc, buf, blocksize - nseek);
+
+              /* writes to the output are expected to succeed */
+              if (nseek < 0)
+                {
+                  error (0, errno, _("%s: cannot seek"), quote (file));
+                  quit (EXIT_FAILURE);
+                }
+              if (nseek == 0)
+                break;
+            }
+        }
+      while (--records != 0);
+
+      return records;
+    }
   else
     {
       int lseek_errno = errno;
@@ -1647,6 +1688,7 @@
       && (fd_reopen (STDOUT_FILENO, output_file, O_WRONLY | opts, perms) < 0))
     error (EXIT_FAILURE, errno, _("opening %s"), quote (output_file));
+    }

 #if HAVE_FTRUNCATE
   if (seek_records != 0 && !(conversions_mask & C_NOTRUNC))
@@ -1682,7 +1724,6 @@
         }
     }
 #endif
-    }

   install_signal_handlers ();

One stupid question: what non-seekable stream types exist, apart from a pipe to a program? Is there any stream type where lseek could fail, but there is stored data that could be overwritten?

Regards,

Theodoros Kalamatianos
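The seek-with-fallback algorithm under discussion — lseek when the descriptor supports it, NUL-padding when it doesn't (pipes, FIFOs) — can be sketched in a few lines of Python. This is an illustration of the algorithm only; dd itself implements it in C around its own iread/iwrite helpers:

```python
import os

def skip_output(fd, nbytes, blocksize=512):
    """Advance the write position of `fd` by `nbytes` bytes.

    Uses lseek when the file descriptor is seekable; otherwise (e.g.
    a pipe, where lseek fails with ESPIPE) falls back to writing NUL
    bytes so the downstream reader still sees the offset consumed.
    """
    try:
        os.lseek(fd, nbytes, os.SEEK_CUR)
        return
    except OSError:
        pass  # not seekable: emit zeros instead

    zeros = b'\0' * blocksize
    while nbytes > 0:
        nbytes -= os.write(fd, zeros[:min(blocksize, nbytes)])
```

On a regular file the lseek path leaves a hole (later extended by ftruncate, per the discussion above); on a pipe the reader receives the padding bytes explicitly.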
http://lists.gnu.org/archive/html/bug-coreutils/2005-11/msg00017.html
Hello fellow python lovers of daniweb.com. This isn't so much a question as the fact that I want to see how different people would go about using python to inject an SQL database. I made a script where you can access the ip and run a command. NOTE: Obviously I didn't give an actual database. Who am I kidding, you guys aren't stupid. Here's my injection script:

import MySQLdb

db = MySQLdb.connect(host="localhost",  # your host, usually localhost
                     user="USER",       # your username
                     passwd="PWD",      # your password
                     db="MySQLdb")      # name of the data base

# you must create a Cursor object. It will let
# you execute all the queries you need
cur = db.cursor()
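To make the injection mechanics concrete, here is a self-contained sketch using sqlite3 instead of MySQLdb so it needs no server (note that sqlite3 uses ? placeholders where MySQLdb uses %s). The table and values are made up for illustration — the point is the difference between string formatting and parameterized queries:

```python
import sqlite3

# In-memory database standing in for the MySQL server above.
db = sqlite3.connect(':memory:')
cur = db.cursor()
cur.execute("CREATE TABLE users (name TEXT, secret TEXT)")
cur.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "nobody' OR '1'='1"

# Vulnerable: string formatting lets the input rewrite the WHERE clause,
# producing ... WHERE name = 'nobody' OR '1'='1', which matches every row.
unsafe_rows = cur.execute(
    "SELECT secret FROM users WHERE name = '%s'" % user_input).fetchall()
print(unsafe_rows)  # [('hunter2',)]

# Safe: a parameterized query treats the input as a plain value.
safe_rows = cur.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print(safe_rows)    # [] -- no user is literally named "nobody' OR '1'='1"
```

The same payload against the MySQLdb cursor above would behave identically if the query were built with string interpolation, which is why the DB-API's parameter substitution exists.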
https://www.daniweb.com/programming/software-development/threads/485866/sql-injection-with-python
John Vines commented on ACCUMULO-802:
-------------------------------------

bq. I wasn't sure whether I should change the TableOperations functions to throw TableNamespaceNotFoundExceptions (and possibly break backwards compatibility), so right now they throw IllegalArgumentExceptions constructed from TableNamespaceNotFoundExceptions.

I think making TableNamespaceNotFoundException an extension of TableNotFoundException should be sufficient?

bq. In copying the format of TableOperations' create() function, I made the TableNamespaceOperations() create function take a TimeType for input, but I don't use it. I wasn't sure if we should apply that TimeType to all tables in the namespace or scrap that functionality for namespaces in general.

I am under the impression that there is another layer of configuration hierarchy of table configurations with the namespaces. And with that, TimeType is one. So having an alternative default TimeType for a namespace makes sense. However, if there is not a configuration hierarchy, then there should be no TimeType involved.

bq. I never looked into putting quotas/priorities on namespaces, although that was one of the original use cases.

We can defer this to 1.7+ as it's a nice to have, not a bare minimum for namespace implementation.

bq. I never implemented "aliasing" or having one table show in multiple namespaces.

Also defer, not a necessity.

bq. I never messed with the Monitor code to organize the tables by namespace (ACCUMULO-1480).

Defer, !necessity

bq. I changed the default initial table properties (that are applied when a table is created) to instead be applied to new namespaces (and the default namespace) rather than each table.

Makes sense.

bq. I wasn't sure if there should be any initial properties on the system namespace, so I left it empty (no default properties).

Not quite sure what this means.

bq. I didn't add default user permissions for namespaces (ACCUMULO-617 and ACCUMULO-1479).
I sorta think this should be there, as it would cause a lot of issues to add down the line.

bq. While running RandomWalk, one related error that I see every once in a while is that a table will have a reference to a namespace that no longer exists. (There are a few other errors that seem unrelated to table namespaces).

Probably has to do with ZK cache timings. May have to add some wait time in the test to allow propagation. Propagation time is something that should be expected with this behavior.

> table namespaces
> ----------------
>
>                 Key: ACCUMULO-802
>                 URL:
>             Project: Accumulo
>          Issue Type: New Feature

--
This message was sent by Atlassian JIRA
(v6.1#6144)
http://mail-archives.apache.org/mod_mbox/accumulo-notifications/201310.mbox/%3CJIRA.12611276.1349912970529.89066.1382125901959@arcas%3E
07 January 2010 10:16 [Source: ICIS news]

SINGAPORE (ICIS news)--Chevron Australia and Nippon Oil Corp have signed an agreement for the delivery of 300,000 tonnes/year of liquefied natural gas (LNG) for 15 years from the Gorgon Project in Western Australia, a statement from the US energy major said on Thursday.

Chevron Asia Pacific Exploration and Production (CAPEP) president Jim Blackwell said the agreement would help underpin the importance of the Gorgon Project to growing LNG markets. "The agreement is another step towards commercialising our equity natural gas in [...]"

PetroChina signed a $41bn (€28.6bn) deal to buy LNG from the Gorgon project in August 2009. The Gorgon Project is an offshore natural gas project with estimated natural gas resources equivalent to 6.7bn bbl of oil.

The Chevron statement said that the initial development at the project would include a three-train, 15m tonne/year LNG unit and a domestic gas plant.
http://www.icis.com/Articles/2010/01/07/9323104/chevron-australia-and-nippon-oil-sign-agreement-for-lng.html
I knew it'd been a while since I'd updated my blog, but the realization that it'd been nearly a year since my last post shocked me some. I mean, I've told people it'd been nearly a year, but didn't really believe it until I saw the date myself. Well, it's time to rectify that problem and re-emerge onto the blogging scene.

I've been working on a project over the last several months called "Play". Those of you who attended MEDC 2005 or Gamefest 2005 may have seen demos built on Play. It's a Peer to Peer Gaming Infrastructure, written on the Compact Framework, which allows for pluggable games and transports. It maintains your buddy lists and lets you play multiple games (both 2D Winforms games and Managed Direct3D Mobile games). A lot of interesting things have come out of the Play work, and I plan to start blogging regularly both on Play, and another project of mine (Aquarium.NET) which you may have seen at MEDC 2004 or PDC 2003.

So, without further ado, my first blog post from Play is a simple ResourceHelper class (). Retrieving resources can be a pain on CF, so I developed this class in Play to make things a little easier. It scans the resources from the calling assembly and will find case insensitive full and partial matches, and can even return both streams and byte arrays. My next post will be for playing sound, so stay tuned.

using System;
using System.IO;
using System.Reflection;
using System.Runtime.CompilerServices;

public sealed class ResourceHelper
{
    private static Stream GetStream(Assembly assembly, string ResourceName)
    {
        ResourceName = ResourceName.ToUpper();

        // First pass: look for an exact (case insensitive) match
        foreach (string name in assembly.GetManifestResourceNames())
        {
            if (name.ToUpper() == ResourceName)
                return assembly.GetManifestResourceStream(name);
        }

        // Second pass: fall back to the first partial match
        foreach (string name in assembly.GetManifestResourceNames())
        {
            if (name.ToUpper().IndexOf(ResourceName) >= 0)
                return assembly.GetManifestResourceStream(name);
        }

        return new MemoryStream(new byte[0], false);
    }

    /// <summary>
    /// Returns the requested resource as a Stream
    /// </summary>
    /// <param name="ResourceName">
    /// The name of the resource to retrieve
    /// </param>
    /// <returns>
    /// A stream for the requested resource on success.
    /// An empty stream on failure
    /// </returns>
    /// <remarks>
    /// GetStream first searches for an exact, case insensitive
    /// match to ResourceName.
    /// If the initial search fails,
    /// GetStream will search again and return the first
    /// resource for which ResourceName is a case insensitive
    /// sub-string
    /// </remarks>
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static Stream GetStream(string ResourceName)
    {
        return GetStream(Assembly.GetCallingAssembly(), ResourceName);
    }

    /// <summary>
    /// Returns the requested resource as a byte[]
    /// </summary>
    /// <param name="ResourceName">
    /// The name of the resource to retrieve
    /// </param>
    /// <returns>
    /// A byte[] for the requested resource on success.
    /// An empty byte[] on failure
    /// </returns>
    /// <remarks>
    /// GetBytes first searches for an exact, case insensitive
    /// match to ResourceName. If the initial search fails,
    /// GetBytes will search again and return the first
    /// resource for which ResourceName is a case insensitive
    /// sub-string
    /// </remarks>
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static byte[] GetBytes(string ResourceName)
    {
        Stream s = GetStream(Assembly.GetCallingAssembly(), ResourceName);
        byte[] bytes = new byte[s.Length];
        s.Read(bytes, 0, bytes.Length);
        s.Close();
        return bytes;
    }
}

--- jeh
This post is provided "As Is", with no warranties, and confers no rights.

ApplicationClass outlookApp = new ApplicationClass();
outlookApp.Logon(0);
// Get the application class and the tasksFolder
IFolder tasksFolder = outlookApp.GetDefaultFolder(OlDefaultFolders.olFolderTasks);

It is my intention to keep this blog primarily technical, but I feel the need to make an off-topic post today. For those that do not know, today is the 35th Anniversary of Neil Armstrong's famous first step out onto the moon, one of a few truly monumental events in the history of our world. If you turn on the TV or listen to the radio today, you're certain to hear at least some snippet about the anniversary, as well you should.
I've known about the upcoming anniversary for days, but I was reminded in the car this morning while driving my step-daughter to the airport. I was again reminded while watching the news at the airport (during the inevitable delay before boarding). I was amazed, and frankly somewhat dismayed, when the CNN anchor asked people to email in to discuss whether or not we should continue to push to the moon and, possibly, beyond. And, I must admit, I was quite shocked to see the number of people that responded with an emphatic "no", citing cost, risk, technology, and 'lack of need' as the barriers to our continued push into this vast frontier. There are certainly numerous 'reasons' to continue our efforts to push into space. And yet, the most compelling argument to continue our drive into unknown space is one beyond reason and logic; it is at the very core of what makes us human; it is, quite simply, "because it's there." Why do we celebrate Neil Armstrong's titanic first steps onto a new world? Why is he considered a true hero? Why was it a "giant leap for mankind"? The cost of those steps was enormous, the risks extreme, the technology insufficient, and the need small. Why, then, are men such as he - the Columbuses, the Lewis and Clarks, and the Magellans of the world - so celebrated, so revered, so well remembered that their stories are told even centuries after their accomplishments? Why do small boys and girls dream of one day being astronauts, being the first to step upon the surface of a strange new world? It is because facing the risks, paying the costs, and looking fear in the eye to discover the unknown are at the very heart of what makes us human... they are what define the human spirit... they are what set us apart from the other animals that share our home. It is these singular men and women that give us hope and fuel our dreams.
It is these people that prove we haven't given up, that life is something which, even in the darkest hour, is still worth fighting for. I, for one, wholeheartedly support our exploration of space, and support those men and women that give me, and future generations, a reason to dream.

Is there anything better than working from home? I'm sitting on my back porch today, wireless laptop at the ready, 10 month old Chocolate Lab at my feet, staring out at the beautiful rarity known as a "Sunny Seattle Day". Today, it was my step-daughter's doctor appointment that prompted the day at home. Tomorrow it will be the garage door repairmen. It's not that I don't work when I stay home. I actually got quite a bit of important work done today. There's just something rejuvenating about a relaxing day on the back porch, with the dog, staring out at the blue between the coding and the emails. It recharges the batteries in a way that a hectic weekend just can't touch. But then, I'm weird like that.

I've received a number of questions regarding how best to retrieve the Device ID. As a result, I've decided to provide the following class for your coding pleasure:

--- jeh
This posting is provided "AS IS", without warranties, and confers no rights.

using System;
using System.Runtime.InteropServices;

namespace DeviceID
{
    public sealed class DeviceIDException : Exception
    {
        public DeviceIDException() : base() {}
        public DeviceIDException(string message) : base(message) {}
        public DeviceIDException(string message, Exception innerException) : base(message, innerException) {}
    }

    /// <summary>
    /// Summary description for DeviceID.
    /// </summary>
    public sealed class DeviceID
    {
        private const int GuidLength = 16;
        private const Int32 ERROR_NOT_SUPPORTED = 0x32;
        private const Int32 ERROR_INSUFFICIENT_BUFFER = 0x7A;
        private const Int32 IOCTL_HAL_GET_DEVICEID = 0x01010054;

        private byte[] idBytes = null;
        private Guid idGuid = Guid.Empty;
        private string idString = String.Empty;

        public DeviceID()
        {
            byte[] buffer = new byte[20];
            bool idLoaded = false;
            uint bytesReturned;

            while(!idLoaded)
            {
                Array.Copy(BitConverter.GetBytes(buffer.Length), 0, buffer, 0, 4);
                try
                {
                    idLoaded = KernelIoControl(IOCTL_HAL_GET_DEVICEID, 0, 0, buffer,
                        (uint)buffer.Length, out bytesReturned);
                }
                catch (Exception e)
                {
                    throw new DeviceIDException("This platform may not support DeviceIDs", e);
                }

                if(!idLoaded)
                {
                    int error = Marshal.GetLastWin32Error();
                    if(error == ERROR_INSUFFICIENT_BUFFER)
                    {
                        buffer = new byte[BitConverter.ToUInt32(buffer, 0)];
                    }
                    else
                    {
                        // Some older PPC devices only return the ID if the buffer
                        // is exactly the size of a GUID, so attempt to retrieve
                        // the ID this way before throwing an exception
                        buffer = new byte[GuidLength];
                        idLoaded = KernelIoControl(IOCTL_HAL_GET_DEVICEID, 0, 0, buffer,
                            GuidLength, out bytesReturned);
                        if(idLoaded)
                        {
                            InitializeFromBytes(buffer);
                            return;
                        }
                        else
                        {
                            if(error == ERROR_NOT_SUPPORTED)
                            {
                                throw new DeviceIDException("This platform does not support DeviceIDs");
                            }
                            else
                            {
                                throw new DeviceIDException(String.Format(
                                    "Error Encountered Retrieving ID (0x{0})", error.ToString("X8")));
                            }
                        }
                    }
                }
            }

            int dwPresetIDOffset = BitConverter.ToInt32(buffer, 4);
            int dwPresetIDBytes = BitConverter.ToInt32(buffer, 8);
            int dwPlatformIDOffset = BitConverter.ToInt32(buffer, 12);
            int dwPlatformIDBytes = BitConverter.ToInt32(buffer, 16);

            idBytes = new byte[dwPresetIDBytes + dwPlatformIDBytes];
            Array.Copy(buffer, dwPresetIDOffset, idBytes, 0, dwPresetIDBytes);
            Array.Copy(buffer, dwPlatformIDOffset, idBytes, dwPresetIDBytes, dwPlatformIDBytes);
        }

        public DeviceID(Guid g)
        {
            idBytes = g.ToByteArray();
            idGuid = g;
        }

        public
        DeviceID(byte[] bytes)
        {
            InitializeFromBytes(bytes);
        }

        private void InitializeFromBytes(byte[] bytes)
        {
            idBytes = new byte[bytes.Length];
            Array.Copy(bytes, 0, idBytes, 0, bytes.Length);
        }

        /// <summary>
        /// DeviceIDs are only guaranteed to be Guids on PPC. On generic Windows CE,
        /// they can be any unspecified length
        /// </summary>
        public bool IsGuid
        {
            get { return (idBytes.Length == GuidLength); }
        }

        public Guid Guid
        {
            get
            {
                if(IsGuid)
                {
                    if(idGuid == Guid.Empty)
                    {
                        idGuid = new Guid(idBytes);
                    }
                    return idGuid;
                }
                else
                {
                    throw new DeviceIDException(String.Format("The DeviceID {0} is not a Guid", ToString()));
                }
            }
        }

        public override bool Equals(object obj)
        {
            if(obj is DeviceID)
            {
                DeviceID rhs = (DeviceID)obj;
                if(idBytes.Length == rhs.idBytes.Length)
                {
                    for(int i = 0; i < idBytes.Length; i++)
                    {
                        if(idBytes[i] != rhs.idBytes[i])
                        {
                            return false;
                        }
                    }
                    return true;
                }
            }
            return false;
        }

        public override int GetHashCode()
        {
            if(IsGuid)
            {
                return this.Guid.GetHashCode();
            }
            else
            {
                // The default GetHashCode for object is guaranteed to
                // always be the same for a given object, but not for
                // multiple objects with the same value.
                // We want a HashCode
                // that will always be the same for a given ID
                byte[] tempbytes = new byte[16];
                if(idBytes.Length > 16)
                {
                    Array.Copy(idBytes, 0, tempbytes, 0, 16);
                }
                else
                {
                    Array.Clear(tempbytes, 0, 16); // Should be unnecessary
                    Array.Copy(idBytes, 0, tempbytes, 0, idBytes.Length);
                }
                return (new Guid(tempbytes)).GetHashCode();
            }
        }

        public override string ToString()
        {
            if(idString == String.Empty)
            {
                if(IsGuid)
                {
                    idString = this.Guid.ToString();
                }
                else
                {
                    idString = "";
                    for(int i = 0; i < idBytes.Length; i++)
                    {
                        if(i == 4 || i == 6 || i == 8 || i == 10)
                        {
                            idString = String.Format("{0}-{1}", idString, idBytes[i].ToString("x2"));
                        }
                        else
                        {
                            idString = String.Format("{0}{1}", idString, idBytes[i].ToString("x2"));
                        }
                    }
                }
            }
            return idString;
        }

        public byte[] ToByteArray()
        {
            byte[] cpyBytes = new byte[idBytes.Length];
            Array.Copy(idBytes, 0, cpyBytes, 0, idBytes.Length);
            return cpyBytes;
        }

        [DllImport("coredll.dll", SetLastError=true)]
        private static extern bool KernelIoControl(
            uint dwIoControlCode,
            uint lpInBuf /* set to 0 */,
            uint nInBufSize /* set to 0 */,
            [In, Out] byte[] lpOutBuf,
            uint nOutBufSize,
            out uint lpBytesReturned);
    }
}

The Desktop Framework and the CF Framework have different Public Key Tokens as part of their strong names. The Desktop framework commonly uses two different Key Tokens (b77a5c561934e089 and b03f5f7f11d50a3a), while CF currently uses one (969db8053d3322ac). CF does not map Desktop references to CF assemblies, so any application referencing Desktop assemblies should never work on the Compact Framework because the references cannot be resolved at run time. "Never?!?" you say? Well, okay, not exactly never. There are two exceptions. The first exception is mscorlib.dll. Currently, all shipping frameworks (including the Desktop framework) treat any references to any mscorlib as a reference to their own mscorlib.
There are multiple reasons for this, but the primary reason is that without mscorlib, an app cannot even generate a managed exception, as all the basic types and exception types are housed in mscorlib. Therefore, any app which uses things housed entirely within mscorlib should work on any .NET Framework. That said, this is not a supported scenario and there are still cases where it will fail to work. Primarily, there are things that exist within the Desktop's mscorlib that are not supported on CF. A reference to something not supported on CF will still cause the app to fail when run against CF. The second exception is a bug that existed both in V1 and V1SP1 of CF, but was fixed in V1SP2 (and will remain fixed moving forward). This bug allowed, under certain circumstances, Desktop references to be retargeted to CF assemblies if the CF assembly was already loaded. This would happen, for instance, if, in a DLL, you referenced the Desktop System.Windows.Forms, but loaded the CF System.Windows.Forms in the application before using the DLL. Unfortunately, the way that Visual Studio .NET 2003's "Add New Project" wizard is structured, it is very easy to add a Desktop Class Library to a CF Project, so several applications were broken by this bug fix. To add a CF Class Library to an existing CF Project, remember to select "Smart Device Application", and then select "Class Library" on the first pane of the Wizard. -- jeh Disclaimer: This posting is provided "AS IS" with no warranties, and confers no rights.

My name is Jeremy Hance, and I'm an SDET on the Compact Framework team. Officially, SDET = Software Development Engineer in Test. In my position, I write code used to test the functionality of the Compact Framework. Primarily, I work with the Global Assembly Cache (GAC), the Loader, and Native Interop (including COM Interop, which is new to V2). My intentions for this blog are fairly straightforward.
I'm looking for a place where I can post answers to questions I frequently find myself answering, and a place for posting code samples/snippets that I find myself repeatedly providing. Of course, like most things on the web, blogs are an active and constantly evolving art form, so we'll just have to see where this blog goes.
http://blogs.msdn.com/jehance/default.aspx
BuildSteps¶

There are a few parent classes that are used as base classes for real buildsteps. This section describes the base classes. The "leaf" classes are described in Build Steps.

- setBuild(build)¶ This method is called during setup to set the build instance controlling this slave. Subclasses can override this to get access to the build object as soon as it is available. The default implementation sets the build attribute.

- setBuildSlave(build)¶ Similarly, this method is called with the build slave that will run this step. The default implementation sets the buildslave attribute.

- setDefaultWorkdir(workdir)¶ This method is called at build startup with the default workdir for the build. Steps which allow a workdir to be specified, but want to override it with the build's default workdir, can use this method to apply the default.

- setupProgress()¶ This method is called during build setup to give the step a chance to set up progress tracking. It is only called if the build has useProgress set. There is rarely any reason to override this method.

Execution of the step itself is governed by the following methods and attributes.

- startStep(remote)¶ Begin the step. This is the build's interface to step execution. Subclasses should override run to implement custom behaviors.

- run()¶ Execute the step. When this method returns (or when the Deferred it returns fires), the step is complete. The method's return value must be an integer, giving the result of the step -- a constant from buildbot.status.results. If the method raises an exception or its Deferred fires with failure, then the step will be completed with an EXCEPTION result. Any other output from the step (logfiles, status strings, URLs, etc.) is the responsibility of the run method. Subclasses should override this method. Do not call finished or failed from this method.

- start()¶ Begin the step. BuildSteps written before Buildbot-0.9.0 often override this method instead of run, but this approach is deprecated.
When the step is done, it should call finished, with a result -- a constant from buildbot.status.results. The result will be handed off to the Build. If the step encounters an exception, it should call failed with a Failure object. If the step decides it does not need to be run, start can return the constant SKIPPED. In this case, it is not necessary to call finished directly.

- finished(results)¶ Mark the step as finished, with the given result. This method must only be called from the (deprecated) start method.

- updateSummary()¶ Update the step summary, calling getCurrentSummary or getResultSummary as appropriate. New-style build steps should call this method any time the summary may have changed. This method is debounced, so even calling it for every log line is acceptable.

- getCurrentSummary()¶ Returns a dictionary containing status information for a running step. The dictionary can have step and build keys, each with unicode values. The step key gives a summary for display with the step, while the build key gives a summary for display with the build. New-style build steps should override this method to provide a more interesting summary than the default u"running", or to provide any build summary information.

- slaveVersion(command, oldversion=None)¶ Fetch the version of the named command, as specified on the slave. In practice, all commands on a slave have the same version, but the command name must still be passed.

- addCompleteLog(name, text)¶ Add a new logfile with the given name, with text as its content. This is often useful to add a short logfile describing activities performed on the master. The logfile is immediately closed, and no further data can be added.

LoggingBuildStep¶

- class buildbot.process.buildstep.LoggingBuildStep(logfiles, lazylogfiles, log_eval_func, name, locks, haltOnFailure, flunkOnWarnings, flunkOnFailure, warnOnWarnings, warnOnFailure, alwaysRun, progressMetrics, useProgress, doStepIf, hideStepIf)¶ The remaining arguments are passed to the BuildStep constructor.

Warning: Subclasses of this class are always old-style steps. As such, this class will be removed after Buildbot-0.9.0. Instead, subclass BuildStep and mix in ShellMixin to get similar behavior.
This subclass of BuildStep is designed for steps that run a remote command and track its log output; it handles disconnected slaves and finishes with a status of RETRY.

- logfiles¶ The logfiles to track, as described for ShellCommand. The contents of the class-level logfiles attribute are combined with those passed to the constructor, so subclasses may add log files with a class attribute:

class MyStep(LoggingBuildStep):
    logfiles = dict(debug='debug.log')

Note that lazy logfiles cannot be specified using this method; they must be provided as constructor arguments.

- startCommand(command)¶ Note: This method permits an optional errorMessages parameter, allowing errors detected early in the command process to be logged. It will be removed, and its use is deprecated. Handle all of the mechanics of running the given command. This sets up all required logfiles, keeps status text up to date, and calls the utility hooks described below. When the command is finished, the step is finished as well, making this class unsuitable for steps that run more than one command in sequence. Subclasses should override start and, after setting up an appropriate command, call this method.

def start(self):
    cmd = RemoteShellCommand(...)
    self.startCommand(cmd, warnings)

To refine the status output, override one or more of the following methods. The LoggingBuildStep implementations are stubs, so there is no need to call the parent method.

- commandComplete(command)¶ This is a general-purpose hook method for subclasses. It will be called after the remote command has finished, but before any of the other hook functions are called.

- createSummary(stdio)¶ This hook is designed to perform any summarization of the step, based either on the contents of the stdio logfile, or on instance attributes set earlier in the step processing. Implementations of this method often call e.g., addURL.

- evaluateCommand(command)¶ This hook should decide what result the step should have. The default implementation invokes log_eval_func if it exists, and looks at rc to distinguish SUCCESS from FAILURE.
The remaining methods provide various ways to refine the step's status text; the default implementation is usually the easiest method to override, and appends a string describing the step status if it was not successful. Each of the utility methods below accepts an abandonOnFailure argument which, if true, will abandon the entire buildstep on command failure. This is accomplished by raising BuildStepFailed. These methods all write to the stdio log (generally just for errors). They do not close the log when finished.

- runRmdir(dir, abandonOnFailure=True)¶ Remove the given directory, using the rmdir command. Returns False on failure.

- runMkdir(dir, abandonOnFailure=True)¶ Create the given directory and any parent directories, using the mkdir command. Returns False on failure.

- pathExists(path)¶

  :param path: path to test
  :returns: Boolean via Deferred

  Determine if the given path exists on the slave (in any form - file, directory, or otherwise). This uses the stat command.

Unrecognized arguments are rejected with a configuration error. The return value should be passed to the BuildStep constructor.

.. py:method:: makeRemoteShellCommand(collectStdout=False, collectStderr=False, **overrides)

This method constructs a RemoteShellCommand instance.
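The run() contract described above (return an integer result constant; an exception completes the step with EXCEPTION) can be sketched in a few lines. Note that the BuildStep class and the result constants below are simplified stand-ins written for this illustration; in real code they come from buildbot.process.buildstep and buildbot.status.results, and startStep is driven through Deferreds:

```python
import os

# Stand-ins for the constants defined in buildbot.status.results.
SUCCESS, WARNINGS, FAILURE, SKIPPED, EXCEPTION, RETRY = range(6)

class BuildStep:
    """Toy stand-in for buildbot.process.buildstep.BuildStep."""
    def startStep(self, remote):
        # The real startStep works with Deferreds; here we just call run()
        # and map a raised exception to the EXCEPTION result, as described above.
        try:
            return self.run()
        except Exception:
            return EXCEPTION

class FileExistsStep(BuildStep):
    """A toy new-style step: succeeds if the given path exists."""
    def __init__(self, path):
        self.path = path

    def run(self):
        # run() must return an integer result constant; any logging or
        # status output would also be produced here.
        return SUCCESS if os.path.exists(self.path) else FAILURE

class BrokenStep(BuildStep):
    """A step whose run() raises is completed with an EXCEPTION result."""
    def run(self):
        raise RuntimeError("boom")
```

The point of the sketch is only the shape of the contract: run() computes and returns a result, and never calls finished or failed itself.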
http://docs.buildbot.net/0.8.12/developer/cls-buildsteps.html
Our new RadControls for Windows 8 offers a number of controls to make your application look and perform better. One of the key features is the new set of HubTiles. These are easy to work with, and greatly simplify the job of creating a terrific tile for your application. Remember, Microsoft's advice is "Invest in a Great Tile." Now that's easy to do. To see how to create great tiles, we'll create a small application that hosts two tiles: one an image with "peek" text that appears halfway and then all the way over the image, and the second a flipping tile that provides one set of information on the front and another set on the back. We'll also look briefly at how to handle events such as tapping on the tile. To get started, create a new Blank application for Windows 8 Store, and name it TelerikHubTiles. Start by adding a reference to RadControls For Windows 8, and then, in MainPage.xaml, add a using statement,

xmlns:telerikPrimitives="using:Telerik.UI.Xaml.Controls.Primitives"

Give the grid three rows and add a TextBlock to the top row to display messages based on tapping the tiles,

<Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <TextBlock Grid.Row="0" x:Name="Message" />

You are now ready to add a RadSlideHubTile to your page. Drag one from the toolbox or add it by hand. Dragging from the toolbox has the advantage that the needed namespace will be added for you automagically. We'll set four properties on the Tile: Height, Width, the GroupTag and an event handler for being tapped. The GroupTag allows for coordination among multiple tiles.

<telerikPrimitives:RadSlideHubTile Width="150" Height="150"
    telerikPrimitives:HubTileService.GroupTag="FirstTiles"
    Tapped="RadSlideHubTile_Tapped_1">

We need to fill in the top and bottom content for the slide hub. The top content "slides" over the bottom content. In our case, we want an image as the bottom content and the name for the image as the top content.
We’ll create these with an image control and a TextBlock respectively. <telerikPrimitives:RadSlideHubTile.TopContent> <TextBlock Text="Jesse Liberty" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="22" FontWeight="Light" Margin="12,12,12,12" /> </telerikPrimitives:RadSlideHubTile.TopContent> <telerikPrimitives:RadSlideHubTile.BottomContent> <Image Source="../Images/Jesse.jpg" Stretch="UniformToFill" /> </telerikPrimitives:RadSlideHubTile.BottomContent> lerikPrimitives:RadSlideHubTile> Notice that the Image source is set to “../Images/Jesse.jpg.” Create an Images folder and either copy in the picture from the downloadable source, or add your own image and fix up the name. In either case, once the image is in the file, be sure to right click on the Images folder in Visual Studio and choose Add->Existing Item… and add the image to the project. That is it for our first tile. Let’s create a second tile, this time one that will flip from front to back. To do so, create a second stack panel (on the row below the previous one) and in it add a RadCustomHubTile, <StackPanel Grid. <telerikPrimitives:RadCustomHubTile Width="150" Height="150" telerikPrimitives:HubTileService.GroupTag = "FirstTiles" Tapped="RadCustomHubTile_Tapped_1" To flesh out the CustomHubTile we need to create the Front and the Back of the tile. Let’s start with the front where we will add three TextBlocks inside a stack panel. <telerikPrimitives:RadCustomHubTile.FrontContent> <StackPanel Margin="16,0,0,0" VerticalAlignment="Center"> <TextBlock Text="18" FontSize="80" FontWeight="SemiLight" /> <TextBlock Text="PENDING" FontSize="14" Margin="4,-12,0,0" /> <TextBlock Text="Messages" FontSize="12" Margin="4,14,0,0" /> </StackPanel> </telerikPrimitives:RadCustomHubTile.FrontContent> The back content is very similar; once again we use a stack panel and a set of TextBlocks, though we could use virtually any content as we’ll see in an upcoming blog post. 
<telerikPrimitives:RadCustomHubTile.BackContent>
    <StackPanel Margin="16,0,0,0" VerticalAlignment="Center">
        <TextBlock Text="5" FontSize="80" FontWeight="SemiLight" />
        <TextBlock Text="OVERDUE" FontSize="14" Margin="4,-12,0,0" />
        <TextBlock Text="Tasks" FontSize="12" Margin="4,14,0,0" />
    </StackPanel>
</telerikPrimitives:RadCustomHubTile.BackContent>

With the XAML in place, we need only fill in the event handlers for the two tiles, which we do in the code-behind page. When the user clicks on the first tile, we'll display the message "You tapped my tile." When the user clicks on the second tile, we'll display the message "You tapped tile #2!" Of course, in a "real" application you will take whatever action is appropriate to the tile being tapped.

private void RadSlideHubTile_Tapped_1( object sender, TappedRoutedEventArgs e )
{
    Message.Text = "You tapped my tile.";
}

private void RadCustomHubTile_Tapped_1( object sender, TappedRoutedEventArgs e )
{
    Message.Text = "You tapped tile #2!";
}

That's it, run the application, see the animated tiles and tap on them to activate the event handlers.
http://www.telerik.com/blogs/creating-a-great-windows-8-tile-with-xaml-radcontrols-for-windows-8
Mic.

> I am not against that (I am even in favor), but it fooled my
> intuition coming from other languages.
> Maybe others will appreciate that.

One of the important things to know about python is the namespace concept. In a namespace (be it local or global) a name is bound to an object. In python, 'i = 3' simply (re)binds the name 'i' to the integer object 3. To see the difference, in C, for example, a name points to a memory location and 'i=3;' will do something to the memory location. Python's namespace abstraction is truly powerful although sometimes surprising if you are used to thinking of 'variables' and 'memory locations'.

cheers, holger
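To make the binding-vs-mutation distinction concrete, here is a short illustration (the variable names are arbitrary):

```python
# Rebinding a name vs. mutating an object.
i = 3          # the name 'i' is bound to the int object 3
j = i          # 'j' is bound to the *same* object, not to a copy of a memory cell
i = i + 1      # rebinds 'i' to a new object (4); 'j' still refers to 3
print(i, j)    # 4 3

xs = [1, 2]
ys = xs        # two names, one list object
ys.append(3)   # mutates the shared object in place...
print(xs)      # [1, 2, 3] -- the change is visible through both names
```

So assignment never changes what another name sees; only mutating methods like append do, because both names refer to the same object.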
https://mail.python.org/pipermail/python-list/2003-January/191116.html
Hello, I am having some problems figuring this out. I need to add CLR functions to MSSQL 2008 that use an external web service reference. I think I will have to create a project that uses SQL CLR and then add another project that uses a form or command line C# to access the namespace of the service reference. I am having some trouble putting this all together. Any help would be appreciated. So basically: have SQL Server call a CLR function from a C# project, which in turn calls a form in another project to run the reference. If there are any easier ways of doing this please let me know.

Hi PTCScott, Could you tell me which VS IDE and project type you are creating on your side? My understanding is that you have created a SQL Server CLR project and you want to add the C# library reference to this SQL Server CLR project, am I right? Reference: You would execute 'CREATE ASSEMBLY' on your external DLL to load it into your database. Best Regards, Jack
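For reference, the 'CREATE ASSEMBLY' route Jack mentions looks roughly like the T-SQL below. All names and the path here are hypothetical placeholders; adjust them to your own assembly, and note that an assembly that calls out to a web service needs more than the default SAFE permission set:

```sql
-- Hypothetical names/path: adjust to your environment.
CREATE ASSEMBLY MyServiceWrapper
FROM 'C:\libs\MyServiceWrapper.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;  -- SAFE cannot make network calls
GO

-- Bind a T-SQL function to a static method in the loaded assembly.
CREATE FUNCTION dbo.CallService(@input NVARCHAR(200))
RETURNS NVARCHAR(MAX)
AS EXTERNAL NAME MyServiceWrapper.[MyNamespace.ServiceCaller].Call;
```

With this in place the database calls the CLR method directly, so no intermediate form application is needed.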
https://social.msdn.microsoft.com/Forums/en-US/8d1076fb-b631-4186-9931-7f4959973992/visual-studio-clr-to-sql-with-a-service-reference?forum=visualstudiogeneral
I've written a small Haskell program to print the MD5 checksums of all files in the current directory (searched recursively). Basically a Haskell version of md5deep. All is fine and dandy except if the current directory has a very large number of files, in which case I get an error like: <program>: <currentFile>: openBinaryFile: resource exhausted (Too many open files) It seems Haskell's laziness is causing it not to close files, even after its corresponding line of output has been completed. The relevant code is below. The function of interest is getList. import qualified Data.ByteString.Lazy as BS main :: IO () main = putStr . unlines =<< getList "." getList :: FilePath -> IO [String] getList p = let getFileLine path = liftM (\c -> (hex $ hash $ BS.unpack c) ++ " " ++ path) (BS.readFile path) in mapM getFileLine =<< getRecursiveContents p hex :: [Word8] -> String hex = concatMap (\x -> printf "%0.2x" (toInteger x)) getRecursiveContents :: FilePath -> IO [FilePath] -- ^ Just gets the paths to all the files in the given directory. Are there any ideas on how I could solve this problem? The entire program is available here: Edit: I have plenty of files that don't fit into RAM, so I am not looking for a solution that reads the entire file into memory at once.
http://ansaurus.com/question/2981582-haskell-lazy-i-o-and-closing-files
Reverse engineering the Nokia N93 QVGA LCD Welcome to what will probably be the last in the series of articles in which I reverse engineer one of the Nokia QVGA cellphone displays from the pre-smartphone era. I think that by now I’ve covered every possible aspect of these incredibly cost effective little displays and hopefully opened up new avenues for hobbyists needing a richer display experience than they can get with a simple character LCD. If you’ve been following these articles then you’ll know that I’ve successfully reverse engineered the 2730, 6300, N82 and N95 8Gb displays. Each one is readily and cheaply available on ebay making them ideal for incorporation into your microcontroller projects. All of my reverse engineering efforts have been accompanied by free, open-source libraries for the Arduino and the STM32. In this final article I will tackle the N93 2.4″ display, which at the time of writing is available for just a few pounds, dollars or euros on ebay. Let’s see how I get on. The physical display The physical LCD is the same layout as all the previous Nokia LCDs that I’ve tackled. A short FPC cable runs from the LCD and is terminated by a board-to-board connector at the end. The connector The type of connector is usually a primary source of concern when reverse-engineering a cellphone LCD. Fortunately for me I discovered early on that the 24-pin 0.4mm pitch connector is made by a company called JST. The part number is 24R-JANK-GSAN-TF. Here’s the datasheet. The connector is readily available in quantity direct from the JST web-store. It’s also available from cellphone repair stores and at the time of writing an enterprising reader is even selling them in single units on ebay. Determining the connector pinout was not difficult. A little googling quickly unearthed the cellphone repair manual and schematic. Once the LCD connector was located in the schematic it was a simple matter of matching up pins 1..24 with the physical device. 
If you look at the photograph of the connector you will see that a number of the pins run directly into a solid plane on the FPC cable. These are the ground connections. With that knowledge it’s trivial to determine where the connector pin 1 is located. The backlight I can see from the schematic that the backlight is the same as the 6300 and N82. It consists of 4 white LEDs in series that we must drive ourselves. By far the best way to do this is to use an IC designed to boost voltage and to supply a constant current to the LED string. We will use the same NCP5007 constant current LED driver from OnSemi that we’ve been using in all our Nokia TFT designs so far. It’s cheap, simple to configure and requires few external components to make it work. The NCP5007 will be configured to supply a constant 20mA through the backlight circuit and we will use a PWM signal on the ENABLE pin to vary the brightness. An Arduino development board It’s become something of a tradition for me to prove the reverse engineering by designing a development board for the Arduino Mega and I’ll continue that tradition here. Level Conversion All my previous boards have featured level conversion using the NXP 74ALVC164245DL device. We need level conversion because the Arduino Mega has 5V GPIO levels and I have to assume that these screens are designed for a maximum of around 3.3V. I’ve lost count of the number of TFT controller datasheets that I’ve read and I’ve never yet seen one that is designed for 5V I/O. For this board I decided to use a different method of level conversion using a pair of logic buffer ICs. The device I use is the MC74VHC50 hex buffer from OnSemi, and I need two of them to handle all the signals on the board. These ICs have 5V-tolerant inputs and a variable output determined by its VCC pin. The reason for this change in design is that these logic ICs are easier to get hold of, cheaper to buy and slightly easier to work with due to the wider pin pitch. 
Before producing the LCD development board I verified that the level conversion strategy would work by hacking up some wires to the leads of the IC and verifying that an input level of 5V would be correctly converted down to 3.3V. The image from my little pocket oscilloscope shows that the test square wave is perfectly reproduced on the IC output pin. You’ll have to take my word for it that the level is 3.3V in that image. Schematic The schematic image shows the display connector, level converter, backlight circuit and arduino connector all linked together. I decided to add a small amount of capacitance to the active-low RESET line to enhance its stability. I’ve never seen any unexpected resets in any of my boards but I note that all the official Nokia schematics add capacitance here so I’ve followed suit this time. Click on this thumbnail for the schematic PDF Here’s the complete bill of materials for this schematic. PCB layout After designing the schematic the next step is to lay out the printed circuit board and route the traces between the components. My target board size is 50x50mm square so that I can use the cost-effective manufacturing service provided by ITead studios in China. The most critical part of this layout is the positioning of the LCD connector. It has to be accurately placed so that the LCD, when connected, wraps around the board and sits perfectly on the other side mounted on double-sided sticky pads to lift it clear of the board traces. In the above image pin 1 of the LCD connector is down the bottom right. As usual these days I don’t even consider the auto-router built in to the design package. Manual routing is time consuming but the results are always better than an auto-router. When the design was completed to my satisfaction I generated the Gerber files and sent them off to ITead for manufacturing. About two weeks later they arrived. As usual the boards from ITead are flawless and I can get started with the build. 
My process is essentially unchanged from what I usually do. After tinning the contacts on the board I place the difficult components (the ICs, the connector and the inductor) with the aid of some flux and bake them on a hot plate. The solder on the tinned pads melts and the components drop into place. The remaining discrete components are much easier to handle and I simply reflow them into place using my hot air gun and a pair of fine tweezers. When all is complete I examine the results under my binocular microscope and finally wash the boards to remove excess flux.

Testing the board

Only now, several weeks after starting, do I get to see if the whole thing works. I'm hopeful though, given the success of the previous reverse-engineering efforts. I quickly determined that the LCD controller is much the same as the 2730, 6300 and N82 with only minor differences around the commands that control the orientation. We don't really know the official identity of this controller but it's close enough to the MagnaChip MC2PA8201 that we can work from its datasheet with a high degree of confidence. Even better, this batch of screens that I got on ebay appears to be of a very high quality. They're bright, the colours are accurate and the viewing angle is very wide. They were advertised as 'genuine Nokia' and certainly appear to perform as if that's what they are. It's not all plain sailing though. Despite my best efforts I've not managed to discover the correct sequence to do vertical scrolling. If any Nokia engineers are out there reading this and want to anonymously drop me a tip then my contact form awaits your keystrokes. Go on… I have provided full support for the N93 in my Arduino library, version 2.4.0 and above, available from my downloads page. To use the N93 panel you simply need to include its header file and change the name of the declared driver to match the N93.
#include "NokiaN93.h"

using namespace lcd;

typedef NokiaN93_Portrait_262K TftPanel;

All the examples except those that feature hardware scrolling, such as the terminal demo, are supported. The panels that I have obtained on ebay all support 16M and 262K colour modes in portrait and landscape orientation. Driver support is provided for the 64K colour mode but the panels that I have obtained don't support it. Here's a short demo video on YouTube that shows the LCD board in action. Click on the 'YouTube' logo at the bottom right to watch it in a larger format at the YouTube site.

Gerbers available

I'm afraid that the additional boards that I built have now all gone but I've now made the Gerber CAM files available from my downloads page so you can get your own printed simply by uploading the package to a service such as Seeed Studio, ITead Studio or Elecrow.
https://andybrown.me.uk/2013/01/26/nokia-n93-lcd/
On Fri, 8 Oct 2010 13:00:25 -0600 Grant Likely <grant.likely@secretlab.ca> wrote:

> On Fri, Oct 8, 2010 at 12:52 PM, David Miller <davem@davemloft.net> wrote:
> > From: Andres Salomon <dilinger@queued.net>
> > Date: Fri, 8 Oct 2010 11:34:24 -0700
> >
> >> It's unknown why openprom.h was being exported; there doesn't seem
> >> to be any reason for it currently, and it creates headaches with
> >> userspace being able to potentially use the structures in there.
> >> So, don't export it anymore.
> >>
> >> Signed-off-by: Andres Salomon <dilinger@queued.net>
> >
> > Acked-by: David S. Miller <davem@davemloft.net>
>
> I suppose it makes sense for me to pick this one up into my tree so it
> is grouped with the rest of the pdt patches. I'll pick it up once
> Andres reposts the series.
>
> g.

Ok, I sent a new version of the phandle stuff (which was easier than expected, and doesn't affect any other patches).

So to summarize, what's pending is:

1- (sparc: stop exporting openprom.h header) Acked by Dave
2- ([v3] sparc: convert various prom_* functions to use phandle) Acked by Dave
3- (sparc: break out some PROM device-tree building code out into drivers/of) Acked by Dave
4- (sparc: make drivers/of/pdt.c no longer sparc-only) Acked by Dave
5- (of: no longer call prom_ functions directly; use an ops structure) Acked by Dave
6- (of: add of_pdt namespace to pdt code) Acked by Dave
7- (of: add package-to-path support to pdt)
8- (x86: OLPC: add OLPC device-tree support)

The make-of-build-on-x86 stuff is already in your tree.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org
More majordomo info at
read the FAQ at
http://lkml.org/lkml/2010/10/8/436
NumPy by Ivan Idris
Author: Ivan Idris
Language: eng
Format: epub
Publisher: Packt Publishing

What just happened?

We covered three bit twiddling hacks—checking whether the signs of integers are different, checking whether a number is a power of 2, and calculating the modulus of a number that is a power of 2. We saw the NumPy counterparts of the operators ^, &, <<, and < (see bittwidling.py):

from __future__ import print_function
import numpy as np

x = np.arange(-9, 9)
y = -x
print("Sign different?", (x ^ y) < 0)
print("Sign different?", np.less(np.bitwise_xor(x, y), 0))
print("Power of 2?\n", x, "\n", (x & (x - 1)) == 0)
print("Power of 2?\n", x, "\n", np.equal(np.bitwise_and(x, (x - 1)), 0))
print("Modulus 4\n", x, "\n", x & ((1 << 2) - 1))
print("Modulus 4\n", x, "\n", np.bitwise_and(x, np.left_shift(1, 2) - 1))
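These tricks are not NumPy-specific; a minimal plain-Python sketch of the same three hacks follows (function names are mine, for illustration):

```python
# Plain-Python versions of the three bit twiddling hacks above.

def signs_differ(a, b):
    # XOR of two integers is negative exactly when their signs differ.
    return (a ^ b) < 0

def is_power_of_two(n):
    # A power of 2 has a single set bit, so n & (n - 1) clears it to 0.
    # (Like the NumPy expression, this also reports True for n == 0.)
    return (n & (n - 1)) == 0

def mod_power_of_two(n, k):
    # n % 2**k for non-negative n, via masking with (1 << k) - 1.
    return n & ((1 << k) - 1)

print(signs_differ(-3, 7))      # True
print(is_power_of_two(8))       # True
print(mod_power_of_two(13, 2))  # 13 % 4 == 1
```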
https://ebookhunter.ch/numpy-by-ivan-idris_5d3dbd8924968d687644b54d/
#include <djv_utils.h>

This structure is used to get/set the information in the INFO chunk.

The default constructor initializes the data members. A second constructor initializes the structure by loading the INFO chunk from the specified stream.

decode() decodes the INFO chunk from the specified stream; an overload decodes the specified INFO chunk. Referenced by decode(), and PageInfo().

encode() converts the current status of this instance to an INFO chunk and writes it out to the specified stream. Referenced by encodeINFOChunk().

setRotation() sets the rotation status in the current flags variable. Please note that if the version is smaller than DJVUVER_SUPPORTROTATION (22), this method internally updates the file version.

flags: Traditionally, the flags field has had many meanings, but currently only the rotation specification is used. To access the rotation status you should use the setRotation and getRotation methods. The rotation feature seems to have been introduced with DjVu file format version 22, and the following values are supported by the current viewer: Referenced by decode(), encode(), getRotation(), and setRotation().

version: The version of the page. In the spec, this is described as "Minor Version". Referenced by decode(), encode(), getRotation(), and setRotation().
https://www.cuminas.jp/sdk/structCelartem_1_1DjVu_1_1PageInfo.html
Hello everyone, I am back on Dev.to today to share another of my projects! We all know how important it is to practice regularly in order to improve our development skills. As I am getting more confident, I try to build more complex and robust applications. This last project was a lot of fun to build. It took me almost a month to deploy it (I mainly work after school hours). Enjoy reading 😇

Table of contents

- Project Introduction
- Features
- Tech Stack
- Wireframe & Design
- Data modeling & API routing
- Project Organization
- Sprint 01: Frontend
- Sprint 02: Backend
- Sprint 03: Fix & Deploy
- Conclusion

Project Introduction 👋

I am glad to introduce GroupChat 🥳 This challenge's wireframes are provided by devchallenges, which offers many cool ideas for projects to build and practice on. Take a look if you are missing inspiration! Ok, let's talk about GroupChat: it is an instant messaging app that allows users to create channels and chat with people interested in a particular topic. Sounds simple? Well, I would not say that it was "complicated" but it is always challenging to try something new. It was the first time I had worked with socket.io and it was also my first medium-sized project built with TypeScript.

Features 🌟

✅ Custom Authentication (Email - Password)
✅ Login as guest (limited access)
✅ Random Avatar / Profile image upload
✅ Authorization (json web tokens)
✅ End to End input validation
✅ Create and join channels
✅ Instant messaging
✅ Bug report
✅ Mobile friendly

Tech Stack ⚛️

Once again, I went for my best friend the MERN stack, which includes:

➡️ MongoDB
➡️ Express
➡️ React
➡️ Node

In addition to the above technologies, I worked with TypeScript to improve the robustness of my code and with Redux to manage the app state. I should also mention socket.io, which enables real-time, bidirectional and event-based communication between the browser and the server. For deployment, an easy and efficient way is to host the frontend on Netlify and the backend on Heroku.
Here is a list of tools I usually work with to enhance my programming experience:

➡️ OS: MacOS
➡️ Terminal: iTerm2
➡️ IDE: VSCode
➡️ Versioning: Git
➡️ Package Manager: NPM
➡️ Project Organization: Notion

Wireframe & Design 🎨

To be honest, I don't take much pleasure in designing a product's UI. So, I decided to work with existing wireframes and focus on the code instead. As I said already, I took inspiration from devchallenges. Quick overview:

Data modeling & API routing 💾

Database design and API routing are important steps. Make sure you have an action plan before starting coding, or it will be a disaster 🧨 Here is a simple data model made with Lucidchart: It is indeed simple, but it is enough for this project. As you could guess, we are building a REST API with Node/Express, which involves HTTP requests. Let's imagine our routes:

Note: API doc made with Apiary

Project Organization 🗂️

I love when everything is clean and well-organized. Here is the folder structure I decided to work with: Simple, clean and consistent 💫 In order to keep track of my progress, I made myself a task board on Trello.

Before you head over to the next step, I will briefly talk about the Git workflow. As I was the only one working on this project, GitHub flow worked just fine. Every addition to the code has a dedicated branch and the code is reviewed (by myself only...) for each new PR.

Note: Around 180 commits and 40 branches were created

Sprint 01: Setup & Frontend 🖥

It is always so exciting to start coding, this is my favorite part of the process. I would say that the first week was the easiest. I began with setting up both frontend and backend, which means installing dependencies, environment variables, CSS reset, creating a database, ... Once setup was done, I built every single component that should appear on the screen and made sure they are mobile friendly (flex, media queries, ...).
Speaking of components and UI, here is a simple example:

// TopBar/index.tsx
import React from 'react';
import { IconButton } from '@material-ui/core';
import MenuIcon from '@material-ui/icons/Menu';

// Local Imports
import styles from './styles.module.scss';

type Props = {
  title?: String;
  menuClick: () => void;
};

const TopBar: React.FC<Props> = props => {
  return (
    <div className={styles.container}>
      <div className={styles.wrapper}>
        <IconButton className={styles.iconButton} onClick={props.menuClick}>
          <MenuIcon className={styles.menu} />
        </IconButton>
        <h2 className={styles.title}>{props.title}</h2>
      </div>
    </div>
  );
};

export default TopBar;

// TopBar/styles.module.scss
.container {
  width: 100%;
  height: 60px;
  box-shadow: 0px 4px 4px rgba($color: #000, $alpha: 0.2);
  display: flex;
  align-items: center;
  justify-content: center;
}

.wrapper {
  width: 95%;
  display: flex;
  align-items: center;
}

.title {
  font-size: 18px;
}

.iconButton {
  display: none !important;
  @media (max-width: 767px) {
    display: inline-block !important;
  }
}

.menu {
  color: #e0e0e0;
}

Nothing fancy, it is a basic implementation of TypeScript (I still have a lot to learn) and SCSS modules. I like SCSS a lot and wrote an introduction for anyone interested: "SCSS Introduction 🎨". You can also notice that some components (icons, inputs, ...) are imported from my favorite UI library out there: Material UI. Speaking of TypeScript, the first days were really painful and tiring but in the end, it appeared to be extremely easy to catch bugs during development. If you find yourself struggling with TypeScript, you may want to have a look at my post on it. I am not so familiar with Redux and I had to spend some time reading the docs in order to get it right. Another cool tool I worked with is Formik, which manages form validation in a smart and simple way.
// Login/index.tsx
import React, { useState } from 'react';
import { Link } from 'react-router-dom';
import axios from 'axios';
import { TextField, FormControlLabel, Checkbox, Snackbar, CircularProgress } from '@material-ui/core';
import MuiAlert from '@material-ui/lab/Alert';
import { useDispatch } from 'react-redux';
import { useFormik } from 'formik';
import * as Yup from 'yup';
import { useHistory } from 'react-router-dom';

// Local Imports
import logo from '../../../assets/gc-logo-symbol-nobg.png';
import CustomButton from '../../Shared/CustomButton/index';
import styles from './styles.module.scss';

type Props = {};

type SnackData = {
  open: boolean;
  message: string | null;
};

const Login: React.FC<Props> = props => {
  const dispatch = useDispatch();
  const history = useHistory();
  const [isLoading, setIsLoading] = useState(false);
  const [checked, setChecked] = useState(false);
  const [snack, setSnack] = useState<SnackData>({ open: false, message: null });

  // Async Requests
  const loginSubmit = async (checked: boolean, email: string, password: string) => {
    setIsLoading(true);
    let response;
    try {
      response = await axios.post(`${process.env.REACT_APP_SERVER_URL}/users/login`, {
        checked,
        email: email.toLowerCase(),
        password: password.toLowerCase()
      });
    } catch (error) {
      console.log('[ERROR][AUTH][LOGIN]: ', error);
      setIsLoading(false);
      return;
    }
    if (!response.data.access) {
      setSnack({ open: true, message: response.data.message });
      setIsLoading(false);
      return;
    }
    if (checked) {
      localStorage.setItem(
        'userData',
        JSON.stringify({ id: response.data.user.id, token: response.data.user.token })
      );
    }
    dispatch({ type: 'LOGIN', payload: { ...response.data.user } });
    history.push('');
    setIsLoading(false);
  };

  const formik = useFormik({
    initialValues: { email: '', password: '' },
    validationSchema: Yup.object({
      email: Yup.string().email('Invalid email address').required('Required'),
      password: Yup.string()
        .min(6, 'Must be 6 characters at least')
        .required('Required')
        .max(20, 'Can not exceed 20 characters')
    }),
    onSubmit: values => loginSubmit(checked, values.email, values.password)
  });

  return (
    <div className={styles.container}>
      <Link to="/">
        <img className={styles.logo} src={logo} />
        <CustomButton type="submit" onClick={formik.handleSubmit} isPurple />
        <p className={styles.guest}>Don't have an account? Sign Up</p>
      </Link>
      {isLoading && <CircularProgress />}
      <Snackbar
        open={snack.open}
        onClose={() => setSnack({ open: false, message: null })}
        autoHideDuration={5000}
      >
        <MuiAlert variant="filled" onClose={() => setSnack({ open: false, message: null })}>
          {snack.message}
        </MuiAlert>
      </Snackbar>
    </div>
  );
};

export default Login;

Sprint 02: Backend 📊

The server is pretty straightforward, it is a classic representation of what a Node/Express server should look like. I created mongoose models and their associations. Then, I registered routes and connected corresponding controllers. Inside my controllers, you can find classic CRUD operations and some custom functions. Thanks to JWT, it was possible to work on the security, which was an important point for me. Now comes the coolest feature of this app, bidirectional communication, or maybe should I say socket.io?
Here is an example:

// app.js - Server side

// Establish a connection
io.on('connection', socket => {
  // New user
  socket.on('new user', uid => {
    userList.push(new User(uid, socket.id));
  });

  // Join group
  socket.on('join group', (uid, gid) => {
    for (let i = 0; i < userList.length; i++) {
      if (socket.id === userList[i].sid) userList[i].gid = gid;
    }
  });

  // New group
  socket.on('create group', (uid, title) => {
    io.emit('fetch group');
  });

  // New message
  socket.on('message', (uid, gid) => {
    for (const user of userList) {
      if (gid === user.gid) io.to(user.sid).emit('fetch messages', gid);
    }
  });

  // Close connection
  socket.on('disconnect', () => {
    for (let i = 0; i < userList.length; i++) {
      if (socket.id === userList[i].sid) userList.splice(i, 1);
    }
  });
});

// AppView/index.tsx - Client side
useEffect(() => {
  const socket = socketIOClient(process.env.REACT_APP_SOCKET_URL!, { transports: ['websocket'] });
  socket.emit('new user', userData.id);
  socket.on('fetch messages', (id: string) => fetchMessages(id));
  socket.on('fetch group', fetchGroups);
  setSocket(socket);
  fetchGroups();
}, []);

I discovered express-validator and it helped a lot to provide input validation on the server side. Without a doubt, a library that I am going to use again.

Sprint 03: Fix & Deploy ☁️

Alright, the app is looking good and features are working fine. It is time to finish this portfolio project and start a new one. I am not a pro at cloud solutions and complex CI/CD methods so I will settle for a free hosting service. Heroku has a free solution that works fine for the backend. 5 minutes after my node server was uploaded, it was running independently. Awesome 🌈 I experienced some security issues with the client. Usually, everything is ok when I send my React app to Netlify via GitHub, but not this time. Many of my friends could not reach the given URL because of some "security reasons" and I had to buy a domain name to fix it. No big deal here, 15 euros for a year does not seem overpriced.
Finally, images uploaded by users are stored on my Cloudinary account via their public API.

Conclusion ✅

Once again, I enjoyed myself so much working on this project and learned a lot. It was a pleasure to share the process with you and I can't wait to hear your tips and feedback. This project is nothing more than a portfolio project and there is no "production" intention behind it. However, the code is open sourced on GitHub, feel free to do whatever you want with it. KillianFrappartDev / GroupChat Instant messaging webapp project made with React, Redux, TypeScript, Node, MongoDB & Socket.io I know that there is a lot to improve in terms of code quality, security, optimization, ... Whatever, I managed to finish this and the result looks pretty cool and I hope you like it as well. Never stop challenging yourself 🚀

Discussion (14)

Now add end-to-end encryption :) cooool Excellent article, well done! Nicccceeee! 🔥 Very good job! Great article, well done! Good job! I did my best, thanks for reading! Great article, well done! Thank you 😇 Nice one 👏 Thanks mate 🔥🔥🔥 Very inspiring Good one! 👌
https://practicaldev-herokuapp-com.global.ssl.fastly.net/killianfrappartdev/instant-messaging-app-made-with-react-typescript-node-socket-io-27pc
After fiddling with Django's auth app for a while I decided to rather have my own (I know, why should one do this? Answer: To learn). It consists of several steps:

- activation
- adding a password

First I created an app for user management:

$ python manage.py startapp user_management

This gave me the structure to work with. First I created the user model:

from django.db import models
import bcrypt

class User(models.Model):
    email = models.CharField(max_length=100, unique=True)
    firstname = models.CharField(max_length=30)
    lastname = models.CharField(max_length=30)
    password = models.CharField(max_length=128)
    last_login = models.DateTimeField(auto_now=True)
    registered_at = models.DateTimeField(auto_now_add=True)
    core_member = models.BooleanField()
    activation_key = models.CharField(max_length=50, null=True)

The idea here was to have the email address as username and to have that unique. I don't consider usernames a good choice for logins but rather a feature for profiles, but that depends on one's taste I think. The registration view is pretty straightforward. I create a RegistrationForm object with fields for email, first and last name. The activation_key is simply a string of randomly chosen ASCII characters and digits. Activation itself is just creating a link, sending it and comparing the random part of the link with the stored string. If they match, is_active is set to True and the user can set his/her password. For passwords I normally store bcrypt hashes in the database (NEVER! store plaintext passwords in a database!). This is quite simple and can be done by following this description. The function for setting the password goes into the model. For this to work I use a classmethod. As the name suggests, this is a method bound to the class, not an instance of said class, which allows getting objects as in "cls.objects.get()", the classmethod's equivalent to "self.something" in instance methods.
@classmethod
def set_password(cls, user_id, plain_pass):
    secret = bcrypt.hashpw(plain_pass, bcrypt.gensalt())
    user = cls.objects.get(pk=user_id)
    user.password = secret
    user.save()
    return True

The login process itself is done via another classmethod which I named authenticate:

@classmethod
def authenticate(cls, email, password, request):
    user = cls.objects.get(email__exact=email)
    if bcrypt.hashpw(password, user.password) == user.password:
        request.session['user_id'] = user.id
        user.save()  # this is to get last_login updated
        return user
    else:
        return None

(In order for this to work you have to enable the session middleware and the session app in settings.py.)

So, a quick rundown. Since I use the email address as a unique identifier for the login, the function expects an email address which is used to find the person to authenticate, the plaintext password (e.g. as given from an input field) and the request object to make use of a session. (I use database session handling for development but there are alternatives described in the Django docs.) Hashing the given plaintext password with the stored hash as salt reproduces the stored hash if and only if the password matches, so the comparison is True on a correct password and False otherwise. After having checked that the user has given the right credentials I store the user_id in the session, which allows me to get the full set of user information should I need it. I save the user to trigger the auto_now option of the user model, which updates the last_login field to the actual time. Now with User.authenticate(email, password, request) the user is logged in.
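The classmethod mechanism used above can be shown in isolation. This minimal sketch replaces Django's ORM manager with a plain dict called `objects` (my stand-in, not real Django) and a placeholder "hash" instead of bcrypt, just to show how `cls` gives a method class-level lookup access without needing an existing instance:

```python
# Minimal illustration of why the model methods above are classmethods:
# they need class-level access (here a stand-in "objects" dict, playing
# the role of Django's manager) rather than an existing instance.

class User:
    objects = {}  # stand-in for Django's User.objects manager

    def __init__(self, user_id, email):
        self.id = user_id
        self.email = email
        self.password = None
        User.objects[user_id] = self

    @classmethod
    def set_password(cls, user_id, plain_pass):
        # cls.objects[...] plays the role of cls.objects.get(pk=user_id)
        user = cls.objects[user_id]
        user.password = "hashed:" + plain_pass  # placeholder, not real bcrypt
        return True

u = User(1, "bob@example.com")
User.set_password(1, "s3cret")  # called on the class, no instance needed
print(u.password)               # hashed:s3cret
```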
https://kodekitchen.wordpress.com/2013/05/
SKOS Repository REST http API

From Semantic Web Standards

This page will contain a proposed specification for a REST API to access a SKOS repository. This is work in progress for discussion! The goal is to propose simplified access to SKOS/RDF (or JSON) resources:

- HTTP methods (GET, POST, PUT, DELETE) would indicate the action desired;
- The URL would identify the resources desired by their identification or a path to reach them (search and/or navigation within relations);
- Results would simply be SKOS/RDF (or JSON) triples.
- Identification would be the suffix part of the URL specific to the Concept or ConceptScheme when removing the application namespace URL.
- Don't understand: isn't identification the whole URI? RDF namespaces don't really exist.

Additional Proposal:

- Results are normally the exact objects corresponding to the request with all their RDF triples (not the objects they contain or the objects related to them).
- A "follow" parameter could be added to also list (but only once in a given result) the RDF triples linked with specified attributes.
- A "depth" parameter could indicate how recursive this process goes.
- A "lang" parameter could indicate "all" (default), "nego" for HTTP header negotiation, or "x,y,z" to specify a preference order.
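To make the proposal concrete, here is a sketch of how a client might assemble request URLs under this scheme. The repository base URL and the concept identification are hypothetical, and the exact parameter spellings are assumptions for illustration; only the `follow`, `depth`, and `lang` parameter names come from the proposal above:

```python
from urllib.parse import urlencode

def build_request_url(base, identification, follow=None, depth=None, lang=None):
    """Build a GET URL for a Concept/ConceptScheme under the proposed API.

    `identification` is the suffix relative to the application namespace URL.
    """
    params = {}
    if follow:
        # attributes whose linked RDF triples should also be listed
        params["follow"] = ",".join(follow)
    if depth is not None:
        # how recursively the "follow" expansion is applied
        params["depth"] = str(depth)
    if lang:
        # "all" (default), "nego", or a preference list like "en,fr"
        params["lang"] = lang
    url = base.rstrip("/") + "/" + identification
    if params:
        url += "?" + urlencode(params)
    return url

# Hypothetical repository and concept identification:
print(build_request_url("http://example.org/skos", "scheme1/conceptA",
                        follow=["skos:broader"], depth=2, lang="en,fr"))
```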
https://www.w3.org/2001/sw/wiki/SKOS_Repository_REST_http_API
Here's an interesting one that I can't figure out. I was about to call MS, but figured I'd check here first. Scenario: Two Exchange 2010 forests federated with GAL Sync. User Bob@domain.com had a mailbox on Exchange 2010 server. Bob now has a new mailbox on a different Exchange forest (Bob@Awesome.com). Bob wants his old email forwarded for Bob@domain.com to Bob@Awesome.com. So...easy enough right? Create a contact in the domain.com Exchange server and set the forwarding on the mailbox and for grins hide the mailbox from the address books. Done, right? Wrong (sort of)...because (note: I have federation and GAL sync allowing free/busy across forests): Bob is getting auto-forwarded meeting requests from Sally@domain.com who used the Scheduling assistant and typed in "bob@domain.com" and saw that he's available. He gets the calendar forward and says "Um...Sally...I'm booked at that time" to which she replies "not from what I see". Now if Bob is available on bob@awesome.com and he accepts, it shows up on his awesome.com calendar as it should. But Sally sees the request still sent to Bob@domain.com in the scheduling assistant as he is free but bob@awesome.com is coming to the meeting. SO...basically users in the domain.com organization can still see free/busy details on the old calendar for the mailbox bob@domain.com even though the mailbox is hidden from the GAL. THE QUESTION: Since I can't create a contact and then forward that contact....is there any way around the above? I don't think I can remove a calendar from a mailbox. I considered removing all calendar permissions but wasn't sure if that was the right path to go down or not. OR even better: Can someone tell me how to accept email for bob@domain.com on Exchange without having a mailbox for him and then re-route it to bob@awesome.com? UPDATE: I have figured out how to handle the calendar with removing the default permissions...it's an ok fix. The BOUNTY will be for the "OR EVEN BETTER" question in bold. 
If it isn't possible, then that doesn't count as BOUNTY worthy. :) Thank you!

The solution is to set up SMTP namespace sharing between the two Exchange servers.

I would usually use the targetAddress on the user object via ADSI Edit to forward email during migrations, but that doesn't take care of calendar sync.

How about a transport rule?

I've also used Quest Migration Manager for Exchange for syncing free/busy information between Exchange orgs.

Since Bob is actually part of a different Exchange forest, Sally can't see Bob's free/busy information at all in the other Exchange org (and vice versa). If I am reading what you did correctly, you essentially created an external contact for Bob for his new Exchange org (Bob@awesome.com) and hid his existing mailbox (Bob@domain.com) from the GAL (which does not hide free/busy info on his mailbox). Then you set the forwarding address on his Bob@domain.com mailbox to forward to Bob@awesome.com. When Sally goes to set up a meeting with Bob, she resolves his Bob@domain.com account and it shows that he is free, which is true since, according to his @domain.com mailbox, he is. There are several different ways you can try to resolve this, but nothing really cut and dry. One of these methods may work for you based on your requirements:

Tell Sally that Bob doesn't live in the domain.com domain anymore, and she can't see his free/busy info to reliably schedule meetings with him. She can continue to send meeting requests to his domain.com account, but has to accept that she can't see if he is actually busy or not. This is fine if Sally and Bob are the only users involved with this problem, but doesn't work if you have 1/2 your users split between Exchange orgs and the domain.com users aren't sure which are in the awesome.com org.
Remove Bob's free/busy permissions on his domain.com mailbox so when Sally (or any other domain.com user) tries to schedule him, he shows up with no free/busy info and Sally can't claim that he appeared to be free on her end. From what I've gathered, you would do this either using the Set-MailboxPermissions cmdlet or by opening up Bob's domain.com mailbox directly in Outlook and setting the calendar permissions for "Default" to None. Remove Bob's domain.com mailbox and only leave the external contact for Bob@awesome.com in the GAL. This will break the e-mail forwarding for bob@domain.com > bob@awesome.com, which may be an issue for Bob, but people will soon realize that he doesn't live at domain.com anymore when they get NDRs and ask questions, figure out his new address when they call him up and ask him about it, etc... If Bob no longer even needs to login to the domain.com domain you can remove his entire AD account. If you have access to both Exchange orgs (domain.com & awesome.com), you can set them up to share Free/Busy information between them. I personally have never done this, but doing some quick googling found this technet article from MS on setting it up at a high level with links to the more detailed steps. Like many technet articles, there may be more caveats to it than what the article itself covers. At my company, we have users in 2 different locations that primarily use one or the other Exchange org for e-mail, but we do not have a unified calendar scheduling capability since we don't control the other Exchange org. We just forward messages to the other org if the user says they are primarily using that one for e-mail or don't forward if they primarily use ours. Over time our users have just gotten to remember to use our domain's e-mail or the external contact for sending mail ("Let's see... that person is at the other location so I don't e-mail their domain.com account, I use the external contact..."). 
It isn't easy to manage, but either somehow seems to work for them or they have just accepted the fact that they can't see the other org's scheduling info for meetings.

Update for the OR even better scenario (disclaimer - untested):
http://serverfault.com/questions/462791/exchange-2010-forward-email-to-external-contact-without-keeping-the-original-m/464261
Reporting Services Crib Sheet

For things you need to know rather than the things you want to know

Contents

- Introduction
- The design of SSRS
- The components of SSRS
- SSRS DataSources and Datasets
- Conclusions
- Further Reading…

Introduction

SQL Server Reporting Services (SSRS) aims to provide a more intuitive way of viewing data. It allows business users to create, adapt and share reports based on an abstraction, or 'model', of the actual data, so that they can create reports without having to understand the underlying data structures. This data can ultimately come from a variety of different sources, which need not be based on SQL Server, or even relational in nature. It also allows developers a wide range of approaches to delivering reports from almost any source of data as part of an application. The reports are interactive. The word 'reporting', in SSRS, does not refer just to static reports but to dynamic, configurable reports that can display hierarchical data with drill-down, filters, sorting, computed columns, and all the other features that analysts have come to expect from Excel. Users can specify the data they are particularly interested in by selecting parameters from lists. The reports can be based on any combination of table, matrix or graph, or can use a customized layout. Reports can be printed out, or exported as files in various standard formats. SSRS provides a swift, cheap way of delivering to the users all the basic reports that are required from a business application, and can provide the basis for customized reports of a more advanced type.

The design of SSRS

The surprising thing about Reporting Services is its open, extensible architecture. With SSRS, Microsoft has taken pains over a product that has an obvious long-term importance for data handling in .NET. From a programmer's perspective, the 'Big Idea' behind Reporting Services is to have a standard way of specifying reports.
In a way, it is an attempt to do for reports what HTML did for rendering pages. Report Definition Language (RDL) is an XML-based open standard grammar. It was designed to provide a standard way to define reports, to specify how they should appear, their layout and content. It specifies the data source to use and how the user-interaction should work. In theory, there could be a number of applications to design business reports, several ways of managing them, and a choice of alternative ways of rendering them. All these would work together because of the common RDL format.

SQL Server Reporting Services is the first product to adopt the architecture. It is a combination of report authoring, report management and report delivery. It is not limited to SQL Server data. It can take data from any ODBC source. Reporting Services can use a SQL Server Integration Services package as a data source, thereby benefiting from Analysis Services' multidimensional analysis, hierarchical viewing and data mining. It can just as easily report from OLAP data as relational data. It can also render reports to a number of media including the browser, application window, PDF file, XML, Excel, CSV or TIFF. The API of SSRS is well-enough documented to allow the use of custom data, custom ways of displaying data or special ways of delivering it. Because Microsoft has carefully documented the RDL files, and the APIs of the ReportingServices namespace, it is reasonably easy to extend the application for special data or security requirements, different data sources, or even the way the reports are rendered. One can, of course, replace a component such as the report authoring tool with one designed specially for a particular application.

When SSRS is installed, it is set to deliver reports via a 'Report Server' which is installed as an extension to the IIS service on the same server as that on which SQL Server is installed.
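To make the idea of RDL concrete, here is a heavily simplified, illustrative skeleton of a report definition. The element names follow the 2005 RDL schema, but a real file carries many more required elements, and the names and values used here (the data source, dataset and textbox) are invented for the example:

```xml
<!-- Illustrative sketch only: not a complete, valid report definition -->
<Report xmlns="http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition">
  <DataSources>
    <DataSource Name="MyDataSource">...</DataSource>
  </DataSources>
  <DataSets>
    <DataSet Name="MyDataSet">...</DataSet>
  </DataSets>
  <Body>
    <ReportItems>
      <Textbox Name="Title">
        <Value>Sales by Region</Value>
      </Textbox>
    </ReportItems>
  </Body>
</Report>
```

Because the grammar is open, any tool that can emit XML of this shape can, in principle, author reports for any renderer that understands it.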
The actual portal, with its hierarchical menu, report models and security, can be configured either via a browser or from Visual Studio. The browser-based tools are designed more for end-users, whereas the Visual Studio 'Business Intelligence Development Studio' tools are intended for the developer and IT administrator. The 'Report Server' is by no means the only possible way of delivering reports using Reporting Services, but it is enough to get you started. So let's look in more detail at the three basic processes that combine to form SQL Server Reporting Services (SSRS): Report Authoring, Report Management and Report Rendering.

The components of SSRS

Report Authoring

The Report Authoring tools produce, as their end-product, RDL files that specify the way that the report will work. Any application capable of producing an XML file can produce an RDL file, since RDL is merely an XML standard. There is nothing to stop an application from producing an RDL and then using Microsoft's ReportViewer component to render the report. Hopefully, third-party 'Report Designer' packages will one day appear to take advantage of the applications that are capable of rendering RDL files. The report designers of SSRS are of two types: 'Report Builder' designed for end users and 'Report Designer' designed for developers.

Report Builder

Report Builder is an 'ad-hoc reporting tool', designed for IT-savvy users to allow them to specify, modify and share the reports they need. It can be run directly from the report server on any PC with the .NET 2 framework installed. It allows the creation of reports derived from 'report models' that provide a business-oriented model of the data. These reports can then be managed just like any others. The Report Builder allows the users to specify the way data is filtered and sorted, and allows them to change the formulas of calculated columns or to insert new columns. These reports have drill-down features built into them.
Report Designer

Visual Studio has a 'Report Designer' application hosted within Business Intelligence Development Studio. It allows you to define, preview and publish reports to the Report Server you specify, or to embed them into applications. It is a different angle on the task of designing reports from 'Report Builder', intended for the more sophisticated user who understands more of the data and technology. It has a Query Builder, an expression editor and various wizards. The main designer has tabs for the data, layout and preview. With the embedded Query Designer, you can explore the underlying data and interactively design, and run, a query that specifies the data you want from the data source. The result set from the query is represented by a collection of fields for the dataset. You can also define additional calculated fields. You can create as many datasets as you need to for representing report data. The embedded Layout Designer allows the insertion or alteration of extra computed columns. With the Layout Designer, you can drag fields onto the report layout, and arrange the report data on the report page. It also provides expression builders to allow data to be aggregated even though it has come from several different data locations. It can then be previewed and deployed.

Model Designer

The Model Designer in Visual Studio allows you to define, edit and publish 'report models' for Report Builder that are abstractions of the real data. This makes the building of ad-hoc reports easier. These models can be selected and used by Report Builder so that users of the system can construct new reports or change existing reports, working with data that is as close as possible to the business 'objects' that they understand. The model designer allows the programmer to specify the tables or views that can be exposed to the users who can then use the models to design their reports. One can also use it to determine which roles are allowed access to them.
Report Management

There are configuration, monitoring and management tools in SSRS which are provided within the Business Intelligence Development Studio.

Report Manager

Report Manager is a web-based tool designed to ease the management task of connections, schedules, metadata, history and subscriptions. It allows the administrator to categorize reports and control user access. The data models that are subsequently used by the ad-hoc Report Builder tool to translate the data into business entities can be edited in this tool. The report portal, which provides the 'homepage' for the Report Server, can be edited to create or modify the directory hierarchy into which the individual reports are placed. The RDL files can be uploaded to the report server using this tool and placed in their logical position within the hierarchical menu. One can create or assign the roles of users that are allowed the various levels of access to a report. These roles correspond to previously defined groups in the Active Directory. One can specify whether and how often a report should be generated, and email the recipients when the report is ready. SSRS uses role-based security to ensure that appropriate access to reports is properly enforced. It controls access to folders, resources and the reports themselves. With SQL Server Standard and Enterprise editions, one can add new roles, based on Active Directory groups. There are APIs for integrating other security models as well.

Management Studio

The SQL Server Management Studio (SSMS) tool mirrors most of the capabilities of Report Manager with the addition of instance configuration and scripting. Management Studio itself uses RDL files in order to implement the performance Dashboard so as to get reports on the performance of the server itself, and this is easily extended to provide additional reports.

Report Rendering

Viewing Reports on an intranet

When SSRS is installed, it sets up a virtual directory on the local IIS.
From there, users with the correct permissions can gain access to whatever reports you choose to deploy. The idea of allowing users to interact with reports and to drill down into the detail is fundamental to the system, so it is possible to allow users to design their own reports or to use pre-existing ones, and to hyperlink between reports or drill down into data to get more detailed breakdowns. SSRS now provides 'floating headers' for tables that remain at the top of the scrolled list, so one can easily tell what is in each column. Report parameters are important in SSRS. If, for example, the users can choose a sales region for a sales report then all possible sales regions for which data exists are displayed for selection in a drop-down list. This information is derived from the data model that forms the basis for the report. Reports can be viewed via a browser from the report server, from any ASP.NET website and from a Sharepoint portal.

Reports in applications

One is not restricted to browser-based access of SSRS reports. Any .NET application can display such reports easily. The latest version of SSMS, for example, uses Reporting Services in order to get performance reports. There are alternatives, such as using the Web Browser control or the ReportViewer control. To use the web browser control in an application, all one needs to do is to provide the URL of the report server. The report is then displayed. One can, of course, launch the browser in a separate window to display the reports. The URL parameters provide precise control over what information is returned. Using the appropriate parameters, not only can you get the report itself for display, you can also access the contents of the Data Source as XML, the Folder-navigation page, the child items of the report, or resource contents for a report. You can also specify whether it should be rendered on the browser or as an image/XML/Excel file.
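As a sketch of that URL syntax (the server, folder and report names below are invented; the rs:Command and rs:Format parameters are the ones SSRS's URL access interface understands):

```
-- Render a report as a PDF file
http://myserver/ReportServer?/Sales/YearlySummary&rs:Command=Render&rs:Format=PDF

-- List the child items of a folder
http://myserver/ReportServer?/Sales&rs:Command=ListChildren

-- Return a data source definition as XML
http://myserver/ReportServer?/Sales/SalesData&rs:Command=GetDataSourceContents
```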
The report viewer control, 'ReportViewer', ships with Visual Studio 2005 and can be used in any Windows Form or web form surface, just by dragging and dropping. After you assign a report URL and path, the report will appear on the control. You can configure the ReportViewer in a local report-processing mode where the application is responsible for supplying the report data. In local-processing mode, the application can bind a local report to various collection-based objects, including ADO.NET regular or typed datasets. One can use the Report Server Web Service to gain access to the report management functionality, such as content, subscription and data source, on top of all the facilities provided by using URL requests. This allows reporting via any development tool that implements the SOAP methods. This Web Service approach provides a great deal of control over the reporting process and greatly facilitates the integration of Reporting Services into applications, even where the application is hosted in a different operating environment.

SSRS DataSources and Datasets

SSRS Data Sources

Data that is used to provide the Dataset that forms the basis for a report usually comes from SQL Server, or a source for which there is an OLEDB or ODBC provider. It is possible to create the dataset in another application, even a CLR, and bind it to a report. One can access other data sources, such as an ADO.NET dataset, by using a Custom Data Extension (CDE). Report delivery can be from a Sharepoint site, using the SharePoint Web parts that are included in the SSRS package. The information contained within a data source definition varies depending on the type of underlying data, but typically includes information such as a server name, a database name, and user credentials.
Data sources can include Microsoft SQL Server, Microsoft SQL Server Analysis Services, ODBC, OLE DB, Report Server Model, XML, Oracle, SAP NetWeaver Business Intelligence or Hyperion Essbase. A data source can be contained within a report, or it can be shared by several. In the first case, the definition for a report-specific data source is stored within the report itself, whereas for a shared source, the definition is stored as a separate item on the report server. A report can contain one or more data sources, either report-specific or shared.

SSRS DataSets

A Reporting Services dataset, which is not the same as a .NET dataset, is the metadata that represents the underlying data on a specific data source. It contains a data source definition, a query or stored procedure against the data source, the resulting fields list, the parameters (if any), calculated fields, and the collation. A report can contain one or more datasets, each of which consists of a pointer to a data source, a query, and a collection of fields. These datasets can be used by different data regions on the report, or they can be used to provide dynamic lists of parameters. The datasets used as the basis for reports can come from a wide variety of sources. The examples are mostly queries involving SQL Server base tables, and this has given the impression that this is all that can be used. Reports can, in fact, easily use Stored Procedures to provide the dataset for a report. However, the queries for datasets that fetch the items in the drop-down Parameter lists must be provided too.

Dataset Fields

Each dataset in a report contains a collection of fields. These fields generally refer to database fields and contain a pointer to the database field and a name property, but this can be overwritten with a more meaningful name where necessary. These fields can, alternatively, be calculated fields, which contain a name and an expression.
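As an illustration of a calculated field (the field names here are invented), the expression is written in SSRS's expression syntax, in which dataset fields are referenced as Fields!Name.Value:

```
Name:       LineTotal
Expression: =Fields!UnitPrice.Value * Fields!Quantity.Value
```

Once defined, LineTotal can be dragged onto the report layout and aggregated just like a field that came straight from the query.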
Conclusion

When implementing an application, one ignores Reporting Services at one's peril. The benefit to almost any application of implementing standard reports from SSRS is immediate and always impressive to end-users. The impact is far greater than the effort involved. One of us (Phil) suffered intense embarrassment through believing the users of an application when they said that they would never require interactive reports and only wanted strictly defined and cross-checked standard reports in an application. When someone else implemented both Business Intelligence and SSRS, and gave the users the freedom to explore their own data, Phil was left in no doubt as to his foolishness in having neglected to do so. There is always a point when developing an application that the standard fare that can be provided by SSRS is not quite enough for the more advanced reporting requirements. However, it is prudent to make sure that all other reporting up to that point is done via SSRS. The worst mistake of all is dismissing SQL Server Reporting Services as being just an end-user tool for simple reports. Its architecture is such that it forms the basis of an extremely powerful tool for delivering information to users of an application.

Further Reading…
- SQL Server 2005 Reporting Services
- Technologies: Reporting Services
- SQL Server 2005 Books Online: SQL Server Reporting Services
- Configuring Reporting Services to Use SSIS Package Data
- Introducing Reporting Services Programming (SQL 2000)
- Report Definition Language Specification
- Report Controls in SQL Server 2005 Reporting Services
https://www.red-gate.com/simple-talk/sql/reporting-services/reporting-services-cribsheet/
I am attempting to create a multiplayer Android game using Google Play Services Realtime Multiplayer. I have successfully been able to get one account to authenticate and get room setup to 20% with no errors. However, when attempting this with any other Google Play account I get an "Uh-Oh. Encountered some error connecting to the room" message returned, with no connection to the room made. The account is authenticated, and Google Play cloud saving does work with any account. I have attempted this with multiple Android devices, and only my main Google account works. If anyone has any ideas as to what might be causing it, I would appreciate the help.
https://answers.unity.com/questions/1664068/google-play-realtime-multiplayer-only-works-on-one.html
Java Game Programming Part I: The Basics
by Adam King

The internet has become an excellent medium for game programmers. If you surf the internet, chances are that you have seen at least a couple of Java applet games. These games, besides making a great addition to a website, are a great place for beginners to learn and advanced programmers to hone and expand their skills. Over the course of these articles I want to cover the basics of programming Java applets, how to make some simple games, as well as some advanced topics including double buffering. A few of the simple games that I will use are Tetris, Nibbles, Pac-Man, and Pong. I will use these and others as examples to illustrate the thought process and steps that can and should be followed when approaching a game. Everyone loves to jump right in and start trying to code complicated but cool programs. Without a little background knowledge this can be a frustrating experience, which is why I am going to outline the basics here before we dive into writing full-blown Java applets and discussing more advanced topics. If you are already familiar with the basics I would encourage you to read on, because you never know when you might read something interesting, even if it is only a different perspective on something you already know. Before I proceed, I want to mention how you can go about working through the ideas and concepts that I will be discussing. If you have a Java programming environment then you are set, but for those of you who don't there is another option. If you go to java.sun.com you can get a free copy of the Java Development Kit (JDK). You can use this in conjunction with Notepad or some other text editor to produce all of the applets that I will discuss here. Ultimately it doesn't matter which route you decide to take. I personally prefer the JDK when I am programming at home.
I have used several visual programming environments for Java and haven't found one that impressed me enough to spend money on it. You may, however, find one that fits your needs. Whenever you learn a new programming language the first program you always write is "Hello World!". I don't want to break with tradition, so here is a Hello World applet.

import java.applet.*;
import java.awt.*;

public class HelloWorld extends Applet {
    public void paint(Graphics g) {
        g.drawString("Hello World!", 50, 25);
    }
}

One important point to take note of here is that you need to call this file "HelloWorld.java". You will notice that this is identical to the name I gave my class on the fourth line of the program above. If these two names differ at all you will get an error during compilation. If you have never seen any Java code before this may seem like Greek to you, but it really isn't that bad. Before we get into how to compile and run this applet, let's take a look at what is going on. Java is totally based on classes. It allows related classes to be grouped together into something called a package. Two examples of packages are java.applet and java.awt. The import statement allows you to include in your program one or more classes from a package. It is similar to includes in C/C++ in its function. For example:

import java.applet.Applet; // includes the Applet class from the java.applet package
import java.applet.*;      // includes all of the classes from the java.applet package

As we progress I will mention what we need to import to allow our programs to function. For now, importing all of the classes from the java.applet and java.awt packages will do. As I mentioned before, everything in Java is centered around the use of classes. You will notice that the next line in our sample program is a class declaration.
public class HelloWorld extends Applet

This is a class declaration which represents our applet. Two important points need to be noted here. First of all, the name of your class must be identical to the name of the file in which it is located. In this case, our class is called HelloWorld so the filename must be HelloWorld.java. I know that I just mentioned this a moment ago, but you would be surprised with the number of times simple errors like this crop up in people's programs. The other important point to take note of is the end of the class declaration, "extends Applet". For those of you familiar with programming in C++, the extends statement is the equivalent of inheritance in C++. If you aren't familiar with inheritance, what it means is that our class (HelloWorld) receives and can expand on variables and methods found in the Applet class. The end result of this is that we will get a program which can function as an applet. Methods in Java are the equivalents of functions in C++ which reside inside classes. Since this is only our first program and it is very basic we only have one method.

public void paint(Graphics g)

The "public" keyword allows the method to be called from within other classes. If the keyword "private" is used instead then the method cannot be called from within other classes. There exists a third possibility here, "protected", but we won't look at that for the time being. The next keyword after public is used to tell us what sort of information will be returned by the method. In this particular case we have used "void", which means that nothing will be returned. I could have used int, char, etc. instead, but given the nature of this method they were unnecessary. We will look at methods which require a return type as we advance into more complicated examples. The paint method will be an important fixture in all of our programs since this is where we display on the screen everything we want the user to be able to see. The paint method is always passed a graphics context, "Graphics g", which is later used by the method to enable painting to the screen.
The last statement which we need to take a look at is g.drawString(....).

g.drawString("Hello World!", 50, 25);

Guess what this does? That's right! This statement draws a string to the screen (represented by the graphics context) at the x (50) and y (25) coordinates. The coordinate (0,0) would give you the top left of the applet, not the top left of the web page the applet is on. One other important point to note here is that we don't have to draw strings to the screen. This will be an important fact when we get into double buffering later on, but for now let's move on. Now you are probably saying "Adam, that is great, but how do we see our applet in action?". The first step toward running our applet is to compile it. You will remember at the start of the article I mentioned the two options you have for writing Java applets. If you choose to use the JDK then you can go into DOS and type "javac HelloWorld.java". If you use a visual environment such as VisualAge or Visual J++ then you will find a menu option somewhere that allows you to compile. Whichever method you use, you should end up seeing a file created called HelloWorld.class. Java is an interpreted language, which means that it doesn't create standard executable files. Instead we get files with a .class ending. The last step before we run our program is to create an HTML file which will display our applet. If you haven't used HTML before don't worry about it, because you will be able to use the sample page I give you here over and over with one or two minor modifications. I would recommend that you look into learning it though because it is very useful. So without any further delay here is the code we want in our HTML file, which I have called Hello.html.
<HTML>
<HEAD>
<TITLE>Hello World Applet</TITLE>
</HEAD>
<BODY>
<CENTER>
<H1>Hello World Applet</H1>
<APPLET CODE="HelloWorld.class" WIDTH=150 HEIGHT=25>
</APPLET>
</CENTER>
</BODY>
</HTML>

Once you have saved this HTML file you can then open it up in the browser of your choice. The only part of the above HTML that you really need to be concerned with is the <APPLET>..</APPLET> lines. You will notice that for the applet I have entered three pieces of information: width, height, and the name of the .class file. The width and height specify the dimensions of the applet on the web page, and the code attribute specifies the name of the applet we want to display. For the code attribute you want to make sure that you have the full name of the compiled Java applet with the proper capitalization. There you have it. That is all there is to writing a basic Java applet. Now that we know the basics, let's pick up the pace and introduce some more interesting capabilities of Java applets. Java provides the ability to change the font of the text which you are displaying. While it only supports a small number of fonts it is still a useful capability which you may want to take advantage of in your games.

private Font a_Font;

public void init() {
    a_Font = new Font("Helvetica", Font.BOLD, 48);
    setFont(a_Font);
}

The addition of the above lines to our applet will allow us to change the font in which our text appears. These lines should be placed after the opening curly brace of the class and before the paint method. You will also notice the use of a new method here. The init method is called automatically when the applet is first loaded. If you have some initialization code and code that only needs to happen when the applet is first loaded then this is the place to put it. You must make sure that you spell init the same way that I have done. If you spell it incorrectly or use improper capitalization then it will appear to Java as a different method and will not be called automatically.
This problem will not be flagged for you by the compiler, so you must be extremely careful when you write your methods to get spelling and capitalization perfect. Trying to track this problem down can be annoying. The first parameter in the above font creation code is the name of the font that we want to use. These could be "TimesRoman", "Helvetica", "Courier", "Dialog", and "DialogInput", which work, but are being replaced by "Serif", "SansSerif", and "Monospaced". The second parameter specifies the style of our new font. This can be Font.BOLD, Font.PLAIN, Font.ITALIC, or Font.BOLD + Font.ITALIC. The last parameter specifies the size of the new font. In order to put our new font to work we must use the setFont method with our new font passed as a parameter. This will change the font for the entire applet for the entire life span of the applet to our font. If you want to bounce back and forth between fonts you will set the font in the paint method instead using the setFont method prefixed by the graphics context (g.setFont(...);). Another way you can alter the appearance of text is by changing the color of it.

g.setColor(Color.red);

This line of code is normally found in the paint method because you need a graphics context in order to use it. The code itself is not bad, and some of the colors that are available to you are listed below.

Color.black, Color.blue, Color.cyan, Color.darkGray, Color.gray, Color.green, Color.lightGray, Color.magenta, Color.orange, Color.pink, Color.red, Color.white, Color.yellow

The colors listed previously will probably suit your needs, but you may be interested in creating your own colors on occasion. The code to do this is very simple and you will probably notice similarities between it and the code used to create a new font.

Color adamBlue = new Color(80, 124, 224);
g.setColor(adamBlue);

The three numbers in the constructor of our new color are the RGB values. Each of the numbers must be between 0 and 255.
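As a side note, neither Color nor Font actually needs an applet, so you can check the values above from a plain command-line program (the class name here is my own invention, not part of the applet):

```java
import java.awt.Color;
import java.awt.Font;

public class ColorFontDemo {
    public static void main(String[] args) {
        // Custom color from RGB components, each between 0 and 255
        Color adamBlue = new Color(80, 124, 224);
        System.out.println("rgb: " + adamBlue.getRed() + " "
                + adamBlue.getGreen() + " " + adamBlue.getBlue());

        // Same font as in the init() example earlier
        Font aFont = new Font("Helvetica", Font.BOLD, 48);
        System.out.println("bold: " + aFont.isBold() + ", size: " + aFont.getSize());
    }
}
```

Compile and run it the same way as the applet code, minus the HTML page: javac ColorFontDemo.java and then java ColorFontDemo.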
If you are unsure about what a constructor is and why I use the keyword new, don't worry for now. Take everything I say as unquestionable truth and we will talk about it later when we discuss classes and how they fit into our games. One thing that I didn't mention previously is that when you call the setColor method it also changes the color of everything you do after that point, including more text and shapes, which we will talk about in a minute. Two other useful functions that are available for your use are setBackground(..) and setForeground(..).

setBackground(Color.yellow);
setForeground(Color.blue);

These two lines, if used, are generally found in the init method. The setForeground method has the same effect as the g.setColor statement except that the effect is permanent for everything in your applet. I recommend that you use g.setColor, because as you make your games you are going to want to frequently change colors. The setBackground method is good for now, but in a little bit I will introduce the idea of double buffering and with it a new way to set the background color. Shapes are a very important topic because they will play a crucial role in quite a few of the games that you will write. There is no built-in 3D segment in Java, but I may discuss a way around this later. Java provides several built-in methods to allow for quick and easy drawing of shapes. Each of these will usually be called from your paint method.

drawRect(int x, int y, int width, int height)   e.g. g.drawRect(5, 8, 49, 29);
drawOval(int x, int y, int width, int height)
drawLine(int x1, int y1, int x2, int y2)
drawRoundRect(int x, int y, int width, int height, int arcWidth, int arcHeight)
drawArc(int x, int y, int width, int height, int startAngle, int arcAngle)
draw3DRect(int x, int y, int width, int height, boolean raised)

For all of the above shapes, except for the line, there is a filled alternative.
fillRect(....);
fillOval(....);

If you want to have different colors for each of your shapes then you must make sure that you call g.setColor before you draw the shape. I recommend going through and playing around with the above shapes a bit, because they will be important in many games that you write and will be important in some of the examples that I show you. I will not be talking about any of these subjects here because there isn't any real big difference between those found in Java and those in C++. There will be plenty of examples of them later when I get into more coding-related information about specific games.

Image coolImage;
....
coolImage = getImage(getCodeBase(), "dx3d.GIF");
....
g.drawImage(coolImage, 0, 0, this);

Everyone likes images and it is relatively simple to display them in a Java applet. The top line of sample code is our variable declaration, which can go at the top of our applet class with our font variable and any others you may have inserted at that location. The second line of code will go in our init method and the last line will go in our paint method. The syntax of the Java language is very intuitive with many of the method names it uses. For example, by just looking at the code above you can probably get a good idea of what is going on. As I mentioned before, the first line is our variable declaration. The next line involves the loading of our image from its file. In this case the name of the file is dx3d.GIF. When you are loading images ensure that you have the right capitalization and spelling of the filename. As you start to write bigger applets you are going to be confronted with more and more errors, so if you can eliminate some by being cautious in areas like this then your job will be easier. The last line of importance here is responsible for the drawing of the image. You will notice that we are using our graphic context, and rather than drawing a string or filling an oval we are drawing an image.
The first three parameters are what interest us here. The first parameter is our variable which holds the image that we want to display. Next we have the x and y coordinates of where we want the image to be located on the screen. You will notice that we don't have to give the size of the image at any point. When we load the image in, we take on the width and height of the image as it was in the file. It is a good idea to get familiar with images now because we will come back to images in a bit and add some new wrinkles when we talk about double buffering. Random numbers are used quite frequently in game programming as well as regular programming. To get random numbers in Java is a simple task, as you can see in the code excerpt below.

public void paint(Graphics g) {
    int x;
    int y;

    x = (int)(Math.random() * 100);
    y = (int)(Math.random() * 100);

    g.setColor(Color.red);
    g.drawString("Hello World", x, y);
}

Let's look at the code for the random numbers before we see how they are used in this particular bit of code. You will notice that we make a call to Math.random(). This returns a number between 0 and 1 (it can be 0, but is always less than 1). We then multiply by the range of numbers that we want to have. In the example above I multiply by 100 to give myself a number between 0 and 99. If I wanted a number between 1 and 100 I would multiply by 100 and then add 1. The reason that works is because the multiplication by 100 gives me a number between 0 and 99, and then I add one to put it in the proper range. The last thing you will notice is the int keyword out in front of the random number code. This is called casting. If you are familiar with C/C++ you have probably seen this concept in action before. For those of you who haven't, all that is happening is that I am telling the compiler that I am going to stick a double into an int and I am aware of the fact that I will lose some information by using this code. The purpose of this applet is really simple.
Every time it loads, it will randomly place the string "Hello World" on the applet somewhere between 0 and 100 for the x and y coordinates. This may seem fairly bland at the moment, but take note of what we are doing, because when we discuss threads we will be able to have text randomly appearing and disappearing on the screen and moving around with only slight modifications to the code above.

Well, that is the end of this article. Hopefully you have learned a bit and are interested in what is to come. Next time I will start talking about threads, double buffering, and how to use the mouse and keyboard. To illustrate these concepts, we will also talk about how to program a Pong game and some other interesting applets. If you have any questions or comments, feel free to email me at kinga@cpsc.ucalgary.ca. By the time you see the next article, I will have my web page up and running with example applets that I have discussed as well as additional applets.

Date this article was posted to GameDev.net: 12/21/2000
http://www.gamedev.net/reference/articles/article1262.asp
Behold! A Solr Client for Scala

Scala users, check this out! If you've been trying to find a better interface for using Apache Solr in your Scala projects, there's a pretty new, simple Solr client for Scala being developed on GitHub by Naoki Takezoe. It hasn't been released yet, but if this project is of interest to you, it might be worth it to lend a hand.

The client is based on SolrJ, and you simply add a dependency to your build.sbt. Here is the simplest example of its usage:

```scala
import jp.sf.amateras.solr.scala._

val client = new SolrClient("")

// register
client
  .add(Map("id" -> "001", "manu" -> "Lenovo", "name" -> "ThinkPad X201s"))
  .add(Map("id" -> "002", "manu" -> "Lenovo", "name" -> "ThinkPad X220"))
  .add(Map("id" -> "003", "manu" -> "Lenovo", "name" -> "ThinkPad X121e"))
  .commit

// query
val result: List[Map[String, Any]] = client.query("name:%name%")
  .fields("id", "manu", "name")
  .sortBy("id", Order.asc)
  .getResult(Map("name" -> "ThinkPad"))

result.foreach { doc: Map[String, Any] =>
  println("id: " + doc("id"))
  println("  manu: " + doc("manu"))
  println("  name: " + doc("name"))
}
```

Here is the list of features that Takezoe wants to add before release:

- Mapping documents to case classes
- Flexible configuration of SolrServer
- Facet search
https://dzone.com/articles/behold-solr-client-scala
We recently rewrote the code that powers this blog. Previously, the blog ran as a Middleman app. The new system is tailored to our preferred authoring workflow (Markdown + GitHub) and takes advantage of webhooks to automate tasks that are not writing or reviewing a post.

## Splitting content from engine

The idea to rebuild the blog stemmed from a conversation about publishing new blog posts. We love our process of writing posts in Markdown, versioning them via Git, and reviewing them via GitHub pull requests. However, in our previous setup, we needed to redeploy the blog to Heroku whenever a new post was published. This was tedious and frustrating. The ideal workflow would be to merge a pull request for a new blog post and have everything else happen automatically.

A big obstacle to this goal was the coupling between the content of our blog and the application code that served it. This led to the decision to break up our blog into two independent repositories. One would contain the blog engine, written in Rails, while the other would be strictly Markdown documents.

![new blog workflow]()

## Setting up a GitHub webhook

GitHub allows you to subscribe to events on a repository via a webhook. You provide them with a URL and they will post to it every time the designated event occurs. When a new post gets merged to the master branch of the content repository, we respond by kicking off an import.

![GitHub's webhook event options]()

GitHub's documentation for webhooks is pretty good. Check it out.

For security reasons, we want to restrict access to the webhook URL to only allow payloads from GitHub. GitHub allows you to set a secret key with which the incoming request is signed. If the request signature matches the payload hashed with the secret key, then we know the request is genuine.

## Caching with Fastly

We host our blog on Heroku and use Fastly as our CDN. Jessie wrote a fantastic post on how to set up Fastly with a Rails application.
We used this approach for the blog engine. When we import a new post, we purge the cache. However, this won't work for posts that don't show up immediately on the blog, such as those with future dates. In addition, we run a daily task via Heroku Scheduler that purges the posts.

Initially we were confused by the `Article#purge` and `Article.purge_all` methods included into our models by the fastly-rails gem. `Article#purge` will expire all pages that have the surrogate key for that individual article, while `Article.purge_all` will expire all pages that have the general article surrogate key. Some pages have both, for example:

```ruby
def index
  @articles = Article.recent

  set_surrogate_key_header Article.table_key, @articles.map(&:record_key)
end
```

This index page can be expired by calling `Article.purge_all` or by calling `purge` on any of the article objects rendered on that page. So when should you use one over the other?

- When creating a new object, you want to use `purge_all`. This is a new object that isn't on any page yet, so `purge` wouldn't do anything.
- When updating an object, you can use `purge`. This will expire any pages that render that object.

## Building a sitemap

Search engines like Google and Bing use XML sitemaps to generate search results. The Giant Robots sitemap allows us to inform search engines about the relative importance of each URL on the site and how often they change. The most popular gem we found for generating sitemaps, SitemapGenerator, generates static files and suggests setting up a cron job to update it periodically. We found that it wasn't difficult to serve our own dynamic sitemap using the Builder templates that ship with Rails.

## Authoring posts locally

While splitting the content from the engine simplified a lot of things, it did make previewing posts more difficult. Previously, an author could spin up a local Middleman server and preview their post exactly as it would show up on the blog.
However, the new engine doesn't read files from the local repo but imports them from GitHub instead. This would force authors to:

- Set up a local version of the engine
- Connect it to the GitHub repository's webhook
- Push to GitHub in order for GitHub to send the file back down to their local machine so the engine can render it

This whole workflow is tedious. We considered using a standard Markdown editor such as Marked to preview the posts, but then they wouldn't be rendered using our stylesheet and layouts. We decided to implement an author mode that would read Markdown files from the local file system rather than the database + GitHub.

In order to do this, we built a set of lightweight objects that mimicked our ActiveRecord models: `Local::Article`, `Local::Author`, and `Local::Tag`. These objects are backed by the file system rather than the database. To ensure the correct objects are called by the controller, we added the following initializer:

```ruby
if ENV.fetch("AUTHOR_MODE") == "true" && ENV["LOCAL_POSTS_PATH"].present?
  require "local/article"
  require "local/tag"
  require "local/author"

  Article = Local::Article
  Author = Local::Author
  Tag = Local::Tag
end
```

You would expect this to cause some "already initialized constant" errors if these redefine constants from our models, or if the models get loaded after this initializer. However, this is not the case. Rails' autoloading system will only load a model file if it finds an undefined constant named for that file. Since our constants are already defined, Rails will never load the models.

## Conclusion

Since these changes went live, authoring blog posts is now much more streamlined. We get to focus on writing content using our favorite plain-text editor and getting feedback on GitHub. Once satisfied, we merge our post and it will automatically show up on the blog on the day it was dated for. Magical!
https://robots.thoughtbot.com/blog-in-markdown-deploy-with-webhooks
Re: Determining the Main Class

- From: Daniele Futtorovic <da.futt.news@xxxxxxxxxxxxxxx>
- Date: Mon, 11 Aug 2008 20:55:47 +0200

On 11/08/2008 19:54, Stefan Ram allegedly wrote:

> Jason Cavett <jason.cavett@xxxxxxxxx> writes:
>> Is there a way to determine (maybe through reflection or something?)
>> where a Java application is being run from (what class contains the
>> main method)?
>
> Examine the stack trace.
>
> public class Main
> {
>     public static void method()
>     {
>         System.out.println(java.util.Arrays.toString(
>             java.lang.Thread.currentThread().getStackTrace()));
>     }
>
>     public static void main(final java.lang.String[] args)
>     {
>         Main.method();
>     }
> }
>
> [java.lang.Thread.getStackTrace(Unknown Source), Main.method(Main.java:24), Main.main(Main.java:28)]
>
> In the stack trace above, it is the last entry. This might not always
> be so. Also, there might be other methods with the signature of main in
> the stack trace. So, some care has to be taken. Often, it should be the
> last method in the stack trace with the signature of main. The code
> above also assumes that the current thread is the main thread.

I've investigated that a bit, trying to get the main Thread through searching the ThreadGroups, and also via Thread.getAllStackTraces. There's a problem with that approach. To wit, if the main Thread isn't *active*, you don't get it either way (at least that's what my tests indicate). I would venture to say that in a typical application, the main Thread isn't active. So this approach might not work.

As for searching for defined classes with a main method, that's pointless, since many classes can have a main defined. As Mark said, it would work with a JAR (via its manifest). Ideally, you'd have to know the command the java executable was started with to deal with all cases...

--
DF.
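The last-main-frame heuristic discussed in this thread can be sketched as a small standalone class (the name MainFinder is just illustrative, and the usual caveats from the thread apply: it only works when called on a thread whose trace actually contains the entry point):

```java
public class MainFinder {
    // Returns the class of the deepest frame whose method is named "main",
    // or null if no such frame exists (e.g. we are not on the main thread).
    static String mainClassName(StackTraceElement[] trace) {
        for (int i = trace.length - 1; i >= 0; i--) {
            if (trace[i].getMethodName().equals("main")) {
                return trace[i].getClassName();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // When launched directly, this prints "MainFinder".
        System.out.println(mainClassName(
            Thread.currentThread().getStackTrace()));
    }
}
```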
http://coding.derkeiler.com/Archive/Java/comp.lang.java.programmer/2008-08/msg01223.html
We will start by installing the required software. This will include the Python distribution, some fundamental Python libraries, and external bioinformatics software. Here, we will also be concerned with the world outside Python. In bioinformatics and Big Data, R is also a major player; therefore, you will learn how to interact with it via rpy2, a Python/R bridge. We will also explore the advantages that the IPython framework can give us in order to efficiently interface with R. This chapter will set the stage for all the computational biology that we will perform in the rest of the book.

As different users have different requirements, we will cover two different approaches to installing the software. One approach is using the Anaconda Python distribution and another is to install the software via Docker (a server virtualization method based on containers sharing the same operating system kernel). We will also provide some help on how to use the standard Python installation tool, pip, if you use the standard Python distribution. If you have a different Python environment that you are comfortable with, feel free to continue using it. If you are using a Windows-based OS, you are strongly encouraged to consider changing your operating system or to use Docker via boot2docker.

Before we get started, we need to install some prerequisite software. The following sections will take you through the software and the steps needed to install them. An alternative way to start is to use the Docker recipe, after which everything will be taken care of for you via a Docker container.

If you are already using a different Python version, you are encouraged to continue using your preferred version, although you will have to adapt the following instructions to suit your environment.

Python can be run on top of different environments. For instance, you can use Python inside the JVM (via Jython) or with .NET (with IronPython).
However, here we are concerned not only with Python, but also with the complete software ecology around it; therefore, we will use the standard (CPython) implementation, as the JVM and .NET versions exist mostly to interact with the native libraries of those platforms. A potentially viable alternative would be to use the PyPy implementation of Python (not to be confused with PyPI, the Python Package Index).

An important decision is whether to choose Python 2 or 3. Here, we will support both versions whenever possible, but there are a few issues that you should be aware of. The first issue is that if you work with phylogenetics, you will probably have to go with Python 2 because most existing Python libraries do not support version 3. Secondly, in the short term, Python 2 is generally better supported, but (save for the aforementioned phylogenetics topic) Python 3 is well covered for computational biology. Finally, if you believe that you are in this for the long run, Python 3 is the place to be. Whatever your choice, here we will support both options unless clearly stated otherwise. If you go for Python 2, use 2.7 (or newer if it has been released). With Python 3, use at least 3.4.

If you were just starting with Python and bioinformatics, any operating system would work, but here we are mostly concerned with intermediate to advanced usage. So, while you can probably use Windows and Mac OS X, most heavy-duty analysis will be done on Linux (probably on a Linux cluster). Next-generation sequencing data analysis and complex machine learning are mostly performed on Linux clusters.

If you are on Windows, you should consider upgrading to Linux for your bioinformatics work because many modern bioinformatics software packages will not run on Windows. Mac OS X will be fine for almost all analyses, unless you plan to use a computer cluster, which will probably be Linux-based. If you are on Windows or Mac OS X and do not have easy access to Linux, do not worry.
Modern virtualization software (such as VirtualBox and Docker) will come to your rescue, allowing you to install a virtual Linux on your operating system. If you are working with Windows and decide that you want to go native and not use Anaconda, be careful with your choice of libraries; you are probably safer if you install the 32-bit version of everything (including Python itself). Remember, if you are on Windows, many tools will be unavailable to you.

Tip: Bioinformatics and data science are moving at breakneck speed; this is not just hype, it's a reality. If you install the default packages of your software framework, be sure not to install old versions. For example, if you are a Debian/Ubuntu Linux user, it's possible that the default matplotlib package of your distribution is too old. In this case, it's advised to use a recent conda or pip package instead.

The software developed for this book is available on GitHub. To access it, you will need to install Git. Alternatively, you can download the ZIP file that GitHub makes available (however, getting used to Git may be a good idea because lots of scientific computing software is being developed with it).

Before you install the Python stack properly, you will need to install all the external non-Python software that you will be interoperating with. The list will vary from chapter to chapter, and all chapter-specific packages will be explained in their respective chapters. Some less common Python libraries may also be referred to in their specific chapters. If you are not interested in a specific chapter (that is perfectly fine), you can skip the related packages and libraries.

Of course, you will probably have many other bioinformatics applications around, such as bwa or GATK for next-generation sequencing, but we will not discuss these because we do not interact with them directly (although we might interact with their outputs).
You will need to install some development compilers and libraries (all free). On Ubuntu, consider installing the build-essential package (install it with apt-get), and on Mac, consider Xcode.

In the following table, you will find the list of the most important Python software. We strongly recommend the installation of the IPython Notebook (now known as Project Jupyter). While not strictly mandatory, it's becoming a fundamental cornerstone for scientific computing with Python.

Note that the list of available software for Python in general, and bioinformatics in particular, is constantly increasing. For example, we recommend you keep an eye on projects such as Blaze (data analysis) or Bokeh (visualization).

Here are the steps to perform the installation:

Start by downloading the Anaconda distribution. You can choose either Python version 2 or 3. At this stage, this is not fundamental because Anaconda will let you use the alternative version if you need it. You can accept all the installation defaults, but you may want to make sure that conda binaries are in your PATH (do not forget to open a new window so that the PATH is updated).

If you have another Python distribution, but still decide to try Anaconda, be careful with your PYTHONPATH and existing Python libraries. It's probably better to unset your PYTHONPATH. As much as possible, uninstall all other Python versions and installed Python libraries.

Let's go ahead with libraries.
We will now create a new conda environment called bioinformatics with Biopython 1.65, as shown in the following command:

```shell
conda create -n bioinformatics biopython=1.65 python=2.7
```

If you want Python 3 (remember the reduced phylogenetics functionality, but more future proof), run the following command:

```shell
conda create -n bioinformatics biopython=1.65 python=3.4
```

Let's activate the environment, as follows:

```shell
source activate bioinformatics
```

Also, install the core packages, as follows:

```shell
conda install scipy matplotlib ipython-notebook binstar pip
conda install pandas cython numba scikit-learn seaborn
```

We still need pygraphviz, which is not available on conda. Therefore, we need to use pip:

```shell
pip install pygraphviz
```

Now, install the Python bioinformatics packages, apart from Biopython (you only need to install those that you plan to use).

These are available on conda:

```shell
conda install -c pysam
conda install -c simuPOP
```

These are available via PyPI:

```shell
pip install pyvcf
pip install dendropy
```

If you need to interoperate with R, you will of course need to install it: either download it from the R website or use the R provided by your operating system distribution. On a recent Debian/Ubuntu Linux distribution, you can just run the following command as root:

```shell
apt-get install r-bioc-biobase r-cran-ggplot2
```

This will install Bioconductor (the main R suite for bioinformatics) and ggplot2 (a popular plotting library in R). Of course, this will indirectly take care of installing R itself.

Alternatively, if you are not on Debian/Ubuntu Linux, do not have root, or prefer to install in your home directory, after downloading and installing R manually, run the following commands in R:

```r
source("")
biocLite()
```

This will install Bioconductor (for detailed instructions, refer to the Bioconductor documentation).

To install ggplot2, just run the following commands in R:

```r
install.packages("ggplot2")
install.packages("gridExtra")
```

Finally, you will need to install rpy2, the R-to-Python bridge.
Back at the command line, under the conda bioinformatics environment, run the following command:

```shell
pip install rpy2
```

There is no requirement to use Anaconda; you can easily install all this software on another Python distribution. Make sure that you have pip installed, and install all conda packages with it instead. You may need to install more compilers (for example, Fortran) and libraries because installation via pip will rely on compilation more than conda does. However, as you also need pip for some packages under conda, you will need some compilers and C development libraries with conda anyway. If you are on Python 3, you will probably have to use pip3 and run Python as python3 (as python/pip will call Python 2 by default on most systems).

In order to isolate your environment, you may want to consider using virtualenv. This allows you to create a bioinformatics environment similar to the one on conda.

The Anaconda Python distribution is commonly used, especially because of its intelligent package manager: conda. Although conda was developed by the Python community, it's actually language agnostic. Software installation and package maintenance were never Python's strongest points (hence the popularity of conda to address this issue). If you want to know the currently recommended installation policies for the standard Python distribution (and avoid old and deprecated alternatives), check the official Python guidance on packaging.

You have probably heard of the IPython Notebook; if not, visit its page. Docker is the most widely used framework that implements operating system-level virtualization. This technology allows you to have an independent container: a layer that is lighter than a virtual machine, but still allows you to compartmentalize software. This mostly isolates all processes, making it feel like each container is a virtual machine.
Docker works quite well at both extremes of the development spectrum: it's an expedient way to set up the content of this book for learning purposes and may be your platform to deploy your applications in complex environments. This recipe is an alternative to the previous recipe. However, for long-term development environments, something along the lines of the previous recipe is probably your best route, although it can entail a more laborious initial setup.

If you are on Linux, the first thing you have to do is install Docker. The safest solution is to get the latest version from the Docker website. While your Linux distribution may have a Docker package, it may be too old and buggy (remember the "advancing at breakneck speed" thingy?).

If you are on Windows or Mac, do not despair; boot2docker is here to save you. Boot2docker will install VirtualBox and Docker for you, which allows you to run Docker containers in a virtual machine. Note that a fairly recent computer (well, not that recent, as the technology was introduced in 2006) is necessary to run our 64-bit virtual machine. If you have any problems, reboot your machine and make sure that, in the BIOS, VT-X or AMD-V is enabled. At the very least, you will need 6 GB of memory, preferably more.

Note that this will require a very large download from the Internet, so be sure that you have a big network pipe. Also, be ready to wait for a long time.

These are the steps to be followed:

Use the following command on the Linux shell or in boot2docker:

```shell
docker build -t bio
```

If you want the Python 3 version, replace the 2 with a 3 in the URL. After a fairly long wait, all should be ready. Note that on Linux, you will either need to have root privileges or be added to the docker Unix group.

Now, you are ready to run the container, as follows:

```shell
docker run -ti -p 9875:9875 -v YOUR_DIRECTORY:/data bio
```

Replace YOUR_DIRECTORY with a directory on your operating system.
This directory will be shared between your host operating system and the Docker container: YOUR_DIRECTORY will be seen in the container as /data, and vice versa. The -p 9875:9875 option will expose the container's TCP port 9875 on the host computer's port 9875.

If you are using boot2docker, the final configuration step will be to run the following command in the command line of your operating system, not in boot2docker:

```shell
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,127.0.0.1,9875,,9875"
```

If you now point your browser at the forwarded port (9875), you should be able to get the IPython Notebook server running. Just choose the Welcome notebook to start!

Docker is the most widely used containerization software and has seen enormous growth in usage in recent times. You can read more about it on the Docker website. You will also find a paper on arXiv that introduces Docker with a focus on reproducible research.

If there is some functionality that you need and cannot find in a Python library, your first port of call is to check whether it's implemented in R. For statistical methods, R is still the most complete framework; moreover, some bioinformatics functionalities are only available in R, most probably offered as a package belonging to the Bioconductor project.

rpy2 provides a declarative interface from Python to R. As you will see, you will be able to write very elegant Python code to perform the interfacing process. In order to show the interface (and try out one of the most common R data structures, the data frame, and one of the most popular R libraries, ggplot2), we will download metadata from the Human 1000 Genomes Project.

You will need to get the metadata file from the 1000 genomes sequence index: download the sequence.index file. If you are using notebooks, open the 00_Intro/Interfacing_R.ipynb notebook and just execute the wget command at the top.
This file has information about all FASTQ files in the project (we will use data from the Human 1000 Genomes Project in the chapters to come). This includes the FASTQ file, the sample ID, the population of origin, and important statistical information per lane, such as the number of reads and the number of DNA bases read.

Take a look at the following steps:

We start by importing rpy2 and reading the file, using the read.delim R function:

```python
import rpy2.robjects as robjects

read_delim = robjects.r('read.delim')
seq_data = read_delim('sequence.index', header=True, stringsAsFactors=False)

# In R:
# seq.data <- read.delim('sequence.index', header=TRUE,
#                        stringsAsFactors=FALSE)
```

The first thing that we do after importing is access the read.delim R function that allows you to read files. Note that the R language specification allows you to put dots in the names of objects. Therefore, we have to convert the function name to read_delim. Then, we call the function proper; note the following highly declarative features. First, most atomic objects, such as strings, can be passed without conversion. Second, argument names are converted seamlessly (barring the dot issue). Finally, objects are available in the Python namespace (but objects are actually not available in the R namespace; more about this later). For reference, I have included the corresponding R code. I hope it's clear that it's an easy conversion.

The seq_data object is a data frame. If you know basic R or the Python pandas library, you are probably aware of this type of data structure; if not, then this is essentially a table: a sequence of rows where each column has the same type. Let's perform a basic inspection of this data frame, as follows:

```python
print('This dataframe has %d columns and %d rows' % (seq_data.ncol, seq_data.nrow))
print(seq_data.colnames)

# In R:
# print(colnames(seq.data))
# print(nrow(seq.data))
# print(ncol(seq.data))
```

Again, note the code similarity.
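If you are curious what read.delim is doing under the hood, here is a rough pure-Python equivalent using the standard library's csv module. The column names below are real sequence.index columns, but the two data rows are made up purely for illustration:

```python
import csv
import io

# A made-up two-record excerpt with the same tab-delimited shape as
# sequence.index: one header row, then one record per sequencing lane.
sample = (
    "FASTQ_FILE\tSAMPLE_NAME\tPOPULATION\tBASE_COUNT\tREAD_COUNT\n"
    "f1.fastq\tNA12878\tCEU\t1000000\t10000\n"
    "f2.fastq\tNA18501\tYRI\t2000000\t20000\n"
)

rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
print(len(rows))               # number of data rows
print(sorted(rows[0].keys()))  # column names, like colnames() in R
print(rows[0]["BASE_COUNT"])   # still a string: conversion is explicit,
                               # just as with as.integer() in R
```

Like read.delim with stringsAsFactors=FALSE, everything comes back as strings, which is why the cleanup step below converts BASE_COUNT to integers.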
You can even mix styles using the following code:

```python
my_cols = robjects.r.ncol(seq_data)
print(my_cols)
```

You can call R functions directly (in this case, ncol) if they do not have dots in their names; however, be careful. This will display an output: not 26 (the number of columns), but [26], which is a vector composed of the element 26. This is because, by default, most operations in R return vectors. If you want the number of columns, you have to perform my_cols[0]. Also, talking about pitfalls, note that R array indexing starts at 1, whereas Python starts at 0.

Now, we need to perform some data cleanup. For example, some columns should be interpreted as numbers, but they are read as strings:

```python
as_integer = robjects.r('as.integer')
match = robjects.r.match

my_col = match('BASE_COUNT', seq_data.colnames)[0]
print(seq_data[my_col - 1][:3])
seq_data[my_col - 1] = as_integer(seq_data[my_col - 1])
print(seq_data[my_col - 1][:3])
```

The match function is somewhat similar to the index method of Python lists. As expected, it returns a vector, so we can extract element 0. It's also 1-indexed, so we subtract one when working in Python. The as_integer function will convert a column to integers. The first print will show strings (values surrounded by "), whereas the second print will show numbers.

We will need to massage this table a bit more; details can be found in the notebook, but here we will finalize by getting the data frame into R (remember that while it's an R object, it's actually visible in the Python namespace only):

```python
robjects.r.assign('seq.data', seq_data)
```

This will create a variable in the R namespace called seq.data with the content of the data frame from the Python namespace. Note that after this operation, both objects will be independent (if you change one, it will not be reflected in the other).

We will finalize our R integration example with a plot using ggplot2.
This is particularly interesting, not only because you may encounter R code using ggplot2, but also because the drawing paradigm behind the Grammar of Graphics is really revolutionary and may be an alternative that you want to consider instead of more standard plotting libraries, such as matplotlib. ggplot2 is so pervasive that rpy2 provides a Python interface to it:

```python
import rpy2.robjects.lib.ggplot2 as ggplot2
```

With regards to our concrete example based on the Human 1000 Genomes Project, we will first plot a histogram with the distribution of center names where all sequencing lanes were generated. The first thing that we need to do is output the chart to a PNG file. We call the R png() function as follows:

```python
robjects.r.png('out.png')
```

We will now use ggplot to create a chart, as shown in the following code:

```python
from rpy2.robjects.functions import SignatureTranslatedFunction

ggplot2.theme = SignatureTranslatedFunction(
    ggplot2.theme,
    init_prm_translate={'axis_text_x': 'axis.text.x'})

bar = ggplot2.ggplot(seq_data) + \
    ggplot2.geom_bar() + \
    ggplot2.aes_string(x='CENTER_NAME') + \
    ggplot2.theme(axis_text_x=ggplot2.element_text(angle=90, hjust=1))
bar.plot()

dev_off = robjects.r('dev.off')
dev_off()
```

The second statement is a bit uninteresting, but is important boilerplate code. One of the R functions that we will call has a parameter with a dot in its name. As Python function parameters cannot have this, we map the axis.text.x R parameter name to the axis_text_x Python name in the theme function. We monkey patch it (that is, we replace ggplot2.theme with a patched version of itself).

We then draw the chart itself. Note the declarative nature of ggplot2 as we add features to the chart. First, we specify the seq_data data frame, then we use a histogram bar plot called geom_bar, followed by annotating the x variable (CENTER_NAME). Finally, we rotate the text of the x axis by changing the theme. We finalize by closing the R printing device.
If you are in an IPython console, you will want to visualize the PNG image as follows:

```python
from IPython.display import Image
Image(filename='out.png')
```

Figure 1: The ggplot2-generated histogram of center names responsible for sequencing lanes of human genomic data of the 1000 Genomes Project

As a final example, we will now do a scatter plot of read and base counts for all the sequenced lanes for Yoruban (YRI) and Utah residents with ancestry from Northern and Western Europe (CEU) of the Human 1000 Genomes Project (the summary of the data of this project, which we will use thoroughly, can be seen in the Working with modern sequence formats recipe in Chapter 2, Next-generation Sequencing). We are also interested in the difference among the different types of sequencing (exome, high, and low coverage).

We first generate a data frame with just the YRI and CEU lanes, and limit the maximum base and read counts:

```python
robjects.r('yri_ceu <- seq.data[seq.data$POPULATION %in% c("YRI", "CEU") & seq.data$BASE_COUNT < 2E9 & seq.data$READ_COUNT < 3E7, ]')
robjects.r('yri_ceu$POPULATION <- as.factor(yri_ceu$POPULATION)')
robjects.r('yri_ceu$ANALYSIS_GROUP <- as.factor(yri_ceu$ANALYSIS_GROUP)')
```

The last two lines convert POPULATION and ANALYSIS_GROUP to factors, a concept similar to categorical data.

We are now ready to plot:

```python
yri_ceu = robjects.r('yri_ceu')
scatter = ggplot2.ggplot(yri_ceu) + \
    ggplot2.geom_point() + \
    ggplot2.aes_string(x='BASE_COUNT', y='READ_COUNT',
                       shape='factor(POPULATION)',
                       col='factor(ANALYSIS_GROUP)')
scatter.plot()
```

Hopefully, this example (refer to the following figure) makes the power of the Grammar of Graphics approach clear. We start by declaring the data frame and the type of chart in use (the scatter plot implemented by geom_point).
Note how easy it is to express that the shape of each point depends on the POPULATION variable and the color on the ANALYSIS_GROUP: Figure 2: The ggplot2-generated scatter plot with base and read counts for all sequencing lanes read; the color and shape of each dot reflects categorical data (population and the type of data sequenced) Finally, when you think about Python and R, you probably think about pandas: the R-inspired Python library designed with data analysis and modeling in mind. One of the fundamental data structures in pandas is (surprise) the data frame. It's quite easy to convert backward and forward between R and pandas, as follows:

import pandas.rpy.common as pd_common
pd_yri_ceu = pd_common.load_data('yri_ceu')
del pd_yri_ceu['PAIRED_FASTQ']
no_paired = pd_common.convert_to_r_dataframe(pd_yri_ceu)
robjects.r.assign('no.paired', no_paired)
robjects.r("print(colnames(no.paired))")

We start by importing the necessary conversion module. We then convert the R data frame (note that we are converting the yri_ceu in the R namespace, not the one in the Python namespace). We delete the column that indicates the name of the paired FASTQ file on the pandas data frame and copy it back to the R namespace. If you print the column names of the new R data frame, you will see that PAIRED_FASTQ is missing. As this book enters production, the pandas.rpy module is being deprecated (although it's still available). In the interests of maintaining the momentum of the book, we will not delve into pandas programming (there are plenty of books on this), but I recommend that you take a look at it, not only in the context of interfacing with R, but also as a very good library for data management of complex datasets. It's worth repeating that advances in the Python software ecosystem are occurring at a breakneck pace. This means that if a certain functionality is not available today, it might be released sometime in the near future.
So, if you are developing a new project, be sure to check for the very latest developments on the Python front before using functionality from an R package. There are plenty of R packages for bioinformatics in the Bioconductor project (). This should probably be your first port of call in the R world for bioinformatics functionality. However, note that there are many R bioinformatics packages that are not on Bioconductor, so be sure to also search the wider set of R packages on CRAN (refer to the Comprehensive R Archive Network at). There are plenty of plotting libraries for Python. matplotlib is the most common library, but you also have a plethora of other choices. In the context of R, it's worth noting that there is a ggplot2-like implementation for Python based on the Grammar of Graphics description language for charts and this is called—surprise-surprise—ggplot! (). There are plenty of tutorials and books on R; check the R web page () for documentation. For Bioconductor, check the documentation at. If you work with NGS, you might also want to check High Throughput Sequence Analysis with Bioconductor at. The rpy library documentation is your Python gateway to R at. The Grammar of Graphics is described in a book aptly named The Grammar of Graphics, Leland Wilkinson, Springer. In terms of data structures, similar functionality to R can be found in the pandas library. You can find some tutorials at. The book Python for Data Analysis, Wes McKinney, O'Reilly Media, is also an alternative to consider. You have probably heard of, and maybe used, the IPython Notebook. If not, then I strongly recommend you try it, as it's becoming the standard for reproducible science. Among many other features, IPython provides a framework of extensible commands called magics, which allows you to extend the language in many useful ways. There are magic functions to deal with R. As you will see in our example, it makes interfacing with R much more declarative and easy.
This recipe will not introduce any new R functionality, but hopefully, it will make clear how IPython can be an important productivity boost for scientific computing in this regard. You will need to follow the previous getting ready steps of the rpy2 recipe. You will also need IPython. You can use the standard command line or any of the IPython consoles, but the recommended environment is the notebook. If you are using our notebooks, open the 00_Intro/R_magic.ipynb notebook. The notebook is more complete than the recipe presented here, with more chart examples. For brevity here, we concentrate only on the fundamental constructs needed to interact with R using magics. This recipe is an aggressive simplification of the previous one because it illustrates the conciseness and elegance of R magics: The first thing you need to do is load R magics and ggplot2:

import rpy2.robjects.lib.ggplot2 as ggplot2
%load_ext rpy2.ipython

Note that the % starts an IPython-specific directive. Just as a simple example, you can write on your IPython prompt:

%R print(c(1, 2))

See how easy it is to execute R code without using the robjects package. Actually, rpy2 is being used under the hood, but it has been made transparent. Let's read the sequence.index file that was downloaded in the previous recipe:

%%R
seq.data <- read.delim('sequence.index', header=TRUE, stringsAsFactors=FALSE)
seq.data$READ_COUNT <- as.integer(seq.data$READ_COUNT)
seq.data$BASE_COUNT <- as.integer(seq.data$BASE_COUNT)

Note that you can specify that the whole IPython cell should be interpreted as R code (note the double %%). As you can see, there is no need to translate function parameter names or (alternatively) explicitly call robjects.r to execute code.
We can now transfer a variable to the Python namespace (where we could have done Python-based operations):

seq_data = %R seq.data

Let's put this data frame back in the R namespace, as follows:

%R -i seq_data
%R print(colnames(seq_data))

The -i argument informs the magic system that the variable that follows it in the Python space is to be copied into the R namespace. The second line just shows that the data frame is indeed available in R. We actually did not do anything with the data frame in the Python namespace, but this serves as an example of how to inject an object back into R. The R magic system also allows you to reduce code as it changes the behavior of the interaction of R with IPython. For example, in the ggplot2 code of the previous recipe, you do not need to use the png and dev.off R functions, as the magic system will take care of this for you. When you tell R to print a chart, it will magically appear in your notebook or graphical console. For example, the histogram plotting code from the previous recipe is now simply:

%%R
bar <- ggplot(seq_data) + aes(factor(CENTER_NAME)) + geom_bar() + theme(axis.text.x = element_text(angle = 90, hjust = 1))
print(bar)

R magic makes interaction with R particularly easy, especially if you think about how cumbersome multiple-language integration tends to be. The notebook has a few more examples, especially with chart printing, but the core of R magic interaction is explained above. For basic instructions on IPython magics, see this notebook at. A list of default extensions is available at. A list of third-party magic extensions can be found at.

Tip: Downloading the example code
You can download the example code files from your account at for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit and register to have the files e-mailed directly to you.
https://www.packtpub.com/product/bioinformatics-with-python-cookbook/9781782175117
CC-MAIN-2021-17
refinedweb
5,040
53.61
Updated 6:19pm PST 11/29/2021 – Our sentiment prediction was right! Next step is to predict how much it’ll go up. Recently I’ve been playing around with sentiment analysis on Tweets a lot. I discovered the Twitter API over the Thanksgiving holidays and it’s like Christmas came early. Sort of like how Christmas comes earlier to malls every year. I have applied for upgraded access already because it’s been so fun and I’m hoping they grant it to me soon. My sentiment analysis of Black Friday on Twitter was quite popular, getting over 400 views this last weekend! Other than just holidays though, I wanted to see if analyzing Twitter sentiment could be used for a real life application. THIS IS NOT FINANCIAL ADVICE AND SHOULD NOT BE TAKEN AS SUCH. I decided that hey, I like to play with Twitter’s API, Natural Language Processing, and stocks, why not see if I can combine all of them? Thus, the idea for using Twitter sentiment to predict stocks was born for me. I’ve always been a big fan of Starbucks, both the actual store and the stock. #SBUX has made me a lot of money over the past couple years. So this post will be about the Starbucks stock and seeing how Twitter does in predicting its performance for just one day. Click here to skip directly to the results. This project will be built with two files. You’ll need access to not only the Twitter API linked above, but also to get a free API key from The Text API to do the sentiment analysis part. Make sure you save the Bearer Token from Twitter and your API Key from The Text API in a safe place. I stored them in a config file. You’ll also need to install the requests library for this. You can install that in the command line with the command below: pip install requests Using the Twitter API to Get Tweets About Starbucks As we always do, we’ll get started by importing the libraries we need. We’ll need the requests library we installed earlier to send off HTTP requests to Twitter. 
We will also need the json library to parse the response. I have also imported my Twitter Bearer Token from my config here. As I said above, you may choose to store and access this token however you'd like.

import requests
import json
from config import bearertoken

Once we've imported the libraries and the Bearer Token, we'll set up the endpoint and header for the request. You can find these in the Twitter API documentation. We need to use the recent search endpoint (it only goes back over the last 7 days). The only header we need is the Authorization header passing in the Bearer token.

search_recent_endpoint = ""
headers = {
    "Authorization": f"Bearer {bearertoken}"
}

Creating Our Twitter Search Function

Everything to search the Twitter API is set up and ready to go. The reason we declared the headers and URL outside of the function is that they may be usable in contexts outside of the function. Now let's define our search function. Our search function will take one parameter – a search term in the form of a string. We will use our search term to create a set of search parameters. In this example, we will create a query looking for English Tweets that contain our term, have no links, and are not retweets. We are also going to set the maximum number of returned results to 100. With our parameters, headers, and URL set up, we can now send our request. We use the requests module to send a request and use the json module to parse the text of the returned response. Then we open up a file and save the JSON to that file.
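The query-building step described above can be factored into a small pure function, which makes the filter logic easy to sanity-check without hitting the Twitter API. This is a sketch, and build_search_params is a hypothetical helper name:

```python
def build_search_params(term: str, max_results: int = 100) -> dict:
    # The query asks for English tweets containing `term`, excluding
    # tweets with links and retweets, and caps the page size.
    return {
        "query": f"{term} lang:en -has:links -is:retweet",
        "max_results": max_results,
    }

print(build_search_params("starbucks")["query"])
# starbucks lang:en -has:links -is:retweet
```

Keeping the query construction separate from the HTTP call also makes it trivial to experiment with other Twitter query operators later without touching the request code.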
# automatically builds a search query from the requested term
# looks for english tweets with no links that are not retweets
# saves the latest 100 tweets into a json
def search(term: str):
    params = {
        "query": f'{term} lang:en -has:links -is:retweet',
        'max_results': 100
    }
    response = requests.get(url=search_recent_endpoint, headers=headers, params=params)
    res = json.loads(response.text)
    with open(f"{term}.json", "w") as f:
        json.dump(res, f)

Once we've set up the search function, we simply prompt the user for the term they'd like to search for and then call our search function on that term.

term = input("What would you like to search Twitter for? ")
search(term)

When we run our program, it will look like this: The saved JSON file should look something like this:

Analyzing the Tweets for Average Sentiment

Now let's get into the second part, the fun part, the part you're all here for, the sentiment analysis. As I said earlier, we'll be using The Text API for this. If you don't already have a free API key, go over to the site and grab one. We won't need any Python libraries we didn't already install for using the Twitter API, so we can dive right into the code. I created a second file for this part to follow the rule of modularity in code. You can opt to do this in the same file and you'll only need the third import – The Text API key.

Setting Up and Building the Request

As always, we'll want to get started by importing our libraries. Just like before, we'll need the requests and json libraries and we will do the same thing with them as above – sending the HTTP request and parsing the response. For this file we'll import our API key from The Text API instead of the Twitter Bearer Token. We have a couple of other differences as well. The URL endpoint we'll be hitting is going to be the polarity_by_sentence URL. The headers we need to send will tell the server that we're sending JSON content and also pass in the API key through an apikey keyword.
import requests
import json
from config import text_apikey

text_url = ""
polarity_by_sentence_url = text_url + "polarity_by_sentence"
headers = {
    "Content-Type": "application/json",
    "apikey": text_apikey
}

Just like we did with the Twitter API, we'll need to build a request for The Text API. Our build_request function will take in a term in the form of a string. We'll use this term to open the corresponding JSON file. Then we'll combine all the text from the tweets to form a final text string that we will send to The Text API to be analyzed. Finally, we'll create a body in the form of a JSON that we will send to the endpoint and return that JSON body.

# build request
def build_request(term: str):
    with open(f"{term}.json", "r") as f:
        entries = json.load(f)
    text = ""
    for entry in entries["data"]:
        text += entry["text"] + " "
    body = {
        "text": text
    }
    return body

Getting the Average Sentiment

Okay, so here we're actually going to get the average Text Polarity, but that's about synonymous with sentiment. Text polarity tells us how positive or negative a piece of text was; sentiment is usually used in the same way. Let's create a polarity_analysis function that will take in a dictionary as a parameter. The dictionary input will be the JSON that we send as the body of the request to the polarity_by_sentence endpoint. Once we get our response back and parse it to get the list of polarities and sentences, we can calculate the average polarity. For an idea of what the response looks like, check out the documentation. Once we have the response, all we have to do is calculate the average polarity. The way the response is structured, the first element of an entry is the polarity and the second is the sentence text. For our use case, we just care about the polarity. We are also going to ignore neutral sentences because they're entirely useless and don't affect whether the overall outcome will be positive or negative.
They could affect the absolute value of the outcome, and maybe we could take that into account, but as long as we approach these problems with the same method each time, it won't matter.

# get average sentence polarity
def polarity_analysis(body: dict):
    response = requests.post(url=polarity_by_sentence_url, headers=headers, json=body)
    results = json.loads(response.text)["polarity by sentence"]
    # initialize average polarity score and count
    avg_polarity = 0.0
    count = 0
    # loop through all the results
    for res in results:
        # ignore the neutral ones
        if res[0] == 0.0:
            continue
        avg_polarity += res[0]
        count += 1
    # average em out (assumes at least one non-neutral sentence)
    avg_polarity = avg_polarity/count
    print(avg_polarity)

Twitter Sentiment on Starbucks Over Thanksgiving Weekend 2021

When we run this program, we'll get something that looks like the following. I've pulled Tweets about Starbucks from Saturday, Sunday, and Monday morning (today) and renamed the files from their original names (starbucks.json). It looks like Sunday was a little less positive than Saturday or Monday, but overall the Twitter sentiment towards Starbucks is positive. I predict the stock price will go up today. Let's see by how much. "Twitter Sentiment for Stocks? Starbucks 11/29/21"
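The neutral-skipping average inside polarity_analysis is pure arithmetic, so it can be pulled out into a standalone function and checked offline against a made-up polarity-by-sentence list in the shape the endpoint returns (polarity first, sentence second). The function name and sample data here are illustrative, not from the original post:

```python
def average_polarity(results: list) -> float:
    """Average the sentence polarities, skipping neutral (0.0) entries,
    the same way the loop in polarity_analysis does."""
    scores = [res[0] for res in results if res[0] != 0.0]
    if not scores:  # guard: every sentence was neutral
        return 0.0
    return sum(scores) / len(scores)

fake_results = [
    [0.5, "Starbucks holiday cups are back!"],
    [0.0, "I walked past a Starbucks today."],  # neutral, ignored
    [-0.25, "The line was way too long."],
]
print(average_polarity(fake_results))  # 0.125
```

The guard also covers the edge case the original loop misses: if every sentence is neutral, dividing by count would raise ZeroDivisionError.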
https://pythonalgos.com/twitter-sentiment-for-stocks-starbucks-11-29-21/
CC-MAIN-2022-27
refinedweb
1,545
71.14
NAME
    dfindnm - find next matching record using secondary index

SYNOPSIS
    #include <cbase/dirio.h>

    rno_t dfindnm(buffer, fcb)
    char *buffer;
    DFILE *fcb;

DESCRIPTION
    Dfindnm searches for duplicate secondary keys. Before calling dfindnm, call dseti(C-3) to select the desired secondary index. Call dfindm(C-3) to find the first key value in the secondary index. Each subsequent call to dfindnm returns the next record in secondary key order whose keys match the record returned by dfindm(C-3). If dnumidx(C-3) has been called prior to this function, only the first n fields of the current index are compared; otherwise, all fields of the current index are compared. The beginning contents of buffer are ignored by dfindnm. Dfindnm maintains the current position in the secondary index, so additional records may be fetched in secondary key order by calling the dfindnm function repeatedly. Fcb is the file block pointer returned by dlopen(C-3) or dopen(C-3). The returned record number may be saved and used on a subsequent call to dread(C-3).

SEE ALSO
    dlopen(C-3), dopen(C-3), dseti(C-3), dfindm(C-3), dread(C-3), dfindpm(C-3), dnumidx(C-3)
    Chapter 4, RMS Programming Guide

DIAGNOSTICS
    Dfindnm returns a value of BAD (-1) if an I/O error occurred or if there is no record that contains field values equal to the last values returned. If no record is found, the contents of buffer are undefined.
http://www.conetic.com/helpdoc/cbautil/cbautil00000057.html
CC-MAIN-2017-47
refinedweb
245
61.87
> -----Original Message-----
> From: Martin Sebor [mailto:sebor@roguewave.com]
> Sent: Monday, June 04, 2007 9:36 PM
> To: stdcxx-dev@incubator.apache.org
> Subject: Re: [PING] Re: svn commit: r541672 -
> /incubator/stdcxx/trunk/include/rw/_config-msvc.h
>
> > What I see, is that we could remove all section between
> > #if _MSC_VER <= 1200 // MSVC <= 6.0
> > and
> > #endif // MSVC <= 6.0
> > because of the all macros are defined in config.h at the
> configure
> > step.
>
> I think the reason why they are defined in _config_msvc.h is
> because the autoconfigured ones are (or were at some point)
> incorrectly detected. That certainly may have changed so we
> need to check to make sure they are now detected correctly
> before we take out the overriding definitions.

I had not read the comments carefully. Since that section is related to MSVC 6.0 and earlier, I can't check whether those macros are detected correctly or not.

Farid.
http://mail-archives.apache.org/mod_mbox/incubator-stdcxx-dev/200706.mbox/%3C7BDB2168BEAEF14C98F1901FD2DE64388D1941@epmsa009.minsk.epam.com%3E
CC-MAIN-2016-30
refinedweb
154
61.12
#include <gl_box.h>

List of all members.

FLTK uses an Fl_Gl_Window widget to provide access to OpenGL from an FLTK application. The simplest way to use Fl_Gl_Window is to derive your own class from it and override the draw and handle functions. The draw function is used to do all OpenGL drawing. It is called by the FLTK code whenever the window needs to be drawn. Sometimes when the draw function is called, the window's shape will have changed. This can be detected by calling the valid function. When valid returns false, it is necessary to reset the window size and viewing parameters for OpenGL. The handle function is called with mouse and keyboard events. This class uses mouse events in the gl_box to control rotation of the data displayed using OpenGL. The code for rotation is extracted from the GLUI code by Paul Rademacher. GLUI provides a nice set of user interface widgets which are written using OpenGL and GLUT. The GLUI_Rotation widget provides an excellent tool for rotating an OpenGL object. After a short reading of the GLUI_Rotation code, I decided to try to use the classes below the GLUI_Rotation widget, all of which perform only numerical calculations. The separation of GUI code from the mathematics was perfect and there was not much required in gl_box to achieve the same sort of smooth rotations as in GLUI.
http://animp.sourceforge.net/classgl__box.html
CC-MAIN-2017-43
refinedweb
233
72.87
"Pearu Peterson" <pearu at cens.ioc.ee> wrote in message news:mailman.1031563647.10076.python-list at python.org... > > On 4 Sep 2002, Skip Montanaro wrote: > > > > > I don't want to wake any sleeping dogs, however, as I'm sitting here > > watching a bunch of Fortran function names fly by as SciPy builds, I'm > > reminded of the fairly recent threads about backward compatibility, > > Python-in-a-tie, etc. Here I am compiling with a Fortran 90/95 compiler > > (Sun's Forte thing-a-ma-bob) and see function names like > > > > lpni: > > klvna: > > chgubi: > > cyzo: > > klvnb: > > rmn2so: > > csphik: > > > > spew forth. So, while it's great that this same large library compiles and > > runs on compilers back to at least Fortran 77 (and probably earlier), > > programmers are still stuck with the same cryptic function and data names > > they had to deal with 30+ years ago, all in the name of backward > > compatibility. > > I don't think that its is due to the backward compatibility. > There is huge amount of Fortran 77 code available and if one wants to fix > the cryptic names of functions to something more meaningful, then one has > to switch to newer Fortran standard (that allows longer names) and edit > huge amount of F77 code. I think nobody is willing to take this task due > to enormous amount of work (and it cannot be fully automated, some > brain has to deside what are meaningful names), the backward compatibility > is a secondary issue (if an issue at all). > > > What's the Python connection? Other than a reminder not to get to slavish > > about backward compatibility, I note that these same function names will > > then go on to pollute the Python namespace because all this Fortran code is > > automatically wrapped using f2py. > > The Fortran function names, that you refer to, are visible only in the > extension modules that are wrapping these functions (this is due to how > Python imports shared modules). 
All functions that f2py generates, > have the f2py_ prefix to avoid name collision with names from Python > or any other library. > So, I don't understand your argument about name pollution. > > And btw, f2py supports mapping function names so that wrapping a Fortran > function with a name `tiarcf' can be accessed in Python side as > `this_is_a_really_cool_function', for example. > > Pearu > >
https://mail.python.org/pipermail/python-list/2002-September/137045.html
CC-MAIN-2017-04
refinedweb
380
55.78
Back to: ASP.NET Web API Tutorials For Beginners and Professionals

How to use Swagger in Web API Application?

In this article, I am going to discuss how to use Swagger in a Web API application. As part of this article, we are going to discuss the following pointers.
- What is Swagger?
- How to Add Swagger to a Web API Application?
- How to Configure Swagger in ASP.NET Web API?
- Understanding the Swagger UI.
- How to enable Swagger to use XML comments?

What is Swagger?
Swagger is a simple but powerful representation of a RESTful API. Nowadays, most developers use Swagger in almost every modern programming language and deployment environment to document their APIs. With a Swagger-enabled Web API, you will get interactive documentation, client SDK generation, as well as discoverability.

How to Add Swagger to a Web API Project?
To add Swagger to your ASP.NET Web API project, you need to install an open-source project called Swashbuckle via NuGet, as shown below. Once the package is installed successfully, navigate to the App_Start folder in the Solution Explorer. You will find a new file called SwaggerConfig.cs. This is the file where Swagger is enabled and any configuration options should be set.

How to Configure Swagger in ASP.NET Web API Application?
To enable Swagger and Swagger UI, modify the SwaggerConfig class as shown below:

namespace FirstWebAPIDemo
{
    public class SwaggerConfig
    {
        public static void Register()
        {
            var thisAssembly = typeof(SwaggerConfig).Assembly;

            GlobalConfiguration.Configuration
                .EnableSwagger(c => c.SingleApiVersion("v1", "First WEB API Demo"))
                .EnableSwaggerUi();
        }
    }
}

Start a new debugging session by pressing the F5 key and navigate to:[PORT_NUM]/swagger and then you should see the help pages for your APIs. Ok. That's cool. Now expand an API and then click on the "Try it out!" button, which will make a call to that specific API and return results, as shown in the below image. Here, click on the Try it out! button, which will display the result as shown below.
In the same way, you can test all other methods.

How to enable Swagger to use XML Comments in ASP.NET Web API Application?

The configuration, so far:

namespace FirstWebAPIDemo
{
    public class SwaggerConfig
    {
        public static void Register()
        {
            var thisAssembly = typeof(SwaggerConfig).Assembly;

            GlobalConfiguration.Configuration
                .EnableSwagger(c =>
                {
                    c.SingleApiVersion("v1", "First WEB API Demo");
                    c.IncludeXmlComments(string.Format(@"{0}\bin\FirstWebAPIDemo.XML", System.AppDomain.CurrentDomain.BaseDirectory));
                })
                .EnableSwaggerUi();
        }
    }
}

Let's add some XML documentation to our API methods, as shown below. Here we are adding an XML document comment to the Get method. Modify the Get method as shown below:

/// <summary>
/// Get All the Values
/// </summary>
/// <remarks>
/// Get All the String Values
/// </remarks>
/// <returns></returns>
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}

Run the application and navigate back to /swagger. You should see more details added to your API documentation, as shown below.

In the next article, I am going to discuss how to use Fiddler to test ASP.NET Web API services. Here, in this article, I tried to explain how to use Swagger in a Web API application to document and test ASP.NET Web API services. I hope you now have a good understanding of how to use Swagger in an ASP.NET Web API application.
https://dotnettutorials.net/lesson/how-to-use-swagger-in-web-api/
CC-MAIN-2020-05
refinedweb
555
51.55
AWS Developer Tools Blog

Storing JSON documents in Amazon DynamoDB tables

DynamoDBMapper is a high-level abstraction layer in the AWS SDK for Java that allows you to transform Java objects into items in Amazon DynamoDB tables and vice versa. All you need to do is annotate your Java class in a few places, and the mapper takes care of getting the objects in and out of the database. DynamoDBMapper has a new feature that allows you to save an object as a JSON document in a DynamoDB attribute. To do this, simply annotate the class with @DynamoDBDocument, and the mapper does the heavy work of converting the object into a JSON document and storing it in DynamoDB. DynamoDBMapper also takes care of loading the Java object from the JSON document when requested by the user. Let's say your application maintains the inventory of a car dealership in Amazon DynamoDB and uses DynamoDBMapper to save and retrieve data. One of the tables is Car, which holds information about a car and has name as its primary key.
Here is how the Java class looks for the table:

@DynamoDBTable(tableName = "Car")
public class Car {
    private String name;
    private int year;
    private String make;
    private List<String> colors;
    private Spec spec;

    @DynamoDBHashKey
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getYear() { return year; }
    public void setYear(int year) { this.year = year; }

    public String getMake() { return make; }
    public void setMake(String make) { this.make = make; }

    public List<String> getColors() { return colors; }
    public void setColors(List<String> colors) { this.colors = colors; }

    public Spec getSpec() { return spec; }
    public void setSpec(Spec spec) { this.spec = spec; }
}

@DynamoDBDocument
public class Spec {
    private String engine;
    private String wheelbase;
    private String length;
    private String width;
    private String height;

    public String getEngine() { return engine; }
    public void setEngine(String engine) { this.engine = engine; }

    public String getWheelbase() { return wheelbase; }
    public void setWheelbase(String wheelbase) { this.wheelbase = wheelbase; }

    public String getLength() { return length; }
    public void setLength(String length) { this.length = length; }

    public String getWidth() { return width; }
    public void setWidth(String width) { this.width = width; }

    public String getHeight() { return height; }
    public void setHeight(String height) { this.height = height; }
}

As you can see, the class Spec is modeled with a @DynamoDBDocument annotation. DynamoDBMapper converts an instance of Spec into a JSON document before storing it in DynamoDB.
When stored in DynamoDB, an instance of the class Car will look like this:

{
    "name" : "IS 350",
    "year" : "2015",
    "make" : "Lexus",
    "colors" : ["black", "white", "grey"],
    "spec" : {
        "engine" : "V6",
        "wheelbase" : "110.2 in",
        "length" : "183.7 in",
        "width" : "71.3 in",
        "height" : "56.3 in"
    }
}

You can also apply other DynamoDBMapper annotations like @DynamoDBIgnore and @DynamoDBAttribute to the JSON document. For instance, model the height attribute of the Spec class with @DynamoDBIgnore:

@DynamoDBIgnore
public String getHeight() { return height; }
public void setHeight(String height) { this.height = height; }

The updated item in DynamoDB will look like this:

{
    "name" : "IS 350",
    "year" : "2015",
    "make" : "Lexus",
    "colors" : ["black", "white", "grey"],
    "spec" : {
        "engine" : "V6",
        "wheelbase" : "110.2 in",
        "length" : "183.7 in",
        "width" : "71.3 in"
    }
}

To learn more, check out our other blog posts and the developer guide:
- Specifying Conditional Constraints with Amazon DynamoDB Mapper.
- Client-side Encryption for Amazon DynamoDB.
- Using the SaveBehavior Configuration for the DynamoDBMapper.
- DynamoDBMapper developer guide.

Do you want to see new features in DynamoDBMapper? Let us know what you think!
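If you want to reason about the stored document shape outside of Java, the same item can be modeled with plain dictionaries. This is only an illustration of the layout described above, written in Python with no AWS SDK involved:

```python
import json

# Plain-dict model of the Car item above; "spec" is the nested JSON
# document that DynamoDBMapper produces from the @DynamoDBDocument class.
car_item = {
    "name": "IS 350",
    "year": "2015",
    "make": "Lexus",
    "colors": ["black", "white", "grey"],
    "spec": {
        "engine": "V6",
        "wheelbase": "110.2 in",
        "length": "183.7 in",
        "width": "71.3 in",
    },
}

# As with @DynamoDBIgnore on getHeight(), an ignored field simply
# never appears in the serialized document.
print(json.dumps(car_item["spec"], indent=2))
```

The key observation is that ignoring a field is not the same as storing it as null: the attribute is absent from the document entirely, which matters for code that inspects the item's keys.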
https://aws.amazon.com/blogs/developer/storing-json-documents-in-amazon-dynamodb-tables/
CC-MAIN-2021-49
refinedweb
561
64.81
Get the highlights in your inbox every week. How to add a player to your Python game How to add a player to your Python game Part three of a series on building a game from scratch with Python. Subscribe now In the first article of this series, I explained how to use Python to create a simple, text-based dice game. In the second part, I showed you how to build a game from scratch, starting with creating the game's environment. But every game needs a player, and every player needs a playable character, so that's what we'll do next in the third part of the series. In Pygame, the icon or avatar that a player controls is called a sprite. If you don't have any graphics to use for a player sprite yet, create something for yourself using Krita or Inkscape. If you lack confidence in your artistic skills, you can also search OpenClipArt.org or OpenGameArt.org for something pre-generated. Then, if you didn't already do so in the previous article, create a directory called images alongside your Python project directory. Put the images you want to use in your game into the images folder. To make your game truly exciting, you ought to use an animated sprite for your hero. It means you have to draw more assets, but it makes a big difference. The most common animation is a walk cycle, a series of drawings that make it look like your sprite is walking. The quick and dirty version of a walk cycle requires four drawings. Note: The code samples in this article allow for both a static player sprite and an animated one. Name your player sprite hero.png. If you're creating an animated sprite, append a digit after the name, starting with hero1.png. Create a Python class In Python, when you create an object that you want to appear on screen, you create a class. Near the top of your Python script, add the code to create a player. 
In the code sample below, the first three lines are already in the Python script that you're working on:

import pygame
import sys
import os

# new code below
class Player(pygame.sprite.Sprite):
    '''
    Spawn a player
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.images = []
        img = pygame.image.load(os.path.join('images', 'hero.png')).convert()
        self.images.append(img)
        self.image = self.images[0]
        self.rect = self.image.get_rect()

If you have a walk cycle for your playable character, save each drawing as an individual file called hero1.png to hero4.png in the images folder. Use a loop to tell Python to cycle through each file.

'''
Objects
'''

class Player(pygame.sprite.Sprite):
    '''
    Spawn a player
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.images = []
        for i in range(1, 5):
            img = pygame.image.load(os.path.join('images', 'hero' + str(i) + '.png')).convert()
            self.images.append(img)
        self.image = self.images[0]
        self.rect = self.image.get_rect()

Bring the player into the game world

Now that a Player class exists, you must use it to spawn a player sprite in your game world. If you never call on the Player class, it never runs, and there will be no player. You can test this out by running your game now. The game will run just as well as it did at the end of the previous article, with the exact same results: an empty game world.

To bring a player sprite into your world, you must call the Player class to generate a sprite and then add it to a Pygame sprite group. In this code sample, the first three lines are existing code, so add the lines afterwards:

world = pygame.display.set_mode([worldx, worldy])
backdrop = pygame.image.load(os.path.join('images', 'stage.png')).convert()
backdropbox = screen.get_rect()

# new code below
player = Player()   # spawn player
player.rect.x = 0   # go to x
player.rect.y = 0   # go to y
player_list = pygame.sprite.Group()
player_list.add(player)

Try launching your game to see what happens.
Warning: it won't do what you expect. When you launch your project, the player sprite doesn't spawn. Actually, it spawns, but only for a millisecond. How do you fix something that only happens for a millisecond? You might recall from the previous article that you need to add something to the main loop. To make the player spawn for longer than a millisecond, tell Python to draw it once per loop. Change the bottom clause of your loop to look like this:

world.blit(backdrop, backdropbox)
player_list.draw(screen)  # draw player
pygame.display.flip()
clock.tick(fps)

Launch your game now. Your player spawns!

Setting the alpha channel

Depending on how you created your player sprite, it may have a colored block around it. What you are seeing is the space that ought to be occupied by an alpha channel. It's meant to be the "color" of invisibility, but Python doesn't know to make it invisible yet. What you are seeing, then, is the space within the bounding box (or "hit box," in modern gaming terms) around the sprite.

You can tell Python what color to make invisible by setting an alpha channel and using RGB values. If you don't know the RGB values your drawing uses as alpha, open your drawing in Krita or Inkscape and fill the empty space around your drawing with a unique color, like #00ff00 (more or less a "greenscreen green"). Take note of the color's hex value (#00ff00, for greenscreen green) and use that in your Python script as the alpha channel.

Using alpha requires the addition of two lines in your Sprite creation code. Some version of the first line is already in your code. Add the other two lines:

img = pygame.image.load(os.path.join('images', 'hero' + str(i) + '.png')).convert()
img.convert_alpha()     # optimise alpha
img.set_colorkey(ALPHA) # set alpha

Python doesn't know what to use as alpha unless you tell it. In the setup area of your code, add some more color definitions.
Add this variable definition anywhere in your setup section:

ALPHA = (0, 255, 0)

In this example code, 0,255,0 is used, which is the same value in RGB as #00ff00 is in hex. You can get all of these color values from a good graphics application like GIMP, Krita, or Inkscape. Alternately, you can also detect color values with a good system-wide color chooser, like KColorChooser. If your graphics application is rendering your sprite's background as some other value, adjust the values of your alpha variable as needed. No matter what you set your alpha value to, it will be made "invisible." RGB values are very strict, so if you need to use 000 for alpha, but you need 000 for the black lines of your drawing, just change the lines of your drawing to 111, which is close enough to black that nobody but a computer can tell the difference.

Launch your game to see the results. In the fourth part of this series, I'll show you how to make your sprite move. How exciting!

1 Comment

I think there are some faults in the code:
1. In the class 'Player': The indentation is wrong. The four last lines should be part of '__init__'. Otherwise 'self' is not recognised.
2. At the beginning of the code you should put 'pygame.display.set_mode()', because otherwise you'll get an error.
3. You should replace 'screen' in the main loop with 'world', because that's how the screen is named. 'screen' is used as an argument by player_list.draw().

These are my remarks, though the rest of the program is very well done. I already learned some new/useful functions and attributes. The bugs can also be due to the operating system you're using or the version of Python.
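As a side note to the hex-versus-RGB discussion above (#00ff00 is the same color as (0, 255, 0)), the conversion is easy to compute in plain Python. This helper is illustrative and not part of the article's game code:

```python
def hex_to_rgb(hex_color):
    """Convert a '#rrggbb' hex color string to an (r, g, b) tuple."""
    hex_color = hex_color.lstrip('#')
    # Parse each pair of hex digits as one 0-255 channel value.
    return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))

# The greenscreen green used for the alpha channel in the article:
ALPHA = hex_to_rgb('#00ff00')   # (0, 255, 0)
```

The same helper shows why 111111 is "close enough to black": it maps to (17, 17, 17), only slightly off from (0, 0, 0).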
https://opensource.com/article/17/12/game-python-add-a-player
A proxy for DisplayObject pointers. More...

#include <CharacterProxy.h>

A proxy for DisplayObject pointers. The proxy will store a pointer to a DisplayObject until the DisplayObject is destroyed, in which case it will only store the original target path of it and always use that for rebinding when needed.

Construct a CharacterProxy pointing to the given sprite.

Construct a copy of the given CharacterProxy.

Get the pointed sprite, either original or rebound.
Referenced by operator==(), gnash::as_value::to_string(), and gnash::as_value::toDebugString().

Get the sprite target, either current (if not dangling) or the last bound one.
References gnash::DisplayObject::getTarget().
Referenced by gnash::as_value::to_string(), and gnash::as_value::toDebugString().

Return true if this sprite is dangling. Dangling means that it doesn't have a pointer to the original sprite anymore, not that it doesn't point to anything. To know if it points to something or not use get(), which will return NULL if it doesn't point to anything.
Referenced by gnash::as_value::toDebugString().

Make this proxy a copy of the given one.

Set the original sprite (if any) as reachable. NOTE: if this value is dangling, we won't keep anything alive.
References gnash::GcResource::setReachable().
Referenced by gnash::as_value::setReachable().
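The rebinding behavior described here (keep a live pointer until the object dies, then fall back to re-resolving the stored target path) can be sketched in Python. The stage registry, the Sprite class, and all names below are illustrative, not Gnash's actual API:

```python
import weakref

class CharacterProxy:
    """Sketch of the rebinding idea: hold a weak reference plus the
    original target path, and re-resolve by path once the object dies."""

    def __init__(self, sprite, stage):
        self._ref = weakref.ref(sprite)
        self._path = sprite.target   # remembered for later rebinding
        self._stage = stage          # maps target path -> live sprite

    def is_dangling(self):
        # Dangling: the original pointer is gone (not "points to nothing").
        return self._ref() is None

    def get(self):
        sprite = self._ref()
        if sprite is None:           # rebind using the stored target path
            sprite = self._stage.get(self._path)
        return sprite                # may be None if nothing is bound

    def get_target(self):
        sprite = self._ref()
        return sprite.target if sprite is not None else self._path

class Sprite:
    def __init__(self, target):
        self.target = target

stage = {}
hero = Sprite("/root/hero")
proxy = CharacterProxy(hero, stage)
assert proxy.get() is hero and not proxy.is_dangling()

del hero                                      # original sprite destroyed
stage["/root/hero"] = Sprite("/root/hero")    # a rebindable replacement
rebound = proxy.get()                         # resolved via the stored path
```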
http://gnashdev.org/doc/html/classgnash_1_1CharacterProxy.html
Hey guys, I just started using Rails and Ruby this week. I come from PHP and I LOVE RAILS. It is so clean and fresh I wanted to create a very simple that has no database representation … I found several active_form approaches. My problem is to use those classes after installing them, maybe that is a real newbie quetsion So what I did: $ rails plugin install Do I have to do anything else after that in order to have that plugin available in may code? Because I get the error: “uninitialized constant ActiveForm” #app/controllers/register.rb class Register < ActiveForm def index @register_form = Register.new end end #app/models/register.rb class Register < ActiveForm end Thank you so much in advance, Philipp
https://www.ruby-forum.com/t/using-a-gem-plugin/214692
If you follow me, you know that this year I started a series called Weekly Digest for Data Science and AI: Python & R, where I highlighted the best libraries, repos, packages, and tools that help us be better data scientists for all kinds of tasks. The great folks at Heartbeat sponsored a lot of these digests, and they asked me to create a list of the best of the best: those libraries that really changed or improved the way we worked this year (and beyond). If you want to read the past digests, take a look here:

Disclaimer: This list is based on the libraries and packages I reviewed in my personal newsletter. All of them were trending in one way or another among programmers, data scientists, and AI enthusiasts. Some of them were created before 2018, but if they were trending, they could be considered.

AdaNet is a lightweight and scalable TensorFlow AutoML framework for training and deploying adaptive neural networks using the AdaNet algorithm [Cortes et al. ICML 2017]. AdaNet combines several learned subnetworks in order to mitigate the complexity inherent in designing effective neural networks.

This package will help you select optimal neural network architectures, implementing an adaptive algorithm for learning a neural architecture as an ensemble of subnetworks. You will need to know TensorFlow to use the package because it implements a TensorFlow Estimator, but it will help you simplify your machine learning programming by encapsulating training as well as evaluation, prediction, and export for serving.

You can build an ensemble of neural networks, and the library will help you optimize an objective that balances the trade-offs between the ensemble's performance on the training set and its ability to generalize to unseen data.

adanet depends on bug fixes and enhancements not present in TensorFlow releases prior to 1.7.
You must install or upgrade your TensorFlow package to at least 1.7:

$ pip install "tensorflow>=1.7.0"

To install from source, you'll first need to install bazel following their installation instructions. Next clone adanet and cd into its root directory:

$ git clone && cd adanet

From the adanet root directory run the tests:

$ cd adanet
$ bazel test -c opt //...

Once you have verified that everything works well, install adanet as a pip package. You're now ready to experiment with adanet:

import adanet

Here you can find two examples of the usage of the package:

You can read more about it in the original blog post:
https://www.tefter.io/bookmarks/66302/readable
Introduction

We will be talking about:
- Spidering/Scraping
- How to do it elegantly in Python
- Limitations and restrictions

In the previous posts, I shared some of the methods of text mining and analytics, but one of the major and most important tasks before analytics is getting the data which we want to analyze. Text data is present all over in the form of blogs, articles, news, social feeds, posts, etc., and most of it is distributed to users in the form of APIs, RSS feeds, bulk downloads, and subscriptions. Some sites do not provide any means of pulling the data programmatically; this is where scraping comes into the picture.

Note: Scraping information from sites which are not free or are not publicly available can have serious consequences.

Web Scraping is a technique of getting a web page in the form of HTML and parsing it to get the desired information. HTML is very complex in itself due to loose rules and a large number of attributes. Information can be scraped in two ways:
- Manually filtering using regular expressions
- Python's way - Beautiful Soup

In this post, we will be discussing Beautiful Soup's way of scraping.

Beautiful Soup

As per the definition in its documentation: "Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching and modifying the parse tree. It commonly saves programmers hours or days of work."

If you have ever tried something like parsing texts and HTML documents, then you will understand how brilliantly this module is built and how much of a programmer's work and time it really saves.

Let's start with Beautiful Soup.

Installation

I hope Python is installed on your system. To install Beautiful Soup you can use pip:

pip install beautifulsoup4

Getting Started

Problem 1: Getting all the links from a page.
For this problem, we will use a sample HTML string which has some links, and our goal is to get all the links.

html_doc = """
<html>
<body>
<a href="">Google</a> <br>
<a href="">Apple</a> <br>
<a href="">Yahoo</a> <br>
<a href="">MSDN</a>
</body>
</html>
"""

# to import the package
from bs4 import BeautifulSoup

# creating an object of BeautifulSoup and passing 2 parameters:
# 1) the html to be scanned
# 2) the parser to be used (html parser, lxml parser, etc.)
soup = BeautifulSoup(html_doc, "html.parser")

# to find all the anchor tags in the html string
# findAll returns a list of tags, in this case anchors (to get the first one we can use find)
anchors = soup.findAll('a')

# getting links from anchor tags
for a in anchors:
    print a.get('href')  # get is used to get the attributes of a tag's element
    # print a['href'] can also be used to access the attribute of a tag

This is it: just 5-6 lines to get any tag from the HTML, iterate over it, and find some attributes. Can you think of doing this with the help of regular expressions? It would be one heck of a job doing it with REs. We can see how well the module is coded to perform all these functions.

Talking about the parsers (the one we have passed while creating a Beautiful Soup object), we have multiple choices of parsers. This table summarizes the advantages and disadvantages of each parser library:

Parser | Typical usage | Advantages | Disadvantages
Python's html.parser | BeautifulSoup(markup, "html.parser") | Batteries included; decent speed; lenient (as of Python 2.7.3 and 3.2) | Not very lenient (before Python 2.7.3 or 3.2.2)
lxml's HTML parser | BeautifulSoup(markup, "lxml") | Very fast; lenient | External C dependency
lxml's XML parser | BeautifulSoup(markup, "lxml-xml") or BeautifulSoup(markup, "xml") | Very fast; the only currently supported XML parser | External C dependency
html5lib | BeautifulSoup(markup, "html5lib") | Extremely lenient; parses pages the same way a web browser does; creates valid HTML5 | Very slow; external Python dependency

Other Methods and Usage

Beautiful Soup is a vast library and can do, in just a single line, things which would otherwise be too difficult. Some of the methods for searching tags in HTML are:

# finding by ID
soup.find(id='abc')

# finding through a regex
# limit the return to 2 tags
soup.find_all(re.compile("^a"), limit=2)

# finding multiple tags
soup.find_all(['a', 'h1'])

# find by custom or built-in attributes
soup.find_all(attrs={'data': 'abc'})

Problem 2: In the above example, we are using an HTML string for parsing; now we will see how we can hit a URL and get the HTML for that page, and then parse it in the same manner as we did for the HTML string above.

For this we will be using the urllib3 package of Python. It can be easily installed by the following command:

pip install urllib3

Documentation for urllib3 can be seen here.
import urllib3

http = urllib3.PoolManager()

# hitting the url
r = http.request('GET', '')

# creating a soup object using html from the link
soup = BeautifulSoup(r.data, "html.parser")

# getting whole text from the wiki page
text = soup.text

# getting all the links from the wiki page
links = soup.find_all('a')

# iterating over the new pages and getting text from them
# this can be done in a recursive fashion to parse a large number of pages
for link in links:
    href = link.get('href')
    new_url = '' + href
    http = urllib3.PoolManager()
    r_new = http.request('GET', new_url)
    # do something with the new page
    new_text = r_new.data

# getting source of all the images
src = soup.find('img').get('src')

This was just a basic introduction to web scraping using Python. Much more can be achieved using the packages used in this tutorial. This article can serve as a starting point.

Points to Remember

Web Scraping is very useful in gathering data for different purposes like data mining, knowledge creation, data analysis, etc., but it should be done with care. As a basic rule of thumb, we should not scrape anything which is paid content. That being said, we should comply with the robots.txt file of the site to know the areas which can be crawled. It is very important to look into the legal implications before scraping.

Hope the article was informative.
-- TechScouter (JSC)
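As a footnote to the parser table above: Python's "batteries included" html.parser can also be used directly from the standard library, without Beautiful Soup, when all you need is something simple like Problem 1's link extraction. A minimal sketch (the sample URLs are made up):

```python
from html.parser import HTMLParser  # stdlib, Python 3

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

html_doc = """
<html><body>
<a href="https://example.com/a">A</a> <br>
<a href="https://example.com/b">B</a>
</body></html>
"""

collector = LinkCollector()
collector.feed(html_doc)
```

After feed() returns, collector.links holds both hrefs in document order. For anything beyond flat extraction, Beautiful Soup's tree navigation remains the far more convenient tool.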
http://blog.agileactors.com/blog/2017/9/20/spidering-the-web
I have a function in Python (assume that I have imported all necessary modules):

def DL_Iperf(args):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(server_ip, username="root", password=Password)
    # some_code

This function is actually a thread, and as many threads will be created as the number of UEs I have (e.g., if I have 1 UE then 1 thread will be created). With 1 UE or 2 UEs it is working, but with 3 UEs it fails with the error "Paramiko : Error reading SSH protocol banner". Below is the stderr of the script:

No handlers could be found for logger "paramiko.transport"
Unhandled exception in thread started by <function DL_Iperf at 0x02B8ACF0>
Traceback (most recent call last):
  File "C:\Users\qxdm-5\Desktop\Chirag\LTE_11_Perfect_Working\TCP_Latest_2\Windows_UE\slave.py", line 379, in DL_Iperf
    ssh.connect(ServerIp, username="root", password=Pwd)
  File "build\bdist.win32\egg\paramiko\client.py", line 295, in connect
  File "build\bdist.win32\egg\paramiko\transport.py", line 451, in start_client
paramiko.SSHException: Error reading SSH protocol banner

From some references I found that this is because of some network-related issue, but my question is: if it is network related, then why do I get this error on every 3rd call of the function? And how do I resolve it?
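This banner error is often attributed to opening many SSH handshakes against the same server at once, so common mitigations are raising paramiko's banner timeout and/or throttling how many threads connect simultaneously. Below is a stdlib-only sketch of the throttling idea, with a stubbed connect standing in for paramiko; the names fake_ssh_connect and MAX_CONCURRENT and the limit of 2 are all illustrative assumptions, not a confirmed fix for this exact setup:

```python
import threading
import time

MAX_CONCURRENT = 2                       # illustrative throttle value
gate = threading.BoundedSemaphore(MAX_CONCURRENT)
results = []
results_lock = threading.Lock()

def fake_ssh_connect(host):
    """Stand-in for ssh.connect(); the real paramiko call would go here."""
    time.sleep(0.01)                     # pretend to perform the handshake
    return "connected:" + host

def dl_iperf(host):
    with gate:                           # at most MAX_CONCURRENT handshakes at once
        result = fake_ssh_connect(host)
    with results_lock:
        results.append(result)

threads = [threading.Thread(target=dl_iperf, args=("ue%d" % i,))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All three threads still run; they just queue up for the connect step instead of hammering the server simultaneously.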
https://www.queryhome.com/tech/90816/getting-paramiko-error-reading-protocol-banner-below-solution?show=90830
This post will describe a technique for shortening an integer by approximately 50%. For example, the approximate coordinate extent of the continental United States in web Mercator can be expressed in the following JSON representation:

{"xmin":-15438951,"ymin":1852835,"xmax":-5420194,"ymax":7485635}

…or simplified to:

-15438951,1852835,-5420194,7485635

…or shortened to:

-ji5l,2sk},cGyR,70o#

The principle being employed here is that numbers are normally expressed using the decimal, or base ten, numerical system. This system requires ten characters (or "digits"), 0 to 9, to formulate a number. However, because ASCII has 95 printable characters, we can encode numbers using base 93, which can result in printed strings that are 50% shorter than their decimal cousins. For example:

-15438951 to -ji5l, or 1852835 to 2sk}

Why not use all 128 ASCII characters? Well, 33 are non-printable, and the characters "," and "-" are reserved for delimiter and negation respectively.

Lastly, to reduce the overall length of extents, the xmax is expressed as a width and ymax as a height. At small scales the benefit of this technique is negligible (if any) but significant at large scales. Please see below for source code and test method.
using System;
using System.Diagnostics;
using System.Globalization;
using System.Linq;

namespace ESRI.PrototypeLab.Base93 {
    public static class Base93Extension {
        private const string CHRS =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" +
            " !?\"'`^#$%@&*+=/.:;|\\_<>[]{}()~";

        public static string ToShortenedString(this int num) {
            string result = string.Empty;
            if (num < 0) {
                result += "-";
                num = Math.Abs(num);
            }
            int len = CHRS.Length;
            int index = 0;
            while (num >= Math.Pow(len, index)) {
                index++;
            }
            index--;
            while (index >= 0) {
                int pow = (int)Math.Pow(len, index);
                int div = num / pow;
                result += CHRS[div];
                num -= pow * div;
                index--;
            }
            return result;
        }

        public static int ToUncompressedInteger(this string s) {
            int result = 0;
            int len = CHRS.Length;
            int index = 0;
            var chars = s.ToCharArray();
            foreach (char c in chars.Reverse().Where(x => x != '-')) {
                int pow = (int)Math.Pow(len, index);
                int ind = CHRS.IndexOf(c);
                result += pow * ind;
                index++;
            }
            if (chars.First() == '-') {
                result *= -1;
            }
            return result;
        }

        internal static void Test() {
            int max = int.MaxValue;
            string com = max.ToShortenedString();
            int unc = com.ToUncompressedInteger();
            double x = (double)com.Length / (double)max.ToString(CultureInfo.InvariantCulture).Length;
            Debug.WriteLine("Max Integer: {0}", max);
            Debug.WriteLine("Compressed: {0}", com);
            Debug.WriteLine("Uncompressed: {0}", unc);
            Debug.WriteLine("Compression: {0:P}", 1 - x);
        }
    }
}

Great piece of code! But why, when using 95 characters, is it called Base93?

It is called base93 because each "digit" can be comprised of any one of 93 characters. As described in the post, two of the visible 95 characters are reserved for negation ("-") and a decimal (".") indicator. If the goal was to represent ONLY positive integers then it would be possible to use all 95 ASCII characters, and hence, you could encode in base 95. Thanks for the comment!
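The same base-93 scheme translates directly to Python. This sketch mirrors the C# extension methods above, using the identical 93-character alphabet but divmod instead of the power loop (and handling zero explicitly, which the C# version does not):

```python
# Same 93-character alphabet as the C# CHRS constant above.
CHRS = ("0123456789abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        " !?\"'`^#$%@&*+=/.:;|\\_<>[]{}()~")

def to_shortened_string(num):
    """Encode an integer as a base-93 string, with a '-' prefix for negatives."""
    if num == 0:
        return CHRS[0]
    sign = "-" if num < 0 else ""
    num = abs(num)
    digits = []
    while num:
        num, rem = divmod(num, len(CHRS))   # peel off base-93 digits
        digits.append(CHRS[rem])
    return sign + "".join(reversed(digits))

def to_uncompressed_integer(s):
    """Decode a base-93 string produced by to_shortened_string."""
    negative = s.startswith("-")
    result = 0
    for c in s.lstrip("-"):
        result = result * len(CHRS) + CHRS.index(c)
    return -result if negative else result
```

For the xmin value from the example extent, to_shortened_string(-15438951) gives "-ji5l", matching the shortened extent shown in the post.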
http://kiwigis.blogspot.com/2013/09/base-93-integer-shortening-in-c.html
Metabase

You can use the Pulumi Metabase Component to help you get started quickly running Metabase in the cloud. Currently this Component only supports AWS. The code below will show you examples of each resource supported in this Component, but please refer to the API Docs for more detailed descriptions and information about each resource.

Quick Start

The following steps will get you up and running with Metabase on AWS with very little effort. Once you have completed the steps you will have an RDS Database, Metabase running in a Fargate Task, and a Load Balancer.

Configure Environment

Before you get started using Pulumi, let's run through a few quick steps to ensure your environment is set up correctly.

Install Pulumi

Install Pulumi on macOS through Homebrew:

$ brew install pulumi/tap

Depending on the language you choose, you will also need its runtime:

- Python: version 3.6 or later. To reduce potential issues with setting up your Python environment on Windows or macOS, you should install Python through the official Python installer. pip is required to install dependencies. If you installed Python from source, with an installer from python.org, or via Homebrew you should already have pip. If Python is installed using your OS package manager, you may have to install pip separately; see Installing pip/setuptools/wheel with Linux Package Managers. For example, on Debian/Ubuntu you must run sudo apt install python3-venv python3-pip. If you're having trouble setting up Python on your machine, see the Python 3 Installation & Setup Guide for detailed installation instructions on various operating systems and distributions.
- Go: Pulumi requires Go 1.16 or later. If you're using Linux, your distribution may not provide an up-to-date version of the Go compiler. To check what version of Go you have installed, use: go version.
- .NET: Pulumi will need the dotnet executable in order to build and run your Pulumi .NET application. Ensure that the dotnet executable can be found on your path after installation.
Configure Pulumi to access your AWS account

Pulumi requires cloud credentials to manage and provision resources. You must use an IAM user account that has Programmatic access with rights to deploy and manage resources handled through Pulumi.

Create New Project

Now that you have set up your environment by installing Pulumi, installing your preferred language runtime, and configuring your AWS credentials, let's create your first Pulumi program.

$ mkdir metabase-quickstart && cd metabase-quickstart
$ pulumi new aws-typescript

$ mkdir metabase-quickstart && cd metabase-quickstart
$ pulumi new aws-python

# from within your $GOPATH
$ mkdir metabase-quickstart && cd metabase-quickstart
$ pulumi new aws-go

$ mkdir metabase-quickstart && cd metabase-quickstart
$ pulumi new aws-csharp

$ mkdir metabase-quickstart && cd metabase-quickstart
$ pulumi new aws-yaml

The pulumi new command creates a new Pulumi project with some basic scaffolding based on the cloud and language specified.

If this is your first time running pulumi new or other pulumi commands, you may be prompted to log in to the Pulumi Service. The Pulumi CLI and Pulumi Service work in tandem to deliver a reliable experience. It's free for individual use, with features available for teams, and self-managed options are also available. Hitting Enter at the prompt opens a browser for you to sign in.

After some dependency installations from npm, your project and stack will be ready.

Install Metabase Component

Next you will need to install the Metabase Component so you can use it in your program.

Yarn:
$ yarn add @pulumi/metabase

NPM:
$ npm install @pulumi/metabase

After the command completes, the project and stack will be ready.

Install Metabase Component

Next you will need to install the Metabase Component so you can use it in your program:

$ pip3 install pulumi_metabase

After the command completes, the project and stack will be ready.
Install Metabase Component

Next you will need to install the Metabase Component so you can use it in your program:

$ go get -u github.com/pulumi/pulumi-metabase/sdk

After the command completes, the project and stack will be ready.

Install Metabase Component

Next you will need to install the Metabase Component so you can use it in your program:

$ dotnet add package Pulumi.Metabase

Update Code

Now that you have all your dependencies installed and your project configured, you can now add the code that will provision your Metabase Service.

Replace your index.ts with the following:

import * as pulumi from "@pulumi/pulumi";
import * as metabase from "@pulumi/metabase";

const metabaseService = new metabase.Metabase("metabaseService", {});
export const url = metabaseService.dnsName;

Replace your __main__.py with the following:

import pulumi
import pulumi_metabase as metabase

metabase_service = metabase.Metabase("metabaseService")
pulumi.export("url", metabase_service.dns_name)

Replace your main.go with the following:

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        metabaseService, err := metabase.NewMetabase(ctx, "metabaseService", nil)
        if err != nil {
            return err
        }
        ctx.Export("url", metabaseService.DnsName)
        return nil
    })
}

Replace your Program.cs with the following:

using Pulumi;
using Metabase = Pulumi.Metabase;

await Deployment.RunAsync(() =>
{
    var metabaseService = new Metabase.Metabase("metabaseService");

    return new Dictionary<string, object?>
    {
        ["url"] = metabaseService.DnsName
    };
});

Replace your Pulumi.yaml with the following:

name: metabase-yaml
runtime: yaml
resources:
  metabaseService:
    type: "metabase:index:Metabase"
outputs:
  url: ${metabaseService.dnsName}

Deploy

Once you have updated your code you are ready to deploy your Metabase Component. To do so, just run the following command:

$ pulumi up

First Pulumi will perform a preview showing you exactly what will be created. Once the preview is complete, Pulumi will ask you if you want to continue. Select yes to proceed to actually provisioning the service. All the resources will take a few minutes to fully provision.
Once the update has completed it is likely it will take a few more minutes for the Metabase task to finish provisioning and start accepting traffic.

(Optional) Destroy

You can destroy all the resources by running pulumi destroy. This will ultimately delete your Metabase Service's database, so you will lose all stored data and have to start from scratch if you provision a new Metabase service.

Full Examples

Below you will find complete examples (all arguments supplied) for the Metabase Components.

import * as pulumi from "@pulumi/pulumi";
import * as metabase from "@pulumi/metabase";

const metabaseService = new metabase.Metabase("metabaseService", {
    vpcId: "vpc-123",
    networking: {
        ecsSubnetIds: ["subnet-123", "subnet-456"],
        dbSubnetIds: ["subnet-789", "subnet-abc"],
        lbSubnetIds: ["subnet-def", "subnet-ghi"],
    },
    domain: {
        hostedZoneName: "example.com",
        domainName: "metabase.example.com",
    },
});
export const url = metabaseService.dnsName;

import pulumi
import pulumi_metabase as metabase

metabase_service = metabase.Metabase("metabaseService",
    vpc_id="vpc-123",
    networking=metabase.NetworkingArgs(
        ecs_subnet_ids=[
            "subnet-123",
            "subnet-456",
        ],
        db_subnet_ids=[
            "subnet-789",
            "subnet-abc",
        ],
        lb_subnet_ids=[
            "subnet-def",
            "subnet-ghi",
        ],
    ),
    domain=metabase.CustomDomainArgs(
        hosted_zone_name="example.com",
        domain_name="metabase.example.com",
    ))
pulumi.export("url", metabase_service.dns_name)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        metabaseService, err := metabase.NewMetabase(ctx, "metabaseService", &metabase.MetabaseArgs{
            VpcId: pulumi.String("vpc-123"),
            Networking: &metabase.NetworkingArgs{
                EcsSubnetIds: pulumi.StringArray{
                    pulumi.String("subnet-123"),
                    pulumi.String("subnet-456"),
                },
                DbSubnetIds: pulumi.StringArray{
                    pulumi.String("subnet-789"),
                    pulumi.String("subnet-abc"),
                },
                LbSubnetIds: pulumi.StringArray{
                    pulumi.String("subnet-def"),
                    pulumi.String("subnet-ghi"),
                },
            },
            Domain: &metabase.CustomDomainArgs{
                HostedZoneName: pulumi.String("example.com"),
                DomainName:     pulumi.String("metabase.example.com"),
            },
        })
        if err != nil {
            return err
        }
        ctx.Export("url", metabaseService.DnsName)
        return nil
    })
}

using Pulumi;
using Metabase = Pulumi.Metabase;

await Deployment.RunAsync(() =>
{
    var metabaseService = new Metabase.Metabase("metabaseService", new Metabase.MetabaseArgs
    {
        VpcId = "vpc-123",
        Networking = new Metabase.Inputs.NetworkingArgs
        {
            EcsSubnetIds =
            {
                "subnet-123",
                "subnet-456",
            },
            DbSubnetIds =
            {
                "subnet-789",
                "subnet-abc",
            },
            LbSubnetIds =
            {
                "subnet-def",
                "subnet-ghi",
            },
        },
        Domain = new Metabase.Inputs.CustomDomainArgs
        {
            HostedZoneName = "example.com",
            DomainName = "metabase.example.com",
        },
    });

    return new Dictionary<string, object?>
    {
        ["url"] = metabaseService.DnsName
    };
});

name: metabase-yaml
runtime: yaml
resources:
  metabaseService:
    type: "metabase:index:Metabase"
    properties:
      vpcId: "vpc-123"
      networking:
        ecsSubnetIds: ["subnet-123", "subnet-456"]
        dbSubnetIds: ["subnet-789", "subnet-abc"]
        lbSubnetIds: ["subnet-def", "subnet-ghi"]
      domain:
        hostedZoneName: "example.com"
        domainName: "metabase.example.com"
outputs:
  url: ${metabaseService.dnsName}
https://www.pulumi.com/registry/packages/metabase/
Last post, I talked a lot about what a model can do for you. But I think it's also important to talk about the proper limits of a model, and what happens at the edges. … Another huge factor was that we weren't building a standalone product, but a part of the Windows platform, the System.Workflow namespace in WinFX. That meant very high standards for consistency with the rest of WinFX, approachability, conceptual clarity, naming, and intelligibility. The WinFX guys (including Brad Abrams and Krzysztof Cwalina) … WWF SDK and create a Sequential Workflow Console Application project or a State Machine Console Application project, and you'll see. … You can give us your feedback here. Be sure we will be listening closely. Workflow. Expressiveness. Execution. Monitoring. Transformation.
http://blogs.msdn.com/davegreen/
hello, i have a problem with the image.. excuse my low-level english but i'm italian.. can i display a full screen image??

Buongiorno! I think you want Canvas.setFullScreenMode() in the JavaDocs. Note that this only works on devices with MIDP2. Cheers, Graham.

Hi, there can be two queries related to your post:

@ Are you talking about drawing the image in full screen? If you want to display the image in full screen then you have to take that image at the screen size of the mobile, then just use:

Graphics.drawImage(image_name, 0, 0, 0);

Then your image will be in full screen. For more queries, write here.

@ Are you talking about showing the full screen? I mean to say that by default the device excludes some pixels and does not display the complete screen. In this case you can use:

setFullScreenMode(true);

I hope that these lines can help you in any means,
Thanks,

Last edited by raj_J2ME; 2008-12-02 at 12:40. Reason: spelling

thanks, however i should make full screen only the image and not the other objects... will exitcommand and okcommand continue to operate? i use s60 edition... nokia n81

Hi,
That means you want to draw the image as the full screen, correct?
If you look at the link, you will find that the screen size is 240x320. ok
@ Then take an image of the same size, i.e. 240x320
@ Then write the lines as -

    int screen_width = 240;
    int screen_height = 320;

then
@ write this line

    setFullScreenMode(true);

@ Then draw your image

    g.drawImage(image_name, 0, 0, Graphics.TOP | Graphics.LEFT);

That's all you have to do. Are these lines helpful to you, Thanks,

but i extended MIDlet and not Canvas

You will need at least two classes... a MIDlet and a Canvas.

Hi Morbidick, Well, for running the mobile application you have to define (in a custom application) the two classes, as follows (though a simple demo can be run through a midlet only):
@ class AnyClass extends MIDlet { }
@ class AnyClass1 extends Canvas { }
All the information about the midlet class and its purpose can be found here.

What is a MIDlet? The Mobile Information Device Profile (MIDP) is a set of Java APIs targeted at mobile information devices such as mobile phones and entry-level palmtop devices. A MIDlet is a MIDP application. Throughout this article, the terms MIDlet and MID application are used interchangeably. MIDlets form the building blocks of the Java 2 Platform, Micro Edition (J2ME) runtime environment. The MIDlet is designed to be run and controlled by the application manager in the K Virtual Machine (KVM), a stripped-down version of the Java Virtual Machine designed to run on mobile devices. The javax.microedition.midlet.MIDlet class acts as an interface between the MIDlet and the application manager. The methods of this class allow the application manager to create, start, pause, and destroy a MIDlet. J2ME applications must extend the javax.microedition.midlet.MIDlet class, which provides a framework for the following actions: * To allow the application manager to control the MIDlet by notifying and requesting MIDlet state changes.
* To allow the MIDlet to retrieve properties from the application descriptor, a registry of applications maintained by the application manager. MIDlet states A MIDlet can be in various states in its lifetime. These states allow the application manager to manage the activities of multiple MIDlets within a runtime environment. It can select which MIDlets are active at a given time by starting and pausing them individually. The application manager also maintains the state of the MIDlet. A MIDlet can be in any of the following states: * Active: The MIDlet enters an active state at startup and may acquire some resources. * Paused: In the paused state, the MIDlet releases shared resources and becomes quiescent. * Destroy: This is the MIDlet's termination phase. The terminating MIDlet must release all resources and save any persistent state with the application manager. State implementations: Moving from state to state The application manager invokes certain methods on the MIDlet to change states. The MIDlet implements these methods to update its internal activities and resource usage as directed by the application manager: * startApp(): This method signals the MIDlet that it has entered an active state. It consists of the initialization procedures for setting up of an interactive display environment for the MIDlet. * pauseApp(): This method signals the MIDlet to stop and enter the paused state. * destroyApp(): This method signals the MIDlet to terminate and enter the destroyed state. The MIDlet can initiate some state changes itself and notifies the application manager of those state changes by invoking one of these methods: * notifyDestroyed(): This method is used by a MIDlet to notify the application manager that it has entered into the destroyed state. * notifyPaused(): This method notifies the application manager that the MIDlet does not want to be active and has entered the paused state. 
* resumeRequest(): This method provides a MIDlet with a mechanism to indicate that it is interested in entering the active state. Calls to this method can be used by the application manager to determine which applications to move to the active state. When the application manager decides to activate a MIDlet, it will call that MIDlet's startApp() method. The MIDlet is provided with a mechanism to retrieve named properties from the application manager: its getAppProperty() method. These properties, provided as part of the MIDlet deployment, are retrieved from the combination of the application descriptor file and the manifest. If a runtime exception occurs during startApp() or pauseApp(), the MIDlet will be destroyed immediately.

Hi,
@ Without the canvas, the demo is as follows:

    import javax.microedition.lcdui.Display;
    import javax.microedition.lcdui.Form;
    import javax.microedition.midlet.MIDlet;
    import javax.microedition.midlet.MIDletStateChangeException;

    public class HelloJ2ME_a1 extends MIDlet {
        private Display mDisplay;
        private Form mForm;

        protected void startApp() throws MIDletStateChangeException {
            if (mForm == null) {
                // Create the Form that will hold the message (think of it as a JFrame)
                mForm = new Form("Hello J2ME !!!");
                // Get a Display for this MIDlet
                mDisplay = Display.getDisplay(this);
            }
            // Set the Form as the current screen on the Display
            mDisplay.setCurrent(mForm);
        }

        protected void pauseApp() { }

        protected void destroyApp(boolean arg0) throws MIDletStateChangeException {
            // Notify the JVM that the MIDlet has been destroyed (the garbage collector is invoked)
            this.notifyDestroyed();
        }
    }

@ With the canvas and the midlet: go through the link, where you will find the code, and then you will be able to understand it. For more queries you can ask again, Thanks,
http://developer.nokia.com/community/discussion/showthread.php/152035-image-full-screen
. I think Dare misses the point. To me, using "CSS class names for aggregation purposes" means embedding the structure of your content in your page, so that it can be parsed by an aggregator -- or indeed transformed into any alternative message/publishing/interaction format. I use DIV ids to mark up the body, title and permalinks of my weblogs, and then I have a tool that parses them to create a) my RSS feed b) some Javascript indexes c) extracts to build my home page d) the HTML for updating my archive index page every week. All this multipurposed in a loosely coupled way from one XHTML source. The only pity is that it's a bit of a bodge at the moment, and the id="rssi0" labels are not very expressive. So I'd welcome any suggestions for progressing this concept to make it a bit more robust. Posted by Phil Wainewright at I was about to post a detailed response but noticed Mark Pilgrim has already beaten me to the punch with which eloquently points out most of the issues I saw with the XHTML-as-syndication-format proposal. The criticisms I have left that Mark doesn't bring up are probably not politically correct anyway given my employer. ;) Posted by Dare Obasanjo at Re: That blog-that-is-also-valid-RSS (which isn't really valid RSS, it declares its version as 0.92 but uses namespaces, but the validator isn't that sophisticated). Anyway, the page doesn't display properly in Opera (latest stable version, appears unstyled), and doesn't display at all in Lynx (latest stable version, offers to download) or my copy of IE 5.5 (offers to download, although this may just be a symptom of an ongoing problem I have with my copy of IE). Works in Mozilla, though. Woohoo. It's an interesting thought experiment, but I don't think it's proving the point you thought it was proving. Posted by Mark at Mark, I guess that depends on what point you think I was trying to prove. In the nirvana that is the distant future, there really is no need for XHTML. All that is needed is XML + CSS. 
I don't believe that we can get there in one step, but meanwhile I believe in encouraging things that are helpful along the way (like supporting namespaces, and ignoring elements that one doesn't understand). Posted by Sam Ruby at Sam, I wish I could believe in that future. I really do. My life would be much simpler. My day job is currently consumed by a project that has me debugging nested tables (of my own devising!) because our client has clients that are still using the dreaded Netscape 4 and our client is not in a position to force them to upgrade, AND they are of the mind that everything should look exactly the same in all browsers. (Ever wonder why my own site is getting more and more stripped down and "semantically pure"? Classic overcompensation behavior.) So we might get there eventually, but frankly I doubt it. Technologies don't evolve forever; they go a certain distance until they hit diminishing returns and lose the attention of a critical mass of end users (to push for innovations) and developers (to deliver them), and then they just stagnate. We in the blogging community are all in love with the web now, but most people just take it for granted, it more or less does what they want it to do, they've already been lowering their expectations for years, and future attention will be spent elsewhere. How does pure XML + CSS benefit end users? I don't see a compelling reason for them to upgrade. Embedding other XML datatypes? MathML? SVG? RDF? Please. Reduced bandwidth? Minimal, again you're hitting diminishing returns, your limited time and resources are better spent elsewhere (like implementing mod_gzip). My site is readable in Netscape 4 and Lynx. Hell, it's readable in Netscape 1 and Mosaic. It's unstyled but it's there. That text/xml blog is not. It defines a cutoff point for basic functionality (display), and sets it too high. And once XHTML 2.0 comes down from on high, old browsers won't even understand links. LINKS!
The single thing the web has going for it, and they changed the semantics of links. (In XHTML 2.0, anything can have an HREF attribute; you don't need a separate tag.) How long will it be before we can safely use *that* little nugget of a standard? In five years, will we all be arguing about supporting legacy browsers like IE 6 and Netscape 7? No, the entire world will have moved on and the field will have stagnated. "HTML needs a RANT tag." --Alan Cox Posted by Mark at HTML is the 3270 datastream of the late 20th century. There is no problem that can't be solved by a smart individual and a little screen scraping. Posted by Sam Ruby at While XML + CSS would yield poor results in older browsers and would possibly yield bad effects in even recent browsers for certain purposes - wouldn't XML + XSLT provide a better answer, as it would actually transform the data for its intended purpose - posting on the web, syndication, etc? Posted by Lou at Look at . I did it some months ago. I wanted RSS to be the place where my stories get placed. But the same XML has a reference to an XSL filter that makes the browser turn the tags into a beautiful webpage. Posted by Alfonso Sanchez at It's gonna be a grossly big redundant overhead if you start mixing a lot of different applications. At this moment (2003-Jan) it's probably better to use server-side XSL Transformations, or to manually write separate documents. *snuff* Personally, we're still actively waiting for the user-side software revolution, that will catch up with the latest and most current standards. Posted by Stanislaw Arkadiusz at I think using XHTML as a syndication format is a dumb idea. Then again I think XHTML was and still is a dumb idea, but this idea seems even more ill-considered. PS: Before anyone claims this is some Microsoft opinion. It isn't.
Tantek is our HTML working group rep and he is all for this given my reading of his blog plus his opinion counts a whole lot more than mine in the grand scheme of things at least in that space. Posted by Dare Obasanjo at
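The XML-plus-XSL approach Alfonso and Lou describe above hinges on an xml-stylesheet processing instruction that points the browser at a transform; a minimal sketch (the file name weblog.xsl and the feed contents are made up for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="weblog.xsl"?>
<!-- The browser fetches weblog.xsl and renders the transformed result;
     an aggregator ignores the PI and reads the RSS as-is. -->
<rss version="0.92">
  <channel>
    <title>Example weblog</title>
    <item>
      <title>First post</title>
      <description>Rendered as HTML by the stylesheet, served as XML to aggregators.</description>
    </item>
  </channel>
</rss>
```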
http://www.intertwingly.net/blog/977.html
As you've already learned, XAML and WPF are separate, albeit complementary, technologies. As a result, it's quite possible to create a WPF application that doesn't use the faintest bit of XAML. Altogether, there are three distinct coding styles that you can use to create a WPF application:

- Code-only. This is the traditional approach used in Visual Studio for Windows Forms applications. It generates a user interface through code statements.
- Code and uncompiled markup (XAML). This is a specialized approach that makes sense in certain scenarios where you need highly dynamic user interfaces. You load part of the user interface from a XAML file at runtime using the XamlReader class from the System.Windows.Markup namespace.
...

(From Pro WPF in C# 2008: Windows Presentation Foundation with .NET 3.5, Second Edition.)
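The second style, loading uncompiled markup at runtime, can be sketched roughly as follows; this is an illustrative reconstruction, not the book's listing, and the file name DynamicUi.xaml is hypothetical. XamlReader.Load parses the markup and returns the root element as an object:

```csharp
using System.IO;
using System.Windows;
using System.Windows.Markup;

public class DynamicWindow : Window
{
    public DynamicWindow()
    {
        // Open a page of uncompiled XAML at runtime (hypothetical file name).
        using (FileStream fs = new FileStream("DynamicUi.xaml", FileMode.Open))
        {
            // XamlReader.Load builds the object tree described by the markup.
            object root = XamlReader.Load(fs);
            this.Content = root;
        }
    }
}
```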
https://www.oreilly.com/library/view/pro-wpf-in/9781590599556/9781590599556_loading_and_compiling_xaml.html
Table of Contents: Introduction The unix: module provides access to features that only make sense on UNIX-like operating systems, such as Linux, FreeBSD, and macOS. On non-UNIX operating systems, such as MS Windows, this namespace does not exist and use unix will fail. Use the $platform:is-unix variable to determine if this namespace is usable. Variables $unix:umask The file mode creation mask. Its value is a string in Elvish octal representation, e.g. 0o027. This makes it possible to use it in any context that expects a $number. When assigning a new value, a string is implicitly treated as an octal number. If that fails, the usual rules for interpreting numbers are used. The following are equivalent: unix:umask = 027 and unix:umask = 0o27. You can also assign to it a float64 value that has no fractional component. The assigned value must be within the range [0 … 0o777]; otherwise the assignment will throw an exception. You can do a temporary assignment to affect a single command, e.g. unix:umask=077 touch a_file. After the command completes, the old umask will be restored. Warning: Since the umask applies to the entire process, not individual threads, changing it temporarily in this manner is dangerous if you are doing anything in parallel, such as via the peach command.
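As a point of comparison only (this is POSIX sh, not Elvish syntax), the temporary-assignment idiom behaves much like scoping umask inside a subshell:

```shell
# Create one file with a restrictive umask, without changing the shell's own mask.
dir=$(mktemp -d)
( umask 077; touch "$dir/private_file" )   # the umask change dies with the subshell
ls -l "$dir/private_file" | cut -c1-10     # prints -rw-------
rm -rf "$dir"
```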
https://elv.sh/ref/unix.html
py2exe (0.50), python (2.3.3): running py2exe on a script, no errors thrown and an executable is made. Executable does not work - it runs but no results occur and no errors are thrown.

irishtek, A few suggestions in no particular order:

1. Try building the app as a console application and run it from an already open command window. See if anything is printed.

2. Sprinkle a few print statements in the main module to confirm it's actually running:

    if __name__ == "__main__":
        print "Starting..."
        # your code
        print "Stopping"

3. Check the build logs - I use a modified standard setup script like this to write a log file:

    # setup.py
    # Requires py2exe has been installed that matches python version
    # Run this program "python release.py py2exe" to create an executable + dlls found in
    # c:/python21/dist/gscript. see for details.
    from distutils.core import setup
    import py2exe
    import glob
    import sys

    rd = "c:\\tester"
    f = open(rd + '\\pybuild.log', 'w')
    sys.stdout = f
    setup(
        name="GSCRIPT",
        windows=[{"script": rd + "\\gscript.pyw",
                  "icon_resources": [(1, rd + "\\g.ico")]}],
        data_files=[
            (".", ['gextension.py', 'pypdu.py', 'g.ico', 'w9xpopen.exe',
                   "GSCRIPT.htm", 'script.template', 'calldll.pyd',
                   'DLPORTIO.dll', 'DLPORTIO.sys'] + glob.glob("*.gsc")),
            ("GSCRIPT_files", glob.glob(rd + "\\GSCRIPT_files\\*.gif")
                            + glob.glob(rd + "\\GSCRIPT_files\\*.jpg")
                            + glob.glob(rd + "\\GSCRIPT_files\\*.png")),
            (r"ftp", glob.glob(rd + "\\ftp\\*.*")),
        ],
    )
    f.close()

I then use a batch file (note the argument order: the script comes first, then the py2exe command):

    c:\python23\python setup.py py2exe
    notepad pybuild.log

4. Make sure you don't just pass exceptions.

Let us know how you get on. not so Grim

it appears to be an issue with the image library.
If compiled and executed from DOS I see the following error:

    Starting...
    Traceback (most recent call last):
      File "<string>", line 164, in ?
      File "<string>", line 129, in MakeText
      File "Image.pyc", line 1571, in open
    IOError: cannot identify image file

Works fine as a py script - fails as an exe.

In order to make PIL and PY2EXE work together, all libraries of PIL need to be imported, such as:

    import Image
    import BmpImagePlugin

also the py2exe line needs to be:

    python setup.py py2exe -pPIL
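The stdout-redirection trick the setup script above uses for its build log can be sketched in isolation; the log location below is arbitrary, and this is written in Python 3 syntax rather than the Python 2.3 of the thread:

```python
import os
import sys
import tempfile

# Send everything printed to a log file, then restore stdout -
# the same pattern the setup.py above uses for pybuild.log.
log_path = os.path.join(tempfile.gettempdir(), "pybuild.log")
log = open(log_path, "w")
old_stdout = sys.stdout
sys.stdout = log
try:
    print("Starting...")  # lands in the log, not the console
    print("Stopping")
finally:
    sys.stdout = old_stdout
    log.close()

with open(log_path) as f:
    contents = f.read()
print(contents.strip())  # the two logged lines
```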
http://forums.devshed.com/python-programming/120330-py2exe-last-post.html
Introduction: Today, I will show you how to develop an ASP.NET MVC 5 web application with multi-platform UI support (a Mobile View and a Desktop View). Mobile View vs. Desktop View: The user interface for a mobile view is built very differently from a desktop view, and ASP.NET MVC can help us serve both using the 51Degrees.mobi library package. Description: The start page of this application displays all cricket players' achievements, and another page filters the achievements based on the selected player name. I have two types of UI views: a Desktop View and a Mobile View. We use static data to display on the mobile-specific views. Since we have two pages with the same information, we have to serve a particular view based on the platform and browser: if the request comes from a smartphone browser, the mobile view is returned; otherwise the application serves the normal view for a desktop/laptop browser. To detect the platform and browser a request comes from on the server, I used 51Degrees.mobi. To run the application in mobile view, we need a mobile emulator, or a browser that can behave like a mobile browser; for this purpose, I am using the Mozilla Firefox browser to test both kinds of views. Process to develop this application using ASP.NET MVC 5. Step 1: Select the "Internet Application" project template. Visual Studio 2013 adds the SatyajQueryMobileMvc project to the solution, as shown in the screenshot. Step 2: Install 51Degrees.mobi. 51Degrees.mobi is a device-detection library package; by using it, device data, accurate screen sizes, and model information are all available to the application. Click on "Tools" in the menu bar, go to "Library Package Manager", and then choose the "Manage NuGet Packages for Solution" option, as shown in the screenshot below. Type 51Degrees.mobi in the search textbox, and the "Install" option will show up; download and install it.
Now, you will be able to see the "51Degrees.config" file in your project. The package also creates a "Mobile" folder in the application; I delete that unnecessary folder, since I keep the mobile view in the Views folder only. We also have to comment out some code in 51Degrees.config, as it would otherwise redirect requests from mobiles to "~/mobile/default.aspx". The same installation can be done a second way: go to the top menu "TOOLS > Library Package Manager > Package Manager Console", which opens a console window where you run the install command. Please visit the site here to learn how it works. Step 3: Working with Models. Create a model class file named "Player.cs". Two entities, PlayerId and PlayerName, are declared publicly so they can be accessed from other class files. Then, create a model class file named "Achieve.cs"; the entities AchieveId, AchieveName, AchieveDescription, and PlayerAchieve are declared publicly there for the same reason. Then create a model class file named "DataRepository.cs". In DataRepository.cs, we use the Player.cs and Achieve.cs class references and their entities to insert the data. Player is used as a generic list class to add some records for filtering by the entities defined in that class. Here, GetPlayers() is a user-defined function that delivers these records to the views, so data can be filtered by the selected record. The Achieve.cs class uses its entities to hold the record details that correspond to values of the entities defined in Player.cs; the Player class reference is used for accurate filtering of the data. For example, MS DHONI is a player in Player.cs and is shown in the dropdown list, and his records are shown as defined in Achieve.cs. GetAchieves is the user-defined function in the Achieve.cs class file that shows record details based on the value selected via the GetPlayers function in Player.cs.
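The article's code listings are not reproduced here, so the following is a reconstruction of the two model classes from the entity names given above; the property types are assumptions:

```csharp
// Player.cs - entities named in the text: PlayerId, PlayerName.
public class Player
{
    public int PlayerId { get; set; }
    public string PlayerName { get; set; }
}

// Achieve.cs - entities named in the text: AchieveId, AchieveName,
// AchieveDescription, PlayerAchieve (assumed to reference a Player).
public class Achieve
{
    public int AchieveId { get; set; }
    public string AchieveName { get; set; }
    public string AchieveDescription { get; set; }
    public Player PlayerAchieve { get; set; }
}
```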
Step 4: Working with the "PlayerModels" folder. Create a new folder named "PlayerModels", then create a class file named "PlayerAchieve.cs". Here, I use two namespaces to access the model class files: the Player, Achieve, and DataRepository classes. This file holds the generic list classes, with properties for later use by the dropdown list and the details list in the views. Step 5: Working with Controllers. Create a controller class file named "AchieveController.cs". Here I use two namespaces to access the Models and PlayerModels class files, that is, the Player, Achieve, DataRepository, and PlayerAchieve classes and their respective entities. An instance of the DataRepository class named "_repository" is created using the new keyword. Two important controller action methods are defined here. In the AllAchieves action method, the data details in the DataRepository class file are fetched using the Achieve class method called GetAchieves. In the PlayersWiseAchieves action method, the dropdown list data and details data are fetched using the PlayerAchieve class and the classes it references, that is, the Player and Achieve classes' GetPlayers and GetAchieves methods, again reading from the DataRepository class file. In the end, each action returns a View, passing the corresponding class object as the view model. Step 6: Working with Views. Create a view cshtml file named "AllAchieves.cshtml" in the Achieve folder of the Views folder; this will be the start page view of this application. We have to add a namespace to access the Achieve class data details. I use a foreach loop to show the Achieve Name and Achieve Description from the Achieve class, and an ActionLink that redirects to the PlayersWiseAchieves view, which shows data according to the selection parameter in the dropdown list.
Again, create a view cshtml file named "PlayersWiseAchieves.cshtml" in the Achieve folder of the Views folder. We have to add a namespace to access the PlayerAchieve class references, that is, the Player and Achieve class details: @model SatyajQueryMobileMvc.PlayerModels.PlayerAchieve For this view, the title is set with: ViewBag.Title = "Players Wise Achieves"; We add the player names to the dropdown list with: SelectList cat = new SelectList(Model.AllPlayers, "PlayerId", "PlayerName"); Here AllPlayers is the Player generic list object in the PlayerAchieve class file, and PlayerId and PlayerName are the entities in the Player class file. The SelectList class represents a list that lets the user select one item; cat is the SelectList object, and the dropdown list is built from it to show the player names. SelectedPlayerId is the int variable defined in the PlayerAchieve class in the PlayerModels folder. I designed one table to show the data details, and its id has to be passed into the jQuery script, which is responsible for showing the data. In the first section I pass SelectedPlayerId to get the data details based on the dropdown selection. The Json helper class provides methods to work with data; Encode converts data objects to string format: Json.Encode(Model.AllAchieves) The PlayerAchieve class object value (Model.AllAchieves) is passed to Encode. The table id defined earlier is then used to build the view with a heading and details. This part determines how the data details appear as the dropdown selection changes: when a player name from the Player class is selected, the achieve name and achieve description from the Achieve class are shown, using the methods and generic list class properties of the PlayerAchieve class. If there is data, the table whose id is referenced in the jQuery script tag shows it in the proper format, with the heading and data details built from a combination of headers and elements.
Then I used one ActionLink to redirect to the home/start page, that is, the "AllAchieves.cshtml" view page: @Html.ActionLink("Go To Achieve Home Page", "AllAchieves", "Achieve", new { style="font-size:20px;color:red;" }) Step 7: Working with the Shared folder (Layout.cshtml). The title tag is defined here to append some text to whatever title is defined in both AllAchieves.cshtml and PlayersWiseAchieves.cshtml: ViewBag.Title = "All Players Achieves"; <title>@ViewBag.Title - SATYAPRAKASH SAMANTARAY</title> So the browser title tab output for AllAchieves.cshtml must be: All Players Achieves - SATYAPRAKASH SAMANTARAY, and for PlayersWiseAchieves.cshtml: Players Wise Achieves - SATYAPRAKASH SAMANTARAY. As per your requirements, you can customize the CSS content via this reference path: @Styles.Render("~/Content/css") The Scripts line is very important for the jQuery script defined in the PlayersWiseAchieves.cshtml view file: @Scripts.Render("~/bundles/jquery") @*If this is commented out, Players Wise Achieves will not show player details after selection of a specific player name.*@ Without this line the jQuery script will not work, and no output will be shown even after selecting a player name in the dropdown list. The layout then renders the portion of the content page inside the body tag. Step 8: Working with 51Degrees.mobi. After installing 51Degrees.mobi, how do you find all the installation-related files, such as the DLL file? Follow the images below for reference. Picture 1 Picture 2 Picture 3 Picture 4 Step 9: Set the start page at first page load. Here the controller name is Achieve, and the controller action method / view name is AllAchieves. Step 10: How to configure ASP.NET MVC for a mobile-specific view. Important code reference: 51Degrees.config, which can be modified later for further requirements. OUTPUT: The URL is -. Click on the "Go To Players Wise Achieves" link and it will go to the Players Wise Achieves page.
The URL changes accordingly. The dropdown list shows the player names coming from the Player class and the DataRepository class file. The player-wise achievements page shows the Achievement Name and Achievement Description from the DataRepository class and the Achieve class file, letting the user select a player name from the dropdown list. If you click "Go To Achieve Home Page", you are redirected to the All Achieves home page view; the output is shown below. The output of the different view formats on different platforms, with images, is shown below. MOBILE VIEW FORMAT OF THIS ASP.NET MVC APPLICATION: Mobile View 1, Mobile View 2. TABLET FORMAT OF THIS ASP.NET MVC APPLICATION: Tablet View 1, Tablet View 2. DESKTOP/LAPTOP VIEW FORMAT OF THIS ASP.NET MVC APPLICATION: Desktop/Laptop View 1, Desktop/Laptop View 2. Summary: In this article, we learned the details above.
http://www.c-sharpcorner.com/article/multi-platform-support-using-51degrees-mobi-device-detection-solution-in-asp-net/
Do notation considered harmful

Contents

Criticism

Haskell's do notation is popular and ubiquitous. However we shall not ignore that there are several problems. Here we like to shed some light on aspects you may not have thought about, so far.

Didactics

The do notation hides functional details. This is wanted in order to simplify writing imperative style code fragments. The downside is that, since the do notation is used almost everywhere monads occur, newcomers can come to believe it is required for monadic code. In fact it is not; a record parser, for instance, is often clearer in applicative style:

 data Header = Header Char Int Bool

 readHeader :: Get Header
 readHeader = liftA3 Header get get get

or

 readHeader = Header <$> get <*> get <*> get

Not using monads, along with the do notation, can have advantages. Consider a generator of unique identifiers. First you might think of a State monad which increments a counter each time an identifier is requested.

 run :: State Int a -> a
 run m = evalState m 0

 newId :: State Int Int
 newId =
    do n <- get
       modify succ
       return n

 example :: (Int -> Int -> a) -> a
 example f =
    run $
    do x <- newId
       y <- newId
       return (f x y)

The following is like a Reader monad, where we call local on an incremented counter for each generated identifier. Alternatively you can view it as a Continuation monad. This way users cannot accidentally place a return somewhere in a do block where it has no effect.

Safety

(This page addresses an aspect of Haskell style, which is to some extent a matter of taste. Just pick what you find appropriate for you and ignore the rest.)

With do notation we have kept alive a dark side of the C programming language: the silent discarding of return values. A discarded result might not even be evaluated. The situation is different for IO: while processing the IO, you might still ignore the contained return value. You can write

 do getLine
    putStrLn "text"

and thus silently ignore the result of getLine. The same applies to

 do System.Cmd.system "echo foo >bar"

where you ignore the ExitCode. Is this behaviour wanted? There are possibilities to explicitly ignore return values in safety oriented languages (e.g. EVAL in Modula-3).
Haskell does not need this, because you can already write

 do _ <- System.Cmd.system "echo foo >bar"
    return ()

Writing _ <- should always make you cautious whether ignoring the result is the right thing to do. The possibility for silently ignoring monadic return values is not entirely the fault of the do notation. It would suffice to restrict the type of the (>>) combinator to

 (>>) :: m () -> m a -> m a

This way, you can omit _ <- only if the monadic return value has type ().

New developments:
- GHC since version 6.12 emits a warning when you silently ignore a return value.
- There is a new function called void that makes ignoring of return values explicit: GHC ticket 3292

Happy with less sugar

Additional combinators

Using the infix combinators for writing functions simplifies the addition of new combinators. Consider for instance a monad for random distributions. This monad cannot be an instance of MonadPlus, because there is no mzero (it would be an empty list of events, but their probabilities do not sum up to 1) and mplus is not associative because we have to normalize the sum of probabilities to 1. Thus we cannot use the standard guard for this monad. However we would like to write the following:

 do f <- family
    guard (existsBoy f)
    return f

Given a custom combinator which performs a filtering with subsequent normalization, called

 (>>=?) :: Distribution a -> (a -> Bool) -> Distribution a

we can rewrite this easily:

 family >>=? existsBoy

Note that the (>>=?) combinator introduces the risk of returning an invalid distribution (an empty list of events), but it seems that we have to live with that problem.

Alternative combinators

If you are used to writing monadic functions using the infix combinators (>>) and (>>=), you can easily switch to a different set of combinators. This is useful when there is a monadic structure that does not fit into the current Monad type constructor class, where the monadic result type cannot be constrained. This is e.g. useful for the Set data type, where the element type must have a total order.

Useful applications

It shall be mentioned that the do notation sometimes takes the burden away from you of writing boring things. E.g. in

 getRight :: Either a b -> Maybe b
 getRight y =
    do Right x <- Just y
       return x

a case analysis on y is included, which calls fail if y is not a Right (i.e. it is a Left), and thus returns Nothing in this case. Also the mdo notation proves useful, since it maintains a set of variables for you in a safe manner.

See also
- Paul Hudak in Haskell-Cafe: A regressive view of support for imperative programming in Haskell
- Data.Syntaxfree on Wordpress: Do-notation considered harmful
- Things to avoid#do notation
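The explicit-discard options from the Safety section can be put side by side; a small sketch (this assumes the old System.Cmd module of the era of this page; in newer code the import would come from System.Process):

```haskell
import Control.Monad (void)
import System.Cmd (system)

main :: IO ()
main = do
  -- void makes the discarded ExitCode explicit
  void (system "echo foo >bar")
  -- the wildcard pattern does the same, a little more verbosely
  _ <- system "echo foo >bar"
  return ()
```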
https://wiki.haskell.org/index.php?title=Do_notation_considered_harmful&direction=prev&oldid=64755
CC-MAIN-2022-21
refinedweb
782
62.98
The pre tag displays preformatted text in a fixed-width font. Other than this, it is identical to p. The pre element displays all white space and line breaks exactly as they appear inside the <pre> and </pre> tags. Using this tag, you can insert and reproduce formatted text, preserving its original layout. This tag is frequently used to show code listings, tabulated information, and blocks of text that were created for some text-only medium, such as email messages. XML tags inside the pre tag are still interpreted: since pre is just another type of paragraph, span tags and their variants (b, i, u, a, etc.) may be used to format the text. See the p element for examples of general layout applicable to all paragraphs, including pre.

The text inside the pre tag will keep the same spacing you can see here.

<p>Here is a preformatted block of text.</p>
<pre>
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
</pre>
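To illustrate the inline-formatting point, a short sketch with hypothetical content (the spacing is preserved while the b and i tags still render):

```html
<pre>
Build summary:
  compile   <b>OK</b>
  tests     <i>skipped</i>
</pre>
```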
http://bfo.com/products/report/docs/tags/tags/pre.html
CC-MAIN-2017-43
refinedweb
169
63.7
Dev Diaries - Building a JRPG - Part 9

May 27, 2020

Dad Life

Being a new dad has really made life busy! There is really no book, article, or YouTube video that can serve as a silver bullet on how to care for your new baby. Seriously. The only resource is... yourself, your partner, and the baby. Newborns don't do very much aside from pooping, peeing, eating, and sleeping, so it's easy to forget that they are people too. Your newborn and my newborn are both different, each with their own needs and wants. So, it's just unfair to care for them in a way that's templated through some How-To. It's been a month for me, and wow it has been rough. You really cannot prepare for children. You can buy all these supplies and trinkets for your baby, but nothing can really prepare you and your partner for the baby. Expect to just feel like a shitty parent, and learn from it. It happens in this weird endless loop -- day after day. Just when you feel like you've been able to stay a step ahead of your kid, they throw you a curve ball by changing the way they sleep or eat. It does get better though, and it has for me over the past couple of days. Anyway, here are some things I found out that were specific to our daughter, Riley, that we would never have caught had we not taken a step back to rethink what the hell we were doing: First, get a baby monitor, and just trust it. You'll maintain your sanity this way. Riley had been having lots of fits and diarrhea due to the specific formula we were giving her. Lots of loud crying and hissing would come from her whenever it was "bathroom" time for her. It was terrible. We were using a primarily milk-based formula, but switched to a non milk-based formula, and she has been behaving much better since then. We only get 1 big poop diaper a day now (as opposed to 5)! Building onto the previous point, our daughter just absolutely hates wetting herself. A single drop of pee onto her diaper causes her to throw a rage-fit.
As a result, she had not been able to sleep well. Only about 1 to 1.5 hours at a time... which meant lost sleep not only for her, but for my wife and me too... We did two things here to minimize it after some experimentation:

- My wife started loosening the diapers. Having a snug diaper for our little one just made it more noticeable that she went to pee.
- We started applying a thin layer of Aquaphor to mask the wetness of the diaper whenever urination happened.

The little one has been sleeping better since then. If your newborn starts crying randomly and has just been fed, don't try to feed her immediately again... just check the diaper. If your newborn starts crying, and there's no bad diaper, and she has taken a nap, then feed her! We're trying not to abuse the use of a pacifier. It's bad for the teeth, and we don't want to go through a potentially long process in weaning her off. Some will say to buy a white noise machine; we're not sure if it helps yet, but we've been using it "in case"... Riley seems fine sleeping with or without one. Swaddling? Not too important -- unless your little one kicks herself awake randomly. Newborns aren't as fragile as you think. Riley and I sometimes play football while I gently run to the end zone of the living room. Guess who's the football? She loves this. My daughter also loves Super Junior. I can't sing in Korean to calm her down, unfortunately, but I can definitely sing a Disney tune. It's been fun! Get a standing desk to stay productive. You can carry your newborn in one arm while writing/typing with the other. It's easier to do than sitting. My wife and I both now have standing desks because of this. A happy newborn tends to just stay still while staring into space, anyway. The standing desk we bought is from FlexiSpot, and is the adjustable 29-48" model. Here's a link to the exact one: Since I'm also in school, I tend to just read aloud to Riley, and it helps me reinforce what I'm learning.
For her, it's a good way to be lulled to sleep. Who the heck would be THAT INTERESTED in topics like Euler integration, anyway? Also, even though my daughter wails, growls, and kicks me whenever she gets upset due to being impatient about something, for some reason I always forget about it and love her the same as always. It's crazy -- this is what unconditional love ❤ is, people.

Game Dev!

The scope of our session today is to get started in writing some of the code reflecting the design which I had outlined in the previous article. The goal is to get as far as filling in some of the gaps I had intentionally missed while designing the first version of the Menu System, and to write a couple of unit tests which demonstrate basic functionality. I'm not sure how many sessions this part of the series on developing a Menu System will expand into, but I want to keep each session bite-sized, as Riley has taken up (rightfully so!) a lot of my focus time. Who knows, maybe she'll start writing her own video games soon!

Jrpg.MenuSystem

Today we'll be creating a new project, Jrpg.MenuSystem, within the framework. Of course, as usual, since Unity doesn't support anything newer than .NET Standard 2.0, that is what the project will be based on. We'll create the files which were outlined in the previous post:

- Cursor.cs
- Menu.cs
- MenuContent.cs
- MenuContentImage.cs
- MenuContentMemory.cs
- MenuContentOption.cs
- MenuContentOptionHandler.cs
- MenuContentText.cs
- MenuContentToken.cs
- MenuContentTokenReplacer.cs
- MenuContentType.cs
- MenuSize.cs
- MenuStack.cs
- TilePoint.cs

I'll go through each one in order, and note any changes which may have been made, and any noteworthy things that I may have missed from last time.
Cursor.cs

using System.Collections.Generic;

namespace Jrpg.MenuSystem
{
    public class Cursor
    {
        private Stack<MenuContentMemory> Memory;

        public bool Visible { get; set; }

        public Cursor()
        {
            Memory = new Stack<MenuContentMemory>();
        }

        public void Execute()
        {
            return;
        }

        public MenuContentMemory Peek()
        {
            return Memory.Peek();
        }

        public void Push(MenuContentMemory mcm)
        {
            Memory.Push(mcm);
        }

        public MenuContentMemory Pop()
        {
            return Memory.Pop();
        }
    }
}

The design of this is left relatively unchanged. Again, it's quite simple, and just performs basic stack operations on MenuContentMemory objects.

Menu.cs

using System;
using System.Collections.Generic;
using System.Text;

namespace Jrpg.MenuSystem
{
    public class Menu
    {
        private Dictionary<string, MenuContent> Contents;

        public string Key { get; set; }
        public TilePoint Location { get; set; }
        public MenuSize Size { get; set; }

        public Menu()
        {
            Contents = new Dictionary<string, MenuContent>();
        }

        public void AddContent(MenuContent mc)
        {
            Contents[mc.Key] = mc;
        }

        public MenuContent RemoveContent(string key)
        {
            var content = Contents[key];
            Contents.Remove(key);
            return content;
        }

        public MenuContent GetContent(string key)
        {
            return Contents[key];
        }

        public string DebugRender()
        {
            StringBuilder sb = new StringBuilder();
            int prevY = 0;

            foreach (var mc in Contents.Values)
            {
                for (var i = 0; i < mc.Location.Y - prevY; i++)
                {
                    sb.AppendLine();
                }
                prevY = mc.Location.Y;

                for (var i = 0; i < mc.Location.X; i++)
                {
                    sb.Append(" ");
                }

                switch (mc.Type)
                {
                    case MenuContentType.Text:
                        sb.Append(((MenuContentText)mc).Content);
                        break;
                    case MenuContentType.Option:
                        sb.Append(((MenuContentOption)mc).Content);
                        break;
                    case MenuContentType.Token:
                        ((MenuContentToken)mc).Replace();
                        sb.Append(((MenuContentToken)mc).Content);
                        break;
                    case MenuContentType.Image:
                        sb.Append(((MenuContentImage)mc).Content);
                        break;
                    default:
                        break;
                }
            }

            return sb.ToString();
        }

        public void Render()
        {
            Console.WriteLine($"Rendering {Key}");
        }
    }
}

At its core, Menu is a dictionary-based data structure
housing MenuContent type objects. Most of it is relatively straightforward, and we're not going to touch the Render method today, but one method which I find handy, and which should be discussed, is DebugRender. The DebugRender method will loop through all the active MenuContent instances within the current Menu and invoke custom rendering logic which outputs to the console as text. This is a useful method if we need to debug an individual Menu object in the future. Within the loop, an individual MenuContent object is cast into a more specific type, and custom logic is used to display it through to the console. You can see that the logic for displaying MenuContentType.Token is different from MenuContentType.Text, for example. An example output of DebugRender is demonstrated here, with a prompt and two possible choices:

Hello! Terry Token, is Pie good?
   Yes
   No

MenuContent.cs

using Jrpg.System;
using UnityEngine;

namespace Jrpg.MenuSystem
{
    public abstract class MenuContent
    {
        protected GameStore gameStore;

        public MenuContent(GameStore g)
        {
            gameStore = g;
        }

        public MenuContentType Type { get; set; }
        public string Key { get; set; }
        public MenuSize Size { get; set; }
        public TilePoint Location { get; set; }

        /* Must Override */
        public abstract void Render(MonoBehaviour mono);
    }
}

MenuContent serves as a base class for more specific MenuContent types. The only method here which needs to be implemented is the Render method. Again, I'm not going to go through rendering today as I haven't thought too much about it, but I expect it will need some sort of Unity game object passed to it, so let's take that into account for now, and pivot away from it if needed in the future.
MenuContentImage.cs

using Jrpg.System;
using System;
using UnityEngine;

namespace Jrpg.MenuSystem
{
    public class MenuContentImage : MenuContent
    {
        public string Content { get; set; }

        public MenuContentImage(GameStore g) : base(g)
        {
        }

        public override void Render(MonoBehaviour mono)
        {
            throw new NotImplementedException();
        }
    }
}

This is a simple class which doesn't do much for now, as displaying and handling images within a Menu is going to be saved for a separate discussion another day. 😊

MenuContentOption.cs

using NetStandardSystem = System;
using System.Collections.Generic;
using System.Text;
using Jrpg.System;
using UnityEngine;

namespace Jrpg.MenuSystem
{
    public class MenuContentOption : MenuContent
    {
        public string Content { get; set; }
        public string Handler { get; set; }
        public int Index { get; set; }

        public MenuContentOption(GameStore g) : base(g)
        {
            Type = MenuContentType.Option;
        }

        public void Handle()
        {
            MenuContentOptionHandler handler =
                (MenuContentOptionHandler)NetStandardSystem.Activator.CreateInstance(
                    NetStandardSystem.Type.GetType(Handler),
                    new object[] { }
                );

            // Now, handle
            handler.Handle(this.gameStore);
        }

        public override void Render(MonoBehaviour mono)
        {
            throw new NetStandardSystem.NotImplementedException();
        }
    }
}

Okay, so this class changed just slightly, with a couple of additions compared to what we originally designed. First, in order to logically represent ordering for menu options, we will add an Index property. Then, we need to implement logic to run the Handle method. If you're familiar with how I've been doing things for this entire framework, you'll easily pick up the usage of .NET Reflection here. The handler object which is created draws from the Handler property, which is expected to contain the class to be instantiated to handle the selection of the option at runtime. This logic is custom, and is intended to be very flexible.
MenuContentOptionHandler.cs

using Jrpg.System;

namespace Jrpg.MenuSystem
{
    public interface MenuContentOptionHandler
    {
        void Handle(GameStore gs);
    }
}

All option handlers should implement this particular interface.

MenuContentText.cs

using System;
using Jrpg.System;
using UnityEngine;

namespace Jrpg.MenuSystem
{
    public class MenuContentText : MenuContent
    {
        public string Content { get; set; }

        public MenuContentText(GameStore g) : base(g)
        {
            Type = MenuContentType.Text;
        }

        public override void Render(MonoBehaviour mono)
        {
            throw new NotImplementedException();
        }
    }
}

Probably the simplest of all MenuContent sub-types, the MenuContentText object just contains the text content of what is to be displayed for a specific region.

MenuContentToken.cs

using NetStandardSystem = System;
using System.Collections.Generic;
using Jrpg.System;
using UnityEngine;

namespace Jrpg.MenuSystem
{
    public class MenuContentToken : MenuContent
    {
        public MenuContentToken(GameStore g) : base(g)
        {
            Type = MenuContentType.Token;
        }

        public string Content { get; set; }
        public List<string> Replacers { get; set; }

        public void Replace()
        {
            foreach (var replacer in Replacers)
            {
                MenuContentTokenReplacer replaceHandler =
                    (MenuContentTokenReplacer)NetStandardSystem.Activator.CreateInstance(
                        NetStandardSystem.Type.GetType(replacer),
                        new object[] { }
                    );

                Content = Content.Replace(replaceHandler.Token, replaceHandler.Replace(this.gameStore));
            }
        }

        public override void Render(MonoBehaviour mono)
        {
            throw new NetStandardSystem.NotImplementedException();
        }
    }
}

Similar to MenuContentOptionHandler, MenuContentTokenReplacer is a base type which will be extended by sub-types that handle token replacement. Every MenuContentTokenReplacer object has a Token property that will be searched for in the Content property of the MenuContentToken object.
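The source for MenuContentTokenReplacer.cs itself is never shown in this post. Based on the description above and the sub-types that appear later, a plausible sketch of that base type (my reconstruction, not the author's exact code) is:

```csharp
using Jrpg.System;

namespace Jrpg.MenuSystem
{
    // Hypothetical reconstruction: the base type every token replacer extends.
    public abstract class MenuContentTokenReplacer
    {
        // The literal token (e.g. "$NAME$") searched for in MenuContentToken.Content;
        // assigned by each sub-type's constructor.
        public string Token { get; protected set; }

        // Returns the text that should replace Token, typically read from the GameStore.
        public abstract string Replace(GameStore g);
    }
}
```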
So for example, if the Content property has First Name: $FNAME$, and Token is $FNAME$, then the Replace method implemented by the MenuContentTokenReplacer sub-class will run the specialized logic to replace the token found within the MenuContentToken.Content property with a custom value.

MenuContentType.cs

namespace Jrpg.MenuSystem
{
    public enum MenuContentType
    {
        Text,
        Image,
        Token,
        Option
    }
}

Basic enum definition here! I am sticking to my original design.

MenuSize.cs

namespace Jrpg.MenuSystem
{
    public class MenuSize
    {
        public int Width { get; set; }
        public int Height { get; set; }

        public MenuSize(int w, int h)
        {
            Width = w;
            Height = h;
        }

        public override string ToString()
        {
            return $"{Width} x {Height}";
        }
    }
}

A typical POCO class that abstracts the size of a Menu in the context of tile coordinates. I overrode the ToString method here to make debugging easier.

MenuStack.cs

using System;
using System.Collections.Generic;
using System.Linq;

namespace Jrpg.MenuSystem
{
    public class MenuStack
    {
        private Stack<Menu> Menus;

        public MenuStack()
        {
            Menus = new Stack<Menu>();
        }

        public Menu Peek()
        {
            return Menus.Peek();
        }

        public void Push(Menu m)
        {
            Menus.Push(m);
        }

        public Menu Pop()
        {
            return Menus.Pop();
        }

        public void Clear()
        {
            Menus.Clear();
        }

        public List<string> Keys()
        {
            return Menus.Select(m => m.Key).ToList();
        }

        public int Count()
        {
            return Menus.Count;
        }

        public void Render()
        {
            foreach (var m in Menus)
            {
                Console.WriteLine($"Rendering menu {m.Key} at {m.Location}, and size {m.Size}");
                m.Render();
            }
        }
    }
}

My hope is that MenuStack is the main entry point for manipulating menus from the game. The design means the developer can only use stack-like operations to operate on a set of menus. We'll see if that really is the case, though...
🤣

TilePoint.cs

namespace Jrpg.MenuSystem
{
    public class TilePoint
    {
        public int X { get; set; }
        public int Y { get; set; }

        public TilePoint(int x, int y)
        {
            X = x;
            Y = y;
        }

        public override string ToString()
        {
            return $"({X}, {Y})";
        }
    }
}

Another POCO class! Here, it is a representation of a tile coordinate. TilePoint tells us the exact coordinate of a tile within the grid which contains all menus on screen.

Unit Tests

The easiest thing to do to test some functionality of our menu system is to write some unit tests, now that we have functional code written out. For the time being, let's just limit ourselves to testing the functionality of MenuStack and Menu. Let's create a file called TestMenus.cs under our Jrpg.System.Tests project. Here is a summary of the tests which will validate our implementation:

- TestMenuStack - Tests to see if the MenuStack can hold data
- TestMenuStackPeek - Tests to see if we can obtain the top-most Menu without altering the context.
- TestMenuStackPop - Tests if we can remove Menus from the stack, to simulate either moving away from the menu, or the cursor taking some sort of action.
- TestMenuStackClear - Tests to see if we can clear the MenuStack -- leaving the number of items at 0.
- TestMenuWithContent - Ahhh!! The heart of the test. This actually creates a menu with content! The menu should display a prompt with 2 choices, and each choice takes some sort of action. The rendered menu should look like this:

|--------------------------------------------------|
| Hello! Terry Token, is Pie good?                 |
|    Yes                                           |
|    No                                            |
|--------------------------------------------------|

We will check this using DebugRender.

Setting Up

Within the constructor of the test file, we will set up a working MenuStack instance to be used in the rest of the unit tests. This goes against unit test best practice, but in this situation, we want to see if we can maintain a stable MenuStack structure as we run our tests.
We can also always create tests which operate on a different MenuStack instance should we need to.

public TestMenus(ITestOutputHelper output)
{
    this.output = output;

    MenuStack menuStack = new MenuStack();

    Menu m1 = new Menu();
    m1.Key = "menu-party";
    m1.Location = new TilePoint(1, 2);
    m1.Size = new MenuSize(50, 30);

    Menu m2 = new Menu();
    m2.Key = "menu-options";
    m2.Location = new TilePoint(38, 1);
    m2.Size = new MenuSize(10, 20);

    Menu m3 = new Menu();
    m3.Key = "menu-time";
    m3.Location = new TilePoint(38, 25);
    m3.Size = new MenuSize(10, 4);

    Menu m4 = new Menu();
    m4.Key = "menu-location";
    m4.Location = new TilePoint(30, 31);
    m4.Size = new MenuSize(25, 1);

    menuStack.Push(m1);
    menuStack.Push(m2);
    menuStack.Push(m3);
    menuStack.Push(m4);

    this.menuStack = menuStack;
}

Here we create a MenuStack with an overall structure which resembles that of the Final Fantasy 7 party menu. The exact location and size of each individual menu isn't extremely important for now.

TestMenuStack

[Fact]
public void TestMenuStack()
{
    List<string> keys = new List<string>
    {
        "menu-party",
        "menu-options",
        "menu-time",
        "menu-location"
    };

    Assert.Equal(keys.Count, menuStack.Count());
    Assert.All(menuStack.Keys(), key => keys.Contains(key));
}

We will check to see if we have all the expected Menu instances we created within the constructor. The assertion statements simply check to see if we have the expected number, and if all keys match the keys of the Menu instances found in the MenuStack.

TestMenuStackPeek

[Fact]
public void TestMenuStackPeek()
{
    var m = menuStack.Peek();

    Assert.Equal("menu-location", m.Key);
    Assert.Equal(4, menuStack.Count());
}

Another small and easy test to see if the expected top-level Menu matches the one we expect (the location menu). Additionally, the Peek method in MenuStack should not alter the context of the menu system, so we should still expect the same number of Menu instances to be in the MenuStack.
TestMenuStackPop

[Fact]
public void TestMenuStackPop()
{
    menuStack.Pop();
    var m = menuStack.Pop();

    Assert.Equal("menu-time", m.Key);
    Assert.Equal(2, menuStack.Count());
}

We remove 2 Menu items from the MenuStack. What is left should just be 2 Menu instances, with the top level being the time menu.

TestMenuStackClear

[Fact]
public void TestMenuStackClear()
{
    menuStack.Clear();
    Assert.Equal(0, menuStack.Count());
}

Pretty self-explanatory test here. We just test to see that there are no longer any Menu objects after clearing the MenuStack.

TestMenuWithContent

[Fact]
public void TestMenuWithContent()
{
    var mainCharacterName = "Terry Token";
    var gameStore = GameStore.GetInstance();
    gameStore.Put<string>("MainCharacterName", mainCharacterName);

    Menu m = new Menu();

    var line1 = "Hello! $NAME$, is $FOOD$ good?";
    MenuContentToken mcLine1 = new MenuContentToken(GameStore.GetInstance());
    mcLine1.Key = "line-1";
    mcLine1.Size = new MenuSize(line1.Length, 1);
    mcLine1.Content = line1;
    mcLine1.Location = new TilePoint(1, 1);
    mcLine1.Replacers = new List<string>
    {
        "Jrpg.SampleGame.Menus.Tokens.MenuTokenReplacerNameTest, Jrpg.SampleGame",
        "Jrpg.SampleGame.Menus.Tokens.MenuTokenReplacerFoodTest, Jrpg.SampleGame"
    };
    mcLine1.Replace();

    var line2 = "Yes";
    MenuContentOption mcLine2 = new MenuContentOption(GameStore.GetInstance());
    mcLine2.Key = "line-2";
    mcLine2.Size = new MenuSize(line2.Length, 1);
    mcLine2.Location = new TilePoint(4, 2);
    mcLine2.Content = line2;
    mcLine2.Handler = "Jrpg.SampleGame.Menus.Options.MenuOptionHandlerYesTest, Jrpg.SampleGame";

    var line3 = "No";
    MenuContentOption mcLine3 = new MenuContentOption(GameStore.GetInstance());
    mcLine3.Key = "line-3";
    mcLine3.Size = new MenuSize(line3.Length, 1);
    mcLine3.Location = new TilePoint(4, 3);
    mcLine3.Content = line3;
    mcLine3.Handler = "Jrpg.SampleGame.Menus.Options.MenuOptionHandlerNoTest, Jrpg.SampleGame";

    m.AddContent(mcLine1);
    m.AddContent(mcLine2);
    m.AddContent(mcLine3);

    var rendered = m.DebugRender();
    this.output.WriteLine(rendered);
    Assert.Contains(mainCharacterName, rendered);

    this.output.WriteLine("Choosing the NO option");
    Assert.Null(gameStore.Get<string>("OptionResult"));
    ((MenuContentOption)m.GetContent("line-3")).Handle();
    Assert.Equal("NO", gameStore.Get<string>("OptionResult"));
}

This test is a little more complex, and requires a bit of explanation. As mentioned earlier, our rendered prompt will look something like this:

|--------------------------------------------------|
| Hello! Terry Token, is Pie good?                 |
|    Yes                                           |
|    No                                            |
|--------------------------------------------------|

Since creating MenuContent objects requires a GameStore object, we first retrieve the global instance which we have. If no global instance has yet been instantiated, the Jrpg.System framework will do that automatically for us. We then set the property MainCharacterName in our GameStore instance to be Terry Token. We will need this later in our test. We create a Menu and several MenuContent objects to build our prompt. Our prompt is only three lines, so we'll just need a single MenuContentToken instance to dynamically display the name of our hero, and two MenuContentOption objects to display the Yes and No choices for this prompt. For MenuContentToken, our Content string with the token is: Hello! $NAME$, is $FOOD$ good? We have 2 tokens which need to be replaced, so we will need to create 2 MenuContentTokenReplacer objects to handle replacement of these tokens.
In Jrpg.SampleGame, we will create:

- MenuTokenReplacerNameTest.cs

using Jrpg.MenuSystem;
using Jrpg.System;

namespace Jrpg.SampleGame.Menus.Tokens
{
    public class MenuTokenReplacerNameTest : MenuContentTokenReplacer
    {
        public MenuTokenReplacerNameTest() : base()
        {
            Token = "$NAME$";
        }

        public override string Replace(GameStore g)
        {
            return g.Get<string>("MainCharacterName");
        }
    }
}

- MenuTokenReplacerFoodTest.cs

using Jrpg.MenuSystem;

namespace Jrpg.SampleGame.Menus.Tokens
{
    public class MenuTokenReplacerFoodTest : MenuContentTokenReplacer
    {
        public MenuTokenReplacerFoodTest() : base()
        {
            Token = "$FOOD$";
        }

        public override string Replace(Jrpg.System.GameStore g)
        {
            return "Pie";
        }
    }
}

For both of the above, notice that we have assigned the Token property to what the replacer should look for when executing its replacement logic. Then the Replace method is implemented to return the final string which should replace the token. I decided to make MenuTokenReplacerNameTest interesting by having it access the GameStore to return the value which corresponds to the main character name to be used for replacement when MenuContentToken does its replacement logic. This is similar to features in JRPGs where dialogues use the custom character name of a player in place of the original name. Implementing the MenuContentOptionHandlers to add functionality to the prompt choices is similar to implementing a MenuTokenReplacer object. The implementations for MenuOptionHandlerYesTest.cs and MenuOptionHandlerNoTest.cs are similar, and just set a GameStore property of OptionResult to the corresponding prompt label. This is how our test can check to see if the player had selected a specific prompt.
using Jrpg.MenuSystem;
using Jrpg.System;

namespace Jrpg.SampleGame.Menus.Options
{
    public class MenuOptionHandlerYesTest : MenuContentOptionHandler
    {
        public void Handle(GameStore g)
        {
            g.Put<string>("OptionResult", "YES");
        }
    }
}

using Jrpg.MenuSystem;
using Jrpg.System;

namespace Jrpg.SampleGame.Menus.Options
{
    public class MenuOptionHandlerNoTest : MenuContentOptionHandler
    {
        public void Handle(GameStore g)
        {
            g.Put<string>("OptionResult", "NO");
        }
    }
}

Finally, we want to test DebugRender by outputting a representation of the constructed Menu, and assert that the GameStore has the right OptionResult value after selecting the No option in the prompt.

Conclusion

So we've been able to write some unit tests to validate the basic design of our Menu System. However, we still need to address a few gaps in our system which we have been deferring. The main question is how to render these menus in Unity, and how the various Render methods interact with the engine. This will be explored in the next post!
http://rogerngo.com/article/20200527_dev_diaries_jrpg_9/
CC-MAIN-2020-29
refinedweb
3,797
51.04
Hi, I was wondering if there is a way to control hardware in C++, such as opening and closing a CD drive. Thanks in advance, C++

To me you have not even shown an effort. If you notice, this board helps other coders become better coders - this only takes place when one posts up his / her code and lets us see what they understand. Demands like these won't help anyone, nor yourself, to become a better coder! I'm not sure that you even know the basics, never mind playing around with the API interface!

>Can you tell me how to do it now?
Sure, go to msdn.microsoft.com and read. Normally I would give you some code, but I find your "gimme gimme" tone annoying.

Hi AcidBurn and Narue, Sorry for my bad attitude. Just so you know, this wasn't a class assignment (I'm not even in a programming class). I was actually trying to make a CD player. Thanks for telling me to look at msdn.microsoft.com. AcidBurn, I actually have absolutely no idea how to do this. I just started learning C++ and I'm still a beginner. -- C++

Then I strongly suggest you start from the beginning, either by taking a programming course, or attempting to get a book and work through it! Don't expect to find yourself playing with that sort of stuff soon! I've had almost 1 year doing C and C++... and we haven't even started that yet! (Next year)

Here's what I like to call the lazy man's method! Interprocess communication can help improve the functionality of your program.

// Command line approach
#include <stdlib.h>
...
// To open
system("wineject -open d:");  // Note: d: can be any letter you pick.
// To close
system("wineject -close d:"); // Note: d: can be any letter you pick.

-----------------------------------------------------------

// Windows Approach
#include <windows.h>
...
// To Open
WinExec("wineject -open d:", SW_HIDE);
// To close
WinExec("wineject -close d:", SW_HIDE);

------------------------------------------------------------

Hope this helps, you can get wineject from: The above assumes that wineject.exe is in the same location as your program's exe. If you wish to use configurable drive letters with the commands, you must get rid of the "d:" part and concatenate your own drive letter to the string.

------------------------------------------------------------

That's the easy, interprocess communication approach. Now let's use something more direct. You expressed using Windows XP. Luckily, there is API support for this type of communication. This is what you're looking for: the IMAPI interface through the Windows API will make things a breeze =). Documentation and examples are available at MSDN.

Now let's take it a step further. How does communication with the CDROM work? Well, it is true that the API may handle this a little differently as you go from platform to platform, but layers exist to compensate for procedures that are not explicitly available in the API. Such layers are ASPI layers and IMAPI layers. In fact, Nero provides a free ASPI layer. WinAspi.dll is also an option (but only for Windows). For more information about an [almost] platform independent approach to communicating with CDROM drives, see cdrecord, which implements an ASPI layer to provide a common interface to interact with CDROM drives. Not only can you eject a CD, you can write data! Good Luck!

Although the explanation refers to an MFC based example, at the top of the page you can download a NON-MFC example. You will need to know how the Windows API works and how to code against it. If you don't, check out:
You can also do it without any new libraries by just using windows.h. This is a bit more confusing, but msdn.com outlines the functions very well. Search for: the NOTIFYICONDATA structure and Shell_NotifyIcon.
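On the original CD-tray question, there is also a plain windows.h route via the MCI string interface in winmm. This is a hedged sketch (Windows-only, not from the original thread; link against winmm.lib):

```cpp
// Eject and close the default CD drive through MCI (winmm.lib).
#include <windows.h>
#include <mmsystem.h>

int main()
{
    // Open (eject) the tray of the default "cdaudio" device...
    mciSendStringA("set cdaudio door open", NULL, 0, NULL);
    // ...and close it again.
    mciSendStringA("set cdaudio door closed", NULL, 0, NULL);
    return 0;
}
```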
https://www.daniweb.com/programming/software-development/threads/22027/controlling-hardware
CC-MAIN-2020-29
refinedweb
682
75.2
Python Programming, news on the Voidspace Python Projects and all things techie.

Piping Objects: Bringing a Powershell-alike Syntax to Python

Harry Pierson recently twittered about how he missed the Powershell syntax for piping objects between commandlets in Python. After exchanging emails with him, it turns out he actually prefers something like the F# syntax - which uses '|>' rather than '|' as a pipe. I decided to see how far I could get with Python (settling on '>>' as the operator), and I think that what I've come up with is quite nice. It enables you to create commandlets and pipe objects between them using '>>' (the right shift operator). Creating new commandlets is as easy as writing a function.

As I don't actually have a use case for this (!), this 'proof of concept' implementation is pretty specific to my example use case - but as it is around 60 lines of Python it is very easy to customise. The syntax is nice and declarative, so creating a library of commandlets could be useful for working at the interactive interpreter, or it could be used for creating Domain Specific Languages.

Suppose you have a set of data that you want to pass through several filters that also transform the data, and then perform an action on each record. With commandlets you can do things like:

    some_data >> filter1 >> filter2 >> action

The normal Python technique would be to use list comprehensions. With list comprehensions each record has to go through the filter twice, as transforming and filtering have to be done separately. An equivalent of the above using list comprehensions looks like:

    intermediate = [filter1(x) for x in some_data if filter1(x) is not ignored]
    [action(filter2(x)) for x in intermediate if filter2(x) is not ignored]

Rolling that exactly into a single list comprehension means one big-ass ugly list comprehension.
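To make the comparison concrete, here is a runnable version of the two-pass list-comprehension approach. The filter and action names (filter1, filter2, action) are placeholders of my own matching the prose above, not part of any library:

```python
# A concrete version of the two-pass list-comprehension approach.
# filter1, filter2 and action are hypothetical examples, not library code.
ignored = object()  # sentinel meaning "drop this record"

def filter1(x):
    # transform-and-filter: keep even numbers, doubling them
    return x * 2 if x % 2 == 0 else ignored

def filter2(x):
    # keep values under 10, drop the rest
    return x if x < 10 else ignored

collected = []
def action(x):
    collected.append(x)

some_data = [1, 2, 3, 4, 5, 6]
intermediate = [filter1(x) for x in some_data if filter1(x) is not ignored]
[action(filter2(x)) for x in intermediate if filter2(x) is not ignored]

print(intermediate)  # [4, 8, 12]
print(collected)     # [4, 8]
```

Note that filter1 is called twice per record - once in the condition and once in the expression - which is exactly the duplication the commandlet syntax avoids.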
As an example of the syntax it enables, I've implemented three simple commandlets that allow you to do:

    listdir('.') >> notolderthan('2/3/08') >> prettyprint

I'm afraid that the example only works with IronPython (because the date handling is much nicer than CPython), but none of the rest of the code requires IronPython.

The first part of the chain shown above is a commandlet called listdir that returns a list of all the files in a directory (it delegates to os.listdir). Although it is lists that are piped between commandlets, the functions you write (which are wrapped in a Cmdlet class) only need to handle one argument at a time. You create commandlets that take arguments (like notolderthan) in the same way you write decorators that take arguments - as a function that returns a function. Here is the implementation of listdir and the notolderthan filter:

    def f_listdir(path):
        def listdir():
            # see the end of this blog entry for the definition of Path
            return [Path(path, member) for member in os.listdir(path)]
        return listdir

    def f_notolderthan(date):
        datetime = System.DateTime.Parse(date)
        def notolderthan(member):
            if member.mtime >= datetime:
                return member
            return ignored
        return notolderthan

    listdir = Cmdlet(f_listdir)
    notolderthan = Cmdlet(f_notolderthan)

ignored is a special sentinel value that allows commandlets to act as filters. Commandlets can also perform an action instead of piping objects out. prettyprint is an example of this:

    def f_prettyprint(val):
        print val

    prettyprint = Action(f_prettyprint)

You can also pass in a generator (or any iterable) to the start of the chain.
Here is an example that uses a recursive generator, listing all the files in a directory and its subdirectories, on the left hand side of the chain:

    def recursive_walk(path):
        for e in os.listdir(path):
            p = os.path.join(path, e)
            if os.path.isfile(p):
                yield Path(path, e)
            else:
                for entry in recursive_walk(p):
                    yield entry

    recursive_walk('.') >> notolderthan('2/3/08') >> prettyprint

The Cmdlet class is a subclass of list, so a chain of commandlets returns a list (well - a Cmdlet) populated with the results of the call chain. Here's the full implementation of Cmdlet and Action:

    import System

    __version__ = '0.1.0'
    __all__ = ['Action', 'Cmdlet', 'ignored', 'listdir', 'notolderthan', 'prettyprint']

    ignored = object()

    class Cmdlet(list):
        def __init__(self, function, _populated=False):
            self.function = function
            self._populated = _populated

        def __call__(self, *args, **keywargs):
            function = self.function
            if args or keywargs:
                function = self.function(*args, **keywargs)
            return Cmdlet(function)

        def __rshift__(self, other):
            if not self._populated:
                # TODO: the first function must return a list ?
                self[:] = self.function()
            new = Cmdlet(other.function, True)
            vals = [other.function(m) for m in self]
            new[:] = [v for v in vals if v is not ignored]
            return new

        def __rrshift__(self, other):
            # the left side is not a commandlet
            # so it must be an iterable
            new = Cmdlet(self.function, True)
            vals = [self.function(m) for m in other]
            new[:] = [v for v in vals if v is not ignored]
            return new

        def __repr__(self):
            return 'Cmdlet(%s)' % list.__repr__(self)

    class Action(Cmdlet):
        def __rshift__(self, other):
            Cmdlet.__rshift__(self, other)
            return None

        def __repr__(self):
            return 'Action(%s)' % self.function.__name__

Nice. Most of the magic is in __rshift__ and __rrshift__, but I'm also fond of __call__, which allows you to create commandlets that take arguments.
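Since the only IronPython dependency is the date handling, the core classes run fine anywhere. Here is a sketch of my own - a trimmed copy of Cmdlet (no Action, no System import) piping plain integers, with commandlet names (evens, scale) that are examples rather than anything from the post:

```python
# Exercising a trimmed copy of the Cmdlet class with plain data.
# The commandlets here (evens, scale) are hypothetical examples.
ignored = object()

class Cmdlet(list):
    def __init__(self, function, _populated=False):
        self.function = function
        self._populated = _populated

    def __call__(self, *args, **keywargs):
        # calling a Cmdlet with arguments builds a parameterised commandlet
        function = self.function
        if args or keywargs:
            function = self.function(*args, **keywargs)
        return Cmdlet(function)

    def __rshift__(self, other):
        if not self._populated:
            self[:] = self.function()
        new = Cmdlet(other.function, True)
        vals = [other.function(m) for m in self]
        new[:] = [v for v in vals if v is not ignored]
        return new

    def __rrshift__(self, other):
        # the left side is a plain iterable rather than a commandlet
        new = Cmdlet(self.function, True)
        vals = [self.function(m) for m in other]
        new[:] = [v for v in vals if v is not ignored]
        return new

def f_evens(member):
    # filter commandlet: pass even numbers through, drop the rest
    return member if member % 2 == 0 else ignored

def f_scale(factor):
    # parameterised commandlet, built like a decorator with arguments
    def scale(member):
        return member * factor
    return scale

evens = Cmdlet(f_evens)
scale = Cmdlet(f_scale)

# a plain iterable on the left triggers __rrshift__
result = range(10) >> evens >> scale(3)
print(result)  # [0, 6, 12, 18, 24]
```

The chain keeps 0, 2, 4, 6, 8 and multiplies each by three - and because Cmdlet subclasses list, the result is directly usable as one.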
To run the examples you also need my homegrown Path class:

    class Path(object):
        def __init__(self, path, entry):
            self.dir = path
            self.name = entry
            self.path = os.path.join(path, entry)
            # mtime is the last-write time, ctime the creation time
            self.mtime = System.IO.File.GetLastWriteTime(self.path)
            self.ctime = System.IO.File.GetCreationTime(self.path)

        def __repr__(self):
            start = 'File:'
            if os.path.isdir(self.path):
                start = 'Dir:'
            ctime = self.ctime
            mtime = self.mtime
            return "%s %s :ctime: %s :mtime: %s" % (start, self.path, ctime, mtime)

Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.

Posted by Fuzzyman on 2008-03-27 01:12:31 | | Categories: Hacking, Python, IronPython, General Programming Tags: powershell, F#, DSL, commandlets

Apples, Django is Wired and Faster Python

Just a collection of interesting links from the intarwebz today:

Report: Mac sales up 60% in February
The report shows growth in Mac unit sales up 60 percent from 2007. In dollar terms, NPD has Apple capturing a full 25 percent of the U.S. computer market last month.

Wired Magazine: Expired-Tired-Wired
Expired: ASP.NET - Tired: PHP - Wired: Django

Issue 2459: speedup loops with better bytecode
A patch for CPython that speeds up for and while loops through better bytecode. The same technique promises to improve list comprehensions and generator expressions as well.

Python-safethread: Python 3 without the GIL
This is a big-assed patch and I wouldn't rate its chances of making it into the core too highly - but it is a damn impressive project. It allows you to compile a version of Python without the GIL (requires changes to C extensions of course). There is a performance cost for single-threaded code, but it allows multi-threaded code to scale across multiple cores.
Posted by Fuzzyman on 2008-03-25 18:24:33 | | Categories: Python, Computers Tags: apple, mac, wired, concurrency, performance

SFI Conference: Erlang, Ruby and Java (and Concurrency)

A week before PyCon [1] I was at a very different conference: the Academic IT Festival organised by three universities in Krakow, Poland. I was accompanied by my erstwhile colleague, Jonathan Hartley (who has currently abandoned us for Mexico to get married). You can read his write-up in case you don't believe mine...

For a student conference this was fiercely well organised, with a fantastic array of speakers (myself and Jonathan notwithstanding). It was great to meet (amongst many others):

- Chad Fowler, author and prominent member of the Ruby community
- Maciek Fijalkowski, who works on PyPy
- Joe Armstrong, the creator of Erlang
- Gilad Bracha, one of the architects of the JVM

The conference is free and open to all, and around half the talks are in English. Krakow is a beautiful city, so if you have the opportunity to be there next year then take it [2]. I'd like to share with you some of the things I learnt at the conference.

Chad Fowler

Chad is a great guy - I had a great time eating kebabs with him at one in the morning in Krakow. He is also a great speaker, and I'd love to be as confident as him when speaking. I've gradually been learning more about Ruby, and Chad filled in a few more gaps for me.

One interesting point was that the Ruby community generally sees Rubinius (a kind-of-equivalent of PyPy for Ruby) as the future of Ruby. Currently the VM is written in C, but they aim to develop a static subset of Ruby so that the VM can be maintained in Ruby. This is an idea originating, I believe, in Slang for Smalltalk, but also seen in RPython for PyPy. I would love to see the future of Python in PyPy. Maintaining Python in Python sounds much more fun than maintaining it in C, but I also see it as the only viable path that can lead us away from the GIL and reference counting.
I also learned that in some ways Ruby is more restrictive than Python. For example you can't change the type of objects (which you can do in Python by assigning to __class__), or the bases of a class (by manipulating __bases__ on the class). Being able to change the type of an object is one of the requirements Gilad Bracha has of a dynamic language, of which he is a big fan.

Maciek Fijalkowski

Maciek is one of the PyPy developers. He first got involved through the Google Summer of Code. He emphasised that one of the differences between the PyPy team and the Python core developers is that while the Python-Dev team are fanatically interested in language design (which is good news for Python users), the PyPy team are much more interested in remaining language agnostic and producing the best possible VM.

There has been a sea-change in PyPy development recently. In the past PyPy has been a fantastic project for producing a wealth of not-quite-working-but-really-cool side projects. As a result PyPy has been at the 'nearly working' stage for a long time. They have recently pruned the codebase of anything unmaintained or that was blocking development. Some really cool things (like the ability to write CPython extensions in RPython) have gone (possibly to return), but the new focus on solidifying and completing the core is great.

Unsurprisingly though, the core Python developers, who have no familiarity with the PyPy code base, don't see it as a 'replacement' for the code base of CPython, which they are very familiar with. Hopefully, as the JIT integration improves, PyPy will become more of a viable alternative in the not-too-distant future. (Something I see as being potentially important to the concurrency-with-Python story.)

Maciek is also interested in the Resolver Systems Ironclad project, to see if parts of it could be reused to bring compatibility with CPython C extensions to PyPy - something that otherwise could be a barrier to adoption.
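As an aside, the "changing the type of an object" capability mentioned above is a one-liner in Python. A small sketch (the Duck/Dog classes are invented for illustration):

```python
# Changing the type of an object at runtime by assigning to __class__ -
# the capability Ruby lacks. Duck and Dog are hypothetical examples.
class Duck(object):
    def speak(self):
        return 'quack'

class Dog(object):
    def speak(self):
        return 'woof'

pet = Duck()
assert pet.speak() == 'quack'

pet.__class__ = Dog   # change the instance's type in place
assert pet.speak() == 'woof'
assert isinstance(pet, Dog)
```

The existing instance keeps its identity and attributes but looks up its methods on the new class from then on.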
Joe Armstrong

Joe Armstrong is an eccentric Englishman living in Sweden. He started his talk with a look at the future of computing. Modern computing has long been based on the Von Neumann architecture. In recent years programmers of sequential programs ('normal' programs, which Joe quaintly calls 'legacy' programs) have seen their code get faster and faster as CPU clock speeds have improved (despite the fact that Intel clock speeds are largely fiddled anyway). That has now started to change. To reduce power consumption, and because of the ever-decreasing proportion of a chip that can be reached in a single clock cycle, clock speeds are starting to drop and processors are gaining more cores instead. Sequential programs that can only run on a single core will start getting slower rather than faster.

Single chips with hundreds of cores, the power of a supercomputer, have already been produced experimentally by chip manufacturers (who aren't keen to sell them and wipe out their market for selling many processors for supercomputers). Joe has a project which is close to getting funding for blowing sixty MIPS cores onto an FPGA.

Threading with locking, as a concurrency solution, is very difficult (but not as painful as it is often painted by the Python community - particularly if you take care with your design). Some alternatives exist, like Transactional Memory (which, when implemented as Software Transactional Memory without hardware support, has performance costs) - but although this technique scales over multiple cores it doesn't scale over multiple machines.

Another alternative is the functional programming language Erlang. By using lightweight processes (which aren't OS-level threads or OS-level processes but run on top of a scheduler in the VM) and removing mutable state, Erlang programs scale to multiple cores or multiple machines as part of the language design. Concurrency is going to be ever more important and Erlang is very trendy at the moment.
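Erlang's share-nothing, message-passing model can be approximated (without the scalability benefits) in Python using threads that communicate only through queues. A minimal sketch of my own, not from Joe's talk:

```python
# A share-nothing, message-passing worker in the spirit of Erlang's
# lightweight processes, approximated with a Python thread and Queues.
# Illustration only - no Erlang-style scheduling or distribution here.
import threading
import queue

def worker(inbox, outbox):
    # the worker owns its state; other threads can only send it messages
    total = 0
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down and report
            outbox.put(total)
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in [1, 2, 3, 4]:
    inbox.put(n)
inbox.put(None)
t.join()

result = outbox.get()
print(result)  # 10
```

Because the running total never leaves the worker, there is no shared mutable state and so nothing to lock - the queues do all the synchronisation.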
It is used by ejabberd, which in turn is used by Twitter, and there are some very interesting 'massively distributed hash table' projects likely to surface soon.

In CPython, because of the Global Interpreter Lock, even threaded applications don't scale across multiple cores. The GIL is supposed to make writing C extensions easier, but it is an interesting design choice to make aspects of Python more suited to writing C than to writing Python. The Python community normally touts process-based concurrency as a better alternative. However, with a clean design you can limit the locking needed for multiple threads, and where you use threads for doing large calculations (and so have a lot of data to marshal back and forth) the costs of using multiple processes are great. Naturally Joe Armstrong thinks the answer is for us all to write in Erlang.

Interestingly, the work Chad Fowler has been doing recently has involved Ruby-without-Rails and Erlang. He says that writing Erlang test driven has been 'interesting' and has tended to make them use more processes (where in Python or Ruby you would expect TDD to make your code more modular). He is not sure whether this is a good thing or not, but it seems to be working for them.

Gilad Bracha

Gilad was one of the architects of the Java Virtual Machine, and was also involved in Strongtalk, the phenomenal Smalltalk JIT that effectively became the Java HotSpot JIT.

Random Aside: .NET vs Java. It was very interesting to talk to some of the Microsoft .NET guys at PyCon about '.NET vs Java'. Their take was "they have the better JIT while we have the better garbage collection".

Gilad knows a lot about optimising dynamic languages. He firmly believes that dynamic languages can run as fast as statically typed languages; the reason it doesn't happen is that all the people who know enough about the subject are being paid to work on other things (this is rocket science, he says).
One of the things that Maciek emphasised in his talk was that you have more information for optimising a dynamic language at runtime than you do for optimising a statically typed program at compile time.

Now that Gilad has left Sun (he is now working on a hush-hush project that involves creating a new language called Newspeak built on Smalltalk), he is happy to talk about the mistakes made with the JVM! (Unfortunately the slides aren't available.) His talk was full of great quotes, and it is a shame that I can't remember them all. They include:

The problem with software is that you can add things, but you can't remove them. So it bloats. (This is why his language is called Newspeak - in the book 1984 the language Newspeak was regularly revised to remove words.)

Java has nine types of integer, and none of them do the right thing. (C# is basically no better, before .NET enthusiasts get excited.) For example, 'Integer' has a public constructor and you can create new ones for the same value that aren't equal to each other! You don't have auto-promotion to big integers (Python longs), so you still have to know in advance how big the results of your calculations are going to be. The basic problem comes from having primitives that aren't 'objects' and the consequent boxing and unboxing. A decent JIT can apply optimisation without needing to have primitives that aren't objects.

Programs possible from languages with a static type system are a subset of all the programs that can be written. For some people this is enough.

Don't listen to your customer. Although it sounds like it, I don't think he was knocking the agile practice of involving customers in the design. He was more saying don't let your customers tell you how to do things.

Static type systems are an inherent security risk. This last point is an interesting one, as security is one of the things that type safety is supposed to be able to bring.
He explained it with a justification followed by a story. Formalising real-world type systems is very difficult. In order to formalise them, the authors of most type systems simplify them by making some assumptions. In practice those assumptions turn out to be wrong. And even if your formalisation is fully correct in theory, it is only safe if the implementation has no bugs...

Gilad told the story of a bug in the Java Mobile Edition type verifier. The bytecode has a 'jump' instruction, and the two conditions for the jump instruction are that the target exists and that the type-state at source and target agree. In Java ME, the type-state is tracked separately, and the type verifier only verified the type-state and forgot to verify that the target exists. A Polish programmer discovered that he could construct bytecode that would pass verification but could jump into data. He could then do things like overwrite the length of an array and effectively peek and poke into memory (on a Nokia phone). Using the type information in the operating system, he was then able to reverse engineer the whole Nokia operating system...

Gilad thinks the answer is for everyone to use Smalltalk of course (he also likes Self). He isn't too polite about other languages:

- On Ruby: The performance is pathetic
- On Python: I can't take seriously a language VM that uses reference counting
- On Erlang and improving performance through concurrency: what Joe didn't say is that Erlang isn't exactly fast...

Gilad's latest blog entry on monkey-patching is well worth a read. Jonathan and I are sort-of-quoted as 'the-pythoners'.
Posted by Fuzzyman on 2008-03-24 20:23:43 | | Categories: Python, General Programming, Computers Tags: erlang, ruby, java, smalltalk, pypy, concurrency

Python 3, Bytes and Source File Encoding

After PyCon I stayed on for the sprints, spending most of the time with the core-Python guys. Brett's introduction to developing Python was very good. I feel much more confident about tackling bugs now - from compiling Python, to creating and submitting a patch, to running the tests. (Python's test framework is awful in my opinion, but I think it's getting better.) It was great to renew friendships with people like Jack Diederich and Brett Cannon, plus make new friends (despite what you may think from reading the mailing list, the Python-Dev guys are a friendly bunch).

On Monday I decided to help Trent Nelson, who bet Martin von Löwis a beer he could get the buildbots green by the end of the sprints. This was some challenge, as at the time none of the 64-bit Windows buildbots had built successfully, let alone gone green.

A problem that seemed easy was a failure in the tests for the tokenize module in Python 3. The problem, which we assumed would be a four-line fix, was that generate_tokens wasn't honouring the encoding declaration of Python source files. In Python 3, where string literals are Unicode, it meant that string literals would be incorrectly decoded. I spent about a day and a half fixing it, a lot of it pairing with Trent on the work but some of it just under his watchful eye whilst he revamped the Windows build scripts and process.

generate_tokens has an odd API. It takes a readline method as its argument, either from a file or the __next__ method of a generator, so that the source you are tokenizing doesn't have to come from the filesystem. The problem is that if the file has been opened in text mode, then in Python 3 it will be decoded to unicode using the platform default. On the Mac this is utf-8 (a nice sensible default), but on Windows it is 'cp-1252'.
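The platform-default problem is easy to demonstrate: the same UTF-8 bytes come out as different text under cp1252. A quick illustration of my own, not from the sprint code:

```python
# The same bytes decoded with two different "platform default" encodings.
# 'café' encoded as UTF-8 uses two bytes (0xC3 0xA9) for the é.
data = u'caf\xe9'.encode('utf-8')   # b'caf\xc3\xa9'

print(data.decode('utf-8'))    # café   - correct on a utf-8 platform
print(data.decode('cp1252'))   # cafÃ©  - what a cp1252 default produces
```

Read a source file in text mode on the "wrong" platform and every non-ascii string literal in it is silently mangled like this - which is exactly why tokenize needed to see the raw bytes first.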
If the Python source file has an 'encoding cookie' (a # coding=latin-1 declaration from PEP 263, or a utf-8 BOM) then it will already have been (possibly incorrectly) decoded by the time we read the declaration.

I think this problem is fairly typical of moving from Python 2 to 3. If you save a text file with non-ascii characters on the Mac and then open it in text mode on Windows, it could well fail on Python 3. Even writing arbitrary text to a file could fail on Windows, because the default encoding doesn't have a representation for every Unicode code point. Having Unicode strings is no panacea; you still have to know the encoding of your text files.

The answer for tokenize (at least the answer we implemented) was to move it to a bytes API. The readline method passed in to tokenize (the new name for generate_tokens) should return bytes instead of strings. The correct encoding is then determined (defaulting to utf-8) and used to decode the source file. The encoding declaration can be on the first line or the second, to allow for a shebang line.

The function we wrote to detect encoding may be useful if you ever have to work with Python source files. Here's a Python 2 compatible version:

    import re
    from codecs import lookup

    cookie_re = re.compile("coding[:=]\s*([-\w.]+)")

    def detect_encoding(readline):
        """
        The detect_encoding() function is used to detect the encoding
        that should be used to decode a Python source file. It requires
        one argument, a readline function.

        It will call readline a maximum of twice, and return the
        encoding used and a list of any lines it has read in. If no
        encoding is specified, 'utf-8' will be returned.
""" utf8_bom = '\xef\xbb\xbf' bom_found = False encoding = None def read_or_stop(): try: return readline() except StopIteration: return '' def find_cookie(line): try: line_string = line.decode('ascii') except UnicodeDecodeError: pass else: matches = cookie_re.findall(line_string) if matches: encoding = matches[0] if bom_found and lookup(encoding).name != 'utf-8': # This behaviour mimics the Python interpreter raise SyntaxError('encoding problem: utf-8') return encoding first = read_or_stop() if first.startswith(utf8_bom): bom_found = True first = first[3:] if not first: return 'utf-8', [] encoding = find_cookie(first) if encoding: return encoding, [first] second = read_or_stop() if not second: return 'utf-8', [first] encoding = find_cookie(second) if encoding: return encoding, [first, second] return 'utf-8', [first, second] Because the lines that have been read in still need to be tokenized, any lines that have been consumed by detect_encoding need to be 'buffered'. This is done using itertools.chain: def readline_generator(): while True: try: yield readline() except StopIteration: return chained = chain(consumed, readline_generator()) My first contribution to Python (except for a couple of trivial patches). As you can see from the check-in message, Trent was amused by my TDD approach to testing [1]... The change may not survive into Python 3 in the wild, but working on it was very satisfying. It was great to work a bit with Python 3, particularly to understand the bytes / string / io changes. The new nonlocal keyword is wicked, especially for testing. On Wednesday I spent some time with Jerry Seutter fixing some tests for urllib2 so that they no longer do things like test for redirection by going to an external server that we know does a redirect! That patch hasn't yet been applied. Trent got most of the buildbots green and the 64bit Windows boxes not only building but passing! Not sure if he got the beer though... Like this post? Digg it or Del.icio.us it. 
Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2008-03-23 15:54:18 | Categories: Python | Tags: sprint, pycon2008, conference

Fun at PyCon 2008

I really enjoyed PyCon. It was great that the conference has grown so much but still has a real 'community' feeling where you can stop and talk to people you don't know. Unfortunately the boss took ill at the last minute, which left Jonathan [1] and me to man the Resolver Systems sponsor stand (we had some good conversations in the exhibition hall) and to do his talk. We survived though, and being totally unprepared even enjoyed giving the talk together. Even more problematic, Jonathan has a Linux laptop and I use a Mac - Giles was bringing the Windows laptop! Resolver One runs fine on my Macbook under Parallels, which acted as a great stand-in until Van Lindberg came to the rescue with a loan laptop. (Many thanks Van!) This was a double rescue: in an attempt to get his Linux laptop to work with a conference projector, Jonathan managed to blow away his X-server configuration and couldn't boot his laptop an hour before his talk on Test Driven Development. I couldn't find Jonathan before doing the Resolver Systems lightning talk though, so I did it from the Mac. I didn't attend many talks in the end (at least I don't think I did - it's all a bit of a blur), but my favourite was Raymond Hettinger on Core Python Containers (which, as it is specific to CPython, wasn't really relevant to me as I'm spending most of my time buried deep inside IronPython these days). As for the great 'sponsor controversy'... personally I didn't enjoy the sponsor keynote(s?) particularly, but not everyone agrees. I'm afraid that most of the lightning talks on Friday were pretty dull, but the ones on Saturday and Sunday were much better (even the sponsor ones). As usual the issue has been blown out of proportion, but the organisers are well aware of what worked and what didn't. I think my talk went OK.
I dashed from the talk with Jonathan on 'End User Computing and Resolver One' straight into my 'Python in the Browser' talk. Rather than having any time to prepare myself, Chris McAvoy introduced me as I walked into the room. The audience was then treated to an undignified scramble as I tried to get my computer in a fit state to give the presentation. The best part of the talk was showing the prototype 'Interactive Interpreter in the Browser' right at the end. This used some bugfixed IronPython binaries that Dino Viehland delivered to me the morning of my talk. As if it wasn't rushed enough already! I think that once the updated binaries are released, the 'interpreter in the browser' will be a great tool for teaching Python. After the talk I gave a brief (7.5 minute podcast) interview with Dr Dobb's Journal on Silverlight and IronPython. There were many other highlights. I talked to a lot of great folks, too numerous to mention all of them, including folks like Maciek of PyPy (who I met at the SFI conference and will also be at RuPy). And Dino the lead IronPython developer (he gave me some great reasons why Resolver Systems should upgrade to IronPython 2, including improved performance and startup time). His talk was amazing (showing Django on IronPython using Silverlight). In order to maintain feature parity with Jython, Dino implemented from __future__ import GIL whilst I was watching! I presented the Ironclad Overview at the 'Python and .NET' Open Space organised by Feihong Hsu (who gave a great talk on Python.NET). After the Open Space a bunch of us went to downtown Chicago (including Mahesh Prakriya, who you can see lingering in the back of the picture above, and Harry Pierson, who would prefer I didn't call him the new commander-in-chief of dynamic languages at Microsoft). We went up the John Hancock Tower, which has an astonishing view of the city. After this it was onto the sprints. Jonathan and I went out with a crew of the 'Python-Dev' guys to a Mexican restaurant.
Never have I seen an individual so excited to find good Mexican food! Whilst hanging around in the Python-core sprint I got to meet Mark Hammond. The sprinting was massively my favourite part of the whole event, and deserves a blog entry of its own... Posted by Fuzzyman on 2008-03-23 05:25:55 | Categories: Python, IronPython, Fun | Tags: pycon2008, conference, chicago

IronPython & Silverlight 2 Tutorial with Demos and Downloads

Instead of posting my PyCon talk slides, I've turned them into a series of articles, which should be easier to follow. All the examples are available online and for download. This is everything you need to get started with IronPython and Silverlight 2.

The Articles

These articles will take you through everything you need to know to write Silverlight applications with IronPython: Introduction to IronPython & Silverlight 2; The Structure and Contents of a Dynamic Silverlight Application; Getting Started: Minimal Examples of IronPython and Silverlight 2. One of the examples is the possibility of embedding an interactive interpreter inside web pages. If you find any bugs, typos or missing links then please let me know. Posted by Fuzzyman on 2008-03-22 19:03:34 | Categories: Writing, Python, IronPython | Tags: silverlight, tutorials, articles, dotnet, web, demos, examples

This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
http://www.voidspace.org.uk/python/weblog/arch_d7_2008_03_22.shtml
program, I'll talk about how you can display text and get user input. Last time, we stopped at the actual code that tells the computer that it's the start and end of our program. Now, let's go into the things in between. The next line that we saw was:

    cout << "Hello, World!" << endl;

The key is the word "cout". This is read as "console out." This means that we want to display information on a computer monitor, like printing to a printer or listening to a sound (when the music player "outputs" i.e. plays the sound). This is the keyword that we'll use to send messages to the screen so that users will know what the program is doing. Before I end this discussion, make sure to include two less-than signs after cout (<<) - called the "stream insertion operator". I'll discuss what that is later. Next, the text within quotation marks is the actual thing that the program will display for us. For example, if we want our computer to display our name, we would say in pseudo code (an English rendition of our code):

    Console out, "Hi, my name is Joseph.";

When the program is "executed" i.e. runs, it'll say:

    Hi, my name is Joseph.

But how do we insert a new line so that the screen is not "overcrowded" with messages? There are two ways: using "endl" (the new line manipulator) or using "\n" (the ASCII (American Standard Code for Information Interchange) code for new line). As with cout, you need to provide two less-than signs, but this time you put them between the text and the "endl" sign; or, if you elect to use "\n", you don't have to put any less-than signs - all you need is a semicolon after the quote. So, here are the two variations - first using "endl" and the second using "\n":

    cout << "Hi, my name is Joseph." << endl;

and:

    cout << "Hi, my name is Joseph.\n";

Both lines will do the same thing. It's really up to you to use either method - I personally use the first method for ease of reading my code. Now, how do we get user input?
As opposed to "cout", there is another one called "cin" - read as "console in". This allows a user to type something on their keyboard so that the program can work with it. As opposed to using two less-than signs, we use two greater-than signs (>>) to differentiate from cout (called stream extraction operator, but more on that later). But here's a twist: you need to provide the actual data that the computer will work with, and that'll be covered after I post a very important tip on commenting your code, but I'll give you a preview: Suppose if I want the computer to store my age (a number), and ask it to print out my name and my age. I'd do the following: int my_age; cout << "Hi, what's your age?" << endl; cin >> My_age; cout << "I see. You are " << my_age << " years old." << endl; The thing about this code will be covered later when I discuss variables and data types - after all, programs do need to work with data that the user provides. There is another keyword (or operator) called "cerr" which is used to display error messages, but I think cout and cin is enough - I think it's time to let you go. By the way, these operators - cout, cin and cerr - comes from iostream library. And, you would put the name of these "default" or built-in libraries and surrounding them with "arrows" like: #include <iostream> When we create classes and our own modules, we'll differentiate between built-in ones and our own using a quote (the quotes are used to say that we are putting our own libraries). Stay tuned for an important announcement from our "blind Highlander..." // JL
http://joslee22590.blogspot.com/2010/07/c-meahning-of-cout-and-cin.html
Here is a simplified example that exhibits the problem. Parent is the parent package to sub-packages A and B.

Sub-Package A: Parent/A/__init__.py contains "import a". Parent/A/a.py contains:

    from __future__ import absolute_import
    from ..B import b

    class aClass(b.bClass):
        def __init__(self):
            b.bClass.__init__(self)

        def foo(self):
            print self.x
            print self.y
            print self.z

Sub-Package B: Parent/B/__init__.py contains "import b". Parent/B/b.py contains:

    class bClass:
        def __init__(self):
            self.x = 0
            self.y = []
            self.z = ''

The Parent package is located in my sys.path. The above code runs just fine:

    >>> import Parent.A
    >>> a = Parent.A.a.aClass()
    >>> a.foo()
    0
    []

However, when I run pylint for A/a.py it gives me:

    F0401: 9: Unable to import 'B.b' (No module named B)

and therefore of course the ensuing errors:

    E1101: 14:aClass.foo: Instance of 'aClass' has no 'x' member
    E1101: 15:aClass.foo: Instance of 'aClass' has no 'y' member
    E1101: 16:aClass.foo: Instance of 'aClass' has no 'z' member

If I change the import syntax in Parent/A/a.py to use "from Parent.B import b", all errors go away. This seems to indicate a problem with intra-sub-package relative imports. Ticket #5010 - latest update on 2009/09/01, created on 2008/05/19 by Sylvain Thenault
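As a sanity check (my own condensed version, not part of the ticket), collapsing both modules into a single file shows that the inherited members pylint flags really do exist (Python 3 print syntax here):

```python
class bClass:
    def __init__(self):
        self.x = 0
        self.y = []
        self.z = ''

class aClass(bClass):
    def __init__(self):
        bClass.__init__(self)

    def foo(self):
        # these are the inherited attributes that E1101 claims are missing
        return self.x, self.y, self.z

print(aClass().foo())  # (0, [], '')
```

So the E1101 messages are purely fallout from the failed relative import, not a real attribute problem.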
https://www.logilab.org/ticket/5010
Player Collision, Hitbox

Is there a way I can make the Rect around my player a color for debugging purposes? I'm fairly new, and I'm sure this is fairly simple, but I tried fill_color and set_color and nothing worked. Here is the code:

    player_hitbox = Rect(self.player.position.x - 20, 32, 40, 65, fill_color="Blur")

Maybe you can make a new image with a suitable background using ui.ImageContext. See the example below.

    import scene, ui

    class MyScene(scene.Scene):
        def setup(self):
            img = ui.Image.named('plf:AlienGreen_front')
            w, h = img.size
            with ui.ImageContext(w, h) as ctx:
                ui.set_color('gray')
                ui.Path.rect(0, 0, w, h).fill()
                img.draw(0, 0, w, h)
                img1 = ctx.get_image()
            self.sprite_node = scene.SpriteNode(scene.Texture(img1), position=self.size/2, parent=self)

    scene.run(MyScene())

- chriswilson

The hitbox you created is just a Rect() object, which is simply coordinates and not actually something rendered on the screen. You could make a ShapeNode() or SpriteNode() of the same size which would be visible. I'm not at my computer just now, but can show some code later if you like.

Chris Wilson: Sure bro, I appreciate it. I don't understand how I would implement this into my game, but the concept is what I'm looking for: being able to see my player/enemy hitbox. But I want to implement this without going through ui; there has to be an easier way. 🤔 I'm currently trying many different options.

Collision: If Player is a SpriteNode and Bullet is a SpriteNode, then in Player.update() do something like:

    def update(self):
        if self.frame.intersects(bullet.frame):
            exit('Player has died of rapid onset lead poisoning!!!')

A SpriteNode contained in a ShapeNode...
    from scene import *
    import ui

    class MyScene (Scene):
        def setup(self):
            sprite = SpriteNode('plf:HudPlayer_yellow')
            self.player = ShapeNode(parent=self,
                                    path=ui.Path.rect(*sprite.frame),
                                    position=self.size/2,
                                    fill_color='clear',
                                    stroke_color='red')
            self.player.add_child(sprite)

    if __name__ == '__main__':
        run(MyScene(), show_fps=False)

- chriswilson

@ccc has been a big help on my journey too! :)

Okay guys, so everything works, but now my touch movement for my sprite isn't responding. Here's the original code before I implemented your code:

    x, _ = touch.location
    _, y = self.player.position
    move_action = Action.move_to(x, y, 2, TIMING_SINODIAL)
    self.player.run_action(move_action)
    if (x, _ > 0.05):
        self.player.x_scale = cmp(touch.location.x - self.player.position.x, 0)
        x = self.player.position.x

Do I switch it to self.sprite.position? Hmm. Got everything up and running - it was an indentation problem.

@lance1994 it is good that your problem got solved. Here are some notes about ImageContext and I hope they are useful. ui.Path, ui.Image and ui.ImageContext are general purpose routines (not like ui.View) and they are also heavily used in scene programs. ShapeNode has only a few types of shapes, and if you need shapes like polygons (triangle, pentagon etc.) you may need to use ImageContext. Here is @ccc's example, coded using ImageContext.
    import scene
    import ui

    class MyScene (scene.Scene):
        def setup(self):
            img = ui.Image.named('plf:HudPlayer_yellow')
            w, h = img.size
            with ui.ImageContext(w, h) as ctx:
                img.draw(0, 0, w, h)
                path = ui.Path.rect(0, 0, w, h)
                ui.set_color('red')
                path.stroke()
                img1 = ctx.get_image()
            self.sprite_node = scene.SpriteNode(scene.Texture(img1), position=self.size/2, parent=self)
            # use img for the regular game and img1 for debugging purposes
            #self.sprite_node = scene.SpriteNode(scene.Texture(img), position=self.size/2, parent=self)

    if __name__ == '__main__':
        scene.run(MyScene(), show_fps=False)

Okay guys, something new. Here is the code:

    class Rock(SpriteNode):
        def __init__(self, **kwargs):
            SpriteNode.__init__(self, 'IMG_1726.GIF', **kwargs)

        def update(self):
            self.spawn_rock

    class Rock(SpriteNode):
        def __init__(self, **kwargs):
            SpriteNode.__init__(self, 'IMG_1726.GIF', **kwargs)

It does what it's supposed to do (spawn the rocks), but I want it to spawn 3 rocks at a time. After each rock is destroyed, it is randomly generated somewhere else. Not immediately, but with something like a 10 second duration. I tried if and else statements in the update, but nothing seems to work. Going to keep poking around. Any input?

For the collision I still had to set a player_hitbox Rect() for it to intersect with the rock; if you look above you'll see my rock code. I tried:

    self.frame.intersects(item.frame)

But that doesn't work. Any idea, guys?

@lance1994 I am a bit lost in the code snippets. Would it be possible to put a more complete codebase in a GitHub repo?
https://forum.omz-software.com/topic/3471/player-collision-hitbox/2
README ¶ Project Calico Note that the documentation in this repo is targeted at Calico contributors. Documentation for Calico users is here: This repository contains the source code for Project Calico's per-host daemon, Felix. How can I get support for contributing to Project Calico? The best place to ask a question or get help from the community is the calico-users #slack. We also have an IRC channel. Who is behind Project Calico? Tigera, Inc. is the company behind Project Calico and is responsible for the ongoing management of the project. However, it is open to any members of the community – individuals or organizations – to get involved and contribute code. Contributing Thanks for thinking about contributing to Project Calico! The success of an open source project is entirely down to the efforts of its contributors, so we do genuinely want to thank you for even thinking of contributing. Before you do so, you should check out our contributing guidelines in the CONTRIBUTING.md file, to make sure it's as easy as possible for us to accept your contribution. How do I build Felix? Felix mostly uses Docker for builds. We develop on Ubuntu 16.04 but other Linux distributions should work (there are known Makefile issues that prevent building on OS X). To build Felix, you will need: - A suitable linux box. - To check out the code into your GOPATH. - Docker >=1.12 - GNU make. - Plenty of disk space (since the builds use some heavyweight full-OS containers in order to build debs and RPMs). Then, as a one-off, run make update-tools which will install a couple more go tools that we haven't yet containerised. Then, to build the calico-felix binary: make build or, the calico/felix docker image: make image Other architectures When you run make build or make image, it creates the felix binary or docker image for linux on your architecture. The outputs are as follows: - Binary: bin/calico-felix-${ARCH}, e.g. 
bin/calico-felix-amd64 or bin/calico-felix-arm64
- Image: calico/felix:${TAG}-${ARCH}, e.g. calico/felix:3.0.0-amd64 or calico/felix:latest-ppc64le

When you are running on amd64, you can build the binaries and images for other platforms by setting the ARCH variable. For example:

    $ make build ARCH=arm64
    # OR
    $ make image ARCH=ppc64le

If you wish to make all of the binaries or images, use the standard calico project targets build-all and image-all:

    $ make build-all
    # OR
    $ make image-all

Note that the image and image-all targets have the build targets as a dependency.

How can I run Felix's unit tests?

To run all the UTs:

    make ut

To start a ginkgo watch, which will re-run the relevant UTs as you update files:

    make ut-watch

To get coverage stats:

    make cover-report

or

    make cover-browser

How can I run a subset of the go unit tests?

If you want to be able to run unit tests for specific packages for more iterative development, you'll need to install:

- GNU make
- go >=1.10

then run make update-tools to install ginkgo, which is the test tool used to run Felix's unit tests. There are several ways to run ginkgo. One option is to change directory to the package you want to test, then run ginkgo. Another is to use ginkgo's watch feature to monitor files for changes:

    cd go
    ginkgo watch -r

Ginkgo will re-run tests as files are modified and saved.

How do I build packages/run Felix?

Docker

After building the docker image (see above), you can run Felix and log to screen with, for example:

    docker run --privileged \
        --net=host \
        -v /run:/run \
        -e FELIX_LOGSEVERITYSCREEN=INFO \
        calico/felix

Notes:

- --privileged is required because Felix needs to execute iptables and other privileged commands.
- --net=host is required so that Felix can manipulate the routes and iptables tables in the host namespace (outside its container).
- -v /run:/run is required so that Felix shares the global iptables file lock with other processes; this allows Felix and other daemons that manipulate iptables to avoid clobbering each other's updates.
- -e FELIX_LOGSEVERITYSCREEN=INFO tells Felix to log at info level to stderr.

Debs and RPMs

The Makefile has targets for building debs and RPMs for different platforms. By using docker, the build does not need to be run on the target platform.

    make deb
    make rpm

The packages (and source packages) are output to the dist directory.
https://pkg.go.dev/github.com/awprice/felix
Remy Maucherat wrote: > Filip Hanik - Dev Lists wrote: >> head is clearing up...how about... >> >> since: >> public class MyServlet implements HttpServlet, o.a.c.CometProcessor { >> .... >> >> wouldn't it make sense for: >> public class MyFilter implements Filter, o.a.c.CometFilter {.... >> >> and you'd declare it the same way, since we are piggy backing on the >> servlet logic to create Comet servlets, wouldn't it be smart to piggy >> back on the filter logic to create Comet filters? >> >> the interface CometFilter would define the new application chain, ie >> void "event(CometEvent,CometFilterChain)" >> achieves the new filter chain, piggy backs on mapping logic. > > Great ! Yes, mapping is easy to add, so since you like it, those > "filters" can completely use the existing infrastructure. Are you ok > if I start with adding your CometEvent interface to the main source > tree ? > > On the other side of the container fence, I would need to make some > mods to the "valve" type (since if I don't have any possibility to > integrate with JEE, my boss will murder me). Most likely I would add a > new "event" method to Valve and ValveBase (as a special case, the > "begin" event would be handled by the regular invoke method), and the > (very few) valves that need to do business per event would be able to > do it. AFAIK, all current valves "invoke" methods support Comet > without problems (they don't do any funky tricks, and provide > functionality that is still going to be needed on the initial event: > HTTP auth, error pages and reports, etc). I think we should also > specify that the response will be considered committed after the > initial event. This also means the event method in the servlet adapter > will not directly call the servlet (which IMO is a good idea). > > I missed it, but I am ok with adding a "close" method on the event > class, since it is more explicit (indeed, it is to be implemented by > closing the output buffer). 
yes please get started, I want to spend some time in the clustering code right now, so I'll chime in a bit later. > >
http://mail-archives.apache.org/mod_mbox/tomcat-dev/200609.mbox/%3C45017C73.1010406@hanik.com%3E
CSV files are still found all over, and developers often are faced with situations for parsing and manipulating that data. Often times, we want to take the CSV data and use it to initialize objects. In this article, we'll take a look at one approach to mapping incoming CSV data to our own objects. For brevity, I will assume that you have already developed a way to parse a given CSV input line and convert it to an array of strings. I was first prompted to look at this problem when I was asked by a customer if there was an easy way to map incoming CSV data to objects. He had already figured out how to use regular expressions to parse the line of text he read into his application to create an array containing all the fields from the data file. It really was a matter of then creating objects from that array. The obvious and brute force method would be something like this:

    Customer customerObj = new Customer();
    customerObj.Name = datafields[0];
    customerObj.DateOfBirth = DateTime.Parse(datafields[1]);
    customerObj.Age = int.Parse(datafields[2]);

That would be fairly straightforward, but with more than a few objects or properties, it would get pretty tedious. And, there is no accounting for any custom processing of the input data prior to assigning it to fields. You could also come up with a special constructor for each class that would take an array of data and set the object up correctly, which would probably be a marginally better approach. My initial two thoughts when faced with this problem were: With those two thoughts in mind (and thereby limiting my other remaining thoughts to one, since I can only manage three things at a time), I set out to free up my thought queue as fast as possible. From those two thoughts, I picked three key things that drove my thinking: a Loader class of some kind.
I figured for my first shot at this, I would like a couple of static methods that could either be given an existing object to populate with a given array of string data, or be told the type of object to create based on the array data and return a new object of the correct type fully populated. Since the Loader needed to work with any kind of class, I would need to use .NET Reflection to interrogate the class for what information needed to be updated. Properties would be marked with an attribute so the Loader would know how to map the array data to the property. I started tackling this idea by working backwards on my list. First, I needed a .NET attribute I could use. If you have never worked with custom attributes, they are pretty cool, though they almost always lead to using Reflection. I think many developers get scared off by Reflection for whatever reason (since you don't see it used in a lot of scenarios where it would make life a ton easier), and that is a shame. Reflection, really, is straightforward, so make sure it is part of your toolbox. To create a custom attribute, you just need to define a class that inherits from System.Attribute, add some public fields to it and at least one constructor, and you are rocking and rolling. Here is the attribute I declared for my project:

    [AttributeUsage(AttributeTargets.Property)]
    public class CSVPositionAttribute : System.Attribute
    {
        public int Position;

        public CSVPositionAttribute(int position)
        {
            Position = position;
        }

        public CSVPositionAttribute() { }
    }

In this case, the user will need to supply a Position value as part of the attribute. The other thing to notice about this attribute is the use of the [AttributeUsage(AttributeTargets.Property)] attribute above the class declaration. This attribute declares that my custom attribute can only be assigned to properties of a class, and cannot be used on the class itself, methods, fields, etc.
To use this custom attribute, all I would have to do is the following:

    public class SomeClass
    {
        private int _age;

        [CSVPosition(2)]
        public int Age
        {
            get { return _age; }
            set { _age = value; }
        }
    }

The [CSVPosition] attribute sets the Position field to two. Note that even though our custom attribute class name is CSVPositionAttribute, I can shorten that to CSVPosition (dropping the Attribute suffix) when using the actual attribute to mark up a property. This gives me a simple way to mark up my objects to be loaded with information contained in an array derived from a line in a CSV file. The next step is to have a way to take some arbitrary class, figure out which properties are to be populated with data from a CSV, and update the object with that data. To do that, I will use .NET Reflection. I start by creating a new class called ClassLoader that will have a single method (for now) as follows:

    public class ClassLoader
    {
        public static void Load(object target, string[] fields, bool supressErrors)
        {
        }
    }

The Load method is a static method that takes any target object to be loaded from the CSV data, an array of strings (the data parsed from a single line in the CSV file), and a flag on whether or not errors encountered during the processing of the data should be suppressed. One quick point to make is that I am using a very simple approach to handle errors for this demo. There is certainly a much richer and more robust way to handle errors, but I leave that to you, dear reader, to implement as needed. The first thing I need to do is evaluate the incoming object for all of its available properties and check those properties for the CSVPosition attribute.
Getting a list of an object's properties is very easy using Reflection:

    Type targetType = target.GetType();
    PropertyInfo[] properties = targetType.GetProperties();

I can then iterate over the properties array, and use the PropertyInfo objects to determine if a given property needs to be loaded with data from the CSV field array.

    foreach (PropertyInfo property in properties)
    {
        // Make sure the property is writeable (has a Set operation)
        if (property.CanWrite)
        {
            // find CSVPosition attributes assigned to the current property
            object[] attributes = property.GetCustomAttributes(typeof(CSVPositionAttribute), false);

            // if Length is greater than 0 we have at least one CSVPositionAttribute
            if (attributes.Length > 0)
            {
                // We will only process the first CSVPositionAttribute
                CSVPositionAttribute positionAttr = (CSVPositionAttribute)attributes[0];

                // Retrieve the position value from the CSVPositionAttribute
                int position = positionAttr.Position;

                try
                {
                    // get the CSV data to be manipulated and written to the object
                    object data = fields[position];

                    // set the value on our target object with the data
                    property.SetValue(target, Convert.ChangeType(data, property.PropertyType), null);
                }
                catch
                {
                    // simple error handling
                    if (!supressErrors)
                        throw;
                }
            }
        }
    }

You should be able to figure out what is going on by reading the comments above. Basically, we check each property to see if we can write to it, and if we can, we see if it has a CSVPosition attribute. If it does, we then retrieve the position value, pull the appropriate string from the fields array, and set the value on that property. It's all pretty straightforward. The one thing to be aware of is that someone could theoretically assign more than one CSVPosition attribute to a given property. The way the code is written, however, only the first CSVPosition attribute will be used.
You may also wonder why the following line of code was used in our Load routine:

    // get the CSV data to be manipulated and written to the object
    object data = fields[position];

Couldn't we just as easily pass the fields[position] data element directly to the SetValue method? We certainly could. That line, however, leads us to look at the next problem I wanted to solve. That problem is: what happens if the incoming string value needs to be processed or formatted because its default state cannot be used as is? Examples might include getting a value "One" that we want to assign to an integer property, or maybe we want to format a particular string in a certain way before assigning it to the target property. What we would like is to be able to point the Load routine to a special data transformation routine that is most likely different for each property. How can we do that? Once again, .NET Reflection will ride to the rescue. Using .NET Reflection, we can call methods on a given object dynamically, even if we don't know what the names of those methods are at design time. So, the question quickly becomes: how do we let our processing routine know that a transformation is needed, and which method to call? We will solve both problems by extending our CSVPosition attribute and modifying our Load method. Our new CSVPositionAttribute class will now look like this:

    [AttributeUsage(AttributeTargets.Property)]
    public class CSVPositionAttribute : System.Attribute
    {
        public int Position;
        public string DataTransform = string.Empty;

        public CSVPositionAttribute(int position, string dataTransform)
        {
            Position = position;
            DataTransform = dataTransform;
        }

        public CSVPositionAttribute(int position)
        {
            Position = position;
        }

        public CSVPositionAttribute() { }
    }

As you can see, all we have done is add a new public field named DataTransform. This field will hold the name of another method on the same class that will be used as a data transformation routine. There may be a way to do this with delegates as well, but I haven't found a way yet.
So, with my brute force method, we can now modify our Load routine to look like:

    try
    {
        // get the CSV data to be manipulated and written to the object
        object data = fields[position];

        // check for a Transform operation that needs to be executed
        if (positionAttr.DataTransform != string.Empty)
        {
            // Get a MethodInfo object pointing to the method declared by the
            // DataTransform property on our CSVPosition attribute
            MethodInfo method = targetType.GetMethod(positionAttr.DataTransform);

            // Invoke the DataTransform method and get the newly formatted data
            data = method.Invoke(target, new object[] { data });
        }

        // set the value on our target object with the data
        property.SetValue(target, Convert.ChangeType(data, property.PropertyType), null);
    }

The code now checks for a DataTransform value and, if present, invokes that method via Reflection and passes the returned data on to the target property. I've assumed that any transformation routines that may be used are methods on the same object that is having its properties updated. This would seem to make sense, since the object should be responsible for controlling how its data is formatted. The last thing I did was add an additional method to my ClassLoader class:

    public static X LoadNew<X>(string[] fields, bool supressErrors)
    {
        // Create a new object of type X
        X tempObj = (X) Activator.CreateInstance(typeof(X));

        // Load that object with CSV data
        Load(tempObj, fields, supressErrors);

        // return the new instance of the object
        return tempObj;
    }

Here is a brief example of how to use this code. I have a Customer class that I would like to populate from some CSV data.
The Customer class has been marked up as shown below:

class Customer
{
    private string _name;
    private string _title;
    private int _age;
    private DateTime _birthDay;

    [CSVPosition(2)]
    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }

    [CSVPosition(0, "TitleFormat")]
    public string Title
    {
        get { return _title; }
        set { _title = value; }
    }

    [CSVPosition(1)]
    public int Age
    {
        get { return _age; }
        set { _age = value; }
    }

    [CSVPosition(3)]
    public DateTime BirthDay
    {
        get { return _birthDay; }
        set { _birthDay = value; }
    }

    public Customer() { }

    public string TitleFormat(string data)
    {
        return data.Trim().ToUpper();
    }

    public override string ToString()
    {
        return "Customer object [" + _name + " - " + _title + " - " +
               _age + " - " + _birthDay + "]";
    }
}

Populating this class with data using the Loader class can be done in one of two ways. First, we can instantiate an instance of our class and pass it to the Loader to be populated. Or, we can use the LoadNew method on our Loader class and have it pass back a populated object on its own. Both approaches are shown below:

static void Main(string[] args)
{
    string[] fields = { " Manager", "38", "John Doe", "4/1/68" };

    Customer customer1 = new Customer();
    ClassLoader.Load(customer1, fields, true);
    Console.WriteLine(customer1.ToString());

    Customer customer2 = ClassLoader.LoadNew<Customer>(fields, false);
    Console.WriteLine(customer2.ToString());

    Console.ReadLine();
}

That is all there is to it. Hope it helps, and happy coding.
http://www.codeproject.com/KB/cs/CSVandReflection.aspx
User talk:TheDarthMoogle From Uncyclopedia, the content-free encyclopedia Hello, TheDarthMoogle. 07:58, March 8, 2013 (UTC) UnNews:Ikea furniture found to contain traces of horse meat I reply on the UnNews talk page that you are doing fine with this but there are grammar errors. I already have an UnNews about the Ikea meatball news, but your take is a little different, so that's okay. I took the period off the headline, so you can find the UnNews at the above title. Spıke ¬ 18:55 8-Mar-13 Nope; upon closer review--the line about Korea, the line about horse DNA among Ikea staff--I believe you didn't just work from the same real-world news story, you took my UnNews and changed a few things. That isn't right, and I have put the story in your userspace, as shown above, where you can work on it. If I am mistaken, please explain and I will put it back. Spıke ¬ 22:02 8-Mar-13 - Apologies if it seemed I'd taken ideas out of your own spin on the story; I gained inspiration from the real-life story, and launched straight into the article without seeing what angles other people had taken. My intention was to stay in the absurd as much as possible, but I kinda failed at that. It seems we have similar trains of thought. I'm happy for my article to be taken down. Practise makes perfect :D TheDarthMoogle (talk) 09:11, March 9, 2013 (UTC) If you are really telling me you stumbled on both these angles rather than taking them from my UnNews, I'll put it back. Spıke ¬ 09:19 9-Mar-13 The Game Now, the next thing you're going to tell me is that you just got the notion to create this article and are not re-creating the article that received a maintenance tag, the author abandoned, and Mordillo deleted? That would be a major misstep. Spıke ¬ 18:51 18-Mar-13 - You sent me some words, but I have absolutely no idea what they mean...It was a requested article... so I made it? Is that a crime, or am I not worthy of making those kinds of articles?
- If the article sucks, then please tell me; I don't get nearly enough criticism in my life, but otherwise I feel kinda hurt that whatever you sent was so aggressive... --TheDarthMoogle (talk) 18:55, March 18, 2013 (UTC) I didn't read through the article. All I know is that, again, it had been created before, abandoned and deleted by an Admin, and you just re-created it. If this is your first time working on it, then what I said doesn't apply. Good luck with the Requested Articles. (PS--Discuss what you are going to do on your talk page; use Talk:The Game to discuss the article.) Cheers. Spıke ¬ 19:00 18-Mar-13 Day So, it's got a shed load of redirects. Now to actually come up with a decent article to follow suit. This looks like a tough cookie, and it's tempting just to revise what got deleted; looks like a good angle, but was kinda crude. Here we go... --TheDarthMoogle (talk) 19:04, March 18, 2013 (UTC) Occupy St. Peters Thanks for your kind words on the talk page, it's nice to get compliments on pages every once in awhile. You've made my day (at least until I eat some rice and lentils, and then that will make my day, at least until...). I thought of the concept and wrote some of it before actually googling it and there are people with websites about occupying St. Peter's Square, but mostly to complain about paedo priests and the usual. Yet with this new Pope the possibility is there that some may try it. The last picture, of that massive statue, is the real room where popes meet the public and make announcements, and I'd never seen it before two days ago! Its been there for 34 years! Have you seen it before? I look forward to seeing your work, but since this post gets long, thanks again, and good to meet you. Aleister 11:52 19-3-'13 HowTo:Keep Americans Out Of Your Country As you note on this article's talk page, you got to this point and then could not figure out where you are going with it. 
What we have now is basically not an article but an outline of an article. When something is this incomplete, the best way is in your userspace, such as User:TheDarthMoogle/articlename. If it were in your personal namespace, it would not have bothered me that you added an Oscar Wilde quote that has nothing to do with Oscar Wilde. The reason it bothers me (and two other senior editors) in mainspace is written up at Uncyclopedia:Templates/In-universe/Quotes. Yes, we will find a better name for that essay. The article cannot survive as-is in mainspace, as it is nothing but lists. As always, the problem with lists is that they encourage bad overnight editors to come along and add just one thing. Fundamentally, this article needs a better theme than to simply flog the stereotypes about Americans with which we are all familiar. I think stuff like this has already been done but can't give you a citation. I did not like the way this article began. The structure of the first sentence is: "X is bad, but X is awful." Never use the first person ("I") when writing in an encyclopedia, and try just to say what you are going to say rather than explaining your intentions beforehand. For gassy phrases such as "It is recommended that," please see User:SPIKE/Cliches-1. Good luck! Spıke ¬ 12:09 20-Mar-13 Nanotechnology This is a good start and an improvement. But it reads as heavy with your own disdainful personal opinion, and still too light on the humor. Your opinion is welcome and it gives the article a theme, but keep working to weave lighter humor into the article, such as with analogies and artful phrasing. (Perhaps more artful than your frequent recourse to "bullshit," which likewise suggests you are too interested in expressing disdain to make the reader laugh.) Also, there are either too many subsections or too little material; the result looks listy, and as always, the problem with lists is that a lot of bad overnight visitors will try to add just one more thing. 
Hope this helps! Spıke ¬ 12:22 23-Apr-13 - [Cheers for the advice.] Yeah, I'll keep working on it through the course of the day. It was more an off the cuff thing as my brain nearly jumped out the window from reading the original article. I get your concern about lists. My head works that way, and I need to tell it to stop doing that. As for the humour, I was taking a more rational, than disdainful approach, but yes. I could be a lot more subtle, but I'm wary of just ending up waffling. TheDarthMoogle (talk) 12:59, April 23, 2013 (UTC) Rational is okay; just make it funnier, and there is no rush. "Waffling" is always a risk, but mostly when the Anons arrive and we get Theory, then Alternate Theory, then Other Alternate Theory, and the results look very unencyclopedic. Happy editing! Spıke ¬ 13:10 23-Apr-13 From the Sandbox Your conversation with Simsie at Uncyclopedia:Sandbox was good conversation and good editor's advice; I've taken the liberty of moving it here, as the Sandbox can be erased by anyone at any time. What am I doing with my life? No, I'm serious. I just don't get anything, it's like I've injected fiberglass into my brain, and drank three consecutive slushies. My desperate attempts to be funny are met with unanimous agreement that I'm grindingly mediocre. *Sigh* I guess I'll go make that HowTo on masturbating furiously with knives. TheDarthMoogle (talk) 07:45, April 24, 2013 (UTC) - I have created HowTo:Masturbate Furiously With Knives. You may all bask in its glory, and I can die happy now. - Your request for a Pee Review has not yet been engaged, but Admin Mhaille has seen fit to do some tidying up, as you have seen. By the way, there was a notorious prosecution in Massachusetts of operators of a day-care center, accused of masturbating furiously with knives on some of its customers; one operator is still in jail despite the total absence of knife-wounds or any physical evidence (only young testimony coached by social workers). 
This might fit in somehow. Spıke ¬ 10:43 24-Apr-13 What am I doing up this late at night? Actually, what am I doing up this time of morning? I shouldn't be wide awake, but I am. Never try too hard to be funny, it just forces the joke, and the reader can sense your desperation. Let the creativity flow, then edit what happens later. Use an outline as a framework. If you can come up with a good idea and a good outline, then all you need are some funny details. This message is a recording and will self destruct in 10..9..8...7....6...5...4...3...2...1...-- Simsilikesims(♀UN) Talk here. 08:24, April 24, 2013 (UTC) - Yeah (regarding being funny, not regarding what Simsie is doing awake so late). I usually am driving around and see something that must be misinterpreted and typed into Uncyclopedia, versus staring at a screen and having the right phrase come to me. Spıke ¬ 10:43 24-Apr-13 Beating a stick with a dead horse I dunno how to flesh out my articles. I look at it in the edit view thinking "yes, this is clearly a substantial work". The whole thing ends up three times shorter than I thought it was. What I write is hilariously lacking in jokes to begin with, so I've not yet learnt to pad out what I'm writing, to make up for the inadequate length. Curse my brain. --TheDarthMoogle (talk) 14:02, May 2, 2013 (UTC) - Do you think anyone reads the Sandbox? (I do because I have to look at Special:RecentChanges to see everything happening on the wiki.) You could create a Forum, but I'm not sure Uncyclopedians want to be broadcast to, to help you write. - Please don't "pad out" anything. We are not paid-by-page-count. If your article isn't article-length, hold it until you see ways to take the humor in entirely new directions. They always exist. Also, read more about your subject on Wikipedia and think how it would come out if someone misinterpreted what you just read. Illustrations are also good, provided you can give each one a funny caption. 
Spıke ¬ 14:10 2-May-13 - I don't expect anyone to read the Sandbox, I just like screaming into space every now and then, and you as an Admin have the right to stop me, and charge me with public indecency. --TheDarthMoogle (talk) - No chance! It is the least indecent of the many outbursts here.... Spıke ¬ 14:24 2-May-13 - ....The most indecent outbursts are saved for me! Sir ScottPat UnS CUN VFH and Bar (talk) 12:24 10, July ¬ 15:10 13-May-13 Hi there Hello. I am ScottPat and I am more noobish than you but I just wanted to say (because didn't realise you were an active user and I've looked at your forum edits today) that on the topic of Uncyclopedia being a ghost town: I don't think it is at all. There are many friendly users and enough active users to give variety however I do agree that most of the active users don't particularly enjoy pee reviewing. Nice to meet you and if you want any help, want to collaborate on an article, want to set up a group on Uncyclopedia, want to subscribe to UnSignpost or want to read some of my articles (some are on VFH (one just got featured on main page) or pee review) then just visit my user page and talk page. I am on here practically everyday. Thanks. Sir ScottPat UnS CUN VFH and Bar (talk) 12:24 10, July 2014 - Interesting. I shall make a note of everything you said in my little box. --TheDarthMoogle (talk) 14:47, May 4, 2013 (UTC) Don't know what you mean by little box but thanks for showing an interest in UnSignpost. Sir ScottPat UnS CUN VFH and Bar (talk) 12:24 10, July 2014 Express Delivery...is not available today (just normal delivery) The Newspaper That Contains Neither News Nor Paper. here!) Sir ScottPat (talk) VFH UnS NotM WotM WotY 06:29, May 18, 2013 (UTC) The UnSignpost has arrived...Quick hide! The Newspaper Not Secretly Controlled By ME, I Swear! out. Sir ScottPat (talk) VFH UnS NotM WotM WotY 16 voters. Thanks. 
Sir ScottPat (talk) VFH UnS NotM WotM WotY 06:57, May 22, 2013 (UTC) - PS - You may vote for yourself on NotM. Sir ScottPat (talk) VFH UnS NotM WotM WotY 07:00, May 22, 2013 (UTC) - Good god, whyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy --TheDarthMoogle (talk) 16:00, May 22, 2013 (UTC) The UnSignpost hath cometh The Newspaper That Openly Admits Its Liberal And Conservative Biases! May, 29th 2013 • Issue 185 • Brought to you with help from a couple of Erwins. Sir ScottPat (talk) VFH UnS NotM WotM WotY 08:24, May 29, 2013 (UTC) Now With 0 Trans Fat! June, 14th 2013 • Issue 186 • This newspaper may not be able to tell you who Denza but then it can't tell you much anyway! Recent news from old newspapers Your #1 source for Cajek ban jokes! June, 27th 2013 • Issue 187 • Something you certainly were not expecting! Anton (talk) Uncyclopedia United) Uncyclopedia United 17:48, July 5, 2013 (UTC) PLS edition of UnSignpost with extra poo STOP... SIGNPOST TIME!! July, 13th 2013 • Issue 189 • Bringing you offensive information faster than the Brit-Slayer! 12:00, August 13, 2013 (UTC) It is back! August 1st, 2013 • Issue #190 • Archives • Press Room Anton (talk) Uncyclopedia United 12:36, August 28, 2013 (UTC) UnSignpost - Delivered every [other] week! August 1st, 2013 • Issue #191 • Archives • Press Room
http://uncyclopedia.wikia.com/wiki/User_talk:TheDarthMoogle?oldid=5726981
I have two classes in separate Python modules and I need to access some methods of either class from the other. They are not in a base and derived class relationship. Please see the example below. Foo imports Bar, and inside the Foo class it creates a Bar object and then calls Bar.barz(). Now, before returning control, it has to call the track method in Foo. As I understand it, I won't be able to use 'super' in this case, as there is no inheritance here. Also, I won't be able to move the track method to Bar as I need to track different bar types. Any suggestion on how to get this done would be great. Thanks in advance. Cheers - b

----------
* foo.py *

import bar

class Foo:
    def fooz(self):
        print "Hello World"
        b = bar.Bar()
        c = b.barz()
        ...

    def track(self, track_var):
        self.count += 1
        return sth2

* bar.py *

class Bar:
    def barz(self):
        track_this = ...
        if Foo.track(track_this):
            pass
        else:
            ...
        return sth1
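One common way to wire this up without inheritance is to hand Bar a callback (for example, a bound method of the Foo instance), so that bar.py never has to import foo at all. The sketch below uses invented values ("some value") purely for illustration and collapses both modules into one file for brevity:

```python
# bar.py equivalent: Bar receives a tracking callable instead of importing Foo.
class Bar:
    def __init__(self, tracker):
        # tracker can be any callable, e.g. a bound method of a Foo instance
        self.tracker = tracker

    def barz(self):
        track_this = "some value"
        # call back into Foo's track method without knowing about the Foo class
        self.tracker(track_this)
        return track_this


# foo.py equivalent
class Foo:
    def __init__(self):
        self.count = 0

    def fooz(self):
        b = Bar(self.track)  # pass a bound method of this instance to Bar
        return b.barz()

    def track(self, track_var):
        self.count += 1
        return track_var


f = Foo()
result = f.fooz()
print(result, f.count)  # -> some value 1
```

Because the callback is a bound method, Bar can "track" into whichever object created it, which also covers the "different bar types" requirement: each caller supplies its own tracker.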
https://grokbase.com/t/python/python-list/10awg3vk35/calling-a-method-from-invoking-module
I recently upgraded a reasonably large program to use Unicode instead of single-byte characters. Apart from a few legacy modules, I had dutifully used the t- functions and wrapped all my string literals and character constants in _T() macros, safe in the knowledge that when it came time to switch to Unicode, all I had to do was define UNICODE and _UNICODE and everything would Just Work (tm). Man, was I ever wrong :(( So, I write this article as therapy for the past two weeks of work and in the hope that it will maybe save others some of the pain and misery I have endured. Sigh... In theory, writing code that can be compiled using single- or double-byte characters is straightforward. I was going to write a section on the basics but Chris Maunder has already done it. The techniques he describes are widely known so we'll just get right on to the meat of this article. There are wide versions of the usual stream classes and it is easy to define t-style macros to manage them:

#ifdef _UNICODE
    #define tofstream wofstream
    #define tstringstream wstringstream
    // etc...
#else
    #define tofstream ofstream
    #define tstringstream stringstream
    // etc...
#endif // _UNICODE

And you would use them like this:

tofstream testFile( "test.txt" ) ;
testFile << _T("ABC") ;

Now, you would expect the above code to produce a 3-byte file when compiled using single-byte characters and a 6-byte file when using double-byte. Except you don't. You get a 3-byte file for both. WTH is going on?! It turns out that the C++ standard dictates that wide streams are required to convert double-byte characters to single-byte when writing to a file. So in the example above, the wide string L"ABC" (which is 6 bytes long) gets converted to a narrow string (3 bytes) before it is written to the file. And if that wasn't bad enough, how this conversion is done is implementation-dependent. I haven't been able to find a definitive explanation of why things were specified like this.
My best guess is that a file, by definition, is considered to be a stream of (single-byte) characters, and allowing stuff to be written 2 bytes at a time would break that abstraction. Right or wrong, this causes serious problems. For example, you can't write binary data to a wofstream because the class will try to narrow it first (usually failing miserably) before writing it out. This was particularly problematic for me because I have a lot of functions that look like this:

void outputStuff( tostream& os )
{
    // output stuff to the stream
    os << ....
}

which would work fine (i.e. it streamed out wide characters) if you passed in a tstringstream object but gave weird results if you passed in a tofstream (because everything was getting narrowed). Stepping through the STL in the debugger (what joy!) revealed that wofstream invokes a std::codecvt object to narrow the output data just before it is written out to the file. std::codecvt objects are responsible for converting strings from one character set to another and C++ requires that two be provided as standard: one that converts chars to chars (i.e. effectively does nothing) and one that converts wchar_ts to chars. This latter one was the one that was causing me so much grief. The solution: write a new codecvt-derived class that converts wchar_ts to wchar_ts (i.e. does nothing) and attach it to the wofstream object. When the wofstream tries to convert the data it is writing out, it will invoke my new codecvt object that does nothing, and the data will be written out unchanged. A bit of poking around on Google Groups turned up some code written by P. J. Plauger (the author of the STL that ships with MSVC) but I had problems getting it to compile with Stlport 4.5.3. This is the version I finally hacked together:

#include <locale>

// nb: MSVC6+Stlport can't handle "std::"
// appearing in the NullCodecvtBase typedef.
using std::codecvt ;
typedef codecvt < wchar_t , char , mbstate_t > NullCodecvtBase ;

class NullCodecvt : public NullCodecvtBase
{
public:
    typedef wchar_t _E ;
    typedef char _To ;
    typedef mbstate_t _St ;

    explicit NullCodecvt( size_t _R=0 ) : NullCodecvtBase(_R) { }

protected:
    virtual result do_in( _St& _State ,
                          const _To* _F1 , const _To* _L1 , const _To*& _Mid1 ,
                          _E* _F2 , _E* _L2 , _E*& _Mid2 ) const
        { return noconv ; }

    virtual result do_out( _St& _State ,
                           const _E* _F1 , const _E* _L1 , const _E*& _Mid1 ,
                           _To* _F2 , _To* _L2 , _To*& _Mid2 ) const
        { return noconv ; }

    virtual result do_unshift( _St& _State ,
                               _To* _F2 , _To* _L2 , _To*& _Mid2 ) const
        { return noconv ; }

    virtual int do_length( _St& _State , const _To* _F1 , const _To* _L1 ,
                           size_t _N2 ) const _THROW0()
        { return (_N2 < (size_t)(_L1 - _F1)) ? _N2 : _L1 - _F1 ; }

    virtual bool do_always_noconv() const _THROW0()
        { return true ; }

    virtual int do_max_length() const _THROW0()
        { return 2 ; }

    virtual int do_encoding() const _THROW0()
        { return 2 ; }
} ;

You can see that the functions that are supposed to do the conversions actually do nothing and return noconv to indicate that. The only thing left to do is instantiate one of these and connect it to the wofstream object.
Using MSVC, you are supposed to use the (non-standard) _ADDFAC() macro to imbue objects with a locale, but it didn't want to work with my new NullCodecvt class, so I ripped out the guts of the macro and wrote a new one that did:

#define IMBUE_NULL_CODECVT( outputFile ) \
{ \
    NullCodecvt* pNullCodecvt = new NullCodecvt ; \
    locale loc = locale::classic() ; \
    loc._Addfac( pNullCodecvt , NullCodecvt::id, NullCodecvt::_Getcat() ) ; \
    (outputFile).imbue( loc ) ; \
}

So, the example code given above that didn't work properly can now be written like this:

tofstream testFile ;
IMBUE_NULL_CODECVT( testFile ) ;
testFile.open( "test.txt" , ios::out | ios::binary ) ;
testFile << _T("ABC") ;

It is important that the file stream object be imbued with the new codecvt object before it is opened. The file must also be opened in binary mode. If it isn't, every time the file sees a wide character that has the value 10 in its high or low byte, it will perform CR/LF translation, which is definitely not what you want. If you really want a CR/LF sequence, you will have to insert it explicitly using "\r\n" instead of std::endl. wchar_t is the type that is used for wide characters and is defined like this:

typedef unsigned short wchar_t ;

Unfortunately, because it is a typedef instead of a real C++ type, defining it like this has one serious flaw: you can't overload on it. Look at the following code:

TCHAR ch = _T('A') ;
tcout << ch << endl ;

Using narrow strings, this does what you would expect: print out the letter A. Using wide strings, it prints out 65. The compiler decides that you are streaming out an unsigned short and prints it out as a numeric value instead of a wide character. Aaargh!!! There is no solution for this other than going through your entire code base, looking for instances where you stream out individual characters, and fixing them.
I wrote a little function to make it a little more obvious what was going on:

#ifdef _UNICODE
// NOTE: Can't stream out wchar_t's - convert to a string first!
inline std::wstring toStreamTchar( wchar_t ch )
    { return std::wstring(&ch,1) ; }
#else
// NOTE: It's safe to stream out narrow char's directly.
inline char toStreamTchar( char ch )
    { return ch ; }
#endif // _UNICODE

TCHAR ch = _T('A') ;
tcout << toStreamTchar(ch) << endl ;

Most C++ programs will be using exceptions to handle error conditions. Unfortunately, std::exception is defined like this:

class std::exception
{
    // ...
    virtual const char *what() const throw() ;
} ;

and can only handle narrow error messages. I only ever throw exceptions that I have defined myself or std::runtime_error, so I wrote a wide version of std::runtime_error like this:

class wruntime_error : public std::runtime_error
{
public:
    // --- PUBLIC INTERFACE ---

    // constructors:
    wruntime_error( const std::wstring& errorMsg ) ;

    // copy/assignment:
    wruntime_error( const wruntime_error& rhs ) ;
    wruntime_error& operator=( const wruntime_error& rhs ) ;

    // destructor:
    virtual ~wruntime_error() ;

    // exception methods:
    const std::wstring& errorMsg() const ;

private:
    // --- DATA MEMBERS ---

    // data members:
    std::wstring mErrorMsg ; ///< Exception error message.
} ;

#ifdef _UNICODE
    #define truntime_error wruntime_error
#else
    #define truntime_error runtime_error
#endif // _UNICODE

/* -------------------------------------------------------------------- */

wruntime_error::wruntime_error( const wstring& errorMsg )
    : runtime_error( toNarrowString(errorMsg) )
    , mErrorMsg(errorMsg)
{
    // NOTE: We give the runtime_error base the narrow version of the
    // error message. This is what will get shown if what() is called.
    // The wruntime_error inserter or errorMsg() should be used to get
    // the wide version.
}

/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */

wruntime_error::wruntime_error( const wruntime_error& rhs )
    : runtime_error( toNarrowString(rhs.errorMsg()) )
    , mErrorMsg(rhs.errorMsg())
{
}

/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */

wruntime_error& wruntime_error::operator=( const wruntime_error& rhs )
{
    // copy the wruntime_error
    runtime_error::operator=( rhs ) ;
    mErrorMsg = rhs.mErrorMsg ;
    return *this ;
}

/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */

wruntime_error::~wruntime_error()
{
}

/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */

const wstring& wruntime_error::errorMsg() const
{
    return mErrorMsg ;
}

( toNarrowString() is a little helper function that converts a wide string to a narrow string and is given below.) wruntime_error simply keeps a copy of the wide error message itself and gives a narrow version to the base std::exception in case somebody calls what(). Exception classes that I define myself, I modified to look like this:

class MyExceptionClass : public std::truntime_error
{
public:
    MyExceptionClass( const std::tstring& errorMsg )
        : std::truntime_error(errorMsg)
    {
    }
} ;

The final problem was that I had lots and lots of code that looked like this:

try
{
    // do something...
}
catch( exception& xcptn )
{
    tstringstream buf ;
    buf << _T("An error has occurred: ") << xcptn ;
    AfxMessageBox( buf.str().c_str() ) ;
}

where I had defined an inserter for std::exception like this:

tostream& operator<<( tostream& os , const exception& xcptn )
{
    // insert the exception
    // NOTE: toTstring() converts a string to a tstring - defined below
    os << toTstring( xcptn.what() ) ;
    return os ;
}

The problem is that my inserter called what(), which only returns the narrow version of the error message. But if the error message contains foreign characters, I'd like to see them in the error dialog!
So I rewrote the inserter to look like this:

tostream& operator<<( tostream& os , const exception& xcptn )
{
    // insert the exception
    if ( const wruntime_error* p = dynamic_cast<const wruntime_error*>(&xcptn) )
        os << p->errorMsg() ;
    else
        os << toTstring( xcptn.what() ) ;
    return os ;
}

Now it detects if it has been given a wide exception class and, if so, streams out the wide error message. Otherwise it falls back to using the standard (narrow) error message. Even though I might exclusively use truntime_error-derived classes in my app, this latter case is still important since the STL or other third-party libraries might throw a std::exception-derived error.

Use wWinMainCRTStartup as your entry point (in the Link page of your Project Options).

// get our EXE name
TCHAR buf[ _MAX_PATH+1 ] ;
GetModuleFileName( NULL , buf , sizeof(buf) ) ;

This looks reasonable, but it is wrong for double-byte characters. The call to GetModuleFileName() needs to be written like this:

GetModuleFileName( NULL , buf , sizeof(buf)/sizeof(TCHAR) ) ;

WEOF, not EOF.

HttpSendRequest() accepts a string that specifies additional headers to attach to an HTTP request before it is sent. ANSI builds accept a string length of -1 to mean that the header string is NULL-terminated. Unicode builds require the string length to be explicitly provided. Don't ask me why.

Finally, some little helper functions that you might find useful if you are doing this kind of work.

extern std::wstring toWideString( const char* pStr , int len=-1 ) ;

inline std::wstring toWideString( const std::string& str )
    { return toWideString(str.c_str(),str.length()) ; }

inline std::wstring toWideString( const wchar_t* pStr , int len=-1 )
    { return (len < 0) ?
pStr : std::wstring(pStr,len) ; }

inline std::wstring toWideString( const std::wstring& str )
    { return str ; }

extern std::string toNarrowString( const wchar_t* pStr , int len=-1 ) ;

inline std::string toNarrowString( const std::wstring& str )
    { return toNarrowString(str.c_str(),str.length()) ; }

inline std::string toNarrowString( const char* pStr , int len=-1 )
    { return (len < 0) ? pStr : std::string(pStr,len) ; }

inline std::string toNarrowString( const std::string& str )
    { return str ; }

#ifdef _UNICODE

inline TCHAR toTchar( char ch )
    { return (wchar_t)ch ; }

inline TCHAR toTchar( wchar_t ch )
    { return ch ; }

inline std::tstring toTstring( const std::string& s )
    { return toWideString(s) ; }

inline std::tstring toTstring( const char* p , int len=-1 )
    { return toWideString(p,len) ; }

inline std::tstring toTstring( const std::wstring& s )
    { return s ; }

inline std::tstring toTstring( const wchar_t* p , int len=-1 )
    { return (len < 0) ? p : std::wstring(p,len) ; }

#else

inline TCHAR toTchar( char ch )
    { return ch ; }

inline TCHAR toTchar( wchar_t ch )
    { return (ch >= 0 && ch <= 0xFF) ? (char)ch : '?' ; }

inline std::tstring toTstring( const std::string& s )
    { return s ; }

inline std::tstring toTstring( const char* p , int len=-1 )
    { return (len < 0) ?
p : std::string(p,len) ; }

inline std::tstring toTstring( const std::wstring& s )
    { return toNarrowString(s) ; }

inline std::tstring toTstring( const wchar_t* p , int len=-1 )
    { return toNarrowString(p,len) ; }

#endif // _UNICODE

/* -------------------------------------------------------------------- */

wstring toWideString( const char* pStr , int len )
{
    ASSERT_PTR( pStr ) ;
    ASSERT( len >= 0 || len == -1 , _T("Invalid string length: ") << len ) ;

    // figure out how many wide characters we are going to get
    int nChars = MultiByteToWideChar( CP_ACP , 0 , pStr , len , NULL , 0 ) ;
    if ( len == -1 )
        -- nChars ;
    if ( nChars == 0 )
        return L"" ;

    // convert the narrow string to a wide string
    // nb: slightly naughty to write directly into the string like this
    wstring buf ;
    buf.resize( nChars ) ;
    MultiByteToWideChar( CP_ACP , 0 , pStr , len ,
                         const_cast<wchar_t*>(buf.c_str()) , nChars ) ;
    return buf ;
}

/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */

string toNarrowString( const wchar_t* pStr , int len )
{
    ASSERT_PTR( pStr ) ;
    ASSERT( len >= 0 || len == -1 , _T("Invalid string length: ") << len ) ;

    // figure out how many narrow characters we are going to get
    int nChars = WideCharToMultiByte( CP_ACP , 0 , pStr , len ,
                                      NULL , 0 , NULL , NULL ) ;
    if ( len == -1 )
        -- nChars ;
    if ( nChars == 0 )
        return "" ;

    // convert the wide string to a narrow string
    // nb: slightly naughty to write directly into the string like this
    string buf ;
    buf.resize( nChars ) ;
    WideCharToMultiByte( CP_ACP , 0 , pStr , len ,
                         const_cast<char*>(buf.c_str()) , nChars , NULL , NULL ) ;
    return buf ;
}
http://www.codeproject.com/KB/stl/upgradingstlappstounicode.aspx
A reimplementation of Unbound using GHC Generics. @lambdageek how do I reuse generic implementations for functorised ASTs? ie: data Expr a = Var a (Name (Expr a)) | Other Constructors I'd like Alpha to ignore the annotations (a). However, writing a manual implementation to the class is tedious (given there are 6 methods that need to be written) if I only need to change 1-2 cases. Is there a way to default to the generic implementation in other cases? Hey, sorry I didn't notice your message @xldenis . All the methods have default implementations provided that Expr a is an instance of Generic, so you can just change the ones you want. If you want to call down to the generic impl, you would need to do something like (GHC.Generics.to . gopen ctx pat . GHC.Generics.from) subTerm The various g* functions are defined in Unbound.Generics.LocallyNameless.Alpha. If you can stand a newtype, what I usually do is something like newtype Annot a = Annot {getAnnot :: a } deriving ({- everything -}) and then write an instance for Annot a that just uses (==) and compare and is the identity for functions like open and close and then populate your AST with Annot a values fvas (Alpha a, Typeable b) => Fold a (Name b)in the sense of the lens libary's Fold. What that means, roughly is that given some syntax structure a(which is an instance of Alpha) you can use fvto iterate over free variables Name bthat occur within it and summarize them using some kind of Monoidvery similar to how the Foldabletypeclass works. More concretely, if you have some Exprtype that has variables of type Name Exprin it, you can do toListOf fv :: Expr -> [Name Expr](using the definition of toListOffrom lens, or the one in Unbound.Generics.LocallyNameless.Internal.Fold) more generally foldMapOf fv :: Monoid r => (Name Expr -> r) -> Expr -> rlets you use some other monoid to summarize the free variables. 
One that I find useful all the time, for example, is Data.Set.Lens.setOf fv :: Expr -> Data.Set.Set (Name Expr) (setOf is in lens and Data.Set.Set is from containers).

Speaking of fv and lenses more generally, this is one of the ways I frequently use fv: checking whether a variable is in the free variables of another term:

import Control.Lens (noneOf)
import Data.Typeable (Typeable)
import Unbound.Generics.LocallyNameless

-- This uses lenses because unbound-generics' fv supports lenses!
-- fv is a Getter combinator that gets the free variables of a structure, and
-- noneOf lets us fold over that structure, checking that none of them are (== t)
notInFreeVars :: (Alpha a, Typeable a) => Name a -> a -> Bool
notInFreeVars t t1 = noneOf fv (== t) t1

Use unbind2 or lunbind2. If you have Lam (Bind Var Expr) in your datatype definition, and Lam b1 and Lam b2, you can do something like this in the Fresh m monad: Just (v1, e1, _, e2) <- unbind2 b1 b2 to get at the subexpressions. If you are in LFresh m, then lunbind2 b1 b2 $ \(Just (v1, e1, _, e2)) -> ... will work.

If you have Lam (Bind (Var, Embed Type) Expr) - ie, the pattern portion of the Bind is more complicated than just a single variable - you will want to hold on to both sides when you unbind2: something like Just (p1, e1, p2, e2) <- unbind2 b1 b2. And then unpack p1 and p2 - the variables will be renamed when unbinding so that they are the same, but the Embed'ed types may differ.
https://gitter.im/lambdageek/unbound-generics?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
In the development of programs using earlier languages like FORTRAN and COBOL, complexity kept increasing for larger programs, so developers came up with structured programming languages like C. Later on, these structured programming languages also became complex when developing large projects, so developers came up with Object Oriented Programming (OOP) languages.

Structured programming is concerned only with what is happening inside the code, but does not explain which data is affected (it does not work around the data). Object Oriented Programming works on both: around the code (what is happening) and around the data (which data is affected). To achieve the Object Oriented Programming concept we have to follow three principles of OOP. Let us learn about each concept.

Encapsulation

It means binding the code and data together; encapsulation provides a mechanism to restrict access to the data. C#'s basic unit of encapsulation is the class; C# uses a class specification to construct objects. Objects are instances of the class. Code and data are members of the class; the data defined in the class is a field.

Polymorphism

According to its Greek meaning it is "many forms". We can describe polymorphism in a programming language like this: "the method name is the same but its signature is different". It means we can use the same method name more than once within a class, but its signature should be different. The signature is nothing but the type of parameters or the number of parameters passed to that method.

method name(signature){ }
method name(signature){ }

For example, you drive a vehicle - which vehicle? In OO programming, Vehicle would be the base class, and each vehicle manufacturer would have its own implementation. For instance, Maruti has K-series engine technology, which is its own implementation. Volvo uses diesel engines, which is TDI technology. More importantly, you may add a feature to the vehicle and implement a make/model implementation, such as Car, Truck, or SUV etc.

Inheritance

It means "reusability".
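Before moving on to inheritance, the overloading idea above can be made concrete. This Calc class and its methods are hypothetical, not from the article - the point is simply one method name with three different signatures:

```csharp
using System;

// Hypothetical class illustrating overloading: same method name, different signatures.
class Calc
{
    public int Add(int a, int b) { return a + b; }
    public double Add(double a, double b) { return a + b; }
    public int Add(int a, int b, int c) { return a + b + c; }

    static void Main()
    {
        Calc c = new Calc();
        Console.WriteLine(c.Add(2, 3));     // calls Add(int, int)       -> 5
        Console.WriteLine(c.Add(2.5, 3.5)); // calls Add(double, double) -> 6
        Console.WriteLine(c.Add(1, 2, 3));  // calls Add(int, int, int)  -> 6
    }
}
```

The compiler picks the overload whose parameter types and count match the call - that is the "signature" the article refers to.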
Inheritance is the process by which one object can acquire the properties of another object. We can achieve reusability using hierarchies; if we did not use hierarchies, then each object would have to explicitly define all of its characteristics (specific and general). Using inheritance, an object need only define those qualities that make it special within its class; it can inherit its general characteristics from its parent or base class object. The concept of inheritance is used to go from the general to the more specific. E.g. take Animal as the general class; we can classify animals as Mammal and Reptile - these are two derived classes of the super class Animal - and Dog is a child class of both the Mammal and Animal classes.

C# is a complete Object Oriented Programming language from Microsoft that is inherently supported in .NET. Let's learn about C# now. Create a sample.cs file in Notepad and write the following code.

using System;

class sample
{
    static void Main()
    {
        Console.WriteLine("Hello World !");
        Console.Read();
    }
}

Now open the Visual Studio Command Prompt and write the following line of code.

C:\> csc sample.cs

You should get a .exe file named sample.exe. Run this exe file by double clicking it and you will see a console window open with the text "Hello World!". Press the Enter key and the window closes.

Now let me define each line of the statement one by one.

static: a method that is declared with the static keyword is called before an object of its class has been created.

void: it means the Main() method doesn't return a value.

In the above program, if you type "main" and "writeline" instead of "Main" and "WriteLine", compile-time errors will occur.
This is because C# is a case-sensitive language, and you are forced to use method and class names exactly the way they have been defined in C#.

To print the result in a console application, Console.WriteLine("Hello World !") is used, and in a web application Response.Write("Hello World !") is used.

In C#, all variables must be declared before they are used.

Keywords determine the features built into the language. C# has two types of keywords: reserved and contextual keywords. The reserved keywords cannot be used as names for variables, classes and methods. The contextual keywords are those that have a special meaning only in special cases; in those cases they act as keywords.

C# reserved keywords
C# contextual keywords

An identifier is a name assigned to a variable, a method, or any user-defined name. Identifiers may start with an underscore (_) but should not start with a digit. A reserved keyword of C# can be used as an identifier if it is preceded by "@", otherwise not.

ex: "@for" is an identifier; "for" is not an identifier.

Reference: I have taken reference from the C# 4.0 Complete Reference book that I am going through these days.

Hope this article was useful. Do let me know your comments or feedback. In case you want to learn ASP.NET with tips and tricks, I found .NET Tips and Tricks very useful. In the next article, we are going to learn about data types, control statements etc. Thanks for reading.
http://www.dotnetfunda.com/articles/show/1476/csharp-fundamentals
libprof is a small gint library that can be used to time and profile the execution of an add-in. Using it, one can record the time spent in one or several functions to identify performance bottlenecks in an application. libprof's measurements are accurate down to the microsecond level thanks to precise hardware timers, so it can also be used to time even small portions of code.

libprof is built only once for both fx-9860G and fx-CG 50, but if you use different compilers you will need to install it twice. The dependencies are:

The Makefile will build the library without further instructions.

% make

By default sh3eb-elf is used to build; you can override this by setting the target variable.

% make target=sh4eb-elf

Install as usual:

% make install
# or
% make install target=sh4eb-elf

To access the library, include the <libprof.h> header file.

#include <libprof.h>

For each function you want to time, libprof will create a counter. At the start of the program, you need to specify how many functions (libprof calls them contexts) you will be timing, so that libprof can allocate enough memory. libprof also needs one of gint's timers to actually measure time; it must be one of timers 0, 1 and 2, which are the only ones precise enough to do this job. You can use any timer which you are not already using for something else. These settings are specified with the prof_init() function.

/* Initialize libprof for 13 contexts using timer 0 */
prof_init(13, 0);

You can then measure the execution time of a function by calling prof_enter() at the beginning and prof_leave() at the end. You just need to "name" the function by giving its context ID, which is any number between 0 and the number of contexts passed to prof_init() (here 0 to 12).

void function5(void)
{
    prof_enter(5);
    /* Do stuff... */
    prof_leave(5);
}

This will add function5()'s execution time to the 5th counter, so if the function is called several times the total execution time will be recorded.
This way, at the end of the program, you can look at the counters to see where most of the time has been spent. To retrieve the total execution time of a function, use prof_time():

uint32_t total_function5_us = prof_time(5);

This time is measured in microseconds, even though the timers are actually more precise than this. Note that the overhead of prof_enter() and prof_leave() is usually less than 1 microsecond, so the time is very close to the actual time spent in the function even if the context is frequently entered and left.

At the end of the program, free the resources of the library by calling prof_quit().

prof_quit();

The number of contexts must be set for the whole execution, and all context IDs must be between 0 and this number (excluded). Managing the numbers by hand is error-prone and can lead to memory errors. A simple way of managing context numbers without risking an error is to use an enumeration.

enum {
    /* Whatever functions you need */
    PROFCTX_FUNCTION1 = 0,
    PROFCTX_FUNCTION2,
    PROFCTX_FUNCTION3,
    PROFCTX_COUNT,
};

Enumerations assign a value to all the provided names, incrementing by one each time. So for example here PROFCTX_FUNCTION2 is equal to 1 and PROFCTX_COUNT is equal to 3. As you can see, this is conveniently equal to the number of contexts, which makes it simple to initialize the library:

prof_init(PROFCTX_COUNT, 0);

Then you can use context names instead of numbers:

prof_enter(PROFCTX_FUNCTION1);
/* Do stuff... */
prof_leave(PROFCTX_FUNCTION1);

If you want to use a new context, you just need to add a name to the enumeration (anywhere but after PROFCTX_COUNT) and all IDs plus the initialization call will be updated automatically.

prof_enter() and prof_leave() add the measured execution time to the context counter. Sometimes you want to make individual measurements instead of adding all calls together. To achieve this effect, clear the counter before the measurement using prof_clear().
Here is an example of a function exec_time_us() that times the execution of a function f passed as parameter.

uint32_t exec_time_us(void (*f)(void))
{
    int ctx = PROFCTX_EXEC_TIME_US;
    prof_clear(ctx);
    prof_enter(ctx);
    f();
    prof_leave(ctx);
    return prof_time(ctx);
}

The overhead of prof_enter() and prof_leave() is usually less than a microsecond, but the starting time of your benchmark might count (loading data from memory to initialize arrays, performing function calls...). In this case, the best you can do is measure the time difference between two similar calls.

If you need something even more precise then you can access libprof's counter array directly to get the timer-tick value itself:

uint32_t elapsed_timer_tick = prof_elapsed[ctx];

The frequency of this tick is PΦ/4, where the value of PΦ can be obtained by querying gint's clock module:

#include <gint/clock.h>
uint32_t tick_freq = clock_freq()->Pphi_f / 4;

One noteworthy phenomenon is the startup cost. The first few measurements are always less precise, probably due to cache effects. I frequently have a first measurement with an additional 100 us of execution time and 3 us of overhead, which subsequent tests remove. So it is expected for the first few points of data to lie outside the range of the next.
https://gitea.planet-casio.com/Lephenixnoir/libprof
Using ProcessorExpert for specifying an external clock (crystal = 8 MHz, bus = 16 MHz) and then loading the code into the S908AW32 through the "True-time simulator and real-time debugger" tool, the program hangs waiting for the PLL to stabilize, so it never reaches the main function. It's not necessary to use PE; however, setting up the clock module by hand is a bit confusing, and that's why I tried to use PE. But if there is some C code for initializing the external clock using the FLL (or even bypassing it), please let me know (code, links, app-notes, etc). My board (of my own design) already includes an 8 MHz crystal because I need a very exact time base. I'm reading the app-note AN2494 and the AW32 datasheet, but some other help will be truly appreciated. Thanks in advance =)

I cannot see any signal on the crystal pins using an oscilloscope, so I'm wondering if pins PTG6 and PTG5 are being used by the ICG at all. Supposing there is noise and bad ground planes, I should see something (at least noisy) from the crystal; however I cannot see anything, which makes me think those pins are disconnected from the ICG. The datasheet says that if some conditions are not met (8.2.1 and 8.2.1), those pins are not used by the ICG. But I cannot understand them at all =( Here something is talked about. So, I will add a question: how can I tell the CPU to use PTG5..6 as a crystal input??
Thanks =)

#include <hidef.h>       /* for EnableInterrupts macro */
#include "derivative.h"  /* include peripheral declarations */

void main(void)
{
  SOPT_COPE = 0;
  ICGC1 = 0b11111000;
  ICGC2 = 0b01100011;
  while (ICGS1_LOCK == 0);
  PTCDD_PTCDD6 = 1;
  EnableInterrupts; /* enable interrupts */
  /* include your code here */
  for (;;) {
    PTCD_PTCD6 = 1;
    PTCD_PTCD6 = 0;
  }
  for (;;) {
    __RESET_WATCHDOG(); /* feeds the dog */
  } /* loop forever */
  /* please make sure that you never leave main */
}

Message Edited by bigmac on 2008-02-03 04:05 PM

Those values were calculated following the example in the datasheet (8.5.3), but edited for my needs:

Ficg = Fext * P * N / R; with P = 1, Fext = 8M and Ficg = 16M:
N/R = Ficg / (Fext * P) = 16M / 8M = 2, so N = MFD = 16 and R = RFD = 8. Did I do something wrong?

... Yes, something was very bad, but I have corrected it to this: ICGC2 = 0b00000001; with N = 4 and R = 2.

I'm writing each of the two registers as a byte, so I guess there is no problem with that. On one pin of the crystal I have 0 V and on the other 5 V; the distance from the crystal to the chip is approx. 0.375 in, both are in the same plane, and the caps are 39 pF.

Using PE as an experiment: in the very first routine, when the debugger reaches line 6 it loses its track and resets, which makes me think that the crystal is not working in any way.

1  void _EntryPoint(void)
2  {
3  ... /* some init code */
4    /* System clock initialization */
5    /* ICGC1: HGO=0,RANGE=1,REFS=0,CLKS1=1,CLKS0=0,OSCSTEN=1,LOCD=0,??=0 */
6    setReg8(ICGC1, 0x54);
7    /* ICGC2: LOLRE=0,MFD2=0,MFD1=0,MFD0=0,LOCRE=0,RFD2=0,RFD1=0,RFD0=0 */
8    setReg8(ICGC2, 0x00);
9  ... /* some other init code */
10   while(!ICGS1_ERCS) { /* Wait until external reference is stable */
11   }

How can I tell the CPU to connect PTG5..6 to work as a crystal input/output??

No, I don't have the parallel resistor. The REFS bit was set to 0 by ProcessorExpert, and I don't know why. Using PE was an experiment to see what the right definition could be, but it doesn't work. I'll try the watchdog to see what happens.
Thanks for your support, but for today it's enough - I'm gonna sleep, and I hope to hear something from you soon. Thank you.

Regards,

As are the caps. If you set it up like that, I know that the code I have shown will work.

Message Edited by JimDon on 2008-02-03 09:39 AM

The caps depend somewhat on the rock. All the examples I looked at had 18-20 pF - if 33 works, so be it.

This code will set a 32 MHz CLK and a 16 MHz bus clock. It seems to lock just fine; I tested it. He used a 1 Meg bypass and 18 pF start caps. I set High Gain, High Frequency, xtal modes. If you use low power mode it does not seem to lock. Here is the schematic:

Message Edited by JimDon on 2008-02-03 01:49 AM
Message Edited by JimDon on 2008-02-03 01:51 AM
https://community.nxp.com/thread/37913
In 2019, I presented “An Introduction to Svelte” at Scott Logic’s monthly “Newcastle TekTalks”. I planned to do a blog post afterwards, writing down what I had already presented. Nearly 18 months later, here is attempt #5.

The more I reflected on Svelte, the harder it became to summarise it effectively. If you focus on a high-level overview, you miss all the tiny features which make Svelte a joy. But if you zoom in on the exciting details, you miss the overarching picture which makes it all possible. From my observations, I’m not the only one who has this struggle. The creator of Svelte, Rich Harris, described Svelte as:

“It’s a component framework, but it’s also a compiler, and it’s also a library of the things that you need when you’re building an application, and a philosophy of building web apps I guess.”

Describing Svelte as a philosophy surprised me. But as I tried to organise my thoughts to write this blog post, I arrived at what I believe to be Svelte’s underlying philosophy.

Great DX shouldn't come at the expense of great UX

In other words, Svelte is designed to make it as easy as possible to create an incredible user experience.

Trade-Offs?

You may have been disappointed at how ordinary that sounds. Isn’t that the goal of every tool used by developers? Well, yes, but actually no.

Large packages can impair the user experience. Large bundles take longer to download, increasing the initial load time of the application. They also have more code that needs to run, slowing the app down at run time. Thus building a framework is often a balancing act between adding features to make things easier for the developer, and keeping the package as small as possible.

Svelte can avoid this trade-off because, in addition to being a component framework, it is also a compiler. This decouples the developer experience from the user experience. The developer can write the code in an environment optimally suited for developing.
The compiler then converts the code, at build time, to produce an optimal user experience.

Frameworks Are For Organising Your Mind

In Rich Harris’ talk “Rethinking reactivity”, he described an epiphany he had that led to the creation of Svelte:

Frameworks are not tools for organising your code, they are tools for organising your mind.

In other words, frameworks aren’t there to help browsers run your app; they’re there to help developers create the app. Any application could, in theory, be written in pure JavaScript, and the browser would happily run it. Because of this, Svelte gives you a framework and tools to use for developing, and then “magically disappears” at build time.

Why is this helpful? It means that Svelte can give first-class support to a lot more features than a typical framework. For example, Svelte has a state management system baked into the framework. If you don’t need to use it, the compiler removes the code at build time, so your final package stays unbloated and, well, svelte! Another example is animation and transitions. Svelte makes motion incredibly easy to manage. If you take advantage of its capabilities, at build time the compiler will compile the code into CSS, creating the smoothest possible user experience. And if you don’t use it, the compiler removes the code.

Write Less Code

Lean codebases are better than large codebases. Smaller codebases are easier to read and understand, take less time to create, and have fewer bugs. Reducing the amount of code you have to write is an explicit goal of Svelte. Rich Harris claims from his experience that a React component is typically 40% larger than its Svelte equivalent. This makes writing Svelte components a joy. For example, let’s compare how you would create a simple component which takes an input and displays the value.
In React, it would look something like:

import { useState } from 'react';

function MyInput() {
  const [value, setValue] = useState('');

  const handleChange = (event) => {
    setValue(event.target.value);
  }

  return (
    <>
      <input value={value} onChange={handleChange} />
      <p>{value}</p>
    </>
  );
}

export default MyInput;

Here is the equivalent in Svelte:

<script>
  let value = '';
</script>

<input bind:value />
<p>{value}</p>

This is significantly shorter and easier to read. And even if you’ve never even heard of Svelte, I bet you can understand exactly what that code is doing.

It’s Fast… Really Fast

Svelte is incredibly performant. There are two main reasons Svelte can achieve this performance: forward referencing and less code (surprise, surprise). Svelte uses forward referencing to keep track of which values are dependent upon other values. Therefore, the code doesn’t need to spend time comparing virtual DOM trees to see what’s changed - it can just immediately update what needs to be updated. Let’s see what this looks like by extending our example from above.

<script>
  let value = '';
  $: message = `${value} is our value`;
</script>

<input bind:value />
<p>{message}</p>

Here, we’ve created a reactive declaration. We’ve tied the value variable to the message variable. Now, when value updates, Svelte automatically knows to update message. It isn’t even limited to variables. We can tie the value variable to a statement like console.log, which will log value every time value is updated.

<script>
  let value = '';
  $: console.log(`The value is ${value}`);
</script>

<input bind:value />

The other main performance gain comes from the compiler. It strips out any unused code at build time and reworks the code to be incredibly efficient in the browser. In the words of Rich Harris:

“There’s only one reliable way to speed up your code, and that is to get rid of it.”

For example, one of the hardest parts of a large codebase to edit is the CSS.
It’s so tricky to know if a line of CSS is still being used, so it always gets left “just in case.” Svelte makes this easy: all CSS is scoped to the component which defines it. That means updating CSS in one component is not going to have unintended consequences somewhere else in your app. If you do end up with CSS which isn’t being used, the compiler will give you a warning, so you know you can safely remove it. Even if you don’t remove the code, the compiler will. All of this means the user only requests what the app needs to load, and only runs what the app needs to run.

Beginner Friendly

In my experience, Svelte is one of the most beginner-friendly frameworks. It’s been designed so that you can use your existing HTML, CSS, and JS knowledge with as little onramp as possible. Svelte is a superset of HTML. This means there are fewer “gotchas” that come with learning it (e.g. it uses class="", as opposed to className="" in React).

It’s also very forgiving. The focus when you’re coding is to tell the compiler what you want it to do. The compiler then handles making it run efficiently. For example, when we defined our reactive declaration above, it would have been equally valid to swap the order of the variables:

<script>
  $: message = `${value} is our value`;
  let value = '';
</script>

<input bind:value />
<p>{message}</p>

The compiler knows that message is dependent upon value, so the compiled code will place them in the correct order. This is nice for when we make mistakes (which never happens) but it also gives us more freedom in how we organise our code.

Another example is inline event handlers. Some frameworks recommend avoiding them for performance reasons, particularly inside loops. That advice doesn’t apply to Svelte - the compiler is smart enough to know what we want.

<button on:click={() => { console.log('Clicked'); }}>
  Click me
</button>

Conclusion

Despite Svelte 3 being only a couple of years old,
It won State of JavaScript 2019’s Prediction Award, awarded “to an up-and-coming technology that might take over”. Sure enough, in 2020 it was voted the number one framework in terms of both interest and satisfaction. If you’re interested in trying Svelte out, the best place to start is the official documentation. The website has a very well written tutorial which covers all the essentials. There’s also a REPL where you can try coding with Svelte right in the browser. Svelte’s philosophy is a paradigm breaker. By removing the trade-off between DX and UX, it can prioritise both. In Svelte, creating apps and using apps spark joy.
https://blog.scottlogic.com/2021/01/18/philosophy-of-svelte.html
When it comes to Python and its visualisation capabilities, Matplotlib is undoubtedly the mother of all visualisation libraries. Matplotlib is a very popular library that has revolutionised the concept of making impressive plots with Python effortlessly.

In our previous articles, we introduced you to some of the most popular plotting libraries such as Pandas plots, Seaborn, Plotly and Cufflinks. Most of these are built on top of Matplotlib, which makes it an important library to know about. In this article, we will introduce you to Matplotlib and will take you through a hands-on session to plot beautiful visualisations.

Plotting With Matplotlib

Matplotlib supports a wide variety of plots, from basic line and scatter plots to advanced multi-dimensional plots. We will start with basic plots and will discuss some of the best practices to make an attractive and intuitive visualisation.

Installing Matplotlib

Use the pip installer to install Matplotlib into your working environment. Type and execute the following command in your terminal.

pip install matplotlib

If you are using the Anaconda distribution, use conda install matplotlib to install the library.

Let's make some plots!

Importing the libraries

import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

The %matplotlib inline magic allows the plots to be visible when using Jupyter Notebook.

Importing the dataset

data = pd.read_csv("sample_data.csv")

Here we will use a simple dataset made of random numbers. This is what the data looks like.

Quick Plots

Matplotlib enables us to produce functional plots with ease. These are on-the-go plots that help us visualise data in a quick and effortless way by calling the plot methods and passing the data as arguments.

Simple Plot

Let us plot a simple line plot to depict how the value of A changes for each observation in the dataset.
plt.plot(data['A'], c = 'b', label = 'A')
plt.title("Change In A")
plt.xlabel("Index")
plt.ylabel("A")
plt.legend()

- In the above code block, we pass an indexed column (data['A']) to the plot method of the Matplotlib object. The c argument sets the colour of the plot; in the above case c = 'b' sets the colour of the plot to blue, and label gives the legend an entry to show.
- The title method sets the passed string as the main title of the plot. The xlabel and ylabel methods label the x-axis and y-axis respectively.
- The legend method displays the plot legend.

Output:

Scatter Plot

Let us scatter plot the values of columns A and B against each other.

plt.scatter(x = data['A'], y = data['B'], s = 500, c = 'r', marker = "*",
            alpha = 1, linewidths = 1, edgecolors = 'b')
plt.title("Scattered A vs B")
plt.xlabel("A")
plt.ylabel("B")

The scatter method in the above code block specifies that the points are scattered.

- x = data['A'] sets the x-axis with values from the feature A of the dataset
- y = data['B'] sets the y-axis with values from the feature B of the dataset
- marker = "*" sets the plot symbol to *. For the list of all available markers click here.
- s = 500 sets the size of the marker symbol in the plot to 500
- linewidths = 1 sets the border width of the marker symbol used
- edgecolors = 'b' sets the colour of the marker symbol edge to blue

Matplotlib supports all HTML colour codes, which can be passed in as arguments. You can get all HTML colour codes here. Copy the colour code of your choice, prepend it with a '#' symbol and set it as the colour parameter.

Output:
- autopct=’%.01f%%’ displays the x values in percentage inside the pie chart. - shadow=True enables shadow for the plot. - textprops={‘size’: ‘smaller’} sets the size of the text Output: Creating Sophisticated Plots With Objects Matplotlib also has object-based plotting which adds flexibility to its plots. Let’s look at some examples. Subplots - Initializing an empty figure object fig = plt.figure(tight_layout=True) - Initializing a 2×2 grid of Axes inside the figure object fig, axes = plt.subplots(2, 2) - Setting a title fig.suptitle('Change In A,B & C ') - Plotting Index vs A in subplot[0,0]– 0th row and 0th column axes[0,0].plot(data['A']) - Plotting Index vs B in subplot[0,1]– 0th row and 1st column axes[0,1].plot(data['B']) - Plotting Index vs C in subplot[1,0]– !st row and 0th column axes[1,0].plot(data['C']) - Plotting D vs A for A values greater than 0 in subplot[1,1] axes[1,1].bar(data[data['A']>0]['D'], data[data['A']>0]['A']) Output: Let’s look at another one: - Import additional library import matplotlib.gridspec as gridspec - Initialize the figure object fig = plt.figure(tight_layout=True) - specify the geometry of the grid to place the subplots(The number of rows and number of columns of the grid need to be set.) 
gs = gridspec.GridSpec(2, 2)

- Plotting in subplot 1

ax = fig.add_subplot(gs[0, :])
ax.scatter(x = data['A'], y = data['B'], c = 'r')
ax.set_ylabel('B')
ax.set_xlabel('A')

- Plotting in subplot 2

ax = fig.add_subplot(gs[1, 0])
ax.plot(data['A'])
ax.set_ylabel('A')
ax.set_xlabel('Index')
fig.align_labels()

- Plotting in subplot 3

ax = fig.add_subplot(gs[1, 1])
ax.plot(data['B'])
ax.set_ylabel('B')
ax.set_xlabel('Index')
fig.align_labels()

Output:

Improving The Plots

Drawing Axis Lines

#Scatter plot for A vs B
plt.scatter(data['A'], data['B'], s = 200, c = '#DAFF33', marker = "*", alpha = 0.5,
            linewidths = 3, edgecolors = '#0C96F0', zorder = 1, label = 'A vs B')

#Scatter plot for A vs C
plt.scatter(data['A'], data['C'], s = 100, c = '#F03C0C', marker = "X", alpha = 1,
            edgecolors = '#0CF05F', zorder = 2, label = 'A vs C')

#Display legends
plt.legend(loc = 'lower right')

#Draw axis lines
plt.axhline(0.5, ls = '--')  #horizontal line
plt.axvline(0, ls = '--')    #vertical line

The first code block plots two scatter plots on the same graph, differentiated by two different marker symbols and colours (each is given a label so the legend has entries to show). The axhline and axvline methods allow us to draw horizontal and vertical lines from the axes respectively at the specified value/constant.

- ls = '--' sets the line style to dashed lines
- zorder = 1 sets the current plot/layer in the background
- loc = 'lower right' in the legend method relocates the legend to the lower right corner of the graph

Output:
Many modern data visualisation libraries are built on top of Matplotlib and have similar methods and API calls for visualising with various kinds of plots.
https://analyticsindiamag.com/beginners-guide-to-data-visualisation-with-matplotlib/
Creating Imports

On this page:
- Importing packages on the fly
- Importing an XML namespace
- Importing TypeScript symbols

Importing packages on the fly

To import packages on the fly, follow these steps:

- Open the desired file for editing, and start typing the name of a package that has not been imported yet.
- Press Alt+Enter. If there are multiple choices, select the desired import from the list.

Importing an XML namespace

To import an XML namespace, follow these steps:

- Open the desired file for editing, and start typing a tag. If a namespace is not bound, the following prompt appears:
- Press Alt+Enter. If there are multiple choices, select the desired namespace from the list.

Importing TypeScript symbols

In the TypeScript context, when you use a symbol that has not been imported yet, PyCharm will display the following pop-up message:

Press Alt+Enter to have an import statement generated and inserted automatically. In either case, PyCharm inserts an import statement:
https://www.jetbrains.com/help/pycharm/2016.3/creating-imports.html
Integrating MetaMask: "ReferenceError: window is not defined" error

I am facing some problems while integrating web3 from MetaMask using React. The code I am using is as follows:

```javascript
import Web3 from 'web3'

let web3;

window.addEventListener('load', function () {
    if (typeof window.web3 !== 'undefined') {
        web3 = new Web3(window.web3.currentProvider);
    } else {
        // No web3 provider
        console.log("Please install Metamask");
    }
});

export default web3;
```

Getting the following error:

```
window is not defined
ReferenceError: window is not defined
    at Object../lib/getWeb3.js (lib/getWeb3.js:5:0)
```

You can't use MetaMask on the server side because window is not defined on the server. A workaround for this is that you can connect to Infura when you want to use web3 in your React component server-side or without MetaMask support. Here's how you can use the react-web3-provider component.

Add the Web3Provider to your root React component:

```javascript
import Web3Provider from 'react-web3-provider';

ReactDOM.render(
    <Web3Provider defaultWeb3Provider="">
        <App />
    </Web3Provider>
)
```

Then in the component where you want to use Web3:

```javascript
import { withWeb3 } from 'react-web3-provider';

class MyComponent extends React.Component {
    render() {
        const { web3 } = this.props;
        web3.eth.getAccounts(console.log); // Version 1.0.0-beta.35
        return `Web3 version: ${web3.version}`;
    }
}

export default withWeb3(MyComponent);
```

window is a browser thing, and because you are using this on the server side, it won't work. You might want to use global to make it work.
https://www.edureka.co/community/24933/integrating-metamask-referenceerror-window-defined-error
In post-commit feedback on D104830, Jessica Clarke pointed out that unconditionally adding va_list to the std namespace caused namespace debug info to be emitted in C, which is not only inappropriate but turned out to confuse the dtrace tool. Therefore, move va_list back to std only in C++ so that the correct debug info is generated. We also considered moving __va_list to the top level unconditionally, but this would contradict the specification and be visible to AST matchers and such, so make it conditional on the language mode. To avoid breaking name mangling for __va_list, teach the Itanium name mangler to always mangle it as if it were in the std namespace when targeting ARM architectures. This logic is not needed for the Microsoft name mangler because Microsoft platforms define va_list as a typedef of char *.

Depends on D116773

The fact that va_list is in the std namespace does leak out into __builtin_dump_struct, possibly the odd other place, and of course to external AST consumers. I think it'd make sense to keep ASTContext as putting it in the std namespace for C++ (like it does for Arm, and used to for AArch64) but also have ItaniumMangle override it to the std namespace so that non-C++ gets the right mangling? Otherwise the AST and __builtin_dump_struct won't match the Arm spec.

I'm not aware of any of those places causing an actual problem though. The AST isn't a stable interface, and __builtin_dump_struct is for debugging purposes only. I would expect debug info consumers to be able to handle va_list in the global namespace, as this is the status quo for C. So I'm somewhat inclined to do the simple thing here first, and then look at making things more conditional if a problem comes up.

In D116774#3227280, @jrtc27 wrote:
> The fact that va_list is in the std namespace does leak out into __builtin_dump_struct, possibly the odd other place, and of course to external AST consumers.
It can also leak out in funny places like AST dumping (to text, but more interestingly, to JSON). But the one place that may be observable and really matters (that I can think of) is AST matchers that get used by clang-tidy.

I think this could make some sense. We typically try to have the AST reflect the reality of the program, and from that perspective, I think I would expect there to be a std namespace component for this in the AST if the spec calls for one, even when obtaining the type through stdarg.h instead of cstdarg.

Ping?

Ping? I'd really like to get this fixed in 14.

Make it conditional.

Could use isAArch64 (which then also picks up aarch64_32).

Could use isARM. Does this also need to care about isThumb, or has that been normalised to Arm with a T32 subarch?

Use isARM() etc.

Thanks, looks good to me. Only a small nit from me.

Hm, the advantage of leaving it as it is is that it then completely reverts D104830 (ignoring the added CFI test) rather than being written in a slightly different but equivalent way. Don't really care either way myself though; both have their merits.

I don't insist on the change; I mostly found it odd to have an uninitialized local that is initialized on the very next line.
https://reviews.llvm.org/D116774
In my last article, I demonstrated using MyXaml to create a simple blog reader. In this article, I'd like to demonstrate how to create vector graphics applications using VG.net's runtime engine in conjunction with declarative markup. In particular, I will be demonstrating the code that creates this working clock:

```xml
<?xml version="1.0" encoding="utf-8" standalone="no"?>
<MyXaml xmlns:def="Definition"
        xmlns="Prodige.Drawing, Prodige.Drawing"
        xmlns:
  <Picture Name="Clock">
  </Picture>
</MyXaml>
```

The initial markup declares the assembly namespaces and associates them with xmlns prefixes. The default namespace in this case is the Prodige.Drawing vector graphics runtime engine. The Picture class is a container for vector graphic elements.

Normally, one or more Picture objects are drawn on a Canvas, which is a user control and can be added to a Form. Because the VG.net designer can generate the MyXaml markup directly, I've put together a small loader, vgLoader.exe. The loader tells the parser to instantiate the Picture and then invokes Picture's DisplayInForm method. This returns an initially sized Form which can then be displayed. The code for the actual loading is as follows:

```csharp
Parser parser = new Parser();
object picture = parser.LoadForm(filename, "*", null, null);
Type type = picture.GetType();
MethodInfo mi = type.GetMethod("DisplayInForm");
Form form = mi.Invoke(picture, new object[] {new Size(10, 10)}) as Form;
form.ShowDialog();
```

Since the loader doesn't know the name of the Picture, it uses the "*" wildcard to tell the parser to instantiate the first class that it encounters after the <MyXaml> tag. The following sections discuss how the pieces of the clock are constructed.

The clock frame consists of two circles (Ellipse classes), one drawn inside the other.
A linear gradient fill is used to create the shadow effect of light being cast on the clock from the upper left. In this markup, both the outer and inner rim use the same style; however, the inner rim overrides the Angle property.

```xml
<Ellipse Name="face" Location="114, 114" Size="173, 173"
         StyleReference="Face" DrawAction="Fill" />
<pds:Style
  <pds:Fill>
    <pds:LinearGradientFill
  </pds:Fill>
</pds:Style>
```

A third ellipse and an additional style are created to add the clock face.

```xml
<Ellipse Name="shadow" Location="102, 102" Size="200, 200"
         StyleReference="ClockShadow" DrawAction="Fill" />
<pds:Style
  <pds:Fill>
    <pds:SolidFill
  </pds:Fill>
</pds:Style>
```

A shadow is another ellipse and style. Objects are drawn one on top of the other, so the actual order of the vector graphics elements so far is:

```xml
<Elements>
  <Ellipse Name="shadow" Location="102, 102" Size="200, 200" StyleReference="ClockShadow" DrawAction="Fill" />
  <Ellipse Name="outerRim" Location="100, 100" Size="200, 200" StyleReference="Rim" DrawAction="Fill" />
  <Ellipse Name="innerRim" Location="110, 110" Size="180, 180" StyleReference="Rim" DrawAction="Fill">
    <Fill>
      <pds:LinearGradientFill
    </Fill>
  </Ellipse>
  <Ellipse Name="face" Location="114, 114" Size="173, 173" StyleReference="Face" DrawAction="Fill" />
</Elements>
```

```xml
<Picture>
  <TextAppearance>
    <pds:TextAppearance
  </TextAppearance>
  <Elements>
    ...
```
The TextAppearance property of the Picture object is instantiated to the default text appearance, which is used for each numeral:

```xml
<Rectangle Name="one" Text="1" Location="220, 124.3782" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="two" Text="2" Location="245.6218, 150" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="three" Text="3" Location="255, 185" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="four" Text="4" Location="245.6218, 220" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="five" Text="5" Location="220, 245.6218" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="six" Text="6" Location="185, 255" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="seven" Text="7" Location="150, 245" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="eight" Text="8" Location="124.3782, 220" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="nine" Text="9" Location="115, 185" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="ten" Text="10" Location="124.3782, 150" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="eleven" Text="11" Location="150, 124.3782" Size="30, 30" StyleReference="Numeral" DrawAction="Fill" />
<Rectangle Name="twelve" Text="12" Location="185, 115" Size="30, 30" StyleReference="Numeral" DrawAction="Fill"/>
<pds:Style
  <pds:Fill>
    <pds:SolidFill
  </pds:Fill>
</pds:Style>
```

Each numeral is placed on top of the clock face, and therefore appears after the "face" Ellipse in the Elements list. If you're wondering whether I hand-coded the precision of the placement to the ten-thousandth's decimal place, the answer is no. Since my understanding of vector graphics is rather new, Frank Hileman drew the clock in the VG.net designer.
I asked Frank how he did it, and this is what he said:

First I created "twelve" and positioned it at the top. I selected "twelve" and set the TransformationReference Type property to "Absolute". Then I changed the TransformationReference Location to the center of the circles:

```xml
<TransformationReference>
  <TransformationReference Location="200, 200" Type="Absolute"/>
</TransformationReference>
```

Now any change to Rotation will go about that Location. I did a copy/paste of "twelve", creating an identical object in the same place. Let's make that one "three". Change the Rotation property to 90, and the Text property to "3". You now have text rotated about the center of the clock. Now we need to remove the Rotation, but relative to the text center, and not the clock center. Select "three". Right-click on the TransformationReference property, and click "Reset". The reference point goes back to Center, but the object does not change position. Now right-click on Rotation, and click "Reset". The rotation is gone, but the text does not move. I copied the "twelve" object 11 times, each time setting the Rotation property by a multiple of 30 degrees, changing the Name and Text properties, and doing a Reset on the TransformationReference and Rotation, in that order.
```xml
<Group Name="minute" StyleReference="Minute">
  <TransformationReference>
    <TransformationReference Location="200, 200" Type="Absolute" />
  </TransformationReference>
  <Elements>
    <Path Name="leftMinute" DrawAction="Fill">
      <PathPoints>
        <PathPoint Point="200.101, 120" Type="Start" />
        <PathPoint Point="199.7635, 131.6271" Type="Control1" />
        <PathPoint Point="195, 194.9518" Type="Control2" />
        <PathPoint Point="195.0503, 198.8948" Type="EndBezier" />
        <PathPoint Point="195.1006, 202.8378" Type="Control1" />
        <PathPoint Point="197.7291, 205.1745" Type="Control2" />
        <PathPoint Point="200, 204.9767" Type="EndBezier" />
        <PathPoint Point="200.2, 204.5694" Type="EndLine" />
      </PathPoints>
    </Path>
    <Path Name="rightMinute" DrawAction="Fill" Scaling="1, -1.213767" Rotation="180">
      <PathPoints>
        <PathPoint Point="205.09, 197.4994" Type="Start" />
        <PathPoint Point="202.8521, 197.6623" Type="Control1" />
        <PathPoint Point="200.1495, 195.748" Type="Control2" />
        <PathPoint Point="200.0996, 192.4994" Type="EndBezier" />
        <PathPoint Point="200.0498, 189.2508" Type="Control1" />
        <PathPoint Point="204.7664, 137.0788" Type="Control2" />
        <PathPoint Point="205.09, 127.4994" Type="EndBezier" />
      </PathPoints>
    </Path>
  </Elements>
</Group>
<pds:Style
  <pds:Fill>
    <pds:LinearGradientFill
  </pds:Fill>
</pds:Style>
```

I asked Frank how he made the hands, and here's what he said:

I have to admit, there is a hack in there. Here is how I did it: I created a 3-point spline for one half of the hand. Then I converted that to a Path, and tweaked the control points a bit. Then I did a copy/paste, creating an identical object in the same place. To mirror, I set the scale X property to -1 (for the minute hand). Then I moved the hand over to the right, using grid snap to align with the other. Since each Path displays a linear gradient, but one displays in the opposite direction (because of negative scaling), together they give it that 3D effect.

Now why do you see that weird scaling in the generated XML?
This is because I resized the two halves after I created them. Since the left half did not need a -1 scaling, the designer transformed the points in the path. I could have removed the weird scaling on the right half with ApplyTransformation, but I needed to leave the negative scaling in the right half, to reverse the gradient (so I can keep the same Style for both). By default, when you resize, the designer does not apply the transform to the points if the object already has a scaling. So the right half kept its scaling, but it is modified. If I had just resized the left half correctly before the copy/paste, you would not see that weird scaling.

The hour hand was done similarly, but I did it horizontally.

The Bounds and Angle on the LinearGradientFill were carefully chosen to line the darker edge of the gradient up with the angle of the Path.

Now the hack. There was a small line visible up the middle of the arrows (still is at smaller scales; I didn't get rid of it completely). This is caused by the fill algorithms in GDI+ not aligning edges of filled areas perfectly, so you see the background a bit. I went and added an extra point to the left half to cover that. I also tried to cover it by tweaking the end points a bit, but that never really worked. A better choice, I realize now, would be to draw a single-pixel line up the middle, behind the two filled halves.

We want the minute hand to be the bottom-most hand, so it gets declared immediately after the numerals, and has its own style. In this markup, a group (a composite of elements) is being declared, one for each half of the minute hand. The Path class defines a set of figures, each of which contains a set of straight and curved segments (the path points were determined by the designer, not by me!).

Also note that the entire group uses a TransformationReference to specify an absolute reference point for the group. This allows us to rotate the starting point of the minute hand paths about the center of the clock.
```xml
<Group Name="hour" StyleReference="Hour">
  <TransformationReference>
    <TransformationReference Location="200, 200" Type="Absolute" />
  </TransformationReference>
  <Elements>
    <Path Name="leftHour" DrawAction="Fill" Rotation="270">
      <PathPoints>
        <PathPoint Point="225.1051, 179.8949" Type="Start" />
        <PathPoint Point="217.4753, 179.0615" Type="Control1" />
        <PathPoint Point="188.4821, 174.8949" Type="Control2" />
        <PathPoint Point="179.3263, 174.8949" Type="EndBezier" />
        <PathPoint Point="170.1706, 174.8949" Type="Control1" />
        <PathPoint Point="170.1053, 178.2542" Type="Control2" />
        <PathPoint Point="170.1706, 179.8949" Type="EndBezier" />
        <PathPoint Point="170.4581, 180.1053" Type="EndLine" />
      </PathPoints>
    </Path>
    <Path Name="rightHour" DrawAction="Fill" Scaling="1.831152, -1" Rotation="270">
      <PathPoints>
        <PathPoint Point="217.3672, 179.8948" Type="Start" />
        <PathPoint Point="213.2005, 179.0614" Type="Control1" />
        <PathPoint Point="197.3672, 174.8948" Type="Control2" />
        <PathPoint Point="192.3672, 174.8948" Type="EndBezier" />
        <PathPoint Point="187.3672, 174.8948" Type="Control1" />
        <PathPoint Point="187.3315, 178.2541" Type="Control2" />
        <PathPoint Point="187.3672, 179.8948" Type="EndBezier" />
        <PathPoint Point="187.5243, 180.1052" Type="EndLine" />
      </PathPoints>
    </Path>
  </Elements>
</Group>
<pds:Style
  <pds:Fill>
    <pds:LinearGradientFill
  </pds:Fill>
</pds:Style>
```

The hour hand is almost identical to the minute hand--it consists of a group of elements containing two paths, one for the left side and one for the right side of the hour hand. A separate style is used.
```xml
<Polyline Name="second" StyleReference="Second" DrawAction="Edge">
  <TransformationReference>
    <TransformationReference Location="200, 200" Type="Absolute" />
  </TransformationReference>
  <Points>
    <Vector X="200" Y="200" />
    <Vector X="200" Y="135" />
  </Points>
</Polyline>
<pds:Style
  <pds:Stroke>
    <pds:Stroke
  </pds:Stroke>
</pds:Style>
```

Being a straight line, the second hand is simpler and is implemented as a Polyline with a start point and an end point.

The only thing left now is to have the clock tell the time! We need to do three things:

1. Add an xmlns for the System.Windows.Forms namespace, so that the complete namespace list now reads:

```xml
<MyXaml xmlns:def="Definition"
        xmlns="Prodige.Drawing, Prodige.Drawing"
        xmlns:pds="Prodige.Drawing.Styles, Prodige.Drawing"
        xmlns:
```

2. Instantiate a Timer (the reason for the System.Windows.Forms namespace):

```xml
<Picture Name="Clock">
  <wf:Timer
  ...
```

3. Implement the event handler:

```xml
<def:Code
  <reference assembly="System.Windows.Forms.dll"/>
  <reference assembly="System.Xml.dll"/>
  <reference assembly="myxaml.dll"/>
  <reference assembly="Prodige.Drawing.dll"/>
  <![CDATA[
using System;
using System.ComponentModel;
using System.Diagnostics;
using System.Windows.Forms;
using MyXaml;
using Prodige.Drawing;

class AppHelpers
{
    public Parser parser;

    public AppHelpers()
    {
        parser = Parser.CurrentInstance;
    }

    public void OnTick(object sender, EventArgs e)
    {
        DateTime n = DateTime.Now;
        Polyline second = (Polyline)picture.Elements["second"];
        Group minute = (Group)picture.Elements["minute"];
        Group hour = (Group)picture.Elements["hour"];
        second.Rotation = 360F * ((n.Second + n.Millisecond/1000F)/60F);
        minute.Rotation = 360F * (n.Minute + n.Second/60F)/60F;
        hour.Rotation = 360F * (n.Hour + n.Minute/60F)/12F;
    }
}
]]>
</def:Code>
```

If you don't like the in-line code intermingled with the markup, you can wire up the event handler in your own assembly.
At the beginning of this article I showed a code snippet for the loader. By specifying a target object for events (change the first null to a "this" or any other instance of a class that contains the event handler):

```csharp
object picture = parser.LoadForm(filename, "*", this, null);
```

you can copy the OnTick method directly into your assembly, and the parser will automatically wire up the event to your assembly.

The ability for MyXaml to work with third-party runtimes provides an exceptionally easy way of plugging functionality into an application. Compared to the C# code, the markup is less than 1/10th the size, and in my opinion is a lot more readable and easier to edit. And some truly amazing applications can be written in conjunction with VG.net's free runtime vector graphics engine. I hope this article stimulates a lot of discussion and whets your appetite for the beauty of vector graphics! Additional examples of vector graphics are provided in the download.

1. The version of MyXaml included in this demo is a pre-release of the next beta (0.95). You can download the source from the CVS site on or wait for me to release the next version.
2. The vector graphics engine in this demo is not necessarily the most current. You can download the latest runtime vector graphics engine at. (Obviously, there is no source for the VG engine. But the runtime is free and fully documented). VG.net is also offering a 30-day time-limited beta version of their designer.

1. MyXaml--XAML-style GUI Generator
2. Vector Graphics and Declarative Animation with Avalon - the Analog Clock
3. Adobe SVG Analog Clock (requires an SVG viewer)
4. Scalable Vector Graphics (SVG) 1.1 Specification

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here.
http://www.codeproject.com/Articles/6733/A-Vector-Graphics-Rendered-Animated-Clock?msg=797081
Description

According to the J2SE 1.4.2 specification for Charset.forName(String charsetName), the method must throw UnsupportedCharsetException "if no support for the named charset is available in this instance of the Java virtual machine". The method does not throw the exception if an unsupported name starts with "x-". For example, the method throws an exception for the unsupported name "xyz", but does not for "x-yz".

Code to reproduce (the try body is reconstructed from the expected output below):

```java
import java.nio.charset.*;

public class test2 {
    public static void main(String[] args) {
        try {
            Charset cs = Charset.forName("x-yz");
            System.out.println("***BAD. UnsupportedCharsetException must be thrown instead of creating " + cs);
        } catch (UnsupportedCharsetException e) {
            System.out.println("***OK. Expected UnsupportedCharsetException " + e);
        }
    }
}
```

Steps to Reproduce:

1. Build the Harmony (check-out on 2006-01-30) j2se subset as described in README.txt.
2. Compile test2.java using BEA 1.4 javac:
   > javac -d . test2.java
3. Run java using a compatible VM (J9):
   > java -showversion test2

Output:

```
C:\tmp>C:\jrockit-j2sdk1.4.2_04\bin\java.exe -showversion test2
java version "1.4.2_04"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_04-b05)
BEA WebLogic JRockit(TM) 1.4.2_04 JVM (build ari-31788-20040616-1132-win-ia32, Native Threads, GC strategy: parallel)
***OK. Expected UnsupportedCharsetException java.nio.charset.UnsupportedCharsetException: x-yz

C:\tmp>C:\harmony\trunk\deploy\jre\bin\java -showversion test2
(c) Copyright 1991, 2005 The Apache Software Foundation or its licensors, as applicable.
***BAD. UnsupportedCharsetException must be thrown instead of creating Charset[x-yz]
```

Suggested junit test case (elided bodies filled in with the standard JUnit 3 idiom):

```java
import java.nio.charset.*;
import junit.framework.*;

public class CharsetTest extends TestCase {
    public static void main(String[] args) {
        junit.textui.TestRunner.run(CharsetTest.class);
    }

    public void test_forName() {
        try {
            Charset.forName("x-yz");
            fail("UnsupportedCharsetException expected");
        } catch (UnsupportedCharsetException e) {
        }
    }
}
```

Activity

ICU team has fixed this bug. Here are the libraries I built from icu4jni's latest code.
Richard, when you say "icu4jni's latest code" can you be more specific? Was this HEAD or a release...? We need to know exactly what it contains before deciding whether to use it. Thanks, Tim

Svetlana, we have decided to defer this fix until the next official release of ICU4JNI becomes available. Let us know if this is a problem. Thanks, Tim

Tim, I have no objection. Let's wait.

I verified that it has been fixed in icu4jni 3.6. Would you please close it?

Was fixed in a later version of ICU.

This behaviour originates from the ICU provider code; I'll see what they say about it first:
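As a side note for readers comparing runtimes: Python's standard codec registry rejects unknown names uniformly, with or without an "x-" prefix. This snippet is only an illustration of the behaviour the reporter expects; it is not part of the Harmony test case.

```python
import codecs

# Both an ordinary bogus name and an "x-"-prefixed one should be rejected.
for name in ("xyz", "x-yz"):
    try:
        codecs.lookup(name)
        print("***BAD. '{}' was accepted".format(name))
    except LookupError as e:
        print("***OK. {}".format(e))  # e.g. "unknown encoding: xyz"
```

Both iterations take the LookupError branch, which is the analogue of the UnsupportedCharsetException behaviour required by the J2SE specification.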
https://issues.apache.org/jira/browse/HARMONY-64
My scope: I'm trying to create a CORBA solution for two apps, one on the .NET side (server) and the other in Python (client). I'm using IIOPNet for the server and IDL generation, and omniORBpy for stub generation and client calls. In general it is working for simple calls like the typical example: Adder. But when I try to call a method with a custom class it doesn't work.

I have this class on the server side (my remote object) to be called from the client:

```csharp
public class MyRemoteObject : MarshalByRefObject
{
    public override object InitializeLifetimeService()
    {
        return null;
    }

    public bool DoSomething(InterfaceRequest requestData)
    {
        return true;
    }
}
```

The input param class type is declared like this on the server side (quite simple):

```csharp
[Serializable]
public class InterfaceRequest
{
    public int SiteId;
    public int UserId;
}
```

I generate my IDLs using CLSIDLGenerator and later my Python stubs with "omniidl -bpython -CPython ..."; until here everything is OK. So I start the server (VS debug environment), and in my client code I resolve the service name, narrow my remote object successfully, and create my request object. But when I try to do this:

```python
request = InterfaceRequest()
request.SiteId = 1
request.UserId = 3
result = remoteObj.DoSomething(request)
```

Python blows up with no warning, no exception, no message of any kind. (I updated the trace level in my omniORB config file to the highest [40], but nothing is getting traced.) It simply crashes. I have tried a lot of stuff and I always get the same result. The problem is related to the param, of course (I guess). I'm running the client side like this:

python client.py -ORBInitRef NameService=corbaname::localhost:8087

(My last approach: the Python object reference architecture and conforming valuetype param passed by value are not matching at some point.)

Tech details: .NET 4.0, IIOPNet (latest), Python 2.6, omniORB-4.1.5, omniORBpy-3.5.

Any help is appreciated; I'm a little stuck with this. Thanks.
http://forums.codeguru.com/showthread.php?515869-CORBA-IIOPNet-and-OmniORBpy-remote-method-call-with-valuetype-param-issue&p=2031201&mode=linear
This patch allows running programs linked against LinuxThreads that expect to find __errno_location and friends in libpthread.

```diff
diff -urNdp nptl/Versions nptl/Versions
--- nptl/Versions	2002-10-10 09:28:35.000000000 +0200
+++ nptl/Versions	2002-10-31 21:51:46.000000000 +0100
@@ -71,6 +105,10 @@ libpthread {
     # Functions which previously have been overwritten.
     sigwait; sigaction; __sigaction; _exit; _Exit; longjmp; siglongjmp;
+
+    # These are provided for compatibility with programs explicitly linking to
+    # the LinuxThreads version of them: the code is the same as the libc ones
+    __errno_location; __h_errno_location; __res_state;
   }
   GLIBC_2.1 {
diff -urNdp nptl/errno.c nptl/errno.c
--- nptl/errno.c	1970-01-01 01:00:00.000000000 +0100
+++ nptl/errno.c	2002-10-31 21:48:21.000000000 +0100
@@ -0,0 +1,53 @@
+/* MT support function to get address of `errno' variable, non-threaded
+   version.
+   Copyright (C) 1996, 1998,
+#include <resolv.h>
+#undef errno
+#undef _res
+
+#if USE_TLS && HAVE___THREAD
+extern __thread int errno;
+extern __thread int h_errno;
+extern __thread struct __res_state _res;
+#else
+extern int errno;
+extern int h_errno;
+extern struct __res_state _res;
+#endif
+
+int *
+__errno_location (void)
+{
+  return &errno;
+}
+
+int *
+__h_errno_location (void)
+{
+  return &h_errno;
+}
+
+struct __res_state *
+__res_state (void)
+{
+  return &_res;
+}
```

Attachment: pgp00004.pgp (PGP signature)
https://listman.redhat.com/archives/phil-list/2002-November/msg00004.html
kirag0112

```python
def compare(x, y):
    compared = cmp(x, y)
    switch = {0: "{} equals {}".format(x, y),
              -1: "{} is less than {}".format(x, y),
              1: "{} is greater than {}".format(x, y)}
    return switch[compared]
```

What would happen if there were a cycle? Say you have (src, dst) pairs: (A,B), (B,C), (C,D), (D,E), (C,F), (F,B). In your solution there are two equally valid destinations, B and E. How would you choose which one is the final destination? - kirag0112, September 07, 2015

EDIT: sorry I commented the same thing twice, once before and once after signing in. I cannot remove the anonymous one.
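The snippet above relies on Python 2's built-in cmp, which was removed in Python 3. A rough Python 3 equivalent (my sketch, not the original poster's code) keeps the same dict-dispatch idea, replacing cmp with the standard comparison idiom:

```python
def compare(x, y):
    # (x > y) - (x < y) yields 1, -1, or 0 -- the Python 3 idiom replacing cmp()
    compared = (x > y) - (x < y)
    switch = {0: "{} equals {}".format(x, y),
              -1: "{} is less than {}".format(x, y),
              1: "{} is greater than {}".format(x, y)}
    return switch[compared]

print(compare(1, 2))  # prints: 1 is less than 2
```

The subtraction of two booleans works because Python treats True as 1 and False as 0, so the three comparison outcomes map directly onto the old cmp() results.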
https://careercup.com/user?id=5739617787576320
06 May 2011 10:07 [Source: ICIS news]

LONDON (ICIS)--BASF is seeing a strong, cracker-led business environment running into the second quarter, driven by demand and prices, but has no plans to add significant additional capacity, company executives of the global chemical giant said on Friday.

Various ethylene (C2) producers in the

“We have seen strong cracker margins throughout the first quarter, we believe primarily driven by strong demand,” management board member and new CFO Hans-Ulrich Engel said. However, new CEO Kurt Bock added that the company has no plans to add a cracker to its portfolio. Bock and Engel take their new positions after BASF’s annual general meeting on Friday.

“Prices are very strong in the

BASF reported a “powerful” start to 2011 with good capacity utilisation. Net profit in the first quarter of 2011 more than doubled to €2.41bn ($3.49bn) from the first quarter of 2010, on sales which were 25% higher year on year at €19.36bn. Operating income – earnings before interest and tax (EBIT) before special items – was up 38.6% from the year-ago period at €2.55bn.

Chemical division operating profits for the first quarter of 2011 hit a record €765m, from €461m in the first quarter of 2010 and €537m in the fourth quarter of 2010. First-quarter sales rose 27% year on year to €3.28bn. Sales for the petrochemicals sub-sector were up 28% year on year at €2.21bn. Margins were particularly strong in acrylics and caprolactam (capro).

BASF runs its crackers primarily to support its downstream chemicals activities. Operating rates are high, Engel said, with the favourable margin situation running into the second quarter. “Demand is strong and this is driving the pricing situation,” he added. The company is also positive for the near-term business environment. “What we have now on our [order] books gives us some visibility for the next two months,” Bock said.

BASF is a “small” net buyer of propylene (C3), a small net seller of ethylene and a net buyer of butadiene (BD).
$1 = €0.69
http://www.icis.com/Articles/2011/05/06/9457470/basf-sees-strong-cracker-led-business-environment-for-q2.html
Update (12th Feb 2004): This was originally posed as an academic question, with the intention of the mechanism acting as an extra layer of protection for experienced programmers, in much the same way as strict, warnings and taint checks provide extra safety. jarich presents an excellent explanation and response on this. My apologies to anyone who thought that this was intended instead of other security mechanisms. This was certainly not my intention. The thoughts presented here are instead intended as an additional tool for the experienced programmer who wants additional safety layers to protect against programmer error.

Perl has a concept of 'tainted' data. Data that is considered tainted cannot be used to open files or used with anything involving the shell. It's a great way of stopping uncleaned data from being used where it should not. However, to my knowledge there is no mechanism in Perl to prevent private data from being used where it should not. If I have a credit-card number that I wish to process, then I don't want it being printed to the screen, to a log file, saved to disk, and definitely not sent out in an e-mail. There are programming practices that can help prevent this (e.g., undefining the variable as soon as you're finished with the sensitive information), but no built-in mechanism to prevent accidental disclosure.

As such, I'd like to float the idea of 'restricted' data. 'Restricted' information cannot be sent to an I/O boundary which is not specifically marked as being able to accept restricted data. In our credit card example above, the connection to the credit-card processing facility would be marked as being able to accept restricted data, but everything else would not. In the same way that the result of any expression involving tainted data is tainted, the result of any expression using restricted data would also be considered restricted. There are many applications for such a framework.
Passwords, encryption keys, and other sensitive data in many systems could be marked as restricted, providing an extra safeguard against them inadvertently being revealed. In government or military systems, classified information could also be marked as restricted. I'm hoping to gauge the community's feel for both the need for restricted data, and also any input on what may be felt to be the best way to implement these features. As such, I have a few questions I would like to pose: # Should there exist restricted data which cannot be cleaned? E.g., passwords or decryption keys? # Should there exist a hierarchy of restricted information (e.g., 'Top Secret' data cannot be sent via a 'Secret' channel, but the reverse would be fine), or should data simply be restricted/unrestricted, in the same way that data is either tainted/untainted? As stated in the introduction, the purpose of this node is to determine if the idea has merit, and to discuss the best way to progress forward if it does. All the very best, As a run-time directive, it'd be nice. Sorta like perl -r (argh), for restrictive, but it'd create a cost for every access. Do people care? Yes and no. Will it happen? Who knows :) For example, with the credit card data problem: An easy solution would be for browsers to have the ability to PGP-encrypt data, with your key, before submitting it. All this would really take is a promo campaign to get the average person familiar with the basics of keys. It's not that hard, but it seems complicated at first. Then the browser developers could build in PGP encryption of form data. Windows has security? Perhaps you are mistaking this for some of the ridiculous DRM schemes that have recently become fashionable. I don't think that's what the original poster intended.
I read it as being about a way that the *programmer* could specify how his data could be used, as an additional safety net in case he tried to do something dumb like spew a credit card number into the error logs. Which seems jolly sensible to me. And encryption does not automagically provide security. Your naivety here is touching, but dangerous. And encryption does not automagically provide security. Your naivety here is touching, but dangerous. What? Encryption does not provide security? Maybe not "automagically", but neither would the security method he is describing; it would be nothing more than "flock", unless it used encryption. As far as the Windows scheme to do it, I'm only hazarding a guess from memory, because I don't even boot Windows anymore. It's been years. But I remember hacking some registry entries, trying to find some software key, and Windows had some sort of "encryption ring" in the registry, where software writers could put IDs and keys (and who knows what else) to prevent people from running programs without authorization. It could be (and probably is) used to hide data from unauthorized access. But considering how easily the crackers come out with patches to bypass all this "security", I would say it's worthless, except to keep the "honest guy honest". It won't stop the person with bad intent. But proper PGP encryption of data is about as good as you will get, as long as you don't consider Tempest-style snooping. (Which is probably more widespread than people want to admit.) Then don't do any of those things. There isn't any safeguard for programmer error. It happens all the time. Passwords, encryption keys, and other sensitive data in many systems can be marked as restricted, providing an extra safeguard against them inadvertently being revealed. I couldn't help but think about Java's private, protected and private protected (or is it protected private?) safeguards.
By default, Perl 6 will make class variables private, which I think is a good thing. You might be able to do something by tying your filehandles. Or you could dive into the Perl internals, and use XS to set some special magic on sensitive data. Abigail First off, I'd like to mention that a large part of security does not rest with the programmer; it rests with the operating system. If your operating system allows an attacker to gain access to swap space, they can scan it for sensitive information. If your operating system allows an attacker to access memory that isn't his or hers, they can (once again) scan for sensitive information. (This is, by the way, why the OpenBSD project encrypts swap space.) Having said that, I think it would be great if some kind of restricted-data switch was available, i.e. by calling restrict($var); only I/O conduits marked safe for restricted data could use it. I would also love to see some kind of encryption built in, so that even on OSes that allow programs to peek at the private data of other programs, it would be impossible to understand the data -- even if it could be viewed. (Although it is important to point out that an attacker could look for encrypted data -- because they would know it was valuable, and if the encryption could be compromised this would be much more insecure than if the data wasn't encrypted.) What mechanism (if any) should be provided for 'cleaning' restricted data (e.g., so only the last four digits of your credit card number can be shown)? Regular expressions are the obvious choice. I think that people should be able to create subroutines to "unrestrict" their data, since they are the ones restricting its use in the first place. So you could create a subroutine to chop off the first 12 digits of a credit card number, to X out the numbers in a bank account, or whatever you deemed suitable.
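A 'cleaning' subroutine of the kind described — one that declassifies a value only after masking it — might look like this short, hypothetical Python sketch (mask_card is an invented helper name, not part of any existing module):

```python
def mask_card(number: str) -> str:
    """Replace all but the last four digits, yielding a value safe to display."""
    return "X" * (len(number) - 4) + number[-4:]

print(mask_card("4111111111111111"))  # prints "XXXXXXXXXXXX1111"
```

In a full restricted-data scheme, the return value of such a cleaner would be the only form of the data permitted to flow to an unapproved channel.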
Should there exist a hierarchy of restricted information (e.g., 'Top Secret' data cannot be sent via a 'Secret' channel, but the reverse would be fine), or should data simply be restricted/unrestricted, in the same way that data is either tainted/untainted? I think that users should be able to create their own hierarchy. You're going to run into a lot of situations where an arbitrary hierarchy won't quite fit. But if those conditions were met, or there was by default a "Secret" -> "Top Secret" -> "I could tell you but I'd have to kill you" hierarchy, that would be even better. As far as the computational cost of such a system, I think it would be negligible. The programmer would have to explicitly invoke it, and even then the only data that would need to be specially handled would be that marked as "restricted". Of course, this could be a problem if the restriction system worked something like the $`, $' variables -- i.e. using one slows your whole program down. For example, customer credit card numbers should only be passed to the card-processing facility, while your SQL DBMS connection password should only be passed to the database. They should both be marked as "restricted", but you don't want to let them cross-contaminate. (Unless you explicitly enable that.) Instead of a flag or an incremental level, something along the lines of categories might be more useful -- this data is restricted as "customer private" while that other data is marked as "server internals", and only channels/objects which are approved for that category can access them. More generally, you might want to look at capabilities for a more sophisticated way of building an access control model.
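The category idea — distinct restriction labels with per-channel approval, rather than a single linear hierarchy — could be sketched like this. The category names and the channel_approvals mapping are hypothetical, invented purely to illustrate the cross-contamination point:

```python
# Each channel is approved for a set of restriction categories.
channel_approvals = {
    "cc_gateway": {"customer_private"},
    "db_connection": {"server_internals"},
    "logfile": set(),  # approved for nothing sensitive
}

def may_send(category: str, channel: str) -> bool:
    """True only if the channel is explicitly approved for this category."""
    return category in channel_approvals.get(channel, set())

# The credit card number and the DB password are both "restricted",
# but neither category may leak into the other's channel.
assert may_send("customer_private", "cc_gateway")
assert not may_send("customer_private", "db_connection")
assert not may_send("server_internals", "logfile")
```

Deny-by-default falls out naturally: an unknown channel, or an unlisted category, gets an empty approval set.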
#!/usr/bin/perl -wTR
use strict;
use CGI;
use DBI;

restrict DBPassword, DB;

my DBPassword $passwd = "abcdef";
my DB $dbh = DBI->connect("DBI:mysql:something", "someone", $passwd);  # ALLOWED

my $cgi = new CGI;
print $cgi->header();

print $passwd;         # NOT ALLOWED, program terminates
print STDERR $passwd;  # NOT ALLOWED, program terminates

open(FILE, "> somefile") or die "Failed to open: $!";
print FILE $passwd;    # NOT ALLOWED, program terminates

restrict CreditCard, CreditCardGateway;

my CreditCard $credit_card = $cgi->param("credit_card");
my CreditCard $expiry      = $cgi->param("expiry");

my $foo = "$credit_card $expiry";  # $foo is now of CreditCard type too.

print $foo;         # NOT ALLOWED, program terminates
print STDERR $foo;  # NOT ALLOWED, program terminates
print FILE $foo;    # NOT ALLOWED, program terminates

my CreditCardGateway $gateway;
open($gateway, "| cc_card_gateway") or die "failed to open gateway: $!";

print $gateway $foo;          # ALLOWED
print $gateway $credit_card;  # ALLOWED
print $gateway $expiry;       # ALLOWED

$foo++;  # Still of CreditCard type...

So, does anyone other than pjf and I think this would be worthwhile? Update: changed the title. I'm still going to say I don't understand the motivation, but this makes your goals more clear. I don't have a vested interest in fighting this one, but I am enjoying playing devil's advocate... So, if the goal is keeping newbie programmers from using a variable without first running a subroutine on it, what is to say they can't use a weak subroutine to "clean" a variable?

filter CreditCard, sub { return $_ };

Also, what is to prevent someone from just passing around the gateway variable as a "key" within the code?

sendToEvil(DB, $passwd)

I could be missing some of the finer points, but essentially my point is "a lock is no good if the key is under the door mat". It seems the key is under the door mat.
If not, I'm darn sure I could get that key fairly easily, and then, by definition of the facilities employed in implementing this, it isn't really a security measure at all. Now some languages have private variables, and this is marginally useful if your fear is someone printing a password. It seems we would go further by trying to find a way to write private variables that can only be read from certain packages. However, this does not say someone can't modify the original code (or perform other exploits) to defeat this "security". So why don't we trust the code we are running? Is this part of a plugin architecture where arbitrary users can upload code? If not, don't allow that. Otherwise, isolate your credit card and password items into modules that folks have no business tampering with. If you really can't trust your fellow coders on a project, you might (possibly) be able to write FETCH/STORE kind of wrappers that deny access from outside the package. (This is theory, I don't know...) Essentially, security should deal with external sources getting at data -- other users, other programs, networked or not. When you can't trust your own code, that's sandboxing, and is a different problem. Calling this security, at least in my eyes, gives us a false illusion of being secure. This is just a very small piece... helping to know that you have not handled data loosely throughout your app. For one who is teaching security, start with the basics. Network security. Open ports. Packet sniffers. Plaintext data. Encoding is not encryption. Injections. SSL without keys and key exchange. DoS vulnerabilities. Cross-site scripting and SQL vulnerabilities. Changing HTML forms to alter important fields. Spoofing and IP games (ARP). Man-in-the-middle. Local security. Permissions. Uploading and executing code to gain local access. Only once you have stopped all of the above is the "restricted code" module really important.
All of the others have higher gains and are more likely to be exploited by 'evil'. I'm not a security expert by any means, but where I work I've seen and fixed numerous holes in our mystery app (FYI -- it's not Perl), since I'm one of the few who has an interest in finding/closing them. The most obscene was encoding passwords in BASE64 (plaintext-equivalent) and leaving the file permissions as 655! Local socket exploits (a non-root user being able to connect and manipulate a root daemon) were also found. We also used to use a lot of plaintext network traffic. It's a big deal, and you've got to look everywhere to clean up what most folks don't know to think about. This is all white-hat easy stuff too... I'm sure it can get a lot more evil/complicated if someone really wanted into our app. In conclusion, we aren't even close to secure now... but it's getting better. ------ We are the carpenters and bricklayers of the Information Age. Please remember that I'm crufty and crotchety. All opinions are purely mine and all code is untested, unless otherwise specified. I don't agree with putting this kind of thing in the core. If your code had "use Data::Restrict;" or some similar module invocation near the top, that'd be fine by me. One way this could be made a module is to make that module override all the output functions, which is what I had said already. The data structures I used, although ugly, get the job done. Your restrict function, as part of a module, could be just a way to set values in such a data structure. The new versions of the output functions in the same module would use that data structure. Your example seems to have a weakness I and others have already pointed out -- restricting the printing of one variable at a time does not prevent assigning its value to another variable, then printing that. You could carry magic around in the language for every variable, but that would likely be bad for the common case.
Since what is being proposed is sort of like a SuperTaint -- "don't even let this variable be output until cleaned or pointed at a certain output path" -- perhaps it could be worked into the core to use the same taint flag, just adding code to the path when a restrict option is passed to the interpreter. I still don't like that. It's bulky, clumsy, and the porters have enough work to shoulder now. The smart thing to do from a security standpoint is always to deny by default and explicitly allow what is needed. This is the same when one is protecting oneself from oneself as when protecting oneself from strangers. I've shown code which does that. I've explained ways to further the protection, such as by using Safe to disable the core's output routines except inside the module handling this. Using Safe, in fact, allows one to prevent variables from being in a scope where they are not wanted. Anything inside a Safe compartment has to be explicitly handed a variable in order to be able to get to that variable's value. By making the code for your program modular, to the point that each fundamentally different operation can be in a separate compartment, one can share the sensitive variables only with the compartments which need them. Any compartments which don't need to do output can be left unable to do so. Any compartments which need to do output but which don't need access to the sensitive variables can be part of a namespace that can't reach those variables. This part is all accomplished just by good use of Safe.pm. In addition, a restriction on printing variables other than those explicitly allowed can be helpful, I guess, but not all that necessary, as proper use of Safe keeps the scope of each sensitive variable very small, keeps the areas of the program which can do output fairly small, and keeps the parts where the scopes of the sensitive variables and the ability to do output overlap only where absolutely needed. Debugging those parts then becomes much simpler.
High-level white-collar crime is epidemic. 1. In a long project, how would I figure out which data is restricted? If the data is restricted 2,000 lines above where I tried to use it, short of an egg hunt, how would I know? (Seeing if the program dies when trying to print is an unacceptable solution.) 2. If I unrestrict the data for an operation, do I need to restrict it again? What if I forget, since I might have dozens of restricted variables? One omission could be disastrous. 3. And would people become more careless, thinking they are safe with this proposed option? (Read #2.) I am sure that there are others with my opinion, and I urge some of the more experienced monks to clearly explain the situations in which we would need restricted data... If each restriction class (i.e. CreditCard, CreditCardGateway, PinNumber, Password) is made equivalent to a unique bit flag, you could add them together to produce a "security level bit vector". This might force more than one kind of cleaning to be done, or might restrict more kinds of usage (i.e. is allowed in a certain db field, is not allowed in a cookie, output anywhere triggers a logged alert). This could be extended to objects, and maybe using @ISA to hold the different security classes would lend itself to describing the concatenation of restriction types. If extended to cleaning only certain fields of an object (or keys of a hash), you could, e.g., keep $o->{MothersMaidenName} while cleaning out $o->{PinNumber}. It could scan the symbol table for variables and hash keys which match certain built-in and customizable regexes, like /sec|pass|pwd|pin|salt/i, so you can enjoy reduced stress by maintaining the practice of naming sensitive variables a certain way, as in Hungarian notation (not used much in Perl though). This could also be used to drop a security policy on top of an existing module without changing the module itself (basically tainting specific parts of some data structures it uses).
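The bit-flag scheme suggested here — one unique bit per restriction class, combined into a "security level bit vector" — maps directly onto flag enums. A hypothetical Python sketch (the class names mirror the examples in the thread):

```python
from enum import IntFlag

class Restriction(IntFlag):
    """Each restriction class is a unique bit; values combine with |."""
    NONE        = 0
    CREDIT_CARD = 1
    PIN_NUMBER  = 2
    PASSWORD    = 4

# A value can carry several restriction classes at once.
level = Restriction.CREDIT_CARD | Restriction.PIN_NUMBER

assert Restriction.PIN_NUMBER in level    # PIN cleaning is required too
assert Restriction.PASSWORD not in level  # but no password handling
assert int(level) == 3                    # the combined bit vector
```

Checking whether a channel may receive a value then reduces to a bitwise test between the value's vector and the channel's approved vector, which is cheap enough to leave the common (unrestricted) case unaffected.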
Then perl -cw would tell you what it finds, e.g. "Hash key mypass used insecurely at User.pm line 100", in the case that you have a debugging statement left in. This is not of course security, but it is neat in terms of engraving that "line in the sand" a bit deeper. Some effects I could imagine being useful: Data::Dumper would show only securely cleaned strings. Variable $cc_pin would be cleaned before destruction so it is eradicated from memory. Security policy could be implemented at the prototype level. Maybe it could be mixed with other prototyping modules, to allow only certain things to be sent to certain gateways (the inverse of the original intent of the module). By catching how certain variables are used, this might be used for debugging. Being able to overlay policies over specific uses of parts of objects might be interesting for uses other than security. Maybe it would work like this. (Pardon me for saying 'secure' this, not 'restrict' this):

use CreditCard;
use Person;            # a Class::DBI object
use Data::Dumper;
use Data::Secure qw(SecurityTypes secure secureall);

my $secmgr = new Data::Secure->SecurityManager;      # to alter general operations
$secmgr->regex = '^sec|pass|cc_/i';                  # overwrite default regex
$secmgr->classes = qw(CreditCard CreditCardGateway); # each has a set of policies and cleaners associated with it

my $pincleaner = $secmgr->cleaner->new(CreditCard, qw(&cleanccpin &cleanccnum)); # cleaners called in series
$pincleaner->disable(&cleanccnum);                   # unload one

my $perry = new Person("Perry Rhodan");
print $perry->password;          # carps since "password" matches the regex

secure CreditCard $perry->card;  # now $perry->card ISA CreditCard

my $buckaroo = secure Person->new("Buckaroo Banzai"); # $buckaroo now secured by default policy
secure Person; # or just do this for all new instances of Person from now on

$buckaroo->addsec qw(PersonalData DenyCookies DenyTmpFolder); # add policy classes, so now ISA CreditCard, PersonalData, etc.
secure SecureRamen $buckaroo->noodles; # this is now safe to eat

secureall (CreditCard | PersonalData) Person; # secure all current and future instances of Person. Can you do this, which also needs CreditCard to be treated as a scalar value?

my $kyle = new Person "Kyle McLaughlin";
print Data::Dumper $kyle; # this would now substitute asterisks for digits in $kyle->card->cc_number

my CreditCardGateway $gw = $some_io_object;
print $gw $kyle->card->cc_number; # okay here and db only

To add something of my own: I once mentioned on PM, when I was writing a shopping cart app that did realtime credit card billing, that I was disconcerted at being able to find the credit card numbers by grepping the Linux swap file after perl had quit. People said, well, yeah. I was able to solve this, I think, by overwriting strings with garbage of the same length and then undefing them. Also I wrote an easy-to-use wrapper I call QuickCrypt.pm, which I sometimes use to quickly encrypt and websafe-pack form data with different algorithms. I am reminded of lazy loading in Class::DBI which, if it knew which fields were sensitive, might be induced to keep them unread normally. Perhaps just adding some patches to common modules to let them know some data is sensitive might be useful. Anyway, while developing channels to intentionally restrict your own actions is one thing, I wouldn't mind the converse as well, that is, to restrict outsiders (say on a shared server) as much as possible from potentially viewing your data. There is a danger of being seduced by obfuscation and not getting real security, but some things that pop into mind are: - overwriting used data - providing an encrypted IPC cache in RAM - transparently encrypting/decrypting sensitive scalars and lists so they are seldom visible (the interpreter could generate a key when launched, but that key will be in memory...)
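The trick mentioned above — overwriting a secret with garbage of the same length before discarding it, so it doesn't linger in swap — only works on mutable storage. A hedged Python sketch (Python strings are immutable, so a bytearray is used; this is best-effort hygiene only, since the runtime may have made copies you cannot reach):

```python
import secrets

# Keep the secret in a mutable buffer so it can be overwritten in place.
secret = bytearray(b"hunter2-db-password")

def wipe(buf: bytearray) -> None:
    """Overwrite every byte with random garbage, then zero it."""
    buf[:] = secrets.token_bytes(len(buf))  # garbage of the same length
    buf[:] = bytes(len(buf))                # then all zeroes

wipe(secret)
assert all(b == 0 for b in secret)  # the original bytes are gone from this buffer
```

Note that any str the secret was ever converted to, or any slice taken from it, escapes this wipe, which is why the thread's advice pairs overwriting with keeping the variable's scope as small as possible.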
- maintaining an ongoing encrypted I/O stream which data could be injected into without being obvious - generating a noisy stream that is hard for traffic analyzers to pull apart. These things all have in common the idea of keeping an admin of a shared server (or a malicious piece of spyware) in the dark as much as possible, and of removing clues about running processes. I don't know if any of these really would work against anything but the simplest thieves, though. Anyway, it seems that as long as you keep both eyes open, everything is relative. So if you can get Perl modules, the interpreter, the OS or the underlying hardware/network to cooperate with you, I'd say you have a fair chance of relative success. Probably I'd guess ease of use would be the second most important thing, after adding some magic like Abigail suggests. A quick brainstorm: a Tie::Scalar::SelfDestruct module. After FETCH is called a given number of times (default 1, but this can be changed via a param passed to tie), the data inside is undefed. Of course, you have to be careful that you call FETCH only that number of times. Any more, and you'll just get undef back. Any less, and you were better off not using it at all (though a quick while ($tied_scalar) { 0 } would take care of that if you had to). The usual problems with tie apply, of course. ---- : () { :|:& };: Note: All code is untested, unless otherwise stated

tie my $super_secret, 'Tie::Scalar::SelfDestruct', 1;
$super_secret = whatever();
print EMAIL_TO_BAD_GUY $super_secret;
print DATABASE $super_secret;

In fact, you are even worse off, because you don't know what you have sent to the bad guy. For most languages, that type of interpretation gets done at compile time. If you can play with pointers/change the permissions dynamically, it's easy to get around, you are right. But the DB user/password@sid should never go anywhere except to the database. (It shouldn't come back from the database, even.)
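The self-destructing scalar idea translates naturally into a read-once container. Here is a hypothetical Python equivalent of that Tie::Scalar::SelfDestruct sketch (the class name and API are invented for illustration):

```python
class SelfDestruct:
    """Holds a value that can be read a limited number of times, then vanishes."""

    def __init__(self, value, reads=1):
        self._value = value
        self._reads = reads

    def get(self):
        if self._reads <= 0:
            return None  # already destroyed, like the undef'd tied scalar
        self._reads -= 1
        if self._reads == 0:
            value, self._value = self._value, None  # drop our reference
            return value
        return self._value

token = SelfDestruct("s3cret", reads=1)
first = token.get()   # first read succeeds: "s3cret"
second = token.get()  # every later read gets None
```

As the thread points out, this shares the tied scalar's pitfall: an extra, accidental read before the intended one silently consumes the value, so the consumer gets None with no hint about where the value went.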
I don't think this is really useful, as it stands. It also sounds like something that should be done at the application level, not at the language or OS level. The hard part is making sure things bound for output file 1 aren't going to output file 2. A rather simplistic form of this can be accomplished in plain Perl without modules, so long as you only use a special restricted print sub for output. Here's an example:

#!/usr/bin/perl -w
use strict;

my %allowed;
my %file;
my ( $foo, $bar, $baz ) = ( 'foo', 'bar', 'baz' );
my $eol = "\n";

$allowed{ \$foo } = 'a';
$allowed{ \$bar } = 'a';
$allowed{ \$baz } = 'b';
$allowed{ \$eol } = 'b';

$file{ 'a' } = *STDERR;
$file{ 'b' } = *STDOUT;

sub rp {
    my $fh = shift;
    for ( my $i = 0; $i < @_; $i++ ) {
        if ( $fh eq $file{ $allowed{ \$_[$i] } } ) {
            print $_[$i];
        }
        else {
            print "\nAttempt to print unallowed data using restricted print\n";
        }
    }
}

rp( *STDERR, $foo, $bar, $baz, $eol );
rp( *STDOUT, $foo, $bar, $baz, $eol );

Now, that's not very useful if someone throws a print() into the program instead of using the restricted printing sub. If you could throw that functionality into a module and use Safe or some other way to make sure that only your restricted printing module can issue a print, printf, sprintf, write, etc., then you have a good start. If one could override the built-in output statements, that'd work, too. Part of the problem is that you don't really want to restrict a particular variable in an application when security is of such import. You want to make sure that nothing can be output except to where explicitly allowed. If you disagree with that statement and really do want to allow by default and restrict explicitly, then the example above wouldn't be too difficult to change in that direction. I'm not really sure how well the idea will help secure applications, as I haven't thought about it yet.
It shouldn't be too difficult to implement, though, even as a hack well above the core which wouldn't slow anything down when this idea isn't in use. I'd personally look at the output routines first, as I did here. Wrapping it up in the filehandle somehow, whether through IO::Handle, layers, tie, or whatever, could be an option, but that's back to explicit restriction instead of explicit allowing. (Yes, I do realize that I denounced it and then provided a possible implementation... perhaps I am crazy already.)
http://www.perlmonks.org/index.pl?node_id=327440
Better Error Messages for Flutter Imagine that it's blinking… … is probably an article I should have published on April 1st. Still, here it is. The Amiga had an iconic way to display fatal errors: the Guru Meditation alert. Let's recreate this for Flutter. Here is how I want to display the alert:

showAlert(context, 42);

Instead of complicated and talkative text messages, I will display succinct hexadecimal error codes which need no translation or localization. I'm utterly convinced that this is the future.

import 'package:flutter/widgets.dart';

void showAlert(BuildContext context, int code) {
  final hex = code.toRadixString(16).toUpperCase().padLeft(8, '0');
  ...
}

Notice how I do not try to include a library for left-padding my string, for improved stability and protection against rage-quitting developers. To display anything above the normal UI, an OverlayEntry can be used, which is then added to the current Overlay. Such an Overlay is automatically created as part of a WidgetsApp or MaterialApp widget. The overlay entry must be explicitly removed again. I'm using a GestureDetector so that the user has to tap the overlay to make it go away.

void showAlert(BuildContext context, int code) {
  final hex = code.toRadixString(16).toUpperCase().padLeft(8, '0');
  final black = Color(0xFF000000);
  final red = Color(0xFFFF0000);
  OverlayEntry overlay;
  overlay = OverlayEntry(builder: (context) {
    return Positioned(
      left: 0,
      right: 0,
      child: GestureDetector(
        onTap: () => overlay.remove(),
        child: Container(
          color: black,
          height: 128,
        ),
      ),
    );
  });
  Overlay.of(context).insert(overlay);
}

Because I don't want to depend on material.dart, I create my own Color objects. And because I need to refer to the overlay variable inside the onTap handler of the builder function, I cannot use the usual final variable definition that declares and initializes a variable in one step. Overlays are a bit special.
The Overlay widget is basically a Stack, so I can use a Positioned widget to position my alert (currently just a black Container wrapped inside a GestureDetector) at the top edge of the screen. At this point, I can display a black overlay which is removed when I tap it. Because an OverlayEntry isn't automatically wrapped with a DefaultTextStyle, Text widgets without a style, or with a TextStyle that does not explicitly prohibit inheritance, are displayed with a very ugly default text style (large reddish text with yellow double-underlining). Therefore, it is important to add an inherit: false property like so:

Container(
  alignment: Alignment.center,
  color: black,
  height: 128,
  child: Text(
    'Software Failure. Tap to continue.\n\n'
    'Guru Meditation #48454C50.$hex',
    textAlign: TextAlign.center,
    style: TextStyle(
      color: red,
      fontFamily: 'Courier',
      fontSize: 16,
      fontWeight: FontWeight.bold,
      inherit: false,
    ),
  ),
)

It starts to look like the real deal (please ignore the notch for now). Let's add the iconic red border next. There's one problem, though: I don't want the text to wrap (which would happen on smaller devices). Furthermore, if you compare the Courier font I used with the original Amiga font, it runs too wide. So let's transform the Text widget by scaling it horizontally to 75%. To keep it at the same width, I have to enlarge the width to 133% at the same time. The text should now always fit the screen.

Container(
  alignment: Alignment.center,
  decoration: BoxDecoration(
    color: black,
    border: Border.all(color: red, width: 8),
  ),
  padding: EdgeInsets.symmetric(
    horizontal: 8,
    vertical: 16,
  ),
  child: FractionallySizedBox(
    widthFactor: 4 / 3,
    child: Transform(
      alignment: Alignment.center,
      transform: Matrix4.identity().scaled(3 / 4, 1),
      child: Text( ... ),
    ),
  ),
),

We're almost there… As you can probably see in my screenshots, if the device has a notch, the alert doesn't look right. I'd like to move the box below the notch but cover everything above it in black.
The Amiga also displays a small black border around the red border, so let's add this, too. A SafeArea widget inside another Container can do both. I need to disable its bottom margin, though.

return Positioned(
  left: 0,
  right: 0,
  child: Container(
    color: black,
    child: SafeArea(
      minimum: EdgeInsets.all(8),
      bottom: false,
      child: GestureDetector( ... ),
    ),
  ),
);

And there it is, perfectly aligned. For the final and most important step, I will add blinking. The following might be a bit hacky, I don't know, but it works and I don't have to create a custom StatefulWidget. I'm using a StatefulBuilder instead. Each time that widget asks its builder function to recreate the UI, I set up a timer to toggle the border color from red to black and back again after 700 ms. Then I'm using an AnimatedContainer to implicitly animate this color change within 300 ms. Some things are just too easy in Flutter.

GestureDetector(
  onTap: () {
    blink = null;
    overlay.remove();
  },
  child: StatefulBuilder(
    builder: (context, setState) {
      Future.delayed(Duration(milliseconds: 700)).then((_) {
        if (blink != null) setState(() => blink = !blink);
      });
      return AnimatedContainer(
        duration: Duration(milliseconds: 300),
        curve: Curves.easeInOut,
        alignment: Alignment.center,
        decoration: BoxDecoration(
          color: black,
          border: Border.all(color: blink ? red : black, width: 8),
        ),
        ...
      );
    },
  ),
),

Here is the hacky part: because Flutter throws an exception if I tap the alert and setState is then called on a disposed widget, I need to protect myself against this case by setting blink to null and explicitly checking for it. Perhaps I should have used a Timer instead of a Future, because a timer can be cancelled, I think. The future is inevitable. Back to the future… And there you have it, an alert box that looks much better than the usual modal dialog.
And at least with old folks like myself, people will have a positive nostalgic feeling instead of cold-blooded anger because your app didn't work for them as expected. Here is the source code, by the way. Thanks for reading ❤
https://morioh.com/p/5d04e0bc9a4b
When you want to add some style to your application, you likely look for ways to make your user interface stand out. Whether it is using a specific font or a different color palette, you want the user to feel attracted to your UI. One way to customize is to update your icons. If you are a mobile developer, regardless of the platform you develop for, there is a straightforward process for adding icons to your application. In Flutter, it's not that complicated, but there are some things you should be aware of so that you don't make time-consuming mistakes. How to Customize the Application Launcher Icon Instead of using the generic application icon Flutter provides, you can create your own. To do that, we will need to use a package called Flutter Launcher Icons. We'll go through creating one step by step. This is how your launcher icon looks by default: Let's assume we want this image to be our application launcher icon: First, add the image you would like to use as that icon inside your project under the assets folder (if you don't have an assets folder, create one): Then add the dependency to your pubspec.yaml file under dev_dependencies:

dev_dependencies:
  flutter_launcher_icons: "^0.9.2"

Add this configuration inside your pubspec.yaml file:

flutter_icons:
  android: "launcher_icon"
  ios: true
  image_path: "assets/doughnut.png"

The flutter_icons configuration has several keys to alter what is going to be rendered and for which platform. - android/ios – specify for which platform you want to generate an icon. You can also write the file name instead of true. - image_path – the path to the asset you wish to make into the application launcher icon. For example, "assets/doughnut.png". There are more configurations available, but we won't delve into them here. You can find out more by going here. Now, run flutter pub get in the terminal or click Pub get inside the IDE.
Run the command below in the terminal:

flutter pub run flutter_launcher_icons:main

Run your application and you should see that the launcher icon has changed.

How to Generate Custom Icons in Flutter

We will be able to generate custom icons through FlutterIcon.com. It allows us to either:

- Upload an SVG that gets converted into an icon
- Choose from a huge selection of icons from a different set of icon packages

☝️ There is a package called FlutterIcon that has all of the icons shown, but due to its heavy size, I recommend only choosing the icons that you need and not using it. Let’s demonstrate how to import custom icons into your application using this website. Imagine we have the following form in our application: You can see that we used icons for each TextFormField. Below is the code for the first TextFormField:

TextFormField(
  controller: pillNameTextEditingController,
  decoration: const InputDecoration(
      border: OutlineInputBorder(),
      hintText: "What is the pill's name?",
      prefixIcon: Icon(Icons.title)),
  validator: (value) {
    if (value == null || value.isEmpty) {
      return 'Please enter a pill name';
    }
    return null;
  },
)

How about we change the first TextFormField’s icon into something more relevant? On FlutterIcon.com:

- Choose the icons that you want to use/upload an SVG file
- Give a meaningful name to your icon class (We’ll call our class CustomIcons)
- Press Download

In the .zip folder that you downloaded, there are several files:

- A fonts folder with a TTF file with the name of the class you chose
- A config.json file that’s used to remember what icons you chose
- A dart class with the name of the class you chose

Inside your project, import the .ttf file into a folder called fonts under the root directory. It should look like this: Place the .dart class inside your lib folder.
If you take a look inside the dart file, you will see something similar (you might see more IconData objects if you chose more than one icon to download):

import 'package:flutter/widgets.dart';

class CustomIcons {
  CustomIcons._();

  static const _kFontFam = 'CustomIcons';
  static const String? _kFontPkg = null;

  static const IconData pill = IconData(0xea60, fontFamily: _kFontFam, fontPackage: _kFontPkg);
}

Add the following to your pubspec.yaml file:

fonts:
  - family: CustomIcons
    fonts:
      - asset: fonts/CustomIcons.ttf

Run flutter pub get in the terminal or click Pub get inside the IDE. Go to the place where you want to use your custom icons and use it like this:

TextFormField(
  controller: pillNameTextEditingController,
  decoration: const InputDecoration(
      border: OutlineInputBorder(),
      hintText: "What is the pill's name?",
      prefixIcon: Icon(CustomIcons.pill)),
  validator: (value) {
    if (value == null || value.isEmpty) {
      return 'Please enter a pill name';
    }
    return null;
  },
)

Troubleshooting Custom Icons in Flutter

If your custom icons are showing up as squares with X’s in them, something is not right. You might also see the following warnings in the logger:

Warning: No fonts specified for font CustomIcons
Warning: Missing family name for font.

This could be for several reasons:

- Make sure your pubspec.yaml file is valid, meaning there are no extra spaces, wrong indentation, and so on. You can use this tool for it.
- Make sure that you have correctly referenced your font in your pubspec.yaml file.
- Make sure that you have placed your .ttf file inside a fonts directory under the root directory of your project (not inside the assets directory).
- Uninstall your application and reinstall it on your device.

If you’d like to take a look at a real application using both types of icons, you can check it out here: And you can see the source code here: Thank you for reading!
https://envo.app/how-to-add-custom-icons-to-your-flutter-application/
php's strtr for python

php has the strtr function:

strtr('aa-bb-cc', array('aa' => 'bbz', 'bb' => 'x', 'cc' => 'y')); # bbz-x-y

It replaces dictionary keys in a string with corresponding values and (important) doesn't replace already replaced strings. A naive attempt to write the same in python:

def strtr(strng, replace):
    for s, r in replace.items():
        strng = strng.replace(s, r)
    return strng

strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})

returns xz-x-y which is not what we want (bb got replaced again). How to change the above function so that it behaves like its php counterpart? (I would prefer an answer without regular expressions, if possible). Upd: some great answers here. I timed them and found that for short strings Gumbo's version appears to be the fastest; on longer strings the winner is the re solution:

# 'aa-bb-cc'
0.0258 strtr_thg
0.0274 strtr_gumbo
0.0447 strtr_kojiro
0.0701 strtr_aix

# 'aa-bb-cc'*10
0.1474 strtr_aix
0.2261 strtr_thg
0.2366 strtr_gumbo
0.3226 strtr_kojiro

My own version (which is slightly optimized Gumbo's):

def strtr(strng, replace):
    buf, i = [], 0
    while i < len(strng):
        for s, r in replace.items():
            if strng[i:len(s)+i] == s:
                buf.append(r)
                i += len(s)
                break
        else:
            buf.append(strng[i])
            i += 1
    return ''.join(buf)

Complete codes and timings: Here is a naive algorithm: Use an index to walk the original string character by character and check for each index whether one of the search strings is equal to the string from the current index on. If a match is found, push the replacement in a buffer and proceed the index by the length of the matched string. If no match is found, proceed the index by one. At the end, concatenate the strings in the buffer to a single string.
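As a quick sanity check, the optimized version above can be exercised directly (a minimal runnable test of the function exactly as listed):

```python
def strtr(strng, replace):
    buf, i = [], 0
    while i < len(strng):
        for s, r in replace.items():
            if strng[i:len(s)+i] == s:
                buf.append(r)
                i += len(s)
                break
        else:
            # No search string matched at this index: keep the character.
            buf.append(strng[i])
            i += 1
    return ''.join(buf)

# Replaced text is never rescanned, matching PHP's strtr:
print(strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'}))  # bbz-x-y
```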
def strtr(strng, replace):
    buffer = []
    i, n = 0, len(strng)
    while i < n:
        match = False
        for s, r in replace.items():
            if strng[i:len(s)+i] == s:
                buffer.append(r)
                i = i + len(s)
                match = True
                break
        if not match:
            buffer.append(strng[i])
            i = i + 1
    return ''.join(buffer)

The following uses regular expressions to do it:

import re

def strtr(s, repl):
    pattern = '|'.join(map(re.escape, sorted(repl, key=len, reverse=True)))
    return re.sub(pattern, lambda m: repl[m.group()], s)

print(strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'}))

Like the PHP version, this gives preference to longer matches.

def strtr(strng, replace):
    if replace and strng:
        s, r = replace.popitem()
        return r.join(strtr(subs, dict(replace)) for subs in strng.split(s))
    return strng

j = strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
assert j == 'bbz-x-y', j

str.translate is the equivalent, but can only map to single characters.
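To see why the regex answer sorts the patterns longest-first, here is a short runnable check (restating the regex-based function from above):

```python
import re

def strtr(s, repl):
    # Sort keys longest-first so 'xxa' wins over its prefix 'xx' in the alternation.
    pattern = '|'.join(map(re.escape, sorted(repl, key=len, reverse=True)))
    return re.sub(pattern, lambda m: repl[m.group()], s)

# Already-replaced text is never rescanned, matching PHP's strtr:
print(strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'}))   # bbz-x-y
# Longer keys take precedence over shorter prefixes:
print(strtr('xxa-bb-cc', {'xx': 'bbz', 'xxa': 'bby'}))          # bby-bb-cc
```

Without the sorted(..., key=len, reverse=True), the alternation would be tried in arbitrary dictionary-key order and a shorter prefix could win.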
The answers on this thread are so out-dated. Here we go...

Option #1: Use the str.format() function to handle this:

"Hello there {first_name} {last_name}".format(first_name="Bob", last_name="Roy")

Option #2: Use the Template class

from string import Template
t = Template('Hello there $first_name $last_name')
t.substitute(first_name="Bob", last_name="Roy")

Reference: Python String Formatting Best Practices
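Both options can be verified quickly; note that Template also offers safe_substitute(), which leaves unknown placeholders intact instead of raising KeyError:

```python
from string import Template

# Option 1: str.format with named placeholders
greeting = "Hello there {first_name} {last_name}".format(
    first_name="Bob", last_name="Roy")
print(greeting)  # Hello there Bob Roy

# Option 2: string.Template with $-style placeholders
t = Template('Hello there $first_name $last_name')
print(t.substitute(first_name="Bob", last_name="Roy"))  # Hello there Bob Roy

# safe_substitute tolerates missing keys:
print(t.safe_substitute(first_name="Bob"))  # Hello there Bob $last_name
```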
- We're both missing this (from the strtr doc): The longest keys will be tried first.
- No, it doesn't give preference to longer matches, it depends on the arbitrary order of dictionary keys: strtr('xxa-bb-cc', {'xx': 'bbz', 'xxa': 'bby'}) -> 'bbza-bb-cc'. Using sorted(repl.keys(), key=len, reverse=True) in place of repl.keys() should fix that.
- @Duncan: How surprising, thanks for pointing out (I always thought Python's re gave the longest match, but clearly it doesn't.)
- Repetitions x*, x+, x? and x{m,n} are all greedy, so they'll repeat x as much as they're allowed and able; x*?, x+?, x??, x{m,n}? are all non-greedy, so they match as short as possible. x|y is also non-greedy in the sense that if x matches, the engine won't even consider y. That's what happened here: the alternation is tested strictly left to right and stops as soon as it finds a match.
- @Duncan: That makes sense, thanks for clarifying (I knew about the greedy and non-greedy repetitions, but didn't know about the | operator).
- Thanks, this solution appears to be the fastest on longer subject strings (see the update).
- Looks cool, but making 1+2+...+len(repl) recursive calls... I don't know.
- Hey, you asked for a non-regex version that behaves like php's, you didn't ask for fast. ;) (Besides, I suspect copying the dict is worse than the recursive calls.)
- @kojiro: fair enough. You definitely win the beauty prize in this thread. Too bad I can't accept multiple answers...
- BTW, is there any reason why you used ** instead of just dict(replace)?
- No good reason. The bad reason is that I didn't take time to check that dict(adict) would make a copy, instead of just returning the same dict. – updated answer code.
https://thetopsites.net/article/50447923.shtml
C++ Keywords: bool

Introduction

The Boolean data type is used to declare a variable whose value will be set as true (1) or false (0). To declare such a value, you use the bool keyword. The variable can then be initialized with a starting value. A Boolean constant is used to check the state of a variable, an expression, or a function, as true or false. You can declare a Boolean variable as:

bool GotThePassingGrade = true;

Later in the program, for a student who got a failing grade, you can assign the other value, like this:

GotThePassingGrade = false;

Here is an example:

#include <iostream>
using namespace std;
//---------------------------------------------------------------------------
int main(int argc, char* argv[])
{
    bool MachineIsWorking = true;
    cout << "Since this machine is working, its value is "
         << MachineIsWorking << endl;

    MachineIsWorking = false;
    cout << "The machine has stopped operating. "
         << "Now its value is " << MachineIsWorking << endl;

    return 0;
}
//---------------------------------------------------------------------------
http://www.functionx.com/cpp/keywords/bool.htm
Model instance reference¶

This document describes the details of the Model API.

Creating objects¶

To create a new instance of a model, just instantiate it like any other Python class.

Validating objects¶

There are three steps involved in validating a model:

- Validate the model fields - Model.clean_fields()
- Validate the model as a whole - Model.clean()
- Validate the field uniqueness - Model.validate_unique()

All three steps are performed when you call a model's full_clean() method. This method calls Model.clean_fields(), Model.clean(), and Model.validate_unique(), in that order. Note that full_clean() will not be called automatically when you call your model's save() method, nor as a result of ModelForm validation. In the case of ModelForm validation, Model.clean_fields(), Model.clean(), and Model.validate_unique() are all called individually. You'll need to call full_clean manually when you want to run one-step model validation for your own manually created models. For example:

try:
    article.full_clean()
except ValidationError as e:
    # Do something based on the errors contained in e.message_dict.
    # Display them to a user, or handle them programmatically.
    pass

The first step full_clean() performs is to clean each individual field. The second step is Model.clean(). This method should be used to provide custom model validation, and to modify attributes on your model if desired. For instance, you could use it to automatically provide a value for a field, or to do validation that requires access to more than a single field:

def clean(self):
    from django.core.exceptions import ValidationError
    ...

Finally, full_clean() will check any unique constraints on your model.

Saving objects¶

To save an object back to the database, call save(). If you want customized saving behavior, you can override this save() method. See Overriding predefined model methods for more details. The model save process also has some subtleties; see the sections below.

Auto-incrementing primary keys¶

The pk property.

Explicitly specifying auto-primary-key values¶

What happens when you save?¶

When you save an object, Django performs the following steps:

Emit a pre-save signal.
The signal django.db.models.signals.pre_save is sent, allowing any functions listening for that signal to take some customized action.

How Django knows to UPDATE vs. INSERT¶

If the object's primary key attribute is set to a value that evaluates to True (i.e., a value other than None or the empty string), Django executes a SELECT query to determine whether a record with the given primary key already exists.

- If the record with the given primary key does already exist, Django executes an UPDATE query.
- If the object's primary key attribute is not set, or if it's set but a record doesn't exist, Django executes an INSERT.

Forcing an INSERT or UPDATE¶

Updating attributes based on existing fields¶

See the documentation on F() expressions and their use in update queries.

Specifying which fields to save¶

Deleting objects¶

Issues a SQL DELETE for the object. This only deletes the object in the database; the Python instance will still exist and will still have data in its fields. For more details, including how to delete objects in bulk, see Deleting objects. If you want customized deletion behavior, you can override the delete() method. See Overriding predefined model methods for more details.

Other model instance methods¶

A few object methods have special purposes. While this code is correct and simple, it may not be the most portable way to write this kind of method. The reverse() function is usually the best approach. For example:

def get_absolute_url(self):
    ...

Note that the URL specification does not allow unicode strings containing characters outside the ASCII range at all.

Extra instance methods¶

In addition to save() and delete(), a model object might have some of the following methods: For every field that has choices set, the object will have a get_FOO_display() method, where FOO is the name of the field. This method returns the "human-readable" value of the field. For example:

from django.db import models

class Person(models.Model):
    SHIRT_SIZES = (
        ('S', 'Small'),
        ('M', 'Medium'),
        ('L', 'Large'),
    )
    name = models.CharField(max_length=60)
    shirt_size = models.CharField(max_length=2, choices=SHIRT_SIZES)

>>> p = Person(name="Fred Flintstone", shirt_size="L")
>>> p.save()
>>> p.get_shirt_size_display()
'Large'
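The UPDATE-vs-INSERT decision described above can be sketched in plain Python (an illustrative stand-in only, not Django's actual implementation; save_decision and record_exists are made-up names for this sketch):

```python
def save_decision(pk, record_exists):
    """Illustrative sketch of Django's save() choice, not its real code.

    pk: the object's primary key attribute (None/'' means "not set").
    record_exists: callable standing in for the SELECT query.
    """
    if pk is not None and pk != '':
        if record_exists(pk):
            return 'UPDATE'
    # pk not set, or set but no matching record was found
    return 'INSERT'

# A fake existence check standing in for the database SELECT:
existing = {1, 2, 3}
print(save_decision(2, lambda pk: pk in existing))     # UPDATE
print(save_decision(99, lambda pk: pk in existing))    # INSERT
print(save_decision(None, lambda pk: pk in existing))  # INSERT
```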
https://docs.pythontab.com/django/django1.5/ref/models/instances.html
Example 1: Check if a string is numeric

public class Numeric {
    public static void main(String[] args) {
        String string = "12345.15";
        boolean numeric = true;
        try {
            Double num = Double.parseDouble(string);
        } catch (NumberFormatException e) {
            numeric = false;
        }
        if(numeric)
            System.out.println(string + " is a number");
        else
            System.out.println(string + " is not a number");
    }
}

Output

12345.15 is a number

In the above program, we have a String named string that contains the string to be checked. We also have a boolean value numeric which stores if the final result is numeric or not. To check if the string contains numbers only, in the try block, we use Double's parseDouble() method to convert the string to a Double. If it throws an error (i.e. NumberFormatException error), it means the string is not a number and numeric is set to false.

Example 2: Check if a string is numeric or not using regular expressions (regex)

public class Numeric {
    public static void main(String[] args) {
        String string = "-1234.15";
        boolean numeric = true;
        numeric = string.matches("-?\\d+(\\.\\d+)?");
        if(numeric)
            System.out.println(string + " is a number");
        else
            System.out.println(string + " is not a number");
    }
}

Output

-1234.15 is a number

Here:

- -? allows zero or one minus sign
- \\d+ checks that the string has at least 1 or more numbers (\\d)
- (\\.\\d+)? allows zero or more of the given pattern (\\.\\d+) in which \\. checks if the string contains . (decimal points) or not
- If yes, it should be followed by at least one or more number \\d+.
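For comparison, the same regular expression works nearly unchanged in Python (a cross-language sketch, not part of the original Java example):

```python
import re

def is_numeric(s):
    # -? optional minus sign, \d+ integer part, (\.\d+)? optional fractional part
    return re.fullmatch(r'-?\d+(\.\d+)?', s) is not None

print(is_numeric('-1234.15'))  # True
print(is_numeric('12345.15'))  # True
print(is_numeric('abc'))       # False
```

re.fullmatch plays the role of Java's String.matches(), which implicitly anchors the pattern to the whole string.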
https://www.programiz.com/java-programming/examples/check-string-numeric
> > > > Should the actual atomic type be present in the attribute element? > > > What's the use case for this? The type is declared in the index which > should > > > be know by the user. It could be done but it's not clear how this would > look > > > like. Do you want the QName of the atomic type of the sequence type? > > > > > > > If you have "by @id" where @id can be 1 or "1", you might want to create > > two > > > > index keys? > > > I don't understand this. > > Is it possible to have an index key as xs:anyAtomicType? > > If yes, you might have an index key of value xs:string("1") and another key > of > > value xs:integer(1). > > Is this use case possible? > > > > It is not for value indexes, but it is for general indexes. Currently, the > keys() function is not implemented for general indexes. If it is implemented > for general indexes, then the type info should be included. > > > > > > > > > If a key returns (), you might want to distinguish with empty string for > > > > probing later? > > > I have made a fix such that the value attribute is only present if the > entry > > > is not the empty sequence. > > Cool. > > Consider the following index key declaration: > > by xs:string(@team) as xs:string, > > xs:string(@country) as xs:string?, > > xs:string(@league) as xs:string?; > > > > If keys() returns: > > <keys> > > <key value="Foo" /> > > <key value="Bar" /> > > </keys> > > It's not good because I cannot probe the index with this info. > > Does it makes sense? > > > Yes, William is right. I think we need an extra attribute per key item. The > attribute name would be something like "is_empty", and its value "true" or > "false". The is_empty attribute could be optional; if it is not there, "false" > is implied. > > By the way, I really dislike "attribute" as the tag name of an element. > Something like "key_item" would have been much better. 
> > > > > > > > > > > > By using keys(), I got the following output: > > > > <key xmlns="- > > > > xquery.com/modules/store/static/indexes/dml"><attribute > > > ></attribute><attribute value="4"></attribute></key> > > > > > > > > Is the namespace useful or does it just make the life of the user > harder? > > > It's consistent with the keys function of the maps module. We can remove > it > > > but should remove it for both then (which is a backwards incompatible > > change). > > I would fix it here and fix it for map() in 3.0. > > Except if the local name attribute means something. > > > > I don't have a strong preference for this. Either way looks ok to me. So sorry it's a mistake from me, I meant to write about the local-name attribute, I'm fine with the namespace. -- Your team Zorba Coders is subscribed to branch lp:zorba.
https://www.mail-archive.com/zorba-coders@lists.launchpad.net/msg03831.html
This content was originally released as a download on Leanpub Full-Stack Python Let me start by getting this out of the way: I really like programming in Python, and I'm not a big fan of JavaScript. But let’s face it, JavaScript is the way of the web, and Python doesn’t run in a web browser. So end of story, right? Well not so fast, because just like the popular TypeScript language gets transpiled into JavaScript to run in a web browser, Transcrypt does the same thing for Python. Because of the way Transcrypt maps Python data types and language constructs to JavaScript, your Python code is able to utilize the full ecosystem of JavaScript libraries that exist. Transcrypt acts as a bridge that enables you to take advantage of existing JavaScript web application technologies rather than trying to reinvent them. And, it does it in a way that doesn't significantly affect application performance over using plain JavaScript, or that requires a large runtime module to be downloaded to the client. And though we use JavaScript libraries, we don't have to code in JavaScript to use their APIs. 
Features of Transcrypt include:

- It's PIP installable
- Python code is transpiled to JavaScript before being deployed
- It uses a very small JavaScript runtime (~40K)
- It can generate sourcemaps for troubleshooting Python in the browser
- The generated JavaScript is human-readable
- The generated JavaScript can be minified
- Performance is comparable to native JavaScript
- It maps Python data types and language constructs to JavaScript
- It acts as a bridge between the Python and JavaScript worlds
- It supports almost all Python built-ins and language constructs
- It only has limited support for the Python standard library
- Your Python code can “directly” call JavaScript functions
- Native JavaScript can call your Python functions
- It only supports 3rd party Python libraries that are pure Python

npm instead of pip

Most Python language constructs and built-ins have been implemented in Transcrypt, so working with standard Python objects like lists, dictionaries, strings, and more will feel just like Python should. Generally speaking however, third-party Python libraries are not supported unless the library (and its dependencies) are pure Python. What this means is that instead of turning to urllib or the requests library when you need to make an HTTP request from your web browser application, you would utilize window.fetch() or the JavaScript axios library instead. But you would still code to those JavaScript libraries using Python.

Installation

Getting started with Transcrypt is pretty easy. Ideally, you would want to create a Python virtual environment for your project, activate it, and then use PIP to install Transcrypt.
Transcrypt currently supports Python 3.9 or Python 3.7, so you will need to create your virtual environment with one of those versions, and then install the appropriate version of Transcrypt:

$ python3.9 -m venv venv
or
$ python3.7 -m venv venv

$ source venv/bin/activate  (for Windows use venv\Scripts\activate)

(venv) $ pip install transcrypt==3.9
or
(venv) $ pip install transcrypt==3.7.16

Hello World

With Transcrypt installed, we can try a simple Hello World web application to see how it works. We'll create two files: a Python file with a few functions, and an HTML file that we will open up in a web browser:

Listing 1: hello.py

def say_hello():
    document.getElementById('destination').innerHTML = "Hello World!"

def clear_it():
    document.getElementById('destination').innerHTML = ""

Listing 2: hello.html

<!DOCTYPE html>
<html lang="en">
<body>
    <script type="module">
        import {say_hello, clear_it} from "./__target__/hello.js";
        document.getElementById("sayBtn").onclick = say_hello;
        document.getElementById("clearBtn").onclick = clear_it;
    </script>
    <button type="button" id="sayBtn">Click Me!</button>
    <button type="button" id="clearBtn">Clear</button>
    <div id="destination"></div>
</body>
</html>

We then transpile the Python file with the Transcrypt CLI:

(venv) $ transcrypt --nomin --map hello

Here, we passed the transcrypt command three arguments:

- --nomin turns off minification to leave the generated code in a human-readable format
- --map generates sourcemaps for debugging Python code in the web browser
- hello is the name of the python module to transpile

We can serve up the Hello World application using the built-in Python HTTP server:

(venv) $ python -m http.server

This starts up a webserver that serves up files in the current directory, from which we can open our HTML file at:

As you can see with this simple demonstration, we have Python calling methods of JavaScript objects using Python syntax, and JavaScript calling "Python" functions that have been transpiled.
And as mentioned earlier, the generated JavaScript code is quite readable:

Listing 3 (Generated code): __target__/hello.js

// Transcrypt'ed from Python
import {AssertionError, ... , zip} from './org.transcrypt.__runtime__.js';
var __name__ = '__main__';
export var say_hello = function () {
    document.getElementById ('destination').innerHTML = 'Hello World!';
};
export var clear_it = function () {
    document.getElementById ('destination').innerHTML = '';
};
//# sourceMappingURL=hello.map

Sourcemaps

To demonstrate the sourcemap feature, we can again create two source files: a Python file with a function to be transpiled, and an HTML file that will be the entry point for our application in the web browser. This time, our Python file will have a function that outputs text to the web browser console using both JavaScript and Python methods, along with a JavaScript method call that will generate an error at runtime:

Listing 4: sourcemap.py

def print_stuff():
    console.log("Native JS console.log call")
    print("Python print")
    console.invalid_method("This will be an error")

Listing 5: sourcemap.html

<!DOCTYPE html>
<html lang="en">
<body>
    <script type="module">
        import {print_stuff} from "./__target__/sourcemap.js";
        document.getElementById("printBtn").onclick = print_stuff;
    </script>
    <button type="button" id="printBtn">Print</button>
</body>
</html>

(venv) $ transcrypt --nomin --map sourcemap

This time, with the built-in Python HTTP server started using:

(venv) $ python -m http.server

We can open our test application at: If you open the developer console in the web browser and then click the button, the first two calls will execute, printing the text to the web browser console. The call to the JavaScript console.log() method behaves as you would expect. But as you can see here, the Python print() function ends up getting transpiled to call the JavaScript console.log() method as well.
The third function call generates an error since we are trying to call a nonexistent method of the JavaScript console object. However, what's nice in this case is that the sourcemap can direct us to the cause of the problem in our Python source file. So, even though it is the generated JavaScript that is actually running in the web browser, using a sourcemap, we can still view our Python code right in the web browser and see where the error occurred in the Python file as well.

React

Now that we've seen how Transcrypt lets us make calls to JavaScript, let's step it up and use Transcrypt to make calls to the React library. We'll start with another simple Hello World application again, but this time doing it the React way. We'll stick with the two source files: a python file to be transpiled and an HTML file that will be opened in a web browser. The HTML file will be doing a little extra work for us in that it will be responsible for loading the React JavaScript libraries.

Listing 6: hello_react.py

useState = React.useState
el = React.createElement

def App():
    val, setVal = useState("")

    def say_hello():
        setVal("Hello React!")

    def clear_it():
        setVal("")

    return [
        el('button', {'onClick': say_hello}, "Click Me!"),
        el('button', {'onClick': clear_it}, "Clear"),
        el('div', None, val)
    ]

def render():
    ReactDOM.render(
        el(App, None),
        document.getElementById('root')
    )

document.addEventListener('DOMContentLoaded', render)

Listing 7: hello_react.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8"/>
    <script crossorigin </script>
    <script crossorigin </script>
    <script type="module" src="__target__/hello_react.js"></script>
</head>
<body>
    <div id="root">Loading...</div>
</body>
</html>

Now transpile the Python file with Transcrypt:

(venv) $ transcrypt --nomin --map hello_react

Once again, after Transcrypt is done generating the JavaScript files, start up the built-in Python HTTP server using:

(venv) $ python -m http.server

Then open the demo React application at: While
functionally the same as the first demo application we did, this time React adds dynamically generated HTML as a child of a specified element - in this case, the "root" div. Here, we added some convenience variables, useState and el, to map the global React methods to local Python variables. The React createElement() method is the workhorse of the library and is used to generate HTML elements in the browser dynamically. React is declarative, functional, and is based on state. What this means, is that you define the view, and then React handles when and how it gets updated when there are changes in state. By design, React state variables are immutable and use a setter function to make updates. This helps React to know when changes to state occur, so it can then re-render the view as needed. In this example, we used the React useState() method to create the val variable and its corresponding setVal() setter function. The return statement of a React functional component generally consists of a number of nested and chained calls to the React createElement() function that collectively form a tree structure of HTML elements and/or React components. This is where the view is declaratively defined. It may take some time to get more comfortable with this if you are not used to doing functional programming in Python. The ReactDOM render() function takes the top-level React component and a reference to the HTML element to attach it to in the DOM. This is where it adds the dynamically generated HTML tree that React produces as a child of the specified element. Building a React Application Having done a simple React application, let's now create one that has a few more moving parts. This demo will take a value entered through the UI and add it to a list when submitted. Most web applications of any utility will get large enough to where it becomes too unwieldy to manage manually. This is where package managers and application bundlers come into play. 
For this next example, we'll use the Parcel bundler to build and bundle the application so you can see what this developer stack might look like for larger applications. First, we need to install the necessary JavaScript libraries to support the development toolchain. This does require Node.js to be installed on your system so that we can use the Node Package Manager. We start by initializing a new project and installing the Parcel bundler library along with the plug-in for Transcrypt: $ npm init -y $ npm install parcel-bundler --save-dev $ npm install parcel-plugin-transcrypt --save-dev Then we can install the React libraries: $ npm install react@16 react-dom@16 Because of a version incompatibility, there is a file in the current Transcrypt plug-in that requires a patch. The file in question is: ./node_modules/parcel-plugin-transcrypt/asset.js In that file, change line 2 that loads the Parcel Logger module from this: const logger = require('parcel-bundler/src/Logger'); to this: const logger = require('@parcel/logger/src/Logger'); Once this modification is made to change the location of the Parcel Logger module, the Transcrypt plug-in for Parcel should be working. NOTE FOR WINDOWS USERS: For those of you using Windows, two more changes need to be made to the asset.js file for it to work in Windows environments. The first is to modify the default Transcrypt build configuration to just use the version of Python that you set your virtual environment up with. To do that, change line 14 that defines the Transcrypt command to simply use python instead of python3, changing it from this: "command": "python3 -m transcrypt", to this: "command": "python -m transcrypt", The second change has to do with modifying an import file path so that it uses Windows-style back-slashes instead of the Linux/Mac style forward-slashes. For this modification, we can use a string replace() method on line 143 to make an inline correction to the file path for Windows environments. 
So change this line:

```javascript
this.content = `export * from "${this.importPath}";`;
```

to this:

```javascript
this.content = `export * from "${this.importPath.replace(/\\/g, '/')}";`;
```

At some point, I would expect that a modification will be incorporated into the parcel-plugin-transcrypt package so that this hack can be avoided in the future.

Now that we have a bundler in place, we have more options as to how we work with JavaScript libraries. For one, we can now take advantage of the Node require() function that allows us to control the namespace that JavaScript libraries get loaded into. We will use this to isolate our Python-to-JavaScript mappings to one module, which keeps the rest of our code modules all pure Python.

Listing 8: pyreact.py

```python
# __pragma__ ('skip')
def require(lib):
    return lib


class document:
    getElementById = None
    addEventListener = None
# __pragma__ ('noskip')

# Load React and ReactDOM JavaScript libraries into local namespace
React = require('react')
ReactDOM = require('react-dom')

# Map React JavaScript objects to Python identifiers
createElement = React.createElement
useState = React.useState


def render(root_component, props, container):
    """Loads main react component into DOM"""
    def main():
        ReactDOM.render(
            React.createElement(root_component, props),
            document.getElementById(container)
        )

    document.addEventListener('DOMContentLoaded', main)
```

At the top of the file, we used one of Transcrypt's __pragma__ compiler directives to tell it to ignore the code between the skip/noskip block. The code in this block doesn't affect the transpiled JavaScript, but it keeps any Python linter that you may have in your IDE quiet by stubbing out the JavaScript commands that are otherwise unknown to Python.

Next, we use the Node require() function to load the React JavaScript libraries into the module namespace. Then, we map the React createElement() and useState() methods to module-level Python variables as we did before.
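The skip/noskip stub pattern can be exercised in plain CPython, which is exactly why it keeps linters quiet: the stubs make the JavaScript-only names valid Python. A minimal sketch, mirroring the names in the listing (the behavior here is illustrative; in the browser, require() is the real Node function):

```python
# The stub pattern from the listing, runnable in plain CPython:
# the skip block gives Python placeholders for JavaScript-only names,
# so linters and the interpreter both stay happy.

def require(lib):      # Transcrypt skips this; plain Python uses it
    return lib         # placeholder: just echoes the library name


class document:        # placeholder for the browser's document object
    getElementById = None
    addEventListener = None


React = require('react')  # in the browser, this loads the real library
print(React)              # in CPython it is just the placeholder string
```

Because the stubs are wrapped in skip/noskip pragmas, none of this placeholder code survives transpilation; only the real require() calls do.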
As we'll see shortly, this will allow us to import those variables into other Python modules. Finally, we moved the render() function we created previously into this module as well.

Now that we have the JavaScript interface somewhat self-contained, we can utilize it in our application:

```python
from pyreact import useState, render, createElement as el


def ListItems(props):
    items = props['items']
    return [el('li', {'key': item}, item) for item in items]


def App():
    newItem, setNewItem = useState("")
    items, setItems = useState([])

    def handleSubmit(event):
        event.preventDefault()
        # setItems(items.__add__(newItem))
        setItems(items + [newItem])  # __:opov
        setNewItem("")

    def handleChange(event):
        target = event['target']
        setNewItem(target['value'])

    return el('form', {'onSubmit': handleSubmit},
              el('label', {'htmlFor': 'newItem'}, "New Item: "),
              el('input', {'id': 'newItem',
                           'onChange': handleChange,
                           'value': newItem}),
              el('input', {'type': 'submit'}),
              el('ol', None,
                 el(ListItems, {'items': items})))


render(App, None, 'root')
```

As mentioned before, we import the JavaScript mappings that we need from the pyreact.py module, just like we would any other Python import. We aliased the React createElement() method to el to make it a little easier to work with.

If you're already familiar with React, you're probably wondering at this point why we're calling createElement() directly and not hiding those calls behind JSX. The reason is that Transcrypt utilizes the Python AST module to parse the .py files, and since JSX syntax is not valid Python, it would break that parsing. There are ways to utilize JSX with Transcrypt if you really wanted to, but in my opinion the way you have to do it kind of defeats the purpose of using JSX in the first place.

In this module, we created two functional React components. The App component is the main entry point and serves as the top of the component tree that we are building.
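Conceptually, the nested el() calls describe a tree of elements. A hypothetical dict-based stand-in (for illustration only; React's actual element objects are different) makes the tree shape easier to see:

```python
# Hypothetical dict-based stand-in for createElement, for illustration:
# it returns a plain nested dict describing the element tree, which is
# conceptually what the chained el() calls build up.

def create_element(tag, props=None, *children):
    return {"tag": tag, "props": props or {}, "children": list(children)}


tree = create_element(
    "form", {"onSubmit": None},
    create_element("label", {"htmlFor": "newItem"}, "New Item: "),
    create_element("input", {"id": "newItem"}),
)

print(tree["tag"])            # the root of the tree is the form element
print(len(tree["children"]))  # with the label and input as its children
```

Each call returns a description of an element, and passing child elements as trailing arguments is what produces the nesting.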
Here we have two state variables that we create along with their companion setter functions. The newItem state variable holds an entered value that is to be added to the list. The items state variable then holds all of the values that have been previously entered.

We then have two functions: one to perform an action when the form submits the value that was entered, and another that synchronizes the value being entered with the state of our React component.

Then, in the return statement of the App() function, we declare the tree of elements that define the UI. The top of the element tree starts with an HTML form. This allows us to take advantage of its default submit button, which in this case calls our handleSubmit() function to add new values to the list.

In the handleSubmit() function, when adding a new item to our list, we used an in-line compiler directive to let Transcrypt know that this particular line of code uses an operator overload:

```python
setItems(items + [newItem])  # __:opov
```

By default, Transcrypt turns off this capability, as the generated JavaScript would take a performance hit if operator overloading were enabled globally, due to the overhead required to implement that feature. If you'd rather not use the compiler directive to enable operator overloading only where needed, in a case like this you could also call the appropriate Python operator overload dunder method directly, as shown in the commented line just above it.

Inside (or below) that, we have an input element for entering new values along with a corresponding label element that identifies it in the UI. The input element has the handleChange() function as its onChange event handler, which keeps the React state synced up with what the UI is showing.

Next in the element tree is the list of values that have already been entered. These will be displayed in the UI using an HTML ordered list element that numbers the items added to it.
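The equivalence between the overloaded operator and the dunder method is ordinary Python semantics, which you can verify in plain CPython (list values chosen here are just for illustration):

```python
# In standard Python, the list + operator is sugar for the __add__
# dunder method -- the equivalence the __:opov pragma relies on.

items = ["apples", "bananas"]
new_item = "cherries"

combined = items + [new_item]                # overloaded operator
combined_dunder = items.__add__([new_item])  # direct dunder call

print(combined == combined_dunder)  # both build the same new list
```

Note that both forms build a new list rather than mutating items in place, which fits React's requirement that state values be replaced through their setters rather than modified.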
This brings us to this module's second functional component, ListItems, which renders the values in our items state variable as HTML li elements. The items are passed into this component as a property that we deconstruct into a local variable. From there, we use a Python list comprehension to build the list of li elements by iterating through the items.

The last step is to call the imported render() function, which attaches our App React component to the DOM hook point identified by 'root' in the HTML file:

```python
render(App, None, 'root')
```

You'll notice that because we put all of the Python-to-JavaScript mappings in the pyreact.py module, this module can be 100% pure, pythonic Python. No mixing of languages, no weird contortions of the Python language, and no JavaScript!

To complete this demo, we now just need an HTML entry point that we can load into a web browser:

Listing 10: index.html

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <script src="app.py"></script>
    <title>React to Python</title>
</head>
<body>
    <div id="root"></div>
</body>
</html>
```

This time, instead of running Transcrypt directly, we can run the parcel command using the Node npx package runner. And thanks to the Transcrypt Parcel plug-in, it will also run Transcrypt for us and bundle up the generated JavaScript files:

```shell
(venv) $ npx parcel --log-level 4 --no-cache index.html
```

This also starts up the Parcel development web server, which serves the generated content at its default route.

And with this, we have the foundational basis for building React applications using Python!

For more...

If you are interested in learning more details about what is presented here, the React to Python book dives a lot deeper into what is needed to develop complete web applications using this approach.
The book includes:

- Setting up the required developer environment tools
- Creating CRUD forms
- Asynchronous requests with a Flask REST service
- Basics of using the Material-UI component library
- Single-page applications
- Basic user session management
- SPA view routing
- Incorporating Google Analytics into your application
- A walk-through of building a complete demo project

Resources

- Source Code:
- Transcrypt Site:
- Transcrypt GitHub:
- React to Python Book:

Discussion (4)

- "Just for the record: I had to create the venv environment with python3.10 to make the hello.html tutorial work."
- "It looks like the default version of Transcrypt on PyPI was recently updated to use version 3.9. I updated the post to reflect this and now have instructions for using either Python 3.9 or Python 3.7. Thanks for letting me know!"
- "This is so creative. Loved it ♥"
- "Thank you! This workflow has been working out pretty well for me."
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jennasys/creating-react-applications-with-python-2je1