Hi, I had my solution building fine before I tidied up my class library projects. Since then I keep getting "could not get dependencies for project reference 'project.dll'" every time I compile my website. I read on the web that it could be a namespace problem. I've fixed all my namespaces and the problem still exists. Does anyone have any suggestion on how to solve this issue? Thank you for your help.

There is a C#.NET library project whose DLL name is customer.dll. It has a class named customer with a function named show(). I want to call this function from another project but don't want to add a reference to the caller project. Can this be done? Is there any C#.NET class that can achieve this?

The issue pertains to Visual Studio 2010. I have two projects: a class library and a setup project. The class library exposes a COM object (plugin) using .NET COM interoperability. The library depends on a number of COM type libraries. For each COM type library there is a corresponding primary interop assembly, properly registered using "regasm /codebase". The target framework is 3.5 (the hosting application that loads the plugin only supports the 2.0 runtime). The library compiles without any issues, but the setup project fails to detect any of the COM dependencies. The issue is easily reproducible: just create a simple class library in VS2010, change the target framework to 3.5, and add a reference to a COM type library. Create a setup project and add the "Primary Output" to it. The COM dependencies will not show up. It seems to only happen if I switch the target framework to 3.5 BEFORE building the deployment package. Switching the framework to 4.0 and changing "Embed Interop Types" to "False" does fix the issue. I was going to submit an issue to Microsoft, but want to see if there is someone out there who ran into the same issue.
Thanks for any suggestions.

When running my Windows Forms project in the debugger using F5, it uses the assembly version in the GAC instead of the actual project referenced, thus ignoring the changes I make to my referenced project. If I remove the assembly from the GAC it works, but I would like to be able to develop and test my assembly without having to remove my installed version first every time. What is the best way to achieve this? Giving my referenced project a different version number, for instance? Is this behaviour considered a bug that will be patched any time soon?

Hi, I have a doubt about adding a namespace in a project. In one of my projects I want to add the "System.ServiceModel.Web" namespace, but the namespace is not in the .NET tab of the reference dialog box. How can I add this reference to the .NET tab window?

System.Exception was unhandled by user code
Message=System.IO.FileLoadException: Could not load file or assembly 'IReportDef, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ac2eee5c6b52ff89' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
File name: 'IReportDef, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ac2eee5c6b52ff89'
at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName)
at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
at System.Type.GetType(String typeName)
at System.Data.XSDSchema.SetProperties(Object instance, XmlAttribute[] attrs)
at System.Data.XSDSchema.HandleElementColumn(XmlS

If the machine supports .NET 3.5, a .NET 2.0 project can reference a .NET 3.5 assembly and can load and call its functions. So why can't a .NET 2.0 or .NET 3.5 project reference a .NET 4.0 assembly, even when the machine supports .NET 4?
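Regarding the earlier question about calling customer.dll's show() without adding a project reference: in .NET this is the job of reflection (System.Reflection's Assembly.LoadFrom, then Type.GetMethod and MethodInfo.Invoke). The thread never shows code, so here is the same late-binding pattern sketched in Python for illustration; the module and function names are stand-ins, not from the thread:

```python
# Late binding: load a module by name at runtime and invoke a function by
# string name, with no compile-time reference to it. The .NET equivalents of
# each step are noted in the comments.
import importlib

def call_dynamic(module_name, func_name, *args):
    """Load module_name at runtime and call func_name on it by name."""
    mod = importlib.import_module(module_name)   # ~ Assembly.LoadFrom
    func = getattr(mod, func_name)               # ~ Type.GetMethod
    return func(*args)                           # ~ MethodInfo.Invoke

# A standard-library module stands in for customer.dll here:
print(call_dynamic("math", "sqrt", 16.0))  # → 4.0
```

The shape carries over directly to C#: load the assembly from a path, look up the type and method by name, then invoke — at the cost of losing compile-time checking.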
http://www.dotnetspark.com/links/33274-could-not-get-dependencies-for-project-reference.aspx
Writing the Player Script Checked with version: 5 - Difficulty: Intermediate This is part 9 of 14 of the 2D Roguelike tutorial, in which we write the Player script which will take input and control the player's movement. Transcript - 00:02 - 00:04 In this video we're going to create the script - 00:04 - 00:06 for our player character. - 00:07 - 00:09 Before we begin writing our player script - 00:09 - 00:12 we're going to add a few things to our game manager - 00:12 - 00:15 that our player script is going to make use of. - 00:15 - 00:17 Let's open the game manager in MonoDevelop. - 00:18 - 00:20 In Game Manager the first thing that we're going to do - 00:20 - 00:22 is we're going to add a public variable of the type - 00:22 - 00:25 integer called playerFoodPoints - 00:25 - 00:27 and initialise that to 100. - 00:28 - 00:30 The next thing that we're going to do is declare a public - 00:30 - 00:32 boolean called playersTurn - 00:32 - 00:35 which we're going to initialise to true. - 00:35 - 00:38 We're going to set playersTurn to HideInInspector, - 00:38 - 00:40 which means that although the variable will be - 00:40 - 00:43 public it won't be displayed in the editor. - 00:43 - 00:45 Next we're going to add a public function - 00:45 - 00:48 that returns void called GameOver - 00:48 - 00:51 in which we're going to disable the game manager. - 00:51 - 00:54 Let's save our script and return to the editor. - 00:56 - 00:58 In our Scripts folder we're going to choose - 00:58 - 01:01 Create - C# Script - 01:01 - 01:03 and we'll call this one Player. - 01:03 - 01:05 We'll open it in MonoDevelop. - 01:07 - 01:10 Our Player class is going to inherit - 01:10 - 01:12 from the MovingObject class we wrote - 01:12 - 01:17 previously instead of from the default MonoBehaviour. 
- 01:17 - 01:19 The first thing that we're going to do in Player is - 01:19 - 01:21 declare a public integer - 01:21 - 01:24 called wallDamage and set it to 1. - 01:25 - 01:27 WallDamage is the damage that the - 01:27 - 01:29 player is going to apply to the - 01:29 - 01:31 wall objects when it chops them. - 01:31 - 01:33 We're also going to declare two other public - 01:33 - 01:38 integers called pointsPerFood and pointsPerSoda. - 01:38 - 01:40 These are going to be the number of points added to - 01:40 - 01:42 the player's score when they pick up - 01:42 - 01:45 a food or soda pickup on the board. - 01:45 - 01:47 Next we're going to add a public variable - 01:47 - 01:49 of the type float called - 01:49 - 01:53 restartLevelDelay and initialise it to 1. - 01:53 - 01:55 We're going to declare a private variable of - 01:55 - 01:57 the type Animator called animator. - 01:57 - 02:00 We're going to use this to store a reference - 02:00 - 02:02 to our animator component. - 02:02 - 02:05 We're also going to declare a private int called food, - 02:05 - 02:08 which is going to store the player's score - 02:08 - 02:11 during the level before passing it back to the - 02:11 - 02:13 game manager as we change levels. - 02:13 - 02:15 We're going to add the protected - 02:15 - 02:19 and override keywords to our start function. - 02:20 - 02:22 We're doing this because we're going to have a different - 02:22 - 02:25 implementation for start in the player class - 02:25 - 02:28 than we have in the moving object class. - 02:28 - 02:30 In start we're going to get a reference - 02:30 - 02:33 to our animator component using GetComponent. - 02:33 - 02:36 Next we're going to set food to the value - 02:36 - 02:39 of playerFoodPoints, which is stored in the game manager. - 02:40 - 02:42 We're doing this so that Player can manage - 02:42 - 02:44 the food score during the level - 02:44 - 02:46 and then store it in the game manager - 02:46 - 02:48 as we change levels. 
- 02:50 - 02:52 Next we're going to call the start function - 02:52 - 02:55 of our base class, moving object. - 02:56 - 02:58 We're going to declare a private function that - 02:58 - 03:00 returns void called OnDisable. - 03:00 - 03:02 OnDisable is part of the Unity API - 03:02 - 03:05 and what it's going to do is it'll be called - 03:05 - 03:09 when the player game object, in this case, is disabled; - 03:09 - 03:12 we're going to use it to store the value of food - 03:12 - 03:16 in the game manager as we change levels. - 03:17 - 03:19 We're also going to declare a private function - 03:19 - 03:22 that returns void called CheckIfGameOver. - 03:22 - 03:24 CheckIfGameOver is going to be simple. - 03:24 - 03:28 We're just going to check if our food score - 03:28 - 03:31 is less than or equal to 0, - 03:32 - 03:36 and if so we're going to call the GameOver function - 03:36 - 03:37 of Game Manager. - 03:37 - 03:39 The next thing that we're going to do is declare - 03:39 - 03:42 our AttemptMove function. - 03:42 - 03:44 We're going to declare AttemptMove with the - 03:44 - 03:46 protected and override keywords - 03:46 - 03:48 and it's going to return void. - 03:49 - 03:51 It's also going to take a generic - 03:51 - 03:54 parameter T, which is going to specify - 03:54 - 03:56 the type of component that we're expecting - 03:56 - 03:59 our mover to encounter. - 03:59 - 04:01 AttemptMove is also going to take two - 04:01 - 04:03 parameters of the type int - 04:03 - 04:06 called xDir and yDir, - 04:06 - 04:09 for X direction and Y direction. - 04:09 - 04:11 The first thing that we're going to do in AttemptMove is - 04:11 - 04:14 that we're going to subtract 1 from the player's food total. - 04:14 - 04:17 This means that every time the player moves - 04:17 - 04:19 they're going to lose 1 food point, - 04:19 - 04:21 which is one of the core mechanics of the game. - 04:21 - 04:25 Next we're going to call the AttemptMove function - 04:25 - 04:28 of our base class moving object. 
- 04:28 - 04:32 We're going to pass in our generic parameter T - 04:32 - 04:34 along with our integers - 04:34 - 04:36 xDir and yDir. - 04:37 - 04:41 Next we're going to declare a RaycastHit2D called hit, - 04:41 - 04:43 which is going to allow us to reference the result - 04:43 - 04:46 of the line cast done in Move. - 04:46 - 04:48 Since the player has lost food points - 04:48 - 04:51 by moving, we're going to call the CheckIfGameOver function - 04:51 - 04:53 to check if the game has ended. - 04:54 - 04:57 Finally we're going to set our playersTurn - 04:57 - 04:59 variable in the Game Manager to false - 04:59 - 05:01 to say that the player's turn has ended. - 05:03 - 05:05 In Update we're going to check - 05:05 - 05:07 if it's currently the player's turn - 05:07 - 05:10 using the boolean that we created in the game manager. - 05:11 - 05:13 If it's not the player's turn - 05:13 - 05:16 we're going to return, meaning the other code that will - 05:16 - 05:18 follow will not be executed. - 05:20 - 05:23 Next we're going to declare two integers - 05:23 - 05:27 called horizontal and vertical and set them to 0. - 05:27 - 05:29 We're going to use these to store the - 05:29 - 05:34 direction that we're moving, either as a 1 or a -1 - 05:34 - 05:37 along the horizontal and vertical axis. - 05:37 - 05:39 The next thing that we're going to do is get some - 05:39 - 05:41 input from the input manager, - 05:41 - 05:44 cast it from a float to an integer - 05:44 - 05:49 and store it in our horizontal variable that we declared. - 05:49 - 05:52 We're going to do the same thing for the vertical axis. - 05:53 - 05:55 Currently the movement code that we're writing - 05:55 - 05:58 is going to be based on a keyboard or - 05:58 - 06:02 controller input for a standalone build of our game. - 06:03 - 06:05 Later on we're going to write a - 06:05 - 06:07 version of our movement code which will take - 06:07 - 06:09 mobile or touchscreen input. 
- 06:10 - 06:12 The next thing that we're going to do is we're going to check - 06:12 - 06:15 if we're moving horizontally, and if so we're going to set - 06:15 - 06:17 vertical to 0. - 06:17 - 06:18 We're going to do this to prevent the player - 06:18 - 06:20 from moving diagonally. - 06:20 - 06:24 We're then going to check if we have a non-zero value - 06:24 - 06:26 for horizontal or vertical. - 06:27 - 06:29 If we do have a non-zero value, - 06:29 - 06:31 meaning we're attempting to move, - 06:31 - 06:34 we're going to call our AttemptMove function. - 06:35 - 06:37 When we do we're going to pass in - 06:37 - 06:40 the generic parameter Wall. - 06:40 - 06:44 Meaning that we're expecting that the player may - 06:44 - 06:48 encounter a wall, which is an object that it can interact with. - 06:49 - 06:51 You'll remember that when we wrote MovingObject - 06:51 - 06:55 we gave the AttemptMove function the generic parameter T. - 06:55 - 06:57 This is so that we could specify - 06:57 - 06:59 what component we expect to be - 06:59 - 07:03 interacting with when we call the function. - 07:03 - 07:05 In this case, from the Player script, - 07:05 - 07:07 we can specify that we expect to be interacting - 07:07 - 07:09 with the wall, or in the case of - 07:09 - 07:11 the Enemy script we can specify that we - 07:11 - 07:13 expect to be interacting with a player. - 07:14 - 07:17 We're also going to pass in horizontal and vertical - 07:17 - 07:19 as parameters, which are going to be the direction - 07:19 - 07:21 that the player is going to attempt to move in. - 07:23 - 07:25 The next thing that we're going to do is write - 07:25 - 07:29 an implementation for OnCantMove. - 07:29 - 07:31 You may remember, in MovingObject - 07:31 - 07:33 we declared OnCantMove as an - 07:33 - 07:36 abstract function without any implementation. - 07:36 - 07:39 Now we're going to define that implementation - 07:39 - 07:40 for the player class. 
- 07:41 - 07:44 OnCantMove is going to be a protected - 07:44 - 07:46 override function that returns void - 07:46 - 07:49 and takes a generic parameter called T - 07:49 - 07:53 as well as a parameter of the type T called component. - 07:54 - 07:56 Note here that we're using a - 07:56 - 07:58 lowercase c for component. - 07:59 - 08:01 In the case of the player, we want them to - 08:01 - 08:03 take an action if they're trying to move - 08:03 - 08:05 into a space where there's a wall - 08:05 - 08:07 and are blocked by it. - 08:07 - 08:09 The first thing that we're going to do is that we're - 08:09 - 08:12 going to declare a variable called hitWall - 08:12 - 08:13 of the type Wall, - 08:13 - 08:16 and we're going to set it to equal - 08:16 - 08:19 the component that was passed in as a parameter - 08:19 - 08:22 while casting it to a wall. - 08:22 - 08:26 Next we're going to call the DamageWall function - 08:26 - 08:28 of the wall that we hit. - 08:28 - 08:30 We're going to pass in our variable - 08:30 - 08:33 wallDamage for how much damage the - 08:33 - 08:35 player is going to do to the wall. - 08:36 - 08:38 The last thing that we're going to do is - 08:38 - 08:40 we're going to set the PlayerChop trigger - 08:40 - 08:43 of our animator component that we stored a reference to earlier. - 08:44 - 08:46 We're going to do this by passing in the name of - 08:46 - 08:48 the parameter that we want to set as a string. - 08:49 - 08:51 The next thing that we're going to do is that we're going to - 08:51 - 08:54 declare a private function that returns void - 08:54 - 08:56 called Restart. - 08:56 - 08:59 All we're going to do in here is reload the level; - 08:59 - 09:03 we're going to call Restart if the player collides - 09:03 - 09:05 with the exit object, - 09:05 - 09:07 meaning that we're going to the next level. - 09:07 - 09:09 In Restart we're going to call - 09:09 - 09:11 Application.LoadLevel - 09:11 - 09:15 passing in the parameter Application.loadedLevel. 
- 09:15 - 09:17 Meaning that we're going to load the last scene - 09:17 - 09:19 that was loaded, which in this case would be main, - 09:19 - 09:21 which is the only scene in the game. - 09:21 - 09:24 In many games, to load another level - 09:24 - 09:26 we would load another scene. - 09:26 - 09:29 In this case we're restarting our main scene - 09:29 - 09:31 because our levels are being generated - 09:31 - 09:33 procedurally via script. - 09:34 - 09:36 Next we're going to declare a public function that - 09:36 - 09:38 returns void called LoseFood, - 09:38 - 09:41 which takes an integer parameter called loss. - 09:41 - 09:44 LoseFood is called when an enemy attacks the player. - 09:45 - 09:48 Loss specifies how many points - 09:48 - 09:50 the player will lose. - 09:50 - 09:52 In LoseFood the first thing that we're going to do - 09:52 - 09:56 is set the playerHit trigger in our animator. - 09:57 - 10:00 Next we're going to subtract our parameter loss - 10:00 - 10:02 from the player's food total. - 10:03 - 10:06 Lastly we're going to call the CheckIfGameOver function; - 10:06 - 10:08 because our player has lost food, - 10:08 - 10:10 we want to see if the game has ended. - 10:10 - 10:13 We want to give the player the ability to interact - 10:13 - 10:14 with the other objects on the board, - 10:14 - 10:19 namely the exit, soda and food objects. - 10:20 - 10:24 We're going to do this using OnTriggerEnter2D, - 10:24 - 10:26 which is part of the Unity API. - 10:26 - 10:28 We're going to declare a private function that returns - 10:28 - 10:31 void called OnTriggerEnter2D, - 10:31 - 10:33 passing in a parameter of the type - 10:33 - 10:35 Collider2D called other. - 10:36 - 10:40 You'll remember that we set our exit, food and soda - 10:40 - 10:43 prefabs' colliders to IsTrigger - 10:44 - 10:46 and so what we're going to do is we're going to - 10:46 - 10:50 check the tag of the other object that we collided with - 10:50 - 10:53 to see first if it's tagged Exit. 
- 10:53 - 10:56 If the tag of the other object is equal to Exit - 10:57 - 10:59 we're going to invoke our - 10:59 - 11:01 Restart function that we declared, - 11:01 - 11:04 passing in our restartLevelDelay, - 11:04 - 11:06 in this case 1 second, - 11:06 - 11:08 so that we can call that function - 11:08 - 11:11 1 second after we've collided with the exit trigger, - 11:11 - 11:13 meaning that there's going to be a 1 second pause - 11:13 - 11:15 and then we're going to restart the level. - 11:15 - 11:17 Since the level is now over we're going to set - 11:17 - 11:20 enabled for the player to equal false. - 11:20 - 11:22 Next we're going to check if the other object's - 11:22 - 11:25 tag is equal to food. - 11:25 - 11:28 If that's the case we're going to add food points - 11:29 - 11:31 and we're going to set the object - 11:31 - 11:34 that we collided with, the food object in this case, - 11:35 - 11:36 to inactive. - 11:36 - 11:39 Lastly we're going to do more or less the same thing - 11:39 - 11:41 for our soda object. - 11:48 - 11:50 And there we have it, that's going to allow - 11:50 - 11:54 our player to use the core mechanics of the game; - 11:54 - 11:56 we're going to add a few other things to this script later on, - 11:56 - 11:59 but for now let's save and return to the editor. - 12:02 - 12:05 In the editor we're going to highlight our Player prefab, - 12:07 - 12:11 go to Component - Scripts - 12:11 - 12:14 and add the Player component that we just created. - 12:15 - 12:17 We're going to set the blocking layer variable - 12:17 - 12:19 to BlockingLayer, - 12:20 - 12:21 and so that looks good. - 12:22 - 12:24 With that done we have some of the core - 12:24 - 12:26 interactive functionality in place for our game. - 12:27 - 12:29 We're going to return to the Player script - 12:29 - 12:33 to add UI elements, audio and our mobile controls. - 12:34 - 12:36 In the next video we're going to start setting up - 12:36 - 12:39 the animation controllers for our enemies. 
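The input-handling step the transcript walks through (read both axes, then cancel the vertical component when moving horizontally) is the part easiest to get wrong. As a language-neutral sketch of just that logic, outside Unity:

```python
# Sketch (in Python, for illustration only; the tutorial's code is C#) of the
# movement-input logic described in Update: read both axes, cast to int, and
# zero out vertical when moving horizontally so the player never moves
# diagonally.
def movement(horizontal_axis, vertical_axis):
    """Return (horizontal, vertical) as -1/0/1 with diagonals suppressed."""
    horizontal = int(horizontal_axis)  # ~ (int)Input.GetAxisRaw("Horizontal")
    vertical = int(vertical_axis)      # ~ (int)Input.GetAxisRaw("Vertical")
    if horizontal != 0:                # moving horizontally: drop vertical
        vertical = 0
    return horizontal, vertical

assert movement(1.0, -1.0) == (1, 0)   # diagonal input collapses to horizontal
assert movement(0.0, -1.0) == (0, -1)  # pure vertical input passes through
```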
Please note that this lesson has been updated to reflect changes to Unity's API. Please refer to the upgrade guide PDF found in your asset package download or available directly here.

Player code snippet (excerpt; elided sections are marked // ...):

using UnityEngine;
using System.Collections;
using UnityEngine.SceneManagement; //Allows us to use SceneManager

// ...

private Animator animator; //Used to store a reference to the Player's animator component.
private int food; //Used to store player food points total during level.

// ...

//Disable the food object the player collided with.
other.gameObject.SetActive (false);
}

//Check if the tag of the trigger collided with is Soda.
else if (other.tag == "Soda")
{
//Add pointsPerSoda to players food points total
food += pointsPerSoda;

//Disable the soda object the player collided with.
other.gameObject.SetActive (false);
}
}

//Restart reloads the scene when called.
private void Restart ()
{
//Load the last scene loaded, in this case Main, the only scene in the game.
SceneManager.LoadScene (0);
}

// ... call the GameOver function of GameManager.
GameManager.instance.GameOver ();
}
}
}

Related tutorials
- GetAxis (Course)
- Inheritance (Course)
https://unity3d.com/fr/learn/tutorials/projects/2d-roguelike-tutorial/writing-player-script
File::CheckTree
Runs file tests on a set of files. Exports one function, validate, which takes a single multi-line string as input. Each line of the string contains a filename plus a test to run on the file. The test can be followed with || die to make it a fatal error if it fails. The default is || warn. Prepending ! to the test reverses the sense of the test. You can group tests (e.g., -rwx); only the first failed test of the group produces a warning. For example:

use File::CheckTree;
$warnings += validate( q{
    /vmunix  -e || die
    /bin     cd
    csh      !-ug
    sh       -ex
    /usr     -d || warn "What happened to $file?\n"
});

Available tests include all the standard Perl file-test operators except -t, -M, -A, and -C. Unless it dies, validate returns the number of warnings issued.
================= File::Compare
Compares the contents of two sources, each of which can be a file or a filehandle. Returns 0 if the sources are equal, 1 if they are unequal, and -1 on error. File::Compare provides two functions:
* compare (file1, file2[, buffsize])
Compares file1 to file2. Exported by default. If present, buffsize specifies the size of the buffer to use for the comparison.
* cmp (file1, file2[, buffsize])
cmp is a synonym for compare. Exported on request.
================= File::Copy
Copies or moves files or filehandles from one location to another. Returns 1 on success, 0 on failure, or sets $! on error.
* copy (source, dest[, buffsize])
Copies source to dest. Takes the following arguments:
source - The source string, FileHandle reference, or FileHandle glob. If source is a filehandle, it is read from; if it's a filename, the filehandle is opened for reading.
dest - The destination string, FileHandle reference, or FileHandle glob. dest is created if necessary and written to.
buffsize - Specifies the size of the buffer to be used for copying. Optional.
* cp (source, dest[, buffsize])
Like copy, but exported only on request: use File::Copy "cp"
* move (source, dest)
Moves source to dest. 
If the destination exists and is a directory, and the source is not a directory, then the source file is renamed into the directory specified by dest. Return values are the same as for copy.
* mv (source, dest)
Like move, but exported only on request: use File::Copy "mv"
================= File::DosGlob
Provides a portable enhanced DOS-like globbing for the standard Perl distribution. DosGlob lets you use wildcards in directory paths, is case-insensitive, and accepts both backslashes and forward slashes (although you may have to double the backslashes). Can be run three ways:
From a Perl script:

require 5.004;
use File::DosGlob 'glob';
@perlfiles = glob "..\pe?l/*.p?";
print <..\pe?l/*.p?>;

With the perl command, on the command line:

# from the command line (overrides only in main::)
% perl -MFile::DosGlob=glob -e "print <../pe*/*p?>"

With the perlglob.bat program on the DOS command line:

% perlglob ../pe*/*p?

When invoked as a program from the command line, File::DosGlob prints null-separated filenames to STDOUT.
================= File::Find
Looks for files that match a particular expression. Exports two functions:
* find (\&wanted, dir1[, dir2 ...])
Works like the Unix find command; traverses the specified directories, looking for files that match the expressions or actions you specify in a subroutine called wanted, which you must define. For example, to print out the names of all executable files, you could define wanted this way:

sub wanted {
    print "$File::Find::name\n" if -x;
}

Provides the following variables:
$File::Find::dir - Current directory name ($_ has the current filename in that directory).
$File::Find::name - Contains "$File::Find::dir/$_". You are chdired to $File::Find::dir when find is called.
$File::Find::prune - If true, find does not descend into any directories.
$File::Find::dont_use_nlink - Set this variable if you're using the Andrew File System (AFS).
* finddepth (\&wanted, dir1[, dir2...])
Like find, but does a depth-first search. 
The standard Perl distribution comes with a Perl script, find2perl, which takes a Unix find command and turns it into a wanted subroutine.
================= File::Path
Creates and deletes multiple directories with specified permissions. Exports two methods:
* mkpath (path, bool, perm)
Creates a directory path and returns a list of all directories created. Takes the following arguments:
path - Name of the path or reference to a list of paths to create.
bool - Boolean. If true, mkpath prints the name of each directory as it is created. Default is false.
perm - Numeric mode indicating the permissions to use when creating the directories. Default is 0777.
* rmtree (root, prt, skip)
Deletes subtrees from the directory structure, returning the number of files successfully deleted. Symbolic links are treated as ordinary files. Takes the following arguments:
root - Root of the subtree to delete or reference to a list of roots. The roots, and all files and directories below each root, are deleted.
prt - Boolean. If true, rmtree prints a message for each file, with the name of the file and whether it's using rmdir or unlink to remove it (or if it's skipping the file). Default is false.
skip - Boolean. If true, rmtree skips any files to which you do not have delete access (under VMS) or write access (under other operating systems). Default is false.
================= File::Spec
Performs common operations on file specifications in a portable way. To do that, it automatically loads the appropriate operating-system-specific module, which is one of File::Spec::Mac, File::Spec::OS2, File::Spec::Unix, File::Spec::VMS, or File::Spec::Win32. The complete reference of available functions is given in File::Spec::Unix; the functions are inherited by the other modules and overridden as necessary. Subroutines should be called as class methods, rather than directly.
File::Spec::Mac
File::Spec::OS2
File::Spec::Unix
File::Spec::VMS
File::Spec::Win32
NOTE: There are quite a few functions!! 
================= File::stat
Provides the same file status information as the Perl functions stat and lstat. Exports two functions that return File::stat objects. The objects have methods that return the equivalent fields from the Unix stat(2) call:

Field    Meaning
dev      Device number of filesystem
ino      Inode number
mode     File mode
nlink    Number of links to the file
uid      Numeric user ID of owner
gid      Numeric group ID of owner
rdev     Device identifier
size     Size of file, in bytes
atime    Last access time
mtime    Last modified time
ctime    Inode change time
blksize  Preferred blocksize for filesystem I/O
blocks   Number of blocks allocated

You can access the status fields either with the methods or by importing the fields into your namespace with the :FIELDS import tag and then accessing them by prepending st_ to the field name (e.g., $st_mode). Here are examples of doing it both ways:

use File::stat;
$stats = stat($file);
print $stats->uid;
print $st_uid;

* stat (file)
Returns status information for the file or filehandle pointed to by file. If file is a symbolic link, returns the information for the file that the link points to.
* lstat (file)
Returns the same information as stat, but if file is a symbolic link, returns the status information for the link.
=================
Thanks, maneshr. But do give me some time to go thro' this. I'll get back after this weekend.

Take your time. Have a nice day. Let me know. Also I don't think there is one big module which can get/read header info for all possible file types... "....perhaps the width & height values of an image file? ..." For images you can use the Image::Size module (), which supports formats such as TIFF and the PPM family (PPM/PGM/PBM). Also, if you are on UNI* systems, you can use the file command to help you determine the file type; check the man pages at () for more info.

But that's not what I want actually. I have an image file uploaded to my server. I want to find its actual width & height. 
I guess it must be available in the header somewhere. But I didn't get the last example:

use Image::Size;
# Get the size of an in-memory buffer
($buf_x, $buf_y) = imgsize($buf);

What does it do?

imgsize can also take a reference to in-memory image data. So, imgsize("globe.gif") would be approximately equivalent to:

open FILE,"<globe.gif" or return (undef,undef,"Can't open image file globe.gif: $!");
binmode FILE;
$buf = \join'',<FILE>;
close FILE;
return imgsize($buf);

Thanks maneshr! It's at: Do post a comment there!
https://www.experts-exchange.com/questions/10339206/File-modules.html
Here is a post about installing a module for Python 3. When I use brew install python, it installs it for 2.7. I tried the method suggested by dan (whom I really thank), which aimed to install it directly for Python 3, but it didn't work:

# Figure out the path to python3
PY3DIR=`dirname $(which python3)`
# And /then/ install with brew. That will have it use python3 to get its path
PATH=$PY3DIR:$PATH brew install mapnik

The installation was successful but for Python 2, so I get:

For non-homebrew Python, you need to amend your PYTHONPATH like so: export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH

So I finally added the path manually in Python 3:

import sys
sys.path.append('/usr/local/lib/python2.7/site-packages')

I get this error:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/mapnik/__init__.py", line 69, in <module>
from _mapnik import *
ImportError: dlopen(./_mapnik.so, 2): Symbol not found: _PyClass_Type
Referenced from: ./_mapnik.so
Expected in: flat namespace in ./_mapnik.so

Please help, I have spent so many hours on this... Thanks!!!

The Mapnik Python bindings depend on boost_python, and both need to use the same Python. The problem is likely that Homebrew is providing a bottle of Boost which includes Boost.Python built against Python 2.7 and not Python 3.x.
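A side note on diagnosing this class of failure: _PyClass_Type is a symbol that exists only in Python 2 (it backs old-style classes), so a compiled extension referencing it can never load under Python 3, no matter how sys.path is arranged. Before editing paths, it helps to confirm which interpreter is running and where a module would actually be loaded from. A small sketch, using a stdlib module as a stand-in for mapnik:

```python
# Diagnose interpreter/module mismatches: report the running interpreter's
# version and the file a module name would resolve to, without importing it.
import importlib.util
import sys

def describe(module_name):
    """Return ((major, minor), path-the-module-resolves-to-or-None)."""
    spec = importlib.util.find_spec(module_name)
    origin = spec.origin if spec else None
    return sys.version_info[:2], origin

version, origin = describe("json")  # substitute "mapnik" on an affected machine
print(version, origin)
```

If the reported path points into a python2.7 site-packages directory while the version is 3.x, the import is doomed regardless of PYTHONPATH.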
http://www.dlxedu.com/askdetail/3/9976c73e755e41011be1af31bfd037c4.html
Slaying Changelog's compilation beast

How I reduced cross-module compilation dependencies by 98%

Recently, I read a pair of articles outlining how to improve compilation performance in Elixir apps. Since I don't yet have the luxury of working with Elixir professionally, it was interesting knowledge but not immediately actionable. Then I watched Jerod Santo's foray into Phoenix Live View. Quickly, it became clear that the Changelog repo was suffering from several cross-module compilation dependencies. With the Phoenix server running, changes to a single file would require recompilation of 220 files. On a fresh Phoenix project, you change one file and it is the only one that needs to be recompiled. So, what had gone wrong?

First, some background

It's important to understand the options you have for reusing modular code in Elixir. I'll focus on the two options relevant to the problem at hand.

- import allows you to call functions from a module without prepending the module's name. So, you can write say_hi() instead of Changelog.Greeter.say_hi().
- alias allows you to call functions from a module using only the last part of a module's name. Hence, you can write Greeter.say_hi() instead of Changelog.Greeter.say_hi(). You can also alias a module with an entirely different name, i.e. alias Changelog.Greeter, as: Mom allows you to call Mom.say_hi().

There are a couple of catches with using imports. First, they make your code more ambiguous. If you are reading dozens or hundreds of lines of code, it's not clear where the imported functions are coming from. Second and more importantly, imports can create cross-module compilation dependencies. If Changelog.Greeter has been imported to other modules, they will all need to be recompiled when Changelog.Greeter is modified. Watching Jerod's Phoenix jam session, it seemed apparent that the repo was a little too reliant on imports. 
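The readability half of the import/alias trade-off has close parallels in other languages. As a rough illustrative sketch (mine, not from the article), Python's two import styles show the same ambiguity:

```python
# Rough Python parallel to Elixir's import vs alias (illustrative only; the
# article's code is Elixir). "from module import name" behaves like Elixir's
# import: call sites lose track of where a name came from. A plain module
# import behaves like alias: calls stay qualified and unambiguous.
import json                        # ~ alias Changelog.Greeter
import json as j                   # ~ alias Changelog.Greeter, as: Mom
from json import dumps             # ~ import Changelog.Greeter

unqualified = dumps({"hi": 1})     # provenance unclear without reading imports
qualified = json.dumps({"hi": 1})  # provenance visible at the call site

assert unqualified == qualified == j.dumps({"hi": 1})
```

Python does not share Elixir's recompilation cost, so the parallel covers only the readability catch, not the compile-time one.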
It doesn't have to be this way

At this point, it may sound like I have a full grasp of the inner workings of Elixir's compiler. This is the benefit of hindsight, and I still can't claim to understand it completely. Prior to opening an issue and saying it doesn't have to be this way, I had largely avoided deeply researching what happens when the compiler runs. It was a topic for a more educated and skilled future self to explore when the time was right. Casting aside my trepidation, I bravely opened an issue offering to slay the compilation monster. After acknowledgement in the issue and a shout-out from Jerod, it was time to put the articles to use.

Finding the right solution

While I initially attempted to manually search and replace imports (converting them to aliases), this was a futile approach. Fortunately, Elixir contributor and splendid dude Wojtek Mach chimed in, pointing to his second article addressing these compilation complications. Most importantly, he had created a script to programmatically convert module imports to aliases.

Running the script on the Changelog source code, I ran into a problem. In Elixir, you can pipe calls from one function to another. It's one of the tools provided by the language to make our human lives a little easier. Typically you will see functions piped to each other on separate lines, but it sometimes makes sense to pipe a few calls on a single line. This was tripping up the import2alias script, leading to mangled and broken code.

import2alias uses a compilation tracer to collect information about where a module's functions are being called throughout the source code. The module name, function name, line number, column, and arity are provided to the tracer. Then, the script collects these results and prepends the user-defined alias to each of the function calls for a provided module.

To illustrate the problem I found with the script, imagine Changelog.Greeter has two functions, say_hi() and offer_help().
You might have another module that calls say_hi() then offer_help(), which looks like this using the pipe feature:

say_hi() |> offer_help()

When the script prepends the alias to say_hi, offer_help is pushed to a new column since it's on the same line:

Greeter.say_hi() |> offer_help()

import2alias only collects the column info before any changes are made, so when it goes to prepend an alias to offer_help, it starts at the wrong column and we get mangled code:

Greeter.say_Greeter.offer_helphi() |> offer_help()

Fortunately, Wojtek was already aware of this limitation, and pointed to a discussion about adding a new Elixir feature to modify code in a more reliable way. In order to replace the Changelog imports, I modified the script to perform a simple string replace instead of relying on the column where the function name started. This worked well enough, only requiring two or three manual fixes where a string was found in multiple places on a line. For example, the app.html.eex template has a description meta tag that contained the string 'description' twice. After running the script with the String.replace modification, the alias was prepended to the meta tag's name attribute, and it had to be fixed.

# Error from the String.replace/3 approach
<meta name="Description.description" content="<%= Description.description(assigns) %>">

# The corrected name attribute
<meta name="description" content="<%= Description.description(assigns) %>">

Results

After converting a few imported modules to use aliases, the application no longer needs to recompile 220 modules; now it only recompiles 5. There is still room for improvement, but this marks a 98% reduction in cross-module compilation dependencies. With this improvement in place, I took the liberty of adding Phoenix's live reloader back to the application.
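Both failure modes described above are easy to reproduce outside of Elixir. Here is a small Python sketch (hypothetical, for illustration only; not the real import2alias code) of the stale-column problem and the over-eager string replace:

```python
# Pitfall 1: rewriting by column positions that were recorded up front.
line = "say_hi() |> offer_help()"
columns = [line.index("say_hi"), line.index("offer_help")]  # collected BEFORE any edits

for col in columns:
    # Each insertion shifts later text to the right, so the second
    # recorded column now points into the middle of an identifier.
    line = line[:col] + "Greeter." + line[col:]

print(line)  # 'Greeter.say_Greeter.hi() |> offer_help()' (mangled)

# Pitfall 2: a plain global string replace hits every occurrence,
# including the one inside the meta tag's name attribute.
meta = '<meta name="description" content="<%= description(assigns) %>">'
print(meta.replace("description", "Description.description"))
# '<meta name="Description.description" content="<%= Description.description(assigns) %>">'
```

The exact mangled output differs from the Elixir example, but the mechanism is the same: edits computed against positions (or raw strings) that no longer match the text being edited.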
When you are hacking away on a Phoenix application with the live reloader enabled, the application will automatically recompile watched files (web views and assets) and trigger a browser refresh. No more toggling between your editor and the browser just to refresh the page.

This sums up where you should look first if you are running into this recompilation pain during development. It's not uncommon or unwise to use or import modules from external packages. (It is usually recommended by the package's documentation.) However, importing your own custom modules will likely lead to longer recompilation times as your application grows.

My rule of thumb would be to use imports of your own code very judiciously. - Wojtek Mach

If you are starting a new Elixir/Phoenix application or improving an existing one, it's best to alias your own modules when you don't want to write out the full module name.

The Future

Elixir 1.11 will make it possible for import2alias to modify .eex files, which currently do not provide column information to the tracer function. With this and the proposed Code.format_quoted/2, the script will be able to safely and accurately convert all imported function calls to aliased calls, even if they're piped on the same line.

Open source is a beautiful thing.

Editor's Note: This post now has a companion episode of our Backstage podcast where Owen joins Jerod to discuss the entire process in detail. Listen below 👇
https://changelog.com/posts/how-i-reduced-changelogs-compilation-dependencies-by-98
I used a greedy algorithm.

left is a hashmap; left[i] counts the number of i's that I haven't placed yet.

end is a hashmap; end[i] counts the number of consecutive subsequences that end at number i.

Then I tried to split the nums one by one. If I could neither add a number to the end of an existing consecutive subsequence nor find the two following numbers in left, I returned False.

import collections

def isPossible(self, nums):
    left = collections.Counter(nums)
    end = collections.Counter()
    for i in nums:
        if not left[i]:
            continue
        left[i] -= 1
        if end[i - 1] > 0:
            end[i - 1] -= 1
            end[i] += 1
        elif left[i + 1] and left[i + 2]:
            left[i + 1] -= 1
            left[i + 2] -= 1
            end[i + 2] += 1
        else:
            return False
    return True

@jedihy I got a similar idea and implemented it in C++ on my own; you can check it out here:
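For reference, here is a self-contained version of the solution above, exercised on a few inputs from the problem statement (the Solution class wrapper matches LeetCode's usual harness):

```python
import collections

class Solution:
    def isPossible(self, nums):
        left = collections.Counter(nums)   # numbers not yet placed
        end = collections.Counter()        # runs ending at each value
        for i in nums:
            if not left[i]:
                continue
            left[i] -= 1
            if end[i - 1] > 0:                 # extend an existing run
                end[i - 1] -= 1
                end[i] += 1
            elif left[i + 1] and left[i + 2]:  # start a new run of length 3
                left[i + 1] -= 1
                left[i + 2] -= 1
                end[i + 2] += 1
            else:
                return False
        return True

s = Solution()
print(s.isPossible([1, 2, 3, 3, 4, 5]))        # True
print(s.isPossible([1, 2, 3, 3, 4, 4, 5, 5]))  # True
print(s.isPossible([1, 2, 3, 4, 4, 5]))        # False
```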
https://discuss.leetcode.com/topic/99542/python-esay-understand-solution
If you're familiar with the Regex class (in the System.Text.RegularExpressions namespace) you may have already noticed that it, too, has the ability to compile your favorite regular expressions into a .NET assembly. In fact, the .NET Common Language Runtime (CLR) contains a whole namespace full of classes to help us build assemblies, define types, and emit their implementations, all at run time. These classes, which comprise the System.Reflection.Emit namespace, are known collectively as "Reflection.Emit." Java programmers have long enjoyed the benefits of reflection and full-fidelity type information, and .NET (finally) delivers that bit of nirvana to the Windows platform. But the classes in the Reflection.Emit namespace raise the bar even further, allowing us to generate new types and emit new code, dynamically. And, as the Regex class demonstrates, Reflection.Emit is not just for building compilers (although it's certainly good for that; the JScript .NET compiler makes heavy use of Reflection.Emit). It is a very important bit of software technology in its own right because, when combined with the power of .NET's Intermediate Language (IL), it allows us to do something we've never really been able to do before: generate portable, low-level code at run time.

Hello, Reflection.Emit!

Before we start writing full-fledged compilers, let's take a look at a trivial example of Reflection.Emit in action (Listing 1) to get acquainted with the cast of characters; there are quite a few. Long before we begin emitting any actual code, we have to set up the context in which that code will live. An application domain, an assembly, a module, a type, and a method are all required, at the very least. This is all the same stuff required of any executable .NET code. Even if we never intend on saving this code to disk, the architecture of the .NET run-time environment still requires our code stream to belong to an assembly, a module, and so forth.
Emitting the actual code is the very last thing we do (before saving it, or executing it). Figure 1 shows the cascading set of "Builder" classes employed by Reflection.Emit to model this architecture. The goal of the program in Listing 1 is simple: generate a dynamic assembly that houses a single module and a single type, and exposes a single method (which simply writes a line of text to the console), then save this assembly to disk as an executable file. (For a more useful example of Reflection.Emit in action, the sample code available online includes an RPN arithmetic expression engine.)

Step one is to define a new, dynamic assembly in our current application domain. (Reflection.Emit does not allow us a way to add code to a preexisting assembly.) For our purposes, a weakly named assembly, i.e., one without a cryptographic signature, will suffice.

AssemblyName an = new AssemblyName();
an.Name = "HelloReflectionEmit";
AppDomain ad = AppDomain.CurrentDomain;
AssemblyBuilder ab = ad.DefineDynamicAssembly(an, AssemblyBuilderAccess.Save);

Next, we spawn a dynamic module from our assembly. Even though we intend to save the module and assembly as a single file, the two abstractions are distinct to Reflection.Emit: the module represents a physical store of code and resources, and the assembly contains the metadata for those modules. Most of the time, you'll simply want to use the same name for the assembly and module, which is what we do here. Also, because we intend on saving this code to disk, we must specify a filename for the module. We specify the same filename that we'll specify to AssemblyBuilder.Save(), later on, in order to have the assembly metadata and module merged into one single executable file.

ModuleBuilder mb = ab.DefineDynamicModule(an.Name, "Hello.exe");

Now we're getting somewhere. It's time to declare a type: public class Bar in namespace Foo. Note the "Namespace.Typename" syntax, which is the same syntax used elsewhere in the CLR's Reflection library.
This syntax is well documented; a full tour can be had, starting with the MSDN documentation for the System.Type.FullName property.

TypeBuilder tb = mb.DefineType("Foo.Bar", TypeAttributes.Public|TypeAttributes.Class);

Reflection.Emit is just like C# (and most other object-oriented programming languages) in that if we neglect to build a default constructor for this type, one will be generated for us, which simply calls the default constructor of the base class. So, to make this new class useful, all we need to do is implement a method. Skipping ahead just a bit, we'll want this method to act as the EXE's entry point, so let's design it as a static method, accepting an array of strings, and returning an integer. The parameters and return type are optional (as is the name "Main"), but hey, if we're going to write a "Hello, World" program, let's do it properly.

MethodBuilder fb = tb.DefineMethod("Main",
    MethodAttributes.Public|MethodAttributes.Static,
    typeof(int), new Type[] { typeof(string[]) });

The actual code emitted by Listing 1, a single call to System.Console.WriteLine(), is described by a short sequence of IL instructions. We'll get better acquainted with IL, the Intermediate Language that is the heart and soul of .NET, in the next section.

// Emit the ubiquitous "Hello, World!" method, in IL
ILGenerator ilg = fb.GetILGenerator();
ilg.Emit(...); // stay tuned

The TypeBuilder.CreateType() method effectively "closes the door." No new code or data members will be allowed in afterward. So all that's left to do is declare the assembly's subsystem and entry point, then save our shiny new EXE file to disk. Newly created assemblies are born as DLLs by default, unless/until AssemblyBuilder.SetEntryPoint() is called, which effectively transforms the assembly into an EXE file.
// Seal the lid on this type
Type t = tb.CreateType();
// Set the entrypoint (thereby declaring it an EXE)
ab.SetEntryPoint(fb, PEFileKinds.ConsoleApplication);
// Save it
ab.Save("Hello.exe");

To recap what we've done so far: IL code can exist only as the body of a method or constructor. Methods and constructors can only exist within the context of a type (or a module, in the case of global functions). Each type must belong to a module, and each module must be associated with an assembly. Even assemblies, when dynamically generated, do not exist in a vacuum; they are associated with an application domain, and thus are given a finite boundary for security, lifetime, and remoting purposes. Now, let's learn a bit about IL, so we can finish the implementation of our Hello, World program.

IL: A Hitchhiker's Guide

IL is the native language of .NET: it is to the runtime what machine language is to a CPU. Once loaded into an application domain, each IL method is translated into native machine code immediately before it is first executed. This process is known as "just-in-time compilation," or JIT compilation. The JIT compiler is a wonderful thing because it decouples the software we ship from any specific hardware platform, and it allows our code to be optimized for whichever/whatever hardware and operating system the user happens to be running, present or future. IL and JIT compilation are, at long last, a license for chip manufacturers to innovate. Most of the Win32 programmers I work with today don't possess an intimate knowledge of x86 assembly language, but it's a funny thing: They're all sufficiently familiar with native x86 code to step through it in a debugger (well, most of them are). Unfortunately, stepping through IL in a debugger is a difficult thing to arrange, unless you've compiled it directly with ILASM.EXE. But IL is worth getting to know anyway, and Reflection.Emit is just one good reason why.
At any rate, if you can comprehend x86 at all, even just stepping through the most trivial of functions in a debugger, you'll find IL a refreshing walk in the park. And if not? Don't worry. With a little help from ILDASM.EXE, you can fake it. ILDASM.EXE is the disassembler tool included with the .NET Framework SDK. It is simply indispensable for learning IL. How do you calculate the cosine of a 64-bit IEEE value in IL? You could dig through a mountain of spec documents... Nah, just write a few lines of C# code, compile, then disassemble. Figure 2 shows ILDASM.EXE in action.

If you have experience with any low-level machine languages at all, the first thing you'll notice about IL is that it is entirely stack based: there are no registers. This makes sense if you consider that IL is designed to be portable to chip architectures other than good ol' x86. After all, who can say how many registers the next generation of chips from Intel and AMD will have? IL avoids making any assumptions along these lines by simply not employing the concept of registers at all. All parameters for every instruction (even simple arithmetic and comparison operations) are either specified explicitly as operands, or taken from the stack. The results of these operations are then pushed onto the stack. The mapping of an IL method's stack space onto the physical chip's register space is left completely to the JIT compiler and its optimization logic.

Example 1 is a small sampling of some common IL instructions you may encounter, including the instructions emitted by Listing 1. Every IL instruction consists of an opcode followed by zero or more operands. In general, it's helpful to think of the opcodes beginning with "Ld" as pushing items onto the stack, and the opcodes beginning with "St" as popping items off the stack. For a complete reference, refer to the Common Intermediate Language (CIL) specification (see References). Or, just fire up ILDASM.EXE, and fake it.
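The stack discipline is easy to internalize with a toy model. This Python sketch (a loose analogy, not real IL semantics) evaluates a few IL-flavored opcodes the way the paragraphs above describe: operands come off the stack, and results go back on:

```python
def run(instructions):
    """Toy stack machine: no registers; every operation pops its
    inputs off the stack and pushes its result back on."""
    stack = []
    for op, *args in instructions:
        if op == "ldc.i4":      # push a 4-byte integer constant
            stack.append(args[0])
        elif op == "add":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "ret":       # return whatever is on top
            return stack.pop()
    raise ValueError("no ret instruction")

# 2 + 3, IL-style:
print(run([("ldc.i4", 2), ("ldc.i4", 3), ("add",), ("ret",)]))  # 5
```

The JIT compiler's job, loosely speaking, is to map this purely stack-shaped program onto whatever registers the physical chip actually has.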
Now, as promised, let's revisit those calls to ILGenerator.Emit() from Listing 1. The first opcode, Ldstr, will take its operand (a string literal) and push a reference to it onto the stack. For this to work at run time, the string itself must obviously be contained within the module's data section, somewhere. This is where the power of a class library like Reflection.Emit becomes truly apparent: it hides these gruesome details behind such innocent little method calls. We needn't worry about laying out vast, cumbersome data sections to store our strings, and patching up our code with the appropriate references. Reflection.Emit will handle all those messy details for us.

ILGenerator ilg = fb.GetILGenerator();
ilg.Emit(OpCodes.Ldstr, "Hello, World!");

The next opcode, Call, will execute a subroutine (a method). The parameters for the method call will be taken from the stack, and its return value, if any, will be pushed on. The return type of Console.WriteLine() is void, so in this case, nothing will be left on the stack frame afterward.

ilg.Emit(OpCodes.Call, typeof(Console).GetMethod("WriteLine", new Type[] { typeof(string) }));

We finish up our method implementation with two more opcodes: Ldc_I4_0 followed by Ret. This easy sequence pushes the number 0 onto the stack (as a 4-byte integer) then exits the method, effectively returning 0. The Ldc_I4_0 opcode is a special "short form" opcode; it's functionally equivalent to the Ldc_I4 opcode followed by a 4-byte operand of 0x00000000. But loading the value 0 onto the stack is such a common operation, the designers of IL included a number of special shorthand opcodes like this, to keep life easy (and binaries tiny).

ilg.Emit(OpCodes.Ldc_I4_0);
ilg.Emit(OpCodes.Ret);

Similar short form opcodes exist for the constants 1 through 7, and a not-quite-as-short form (Ldc_I4_S) exists for signed, single-byte operands -128 through +127. However, one must be careful when emitting instructions of this latter type.
Consider the following code:

ilg.Emit(OpCodes.Ldarg_S, 13);

Which will generate the corresponding IL:

ldarg.s 13
nop
nop
nop

Whoa, where did all those nop instructions come from? ILGenerator.Emit() is a very heavily overloaded method. The ldarg.s opcode expects an 8-bit operand, but in our call to ILGenerator.Emit() we accidentally specified a 32-bit value (0x0000000D, aka 13). But why does this matter? To understand, we must consider IL's serialization format. Each IL instruction is serialized as a simple sequence of bytes, one after the next, with no delimiter between instructions; the byte-size of each IL instruction is simply determined by the opcode. Multibyte operands are serialized in little-endian order (with the low-order bytes before the high-order bytes). When the run time encounters the ldarg.s opcode, for example, it will know that the next instruction begins just one byte further on (after the operand). Lucky for us, 0x00 happens to be the bytecode for nop (which does nothing, by design). And IL operands are always written in little-endian form, so the 0x0D will be emitted earliest in memory, and therefore be treated as the operand to our ldarg.s instruction. So, all the superfluous 0x00 bytes (nop instructions) will be tucked neatly behind the end of the ldarg.s 13 instruction. One could argue that this is a bug in Reflection.Emit; in a perfect world, the ILGenerator.Emit() function would be smart enough to validate its operand-parameters against the opcode specified, and either convert the parameters or throw an exception, as appropriate.
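You can verify the byte layout that produces those nops with a couple of lines of Python (using the struct module to mimic the little-endian serialization; 0x00 is the nop opcode, as noted above):

```python
import struct

# The 32-bit value we accidentally passed where an 8-bit operand belongs:
raw = struct.pack("<i", 13)      # little-endian: b'\x0d\x00\x00\x00'
print(raw.hex())                 # '0d000000'

# ldarg.s consumes exactly one operand byte (0x0D == 13), and the
# three leftover 0x00 bytes decode as three nop instructions.
operand, leftovers = raw[0], raw[1:]
print(operand, leftovers.hex())  # 13 '000000'
```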
Until this is fixed, you'll want to take great care to cast your operands explicitly:

ilg.Emit(OpCodes.Ldarg_S, (sbyte)13);

In a way, ILGenerator.Emit() is so heavily overloaded that it suffers from the same fundamental problem as C's printf() function: The compiler can't be expected to catch errors based on the semantics of the function, and the implementation never quite catches everything you think it should, because the matrix of possible input is so complex. Scared yet? The single-byte short form operations can actually pose far greater dangers than emitting an occasional nop. Remember that the 8-bit operand to short-form instructions is considered a signed value (with range -128 to +127). This means that if you happen to specify an 8-bit value larger than 127, it stands at risk to be sign-extended as a negative number by the runtime, and thus misused in ways that are difficult to foresee:

// load an array of double[300] onto the stack
int max = 300;
ilg.Emit(OpCodes.Ldc_I4, max);
ilg.Emit(OpCodes.Newarr, typeof(double));

int idx = 200; // valid array index, but >127
ilg.Emit(OpCodes.Ldc_I4_S, (sbyte)idx);
ilg.Emit(OpCodes.Ldelem_R8); // kaboom: IndexOutOfRangeException

In this code, the value 200 is treated as -56 (or 0xC8) when cast as an sbyte. When the runtime gets around to using this value for something meaningful (like comparing it to the size of an array, which is done implicitly as part of the Ldelem instruction), it will be sign-extended to 32 bits: 0xFFFFFFC8, not 0x000000C8 as one might expect. Clearly, 0xFFFFFFC8 is outside the range of our little array, and this code will blow up at run time, but only when idx > 127. Be careful out there. The lesson here is clear: Always review a representative sample of the IL code generated by your Reflection.Emit calls, for correctness, personally. Don't rely on unit testing to uncover all the subtle signed/unsigned mismatch problems, let alone discover superfluous nop instructions. Still not convinced?
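The sign-extension hazard can likewise be checked in a few lines of Python, reinterpreting the bit pattern of 200 as a signed byte the way the runtime does:

```python
import struct

idx = 200                                         # valid index, but > 127
as_sbyte, = struct.unpack("<b", struct.pack("<B", idx))
print(as_sbyte)                                   # -56: bit pattern 0xC8 read as signed

# Sign-extended to 32 bits, the runtime sees 0xFFFFFFC8, not 0x000000C8:
print(hex(as_sbyte & 0xFFFFFFFF))                 # '0xffffffc8'
```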
We'll explore still-uglier problems in the next section.

More Fun With IL: Validity, Verifiability, and Security

Now that we're all IL experts (our generated "Hello, World" code in Listing 1 works brilliantly, after all), it's interesting to consider some of the things you can't do with IL (at least from within the confines of a safe/verifiable execution context). IL certainly offers a "Turing Complete" set of instructions, and then some. However, it has some interesting constraints and limitations when compared to most native machine languages. For the most part, these limitations exist for one of two reasons: to simplify the implementation of JIT compilers (a stated goal of the Common Intermediate Language specification), or to help the run time verify the safety of your code.

The first issue you should be aware of is that there are two flavors of IL opcodes: verifiable and unverifiable. A most tempting example of an unverifiable opcode is cpblk, which is similar in purpose to the C language memcpy() function. If a method contains any unverifiable opcodes such as cpblk, it will not be allowed to execute in restricted, secure contexts (instead, a Security.VerificationException will be thrown). The typical example of a "restricted, secure context" is code downloaded from the Internet, but the real definition depends on your users' .NET security policies.

Another thing you might notice about IL is the conspicuous lack of a "peek" instruction, which is curious for a stack-based language. Unlike the system stack used by, say, a native x86 thread, you can't just access any arbitrary value on an IL stack frame. You can only pop values off the top, one at a time. In fact, you can't even peek at the top element, without explicitly popping it off and pushing it back on.
One might think this limitation exists for the sake of simplifying the JIT compiler; certainly, limiting the complexity of IL in this way allows for a more efficient JIT compilation experience, enhancing the system's ability to map objects on the IL stack frame onto physical hardware resources (memory, chip registers, and the like) efficiently. But it also provides a measure of security, so that untrusted code can't snoop for interesting information further down the caller's stack frame. If a "peek" instruction did exist, it would almost certainly be unverifiable. Later in this section, we'll also see that it's illegal (invalid IL) to pop off more items than exist on our stack frame.

While it's possible to execute unverifiable IL in an appropriately trusted context, it's never possible to execute invalid IL. That may seem like a painfully obvious statement, but unfortunately Reflection.Emit makes it very easy to generate invalid IL, so the topic bears further examination. Remember our earlier example, where we accidentally emitted an ldarg.s instruction with a 4-byte operand, and so produced a few unexpected nop instructions? The converse situation (specifying a too-small operand) is actually far worse:

ilg.Emit(OpCodes.Ldarg, (sbyte)13); // ldarg expects a 16-bit operand

Here, the ldarg instruction is expecting a 16-bit operand, but we only give it 8 bits. This emitted code will fail spectacularly, but not because 8 bits are missing from the operand; the "missing" bits will be taken from the opcode of the following instruction (remember our earlier discussion of IL's serialization format), effectively transforming the remainder of the code stream into useless garbage.

When we think about correctness and validity in a stream of low-level code, we typically think about the aforementioned problems (invalid opcodes and such). But the correctness and validity of IL is a far more subtle matter.
For example, a method cannot be allowed to grow its stack frame to an indefinite size (or reduce it below zero). You may have already noticed the .maxstack directive output in all methods disassembled by ILDASM; this qualifier exists on all IL methods, to inform the system that the stack size for the given method will never exceed a clearly defined, finite depth. The JIT compiler contains a code-safety verifier, which will walk the branches of each method's flow-control logic, making careful note of the number (and type) of items on the stack at each instruction point. (Note that this is possible only because the number and type of items consumed/produced on the stack is well-defined by each IL opcode.) If the depth of a method's stack frame at any instruction point ever exceeds .maxstack or drops below zero, or if two incompatible stack states are ever deemed possible at a single instruction point, the system will throw an InvalidProgramException. To illustrate this rule, consider the following bit of IL, which is invalid:

.method private hidebysig static void Kaboom1(int32 x) cil managed
{
    // if (x == 0) goto S1;
    ldarg.0
    brfalse.s S1
    ldc.i4 13 // kaboom: InvalidProgramException
S1: // return;
    ret
}

The ldc.i4 13 instruction alters the state of the stack frame by pushing a 4-byte integer, but this instruction may or may not be skipped by the conditional branch instruction (brfalse.s S1) that precedes it. When the JIT compiler gets around to this method, it will be unable to determine the depth of the stack frame at S1, and it will complain, loudly. To fix the above code, simply remove the ldc.i4 13 instruction (or replace it with a nop instruction, or follow it with a pop instruction, or... you get the idea). To summarize, more generally: Correct IL must never allow a branch instruction to result in two or more incompatible stack-states, at any single instruction point.
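The verifier's objection to Kaboom1 can be modeled in a few lines of Python: tally the net stack effect of each opcode along both paths to the ret and compare the results (a toy model with per-opcode stack deltas taken from the descriptions above, not the real verifier):

```python
# Net stack effect of each opcode appearing in Kaboom1:
# ldarg.0 pushes one item, brfalse.s pops its condition, ldc.i4 pushes one.
DELTA = {"ldarg.0": +1, "brfalse.s": -1, "ldc.i4": +1, "ret": 0}

def depth_at_end(path):
    depth = 0
    for op in path:
        depth += DELTA[op]
    return depth

# Two ways to reach the ret at label S1:
fall_through = ["ldarg.0", "brfalse.s", "ldc.i4", "ret"]
branch_taken = ["ldarg.0", "brfalse.s", "ret"]

# The depths disagree (1 vs. 0), so the stack state at S1 is
# indeterminate and the method is invalid IL.
print(depth_at_end(fall_through), depth_at_end(branch_taken))  # 1 0
```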
By "incompatible stack-state" we refer not only to the depth of the stack, but also to the types of the items on it. (Note we should say "nature," rather than "type," because the CLI only considers a small subset of the basic types we know and love, for the purposes of IL validation: object references, addresses, and the basic numeric types. Refer to the CIL spec for complete details.) By way of (bad) example, consider the following IL code. It is invalid because the nature of the item on the stack at S2 is indeterminate (maybe int64, maybe float64):

.method private hidebysig static double Kaboom2(int32 x) cil managed
{
    // if (x == 0) goto S1;
    ldarg.0
    brfalse.s S1
    ldc.i8 7      // push((int64)7);
    br.s S2       // goto S2;
S1: ldc.r8 13.0   // push((float64)13);
    // kaboom: InvalidProgramException
S2: // return (pop());
    ret
}

To fix this code, one might simply insert a conv.r8 instruction after the ldc.i8 7 instruction. This would keep the state of the stack frame at S2 healthy, happy, and deterministic. The topic of security and verifiability in IL is an interesting one, deserving an article all its own. As a bare minimum, it's important to understand how the CLR's verifier (and the JIT compiler) will be evaluating your code, before you undertake a nontrivial project with Reflection.Emit. Otherwise, you might labor all night to produce a brilliant work of IL art, only to get an InvalidProgramException for your efforts when you execute your generated code the next day.

Would You Like Your Code for Here, or To Go?

When creating a dynamic assembly with Reflection.Emit, you must declare, ahead of time, what you plan on doing with it. Do you want to run it or save it? Or both? (Of course, if your answer is "neither," then you probably should have stopped reading this article long ago.)
You must make a special note of what you pass for the AssemblyBuilderAccess parameter to the AppDomain.DefineDynamicAssembly() method, because it affects how you must call some of the AssemblyBuilder methods later (namely, AssemblyBuilder.DefineDynamicModule() and, of course, AssemblyBuilder.Save()). This portion of the Reflection.Emit API is a bit schizophrenic. The reason for the schizophrenia is that there are really two different use-cases for generating dynamic assemblies: "transient" dynamic assemblies (created with the AssemblyBuilderAccess.Run flag), which are never intended to be written to disk, and "persistable" dynamic assemblies (created with AssemblyBuilderAccess.Save or AssemblyBuilderAccess.RunAndSave), which are.

public enum AssemblyBuilderAccess
{
    Run = 1,        // transient
    Save = 2,       // persistable
    RunAndSave = 3  // persistable
}

One could argue that the DefineDynamicAssembly() method should have been refactored into two distinct methods, perhaps named something like DefinePersistableDynamicAssembly() and DefineTransientDynamicAssembly(), rather than switching semantics based on a flag specified at run time. But nobody ever consults me about these things, so we're stuck with using the AssemblyBuilderAccess enumeration to specify, at run time, what we would like to simply declare at design time.

The AssemblyBuilderAccess flags do have other implications, with respect to the lifetime of dynamically generated code. The .NET run time never garbage-collects code. This is just as true for code generated with Reflection.Emit as it is for conventionally loaded assemblies; the lifetime of all executable code is tied to the lifetime of its application domain. Put another way, the only way to unload code is to unload its application domain.
However, if your dynamic assembly's code is generated via AssemblyBuilderAccess.Save, then the code is not immediately eligible for execution; it exists only as a stream of bytes in memory, and is therefore fair game for garbage collection. Clearly, one should take care when designing systems that will have users generating lots of code into transient dynamic assemblies, because you'll be consuming resources that won't go away unless/until the hosting application domain goes away. In fact, it's interesting to consider how the .NET regular expression classes are designed in this regard, because they don't really offer an easy way for callers to deal with this problem. The Regex class allows one to specify an option (RegexOptions.Compiled) which will cause the underlying implementation of the regex state machine to be generated, via Reflection.Emit, into a transient dynamic assembly.

[Flags]
public enum RegexOptions
{
    None = 0x00,
    IgnoreCase = 0x01,
    Multiline = 0x02,
    ExplicitCapture = 0x04,
    Compiled = 0x08, // Reflection.Emit!
    Singleline = 0x10,
    IgnorePatternWhitespace = 0x20,
    RightToLeft = 0x40,
    ECMAScript = 0x100
}

This feature greatly increases performance for frequently executed regex searches, at some expense of startup time (Reflection.Emit is not terribly slow, but it's not free). However, there's another caveat: Unless you go to great lengths to instantiate and compile your Regex objects within their own application domain, there'll be no way to release the resources they consume; once generated, the IL code that represents the regex state machine will not be released even when the corresponding Regex object is freed and garbage-collected. The only way to unload the compiled regexes' code is to unload the entire application domain. This is all well and good if your app needs only a fixed set of regexes, which are well known at design time.
But it's probably inappropriate if, for example, you're implementing a search engine where users can enter new and various regexes all day long.

There are two ways to work around this problem with RegexOptions.Compiled. One workaround involves a two-phase approach: use the static Regex.CompileToAssembly() method to save your compiled regexes to disk as a persistable dynamic assembly (say, "MyTempRegexes.dll"), then load that assembly into a new application domain via AppDomain.Load(). The other alternative is to instantiate the Regex within a remote application domain directly, via AppDomain.CreateInstance(). This solution is a little cleaner, but requires a bit more typing: because the Regex class is [Serializable] and does not derive from System.MarshalByRefObject, the resulting object (and its IL code) will end up back in your own application domain unless we build a remotable wrapper class (call it, say, MarshalByRefRegex) to expose the underlying regex functionality across the app domain boundary. This is an important reminder: application domains are more than just code-lifetime boundaries; they are also marshaling boundaries.

So if we employ either of these techniques to segregate our transient dynamic assemblies into their own app domains, we will incur a small performance penalty each time we access them, because the calls must be marshaled across an app domain boundary. The performance hit will be light, because the calls are only being marshaled intraprocess, but it will still likely consume much of the performance gain achieved by compiling with Reflection.Emit in the first place, depending on your application, of course. Remember, this issue is only relevant in the (arguably rare) case that we care about unloading our emitted code. In all other cases, we are free to use Reflection.Emit within our own app domains, without worry.
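The precompilation trade-off described above is easy to demonstrate outside .NET as well. As an illustrative aside (not from the original article), Python's re module offers the same idea in miniature: compiling a pattern once avoids re-parsing it on every search, and, unlike compiled .NET regexes, the compiled pattern is an ordinary object that gets garbage-collected when dropped:

```python
import re

# Compile the pattern once, up front, instead of re-parsing it per call.
WORD = re.compile(r"\b\w+\b")

def count_words(text):
    # findall on the precompiled pattern skips the parse step entirely
    return len(WORD.findall(text))

counts = [count_words("the quick brown fox"), count_words("")]
```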
Conclusion

In much the same way that XML has saved us from ever again needing to design low-level file formats for our apps, Reflection.Emit technology may save us from ever again having to devise our own state machines (like regular expression engines) that are so commonplace in advanced applications. Close your eyes, and imagine the possibilities: parsers, interpreters, state machines, static table-lookup code... All of these things and more can now be implemented as fast, native code, optimized for each individual user's platform, thus offering a level of performance never before attainable. The Age of The Interpreter might finally be over.

The downloadable sample code accompanying this article includes a parser and an engine for evaluating RPN arithmetic expressions. The design follows very closely after the classes in the System.Text.RegularExpressions namespace, to offer a familiar look and feel, but also to offer the same level of control over how the code is generated.

Reference: The CIL Instruction Set Specification. Download code at.

Chris Sells is an independent consultant, specializing in distributed applications in .NET and COM, as well as an instructor for DevelopMentor. He's written several books, including ATL Internals, which is in the process of being updated for ATL7 as you read this. He's also working on Essential Windows Forms for Addison-Wesley and Mastering Visual Studio .NET for O'Reilly. In his free time, Chris hosts the Web Services DevCon (November, 2002) and directs the Genghis source-available project. More information about Chris, and his various projects, is available at.

Shawn Van Ness is a software engineer and consultant, specializing in .NET, COM and XML technologies. Shawn has contributed numerous tools and technologies for software development to the public domain. Find out more at.
http://www.drdobbs.com/generating-code-at-run-time-with-reflect/184416570
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. On Ubuntu workstations this is available in the system Python and also in the Anaconda Scientific Python Distribution, which is provided as a module. On managed Windows machines this can be installed easily using WPKG. There is some documentation on the website, including an FAQ, a tutorial and a user's guide.

We have to build this from source on SuSE. We needed to install python-gtk-devel; it builds OK without it, but you won't get a GUI! Numpy is also a dependency.

python setup.py install --prefix=/usr/local/shared/suse-11.1/i386/python

On Ubuntu it's part of the distro. You can test it by doing:

python
from pylab import *
plot([1,2,3,4])
show()

This should draw a straight-line graph.
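On a machine without a display (e.g. over SSH), show() will fail or block. A minimal sketch, assuming a reasonably recent matplotlib, is to select the headless Agg backend and write the figure to a file instead:

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")            # headless backend; select before importing pyplot
import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4])
out_path = os.path.join(tempfile.gettempdir(), "line.png")
plt.savefig(out_path)            # write the figure instead of show()
```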
https://www.ch.cam.ac.uk/computing/software/matplotlib
This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.

> If you can persuade the C++ committee that offsetof is important not
> only for legacy C code but also for new C++ code, then there's a good
> chance that this restriction will be loosened in a future version of the
> language. On the other hand, you can also expect to be asked why
> you want to use offsetof instead of pointers-to-member. (And if you can
> show that there are things you can't do with pointers to members, you
> might find that people will prefer to solve the problem by removing
> whatever limitation pointers to members have.)

I've been having the same problem with offsetof. The thing I want to do that can't be done with member pointers is:

struct B {
  int y;
};
struct A {
  int x;
  B b;
};

int A::*myptr1 = &A::x;    // fine
int A::*myptr2 = &A::b.y;  // error

that is, I want to refer to something in a near-POD nested in a near-POD. The nested design is mainly to break up a large namespace. With normal data layout, this is a reasonable thing to do. It works correctly with offsetof except for the warning. g++-3.3.4 doesn't have the advertised -Wno-invalid-offsetof.

So, my workaround is:

A *dummy = 0;
int myofs = (char *)&dummy->b.y - (char *)dummy;

which isn't the most elegant solution.

-- Trevor Blackwell tlb@anybots.com (650) 210-9272
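The nested-offset computation the poster wants can be demonstrated outside C++ as well. As an illustrative aside (not part of the archived mail), Python's ctypes exposes a per-field offset on Structure classes, so the offset of a nested member is just the sum of the two field offsets:

```python
import ctypes

# Same layout as the mail's structs: A contains an int and a nested B.
class B(ctypes.Structure):
    _fields_ = [("y", ctypes.c_int)]

class A(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int), ("b", B)]

# offsetof(A, b.y) composed from the two per-field offsets
offset_b_y = A.b.offset + B.y.offset
```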
http://gcc.gnu.org/ml/gcc/2004-06/msg00227.html
On Nov 10 2016, Paul Eggert <address@hidden> wrote:

> On 11/10/2016 01:52 AM, Andreas Schwab wrote:
>>> I'm curious about what the difference actually is,
>> 17851608 bytes.
>
> Thanks, and now I'm curious about where that difference comes from. I
> assume that unexelf.c code like this:
>
> #ifdef HAVE_SBRK
>   new_break = sbrk (0);
> #else
>   new_break = (byte *) old_bss_addr + old_bss_size + 17851608;
> #endif
>
> would not be a good idea, even if it happens to work on this particular
> ARM64 build, as the number 17851608 must be specific to the platform
> and/or the build. (The number is 0 on x86-64, for example.) But how can I
> compute the number?

If you don't dump the heap then you miss everything allocated through lisp_malloc that isn't explicitly copied to pure space.
https://lists.gnu.org/archive/html/bug-gnu-emacs/2016-11/msg00365.html
The PVS-Studio team is now actively developing a static analyzer for C# code. The first version is expected by the end of 2015. And for now, my task is to write a few articles to attract C# programmers' attention to our tool in advance. I got an updated installer today, so we can now install PVS-Studio with C# support enabled and even analyze some source code. Without further hesitation, I decided to scan whichever program I had at hand. This happened to be the Umbraco project. Of course we can't expect too much of the current version of the analyzer, but its functionality has been enough to allow me to write this small article.

Umbraco is an open-source content management system platform for publishing content on the World Wide Web and intranets. It is written in C#, and since version 4.5, the whole system has been available under an MIT License. The project is of medium size, but its C# portion is fairly small, while most of the code is written in JavaScript. In all, the project consists of 3200 ".cs" files that make a total of 15 Mbytes. The number of C# code lines is 400 KLOC.

Analysis for this article was done using the alpha version of PVS-Studio 6.00. The release will see two major changes. The pricing policy won't change: we are not making a new product; we are just extending the capabilities of the existing one by simply introducing support for one more programming language. Previously, you could use PVS-Studio to scan projects written in the languages C, C++, C++/CLI, and C++/CX. Now you will get the option to analyze C# projects as well. This won't affect the price in any way. Those who have already purchased the tool to analyze C++ code will be able to analyze C# code too.

I would often argue at conferences that creating a C# analyzer didn't look like an interesting job. Lots of bugs peculiar to C++ are simply impossible in C#. And that's really so.
For example, C# doesn't have such functions as memset(); therefore, it doesn't suffer from the tons of troubles related to it (see examples for memset(): V511, V512, V575, V579, V597, V598). But I gradually changed my mind. You see, most of the bugs detected by PVS-Studio have to do with programmers' carelessness rather than language specifics. By carelessness I mean typos and poor modifications of copy-pasted code. This is what the PVS-Studio analyzer is really good at, and we thought that what had helped in C++ would also help in C#. The C# language doesn't protect you from typing a wrong variable name or from the "last line effect", which has to do with lack of attention.

Another important thing that prompted us to make a C# analyzer was the release of Roslyn. Without it, development would have been just too costly. Roslyn is an open-source platform for analysis and compilation of the C# and Visual Basic languages. Roslyn does two basic operations: it builds a syntax tree (parsing) and compiles it. In addition, it allows you to analyze the source code, recursively traverse it, handle Visual Studio projects, and execute the code at runtime.

For C++, my favorite diagnostic is V501. Now it has a counterpart in the C# module as well: V3001. Let's start with this one.

Code sample No.1

There is an attribute called "focalPoint":

[DataMember(Name = "focalPoint")]
public ImageCropFocalPoint FocalPoint { get; set; }

This attribute is of type 'ImageCropFocalPoint', which is defined as follows:

public class ImageCropFocalPoint
{
  [DataMember(Name = "left")]
  public decimal Left { get; set; }

  [DataMember(Name = "top")]
  public decimal Top { get; set; }
}

It's hard to make any mistake when working with an attribute like that, isn't it? Well, the author of that code made one: a sad typo in the method HasFocalPoint():

public bool HasFocalPoint()
{
  return FocalPoint != null &&
         FocalPoint.Top != 0.5m &&
         FocalPoint.Top != 0.5m;
}

'Top' is checked twice, while 'Left' is not checked at all.
PVS-Studio's diagnostic message: V3001 There are identical sub-expressions 'FocalPoint.Top != 0.5m' to the left and to the right of the '&&' operator. ImageCropDataSet.cs 58

Code sample No.2

protected virtual void OnBeforeNodeRender(ref XmlTree sender,
                                          ref XmlTreeNode node,
                                          EventArgs e)
{
  if (node != null && node != null)
  {
    if (BeforeNodeRender != null)
      BeforeNodeRender(ref sender, ref node, e);
  }
}

PVS-Studio's diagnostic message: V3001 There are identical sub-expressions 'node != null' to the left and to the right of the '&&' operator. BaseTree.cs 503

The 'node' reference is checked twice. The 'sender' reference was probably meant to be checked too.

Code sample No.3

public void Set (ExifTag key, string value)
{
  if (items.ContainsKey (key))
    items.Remove (key);
  if (key == ExifTag.WindowsTitle ||   // <=
      key == ExifTag.WindowsTitle ||   // <=
      key == ExifTag.WindowsComment ||
      key == ExifTag.WindowsAuthor ||
      key == ExifTag.WindowsKeywords ||
      key == ExifTag.WindowsSubject)
  {
    items.Add (key, new WindowsByteString (key, value));
    ....
}

PVS-Studio's diagnostic message: V3001 There are identical sub-expressions 'key == ExifTag.WindowsTitle' to the left and to the right of the '||' operator. ExifPropertyCollection.cs 78

'key' is compared twice to the 'ExifTag.WindowsTitle' constant. I can't say for sure how serious this bug is. Perhaps one of the checks is just superfluous and can be removed. But it's also possible that the comparison should be done over some other variable.

Code sample No.4

Here's another example where I'm not sure if there is a real error. However, this code is still worth reviewing. We have an enumeration with 4 named constants:

public enum DBTypes
{
  Integer,
  Date,
  Nvarchar,
  Ntext
}

For some reason, the SetProperty() method handles only 3 options. Again, I'm not saying this is a mistake. But the analyzer suggests reviewing this fragment, and I totally agree with it.

public static Content SetProperty(....)
{
  ....
  switch (((DefaultData)property.PropertyType.
            DataTypeDefinition.DataType.Data).DatabaseType)
  {
    case DBTypes.Ntext:
    case DBTypes.Nvarchar:
      property.Value = preValue.Id.ToString();
      break;
    case DBTypes.Integer:
      property.Value = preValue.Id;
      break;
  }
  ....
}

PVS-Studio's diagnostic message: V3002 The switch statement does not cover all values of the 'DBTypes' enum: Date. ContentExtensions.cs 286

Code sample No.5

public TinyMCE(IData Data, string Configuration)
{
  ....
  if (p.Alias.StartsWith("."))
    styles += p.Text + "=" + p.Alias;
  else
    styles += p.Text + "=" + p.Alias;
  ....
}

PVS-Studio's diagnostic message: V3004 The 'then' statement is equivalent to the 'else' statement. TinyMCE.cs 170

Code samples No.6, No.7

At the beginning of the article, I said that C# doesn't protect you from the "last line effect". Here's an example to prove that:

public void SavePassword(IMember member, string password)
{
  ....
  member.RawPasswordValue = result.RawPasswordValue;
  member.LastPasswordChangeDate = result.LastPasswordChangeDate;
  member.UpdateDate = member.UpdateDate;
}

PVS-Studio's diagnostic message: V3005 The 'member.UpdateDate' variable is assigned to itself. MemberService.cs 114

The programmer was copying class members from the object 'result' to 'member'. But at the end (s)he relaxed and unwittingly copied the member 'member.UpdateDate' into itself. Another thing that makes me feel suspicious about this code is that the SavePassword() method deals with passwords, which means one must be especially careful about it. The same code fragment can be found in file UserService.cs (see line 269). My guess is that the programmer simply copied it there without checking.

Code sample No.8

private bool ConvertPropertyValueByDataType(....)
{
  if (string.IsNullOrEmpty(string.Format("{0}", result)))
  {
    result = false;
    return true;
  }
  ....
  return true;
  ....
  return true;
  ....
  return true;
  ....
  return true;
  ....
  ....
  return true;
}

PVS-Studio's diagnostic message: V3009 It's odd that this method always returns one and the same value of 'true'. DynamicNode.cs 695

The method uses lots of 'if' and 'return' statements. What doesn't look right to me is that all the 'return' statements return 'true'. Isn't there a bug somewhere? What if some of those should return 'false'?

Code sample No.9

Now let's test your attentiveness: try to find a bug in the code fragment below. Just examine the method, but don't read my explanation after it. To prevent you from accidentally reading it, I inserted a separator (a unicorn image :).

public static string GetTreePathFromFilePath(string filePath)
{
  List<string> treePath = new List<string>();
  treePath.Add("-1");
  treePath.Add("init");
  string[] pathPaths = filePath.Split('/');
  pathPaths.Reverse();
  for (int p = 0; p < pathPaths.Length; p++)
  {
    treePath.Add(
      string.Join("/", pathPaths.Take(p + 1).ToArray()));
  }
  string sPath = string.Join(",", treePath.ToArray());
  return sPath;
}

Figure 1. Separating code from explanation.

PVS-Studio's diagnostic message: V3010 The return value of function 'Reverse' is required to be utilized. DeepLink.cs 19

When calling the Reverse() method, the programmer intended to change the array 'pathPaths'. (S)he was probably misled by the fact that an operation like that is totally correct when we deal with lists (List<T>.Reverse). But when applied to arrays, the Reverse() method doesn't change the original array. For arrays, this method resolves to the extension method Reverse() of the 'Enumerable' class, which returns a reversed sequence rather than reversing the items in place.

A correct way of doing that would be like this:

string[] pathPaths = filePath.Split('/');
pathPaths = pathPaths.Reverse().ToArray();

Or even like this:

string[] pathPaths = filePath.Split('/').Reverse().ToArray();

Code sample No.10

The PVS-Studio analyzer output a few V3013 warnings reporting some methods whose bodies looked strangely alike. To my mind, all of those are false positives. Only one of the warnings is probably worth checking out:

public void GetAbsolutePathDecoded(string input, string expected)
{
  var source = new Uri(input, UriKind.RelativeOrAbsolute);
  var output = source.GetSafeAbsolutePathDecoded();
  Assert.AreEqual(expected, output);
}

public void GetSafeAbsolutePathDecoded(string input, string expected)
{
  var source = new Uri(input, UriKind.RelativeOrAbsolute);
  var output = source.GetSafeAbsolutePathDecoded();
  Assert.AreEqual(expected, output);
}

PVS-Studio's diagnostic message: V3013 It is odd that the body of 'GetAbsolutePathDecoded' function is fully equivalent to the body of 'GetSafeAbsolutePathDecoded' function. UriExtensionsTests.cs 141

Inside the GetAbsolutePathDecoded() method, we probably need to call source.GetAbsolutePathDecoded() instead of source.GetSafeAbsolutePathDecoded(). I'm not sure about that, but this spot should be inspected.

The article is meant for a new audience, so I anticipate a number of questions people may want to ask. I'll try to answer these questions in advance. Did you report the bugs you'd found to the project developers? Yes, we try to do it all the time. Do you run PVS-Studio on itself? Yes. Does PVS-Studio support Mono? No. For more detailed answers to these and other questions, see the post "Readers' FAQ on Articles about PVS-Studio".

There aren't many bugs in this project. Our C++-oriented readers know why that is, but since we have yet to charm and lure C# programmers into our camp, I'll clarify some important points here:

Thank you for reading this article, and may your programs stay bug-free!
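As a toy illustration of the kind of check behind V3001 (identical sub-expressions around && or ||), here is a sketch in Python using its ast module. This is my own simplification for illustration, not how PVS-Studio is actually implemented:

```python
import ast

def duplicated_operands(source):
    """Report operands of `and`/`or` chains that appear more than once,
    a toy analog of the V3001 identical-sub-expression check."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BoolOp):       # an `and`/`or` chain
            seen = set()
            for operand in node.values:
                key = ast.dump(operand)        # structural fingerprint
                if key in seen:
                    findings.append(ast.unparse(operand))
                seen.add(key)
    return findings
```

Running it on the equivalent of the first sample flags the repeated comparison: `duplicated_operands("ok = fp.top != 0.5 and fp.top != 0.5")` returns `['fp.top != 0.5']`.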
https://www.viva64.com/en/b/0357/
When do you call spiders.job.run()? Are you using a pipeline, so you can use the `close_spider` function? In the pipeline you could create an empty item list when you start the spider, append the items, and iterate through them in `close_spider`.

Okay. I have already done this when I was using Scrapy. But how will I do this with the scrapinghub library? In the case of Scrapinghub:

from scrapinghub import ScrapinghubClient

client = ScrapinghubClient('API KEY')
project = client.get_project(PROJECT_ID)
spider = project.spiders.get(spider_name)
job = spider.jobs.run(site_id=site_id, url=url, scraper_id=scraper_id)

item_list = list()
for item in job.items.iter():
    if 'opportunities' in item:
        item_list.append(item['opportunities'])
print item_list

Now, when I run this, it prints an empty list. The request is not waiting for the job to complete its process. How will I get the result? Please send a suggestion as soon as possible.

Hi Scrapinghub Team, could you please provide your technical support contact number? It would help me a lot. Thanks, gauravinvolvesoft

Hi, I am using the scrapinghub Python library. I ran a spider using spider.jobs.run() and then I used job.items.iter() in the Python script. The problem is that when Python requests to run the spider, it does not wait for completion, and because of this it gives me a blank list as output. So, please suggest how the request can wait for the scraping to complete and then give the result. Is there any method which will wait for the job to complete its scraping and provide the scraped items as output? Thanks
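A common way to handle this is to poll the job until it reports finished before reading its items. With python-scrapinghub, the job's state is available from its metadata (e.g. job.metadata.get('state') == 'finished'); treat that exact field name as an assumption to verify against your library version. The waiting itself is just a polling loop, sketched here with a pluggable check so it can run standalone:

```python
import time

def wait_until(check, timeout=600, interval=5):
    """Poll check() until it returns a truthy value or timeout expires.
    With python-scrapinghub, check might be something like:
        lambda: job.metadata.get('state') == 'finished'   # assumed field name
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("job did not finish within %s seconds" % timeout)
```

Once wait_until(...) returns, iterating job.items.iter() should yield the scraped items instead of an empty stream.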
https://support.scrapinghub.com/support/discussions/topics/22000010103
So I have a problem that reads: Each line of the file consists of a student's name followed by an unpredictable number of test scores. The number of students is also unpredictable. The desired output is as shown, where the numbers there represent the average test score rounded to the nearest whole number. Create a class called StudentAverages that will input the StudentScores.in text file and produce the indicated output.

I attached the file that I have to read from. My code so far is:

import java.io.*;
import java.util.*;

public class fdhg
{
    public static void main(String args[]) throws IOException
    {
        Scanner sf = new Scanner(new File("Grades.txt"));
        int maxIndx = -1;
        String text[] = new String[1000];
        while (sf.hasNext())
        {
            maxIndx++;
            text[maxIndx] = sf.nextLine();
        }
        sf.close();

        String answer = "";
        int sum = 0;
        double average = 0;
        int n = 0;
        String a = "";
        for (int j = 0; j <= maxIndx; j++)
        {
            Scanner sc = new Scanner(text[j]);
            sum = 0;
            String s = "";
            n = 0;
            while (sc.hasNext())
            {
                int i = sc.nextInt();
                sum = sum + i;
                n = n + 1;
            }
            average = sum / n;
            System.out.println(average);
        }
    }
}

So I know how to get the averages if there were just numbers in the file, but there is a name at the beginning. How do I print out their name without having any idea what the contents of the file are? Thanks for any help you can offer.

Edited 3 Years Ago by Dani: Formatting fixed
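Two things are missing in the code above: consuming the name token before summing (otherwise nextInt() hits the name and throws), and avoiding integer division, since `sum / n` truncates instead of rounding to the nearest whole number. The per-line logic, sketched in Python for brevity (in Java, `sc.next()` would read the name and `Math.round((double) sum / n)` would do the rounding; this sketch assumes a one-token name):

```python
def student_averages(lines):
    """Each line: a name followed by any number of integer scores.
    Returns (name, average rounded to the nearest whole number)."""
    results = []
    for line in lines:
        name, *scores = line.split()         # first token is the name
        nums = [int(s) for s in scores]
        avg = int(sum(nums) / len(nums) + 0.5)   # round half up
        results.append((name, avg))
    return results
```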
https://www.daniweb.com/programming/software-development/threads/406726/how-to-import-a-file-and-find-the-average-of-the-numbers-in-the-file
Create a triangle in OpenGL, Part I

Go to OpenGL Home

Last time we saw how to create a triangle using constant vertices in a vertex shader. Let's say that we have a model like a monster or a car, with a mesh containing around 2000 vertices. What are we going to do? Write all 2000 vertices in our vertex shader? No, that would be madness; we have to pass them dynamically to our shader. This way we can pass different models with different numbers of vertices to the same shader. Usually a game model is loaded from a file exported from Maya/3DS Max or other modeling programs. To keep things super simple, let's create the same triangle from the previous tutorial in a C++ class and send it to the vertex shader. In a future tutorial we will see how to load meshes from a file.

In this first part we will talk about:
- Vertex format and attributes
- Creating a basic class to hold our objects

In part II we will talk about:
- Vertex Array Object (VAO)
- Vertex Buffer Object (VBO)

In part III we will talk about:
- Tying attributes
- Modifying our vertex shader
- Drawing the triangle

First we want to create a structure called VertexFormat which describes the contents (also called attributes) of a vertex, like position, texture coordinates, normals, material, etc. For the moment we only need position for our triangle. Because we are in 3D, we always need to know the x, y, z values of a vertex. For this reason we will use a 3D vector, which is the simplest way to represent the position. Later on we will need to do vector operations like dot product, cross product, subtraction, addition, etc., so using a vector makes sense. We can create a vector class to handle these operations, or we can save some time and use a math library. Let's use OpenGL Mathematics (GLM), because it provides all we need at this moment.
To use GLM in our project, you can download the latest archive from their website and put the contents in the Dependencies folder, near the GLEW and FreeGLUT folders, or we can install it using the NuGet Package Manager. This time I will use NuGet to add GLM to our project. When the NuGet console pops up, just write Install-Package glm, and that's all. Now we are ready to create our VertexFormat structure:

#ifndef VertexFormat_H_
#define VertexFormat_H_

#include "glm\glm.hpp" // installed with NuGet

struct VertexFormat
{
    glm::vec3 position; // our first vertex attribute

    VertexFormat(const glm::vec3 &pos)
    {
        position = pos;
    }
};

#endif

Next step is to create a C++ class called GameModels to hold our game models:

#ifndef _GAME_MODELS_H_
#define _GAME_MODELS_H_
#pragma once

#include "../Dependencies/glew/glew.h"
#include "../Dependencies/freeglut/freeglut.h"
#include "VertexFormat.h" // added 3/21/2015
#include <string>
#include <vector>
#include <map>

namespace Models
{
    // I explain this structure in Part III
    struct Model
    {
        unsigned int vao;
        std::vector<unsigned int> vbos;
        Model() {}
    };

    class GameModels
    {
    public:
        GameModels();
        ~GameModels();
        void CreateTriangleModel(const std::string& gameModelName);
        void DeleteModel(const std::string& gameModelName);
        unsigned int GetModel(const std::string& gameModelName);

    private:
        std::map<std::string, Model> GameModelList; // keeps our models
    };
}

#endif

We will expand and modify this class in future tutorials, but for the moment this is all we need to create our triangle. We store our models in a map and access them by name. The private map GameModelList holds Vertex Array Objects (VAOs), which we will talk about in the next tutorial. Don't forget to create GameModels.cpp too. So far, you should have these folders and files. packages.config comes with the NuGet configuration for GLM.
http://in2gpu.com/2014/12/02/create-triangle-opengl-part/
Page Objects Refactored: SOLID Steps to the Screenplay/Journey Pattern

Automated web tests should follow good object-oriented design principles, too. This article introduces the Screenplay Pattern, an alternative approach that could save you the trouble.

Why Page Objects

PageObjects were recommended at a time when Selenium and WebDriver were being used increasingly by testing teams, and not necessarily by those with extensive programming skills. Prior to this, many test-automators would get themselves into trouble creating tests that had repetition and inconsistencies, causing 'flaky' tests. For some, this gave test automation a bad name, as those inexperienced in programming misinterpreted the failings of poor coding for failings of the testing frameworks. Simon Stewart talked of this in his article "My Selenium Tests aren't Stable" when he said:

"Firstly, let's state clearly: Selenium is not unstable, and your Selenium tests don't need to be flaky. The same applies for your WebDriver tests. [...] When your tests are flaky, do some root cause analysis to understand why they're flaky. It's very seldom because you've uncovered a bug in the test framework."

Page Objects & Baby Steps

The solution to this problem had to be good enough to prevent test code becoming too riddled with code smells, and be accessible to the many testers "scripting" automated tests with little or no OO programming experience. In an ideal world, this would have sparked interest in the concepts behind the design of PageObjects and resulted in continuous and merciless refactoring, removing the code smells as they emerged. Unfortunately, as with many examples, PageObjects have remained a wholesale drop-in solution for many teams, who have managed to apply some refactoring, but not enough to avoid subsequent maintenance overheads. Some teams claim this hasn't affected them.
However, many teams we've worked with, making similar claims, later realised that they had simply become accustomed to the inherent overheads, not realising that life could actually be a lot easier.

The Common Code Smell in Page Objects

"A code smell is a surface indication that usually corresponds to a deeper problem in the system. The term was first coined by Kent Beck while helping me with my Refactoring book." -Martin Fowler

The most common code smell we see in PageObjects is 'Large Class'. Of course, the term 'Large' is subjective, and we've met a variety of people with different views on what 'Large' means. Many well-regarded programmers consider a class that is larger than a single screen (or their head) a "Large Class". This is the guideline that we subscribe to. A reasonable size by these standards would amount to 50-60 lines of code (including imports, requires, etc.). If you are reading this thinking "yep, I'm with you", then we're on the same page. If you are thinking "a class that small can't do anything useful", then I hope we can change your mind.

Let's start with the de-facto LoginPage example on the Selenium Wiki. Representing just two fields and one button, it requires 45 lines of code (minus the comments), already near the upper limit. Consider a more involved page with tens of elements, and this number can grow considerably, as in this example with over 200 lines of code.

Why is this a problem? Apart from taking longer to understand what a class does (and how), a large class is an indicator that other good programming principles are absent. For example, such classes are likely to have multiple responsibilities, violating the Single Responsibility Principle (SRP) and making it harder to see where a change to the code should be made. They are also more likely to contain duplication, which can result in a bug being fixed in one place but recurring elsewhere.
One approach to this is to think of each PageObject not as an object representing a page but as objects within a page, as described by Martin Fowler. This means thinking of each object like a widget or page component. Even still, if a component has more than a couple of fields and buttons (as with the LoginPage example), the PageObject can still grow well beyond the 50-60 lines mark. Addressing this is where some SOLID principles can be helpful.

SOLID Principles

Maintainability of code can make or break any project, whether it is to write test code, production code or both. If maintenance overheads increase, the time-to-market increases along with costs. Code smells help us recognise when there is a potential problem. SOLID principles help us recognise when we've resolved those problems in an effective way. SOLID is an acronym coined by Michael Feathers and Bob Martin that encapsulates five good object-oriented programming principles:

- Single Responsibility Principle
- Open Closed Principle
- Liskov Substitution Principle
- Interface Segregation Principle
- Dependency Inversion Principle

For the purposes of this article, we'll concentrate on the two that have the most noticeable effect on refactoring of PageObjects: the Single Responsibility Principle (SRP) and the Open Closed Principle (OCP).

SRP

The SRP states that a class should have only one responsibility and therefore only one reason to change. This reduces the risk of us affecting other unrelated behaviours when we make a necessary change.

"If a class has more than one responsibility, then the responsibilities become coupled. Changes to one responsibility may impair or inhibit the class' ability to meet the others.
This kind of coupling leads to fragile designs that break in unexpected ways when changed." -Robert Martin, Agile Principles, Patterns & Practices

PageObjects commonly have the following responsibilities:

- Provide an abstraction to the location of elements on a page, via a meaningful label for what those elements mean in business terms.
- Describe the tasks that can be completed on a page using its elements (often, but not always, expressing navigation in the PageObject returned by a task).

Let's consider a simple todo list application (e.g. the Todo MVC example app). The structure of a typical PageObject for the above example might look like this:

As these responsibilities are in a single class, when the way we locate a specific element is altered, this class requires change. If the sequence of interactions required to complete a task changes, again so must this class. There we have more than one reason for change, violating the SRP.

OCP

The Open Closed Principle (coined by Bertrand Meyer in Object-Oriented Software Construction) states that a class should be open for extension, but closed for modification. This means that it should be possible to extend behaviour by writing a new class without changing existing, working code.

-Robert C. Martin, The Open Closed Principle, C++ Report, Vol. 8, January 1996

In practice, this means:

"Adding a new feature would involve leaving the old code in place and only deploying the new code, perhaps in a new jar or dll or gem." -Robert Martin, The Open Closed Principle

Let's say you have a TodoListPage and we want to add the ability to sort todo items alphabetically. The most likely approach would be to 'open' the TodoListPage class and modify it with a new method to handle this behaviour. These steps would likely be repeated for any behaviour added to the page. Furthermore, there are times when a task may span multiple pages. Having to take this approach may cause you to artificially split the task up across two classes to conform to PageObject dogma.
Having to take this approach may cause you to artificially split the task up across two classes to conform to PageObject dogma. To satisfy the OCP, it should be possible to simply add a new class that describes how to sort the list.

Refactoring with the SRP & OCP in mind

To satisfy the OCP, a naive approach would be to extract smaller PageObjects, so that adding a new behaviour involves adding a new class. While this satisfies the OCP in this example, it does not satisfy the SRP. In each of the resulting classes, the two responsibilities of a) how to find the elements; and b) how to complete a given task; are still grouped together. Each class has two reasons to change, violating the SRP. Additionally, the task responsibility is limited to the elements of a single page, and where tasks share an element, the tasks must either duplicate it or refer to elements declared in other tasks. This is bad.

To satisfy the SRP, a similarly naive approach might be to extract a class for each obvious responsibility, such as locating elements and performing tasks. While this satisfies the SRP, it does not satisfy the OCP (each new task requires editing the Tasks class).

Instead, a less naive approach is to combine the Extract Class refactoring with the Replace Method with Method Object refactoring. The result satisfies both the OCP and the SRP. New behaviours do not require you to modify existing classes, and each class has only one reason to change: if the way we locate elements no longer works or, for the tasks, if the way a given task is performed no longer works.

The key phrase here in Robert's explanation of the OCP is: "…you extend the behavior of such modules by adding new code, not by changing old code that already works." This is where we begin to move away from PageObjects as we know them, and see something closer to the Screenplay Pattern (formerly known as the Journey Pattern) begin to emerge.
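To make the end point of that refactoring concrete, here is a minimal runnable sketch of combining Extract Class with Replace Method with Method Object. It is plain Java, not Serenity: FakeActor and its logging are invented stand-ins so no browser is needed, and the #sort-az locator is hypothetical. Each behaviour becomes its own "method object" class, and the locators live in a single elements class, so adding the alphabetical sort means adding a new class rather than editing an old one:

```java
import java.util.ArrayList;
import java.util.List;

// One place for locators: a single reason to change (SRP).
class ToDoListElements {
    static final String WHAT_NEEDS_TO_BE_DONE = "#new-todo";
    static final String SORT_ALPHABETICALLY = "#sort-az"; // hypothetical locator for the new feature
}

// The method-object contract: each task knows how to perform itself.
interface Task {
    void performAs(FakeActor actor);
}

// One task per class. Adding "sort alphabetically" later means adding a
// new Task class, not modifying this one (OCP).
class AddATodoItem implements Task {
    private final String thingToDo;
    static AddATodoItem called(String thingToDo) { return new AddATodoItem(thingToDo); }
    private AddATodoItem(String thingToDo) { this.thingToDo = thingToDo; }
    @Override public void performAs(FakeActor actor) {
        actor.enter(thingToDo, ToDoListElements.WHAT_NEEDS_TO_BE_DONE);
    }
}

// The new behaviour arrives as new code; existing classes are untouched.
class SortItemsAlphabetically implements Task {
    @Override public void performAs(FakeActor actor) {
        actor.click(ToDoListElements.SORT_ALPHABETICALLY);
    }
}

// Minimal actor: executes whatever tasks it is given.
class FakeActor {
    final List<String> log = new ArrayList<>();
    void attemptsTo(Task... tasks) { for (Task task : tasks) task.performAs(this); }
    void enter(String value, String target) { log.add(target + " <= " + value); }
    void click(String target) { log.add("click " + target); }
}
```

Each class now has exactly one reason to change, which is the shape the article's refactoring arrives at.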
Dogmatically, the TodoListPage in this example doesn't quite satisfy the OCP. However, we believe this is a reasonable compromise where discoverability of cohesive elements is useful (i.e. elements that make sense as a single unit; e.g. a todo list item has a title, a complete button and a delete button that don't make sense independently). Furthermore, these classes are essentially metadata and carry very low risk when changed.

Domain Strain

Beyond these design issues, there is another, more fundamental problem with PageObjects. Some developers and testers may think of the product they're developing or testing in terms of "pages" (although the relevance of this in an era of single-page web apps is debatable). User stories (done right) steer us to think in terms of something valuable a user will be able to achieve once the resulting capability is implemented. For this reason, we believe that behaviours are the primary concern of our tests; the implementation, a secondary concern. This is especially true if you're employing BDD and writing acceptance tests, where the user's ability to complete a goal is more relevant than the eventual solution.

For these reasons, we start from a different perspective:

Roles ➭ Who is this for?
  Goals ➭ Why are they here and what outcome do they hope for?
    Tasks ➭ What will they need to do to achieve these goals?
      Actions ➭ How do they complete each task through specific interactions?

By starting from this point of view, we change the way we perceive the domain and therefore how we model it. This thinking takes us away from pages. Instead, we find we have actors with the abilities to play a given role. Each test scenario becomes a narrative describing the tasks and how we expect a story to play out for a given goal. This perspective, combined with the OO design principles we've outlined above, is what steers us away from PageObjects and gave us the Screenplay Pattern.
The Screenplay Pattern by Example

The Screenplay Pattern has been around since 2007 and arose independently of PageObjects. We introduce it as a refactoring because, for many, it helps to start somewhere familiar. To understand it more fully, we'll take you step-by-step through an example, using the built-in support for this pattern in the Serenity framework. All of these examples can be found on GitHub.

For this example, we're going to continue using the Todo MVC sample app as inspiration, starting with a simple user story:

As James (the just-in-time kinda guy)
I want to capture the most important things I need to do
So that I don't leave so many things until the last minute

We're going to apply the following thinking to how we implement one of the acceptance tests for this story:

Roles ➭ Who is this for?
  Goals ➭ Why are they here and what outcome do they hope for?
    Tasks ➭ What will they need to do to achieve these goals?
      Actions ➭ How do they complete each task through specific interactions?

Which, using the built-in Screenplay support in the Serenity framework, results in a scenario that looks like this:

@Test
public void should_be_able_to_add_the_first_todo_item() {
    givenThat(james).wasAbleTo(Start.withAnEmptyTodoList());
    when(james).attemptsTo(AddATodoItem.called("Buy some milk"));
    then(james).should(seeThat(TheItems.displayed(), hasItem("Buy some milk")));
}

Let's explore the basics of how this works before we delve into more detail…

Roles

In the example scenario, "James" is a persona we are using to understand a specific role. An Actor is used to play this role, performing each Task in the scenario:

Actor james = Actor.named("James");

Goals

We see from the scenario title that his goal is to add the first todo item.
@Test
public void should_be_able_to_add_the_first_todo_item()

Tasks

To achieve this goal, there are two tasks involved: Start.withAnEmptyTodoList(), which in this case opens the app in a browser with an empty todo list; and AddATodoItem.called("Buy some milk"). To see if this goal has been fulfilled, James performs another special kind of task: he asks the question, what are TheItems.displayed()?

While we like to use given/when/then, these statements can be written without those methods, which is how we would write these tasks in a Cucumber step definition:

@Given("James starts with one item in his todo list")
public void start_with_one_todo() {
    james.attemptsTo(
        Start.withAnEmptyTodoList(),
        AddATodoItem.called("Buy some milk")
    );
}

Note that both attemptsTo() and wasAbleTo() can take a list of tasks to perform.

Actions

Within each task is a performAs() method where the 'instructions' for the task live. These are the actions required to complete that task. In this case we enter some text:

public <T extends Actor> void performAs(T theActor) {
    theActor.attemptsTo(
        Enter.theValue(thingToDo)
            .into(WHAT_NEEDS_TO_BE_DONE)
            .thenHit(RETURN)
    );
}

Screen Components

Actions, like Enter.theValue(thingToDo).into(WHAT_NEEDS_TO_BE_DONE), can interact with elements of a screen component:

public class ToDoList {
    public static Target WHAT_NEEDS_TO_BE_DONE =
        the("'What needs to be done?' field").locatedBy("#new-todo");
    public static Target ITEMS =
        the("List of todo items").locatedBy(".view label");
    //…
}

Now, let's examine how this all works in a little more detail.

Actors Have Abilities

James is the name that we've given to an instance of an Actor who, in this case, has the Ability to BrowseTheWeb. We say who James is and what abilities he has like this (perhaps in a @Before method):

Actor james = Actor.named("James");
james.can(BrowseTheWeb.with(hisBrowser));

An Actor is responsible for performing things. This can be anything that is Performable, such as a Task or Action.
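Behind calls like Actor.named(…) and actor.can(…), an actor is essentially a named holder of abilities that tasks and actions look up later. The following is a simplified sketch of that mechanism, not Serenity's actual implementation: this string-based BrowseTheWeb is invented for the illustration, whereas Serenity's wraps a WebDriver instance:

```java
import java.util.HashMap;
import java.util.Map;

// Marker for anything an actor can be given.
interface Ability {}

// Stand-in ability: in Serenity this would hold a WebDriver instance.
class BrowseTheWeb implements Ability {
    final String browser;
    private BrowseTheWeb(String browser) { this.browser = browser; }
    static BrowseTheWeb with(String browser) { return new BrowseTheWeb(browser); }
}

class Actor {
    private final String name;
    private final Map<Class<?>, Ability> abilities = new HashMap<>();

    private Actor(String name) { this.name = name; }
    static Actor named(String name) { return new Actor(name); }

    // Abilities are registered up front (e.g. in a @Before method)...
    void can(Ability ability) { abilities.put(ability.getClass(), ability); }

    // ...and actions retrieve the ability they need when they run.
    <T extends Ability> T abilityTo(Class<T> type) { return type.cast(abilities.get(type)); }

    String getName() { return name; }
}
```

This lookup is what lets an Enter action find the browser without the task or test ever touching Selenium directly.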
In Serenity, we say Actor.named("James") so that we get this information in some nice reporting (more on this another time). This separation of actors and their respective browsers also makes it easy to have more than one actor when relevant:

@Before
public void ourUsersCanBrowseTheWeb() {
    james.can(BrowseTheWeb.with(hisBrowser));
    jane.can(BrowseTheWeb.with(herBrowser));
}

@Test
public void should_not_affect_todos_belonging_to_another_user() {
    givenThat(james).wasAbleTo(Start.withATodoListContaining("Walk the dog", "Put out the garbage"));
    andThat(jane).wasAbleTo(Start.withATodoListContaining("Walk the dog", "Feed the cat"));

    when(james).attemptsTo(
        CompleteItem.called("Walk the dog"),
        Clear.completedItems()
    );

    then(jane).should(seeThat(TheItems.displayed(), contains("Walk the dog", "Feed the cat")));
}

Tasks Involve Actions

The task AddATodoItem in our scenario is created by the line AddATodoItem.called("Buy some milk") and is given to our actor, who will call its performAs(Actor) method.

If you are familiar with Hamcrest (also found within JUnit), you'll be comfortable with using static creation methods to instantiate an object that is passed in for deferred execution (i.e. Hamcrest matchers, for example equalTo(…) or hasItems(…)). In Hamcrest, you pass a matcher to an Assert class; here, we pass a Task to an instance of an Actor via its attemptsTo() or wasAbleTo() methods.

One or more actions will be required to complete a task. Because this is a very simple task, there happens to be only one Action involved: typing some text into a field followed by hitting the return key. You write these in the performAs() method of the task. The Actor will call this method, passing in a reference to itself.
You might be more familiar with a method like this being one of many methods on a TodoListPage class. With the Screenplay Pattern, however, applying the SRP and OCP, this task is literally in a class of its own:

public class AddATodoItem implements Task {

    private final String thingToDo;

    public <T extends Actor> void performAs(T theActor) {
        theActor.attemptsTo(
            Enter.theValue(thingToDo)
                .into(WHAT_NEEDS_TO_BE_DONE)
                .thenHit(RETURN)
        );
    }

    public static AddATodoItem called(String thingToDo) {
        return instrumented(AddATodoItem.class, thingToDo);
    }

    public AddATodoItem(String thingToDo) {
        this.thingToDo = thingToDo;
    }
}

Note: the instrumented(…) method happens to be from the Serenity framework, enabling reporting features such as writing each task into the report along with the actions performed.

We find that most of the time we are interested in how the task was performed and much less in how it was created, so we place the most valuable information at the top of the class. Further down is where we put the creation method and constructor, where they "cuddle" (i.e. no extra line break) to show how close they are in their relationship to each other.

Actions Abstractions

There are some really good reasons to abstract yourself away from any 3rd-party library, even when that is Selenium/WebDriver. The Enter action is built into Serenity, following the same pattern as our tasks, and helps us to remain isolated from Selenium:

public <T extends Actor> void performAs(T theActor) {
    theActor.attemptsTo(
        Enter.theValue(thingToDo)
            .into(WHAT_NEEDS_TO_BE_DONE)
            .thenHit(RETURN)
    );
}

This Action requires the Actor to have the Ability to BrowseTheWeb, which was given to the actor earlier. Remember how in a @Before method you might write:

Actor james = Actor.named("James");
james.can(BrowseTheWeb.with(hisBrowser));

With the Ability and Action concepts (along with the Question and Target classes explained below), Selenium is kept very much behind the scenes.
This approach continues our use of the SRP and OCP, allowing us to add new and interesting ways to leverage Selenium, or whatever framework is behind this abstraction. In future, there may be numerous jars available with many different implementations of how we interact with the product, perhaps using another framework to 'browse the web' or 'use a mobile app' or 'use REST APIs' and so on.

Questions and Consequences

Beyond tasks and the actions within them, the actor needs to understand the expected Consequence of a scenario. In this case the actor should:

seeThat(TheItems.displayed(), hasItem("Buy some milk"));

This is made up of the question TheItems.displayed() and a Hamcrest matcher hasItem("Buy some milk"). The actor is then able to evaluate that consequence within its should method:

then(james).should(seeThat(TheItems.displayed(), hasItem("Buy some milk")));

This is how the question TheItems.displayed() can be implemented:

public class TheItems implements Question<List<String>> {

    @Override
    public List<String> answeredBy(Actor actor) {
        return Text.of(ToDoList.ITEMS)
                   .viewedBy(actor)
                   .asList();
    }

    public static Question<List<String>> displayed() {
        return new TheItems();
    }
}

User Interface

The elements WHAT_NEEDS_TO_BE_DONE and ToDoList.ITEMS from the task and question above can be found in a representation of a 'screen component' (or widget if you prefer). Each Target within this component gives us the means of locating a given element on the screen. In our example, these can be found in our user_interface package.
package net.serenitybdd.demos.todos.user_interface;

import net.serenitybdd.screenplay.targets.Target;

import static net.serenitybdd.screenplay.targets.Target.the;

public class ToDoList {
    public static Target WHAT_NEEDS_TO_BE_DONE = the("'What needs to be done?' field").locatedBy("#new-todo");
    public static Target ITEMS = the("List of todo items").locatedBy(".view label");
    public static Target ITEMS_LEFT = the("Count of items left").locatedBy("#todo-count strong");
    public static Target TOGGLE_ALL = the("Complete all items link").locatedBy("#toggle-all");
    public static Target CLEAR_COMPLETED = the("Clear completed link").locatedBy("#clear-completed");
    public static Target FILTER = the("filter").locatedBy("//*[@id='filters']//a[.='{0}']");
    public static Target SELECTED_FILTER = the("selected filter").locatedBy("#filters li .selected");
}

Each screen component is a cohesive set of elements that belong together. Here is another example, the TodoListItem:

package net.serenitybdd.demos.todos.user_interface;

import net.serenitybdd.screenplay.targets.Target;

import static net.serenitybdd.screenplay.targets.Target.the;

public class TodoListItem {
    public static Target COMPLETE_ITEM = the("Complete item tick box")
        .locatedBy("//*[@class='view' and contains(.,'{0}')]//input[@type='checkbox']");
    public static Target DELETE_ITEM = the("Delete item button")
        .locatedBy("//*[@class='view' and contains(.,'{0}')]//button[@class='destroy']");
}

These screen components are essentially data (which you could, of course, refactor out into, say, a JSON file), but for convenience and discoverability they have been implemented here as simple classes with constants. Were we to take the OCP to the extreme, you could split these further, adding a new 'component' to the user_interface package with each new enhancement. We think that applying the OCP to this extreme would result in greater loss than gain; these screen components help form a cohesive picture and are so simple that opening them for modification is very low risk.
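To underline that these screen components really are just data, here is a cut-down, hypothetical re-implementation of the Target idiom in plain Java. Serenity's real Target class does considerably more (such as resolving locators for WebDriver); this sketch only shows the essential shape, a human-readable name plus a locator:

```java
// A name for reporting plus a locator for finding the element: nothing else.
final class Target {
    final String name;
    final String locator;

    private Target(String name, String locator) {
        this.name = name;
        this.locator = locator;
    }

    // Entry point for the the("...").locatedBy("...") builder style.
    static Builder the(String name) { return new Builder(name); }

    static final class Builder {
        private final String name;
        private Builder(String name) { this.name = name; }
        Target locatedBy(String locator) { return new Target(name, locator); }
    }

    @Override public String toString() { return name + " [" + locator + "]"; }
}
```

Because a Target carries no behaviour, moving one between component classes is a trivial, low-risk refactoring, which is the point made below.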
Furthermore, if an element is moved in the application such that it should be grouped with a different set of components in another class, moving each Target remains extremely low-risk when refactored through an IDE. Unlike in 'traditional' PageObjects, there are no methods in this class that depend on the Target you want to move (or on the targets that stay where they are). As a result, this change can be made with ease and with confidence that it will not cascade into a massive refactoring of where dependent methods might live.

A Place for Everything

In this example, the result is a folder structure that can be easily browsed to understand what tasks are available and what user interface components there are.

.
├── model
│   ├── ApplicationInformation.java
│   ├── TodoStatus.java
│   └── TodoStatusFilter.java
├── questions
│   ├── ApplicationDetails.java
│   ├── ClearCompletedItemsOptionAvailability.java
│   ├── CurrentFilter.java
│   ├── DisplayedItems.java
│   ├── ElementAvailability.java
│   ├── ItemsLeftCounter.java
│   ├── PlaceholderText.java
│   ├── TheItemStatus.java
│   └── TheItems.java
├── tasks
│   ├── AddATodoItem.java
│   ├── AddTodoItems.java
│   ├── ClearCompletedItems.java
│   ├── CompleteAll.java
│   ├── CompleteItem.java
│   ├── DeleteAnItem.java
│   ├── FilterItems.java
│   └── Start.java
└── user_interface
    ├── ApplicationHomePage.java
    ├── ToDoList.java
    └── TodoListItem.java

Note: Because this is such a simple project, there is only one topic (e.g. working with todos). On a larger project, rather than group by types (e.g. tasks, questions), we'd group by cohesive topics.

We take great care in choosing the names of each class to ensure they are intuitively discoverable and tell you at a glance what they do from a user's perspective. The end result of the Screenplay Pattern is ease of maintenance through readability, discoverability, consistency and structural simplicity, where there is a place for everything, making it easier to put everything in the right place.
In Closing

The thinking and pattern we've shown you is an alternative approach to automating web application tests and more. The approach has been around in one form or another since its early conception by Antony Marcano in 2007. While this model arose independently, it is where you might end up after merciless refactoring of your everyday PageObject with SOLID principles in mind.

It was later illustrated in a more mature form with JNarrate and Screenplay4J, significantly through refinements made by Andy Palmer and Antony Marcano together. These ideas were used extensively across numerous organisations, with one subsequently open-sourcing their interpretation. Further implementations followed, integrating the ideas of others including Andrew Parker, Jan Molak, Kostas Mamalis and John Ferguson Smart. Now, thanks to John, the Serenity framework supports (even favours) the approach.

Yet with all these refinements and specific implementations over time, the fundamentals of the original ideas have remained the same — except its name. The narrative of a user journey is what you see, hence its former name, The Journey Pattern. There is, however, so much more to the pattern that this simply doesn't communicate… The narrative for each scenario is a script; there is a cast of actors playing the roles; each actor has the task of performing actions to the best of their ability… In our search for a better name, it turns out that the right metaphor was always there, staring us in the face and pointing us at the perfect name. The final piece of the puzzle has now fallen into place, and our journey of continuous improvement has given us The Screenplay Pattern.

Published at DZone with permission of Antony Marcano, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/page-objects-refactored-solid-steps-to-the-screenp
http://www.roseindia.net/tutorialhelp/comment/89967
lightGallery alternatives and similar libraries

Based on the "Modals and Popups" category. Alternatively, view lightGallery alternatives based on common mentions on social networks and blogs.

- SweetAlert (9.3 | 0.0 | L4): A beautiful replacement for JavaScript's "alert"
- Magnific-Popup (8.7 | 0.0 | L2): Light and responsive lightbox script with focus on performance.
- sweetalert2 (8.5 | 9.2 | L1): A beautiful, responsive, highly customizable and accessible (WAI-ARIA) replacement for JavaScript's popup boxes. Zero dependencies.
- fancyBox (7.7 | 0.0 | L1): jQuery lightbox script for displaying images, videos and more. Touch enabled, responsive and fully customizable.
- tether (7.5 | 9.1 | L3): A positioning engine to make overlays, tooltips and dropdowns better
- X-editable (7.5 | 0.0 | L3): In-place editing with Twitter Bootstrap, jQuery UI or pure jQuery
- vex (7.0 | 0.0): A modern dialog library which is highly configurable and easy to style. #hubspot-open-source
- bootstrap-modal (6.9 | 0.0): Extends the default Bootstrap Modal class. Responsive, stackable, ajax and more.
- screenfull.js (6.8 | 1.9): Simple wrapper for cross-browser usage of the JavaScript Fullscreen API
- colorbox (6.8 | 1.2 | L1): A light-weight, customizable lightbox plugin for jQuery
- Bootbox (6.8 | 3.8 | L2): Wrappers for JavaScript alert(), confirm() and other flexible dialogs using Twitter's bootstrap framework
- lightgallery.js (6.4 | 4.0): Full featured JavaScript image & video gallery. No dependencies
- swipebox (5.1 | 1.5): A touchable jQuery lightbox
- baguetteBox.js (4.9 | 0.0 | L4): Simple and easy to use lightbox script written in pure JavaScript
- iziModal (4.7 | 0.0): Elegant, responsive, flexible and lightweight modal plugin with jQuery.
- jquery.avgrund.js (4.5 | 0.0 | L5): Avgrund is a jQuery plugin with a new modal concept for popups
- css-modal (4.3 | 0.0 | L4): A modal built with pure CSS, enhanced with JavaScript
- jBox (4.0 | 7.4 | L1): jBox is a jQuery plugin that makes it easy to create customizable tooltips, modal windows, image galleries and more.
- jquery-popbox (2.5 | 0.0): jQuery PopBox UI Element
- Modali (1.6 | 0.0): A delightful modal dialog component for React, built from the ground up to support React Hooks.
- hColumns (1.4 | 0.0): jQuery.hColumns is a jQuery plugin that looks like Mac OS X Finder's column view for hierarchical data.
- F$D€ (1.2 | 0.0): Client not paid? Add opacity to the body tag and increase it every day until their site completely fades away
- toppy (1.1 | 0.0): Overlay library for Angular 7+
- keukenhof (0.5 | 3.3): Lightweight and easy-to-use library for modals
- ModalSquared.js (0.2 | 1.1)

README

lightGallery

A customizable, modular, responsive, lightbox gallery plugin. No dependencies. Available for React.js, Angular, Vue.js, and TypeScript.

Core features

- Fully responsive.
- Modular architecture with built-in plugins.
- Highly optimized for touch devices.
- Mouse drag support for desktops.
- Double-click/double-tap to see the actual size of the image.
- Animated thumbnails.
- Social sharing.
- YouTube, Vimeo, Wistia and HTML5 video support.
- 20+ hardware-accelerated CSS3 transitions.
- Dynamic mode.
- Inline gallery.
- Full screen support.
- Zoom in/out, pinch to zoom.
- Swipe/drag up/down support to close the gallery.
- Browser history API (deep linking).
- Responsive images.
- HTML iframe support.
- Multiple instances on one page.
- Easily customizable via CSS (SCSS) and settings.
- Smart image preloading and code optimization.
- Keyboard navigation for desktop.
- SVG icons.
- Accessibility support.
- Rotate, flip images.
- And many more.

Documentation

Installation

lightGallery is available on NPM, Yarn, Bower, CDNs, and GitHub. You can use any of the following methods to download lightGallery.

NPM - NPM is a package manager for the JavaScript programming language. You can install lightgallery using the following command:

npm install lightgallery

YARN - Yarn is another popular package manager for the JavaScript programming language. If you prefer, you can use Yarn instead of NPM:

yarn add lightgallery

Bower - You can find lightGallery on the Bower package manager as well:

bower install lightgallery --save

GitHub - You can also directly download lightgallery from GitHub.

CDN - If you prefer to use a CDN, you can load files via jsdelivr, cdnjs or unpkg.

Include CSS and JavaScript files

First of all, include lightgallery.css in the <head> of the document. If you want to include any lightGallery plugin, such as thumbnails or zoom, you need to include the respective CSS files as well. Alternatively, you can include lightgallery-bundle.css, which contains lightGallery and all plugin styles, instead of separate stylesheets. If you like, you can also import SCSS files instead of CSS files from the scss folder.

<head>
    <link type="text/css" rel="stylesheet" href="css/lightgallery.css" />

    <!-- lightgallery plugins -->
    <link type="text/css" rel="stylesheet" href="css/lg-zoom.css" />
    <link type="text/css" rel="stylesheet" href="css/lg-thumbnail.css" />

    <!-- OR -->
    <link type="text/css" rel="stylesheet" href="css/lightgallery-bundle.css" />
</head>

Then include lightgallery.umd.js into your document.
If you want to include any lightGallery plugin, include it after `lightgallery.umd.js`.

```html
<body>
    ....
    <script src="js/lightgallery.umd.js"></script>

    <!-- lightgallery plugins -->
    <script src="js/plugins/lg-thumbnail.umd.js"></script>
    <script src="js/plugins/lg-zoom.umd.js"></script>
</body>
```

lightGallery supports AMD, CommonJS and ES6 modules too.

```js
import lightGallery from 'lightgallery';

// Plugins
import lgThumbnail from 'lightgallery/plugins/thumbnail';
import lgZoom from 'lightgallery/plugins/zoom';
```

### The markup

lightGallery does not force you to use any kind of markup; you can use whatever markup you want. Here you can find detailed examples of different kinds of markup. If you know the original size of the media, you can pass it via the `data-lg-size="${width}-${height}"` attribute for the initial zoom animation, but this is completely optional.

```html
<div id="lightgallery">
    <a href="img/img1.jpg" data-lg-size="...">
        <img alt=".." src="img/thumb1.jpg" />
    </a>
    <a href="img/img2.jpg" data-lg-size="...">
        <img alt=".." src="img/thumb2.jpg" />
    </a>
    ...
</div>
```

### Initialize lightGallery

Finally, you need to initialize the gallery by adding the following code.

```html
<script type="text/javascript">
    lightGallery(document.getElementById('lightgallery'), {
        plugins: [lgZoom, lgThumbnail],
        speed: 500,
        licenseKey: 'your_license_key'
        // ... other settings
    });
</script>
```

### License key

You'll receive a license key via email once you purchase a license. More info.

### Plugins

As shown above, you need to pass the plugins via settings if you want to use any lightGallery plugins. If you are including lightGallery files via script tags, use the same plugin names as follows: lgZoom, lgAutoplay, lgComment, lgFullscreen, lgHash, lgPager, lgRotate, lgShare, lgThumbnail, lgVideo, lgMediumZoom.

### Browser support

lightGallery supports all major browsers, including IE 10 and above.

### License

Commercial license: if you want to use lightGallery
https://js.libhunt.com/lightgallery-alternatives
4 thoughts on “Hosting a Custom Web Service with the M3 API Toolkit”

Interesting approach. What do you see as the advantages of this vs. the native REST web services exposed in the Grid-enabled versions of M3, or using Infor’s LWS/MWS to expose an API as a SOAP web service?

I’m honestly not all that sure. After I write a few more posts using this setup you will probably be able to answer that question better than me. I’ve never used the built-in web services so I don’t really know what their limitations are. The thing I like about this setup is that it is very user friendly for people developing Windows programs. At the end of the day the communication system is pretty much the same both ways, though. The big difference will be in programming clients and making your own Data Contracts.

Hi Thibaud, I just stumbled into this useful article as I am going to implement REST for the first time; thanks for this. But I want to comment on two of your statements regarding APIs:

“Deployment can be difficult. The toolkit needs to be installed on every computer or device that wants to communicate with M3.”

That depends on the language and the environment. If you are using a .NET language and have Smart Office installed, you can use the Lawson.M3.MI namespace; all the objects are there. If you don’t use Smart Office but a .NET language, you can simply copy the binaries to your program directory; on 64-bit Windows these are MvxSockx64.dll and MvxSockNx64.dll. When these files are added to the Smart Office folder, it is even possible to use APIs in old versions of Smart Office, i.e. 9.1.2, which has no support for API communication. No path variable and no registration are required.

“If database access is desired drivers are required and permissions will need to be granted for every client.”

Wrapping an SQL query in an M3 Web Service is a powerful method to avoid local components. If Smart Office is present on the PC, the DynamicWS DLL can be used in a .NET application to communicate over web services. But you can also create a proxy class in Visual Studio from the WSDL file, or create a fully generic solution by talking via SOAP/HTTP to the M3 Web Service server as SmartData does. In all these cases, the database connection is set up on the MWS server as an additional web service context of type “Database”, with a central JDBC driver and central user credentials. Carefully developed SQL queries are often faster than APIs and consume fewer system resources. /Heiko

Heiko, I suppose you are right when it comes to copying those files. I guess I sort of had this figured out when I deployed the WCF service: if one of those two files is missing it will throw an exception. You can see that I mention that in the publish section. At one point I had some difficulty deploying a client application and probably didn’t realize at the time that the second file was all I needed and that no registration was required. I suppose that in the midst of my difficulty I just installed the toolkit and obviously it started working.

As far as the database access goes, I never had any intention of pulling data from the system using an API call. I agree, that is way too slow and limited in capability compared to a query. Not to mention you have to rip apart super long strings depending on what program you are using. In the above project a direct reference to a database can be made to run a query. The resulting data can then be sent to the clients as objects defined by the data contracts that you’ve set up.

In this setup the idea is that you don’t have to rely on external DLL files in order for your program to run. A good example of this is if you were using a Windows tablet and you were using enterprise sideloading to deploy apps. In this case you might avoid external dependencies as a matter of convenience. Another benefit is that the more code you remove from the client applications, the faster it is to update. Suppose you have a bug in one of your transactions or you want to start adding additional information. Since most of your code resides in the web service you can update the transaction and the clients will immediately get the fix without having to redeploy the client software. This is of course provided that the contracts haven’t changed.

For me the reason for using a setup like this is to eventually be able to integrate with other software packages, including scheduling software. I can also record some additional data in a custom database to get a better analysis of how long manufacturing operations are taking, all without having to rely on clients doing it themselves.

On another note, if you are looking for customization because you need to run transactions in large quantities on a frequent basis, this option is also very attractive because clients can choose to run things asynchronously if they don’t require feedback. This forces the server to do all the work and leaves the client free to move on to the next task.

I think I have a few ideas of things I can put in my next post to show off what a data lookup looks like as well as what a client app would look like. I think I can find some time in the next week or so to do some writing. Hopefully.
https://m3ideas.org/2015/05/04/hosting-a-custom-web-service-with-the-m3-api-toolkit/
/* -*- Mode: C++ -*- */

#ifndef nsXMLNameSpaceMap_h_
#define nsXMLNameSpaceMap_h_

#include "nsVoidArray.h"

class nsIAtom;

/**
 * nsXMLNameSpaceMap contains a set of prefixes which are mapped onto
 * namespaces. It allows the set to be searched by prefix or by namespace ID.
 */
class nsXMLNameSpaceMap
{
public:
  /**
   * Allocates a new nsXMLNameSpaceMap (with new()) and initializes it with the
   * xmlns and xml namespaces.
   */
  static NS_HIDDEN_(nsXMLNameSpaceMap*) Create();

  /**
   * Add a prefix and its corresponding namespace ID to the map.
   * Passing a null |aPrefix| corresponds to the default namespace, which may
   * be set to something other than kNameSpaceID_None.
   */
  NS_HIDDEN_(nsresult) AddPrefix(nsIAtom *aPrefix, PRInt32 aNameSpaceID);

  /**
   * Add a prefix and a namespace URI to the map. The URI will be converted
   * to its corresponding namespace ID.
   */
  NS_HIDDEN_(nsresult) AddPrefix(nsIAtom *aPrefix, nsString &aURI);

  /* Remove a prefix from the map. */
  NS_HIDDEN_(void) RemovePrefix(nsIAtom *aPrefix);

  /*
   * Returns the namespace ID for the given prefix, if it is in the map.
   * If |aPrefix| is null and is not in the map, then a null namespace
   * (kNameSpaceID_None) is returned. If |aPrefix| is non-null and is not in
   * the map, then kNameSpaceID_Unknown is returned.
   */
  NS_HIDDEN_(PRInt32) FindNameSpaceID(nsIAtom *aPrefix) const;

  /**
   * If the given namespace ID is in the map, then the first prefix which
   * maps to that namespace is returned. Otherwise, null is returned.
   */
  NS_HIDDEN_(nsIAtom*) FindPrefix(PRInt32 aNameSpaceID) const;

  /* Removes all prefix mappings. */
  NS_HIDDEN_(void) Clear();

  ~nsXMLNameSpaceMap() { Clear(); }

private:
  nsXMLNameSpaceMap() NS_HIDDEN;  // use Create() to create new instances

  nsVoidArray mNameSpaces;
};

#endif
http://xulrunner.sourcearchive.com/documentation/1.9.0.1/nsXMLNameSpaceMap_8h-source.html
Self-containment

Modern headers should be self-contained, which means that a program that needs to use the facilities defined by header.h can include that header (#include "header.h") and not worry about whether other headers need to be included first.

Recommendation: Header files should be self-contained.

Historical rules

Historically, this has been a mildly contentious subject. Once upon another millennium, the AT&T Indian Hill C Style and Coding Standards stated that header files should not be nested. This is the antithesis of self-containment.

Modern rules

However, since then, opinion has tended in the opposite direction. If a source file needs to use the facilities declared by a header header.h, the programmer should be able to write:

    #include "header.h"

and (subject only to having the correct search paths set on the command line) any necessary prerequisite headers will be included by header.h without needing any further headers added to the source file. This provides better modularity for the source code. It also protects the source from the "guess why this header was added" conundrum that arises after the code has been modified and hacked for a decade or two.

The NASA Goddard Space Flight Center (GSFC) coding standards for C are among the more modern standards, though now a little hard to track down. They state that headers should be self-contained, and they also provide a simple way to ensure that headers are self-contained: the implementation file for the header should include the header as the first header. If the header is not self-contained, that code will not compile.

The rationale given by GSFC includes:

§2.1.1 Header include rationale

This standard requires a unit's header to contain #include statements for all other headers required by the unit header. Placing #include for the unit header first in the unit body allows the compiler to verify that the header contains all required #include statements.
An alternate design, not permitted by this standard, allows no #include statements in headers; all #includes are done in the body files. Unit header files then must contain #ifdef statements that check that the required headers are included in the proper order.

One advantage of the alternate design is that the #include list in the body file is exactly the dependency list needed in a makefile, and this list is checked by the compiler. With the standard design, a tool must be used to generate the dependency list. However, all of the branch recommended development environments provide such a tool.

A major disadvantage of the alternate design is that if a unit's required header list changes, each file that uses that unit must be edited to update the #include statement list. Also, the required header list for a compiler library unit may be different on different targets. Another disadvantage of the alternate design is that compiler library header files, and other third-party files, must be modified to add the required #ifdef statements.

Thus, self-containment means that:

- If a header header.h needs a new nested header extra.h, you do not have to check every source file that uses header.h to see whether you need to add extra.h.
- If a header header.h no longer needs to include a specific header notneeded.h, you do not have to check every source file that uses header.h to see whether you can safely remove notneeded.h (but see Include what you use).
- You do not have to establish the correct sequence for including the prerequisite headers (which requires a topological sort to do the job properly).

Checking self-containment

See Linking against a static library for a script chkhdr that can be used to test idempotence and self-containment of a header file.
https://essential-c.programming-books.io/self-containment-1a09d20d474b455aab220926f14115d0
The little endl that couldn't

This is a pre-publication draft of the column I wrote for the November-December 1995 issue of the C++ Report. Pre-publication means this is what I sent to the Report, but it may not be exactly the same as what appeared in print, because the Report and I often make small changes after I submit the final draft for a column. Comments? Feel free to send me mail:

Waste not, want not. It's true. Look for ways to use what looks like scrap, and you never know what will turn up.

When I'm developing a program, I turn it over to friends when the program has achieved a state that is something more than a prototype, but less than a beta release. "Tell me everything that's wrong with it," I'll ask, and, if I'm lucky, they'll oblige. It's not always great for my ego, but it certainly improves my programs. These days I spend more time writing than programming, but I've come to discover that the two tasks have much in common. As a result, when I'm working on a book and I think I'm close to being finished, I turn it over to reviewers and ask them to tell me everything that's wrong with it. I then revise the manuscript, taking the reviewers' comments into account.

As I write this in August, I'm in the process of finishing work on More Effective C++. This column is due for publication around November, so the book may be available by the time you read this. (If it's not, it probably means my release schedule slipped. I told you the tasks were similar.) Last month I got back the reviewers' comments on my draft manuscript, and one of the changes I've made is the elimination of one Item I'd planned to include in the book. Even though the Item isn't suitable for More Effective C++, it's instructive to give it a look anyway. What follows is the Item that won't be, a More Effective C++ outtake. After the Item, I'll explain why I decided the material isn't ready for prime time.
Avoid gratuitous use of endl.

Sometimes the most insignificant-looking things can make a big difference. For example, some people use the manipulator endl as a shorthand for inserting a newline into an output stream. That is, instead of writing this,

    cout << "The value of x is " << x << '\n';

they write this:

    cout << "The value of x is " << x << endl;

Perhaps the reason they do this is they think the two expressions are equivalent, and they use the endl version because it's easier to type. Well, it is easier to type, but endl is not equivalent to a newline, and the difference between the two should be enough to convince anybody that there's more to writing programs than finding the easiest sequence of characters to type.

Inserting endl into an output stream does two things. First, it inserts a newline. That's hardly news. Second, it flushes the stream's buffer, thus forcing any pending output to be immediately written. From an efficiency point of view, this latter action is a cause for concern. There's a reason why streams are buffered in the first place: writing a whole bunch of characters to an output device (or reading a whole bunch of characters from an input device) is, on a per-character basis, much less expensive than is writing (or reading) the same number of characters one by one. Furthermore, many I/O operations can proceed perfectly acceptably even if the actual physical operations are buffered. For example, most programs don't much care exactly when the data destined for an output file appears in the file as long as it's there by the time the program exits. Buffering, therefore, is a way for the I/O system to improve the efficiency of a program without changing its behavior. That's why the I/O subsystems of virtually all programming languages employ buffering.

Of course, sometimes you want to ensure that the data you insert into a stream is written right away. For example, error messages should typically not be buffered; they should appear immediately.
Here's something you might find in an ANSI C compiler:

    cerr << "Sorry, unlike K&R C, in ANSI C the "
         << "numeral 8 is not an octal digit.\n";

When you write to cerr, however, there's still no need to use endl, because the designers of the iostream library, realizing that error messages shouldn't be buffered, specified that cerr not buffer its output. So it doesn't.

If you have a stream object of your own that you'd like to go unbuffered, you can set it up so there's no need to manually flush the buffer each time you write to the stream. Configuring such a stream is a little tricky, however. You do it via the following far-from-intuitive machinations, in which I assume you wish to set up an unbuffered ofstream object, errorFile, that corresponds to the output file errors.txt:

    // create an ofstream not associated with any file
    ofstream errorFile;

    // disable buffering for the ofstream
    errorFile.setbuf(0, 0);

    // associate the ofstream with "errors.txt"
    errorFile.open("errors.txt");

From here on out, you can use errorFile just like any other ofstream object, and you can revel unreservedly in its unbuffered glory.

As we've seen, you don't need to use endl with cerr, but perhaps you need it with cout. After all, a read from cin is typically preceded by a prompt written to cout, and it would be quite the pity if the read operation were attempted before the prompt had been displayed, wouldn't it?

Not to worry. Those infamous iostream designers have beaten you to the punch once again. cin and cout are tied together such that any attempt to read from cin will automatically cause buffered output destined for cout to be flushed. Prompts written to cout will therefore always appear before reads from cin are initiated. Even if cin and cout didn't coordinate their activities, using endl to terminate a prompt would produce ugly output, because the newline component of endl would force the user to type the desired input on the line after the prompt.
It's usually much more professional-looking to have them type the requested input on the same line as the prompt, like this:

    string name;
    cout << "Please enter your name: ";
    cin >> name;

In this example, it's unfortunate that name has to be constructed before it can read the value it's supposed to initially hold, but we're required to have an existing string object before we can call operator>>, so we're stuck with this small inefficiency. C'est la C++.

You, too, can tie an input stream to an output stream, thus guaranteeing that the output stream is flushed before input is read. You do it using a function called tie. You invoke tie on an input stream and pass it a pointer to the output stream to which you want to tie it:

    ostream promptstream;
    istream inputstream;

    // always flush promptstream before reading from inputstream
    inputstream.tie(&promptstream);

Okay, you don't need endl to flush messages to error streams, you don't need it for prompts, and you don't want to use it as a simple shorthand for a newline. In fact, the only time you do want to use it is if you're writing to a buffered stream and you really, truly want to both insert a newline and force an immediate flushing of the stream's buffer. Such times are rare, and your uses of endl should be equally rare.

And yet, it really is a pain to have to type '\n' each time you want to write a newline. If endl's out, what's in? One thing you might consider is a little judicious pollution of your global namespace. You might define a suitable constant character as follows:

    const char NL = '\n';

Now you can enter a newline into an output stream as easily as you like, but you won't have to cripple your program's carefully crafted I/O buffering mechanism to do it:

    cout << "The value of x is " << x << NL;

Hey! This is even shorter than endl: only two characters!
And if the pollution of the global scope offends you (it should), you can even place your newline abbreviation in a personal namespace that you only import when needed:

    namespace PersonalStuff {
        const char NL = '\n';
        ...
    };

    void printValue(int x)
    {
        using PersonalStuff::NL;
        cout << "The value of x is " << x << NL;
    }

As you can see, there's nothing wrong with shorthands, but it is important to make sure that the shorthands you use mean what you want them to mean. endl usually doesn't, so you shouldn't use it unless you're sure you really want both a newline and a flush.

* * * * *

Okay, that's the Item as I'd planned to have it in the book. Why did I decide to exclude it? This particular Item was part of a chapter on efficiency, and some reviewers thought the efficiency gain by following the advice above wasn't worth the amount of text I devoted to the topic. That's probably true, but unnecessary use of endl happens to be one of my pet peeves, so I probably would have kept the Item anyway. After all, it's my book, and if you can't say what you want in your own book, when can you?

One reviewer (Eric Nagler) brought up a more important problem. He wrote:

    The user is going to have the buffer flushed, like it or not, because of
    the fact that the ios::unitbuf bit is turned on by default. So using '\n'
    vs. endl is a moot point. If the buffer really should not be flushed,
    then this bit needs to be turned off.

Had I been living under a rock? I'd never heard of ios::unitbuf. So I turned to Steve Teale's book on IOstreams [cite it here]. There I found that if unitbuf is set, an ostream's buffer is automatically flushed after each write operation, just as Eric had written. (This is illustrative of why I am firmly committed to having reviewers.) However, Teale's book also said that the default setting for unitbuf is implementation-dependent, so I was hoping I could still squeak the endl Item in.
After all, if many implementations defaulted unitbuf to 0, my advice would still be valid much of the time, and Teale's book implied that it was common to default unitbuf to 0. Just to be on the safe side, I looked unitbuf up in the April 1995 draft ANSI/ISO C++ standard document [cite it here]. I was unable to determine the default setting for the bit, but while wandering around the iostream portion of the standard, I noticed something that took me by surprise: cout is supposed to be unbuffered. You may not find this to be a great shock, but traditionally cout has been a buffered ostream. For unbuffered output, the traditional stream of choice has been cerr. It seems the ANSI/ISO committee decided to part with tradition. My guess is that cout is still buffered under many (if not most) implementations, but the standard will be the standard, so in the future, cout will not be buffered.

Yet most of the places where I observe endl being used gratuitously, the destination stream is cout. Under those conditions, the call to endl is still unnecessary (the stream is already being flushed after each write), but it's no longer defeating the buffering system as I said it was in my would-be Item above. Once implementations come into conformance with the standard, then, my explanation would have been misleading. I hope my book will be useful long after implementations have started to conform to the standard, so I decided to abandon the Item now instead of regretting its inclusion later. As long-term advice, it just couldn't cut the mustard. For a shorter-term publication like a magazine column, however, it suffices just fine. Waste not, want not :-)

References

Steve Teale, C++ IOStreams Handbook, Addison-Wesley.

Accredited Standards Committee X3J16, Working Paper for Draft Proposed International Standard for Information Systems Programming Language C++, April 1995.
http://docplayer.net/45596-The-little-endl-that-couldn-t.html
The Java FAQ Daily Tips is a newsletter that is only sent to those who have specifically subscribed to it.

******************************************************************
*
*    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
*    > The Java FAQ Daily Tips, week edition <
*    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
*
*    Issue No: 7                     18 September 2000
*
******************************************************************

Table of Contents

> How can I dial a phone number with Java?
> About serialization...
> Again about the difference between AWT and Swing
> Can a Java application be run off of a CD...?
> How to cut, copy & paste?
> How does my applet read images from a jar file?
> Essential difference between an abstract class and an interface.

******************************************************************

Tip 1

How can I dial a phone number with a modem in a Java app? Is there a way without a System.exec() call and without any M$ classes?

A: You could use javax.comm to do it manually via the serial port and the good old AT command set on the modem. Alternatively, you might look at JTAPI, but that might have its own problems and a lot of overkill.

******************************************************************

Tip 2

Question: If I have a class that implements the Serializable interface, but it has member variables that didn't work.

Answer: Do you really need to serialize those members of your class that.

******************************************************************

Tip 3

I have a question: What are the architectural differences between Swing and AWT?

by Odd Vinje odvinjee@online.no

******************************************************************

Extra Tip from our sponsor
When you want to bring files home from work or from a friend, simply copy them to your free 100 MB Internet X: drive and forget about floppy/zip disks and other stuff! It is better to keep saved money in your pockets than diskettes! Just add it to your computer as an additional drive (easy auto plugin!) and use it with your usual file explorer or any other kind of file management browser. Do not leave it for tomorrow! Do it here right now!

*******************************************************************

Tip 4

Can a Java application be run off of a CD without installing anything (i.e. runtime, etc.) on the target computer? I would like to hand my application out as a demo, but I want to make it easy to view.

Answer 1: The JRE was made so that it didn't need to be "installed". What I did in one case was to simply put the JRE into a jre folder in the same directory as my application, then invoke it from that directory using: jreinj.

by Dale King KingDo@TCE.com

Answer 2: You could try a Java-to-native compiler.

******************************************************************

Tip 5

I've got a (simple) menu on a new application and am trying to put in the works behind the cut, copy & paste menu options. Does anyone know how I can do this, what the code is, or can you point me in the right direction?

A: Look at the java.awt.datatransfer package. It contains much of the tools necessary to implement cut, copy, and paste.

******************************************************************

Tip 6

A: The following is from:

    import java.applet.*;
    import java.awt.*;
    import java.io.*;

    public class ResourceDemoApplet extends Applet {
        Image m_image;
        public void init() { try {); } }

by David Risner drisner@eskimos.com

******************************************************************

Tip 7

What is the essential difference between an abstract class and an interface? What dictates the choice of one over the other?
A: You can only extend one class (abstract or not), whereas you can implement any number of interfaces. Interfaces are Java's way to support multiple inheritance.

Please recommend us to your friends and colleagues!
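Tip 7's distinction can be sketched in a few lines (the class and interface names here are illustrative, not from the newsletter):

```java
// A class may extend only one (possibly abstract) class...
abstract class Vehicle {
    abstract int wheels();              // subclasses must supply this
    String describe() {                 // ...and may inherit shared code
        return "vehicle with " + wheels() + " wheels";
    }
}

// ...but it may implement any number of interfaces.
interface Electric { int rangeKm(); }
interface Towable  {}

class Car extends Vehicle implements Electric, Towable {
    public int wheels()  { return 4; }
    public int rangeKm() { return 300; }
}
```

Calling new Car().describe() yields "vehicle with 4 wheels", while the same object can still be treated as an Electric or a Towable.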
http://javafaq.nu/java-article326.html
SYNOPSIS

#include <pthread.h>

int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_tryrdlock(pthread_rwlock_t *rwlock);

DESCRIPTION

The pthread_rwlock_rdlock() function applies a read lock to the read-write lock referenced by rwlock. The calling thread acquires the read lock when a writer does not hold the lock and there are no writers blocked on the lock. When a writer holds the lock, the calling thread does not acquire the read lock; when the read lock is not acquired, the calling thread blocks until the lock can be acquired. The calling thread may detect a deadlock (and return EDEADLK) if at the time the call is made it holds a write lock.

A thread may hold multiple concurrent read locks on the specified rwlock argument (that is, it can successfully call pthread_rwlock_rdlock() multiple times); the thread must then perform a matching number of unlock calls. No more than sizeof(unsigned long) simultaneous read locks can be applied to a read-write lock.

The pthread_rwlock_tryrdlock() function applies a read lock as in pthread_rwlock_rdlock(), except that it fails if the equivalent pthread_rwlock_rdlock() call would have blocked the calling thread.

When a thread waiting for a read-write lock for reading receives a signal, that thread, upon return from the signal handler, resumes waiting for the read-write lock for reading as if no interruption occurred.

PARAMETERS

rwlock
    Points to the read-write lock to be locked for reading.

RETURN VALUES

On success, the pthread_rwlock_rdlock() and pthread_rwlock_tryrdlock() functions return zero. On failure, they return an error number:

- EAGAIN
    No read lock was acquired because the maximum number of read locks for rwlock was exceeded.
- EBUSY
    A writer holds the lock and thus, the read-write lock could not be acquired.
- EDEADLK
    The current thread already owns the read-write lock for writing.
- EINVAL
    The rwlock argument points to a value that is not an initialized read-write lock object.

CONFORMANCE

UNIX 03.

MULTITHREAD SAFETY LEVEL

MT-Safe.

PORTING ISSUES

Currently, sizeof(unsigned long) is 32 bits for all systems on which the NuTCRACKER Platform runs.
AVAILABILITY

PTC MKS Toolkit for Professional Developers
PTC MKS Toolkit for Professional Developers 64-Bit Edition
PTC MKS Toolkit for Enterprise Developers
PTC MKS Toolkit for Enterprise Developers 64-Bit Edition

SEE ALSO

- Functions: pthread_rwlock_destroy(), pthread_rwlock_timedrdlock(), pthread_rwlock_timedwrlock(), pthread_rwlock_trywrlock(), pthread_rwlock_unlock(), pthread_rwlock_wrlock()

PTC MKS Toolkit 10.3 Documentation Build 39.
https://www.mkssoftware.com/docs/man3/pthread_rwlock_rdlock.3.asp
I just finished another course at school that involved studying an open-source application. I've gotten tired of studying Java and using Java tools, so I decided to try studying a .net project. Luckily I stumbled upon NDepend, which is the most fully featured tool for studying .net code that I've seen to date.

In the past, when I needed to do some kind of code analysis (usually generating DSMs and maybe taking a brief look at some metrics), Lutz Roeder's (I mean red-gate's) Reflector and its various add-ins have been sufficient for me, and they're free, not to mention. But I found these tools to be somewhat limiting when doing large amounts of analysis (the kind I don't typically get time to do at work!).

I've really enjoyed working on these code analysis projects in the past, but this was my favorite one yet. It was more enjoyable to me because I was working on a .net project, making it easier to see how I can directly apply this to my own work (while working on the Java projects does a good job of teaching the concepts involved, it is hard to apply them to my work without the tools). But it was also enjoyable because NDepend was such a joy to use (and I think I've just barely scratched the surface of what it can do).

NDepend can do all the things you'd expect from a code analysis tool. It does a good job creating DSMs, has very pretty visual displays of said DSMs, and provides quick access to code metrics. I especially like the mouse-over highlighting in the visual dependency graphs, which helps you tell at a moment's notice what depends on a class, what the class depends on, whether any cyclic dependencies are present, and a ballpark idea of how many dependencies exist. I haven't gotten into any of the .net or Visual Studio specific features yet (but I am sure they will make life much easier). My favorite feature (by far) in working on this project was its "Code Query Language" (CQL).
CQL is a SQL-like dialect that lets you write queries directly against your codebase. This allows you to get very quick answers to questions like "What classes in my assembly depend on FooClass?" or "How many classes have more than <insert arbitrary threshold here> Efferent Couplings?". So if you need to find out some information about your code, instead of compiling metrics and poring over the results, you only need to ask NDepend the question directly, and see the answer as fast as you can type. After using this feature for a month or two, I don't know how I could go back to living without it! An example query would look something like this:

// <Name>Most Complex Methods used by Type X</Name>
SELECT TOP 10 METHODS
WHERE IsUsedBy "AssemblyX.NamespaceY.TypeZ"
ORDER BY CyclomaticComplexity DESC

In a larger project, you can use the FROM clause to limit what you bring back to methods from a particular assembly or namespace, like so:

// <Name>Most Complex Methods from Assembly Q used by Type X</Name>
SELECT TOP 10 METHODS
FROM ASSEMBLIES "AssemblyQ"
WHERE IsUsedBy "AssemblyX.NamespaceY.TypeZ"
ORDER BY CyclomaticComplexity DESC

Another cool thing about CQL is the ability to add constraints. So you could set it up to warn you whenever you run an analysis if you had a method with over 200 lines of code, or methods with excessive complexity and an insufficient number of comments. Here is an example of a constraint that ships with NDepend, meant to warn the user if any fields break encapsulation.

// <Name>Fields should be declared as private</Name>
WARN IF Count > 0 IN SELECT TOP 10 FIELDS
WHERE !IsPrivate AND
// These conditions filter cases where fields doesn't represent state that should be encapsulated.
!IsInFrameworkAssembly AND
!IsGeneratedByCompiler AND
!IsSpecialName AND
!IsInitOnly AND
!IsLiteral AND
!IsEnumValue

As I think these few examples show, CQL is very intuitive, and I really recommend checking it out if you get a chance.
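Tying the constraint idea back to the 200-line threshold mentioned above, a minimal home-grown constraint could look like this (NbLinesOfCode is one of NDepend's built-in metrics; the threshold itself is arbitrary):

```
// <Name>Methods with over 200 lines of code</Name>
WARN IF Count > 0 IN SELECT METHODS
WHERE NbLinesOfCode > 200
```

It runs like any other query; when the analysis executes, a non-empty result set raises the warning.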
Even being quite the SQL junkie myself, I found it hard to find anything to complain about. I especially liked the visual display of the query results. Basically, in the NDepend window there is a rectangle filled with unevenly sized blocks. Each block represents a field or method (depending on what you are looking at), and they form larger blocks representing types. The blocks can be sized based on different attributes, like number of lines of code or number of efferent couplings. Below is a screenshot (from NDepend's website) showing what this looks like:

When all is said and done, I've really come to love, and even depend on, NDepend. I think it will be an indispensable part of my toolkit. What is scary about this is that I haven't even gotten started yet. I haven't used the build comparison feature that lets you watch how your assembly evolves over time (in the past, when I've used the Java tools, I needed to create a separate project for each version and compare manually, so I'm sure I will love this feature!). Another thing that I think is really cool is the fact that you can integrate the NDepend analysis with your build process, so you can be warned of violations of your established constraints in real time, not just whenever you get around to running another analysis. You can even embed the constraints in your source code! When I get into all these features I'll be sure to share.

Would be nice if that could produce some nice Mandelbrot Set Fractal Art 🙂

You just need to write the code to do it 😉
http://blogs.lessthandot.com/index.php/desktopdev/mstech/fun-with-ndepend/
Hello, I need some help finishing this shopping cart Java program. Here are the directions:

Project description: Write a Java program to simulate an online shopping cart. An online shopping cart is a collection of items that a shopper uses to collect things for purchase. A shopper can add items to the cart, remove them, empty the cart, view the items in the cart, and end shopping and proceed to checkout. Using the Java ArrayList class, you will write a program to support these functions. Each item added to the cart will be represented with the CartItem class (see attached .java files). When your program begins, you will display a menu of actions the shopper can perform:

SHOPPING CART OPTIONS
1 add an item to your cart
2 remove an item from your cart
3 view the items in your cart
4 end shopping and go to checkout
5 empty your cart
6 exit the program

Your program will allow the shopper to add and remove items to the shopping cart. The program should continue as long as the shopper wants to keep going. The shopper can exit by choosing option 6, and in this case the shopper will exit without making a purchase. The shopper can also exit by selecting option 4 and going to checkout. In this option, the program will display the amount due by adding up the cost of all the items in the cart. Use the NumberFormat class to format the amount as currency.

Code Shopping:

import java.util.ArrayList;
import java.util.Scanner;

public class Shopping {

    /**
     * In this program you will replicate an online shopping
     * cart. You will use the ArrayList class to hold the
     * items in your shopping cart.
     * You will use the CartItem class to represent items in
     * your shopping cart.
     * In this driver program you will do the following:
     * Create the shopping cart object
     * Offer a menu of options:
     * 1 add an item to your cart
     * 2 remove an item from your cart
     * 3 view the items in your cart
     * 4 end shopping and go to checkout
     * 5 empty your cart
     * 6 exit the program
     * Use the Scanner class to collect input
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub
        ArrayList<CartItem> shoppingCart = new ArrayList<CartItem>();
        Scanner scan = new Scanner(System.in);
        ArrayList<Integer> intList = new ArrayList<Integer>();
        boolean keepGoing = true;
        int choice = 0;
        int input = 0;
        int index = 0;
        int total = 0;
        Integer item;

        while (keepGoing) {
            System.out.println("\nMenu - Managing a List");
            System.out.println("1 Add an item to your cart");
            System.out.println("2 Remove an item from your cart");
            System.out.println("3 View the items in your cart");
            System.out.println("4 Exit and add up the total");
            System.out.println("5 Empty your cart");
            System.out.println("6 Exit");
            System.out.println("Select a menu option");
            choice = scan.nextInt();

            if (choice < 1 || choice > 6) {
                System.out.println("Enter a value between 1 and 6:");
            } else {
                switch (choice) {
                case 1: // add an integer
                    System.out.println("Enter an item:");
                    input = scan.nextInt();
                    item = new Integer(input);
                    intList.add(item);
                    //intList.add(input);
                    break;
                case 2: // remove from the list
                    System.out.println("Enter an item to remove:");
                    input = scan.nextInt();
                    item = new Integer(input);
                    if (intList.contains(item)) {
                        intList.remove(item);
                        System.out.println(item + " has been removed.");
                    } else {
                        System.out.println(item + " was not found in your shopping cart.");
                    }
                    break;
                case 3: // view the items in your cart
                    System.out.println(intList);
                    break;
                case 4: // exit and add up the total
                    for (int i = 0; i < intList.size(); i++) {
                        item = intList.get(i);
                        total = total + item.intValue();
                    }
                    System.out.println("Total is " + total);
                    System.out.println("Goodbye");
                    keepGoing = false;
                    break;
                case 5: // empty the list
                    intList.clear();
                    break;
                case 6: // exit
                    keepGoing = false;
                    System.out.println("Goodbye");
                    break;
                }
            }
        }
    }
}

Code CartItem:

public class CartItem {

    private String product;
    private int quantity;
    private double price;

    // constructor
    public CartItem() {
        product = "";
        quantity = 0;
        price = 0.0;
    }

    public String getProduct() {
        return product;
    }

    public double getPrice() {
        return price;
    }

    public int getQuantity() {
        return quantity;
    }

    // constructor with parameters
    public CartItem(String inProduct, int inQuant, double inPrice) {
        product = new String(inProduct);
        quantity = inQuant;
        price = inPrice;
    }

    // getter/setter public methods for each instance data
    public boolean equals(CartItem item) {
        // write the code for the equals method
        //return true;
        boolean result = false;
        if (this.product.equalsIgnoreCase(item.getProduct()) && this.price == item.getPrice())
            result = true;
        else
            result = false;
        return result;
    }

    public String toString() {
        // write code for toString method
        String result = "";
        return result;
    }
}

I believe that most of the work needs to be done in the CartItem file. I need to write getter/setter public methods for each instance data, write code for the toString method, and also the constructor with parameters. Also, at line 58 and line 66, the code is trying to read an integer using the nextInt() method. But I need to have the code read a string. Please help me change it to read a string. Thanks!
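One possible way to fill in the missing pieces of CartItem: the setters and a toString() that uses NumberFormat, as the assignment asks (the exact output format shown here is just one choice, and the sample values are made up). To read product names rather than integers, the scan.nextInt() calls in cases 1 and 2 can be replaced with scan.next(), or scan.nextLine() if names may contain spaces.

```java
import java.text.NumberFormat;
import java.util.Locale;

class CartItem {
    private String product;
    private int quantity;
    private double price;

    public CartItem(String inProduct, int inQuant, double inPrice) {
        product = inProduct;
        quantity = inQuant;
        price = inPrice;
    }

    // getters for each instance variable
    public String getProduct()  { return product; }
    public int getQuantity()    { return quantity; }
    public double getPrice()    { return price; }

    // setters for each instance variable
    public void setProduct(String p) { product = p; }
    public void setQuantity(int q)   { quantity = q; }
    public void setPrice(double p)   { price = p; }

    // e.g. "Milk x2 @ $1.50 each" -- currency formatting via NumberFormat
    public String toString() {
        NumberFormat money = NumberFormat.getCurrencyInstance(Locale.US);
        return product + " x" + quantity + " @ " + money.format(price) + " each";
    }
}
```

The checkout total can then be formatted the same way: sum getPrice() * getQuantity() over the cart and pass the result through the same NumberFormat instance.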
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/34681-write-java-program-simulate-online-shopping-cart-printingthethread.html
from __future__ import print_function
import sys
import os
import os.path
import struct

OUTDIR = 'extract'

try:
    os.mkdir(OUTDIR)
except:
    pass

f = open(sys.argv[1], 'rb')
assert f.read(4) == b'\xA5UPG'
f.seek(32)
while True:
    assert f.read(4)[:2] == b'GW'
    file_size, section_size = struct.unpack('<LL', f.read(8))
    file_name = f.read(20).strip(b'\x00')[1:].decode('ascii')
    print(file_name, file_size)
    open(os.path.join(OUTDIR, file_name), 'wb').write(f.read(file_size))
    if section_size != 0xffffffff:
        f.seek(section_size - file_size, os.SEEK_CUR)
    else:
        break

Small update: it seems GW Instek has shut the SSH server down in the most recent firmware updates. I guess they are reading EEVblog!

Quote from: nctnico on November 24, 2017, 11:54:46 pm
    Small update: it seems GW Instek has shut the SSH server down in the most recent firmware updates. I guess they are reading EEVblog!

Imagine how many they'd sell if it were hackable...

Besides that, I think it might even be possible to create a firmware update package which enables the SSH login again.

3. Updated the License encoding rule, though not very useful.

Which HTML file generator? Can't find a link to anything here.

Quote from: Distelzombie on April 09, 2018, 08:05:39 am
    Which HTML file generator? Can't find a link to anything here.

It's in reply #3.

Very cool. How does a hacked GDS-1054B @ 100MHz compare to the hacked Rigol 1054 or the unhacked Siglent SDS1104X-E? Price seems similar.

Great, I can get the GDS-1054B with deferred payment - meaning it's a good option against the Rigol. Couldn't find that for the Siglent.

Edit: Damn... the Instek comes with only 70MHz probes. Why does it always come back to the Rigol? Is the hack "real"? Like, a real increase in bandwidth etc.? What else is software upgradeable?

However, the GW Instek doesn't have bugs, and it has features like free-form math, signal filtering, data logging, etc. which are very useful when developing circuits and/or hunting for rare events.
For 100MHz, 250MS/s is more than enough, and high waveforms/s is not really important unless you get into triple digits. Even very high-end oscilloscopes don't have high waveforms/s. The same goes for the history buffer: you can always turn on segmented recording and get exactly the same. The GW Instek can also do statistical analysis on the recorded segments.

rfLoop, you compared the Siglent to the unhacked version? Doesn't it get more memory and stuff, like the Rigol? I really wish I could get the Siglent somewhere, but that is impossible without hire-purchase/installment payment - what is even the correct term? (german)

Yes, 1GSa/s for every channel, and they did not lie. Take Ch 1 alone... max 1GSa/s, and then take Ch 2 alone... again max 1GSa/s, and so on. All channels have max 1GSa/s. Fact is - and this is a true fact - it has two ADC chips. Rigol DS1000Z series? All because it is designed for the hack as a marketing trick. The Siglent has it all out of the factory box, and a lot more. It beats this Rigol wonder box hands down in every single corner, and then also gives a lot more powerful tools, with performance the Rigol Z-box can only dream of. They are like night and day if you compare performance as a real tool. Just forget this Rigol 1kZ. The only feature there is that it is a bit cheaper.
https://www.eevblog.com/forum/testgear/possible-gw-instek-gds-1000b-hack/msg1253146/
The baud rate detection is performed only if no previous record of the baud rate is stored in RAM. Four copies are stored in the 87C52 internal RAM, at locations 0x78, 0x79, 0x7A and 0x7B. If all 4 of these agree, baud rate detection is skipped.

To detect the baud rate, PAULMON2 waits for you to press Enter, where your PC should send a carriage return character (ASCII code 13). Elapsed times between bit transitions are measured, and some sanity tests are performed to verify that the byte sampled was likely to be 13. A brief silent time is also required after the byte, except that ASCII code 10 (newline) is allowed. These sanity checks make PAULMON2's automatic baud rate detection very resilient to misdetection of the baud rate if the wrong key is pressed. However, in some rare cases, these requirements may not be met and it will continue waiting for what appears to be a valid Enter keypress. In this case, the TX LED will flash brightly shortly after the processor is reset, and then remain dark.

If your PC is set to a similar baud rate, many characters of "garbage" may appear on the screen. A single character when the board is powered on can be due to the voltage change as the power comes up, and not necessarily due to actual data transmission at the wrong baud rate. However, if the incorrect baud rate is significantly different from your PC's setting, the PC may ignore all the bytes because they will likely not have valid stop bits. The single bright flash on the TX LED after reset, with no activity on the RX LED, is your best indication that the board has "learned" an incorrect baud rate. The quick solution is to simply turn off power to the board for several seconds. Usually 5 seconds is long enough for the RAM to "forget". It is ok to leave the serial cable connected when the board is without power.

The most common cause of incorrect baud rates is a Microsoft Windows "plug and play" attempt to initialize a modem.
If the board is connected while the PC reboots, and Windows attempts to send commands to a modem, those commands may contain the carriage return character, which triggers the board to learn the baud rate Windows used for the modem.

Most terminal emulators can be configured to send carriage return (CR, ASCII 13), newline (NL, ASCII 10), or both (CR/NL, both bytes). Most programs default to CR/NL. Using only CR is ok. If your terminal emulator sends only NL, you should change the setting to transmit CR or CR/NL.

At 115200 baud, PAULMON2's bit transition timing measurement lacks the time resolution to distinguish CR from many other characters, including NL. However, the startup code was carefully designed and tested to "fail" with the correct baud rate in this high speed case. At lower speeds, it continues to wait for the correct CR character before making the baud rate calculation.

This problem is usually caused by a faulty serial cable, where the ground (pin 5) is not connected. Often the ground of the power supply for the 8051 board is at a similar potential as your computer's ground, which allows only intermittently correct communication. Check your serial cable, and also the quality of the connections on the 9 pin connector on your board. Also, check that your power supply really is DC voltage (not AC), that its connections are reliable, and that it is at the same ground potential as your computer.

All assembled boards shipped by PJRC are fully tested and this chip is known to be good. All unassembled kits are provided with a chip that was pre-programmed and verified. It is possible, but very unlikely, that the chip from an unassembled kit is bad. If your 87C52 chip was not purchased from PJRC, the 87C52 may not be correctly programmed with the PAULMON2 code. PJRC provides all of the code for free, but absolutely no technical support is available for using third party equipment to program your own 87C52 chip.
Please contact the manufacturer of your EPROM programmer for technical support if your 87C52 was not correctly programmed. You can order a replacement pre-programmed 87C52 chip from PJRC. These chips are verified after programming, and all batches are qualified by sampling several chips in assembled boards.
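The timing-based detection described above can be illustrated with a small sketch. This is not PAULMON2's actual 8051 code; the function and rate table are illustrative. The idea is the same: measure the shortest interval between serial line transitions while the host sends CR (ASCII 13), then pick the nearest standard rate, since one bit time is simply 1/baud.

```c
/* Illustrative only: given the shortest measured interval between
 * serial line transitions (in microseconds), pick the closest
 * standard baud rate.  At 9600 baud one bit lasts ~104.17 us; at
 * 115200 baud, ~8.68 us. */
static const long RATES[] = { 1200, 2400, 4800, 9600, 19200,
                              38400, 57600, 115200 };

long baud_from_bit_time(double bit_time_us)
{
    long best = RATES[0];
    double best_err = 1e30;
    int n = (int)(sizeof(RATES) / sizeof(RATES[0]));
    for (int i = 0; i < n; i++) {
        double ideal = 1e6 / (double)RATES[i];   /* one bit, in us */
        double err = bit_time_us > ideal ? bit_time_us - ideal
                                         : ideal - bit_time_us;
        if (err < best_err) { best_err = err; best = RATES[i]; }
    }
    return best;
}
```

At 115200 baud neighbouring rates differ by only a few microseconds per bit, which mirrors why PAULMON2 cannot reliably distinguish CR from other characters at that speed.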
http://www.pjrc.com/tech/8051/board5/startup.html
Revision history for Unexpected

0.44.1  2016-05-08 23:04:28
    - Added coverage URI to travis.yml
    - Added coverage badge and coverage report posting

0.43.1  2015-11-29 15:38:05
    - Refactored smoker exceptions to dump file
    - Added another broken smoker to exception list
    - Can set multiple exception attributes from exception classes

0.42.1  2015-10-04 11:09:47
    - Strictify the POD encoding
    - Better subroutine naming
    - Toolchain update

0.41.1  2015-09-14 09:58:50
    - Turning off CPAN Testing Aug 2015. Since the site went down I
      cannot see the reports so there is no point leaving them running
    - Eliminated Try::Tiny from abridged stacktrace

0.40.1  2015-09-04 11:51:43
    - Cache the exception class lookup in Functions
      Won't need to except EXCEPTION_CLASS when importing throw
    - Upstream toolchain update

0.39.1  2015-08-26 23:36:44
    - Added RequestFactory duck type
    - Added tests for clone method
    - Added clone method to Throwing
    - Replaced 01always-pass.t with Test::ReportMetadata

0.38.1  2015-05-01 14:10:50
    - Fixed prototype on inflate_placeholders
    - Exposed inflate_placeholders
    - Deprecated quote_bind_values. Deleted interpolate_msg
    - Stop quote_maybe in Functions::interpolate_msg with extra default
    - Updated copyright year

0.37.1  2015-04-22 21:52:56
    - Dropped Coveralls due to permission creep on Github
    - Added Kwalitee badge
    - Added NonZeroPositiveNum and PositiveNum to Types
    - Added interpolate_msg which uses new default placeholders
    - Reversed missing placeholder change
    - Made boolean overload subclassable
    - Replaced last _sub with lexical
    - Missing placeholders replaced with undef and null
    - Call instance_of in catch_class matching

0.36.1  2014-12-22 01:02:53
    - Added explicite bool overload
    - Broken smoker a54c1c84-6bf5-1014-b4f9-dcd54300afcd added to skip list
    - Updated prereqs

0.35.1  2014-12-01 17:18:12
    - Switched to lexical subroutines
    - Added coderef|error, arrayref to constructor method signature
    - Switched back to alternate D::C::R::Coveralls
      The alternate version uses Furl and works the official version
      uses HTTP::Tiny and fails to validate the host cert
    - Added exception, throw and throw_on_error functions
    - Made is_one_of_us an exported function

0.34.1  2014-10-07 17:21:03
    - Updated prereq for Exporter::Tiny closes RT#99263 (tobyink++)

0.33.1  2014-10-02 18:31:16
    - Fixed class coderef dereferencing bug

0.32.1  2014-10-02 01:38:57
    - Made tests skip broken dev releases of Exporter::Tiny
      7b202d3a-49be-11e4-8c55-4b712c6f0924 and RT#99263

0.31.1  2014-10-01 23:43:15
    - Added coderef and list constructor signature
    - Added two colons test skip line to t/boilerplate.t

0.30.1  2014-08-19 00:02:19
    - Bumped Type:Tiny version RT#98113
    - Removed version from POD
    - Removed strictures::defanged unnecessary warnings::bits is going
      undefined. Something in the test suite since first occurance was
      on 02pod.t
    - Corrected inline package definition in type tests
    - More breakage 494fe6de-2168-11e4-b0d1-f6bc4915a708
    - Added fury badge
    - Updated build_requires dependencies
    - Split out boilerplate from test scripts
    - Moved s::d calls til after min perl ver tests
    - Added strictures::defanged to tests because
      160dd1a2-1ebe-11e4-ae61-5739e0bfc7aa

0.28.1  2014-08-07 16:37:55
    - Added another constructor method signature
    - Added Coverall instructions to tavis.yml
    - Added Travis and Coverall badges to POD

0.27.1  2014-07-16 15:51:37
    - Fixed frame skipping sub in stack tracing role
      Something in the way Moo constructs attributes if different
      between most systems and some BSD smokers. The
      Unexpected::TraitFor::TracingStacks::_build_trace frame is
      reported as __ANON__
    - Removed dependency on strictures

0.26.1  2014-07-16 08:07:00
    - Skipping Type::Tiny 0.44. Doesn't install with Moo 1.005
    - Tests set ENV variable when stack trace broken. More debug op
    - Removed filtered frames

0.25.1  2014-07-16 06:26:37
    - Stack trace broken on some bsd smokers

0.24.1  2014-07-15 22:09:12
    - Releasing
    - Added catch_class to ::Functions
    - Replaced namespace::sweep with namespace::autoclean
    - CPAN Testing failure 20ba3f5a-f94d-11e3-82c2-bc3ea1235623
      Seifer Cross unknown host. Undefined install paths

0.23.1  2014-05-28 11:24:47
    - Added new construction signature, <error>, <hash_ref>
    - Added stringification test
    - Moved location of namespace::clean calls. Yuck
    - Made LoadableClass TC output require_module error
    - Removed Ident: lines

0.22.1  2014-01-24 20:30:06
    - Removed VERSION from all but main module
    - Updated git pre commit hook
    - Method add_exception now only adds one exception at a time

0.21.1  2014-01-15 17:13:43
    - Added ::Functions::has_exception. Exception descriptor DSL
    - Renamed ::EC::has_exception to add_exception

0.20.1  2014-01-01 01:20:15
    - Defined the Unspecified exception class
    - Added default error message to ::ExceptionClasses
    - Updated Build.PL. Remove prereqs below min_perl_ver

0.19.1  2013-12-06 12:33:21
    - CPAN Testing failure 073a6592-5dc5-11e3-8778-8bb49a6ffe4e
      Sebastian Woetzel openbsd.my.domain. We require version.pm 0.88
      Smoker says 0.99002 installed runs tests using 0.82. This is a
      candidate for the Admin interface when it arrives
    - Using DZ::P::AbstractFromPOD
    - Using DZ::P::LicenseFromModule
    - Added prototypes to ::Functions

0.18.1  2013-11-30 15:52:42
    - ::F::import test exception_class for is_exception

0.17.1  2013-11-30 15:09:48
    - Added ::Functions::is_class_loaded

0.16.1  2013-11-27 12:15:44
    - Replaced Class::Load with Module::Runtime
    - Dropped DZ::P::MarkdownInRoot from dist.ini

0.15.1  2013-11-21 16:59:45
    - Added exception class function export
    - Renamed placeholder state mutator to quote_bind_values
    - Added placeholder quoting in inflate_message

0.14.1  2013-10-19 18:31:27
    - Updated git hooks
    - Type::Tiny exception class renamed RT#89410

0.13.1  2013-09-27 13:27:37
    - Replaced Exporter::TypeTiny with Exporter::Tiny

0.12.1  2013-09-03 11:58:21
    - ::TF::EC requires subclasses to call has_exception

0.11.4  2013-09-03 11:50:34
    - Added ::TraitFor::ExceptionClasses

0.10.1  2013-08-26 22:34:17
    - Terminate if MB version too low

0.9.1   2013-08-25 14:01:22
    - Updated Build.PL template. Tests MB version
    - Improved Devel::Cover scores

0.8.2   2013-08-24 00:56:57
    - Bumped version to fix mad meta data

0.7.6   2013-08-16 22:45:20
    - Raised min Perl ver to 5.10.1. Using //
    - Lowered min Perl ver to 5.8
    - Readded dependencies
    - Switched to Dist::Zilla
https://metacpan.org/changes/distribution/Unexpected
Hi, maybe somebody can help me:

As is mentioned in the EJB3 Trailblazers [], you can annotate an entity method with @Remove so it would be removed from the EntityContext when called. But the EJB3 persistence spec does not mention a mechanism like the above and, as far as I can see, @Remove-annotated methods do not remove the entity. I believe the Trailblazers are incorrect on this point.

So is there a solution for removing an entity from its EntityContext from the entity itself? I have a scenario like:

public class Entity {

    public methodXXX() {
        remove();
    }

    public remove() {
        // remove this entity from EntityContext
    }
}
https://developer.jboss.org/message/353319
There’s been a lot of well-deserved hype around Svelte recently, with the project accumulating over 24,000 GitHub stars. Arguably the simplest JavaScript framework out there, Svelte was written by Rich Harris, the developer behind Rollup. There’s a lot to like about Svelte (performance, built-in state management, writing proper markup rather than JSX), but the big draw for me has been its approach to CSS.

Single file components

React does not have an opinion about how styles are defined
—React Documentation

A UI framework that doesn't have a built-in way to add styles to your components is unfinished.
—Rich Harris, creator of Svelte

In Svelte, you can write CSS in a stylesheet like you normally would on a typical project. You can also use CSS-in-JS solutions, like styled-components and Emotion, if you'd like. It’s become increasingly common to divide code into components, rather than by file type. React, for example, allows for the collocation of a component's markup and JavaScript. In Svelte, this is taken one logical step further: the JavaScript, markup and styling for a component can all exist together in a single `.svelte` file. If you’ve ever used single file components in Vue, then Svelte will look familiar.

// button.svelte
<style>
  button {
    border-radius: 0;
    background-color: aqua;
  }
</style>

<button>
  <slot/>
</button>

Styles are scoped by default

By default, styles defined within a Svelte file are scoped. Like CSS-in-JS libraries or CSS Modules, Svelte generates unique class names when it compiles to make sure the styles for one element never conflict with styles from another. That means you can use simple element selectors like div and button in a Svelte component file without needing to work with class names. If we go back to the button styles in our earlier example, we know that a ruleset for <button> will only be applied to our <Button> component — not to any other HTML button elements within the page.
If you were to have multiple buttons within a component and wanted to style them differently, you'd still need classes. Classes will also be scoped by Svelte. The classes that Svelte generates look like gibberish because they are based on a hash of the component styles (e.g. svelte-433xyz). This is far easier than a naming convention like BEM. Admittedly though, the experience of looking at styles in DevTools is slightly worse as the class names lack meaning.

It’s not an either/or situation. You can use Svelte’s scoped styling along with a regular stylesheet. I personally write component specific styles within .svelte files, but make use of utility classes defined in a stylesheet. For global styles to be available across an entire app — CSS custom properties, reusable CSS animations, utility classes, any ‘reset’ styles, or a CSS framework like Bootstrap — I suggest putting them in a stylesheet linked in the head of your HTML document.

It lets us create global styles

As we've just seen, you can use a regular stylesheet to define global styles. Should you need to define any global styles from within a Svelte component, you can do that too by using :global. This is essentially a way to opt out of scoping when and where you need to. For example, a modal component may want to toggle a class to style the body element:

<style>
  :global(.noscroll) {
    overflow: hidden;
  }
</style>

Unused styles are flagged

Another benefit of Svelte is that it will alert you about any unused styles during compilation. In other words, it searches for places where styles are defined but never used in the markup.

Conditional classes are terse and effortless

If the JavaScript variable name and the class name is the same, the syntax is incredibly terse. In this example, I’m creating modifier props for a full-width button and a ghost button.
<script>
  export let big = false;
  export let ghost = false;
</script>

<style>
  .big {
    font-size: 20px;
    display: block;
    width: 100%;
  }

  .ghost {
    background-color: transparent;
    border: solid currentColor 2px;
  }
</style>

<button class:big class:ghost>
  <slot/>
</button>

A class of ghost will be applied to the element when a ghost prop is used, and a class of big is applied when a big prop is used.

<script>
  import Button from './Button.svelte';
</script>

<Button big ghost>Click Me</Button>

Svelte doesn’t require class names and prop names to be identical.

<script>
  export let primary = false;
  export let secondary = false;
</script>

<button
  class="c-btn"
  class:c-btn--primary={primary}
  class:c-btn--secondary={secondary}>
  <slot></slot>
</button>

The above button component will always have a c-btn class but will include modifier classes only when the relevant prop is passed in, like this:

<Button primary>Click Me</Button>

That will generate this markup:

<button class="c-btn c-btn--primary">Click Me</button>

Any number of arbitrary classes can be passed to a component with a single prop:

<script>
  let class_name = '';
  export { class_name as class };
</script>

<button class="c-btn {class_name}">
  <slot />
</button>

Then, classes can be used much the same way you would with HTML markup:

<Button class="mt40">Click Me</Button>

From BEM to Svelte

Let's see how much easier Svelte makes writing styles compared to a standard CSS naming convention. Here's a simple component coded up using BEM.

.c-card {
  border-radius: 3px;
  border: solid 2px;
}

.c-card__title {
  text-transform: uppercase;
}

.c-card__text {
  color: gray;
}

.c-card--featured {
  border-color: gold;
}

Using BEM, classes get long and ugly. In Svelte, things are a lot simpler.
<style>
div {
  border-radius: 3px;
  border: solid 2px;
}

h2 {
  text-transform: uppercase;
}

p {
  color: gray;
}

.featured {
  border-color: gold;
}
</style>

<div class:featured>
  <h2>{title}</h2>
  <p>
    <slot />
  </p>
</div>

It plays well with preprocessors

CSS preprocessors feel a lot less necessary when working with Svelte, but they can work perfectly alongside one another by making use of a package called Svelte Preprocess. Support is available for Less, Stylus and PostCSS, but here we'll look at Sass. The first thing we need to do is to install some dependencies:

npm install -D svelte-preprocess node-sass

Then we need to import autoPreprocess in rollup.config.js at the top of the file.

import autoPreprocess from 'svelte-preprocess';

Next, let's find the plugins array and add preprocess: autoPreprocess() to Svelte:

export default {
  plugins: [
    svelte({
      preprocess: autoPreprocess(),
      ...other stuff

Then all we need to do is specify that we're using Sass when we're working in a component file, by adding type="text/scss" or lang="scss" to the style tag.

<style type="text/scss">
$pink: rgb(200, 0, 220);

p {
  color: black;
  span {
    color: $pink;
  }
}
</style>

Dynamic values without a runtime

We've seen that Svelte comes with most of the benefits of CSS-in-JS out-of-the-box, but without external dependencies! However, there's one thing that third-party libraries can do that Svelte simply can't: use JavaScript variables in CSS. The following code is not valid and will not work:

<script>
export let cols = 4;
</script>

<style>
ul {
  display: grid;
  width: 100%;
  grid-column-gap: 16px;
  grid-row-gap: 16px;
  grid-template-columns: repeat({cols}, 1fr);
}
</style>

<ul>
  <slot />
</ul>

We can, however, achieve similar functionality by using CSS variables.
<script>
export let cols = 4;
</script>

<style>
ul {
  display: grid;
  width: 100%;
  grid-column-gap: 16px;
  grid-row-gap: 16px;
  grid-template-columns: repeat(var(--columns), 1fr);
}
</style>

<ul style="--columns:{cols}">
  <slot />
</ul>

I've written CSS in all kinds of different ways over the years: Sass, Shadow DOM, CSS-in-JS, BEM, atomic CSS and PostCSS. Svelte offers the most intuitive, approachable and user-friendly styling API. If you want to read more about this topic then check out the aptly titled The Zen of Just Writing CSS by Rich Harris.

Happy to see a Svelte article on CSS Tricks! Your BEM to Svelte explanation says it all. Why have we been writing BEM for all these years when a compiler could do it for us? Although maybe we should not promote writing code where we style base elements directly. Even in scoped code this could easily lead to problems when expanding upon the code. A better idea in my opinion is to use simple class names like in Bootstrap. In Svelte, you don't have the CSS clashes that Bootstrap has due to its lack of namespacing.

Been playing with Svelte CSS and context-based themes using CSS custom properties (CSS vars). This is incredibly powerful. It should be much more performant than the approach normally used with Styled Components and Emotion, especially with the new CSS.registerProperty(), which allows CSS vars to not inherit, making them more performant as they will only affect one element.

Nice post! Have you used external Sass files? I'm having a pain in the rear trying to import a single global.scss file, even though preprocessing Sass in the style tags is working. Cheers!

In App.svelte: That's what I'm using today, but then I can't use the sweet component styling for App :P It's not that bad, but it would be nice with a better solution.
https://css-tricks.com/what-i-like-about-writing-styles-with-svelte/?ref=webdesignernews.com
honey the monster, codewitch wrote: if i can't do generic programming i'm a sad honey bear.

honey the monster, codewitch wrote: generics need to be able to do more.

honey the monster, codewitch wrote: i mean procedural as in procedures rather than objects to divvy up code.

honey the monster, codewitch wrote: If there's a better word for that I'm unaware of it. You could say that all imperative languages are procedural if they have functions/methods but that's almost too general to be useful.

honey the monster, codewitch wrote: As far as your yucks, i come from a C++ background and happen to like generic programming. to each their own.

Eddy Vluggen wrote: That's a non-complaint; like I said, you can put all your procedures in a God-object

Eddy Vluggen wrote: I'd say you haven't worked in a strict procedural language

Eddy Vluggen wrote: Haven't seen much of that, so not going to comment on it. But still, yuck.

honey the monster, codewitch wrote: Now I wonder what you'd consider procedural.

honey the monster, codewitch wrote: Spoken like someone that's never used it.

honey the monster, codewitch wrote: I wish it was more available in places other than C++.

10 GOTO HELL

public class SomeParsingData {}

public class FA<T>
{
    // ...
    public virtual void MyOperation(SomeParsingData data) {}
}

public class CharFA : FA<char>
{
    public override void MyOperation(SomeParsingData data) {}
}

static class FAUtil
{
    public static void MyOperation<T>(FA<T> target, SomeParsingData data)
        => target.MyOperation(data);
}

class FA<TInput,TAccept> { }

class CharFA<TAccept> : FA<char,TAccept> { }

var fa = new FA<char,string>(); // instantiate the specialization

rather than

var fa = new CharFA<string>(); // <-- what i have to do now
https://www.codeproject.com/Lounge.aspx?fid=1159&df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True&select=5638329&fr=58501
Hello,

The use case is the measurement of 2 voltages of max ±10 V. I use an Arduino UNO (ATmega328) and two ADC components (AD7893ANZ-10), datasheet available on , and trying to make this run I faced two challenges which were not solvable by my research, which is why I hope for help here. The issues are the following:

1. The TWI address of the ADC component is not available in the datasheet. I tried three I2C device scanners, which I found on the Arduino forum, but no I2C device could be found. One of the scanners is posted below and the scheme of the circuit (with pull-up resistors etc.) is attached. On the circuit scheme the pin "convst" is not connected, but I also tried it with the convst pin being pulsed with T=12µs and D=50% according to the datasheet.

#include <Wire.h>

void setup() {
  int count = 0;
  Serial.begin (9600);
  Wire.begin();
  Serial.println ("I2C-Bus-Scanner");
  Wire.setClock(100000L);
  Serial.println("Scan at 100 kHz");
  scan();
  Wire.setClock(400000L);
  Serial.println("Scan at 400 kHz");
  scan();
}

void scan(void) {
  int count = 0;
  for (int i = 0; i < 128; i++) {
    Wire.beginTransmission(i);
    if (Wire.endTransmission () == 0) {
      /* found */
      Serial.print ("ID = ");
      Serial.print (" 0x");
      Serial.println(i, HEX);
      count++;
    }
    delay (20);
  }
  Serial.print (count);
  Serial.println (" devices found\n");
}

void loop() {
}

2. I also considered using the SPI interface. But there is no CS pin and no pin to connect MOSI to on the ADC (see datasheet page 9). The alternative solution for the CS pin is explained in the datasheet as follows:

"To chip select the AD7893 in systems where more than one device is connected to the 68HC11's serial port, a port bit, configured as an output from one of the 68HC11's parallel ports, can be used to gate on or off the serial clock to the AD7893. A simple AND function on this port bit and the serial clock from the 68HC11 will provide this function. The port bit should be high to select the AD7893 and low when it is not selected.
"

I tried to figure out how to realize the AND function, but it is not even clear to me yet whether this is related to HW or SW. I hope the problem is formulated in enough detail; if not, please let me know. Every hint is appreciated :)

Matt wrote:
> The TWI address of the ADC component is not available in the datasheet.

This ADC does not have a TWI (I²C) interface! According to Figure 3 and the explanations above it, you need to start the conversion by a third wire (CONVST), then wait a while before reading out the data. To read out the data you need to send 16 clock pulses and read in the data bits (while the clock is low).

> I considered also to use the SPI interface.

I think this is possible by sending out 2 bytes (the MOSI wire is unused). Connect MISO and SCK to the ADC and also connect another general purpose wire to CONVST. However, due to the timing requirements I assume that using the SPI peripheral does not provide any benefit compared to bit-banging by a small 16-step for-loop.

Hi, thanks a lot for the hints. This part in the datasheet led me to believe it works with TWI: "The AD7893 provides a two-wire serial interface that can be used for connection to the serial ports of DSP processors and microcontrollers." For SPI, do you have a hint how to select between the ADCs from the µC, since there is no CS pin and there are 2 ADCs?

Yours,
https://embdev.net/topic/489396
BlackBerryProfileGetDataFlag

#include <bb/platform/identity/BlackBerryProfileGetDataFlag>

To link against this class, add the following line to your .pro file:

LIBS += -lbbplatform

The flags for Profile data retrieval. Multiple flags can be combined using bitwise 'OR' unless stated otherwise. See the flags parameter of ids_get_data().

Public Types

- Default 0x00000000
  Use the default flags for get requests. If options are not specified, the get request synchronizes with the remote copy so that the cache is always up to date while the device has appropriate data coverage.

- CacheData 0x00000001
http://developer.blackberry.com/native/reference/cascades/bb__platform__identity__blackberryprofilegetdataflag.html
To test the accuracy of a model we will test the model on data that it has not seen before. We will divide the available data into two sets: a training set that the model will learn from, and a test set which will be used to test the accuracy of the model on new data. A convenient way to split the data is to use scikit-learn's train_test_split method. This randomly divides the data between training and test sets. We may specify what proportion to keep for the test set (0.2 – 0.3 is common).

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split

# Load the iris data
iris = datasets.load_iris()

# Extract the feature data (data), and the classification (target)
X = iris.data
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
# random_state is an integer seed.
# If this is omitted then a different seed will be used each time

Let's look at the size of the data sets:

print('Shape of X:', X.shape)
print('Shape of y:', y.shape)
print('Shape of X_train:', X_train.shape)
print('Shape of y_train:', y_train.shape)
print('Shape of X_test:', X_test.shape)
print('Shape of y_test:', y_test.shape)

OUT:
Shape of X: (150, 4)
Shape of y: (150,)
Shape of X_train: (105, 4)
Shape of y_train: (105,)
Shape of X_test: (45, 4)
Shape of y_test: (45,)

The data has been split randomly, 70% into the training set and 30% into the test set.
https://pythonhealthcare.org/2018/04/14/63-machine-learning-splitting-data-into-training-and-test-sets/
Using the API - Making NAO speak

Making NAO speak

Try to run the following code:

from naoqi import ALProxy
tts = ALProxy("ALTextToSpeech", "<IP of your robot>", 9559)
tts.say("Hello, world!")

Using a proxy

ALProxy is an object that gives you access to all the methods of the module you are going to connect to.

class ALProxy(name, ip, port)
- name: The name of the module
- ip: The IP of your robot
- port: The port on which NAOqi listens (9559 by default)

Every method of the module is directly accessible through the object, for instance:

almemory = ALProxy("ALMemory", "nao.local", 9559)
pings = almemory.ping()
http://doc.aldebaran.com/2-4/dev/python/making_nao_speak.html
100,000 e-mail addresses from the esoterics business for sale! Well-running consultation portal for sale! Top domains and various other portals for sale!

Dear colleague, I am the operator of the consultation portal "okulusvidens.net". I am now fulfilling a long-held dream: in March/April I will start a circumnavigation of the world with my sailing boat, and I expect to be under way for about 2 years. For this reason I am selling my business. Okulus Videns has had a turnover of just under 1 million € in the past 2 years (the books can be inspected), with various additional income even about 1.3 million. If you are interested in this consultation portal, contact me at the e-mail address info@... <mailto:info@...>. If not, then perhaps you are interested in 100,000 e-mail addresses of customers from the esoterics business, or in top domains from the same business. I also had some projects (link portal, travel portal, flirt portal, cam portal, auction portal, partner portal, e-books etc.) which I have not yet advertised; I am selling these as well. You will find an overview of the top domains and the portals on the website <>. I hope there is something of interest for you. If not, just ask me — perhaps I can organize something for you through my connections.

With kind regards,
Anton Lehmann

P.S.: If you do not own a program that can send mass e-mail, you will get one from me for free.

Hi all,

Do we have any design document for lift off? Developers must definitely have followed a pattern before developing the product. Please send me one if there is any. Can we use the product to install C++ based programs and do some native operations also? Like getting the disk space, updating the registry etc? Please answer, this would be helpful.

Thanks
Dennis

__________________________________________________
Do You Yahoo!?
Listen to your Yahoo!
Mail messages from any phone.

[Stuart Barlow]

These two bugs have been fixed in the CVS version of liftoff. Installing the CVS version is a little more cumbersome but it is doable. The main issue (apart from downloading through CVS) is editing the data/builder.properties file. My file looks like this:

# name of the list of standard files
stdlist=i:\\java\\liftoff\\data\\def_list.list
# name of the directory that contains the installer classes
installer=i:\\java\\liftoff\\data\\installer
# Jar loader class
install_class=i:\\java\\liftoff\\data\\Install.class

I can then start the builder with (all on one line):

java -Ddatadir=i:\java\liftoff\data -cp i:\java\liftoff\lib\LiftOff.jar net.sourceforge.liftoff.builder.Main

In general the CVS version of liftoff has good support for Windows. I use w2k myself so I know that it works there. The problems related to the installer for jython have been few. A new release should be made from the CVS, but I don't have the necessary rights to create new releases in the liftoff project.

regards,
finn

check for lib in classpath
exception in called method installer.Install2.main
java.lang.NullPointerException
    at installer.items.InstallableContainer.initInstallables(InstallableContainer.java:252)
    at installer.items.InstallableContainer.<init>(InstallableContainer.java:92)
    at installer.Info.loadInstallerProps(Info.java:301)
    at installer.Install2.<init>(Install2.java:61)
    at installer.Install2.main(Install2.java:81)
    at java.lang.reflect.Method.invoke(Native Method)
    at Install.main(Install.java:355)
java.lang.reflect.InvocationTargetException

[Dorothy Weil]

>If you are happy with the configuration files installer.properties and
>installer.filelist, is there a command line option to create the class/zip
>file?

No. I've thought about it myself, but I don't create an installer file that often.
I also always have a GUI available. Patches are always welcome, so if you need the feature so badly that you want to code it yourself, I will gladly commit it to the codebase for you. regards, finn If you are happy with the configuration files installer.properties and installer.filelist, is there a command line option to create the class/zip file? Thank You, Dorothy Weil > Dorothy Weil > Software Developer > Umbrella Management Systems Configuration Management Integrated Operations Systems Solutions (IOSS) > Qwest > Voice: (612) 664-4477 > mailto:dxweil@... > > [Moved to jython-dev and cc'ed: liftoff-users] [Brad Clements] >I'd like to add support for NetWare in the installer.. How do I go about doing that? Start by checking out the CVS version of liftoff. Build liftoff by running "bootstrap" and then "build" from within the "src" directory. Before running liftoff, some directories must be configured in the data/builder.properties file: # name of the list of stdandard files stdlist=i:\\java\\liftoff\\data\\def_list.list # name of the directory that contains the installer classes installer=i:\\java\\liftoff\\data\\installer # Jar loader class install_class=i:\\java\\liftoff\\data\\Install.class In src\installer\net\sourceforge\liftoff\installer you will find a os.properties file that maps the os.name to a platform action class. A new action class must most likely be created: NetwareAction.java. Use the WindowsAction class as a beginning. >The OS type reported (on my platform) is "NetWare 5.00", though "NetWare*" should >be used to match any version of NetWare. > >To install, the .class needs to be unzipped somewhere (I suggest sys:Jython) By default the installer picks the user.home property. If that property is initialized by the netware JVM, we should just use that. >(yes, that's right, "sys:Jython" NetWare uses volume:path/path/file) Perhaps the path handling code in liftoff can deal with that already. Perhaps not. 
>After unzipping to the target directory (wherever the user specified, might be >vol1:jython) these steps are needed: > >1. Add jython.jar to classpath by appending a line like this to the file sys:etc/java.cfg > >CLASSPATH=$CLASSPATH;sys:Jython/jython.jar > >(substitute the correct path) I don't yet have a wellformed opinion about this, but I would prefer if the installation set the classpath in the script files if at all possible. Not only does that match windows and unix behavior, it also avoids a lot of problems with proper access rights to sys:etc (I guess). >(anyone know if SERVLETCLASSPATH should be updated as well?) These changes to sys:etc/java.cfg should instead be described in doc/readme files. >2. Create two files in the target installation directory, like this: > >filename: jython.ncf > >Contents: > >java -ns -Dpython.home=sys:jython org.python.util.jython %1 %2 %3 %4 %5 %6 %7 %8 %9 > > >Note that the correct path needs to be inserted in place of "sys:jython", there's just one >line in the .ncf file. > >filename: jythonc.ncf > >Contents: > >java -ns -Dpython.home=sys:jython org.python.util.jython >"sys:jython/Tools/jythonc/jythonc.py" %1 %2 %3 %4 %5 %6 %7 %8 %9 > >(should be all one line, also put the correct path to sys:jython in both locations) That's relative easy with liftoff. A new builder property "exec.template.netware" must be added to liftoff and two jython specific template files must be added jython. >3. Admonish the user after installation with the following message: > >------ >You must unload java to complete the installation. Unload java by typing the command >"unload java" at the system console. Note that this will unload any DirXML drivers you >may have loaded. 
> >You may wish to add <installationdir> to the system search path by adding the following >line to sys:system/autoexec.ncf: > >search add <installationdir> >----- >(replace <installationdir> with the appropriate path to where the .ncf files ended up) > That is easy with liftoff's templates. >If someone could tell me how to make these changes to the installer (and submit a >patch) or if someone could just make the changes and I'll test (hint hint) I'd would be >very happy, and so would many NetWare users.. Absolutely. Making it easier for netware users to use jython would be a good thing for all. When I create the installable jython-21a1.class I run this windows script from within the jython CVS directory: java -Ddatadir=i:\java\liftoff\data -cp i:\java\liftoff\lib\LiftOff.jar net.sourceforge.liftoff.builder.Main where i:\java\liftoff is the CVS version of liftoff. I open the installer\liftoff.props file with the file/open and create the class with Create/Class menu. If you can't upload patches to the liftoff project, you can put them under the jython project instead. I have commit rights to liftoff and will be able to commit them. regards, finn Hi, I wondered why the uninstall is different than the install mechanisme. I though it would be an improvement if the uninstall classes and the unstall.dat data file was stored in one file similar to the way the installer is packaged into one .class file. A patch that does exactly that is available here: A short description of the changes: - An Uninstall loader class is added. It is almost a copy of the Install class but the Uninstall class loads all entries in the zip archive and then closes the archive. That way, the Uninstall.class file can be deleted as a normal installable file. - The InstallerLib will write the Uninstall.class, the installer classes and the uninstall state to a runnable zipfile. The UninstallInfo installable is no longer needed. 
- The uninstall process will attempt to read the install state as a resource from the classloader. - The location of the Uninstall.class can be specified in the builder. I want it in _top_. Others will most likely want it in jlib. - The zip subpackage from builder is copied to a zip subpackage in installer. Ideally this should perhaps be shared in a common package. regards, finn [I wrote] >I have a patch ... and here it is: Index: FileSelect.java =================================================================== RCS file: /cvsroot/liftoff/liftoff/src/installer/net/sourceforge/liftoff/installer/awt/FileSelect.java,v retrieving revision 1.1 diff -u -5 -r1.1 FileSelect.java --- FileSelect.java 2000/07/31 19:17:26 1.1 +++ FileSelect.java 2000/12/28 20:15:43 @@ -199,11 +199,11 @@ listLabel = new Label("Files"); //listPanel.add(listLabel, "North" ); files = new List(10); files.setBackground(Color.white); - files.addItemListener(this); + files.addActionListener(this); listPanel.add(files,"Center"); la.setConstraints(listPanel,span); add(listPanel); @@ -366,20 +366,10 @@ public void itemStateChanged( ItemEvent e ) { String selected = ""; - if( e.getSource() == files ) { - selected = files.getSelectedItem(); - if( selected != null ) { - String name = currentDir + File.separator + selected ; - filenameField.setText( name ); - setDir( name ); - } - return; - } - if( e.getSource() == pathNow ) { selected = pathNow.getSelectedItem(); if( selected != null ) { setDir(selected); } @@ -405,10 +395,20 @@ } } public void actionPerformed( ActionEvent e) { + + if( e.getSource() == files ) { + String selected = files.getSelectedItem(); + if( selected != null ) { + String name = currentDir + File.separator + selected ; + filenameField.setText( name ); + setDir( name ); + } + return; + } if( e.getSource() == filenameField ) { System.err.println( "Enter : " + e ); checkField( e.getActionCommand() ); } else if( e.getActionCommand().equals("home") ) { Hi, I've recieved a bug report from a 
user who get a ArrayIndexOutOfBoundsException when double clicking on a directory in the FileSelect dialog. (The FileSelect dialog is shown when clicking the [...] button on the destination card). The problem may be limited to windows but happens with all versions of JDK. AFAICT, the problem occur because the itemStateChanged that is fired on the first click on a java.awt.List modifies the list. A split second later the 2nd click is received and that fires an ActionEvent which in the internal part of a java.awt.List does a: list.getItem(list.getSelectedIndex()) but because the itemStateChanged() modified the list, no item is selected. The result is: >java.lang.ArrayIndexOutOfBoundsException: -1 < 0 > at java.util.Vector.elementAt(Vector.java:427) > at java.awt.List.getItemImpl(List.java:269) > at java.awt.List.getItem(List.java:261) > at sun.awt.windows.WListPeer$1.run(WListPeer.java:167) > at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:154) > at java.awt.EventQueue.dispatchEvent(EventQueue.java:317) > at java.awt.EventDispatchThread.pumpOneEvent(EventDispatchThread.java:103) This little program show the same problem: import java.awt.*; import java.awt.event.*; public class l1 implements ItemListener { public static void main(String[] args) { List list = new List(); fillList(list); list.addItemListener(new l1()); Frame frame = new Frame(); frame.setBounds(new Rectangle(400, 300)); frame.add(list); frame.show(); } public void itemStateChanged(ItemEvent e) { List list = (List) e.getSource(); fillList(list); } public static void fillList(List list) { list.removeAll(); list.add("Item 1"); list.add("Item 2"); list.add("Item 3"); list.add("Item 4"); list.add("Item 5"); list.add("Item 6"); } } I can't think of a solution if we want to keep using single click as directory navigation in the FileSelect dialog. Does anyone know how the example above can work with both single click and double-clicks? 
I have a patch which moves the event handling code from the itemStateChanged() to the actionPerformed() on the list. That prevents the exception, but also changes the behavior of the file select dialog. Andreas, would that be an acceptable solution? regards, finn Finn Bock wrote: >? German translation: "Der Installer kennt ihr Betriebssystem nicht.\n" + "Das System meldet {0} als Namen. Bitte wählen Sie\n" + "das am besten passende System aus der folgenden Liste" The english seems to be correct, but i am not the one to judge this ... Ciao Andreas? regards, finn Hi, For jython I added two small special features and I'm not sure how usefull it will be for others. The actual implementation is also a bit hackish. 1) Jython have no need for the classpath editor card during installation. So I added a boolean property "card.classPathCard" to the builder and added a test to Install2.java: if ("yes".equals(Info.getProperty("card.classPathCard"))) frame.addCard( new ClasspathCard() ); 2) Jython have no need for the system wide installation when root. All of jython is always installed under a common root. So I added a boolean property "destination.systemwide" to the builder and changed the test in UnixAction to be: // if the user is root, select systemwide installation if( "root".equals(Info.getProperty("user.name")) && "yes".equals(Info.getProperty("destination.systemwide")) ) { Are these features sufficiently useful and general? Are there better ways of implementing them? regards, finn [Finn] >. [Joi Ellis] . The java2 requirement only concern the builder. The generated installer can still run on java1. The builder already require the javax.swing package and while it may be possible to find a standalone javax.swing to use with a java1 VM, I don't think it is easy to get running together. >> If java1 is a requirement for the builder, I'll copy&paste the sort() >> method from FileSelect. > >Be careful doing this, it may violate your (and our) license agreement with >Sun. 
Better to use your own implementation or get one from another GNU >project. FileSelect.java is already part of LiftOff. There should be no licence problem with copying from it. regards, finn On Mon, 20 Nov 2000, Finn Bock wrote: >. > If java1 is a requirement for the builder, I'll copy&paste the sort() > method from FileSelect. Be careful doing this, it may violate your (and our) license agreement with Sun. Better to use your own implementation or get one from another GNU project. -- Joi Ellis Software Engineer Aravox Technologies joi@..., gyles19@... No matter what we think of Linux versus FreeBSD, etc., the one thing I really like about Linux is that it has Microsoft worried. Anything that kicks a monopoly in the pants has got to be good for something. - Chris Johnson A comment about the changes for WindowsAction: - I removed the guessing and fall back from getJvmPath. If the requested jvm path does not exists it will return null. The hasJRE & hasConsoleJvm methods will then only return true if the executables actually exists in the file system. IMHO unices should do something similar but I hesitate to make changes there because I can't test it sufficiently. regards, finn. If java1 is a requirement for the builder, I'll copy&paste the sort() method from FileSelect. regards, finn Hi, The default language when installing under an unknown language is german. I do not feel that it is the best possible choice. I would rather see english as the fall back language. Andreas, would that be acceptable to you? 
regards, finn The suggested patch: Index: installer/net/sourceforge/liftoff/installer/Info.java =================================================================== RCS file: /cvsroot/liftoff/liftoff/src/installer/net/sourceforge/liftoff/installer/Info.java,v retrieving revision 1.2 diff -u -r1.2 Info.java --- installer/net/sourceforge/liftoff/installer/Info.java 2000/11/20 17:55:25 1.2 +++ installer/net/sourceforge/liftoff/installer/Info.java 2000/11/20 18:34:18 @@ -130,7 +130,7 @@ return i; } } - return -1; + return 1; // The index of Locale.US in instLocales } /** Index: installer/net/sourceforge/liftoff/installer/TextResources.java =================================================================== RCS file: /cvsroot/liftoff/liftoff/src/installer/net/sourceforge/liftoff/installer/TextResources.java,v retrieving revision 1.1 diff -u -r1.1 TextResources.java --- installer/net/sourceforge/liftoff/installer/TextResources.java 2000/07/31 19:17:26 1.1 +++ installer/net/sourceforge/liftoff/installer/TextResources.java 2000/11/20 18:34:18 @@ -4,6 +4,6 @@ import java.util.*; -public class TextResources extends TextResources_de { +public class TextResources extends TextResources_en { } Hi, I'm a developer on the jython project. We needed a opensource installer and have decided on LiftOff. We have a need for some additional features and for improved windows and mac support. I made some changes and bugfixes for liftoff and Andreas made me a developer. Keep in mind that I'm not an expert on LiftOff and that all my modifications can be backed out if needed. regards, finn)) I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/liftoff/mailman/liftoff-users/
Get ready -- the accept() call is kinda weird! What's going to happen is this: someone far far away will try to connect() to your machine on a port that you are listen()'ing on. Their connection will be queued up waiting to be accept()'ed. You call accept() and you tell it to get the pending connection. It'll return to you a brand new socket file descriptor to use for this single connection! That's right, suddenly you have two socket file descriptors for the price of one! The original one is still listening on your port and the newly created one is finally ready to send() and recv(). We're there!

The call is as follows:

#include <sys/socket.h>

int accept(int sockfd, void *addr, int *addrlen);

sockfd is the listen()'ing socket descriptor. Easy enough. addr will usually be a pointer to a local struct sockaddr_in. This is where the information about the incoming connection will go (and you can determine which host is calling you from which port). addrlen is a local integer variable that should be set to sizeof(struct sockaddr_in) before its address is passed to accept(). accept() will not put more than that many bytes into addr. If it puts fewer in, it'll change the value of addrlen to reflect that.

Guess what? accept() returns -1 and sets errno if an error occurs. Betcha didn't figure that.

Like before, this is a bunch to absorb in one chunk, so here's a sample code fragment for your perusal:

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

#define MYPORT 3490    /* the port users will be connecting to */
#define BACKLOG 10     /* how many pending connections queue will hold */

main()
{
    int sockfd, new_fd;  /* listen on sock_fd, new connection on new_fd */
    struct sockaddr_in my_addr;    /* my address information */
    struct sockaddr_in their_addr; /* connector's address information */
    int sin_size;
    ...
https://everything2.com/user/iain/writeups/accept%2528%2529--%2522Thank+you+for+calling+port+3490.%2522
CC-MAIN-2018-09
refinedweb
321
64.71
command prompt command execution in Qt

How can I execute command-prompt commands from a Qt application? For example: gedit my.txt — in a command prompt this opens my.txt for editing; mkdir Qt — I want to execute this command to create a folder. Thank you in advance.

Any one here ?

QProcess::execute("gedit my.txt")
QProcess::execute("mkdir folder")

In practice you should probably use the overload to pass the argument(s) as a QStringList.

P.S. To create a folder, don't actually use a sub-process of mkdir, use the in-built Qt function for creating a directory.

@Pranit-Patil said in command prompt command execution in Qt:

Any one here ?

You posted that 9 minutes after your original query. Do you pay for my support, or anyone else's, within 10 minutes? :( Or do you just think it's my/our duty? In which case I'll leave it to others for you from now on...

@JonB i m doing simply like this but not getting o/p

QProcess process;
QString exePath = "C:/Windows/ystem32/cmd.exe";
process.start(exePath);
process.execute("cmd.exe");

@Pranit-Patil I gave you the lines to type for your question. If you choose to do something quite different and wrong, that's up to you. Perhaps others will sort you out.

i tried your examples

QProcess::execute("gedit my.txt");
QProcess::execute("mkdir Test");

but not getting any error or any output.

Neither command should produce any error or output. Unless you don't even have gedit.... And if you're on Windows (your question does not even mention the OS), you may need QProcess::execute("cmd /c mkdir Test");.

@JonB Im Developing application in Qt creator 4.6.0 on windows 7 -32bit

#include <QProcess>
#include <QDir>

void Command::on_pushButton_clicked()
{
    QProcess process;
    QString exePath1 = "D:/Phoenix";
    process.start(exePath1);
    process.execute("cmd /c mkdir test");
}

not getting any output.

- jsulm Lifetime Qt Champion last edited by jsulm

@Pranit-Patil "not getting any output."
- what "output" do you expect? Why do you start new process to create a directory? You can do this way faster and easier with Qt () Also this does not make any sense: QProcess process; QString exePath1 = "D:/Phoenix"; process.start(exePath1); process.execute("cmd /c mkdir test"); You can't start a directory as it is not an executable. There is no need to use both start() AND execute(). Did you actually read QProcess documentation?! It has a nice example: QString program = "./path/to/Qt/examples/widgets/analogclock"; QStringList arguments; arguments << "-style" << "fusion"; QProcess *myProcess = new QProcess(parent); myProcess->start(program, arguments); @jsulm I already pointed all this stuff to OP above. Additionally, I also deliberately used QProcess::execute()in my examples for him to keep it down to one liners. Each time he has also added an additional QProcess::start(). I suggested he just copy my examples verbatim to avoid this, but we don't seem to be getting anywhere.... I'll leave it to you :) @JonB Thank you for your reply Sorry sir i didn't understand what your saying ..? aim - create folder in my system using commands of cmd prompt in any location through QT application. Im new in Qt i don't have more knowledge on it plz suggest
https://forum.qt.io/topic/93785/command-prompt-command-execution-in-qt
CC-MAIN-2021-21
refinedweb
560
66.84
The $if statement has been enhanced to allow constant expression evaluation. To match the flavor of the basic $if statement, we have introduced two forms:

$ife expr1 == expr2   true if (expr1-expr2)/(1+abs(expr2)) < 1e-12
$ife expr             true if expr <> 0

The expressions follow the standard GAMS syntax and are explained in more detail below. The .==. form is convenient when expecting rounding errors. For example, we can write

$ife log2(16)^2 == 16 (true statement)

which is more convenient than having to write

$ife NOT round(log2(16)^2-16,10) (true statement)
$ife round(log2(16)^2,10)=16 (true statement)

A new variant on the $if statement has been introduced. It follows the usual structure and allows appropriate nesting. The syntax for the condition is the same as for the $if statement. The $ifthen and $elseif have the same modifiers as the $if statement, namely I for case insensitive compare and E for constant expression evaluation. In the example below we will execute all blocks of such a statement.

$maxgoto 10
$set x a
$label two
$eval x 1
$label three
display 'x=%x%';
$ifthen %x% == 1
$eval x %x%+1
$elseif %x% == 2
$eval x %x%+1
$elseif %x% == 3
$eval x %x%+1
$elseif %x% == 4
$eval x %x%+1
$else
$set x done
$endif
$if NOT %x% == done $goto three

This is a bit contrived but illustrates some of the more subtle features. Anytime we use a looping construct via a $goto statement we have to protect ourselves against the potential of an infinite loop. The number of times we jump back to a label is counted and when a limit is reached, GAMS will issue an error. It is important to note that the %string% references are substituted only once.

Lengthy and nested ifthen/else structures can become difficult to debug. Tagging of the begin, the $ifthen, and the end, the $endif, can be helpful. For example, the next lines will fail because the tags do not match:

$ifthen.one x == x
$endif.two

As with the $if statement, the statement on the line with the $ifthen style statements is optional.
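The relative-tolerance test behind the .==. form is easy to check outside GAMS; here it is transliterated into Python purely to illustrate the arithmetic (GAMS evaluates this natively):

```python
import math

# "$ife expr1 == expr2" is true when (expr1-expr2)/(1+abs(expr2)) < 1e-12,
# a relative comparison that absorbs floating-point rounding error.
def ife_eq(expr1, expr2, tol=1e-12):
    return abs(expr1 - expr2) / (1 + abs(expr2)) < tol

# log2(16)^2 == 16 holds even if the intermediate result picks up
# rounding noise, which is the whole point of the .==. form.
print(ife_eq(math.log2(16) ** 2, 16))  # True
print(ife_eq(2, 3))                    # False
```

Dividing by 1+abs(expr2) rather than abs(expr2) keeps the test well-defined when expr2 is zero.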
The following two statements give the same results:

$iftheni %type% == low $include abc
$elseifi %type% == med $include efg
$else $include xyz
$endif

$iftheni %type% == low
$include abc
$elseifi %type% == med
$include efg
$else
$include xyz
$endif

The statements following directly a $ifthen, $elseif, or $else on the same line can be a sequence of other dollar control statements or contain proper GAMS syntax. The statements following directly a $endif can only contain other dollar control statements.

$ifthen.two c==c display 'true for tag two';
$ifthen.three a==a $log true for tag three
display ' then clause for tag three';
$ifthen.four x==x display 'true for tag four';
$log true for tag four
$else display ' else clause for tag four';
$endif.four $log endif four
$endif.three $log endif three
$endif.two $log endif two

This will produce a GAMS program like

1  display 'true for tag two';
3  display ' then clause for tag three';
4  display 'true for tag four';

with the following log output

--- Starting compilation
true for tag three
true for tag four
endif four
endif three
endif two

The three new dollar control options $eval, $evalLocal and $evalGlobal are similar to $set, $setlocal and $setglobal. Those statements are used to assign values to 'environment variables', which are named strings to be substituted by the %var% reference. The $eval options interpret the argument as a constant expression (explained in more detail below) and encode the result as a string.
For example:

$if NOT set d $eval d 5
$eval h '24*%d%'
$eval he '0'
$eval dd '0'
Sets d days / day1*day%d%/
     h hours / hour1*hour%h% /
     dh(d,h) /
$label more
$eval dd '%dd%+1'
$eval hs '%he%+1'
$eval he %he%+24
day%dd%.hour%hs%*hour%he%
$ife %dd%<%d% $goto more
/;

will produce the expanded input source code below:

3   Sets d days / day1*day5/
4        h hours / hour1*hour120 /
5        dh(d,h) /
7   day1.hour1*hour24
10  day2.hour25*hour48
13  day3.hour49*hour72
16  day4.hour73*hour96
19  day5.hour97*hour120
21  /;

The syntax of constant expressions used in data statements and $conditions follows the GAMS syntax, but restricted to scalar values and a subset of operators and functions, as summarized below:

OR XOR EQV IMP AND NOT
< <= = <> >= > LT LE EQ NE GE GT
+ - (binary and unary) * / ^ **
abs ceil cos exp floor frac IfThen log log2 log10 max min mod PI power round sign sin sleep sqr sqrt tan trunc

When used in data statements, the constant expressions have to be enclosed in a pair of square brackets [ ] or curly brackets or braces { }. Spaces can be used freely inside those brackets. When used in dollar control options, like $eval or $if statements, brackets are not required; however, we may have to enclose the expression within quotes (single or double) if we want to embed spaces or continue with additional $options on the same line. For example, when using $eval followed by another $statement:

$eval x 3 / 5 * pi
$eval y "3/5*pi"
$eval z pi / 2
$eval a ' exp(1) / pi '
$set b anything goes here
$show

Level SetVal Type   Text
--------------------------------------------
0     x      SCOPED 1.88495559215388
0     y      SCOPED 1.88495559215388
0     z      SCOPED 1.5707963267949
0     a      SCOPED 0.865255979432265
0     b      SCOPED anything goes here

As with other dollar control statements, without the quotes, the entire remaining line will be interpreted as one argument. The $ife and $ifthene/$elseife statements have a related problem which has been inherited by mimicking the Windows bat and cmd scripting conventions.
When a constant expression contains spaces it has to be enclosed in quotes, as shown below.

$ife (2+3)>=(2+2) display 'OK, no spaces';
$ife ' (2 + 3) >= (2 + 2) ' display 'now we need quotes';

Finally, here are some data examples:

Scalars x PI half       / {pi/2} /,
        e famous number / [ exp( 1 ) ] /;
Parameter y demo / USA.(high,low) [1/3], USA.medium {1/4} /;

Information about the licensing process is now available at compile and execution time. Two new system environment variables, LicenseStatus and LicenseStatusText, complement the other license related variables. In addition, two functions have been added to retrieve the licensing level and status. The use of those variables is demonstrated in the updated library model licememo. Here is an extract:

$set filename %gams.license%
$if '%filename%' == '' $set filename %gams.sysdir%gamslice.txt
if(%system.licensestatus%,
   put '**** Error Message: %system.licensestatustext%'
       / '**** License file : %filename%'
       / '**** System downgraded to demo mode'// );

If called with an incorrect license, the report may contain:

**** Error Message: could not open specified license file
**** License file : garbage
**** System downgraded to demo mode

The variable system.licensestatus returns zero if no error has been encountered by the licensing process. The variable system.licensestatustext contains the respective explanation about a licensing failure. The above example uses compile time string substitutions and is not updated when executing a precompiled work file. Two new functions, LicenseLevel and LicenseStatus, provide this information at runtime.

Some special features for option files or other information files require writing complete expanded references of indexed identifiers like variables or equations. For example, the options for CPLEX indicator variables can now be written more compactly.
For example, we now can write:

loop(lt(j,jj),
   put / 'indic ' seq.tn(j,jj) '$' y.tn(j,jj) yes
       / 'indic ' seq.tn(jj,j) '$' y.tn(j,jj) NO );

This will produce

indic seq('A','B')$y('A','B') YES
indic seq('B','A')$y('A','B') NO
indic seq('A','C')$y('A','C') YES
indic seq('C','A')$y('A','C') NO
indic seq('B','C')$y('B','C') YES
indic seq('C','B')$y('B','C') NO

Besides the more compact GAMS code, it provides complete syntax checking at compile time.

New syntax has been added to extract more information about a controlling set. It is similar to the ord(i) function but uses the dot notation. The new functions are i.val, i.len, i.off and i.pos, as used below. The following example illustrates some of those new features:

set i / '-inf',1,12,24,'13.14',inf /;
parameter report;
report(i,'value')    = i.val;
report(i,'length')   = i.len;
report(i,'offset')   = i.off;
report(i,'position') = i.pos;
display report;

The display shows

---- 6 PARAMETER report

            value      length      offset    position
-inf         -INF       4.000                   1.000
1           1.000       1.000       1.000       2.000
12         12.000       2.000       2.000       3.000
24         24.000       2.000       3.000       4.000
13.14      13.140       5.000       4.000       5.000
inf          +INF       3.000       5.000       6.000
http://www.gams.com/docs/release/rel_cmex226.htm
crawl-001
refinedweb
1,445
62.78
I have an existing database that uses the SQL "uniqueidentifier" type, which looks like a different GUID format than a .Net GUID.

Question: Using Dapper, how do I map a SQL uniqueidentifier column to .Net?

Issue: Using Dapper-dot-net, when I map the column using the .Net GUID type, it compiles and runs, but all of the .ToString() output displays the same incorrect GUID. (well it doesn't match what I see in SSMS) When I try to map the column using the string type, I get a compiler error. Ideas? Is this possible? I also found that I could cast it to a varchar in T-SQL, but I wanted the actual GUID type in .NET.

With Dapper 1.38 (also using SQL Server as the underlying RDBMS), I had no trouble with mapping if I made the POCO property a Nullable<Guid>. A SqlGuid or a String did not work.

public class MapMe
{
    public Guid? GuidByDapper { get; set; }
}

class Program
{
    static void Main(string[] args)
    {
        using (var connection = Utility.GetConnection())
        {
            var mapped = connection.Query<MapMe>("SELECT NEWID() AS GuidByDapper").Single();
            Console.WriteLine("Guid: {0}", mapped.GuidByDapper);
        }
    }
}

As a Console app it spits out the following:

Guid: 85d17a50-4122-44ee-a118-2a6da2a45461
https://dapper-tutorial.net/knowledge-base/20685800/usando-dapper-dot-net---como-mapeo-la-columna-sql-uniqueidentifier-a-un-tipo--net-
CC-MAIN-2019-47
refinedweb
203
68.97
14 April 2011 14:26 [Source: ICIS news]

WASHINGTON (ICIS)--However, in its monthly report the department said that the 0.7% advance in its closely watched producer price index (PPI) was attributed almost wholly to a 2.6% gain in energy products during March, especially in gasolines.

In organic chemicals, producer prices (also known as wholesale prices) rose 3.4% last month compared with February, the department's data showed. This strong gain followed advances of 4.5% in February and 2.8% in January.

For plastic resins and materials, the gains were more modest, with wholesale prices gaining 0.8% in March from February, the department said. The March advance followed a more modest 0.5% gain in producer prices for resins in February, with both months well below the sharp 3.1% gain in resins wholesale prices seen in January.

The overall producer price index has seen steady gains since the first of the year, with the PPI advancing by 1.6% in February and 0.8% in January. As with other recent gains in the overall producer price level, almost all of the March PPI advance was laid to sharp advances in energy product pricing.

The department said that nearly 90% of the PPI increase in March was attributed to the 2.6% gain in energy products, and within that general category wholesale prices for gasoline shot up by 5.7%, accounting for more than 80% of the energy products price advance. The increase in wholesale fuel prices last month marked the sixth straight monthly advance, the department said.

Gasoline prices at the retail level have been near or exceeding $4/gal (€2.76/gal) in many parts of the country, a level generally regarded as the pricing pain-point for US consumers. In recent history, when gasoline reaches or exceeds $4/gal, US consumers begin to cut back on driving and make other spending reductions.
If gasoline prices at the pump continue to rise – and the March wholesale price gains for fuels indicate that they will – it could begin to undermine the still-modest Consumer spending is the driving force of
http://www.icis.com/Articles/2011/04/14/9452697/us-chemical-and-resin-wholesale-prices-show-gains-in-march.html
CC-MAIN-2014-52
refinedweb
358
66.84
import java.io.*;
import java.text.DecimalFormat;
import java.lang.Math;
import javax.swing.JOptionPane;
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.util.Date;
import java.text.*;

public class MortGuiCalculator3
{
    public static void main(String[] args)
    {
        // declare and construct variables
        int numMonths, i, toggleInt;
        int numYears = 0;
        double loantotal, paypermonth, newloantotal;
        double annualint = 0.0;
        double monthint = 0.0;
        String totalLoanstring, interestLoanstring, decide, Loanterm;
        int years[] = {7, 15, 30};                 // Number of years in the loan
        double intRates[] = {.0535, .055, .0575};  // Interest rate for the loan
        String sample[] = {"7 year at 5.35%", "15 year at 5.5%", "30 year at 5.75%"};

        // Decimal formats
        DecimalFormat money = new DecimalFormat("$###,###,###,###.00");
        DecimalFormat percent = new DecimalFormat("#.00%");

        // Loop
        do
        {
            decide = JOptionPane.showInputDialog(null, "Press '1' to start the Mortgage Calculator or '0' to exit");
            toggleInt = Integer.parseInt(decide);
            if (toggleInt > 0)
            {
                // Total amount borrowed
                totalLoanstring = JOptionPane.showInputDialog(null, "How much is the total loan amount?");
                loantotal = Double.parseDouble(totalLoanstring);
                switch (toggleInt)
                {
                    case 1:
                        String option = (String) JOptionPane.showInputDialog(null, "Which loan option?", "Options",
                            JOptionPane.QUESTION_MESSAGE, null, sample, sample[sample.length - 1]);
                        // Total number of years of the loan
                        break;
                }
                monthint = annualint / 12;  // The monthly interest rate of the loan
                numMonths = numYears * 12;  // Number of months
                paypermonth = loantotal * (monthint / (1 - (Math.pow((1 + monthint), 0 - numMonths))));  // The monthly payment

                // Output monthly payment
                JOptionPane.showMessageDialog(null, "Your loan of " + money.format(loantotal) + " for " + numYears
                    + " years \nat an annual interest rate of " + percent.format(annualint)
                    + "\nwill cost you " + money.format(paypermonth) + " a month for the life of the loan."
                    + "\n\nPlease click OK to return to the main menu.",
                    "McBride Financial Services Mortage Calculator", JOptionPane.INFORMATION_MESSAGE);
            }
        } while (toggleInt > 0);
        System.exit(0);
    }
}

Pull Down menu in Java

Adding values in pull down menu in Java

4 Replies - 1501 Views - Last Post: 19 September 2009 - 12:31 PM

#1 Pull Down menu in Java
Posted 18 September 2009 - 04:06 PM

I need to set values for my pull down menu so that I am able to calculate the accurate amounts for this mortgage calculator. The pull down works fine, but whenever I select an option it displays the results as "0". I am fairly new to Java programming, so if anyone can help me out I will greatly appreciate it! Thanks! By the way, the assignment requires that I use arrays for the program.

Replies To: Pull Down menu in Java

#2 Re: Pull Down menu in Java
Posted 18 September 2009 - 04:24 PM

First of all, annualint and numYears are set to 0.0 at the start, but never changed throughout the code. After you set the String option, you should check what its contents are equal to, then set annualint and numYears as the corresponding numbers from the years[] and intRates[] arrays. Like this:

String option = (String) JOptionPane.showInputDialog(null, "Which loan option?", "Options",
    JOptionPane.QUESTION_MESSAGE, null, sample, sample[sample.length - 1]);
if (option.equals("7 year at 5.35%")) {
    numYears = years[0];
    annualint = intRates[0];
}
if (option.equals("15 year at 5.5%")) {
    //etc.

This post has been edited by Overachiever: 18 September 2009 - 04:27 PM

#3 Re: Pull Down menu in Java
Posted 18 September 2009 - 07:09 PM

What do you call a "pull down menu" ?

#4 Re: Pull Down menu in Java
Posted 19 September 2009 - 12:22 PM

I think maybe a JComboBox?

#5 Re: Pull Down menu in Java
Posted 19 September 2009 - 12:31 PM
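The payment formula the calculator relies on — loantotal * (monthint / (1 - (1 + monthint)^-numMonths)) — can be isolated and checked on its own. This sketch hard-codes the 7-year / 5.35% option from the sample[] array; the $100,000 principal is just an illustrative figure, not from the thread:

```java
public class MortgageDemo {
    // Amortized-payment formula used in the thread:
    //   payment = P * r / (1 - (1 + r)^-n)
    // with r the monthly interest rate and n the number of monthly payments.
    static double monthlyPayment(double principal, double annualRate, int years) {
        double r = annualRate / 12.0;
        int n = years * 12;
        return principal * (r / (1 - Math.pow(1 + r, -n)));
    }

    public static void main(String[] args) {
        // The 7-year option at 5.35%, as offered in the pull-down menu.
        System.out.printf("%.2f%n", monthlyPayment(100000, 0.0535, 7));
    }
}
```

With annualint and numYears left at zero — the bug Overachiever points out — r and n are both zero and the formula divides by zero, which is why the original program prints a bogus payment.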
http://www.dreamincode.net/forums/topic/126654-pull-down-menu-in-java/page__p__771829
CC-MAIN-2016-22
refinedweb
586
52.05
SVG is great. Best way to build scalable graphics on the web. SVG can do everything from simple logos to data visualization and even animation. The best part is, you can manipulate SVG with both CSS and JavaScript. It's an image that's part of your DOM. 🤯

Look at this animated fire example. Isn't it neat? No animated image required, just some SVG elements and a bit of JavaScript. No wonder then that SVG is the norm when it comes to data visualization and other programmable graphics.

Just one problem: SVG sucks at layout

Absolutely terrible. There's no layout support at all. You get absolute positioning and that's it. Sure, you can position absolutely within absolutely positioned elements, which is sort of relative positioning, but ugh ...

Absolute positioning hell

Say you're building a small dashboard. Like of different scatterplots looking at a dataset about dog breeds. Because the data is there, and you can, of course.

You create a scatterplot component. It takes an x and a y position and sizing info. Inside, it places two axes, a caption, and the datapoints. The <Scatterplot> is absolutely positioned via a translate transformation. That moves an SVG element by a vector from (0, 0), thus rendering at the (x, y) coordinate.

You render each scatterplot like this:

<Scatterplot data={data} x={100} y={100} width={350} height={350} />

Absolute or relative, doesn't matter. You're gonna have one hell of a fun time calculating the position of each individual element by hand. D3 scales help, but you still have to think about it. SVG itself offers zero help.

Want to resize your scatterplots? Here's what happens 👇

Want to resize your browser? Here's what happens 👇

Ugh.

react-svg-flexbox to the rescue

You can fix the layouting problem with react-svg-flexbox. It's a small library, not a lot of stars, but omg so perfect. Built on top of Facebook's css-layout, which has recently become a part of yoga, it lets you use CSS flexbox to position SVG elements.
Flexbox might be confusing to grok – I look at tutorials any time I use it for anything – but it's way better than doing it yourself. How many engineers worked on browser layout engines over the past two decades? Wouldn't wanna retrace all those steps yourself 😅

Wrap our dashboard in a <Flexbox> element and…

import Flexbox from "react-svg-flexbox";

// ... render() etc.
<svg>
  <Flexbox style={{ flexDirection: "row", justifyContent: "flex-start" }}>
    <Scatterplot data={data} width={200} height={200} />
  </Flexbox>
</svg>

We take <Flexbox> out of react-svg-flexbox, use flexbox styles to say we want to render in a row that starts at the beginning, and the rest happens on its own. Note that react-svg-flexbox passes x and y props into our components, so we had to take out manual positioning. Our dashboard now uses up all the space it can 👇! o/

Responsive layout with react-svg-flexbox

For the biggest win, we add flexWrap: "wrap" to our <Flexbox> component. Like this 👇

<Flexbox
  style={{
    flexDirection: "row",
    justifyContent: "flex-start",
    flexWrap: "wrap",
    width: 1024,
    height: 1024
  }}
>

<svg style={{ width: "100%", height: 1024 }} ref={this.svgRef}>
  <Flexbox
    style={{
      flexDirection: "row",
      justifyContent: "flex-start",
      flexWrap: "wrap",
      width: this.state.width,
      height: 1024
    }}
  >
  </Flexbox>
</svg>

And your dashboard becomes responsive! Yay

Hey look, a responsive dataviz dashboard made with React. It's all one big SVG 🤘— Swizec Teller (@Swizec) August 17, 2018

Article out tomorrow pic.twitter.com/4hgMOYXqaL

See the code, play with examples

You can see a full set of react-svg-flexbox examples on their Storybook. Code for my dog breed dashboard example is on GitHub here. You can try it live here.

Fin

Use react-svg-flexbox. Your life will improve. The best thing that's ever happened to me for SVG coding. Thanks Cody Averett for finding this gem ️
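Under the hood, the flexbox engine is doing bookkeeping you'd otherwise hand-roll. A toy version of row-plus-wrap layout for fixed-size children looks something like this — plain JavaScript, purely illustrative, and nothing like css-layout/yoga's real algorithm or API:

```javascript
// Lay out fixed-size children left-to-right, wrapping to a new row when
// the next child would overflow the container -- the bookkeeping that
// flexDirection: "row" + flexWrap: "wrap" handles for you.
function layoutRowWrap(containerWidth, children) {
  let x = 0, y = 0, rowHeight = 0;
  return children.map(({ width, height }) => {
    if (x + width > containerWidth && x > 0) {
      // This child doesn't fit: start a new row below the tallest
      // child of the current row.
      x = 0;
      y += rowHeight;
      rowHeight = 0;
    }
    const pos = { x, y, width, height };
    x += width;
    rowHeight = Math.max(rowHeight, height);
    return pos;
  });
}

// Three 200-wide charts in a 500-wide container: two fit on the first
// row, the third wraps underneath -- like the dashboard screenshots.
console.log(layoutRowWrap(500, [
  { width: 200, height: 200 },
  { width: 200, height: 200 },
  { width: 200, height: 200 },
]));
```

Multiply this by justify-content, align-items, grow/shrink factors, margins, and nesting, and the case for letting a battle-tested layout engine do it becomes obvious.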
https://swizec.com/blog/build-responsive-svg-layouts-with-reactsvgflexbox
CC-MAIN-2020-50
refinedweb
651
67.96
Sep 26, 2018 12:53 PM|daliborsoftad|LINK

In every classic ASP.NET project that references some .NET Standard library, every rebuild causes iisexpress.exe to crash.

Application: iisexpress.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an internal error in the .NET Runtime at IP 73A39FFD (739A0000) with exit code 80131506.

Faulting application name: iisexpress.exe, version: 10.0.14358.1000, time stamp: 0x574fc56b
Faulting module name: clr.dll, version: 4.7.3163.0, time stamp: 0x5b58fbbe
Exception code: 0xc0000005
Fault offset: 0x00099ffd
Faulting process id: 0x457c
Faulting application start time: 0x01d44f3dacea0183
Faulting application path: C:\Program Files (x86)\IIS Express\iisexpress.exe
Faulting module path: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll
Report Id: 053f0094-64f0-4aab-8396-48575a1feaed
Faulting package full name:
Faulting package-relative application ID:

Exception thrown at 0x73A39FFD (clr.dll) in iisexpress.exe: 0xC0000005: Access violation reading location 0x0000001C.

It is annoying that on every rebuild I must restart IIS Express.

Tool: Visual Studio Community 2017, Version 15.8.4
For example: public class Class1 { public string create() { return "createChanged"; } } After that try to run your MVC application without debugging again (Ctrl + F5). In my case this second application running with changes kill my iis express process. Also my net standard library using entity framework core to communicate with mssql database and bunch of other stuffs from net core DI, logging, configuration etc. Oct 01, 2018 01:58 AM|Brando ZWZ|LINK Hi daliborsoftad, According to your description, I have created a test demo on my side and run with crtl+F5. It also work well. Result: Could you please share the project on the github or the onedrivew for me to reproduce the issue on my local side? Best Regards, Brando Oct 01, 2018 11:08 AM|daliborsoftad|LINK Hi Brando, Never mind, It is a private project of my company so I am not allowed to share source code in public. I can only tell you that my asp.net mvc project using Umbraco as presentation layer, but all other layers (services, repositories, common utilities and helper libraries) are separated projects in netstandard. We are using net standard for other layers because in the future we are planning to switch presentation layer with some netcore CMS. Currently we cannot find any good one netcore CMS to replace Umbraco. This problem only happens in development when same instance of IISExpress try to reload old dlls with new one. This is not problem in production. Oct 02, 2018 06:09 AM|Brando ZWZ|LINK Hi daliborsoftad, I suggest you could try to use different VS version to try again. If this issue is still exists, I suggest you could use VS help tag to send feedback to VS team. Best Regards, Brando 5 replies Last post Oct 02, 2018 06:09 AM by Brando ZWZ
https://forums.asp.net/t/2147293.aspx?Referencing+net+standard+library+in+classic+asp+net+projects+net+framework+4+6+1+crashes+iisexpress+exe
CC-MAIN-2019-47
refinedweb
601
57.77
Bold fonts not working for many fixed-width fonts in linux

I'm on Arch Linux using Plasma 5, and noticed my bold fonts stopped working in Konsole. I investigated a bit and noticed this doesn't work:

#include <QApplication>
#include <QMainWindow>
#include <QPainter>

class FontTest : public QWidget
{
public:
    void paintEvent(QPaintEvent *)
    {
        QPainter p(this);
        p.setPen(Qt::white);

        QFont font("Hack");
        p.setFont(font);
        p.drawText(50, 50, "Not Bold");

        font = p.font();
        font.setBold(true);
        p.setFont(font);
        p.drawText(50, 80, "Bold");
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QMainWindow w;
    FontTest t;
    w.setCentralWidget(&t);
    w.setMinimumSize(200, 200);
    w.show();
    return a.exec();
}

Both times the font is not bold. With most sans/sans-serif fonts this is not a problem. I also see this with Liberation Mono, Nimbus Mono, and several others. The only fixed-width font that works is DejaVu Sans Mono for some reason. If I do a font.setStyleName("Bold") then it works as expected. I imagine there's some issue with the font matching. I wanted to see if anyone had any ideas before I decided to make a bug report. My fontconfig is vanilla, with default conf files. I only noticed this problem within the last week. Right now I'm running Qt 5.8.0

EDIT: tested with the latest dev branch and the problem is still there. Filed a bug report:

- ambershark Moderators

I'm betting you screwed up your fonts somehow on arch. I'm running the most recent Arch and that code you posted works fine. I did not test in KDE though, I just used xfce since I only had access to my server to test this on right now. I can test on my arch desktop tomorrow if you think it's an issue though. Happy to compare environments if you have any questions.

Edit: Tested on Qt 5.8.0.
Again I can help you compare environments but this probably has to do with your font installs or more than likely your theme or kde settings. Quick test, you can try logging out to a console, move your .config directory, then login to kde again and test it see if it changes. If it does you know it's a kde setting. Then you can just rm -rf your new .config and move your old one back to get your settings back. Here is a screenshot showing the result (I changed your color to black so it would show up better on my light theme): Well I'll be darned...that indeed works. I tried removing my fontconfig files before, but that didn't change anything. Do you have any ideas as to which config file would screw with this? edit: Turned out to be something in kdeglobals I deleted all the font settings there and it fixed the issue....strange. - ambershark Moderators @mikelui Yea I've had KDE configs get screwed up so that's something I do to test whenever I think something might be KDE's fault. :) Glad you found the offending config file. That's always the hard part if you don't know the kde internals (and most of us don't). @ambershark It turned up again in Kate: I suppose there's some problem when KDE sets the font as Regular instead of just the general font family. Maybe the qFontDatabase ends up looking for a bold version of "Hack Regular" instead of a bold version of "Hack", but I couldn't say for certain. That was the same line in the kdeglobals that I deleted to get bold working in that basic Qt application in the OP Thanks for giving me a starting point of where to look! - ambershark Moderators @mikelui Yea in reading that post you linked it looks like it's the "Regular" part that is causing the issues. Which makes sense as Regular and Bold are mutually exclusive. But yea, at least you know the problem is on your system with a specific font and not something you have to worry about for your software when it's released. 
:) @Rendom I'm not sure exactly what it was, I just deleted the whole file and let its defaults get recreated. For further reference, see this KDE bug report: The issue seems to intermittently pop back up for me, on ArchLinux
https://forum.qt.io/topic/78036/bold-fonts-not-working-for-many-fixed-width-fonts-in-linux
CC-MAIN-2018-17
refinedweb
757
74.29
#include <basisCurves.h> BasisCurves are a batched curve representation analogous to the classic RIB definition via Basis and Curves statements. BasisCurves are often used to render dense aggregate geometry like hair or grass. A 'matrix' and 'vstep' associated with the basis are used to interpolate the vertices of a cubic BasisCurves. (The basis attribute is unused for linear BasisCurves.) A single prim may have many curves whose count is determined implicitly by the length of the curveVertexCounts vector. Each individual curve is composed of one or more segments. Each segment is defined by four vertices for cubic curves and two vertices for linear curves. See the next section for more information on how to map curve vertex counts to segment counts. Interpolating a curve requires knowing how to decompose it into its individual segments. The segments of a cubic curve are determined by the vertex count, the wrap (periodicity), and the vstep of the basis. For linear curves, the basis token is ignored and only the vertex count and wrap are needed. The first segment of a cubic (nonperiodic) curve is always defined by its first four points. The vstep is the increment used to determine what vertex indices define the next segment. For a two segment (nonperiodic) bspline basis curve (vstep = 1), the first segment will be defined by interpolating vertices [0, 1, 2, 3] and the second segment will be defined by [1, 2, 3, 4]. For a two segment bezier basis curve (vstep = 3), the first segment will be defined by interpolating vertices [0, 1, 2, 3] and the second segment will be defined by [3, 4, 5, 6]. If the vstep is not one, then you must take special care to make sure that the number of cvs properly divides by your vstep. (The indices described are relative to the initial vertex index for a batched curve.) For periodic curves, at least one of the curve's initial vertices are repeated to close the curve. For cubic curves, the number of vertices repeated is '4 - vstep'. 
For linear curves, only one vertex is repeated to close the loop.

Pinned curves are a special case of nonperiodic curves that only affects the behavior of cubic Bspline and Catmull-Rom curves. To evaluate or render pinned curves, a client must effectively add 'phantom points' at the beginning and end of every curve in a batch. These phantom points are injected to ensure that the interpolated curve begins at P[0] and ends at P[n-1]. For a curve with initial point P[0] and last point P[n-1], the phantom points are defined as:

P[-1] = 2 * P[0] - P[1]
P[n]  = 2 * P[n-1] - P[n-2]

Pinned cubic curves will (usually) have to be unpacked into the standard nonperiodic representation before rendering. This unpacking can add some additional overhead. However, using pinned curves reduces the amount of data recorded in a scene and (more importantly) better records the authors' intent for interchange.

Linear curve segments are defined by two vertices. A two segment linear curve's first segment would be defined by interpolating vertices [0, 1]. The second segment would be defined by vertices [1, 2]. (Again, for a batched curve, indices are relative to the initial vertex index.) When validating curve topology, each renderable entry in the curveVertexCounts vector must pass this check.

Linear interpolation is always used on curves of type linear. For a parameter 't' with domain [0, 1], the curve is defined by the equation P0 * (1-t) + P1 * t. t at 0 describes the first point and t at 1 describes the end point.

For cubic curves, primvar data can be either interpolated cubically between vertices or linearly across segments. The corresponding token for cubic interpolation is 'vertex' and for linear interpolation is 'varying'. Per vertex data should be the same size as the number of vertices in your curve. Segment varying data is dependent on the wrap (periodicity) and number of segments in your curve. For linear curves, varying and vertex data would be interpolated the same way.
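Returning to the pinned-curve construction above, the two phantom-point formulas translate directly into code. This is a minimal illustrative sketch (not the USD implementation); points are represented here as plain (x, y) tuples:

```python
def add_phantom_points(P):
    """Extend the control points of a pinned cubic curve with the phantom
    points P[-1] = 2*P[0] - P[1] and P[n] = 2*P[n-1] - P[n-2], so that the
    interpolated curve begins at P[0] and ends at P[n-1]."""
    (x0, y0), (x1, y1) = P[0], P[1]        # first two authored points
    (xa, ya), (xb, yb) = P[-2], P[-1]      # last two authored points
    first = (2 * x0 - x1, 2 * y0 - y1)     # phantom point before the curve
    last = (2 * xb - xa, 2 * yb - ya)      # phantom point after the curve
    return [first] + list(P) + [last]
```

A renderer that only understands the standard nonperiodic representation would run each pinned curve's points through a function like this before evaluation.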
By convention varying is the preferred interpolation because of the association of varying with linear interpolation.

To convert an entry in the curveVertexCounts vector into a segment count for an individual curve, apply these rules. Sum up all the results in order to compute how many total segments all curves have. The following tables describe the expected segment count for the 'i'th curve in a curve batch as well as the entire batch. Python syntax like '[:]' (to describe all members of an array) and 'len(...)' (to describe the length of an array) are used.

The following table describes the expected size of varying (linearly interpolated) data, derived from the segment counts computed above. Both curve types additionally define 'constant' interpolation for the entire prim and 'uniform' interpolation as per curve data.

As an example of deriving per curve segment and varying primvar data counts from the wrap, type, basis, and curveVertexCount, the following table is provided.

The strictest definition of a curve as an infinitely thin wire is not particularly useful for describing production scenes. The additional widths and normals attributes can be used to describe cylindrical tubes and/or flat oriented ribbons. Curves with only widths defined are imaged as tubes with radius 'width / 2'. Curves with both widths and normals are imaged as ribbons oriented in the direction of the interpolated normal vectors.

While not technically UsdGeomPrimvars, widths and normals also have interpolation metadata. It's common for authored widths to have constant, varying, or vertex interpolation (see UsdGeomCurves::GetWidthsInterpolation()). It's common for authored normals to have varying interpolation (see UsdGeomPointBased::GetNormalsInterpolation()).

The file used to generate these curves can be found in pxr/extras/examples/usdGeomExamples/basisCurves.usda. It's provided as a reference on how to properly image both tubes and ribbons.
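The per-curve segment rules can be sketched in a few lines of Python. Note the formulas below are inferred from the worked examples earlier in this page (bspline with vstep 1, bezier with vstep 3), not copied from the USD implementation, so treat them as an illustration rather than a reference:

```python
def segment_count(n, curve_type, wrap='nonperiodic', vstep=1):
    """Number of segments for one curve with n vertices.

    curve_type is 'linear' or 'cubic'; vstep is ignored for linear curves.
    Inferred rules:
      linear  nonperiodic: n - 1            periodic: n
      cubic   nonperiodic: (n - 4) // vstep + 1
      cubic   periodic:    n // vstep
    """
    if curve_type == 'linear':
        return n if wrap == 'periodic' else n - 1
    if wrap == 'periodic':
        return n // vstep
    return (n - 4) // vstep + 1


def batch_segments(curve_vertex_counts, curve_type, wrap='nonperiodic', vstep=1):
    # Sum the per-curve results to get the segment total for the whole batch.
    return sum(segment_count(n, curve_type, wrap, vstep)
               for n in curve_vertex_counts)
```

For example, a nonperiodic bspline curve (vstep 1) with 5 cvs and a nonperiodic bezier curve (vstep 3) with 7 cvs each decompose into two segments, matching the index walkthroughs above.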
The first row of curves are linear; the second are cubic bezier. (We aim in future releases of HdSt to fix the discontinuity seen with broken tangents to better match offline renderers like RenderMan.) The yellow and violet cubic curves represent cubic vertex width interpolation for which there is no equivalent for linear curves.

For any described attribute Fallback Value or Allowed Values below that are text/tokens, the actual token is published and defined in UsdGeomTokens. So to set an attribute to the value "rightHanded", use UsdGeomTokens->rightHanded as the value.

Definition at line 263 of file basisCurves.h.

Computes interpolation token for n. If this returns an empty token and info was non-NULL, it'll contain the expected value for each token. The topology is determined using timeCode.

Definition at line 452 of file basisCurves.h.

Construct a UsdGeomBasisCurves on UsdPrim prim. Equivalent to UsdGeomBasisCurves::Get(prim.GetStage(), prim.GetPath()) for a valid prim, but will not immediately throw an error for an invalid prim.

Definition at line 275 of file basisCurves.h.

Construct a UsdGeomBasisCurves on the prim held by schemaObj. Should be preferred over UsdGeomBasisCurves(schemaObj.GetPrim()), as it preserves SchemaBase state.

Definition at line 283 of file basisCurves.h.

Destructor.

Returns the kind of schema this class belongs to. Reimplemented from UsdSchemaBase.

Computes interpolation token for n. If this returns an empty token and info was non-NULL, it'll contain the expected value for each token. The topology is determined using timeCode.

Computes the expected size for data with "uniform" interpolation. If you're trying to determine what interpolation to use, it is more efficient to use ComputeInterpolationForSize.

Computes the expected size for data with "varying" interpolation.
If you're trying to determine what interpolation to use, it is more efficient to use ComputeInterpolationForSize.

Computes the expected size for data with "vertex" interpolation. If you're trying to determine what interpolation to use, it is more efficient to use ComputeInterpolationForSize.

See GetBasisTypeWrapAttr(), and also Usd_Create_Or_Get_Property for when to use Get vs Create. If specified, author defaultValue as the attribute's default, sparsely (when it makes sense to do so) if writeSparsely is true - the default for writeSparsely is false.

Attempt to ensure a UsdPrim adhering to this schema at path is defined (according to UsdPrim::IsDefined()) on this stage. If a prim adhering to this schema at path is already defined on this stage, return that prim. Otherwise author an SdfPrimSpec with specifier == SdfSpecifierDef and this schema's prim type name for the prim at path at the current EditTarget. Author SdfPrimSpecs with specifier == SdfSpecifierDef and empty typeName at the current EditTarget for any nonexistent, or existing but not Defined, ancestors.

The given path must be an absolute prim path that does not contain any variant selections. If it is impossible to author any of the necessary PrimSpecs (for example, in case path cannot map to the current UsdEditTarget's namespace), issue an error and return an invalid UsdPrim. Note that this method may return a defined prim whose typeName does not specify this schema class, in case a stronger typeName opinion overrides the opinion at the current EditTarget.

Return a UsdGeomBasisCurves holding the prim adhering to this schema at path on stage. If no prim exists at path on stage, or if the prim at that path does not adhere to this schema, return an invalid schema object. This is shorthand for the following:

The basis specifies the vstep and matrix used for cubic interpolation.

Return a vector of names of all pre-declared attributes for this schema class and all its ancestor classes.
Does not include attributes that may be authored by custom/extended methods of the schemas involved.

Linear curves interpolate linearly between two vertices. Cubic curves use a basis matrix with four vertices to interpolate a segment. If wrap is set to periodic, the curve when rendered will repeat the initial vertices (dependent on the vstep) to close the curve. If wrap is set to 'pinned', phantom points may be created to ensure that the curve interpolation starts at P[0] and ends at P[n-1].

Definition at line 347 of file basisCurves.h.

Compile time constant representing what kind of schema this class is.

Definition at line 269 of file basisCurves.h.
https://www.sidefx.com/docs/hdk/class_usd_geom_basis_curves.html
CC-MAIN-2022-21
refinedweb
1,693
55.54
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- PUTTING IT ALL TOGETHER
- SUGAR FUNCTIONS
- PATH GENERATION
- CAVEATS
- BUGS
- AUTHOR

NAME

CatalystX::Routes - Sugar for declaring RESTful chained actions in Catalyst

VERSION

version 0.02

SYNOPSIS

    package MyApp::Controller::User;

    use Moose;
    use CatalystX::Routes;

    BEGIN { extends 'Catalyst::Controller'; }

    # /user/:user_id
    chain_point '_set_user'
        => chained '/'
        => path_part 'user'
        => capture_args 1
        => sub {
               my $self    = shift;
               my $c       = shift;
               my $user_id = shift;
               $c->stash()->{user} = ...;
           };

    # GET /user/:user_id
    get '' => chained('_set_user') => args 0 => sub { ... };

    # GET /user/foo
    get 'foo' => sub { ... };

    sub _post { ... }

    # POST /user/foo
    post 'foo' => \&_post;

    # PUT /root
    put '/root' => sub { ... };

    # /user/plain_old_catalyst
    sub plain_old_catalyst : Local { ... }

DESCRIPTION

WARNING: This module is still experimental. It works well, but the APIs may change without warning.

This module provides a sugar layer that allows controllers to declare chained RESTful actions. Under the hood, all the sugar declarations are turned into Chained subs. All chain end points are declared using one of get, get_html, post, put, or del. These will declare actions using the Catalyst::Action::REST::ForBrowsers action class from the Catalyst::Action::REST distribution.

PUTTING IT ALL TOGETHER

This module is merely sugar over Catalyst's built-in Chained dispatching and Catalyst::Action::REST. It helps to know how those two things work.

SUGAR FUNCTIONS

All of these functions will be exported into your controller class when you use CatalystX::Routes.

get ...

This declares a GET handler.

get_html

This declares a GET handler for browsers. Use this to generate a standard HTML page for browsers while still being able to generate some sort of RESTful data response for other clients. If a browser makes a GET request and no get_html action has been declared, a get action is used as a fallback.
See Catalyst::TraitFor::Request::REST::ForBrowsers for details on how "browser-ness" is determined.

post

This declares a POST handler.

put

This declares a PUT handler.

del

This declares a DELETE handler.

chain_point

This declares an intermediate chain point that should not be exposed as a public URI.

chained $path

This function takes a single argument, the previous chain point from which the action is chained.

args $number

This declares the number of arguments that this action expects. This should only be used for the end of a chain.

capture_args $number

The number of arguments to capture at this point in the chain. This should only be used for the beginning or middle parts of a chain.

path_part $path

The path part for this part of the chain. If you are declaring a chain end point with get, etc., then this isn't necessary. By default, the name passed to the initial sugar function will be converted to a path part. See below for details.

action_class_name $class

Use this to declare an action class. By default, this will be Catalyst::Action::REST::ForBrowsers for end points. For other parts of a chain, it simply won't be set.

PATH GENERATION

All of the end point functions (get, post, etc.) take a path as the first argument. By default, this will be used as the path_part for the chain. You can override this by explicitly calling path_part, in which case the name is essentially ignored (but still required). Note that it is legitimate to pass the empty string as the name for a chain's end point.

If the end point's name does not start with a slash, it will be prefixed with the controller's namespace. If you don't specify a chained value for an end point, then it will use the root URI, /, as the root of the chain. By default, no arguments are specified for a chain's end point, meaning it will accept any number of arguments.

CAVEATS

When adding subroutines for end points to your controller, a name is generated for each subroutine based on the chained path to the subroutine.
Some template-based views will automatically pick a template based on the subroutine's name if you don't specify one explicitly. This won't work very well with the bizarro names that this module generates, so you are strongly encouraged to specify a template name explicitly.

BUGS

Please report any bugs or feature requests to bug-catalystx-routes
https://metacpan.org/pod/CatalystX::Routes
CC-MAIN-2016-36
refinedweb
697
66.44
So I'm programming a soccer crossbar challenge game (this is my first game ever) and I added a script to the crossbar that looks like this:

using UnityEngine;
using System.Collections;
using UnityEngine.UI;

public class crossbarscript : MonoBehaviour {

    public AudioSource ping;
    public static int score;
    public Rigidbody rb;
    public Text text;

    // Use this for initialization
    void Start () {
        ping = GetComponent<AudioSource>();
        rb = GetComponent<Rigidbody>();
        score = 0;
    }

    // Called when another collider touches the crossbar
    public void OnCollisionEnter (Collision col) {
        if (col.gameObject.name == "Ball") {
            text = GetComponent<Text>();
            text.text = "Score: " + score; // This is the line the error is pointing at
            ping.Play();
            rb.freezeRotation = true;
        }
    }
}

The line text = GetComponent<Text>(); is unnecessary and is causing your problem. The GameObject you are running this script on does not contain a Text component, so GetComponent<Text>() returns null, which makes text.text fail on the next line.

You should not need to call GetComponent<Text>() in your collision code. You already have a public text variable; it should likely have been set in the designer by dragging the Text object onto the script. Once set there, you don't need to set it in your code.

See "3.3: Displaying the Score and Text" from the Roll-A-Ball tutorial for an example of how you should be using Text in your code to display a score.
https://codedump.io/share/AYUClScYJHvP/1/score-counter-not-working-on-unity3d
CC-MAIN-2017-09
refinedweb
224
63.9
In my previous post I had been fiddling with the html helper used in Razor views. Since then our custom html-extensions have been doing great things for our project. To mention some:

- Standardizing the look and feel. It is far more consistent and maintainable to set attributes (including a css class) and event handlers in one centralized place.
- Simplifying the script. Often a part of the logic the script will follow is already known server side. Instead of writing everything out in javascript, rendering the intended statements leads to a leaner client. There is an example in my previous post on building post-data.
- Decoupling the libraries used. At the moment we are using the Telerik MVC suite. In my previous post I described how our html helpers build standardized Telerik components for our views. In the not too far future we want to switch to the Telerik Kendo suite. Having wrapped up the library dependency in our Html helper will make this switch a lot easier to implement.

What has evolved is the way we work with the model. In MVC the implementation of the controller and the view is clear. When it comes to the implementation of the model there are almost as many different styles of implementation as there are programmers. In general the model can bring any data to the view you can imagine. Not only does the source of the data vary, from plain sql to a C# property, the use of the model's data varies as well. It can be a customer's name from the database. Or it can be the string representation of some html attribute needed for a fancy picker. Here data and presentation start to get mixed up.

Our extensions needed information for the Html-Id. The original Html helper had a custom constructor to get that specific data from the model into the helper. Which required us to create our own html helper when starting the view and use that one instead of the standard @html, as seen in the eposHtml in the previous story.
It would be cleaner if our extension methods could be satisfied with the default html helper. It would also be cleaner to keep a better separation between 'real' data and presentation.

The model is available in every HtmlHelper extension method:

public static PostDataBuilder<TModel> PostData<TModel>(this HtmlHelper<TModel> htmlHelper)
{
    return new PostDataBuilder<TModel>(Id(htmlHelper.ViewData.Model));
}

It's a property of the ViewData. In our case we needed something to give the control a unique Id. The Id method builds that Id. Previously we passed the Id-base in the constructor, which led to the custom helper. A far more elegant solution is using a very basic IOC-DI pattern, as implemented by the Id method:

private static string Id(object model)
{
    var complex = model as IProvideCompositeId;
    if (complex != null) return complex.CompositeId;

    var simple = model as IProvideId;
    return simple == null
        ? ""
        : simple.Id < 0 ? String.Format("N{0}", Math.Abs(simple.Id)) : simple.Id.ToString();
}

The method queries the model first for the IProvideCompositeId interface; in case the model does not implement that, it is queried for the IProvideId interface. The result is a string which can be safely used in an Html Id. (A negative number would lead to a '-' in the string, which is not accepted in an Html Id.)

These interfaces are very straightforward:

public interface IProvideCompositeId
{
    string CompositeId { get; }
}

public interface IProvideId
{
    int Id { get; }
}

In case the model is going to be used in a view requiring unique Ids, the model has to implement one of these interfaces.
public class FactuurDefinitie : IProvideCompositeId
{
    public readonly int IdTraject;
    public readonly int UZOVI;
    public readonly bool Verzekerd;

    public FactuurDefinitie(int idTraject, int uzovi, bool verzekerd)
    {
        // The usual stuff
    }

    public string CompositeId
    {
        get { return String.Format("{0}{1}{2}", IdTraject, UZOVI, Verzekerd); }
    }
}

Working this way:

- We can use our custom html extensions in the default html helper
- Specific data from the model is available inside our extensions
- The model and the view do not get entangled

The code is no big deal, I know. But the model is something whose horizons are still not in sight.
http://codebetter.com/petervanooijen/2013/05/14/model-and-beyond/
CC-MAIN-2017-04
refinedweb
691
64.71
Hi everyone, I can already make the movement grid via pathfinding and avoid obstacles for the player. Right now, I want to make the AI move itself based on how many movement grids and action points it has (just like the player does), but I don't know how to do it. At the moment, I am only able to make the character move to a position (but it does not follow the pathfinding; this character is supposed to be the AI). I am stuck on this and have tried to solve the problem, but couldn't. Could you guys help me out? Thanks.

Here is the code that I mentioned above (it can make the character, which is supposed to be the AI, move to a position, but it does not follow the pathfinding):

EDIT: The problem that I couldn't figure out is how to make the AI act as the player does. When you see the video that I uploaded to YouTube (the link has been given in the question below), you will see that on the first and second turn I control the player to move, and there is a GUI on the left side of the screen (move, attack, magicattack and endturn). When the player's turn ends, the AI's turn becomes active (there is no GUI, and the AI moves itself to the point destination as described in the code below, but it does not follow the pathfinding). My question is: how do I solve this?
Thank you very much

using UnityEngine;
using System.Collections;

public class AIPlayer : Player {

    void Awake() {
        moveDestination = transform.position;
    }

    // Use this for initialization
    void Start() {
        ColorChanger();
    }

    // Update is called once per frame
    void Update() {
    }

    public override void TurnUpdate() {
        if (GameManager.instance.currentPlayerIndex == 5) {
            if (Vector3.Distance(moveDestination, transform.position) > 0.1f) {
                transform.position += (moveDestination - transform.position).normalized * moveSpeed * Time.deltaTime;

                if (Vector3.Distance(moveDestination, transform.position) <= 0.1f) {
                    transform.position = moveDestination;
                    actionPoints--;
                }
            } else {
                moveDestination = new Vector3(2 - Mathf.Floor(GameManager.instance.mapSize / 2), 1.5f, -2 + Mathf.Floor(GameManager.instance.mapSize / 2));
                GameManager.instance.NextTurn();
            }
        }
        base.TurnUpdate();
    }

    public override void TurnOnGUI() {
    }

    public override void ColorChanger() {
        base.ColorChanger();
    }
}

Why are you asking on the construct forum how to use unity? You should ask on their forums instead. They'll be able to help you much better.
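The original project is in C#/Unity, but the core idea the question is after — make the AI walk the computed path one grid cell at a time, spending an action point per step, exactly as the player does — is language-agnostic. Below is a minimal Python sketch under assumed names (grid of 0/1 cells, (row, col) tuples, an `action_points` budget); it is an illustration of the approach, not the asker's codebase:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on a 2D grid; 0 = walkable, 1 = obstacle.
    Returns the list of cells from start to goal (inclusive), or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

def ai_take_turn(grid, pos, goal, action_points):
    """Advance along the path one cell per action point, like the player.
    Returns the new position and the remaining action points."""
    path = bfs_path(grid, pos, goal)
    if not path:
        return pos, action_points
    for step in path[1:]:
        if action_points == 0:
            break
        pos = step
        action_points -= 1
    return pos, action_points
```

The key difference from the posted C# is that the AI moves to the next cell of the computed path rather than straight toward `moveDestination`, so it respects obstacles and its action-point budget.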
https://www.construct.net/forum/general/open-topic-33/stuck-when-want-to-make-the-ai-70084
CC-MAIN-2018-26
refinedweb
395
65.93
Game.Tournament

Contents

Description

Tournament construction and maintenance including competition based structures and helpers.

This library is intended to be imported qualified as it exports functions that clash with Prelude.

import Game.Tournament as T

The Tournament structure contains a Map of GameId -> Game for its internal representation and the GameId keys are the location in the Tournament.

Duel tournaments are based on the theory from. By using the seeding definitions listed there, there is almost only one way to generate a tournament, and the ambivalence appears only in Double elimination. We have additionally chosen that brackets should converge by having the losers bracket move upwards. This is not necessary, but improves the visual layout when presented in a standard way.

FFA tournaments use a collection of sensible assumptions on how to optimally split n people into s groups while minimizing the sum of seeds difference between groups for fairness. At the end of each round, groups are recalculated from the scores of the winners, and new groups are created for the next round.

= Int

Computes both the upper and lower player seeds for a duel elimination match. The first argument is the power of the tournament:

p :: 2^num_players rounding up to nearest power of 2

The second parameter is the game number i (in round one). The pair (p,i) must obey p > 0 && 0 < i <= 2^(p-1).

The location of a game is written as to simulate the classical shorthand WBR2, but includes additionally the game number for complete positional uniqueness. A Single elimination final will have the unique identifier

let wbf = GameId WB p 1 where 'p == count t WB'.

Constructors

Instances

data Elimination Source

Duel Tournament option. Single elimination is a standard power of 2 tournament tree, whereas Double elimination grants each loser a second chance in the lower bracket.

Constructors

Instances

The bracket location of a game.
For Duel Single or FFA, most matches exist in the winners bracket (WB), with the exception of the bronze final and possible crossover matches. Duel Double or FFA with crossovers will have extra matches in the loser bracket (LB).

Score a match in a tournament and propagate winners/losers. If the match is not scorable, the Tournament will pass through unchanged. For a Duel tournament, winners (and losers if Double) are propagated immediately, whereas FFA tournaments calculate winners at the end of the round (when all games have been played).

There is no limitation on re-scoring old games, so care must be taken to not update ones too far back and leave the tournament in an inconsistent state. When scoring games more than one round behind the corresponding active round, the locations to which these propagate must be updated manually. To prevent yourself from never scoring older matches, only score games for which safeScorable returns True. Though this has not been implemented yet.

gid = (GameId WB 2 1)
tUpdated = if safeScorable gid then score gid [1,0] t else t

TODO: strictify this function
TODO: better to do a scoreSafe? call this scoreUnsafe

count :: Tournament -> Bracket -> Int

Count the number of rounds in a given bracket in a Tournament.

TODO: rename to length once it has been less awkwardly moved into an internal part

scorable :: GameId -> Tournament -> Bool

keys :: Tournament -> [GameId]

Get the list of all GameIds in a Tournament. This list is also ordered by GameId's Ord, and in fact, if the corresponding games were scored in this order, the tournament would finish, and scorable would only return False for a few special walkover games.

TODO: if introducing crossovers, this would not be true for LB crossovers => need to place them in WB in an 'interim round'
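The seeds computation described above (upper and lower player seeds for round-one game i of a 2^p single elimination bracket) follows the standard mirrored seeding construction. A Python sketch of that textbook construction is below; `bracket_order` is a helper invented for this illustration, and the library's actual Haskell implementation may order games within the round differently:

```python
def bracket_order(p):
    """Seed order for round one of a 2^p single elimination bracket,
    built by the usual mirroring recurrence: at level r, each seed s
    is paired with 2^r + 1 - s."""
    order = [1]
    for r in range(1, p + 1):
        order = [s for seed in order for s in (seed, 2 ** r + 1 - seed)]
    return order

def seeds(p, i):
    """Upper and lower seed for game i (1-based) of round one.
    Requires p > 0 and 0 < i <= 2^(p-1)."""
    order = bracket_order(p)
    a, b = order[2 * (i - 1)], order[2 * i - 1]
    return (min(a, b), max(a, b))
```

A property worth noting: every round-one pairing sums to 2^p + 1, which is what keeps the top seeds apart until the late rounds.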
http://hackage.haskell.org/package/Tournament-0.0.1/docs/Game-Tournament.html
CC-MAIN-2017-04
refinedweb
612
51.78
public class StaticTest {
    /* Below: Have the constructor as static */
    public static StaticTest() {
        System.out.println("Static Constructor of the class");
    }

    public static void main(String args[]) {
        /* Below: I'm trying to create an object of the class,
           which would in turn call the constructor */
        StaticTest obj = new StaticTest();
    }
}

Output: You would get the below error message when you try to compile the above Java code:

"modifier static not allowed here"

Why doesn't Java support static constructors?

It's actually pretty simple to understand - everything that is marked static belongs to the class only. For example, a static method cannot be inherited in the sub class because it belongs to the class in which it has been declared. Refer to the static keyword.

Let's get back to constructors: since each parent class constructor is called by its subclass during creation of the subclass's object, marking a constructor static would make it inaccessible to the subclass, because it would belong to the declaring class only. This would violate the whole purpose of the inheritance concept, and that is the reason why a constructor cannot be static.

Let's understand this with the help of an example -

public class StaticDemo {
    public StaticDemo() {
        /* Constructor of this class */
        System.out.println("StaticDemo");
    }
}

public class StaticDemoChild extends StaticDemo {
    public StaticDemoChild() {
        /* By default this() is hidden here */
        System.out.println("StaticDemoChild");
    }

    public void display() {
        System.out.println("Just a method of child class");
    }

    public static void main(String args[]) {
        StaticDemoChild obj = new StaticDemoChild();
        obj.display();
    }
}

Output:

StaticDemo
StaticDemoChild
Just a method of child class

Did you notice? We just created an object of the child class, and as a result it first called the constructor of the parent class and then the constructor of its own class.
It happened because object creation calls the constructor implicitly, and every child class constructor by default has super() as its first statement, which calls its parent class's constructor. The statement super() is used to call the parent class (base class) constructor.

The above explanation is the reason why a constructor cannot be static - because if we made them static, they could not be called from the child class, and thus an object of the child class couldn't be created.

Static Constructor Alternative - Static Blocks

Java has static blocks which can be treated as a static constructor. Let's consider the below program -

public class StaticDemo {
    static {
        System.out.println("static block of parent class");
    }
}

public class StaticDemoChild extends StaticDemo {
    static {
        System.out.println("static block of child class");
    }

    public void display() {
        System.out.println("Just a method of child class");
    }

    public static void main(String args[]) {
        StaticDemoChild obj = new StaticDemoChild();
        obj.display();
    }
}

Output:

static block of parent class
static block of child class
Just a method of child class

In the above example we have used static blocks in both the classes, which worked perfectly. We cannot use a static constructor, so this is a good alternative if we want to perform a static task during object creation.

Hi, you said a static method cannot be inherited in the sub class. Would you please give me an example?

Hi Pravat, you can overload a static method but you can't override a static method. Actually you can rewrite a static method in subclasses, but this is not called an override because override should be related to polymorphism and dynamic binding. The static method belongs to the class, so it has nothing to do with those concepts. The rewrite of a static method is more like shadowing. A static member can be inherited but cannot be overridden by a subclass.
class Parent {
    static int i = 10;

    public static void method() {
        System.out.println(i);
    }
}

class Child extends Parent {
    public void abc() {
        method(); // child class is inheriting the static method
    }

    public static void main(String args[]) {
        Child m = new Child();
        m.abc();
    }
}

Hi, you have mentioned "It's actually pretty simple to understand - static method cannot be inherited in the sub class." Shouldn't it be "static constructor cannot be inherited in the subclass"? Please correct me if I am wrong.

That is just an example to demonstrate the purpose of the static keyword. I have edited the post to make it more clear. I hope it's clear now. Let me know if you have any other questions.

Very useful and abstract example.. thanks for this explanation
However I did not mention anywhere in the article that constructors are inherited. I just pointed out that they are being invoked during object creation of sub class, that’s all. Also, constructors are not methods because they do not have a return type. I think you got confused because of an ambiguous statement mentioned in the post. I just added more details to the post to make it easy to understand. I hope its clear now. Hi Chaitanya. I just wanted to know what exactly is the use of a static block? Thanks in advance. Hi Arkya, If you want to make something happen before your object has created. Say you want to read a file and then decide which object you want to create. it happens in real time. In that case it is useful… You probably want to underline that static block is only called only before the first instance of the class is created. Actually one can have static constructor by marking their class final TO add one more point that constructor is static create static method in class and create object of that class in that staic method ie class myclass{ myclass(){ sysout(“constructor of class”) } static myclass getObject{ return new myclass(); } } Think how we are able to access constructor from static method give probably you have a mistake in comment to code /*By default this() is hidden here */ here should be “super” not “this”. if cant mark constructor as static, why can we mark a method as a static and inherit in child class Constructor definition should not be static because constructor will be called each and every time when object is created. If you made constructor as static then the constructor will be called before object creation same like main method.
http://beginnersbook.com/2013/05/static-constructor/
I am trying to fetch text data from a website, but this code raises an error. Please let me know where the error is.

import requests
from bs4 import BeautifulSoup

def getportions(soup):
    for p in soup.find_all("p", {"class": ""}):
        yield p.text

def readpage(address):
    page = requests.get(address)
    soup = BeautifulSoup(page.text, "html.parser")
    output_text = ''
    for s in getportions(soup):
        output_text += s.encode("utf8")
        output_text += "\n"
    print (output_text)
    print ("End of article")
    fp = open("content.txt", "w")
    fp.write(output_text)

if __name__ == "__main__":
    readpage("")

The error is raised at output_text += s.encode("utf8"):

TypeError: Can't convert 'bytes' object to str implicitly

If you use Python 3, all strings are natively unicode, and you can specify the encoding when opening a file. Your code could become:

def readpage(address):
    ...
    output_text = ''
    for s in getportions(soup):
        output_text += s
        output_text += "\n"
    print (output_text)
    print ("End of article")
    fp = open("content.txt", "w", encoding='utf8')
    fp.write(output_text)

If you simply want to sanitize the text by replacing all non-ascii characters with a ?, open the file that way:

fp = open("content.txt", "w", encoding='ascii', errors='replace')
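The root cause can be reproduced without any web request: in Python 3, concatenating bytes onto a str raises exactly this TypeError, and keeping everything as str (letting open() handle the encoding) avoids it. A minimal standalone illustration:

```python
s = "café"                # str (unicode) in Python 3
chunk = s.encode("utf8")  # bytes

out = ""
try:
    out += chunk          # str += bytes raises TypeError, as in the traceback above
except TypeError:
    print("str += bytes raises TypeError")

out += s                  # str += str works fine
print(out)                # café
```

This is why the fix is to drop the .encode() call inside the loop and encode only once, at file-write time, via the encoding= argument of open().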
https://codedump.io/share/VSQZLXouGdoV/1/error-when-encoding-utf-8
A bit hacky solution in Creative category for Striped Words by qria

import re
checkio=lambda t:sum(any(all('@'

Oct. 9, 2013

qria on Oct. 9, 2013, 4:34 a.m.

Some explanation:

'@'<c

evaluates to False if c is a number.

i&1

means i%2 (I don't know why I wrote it like this... i%2 gives the same result...)

j^, (c in 'aeiouyAEIOUY')^ just toggles the i&1 if it's True.

I wrote 'aeiouyAEIOUY' because it's shorter than writing '.upper()'.

Everything else should be relatively straightforward.

Also, I found an interesting thing while doing this:

'a'in 'aeiou' == True

evaluates to False. I have no idea why...

qria on Oct. 9, 2013, 4:47 a.m.

>>> False == False == False
True
>>> False == False == True
False

WHAT??? Python!! I TRUSTED YOU!!

qria on Oct. 9, 2013, 4:53 a.m.

Ok, found out why python does that. Apparently python considers the == operator as a comparison, meaning that 'False == False == False' is evaluated as 'False == False and False == False'. Similarly, ('a' in 'aeiou' == True) is the same as ('a' in 'aeiou' and 'aeiou' == True).

bryukh on Oct. 10, 2013, 1:38 a.m.

Thanks. It was something new for me.

veky on Oct. 9, 2013, 4:08 p.m.

The above is a standard gotcha (in and == are the same priority, chained), but here I must say I'm surprised. What did you expect (trust:) Python to reply to these?

veky on Oct. 10, 2013, 4:17 a.m.

I now (probably) see what you mean. C has poisoned so many people's minds. We (at least should) intuitively know what a==b==c means. Especially when (I hope C hasn't poisoned you here) you do know what a < b < c means. Nobody in his right mind would argue it is (a < b) < c.

Of course, if you want (a==b)==c, just say so. And if you want a==(b==c) (it could be different:), say so. a==b==c is something else. [Even in C, if somebody wrote a==b==c without parentheses, I'd be pretty angry.:]

bryukh on Oct. 10, 2013, 4:30 a.m.

> C has poisoned so many people's minds.

+5. It was my first serious language :-)

Thanks for this explanation.

veky on Oct. 10, 2013, 4:46 a.m.

Mine was BASIC (and it is equally serious as C, I can argue that as long as you want :-P). And already then people said BASIC scars you for life. Maybe. But C scars you much, much deeper.

There are so many braindead decisions that made sense on the PDP-11 (where C was developed), and practically nowhere else, and now are propagated through myths and legends about how processors really work.

Also, C was made (not designed, just cobbled together of various pieces of B) for writing operating systems. It could be argued it is good for this job even today (especially if you write for some obscure machine). But how many operating systems have you written?

Ok, you're bryukh, you write everything :-), so maybe you wrote one or even two. But 99.99% of the time, you write something entirely inappropriate for C. Don't (this is an appeal to you as much as to everybody else). Use the right tool for the job.

bryukh on Oct. 10, 2013, 5:07 a.m.

Oh, I remember BASIC -- I learned it in school. But it was not serious.

> But how many operating systems have you written?

None :-) I didn't choose -- it was our teacher's decision at the institute. Later I chose C++.

bryukh on Oct. 10, 2013, 5:09 a.m.

> Use the right tool for the job.

I agree with you. For me now it's Python, Javascript and a little Java.

veky on Oct. 9, 2013, 4:10 p.m.

Cool. :-) Especially the three-way ^. :-)

nickie on Nov. 3, 2013, 10:20 p.m.

Oh my... It turns out you can make it even shorter if you try to take out the cryptic bitwise stuff:

import re
checkio=lambda t:sum(len(set((c in 'AEIOUY')==i%2 for i,c in enumerate(w.upper())))==1 for w in re.findall(r"\w\w+",t))

I'm finding this much easier to understand:

1. re.findall gives you all the words that contain only letters and more than one.
2. upper() converts each word to uppercase.

sum(
    len( set(
        (c in 'AEIOUY') == i%2 for i, c in enumerate(w.upper())
    ) ) == 1
    for w in re.findall(r"\w\w+", t)
)

qria on Nov. 4, 2013, 3:51 a.m.

Thanks for commenting :) I usually don't do much code golfing, but I think it's a fun game to play. I originally solved it with much more readability, but someone challenged me to make it shorter, so.... yeah. I have no idea why this is voted the highest.

By the way, you might want to add logic to check if a word contains a number. Since the question explicitly states that words including a number are not considered words, your solution fails the test at

assert checkio("1st 2a ab3er root rate") == 1 # second test of Extra2

just insert

c not in '0123456789'

or

'@'<c

and it'll work.

But replacing the complex any(all( ~ ) for j in (0, 1)) with the simple len(set( ))==1 was a cool idea, and you probably still beat me. So I like your solution :)

nickie on Nov. 4, 2013, 6:25 a.m.

You're right. I thought \w was only for alpha but it was alphanumeric. I'm fixing it here (still 9 characters shorter, if that says anything):

import re
checkio=lambda t:sum(len(set((c in 'AEIOUY')==i%2 for i,c in enumerate(w)))==1 for w in re.findall(r"\b[A-Z]{2,}\b",t.upper()))

Fixed the explanation too:

1. upper() converts all letters to uppercase.
2. re.findall gives you all the words that contain only letters and more than one. \b stands for the empty string at the beginning or the end of a word.

sum(
    len( set(
        (letter in 'AEIOUY') == index % 2 for index, letter in enumerate(word)
    ) ) == 1
    for word in re.findall(r"\b[A-Z]{2,}\b", text.upper())
)

marshall.zheng on Dec. 13, 2013, 2:26 p.m.

amazing!

PaulBrown on Dec. 19, 2013, 5:01 a.m.

Good "code golf" but not very readable.

dorothyhs on Jan. 19, 2014, 8:34 p.m.

You guys are too good at this. I will catch up.

Columpio on Feb. 9, 2014, 7:03 a.m.

Using re is a shitty trick! Can ya do it in python (not perl) way?

ForcePush on April 13, 2014, 4:08 p.m.

any(all())? wtf?
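The chaining behaviour discussed in this thread is easy to verify: in and == are both comparison operators of the same priority, so a in b == c is evaluated as (a in b) and (b == c), just as qria worked out. A quick demonstration:

```python
# Chained form: ('a' in 'aeiou') and ('aeiou' == True) -> True and False -> False
print('a' in 'aeiou' == True)    # False

# Parenthesised form: ('a' in 'aeiou') == True -> True == True -> True
print(('a' in 'aeiou') == True)  # True

# Same mechanism with ==: False == False == False chains to
# (False == False) and (False == False) -> True
print(False == False == False)   # True
print(False == False == True)    # False
```

The useful side of the same rule is the familiar a < b < c, which chains the way mathematicians read it.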
https://py.checkio.org/mission/striped-words/publications/qria/python-3/a-bit-hacky/share/90e0ab4aa9ae3411721cf92d1ed69b8b/
Last updated on September 30th, 2017 | In a previous tutorial, we learned how to CRUD objects from the Firebase database; now it's time to work with lists of data. In this guide, you'll learn how to read a list from Firebase and display it in your Ionic app, how to add items to that list, and how to remove items from the list.

Displaying a list of data from Firebase

To fetch a list from the database, you need to create a database reference to that list. To do that, go into the page you'll use and import Firebase:

import firebase from 'firebase';

Right before the constructor, create the database reference we'll use for this example, and the array that is going to hold our list to display in the HTML:

public items: Array<any> = [];
public itemRef: firebase.database.Reference = firebase.database().ref('/items');

constructor(){}

Now let's start by fetching the list to display in our HTML. For that we're going to use the ionViewDidLoad() lifecycle event (using the lifecycle events is better than initializing everything in the constructor; this will wait until the page is fully loaded to call Firebase).

ionViewDidLoad() {
  this.itemRef.on('value', itemSnapshot => {
    // Here we'll work with the list
  });
}

If you followed the objects tutorial, you'll notice that we're using the same function to fetch the list as we used to fetch the object; the main difference is what we do with that list now. Instead of just assigning the value like items = itemSnapshot.val(), we need to loop through the items and push them into the array:

ionViewDidLoad() {
  this.itemRef.on('value', itemSnapshot => {
    this.items = [];
    itemSnapshot.forEach( itemSnap => {
      this.items.push(itemSnap.val());
      return false;
    });
  });
}

See what I meant? We're going into each item from the snapshot, and then pushing that item's value to our array.
To display it in the HTML file you can do something like:

<ion-list>
  <ion-item *ngFor="let item of items">
    {{ item }}
  </ion-item>
</ion-list>

Or something like:

<ion-list>
  <ion-item *ngFor="let item of items">
    {{ item.propertyYouWantToAccess }}
  </ion-item>
</ion-list>

What *ngFor="let item of items" is doing here is looping through the items array; it creates a variable called item and then prints the value for each item inside the array.

Adding items to a Firebase list

To add more items to your list, we're going to create an addItem() function in our class. Let's imagine that the items we're storing are groceries we want to buy, so we'll store the name and the quantity we want:

addItem(name: string, quantity: number): void {
  // Add the item to the list
}

Now we're going to add the item to the same reference we declared before:

addItem(name: string, quantity: number): void {
  this.itemRef.push({ name, quantity });
}

The .push() function is doing two things under the hood:
- It creates a unique ID for the new item right under the /items node.
- It adds the new object under that unique ID.

Delete data from a Firebase list

To delete items from a list, we're going to use the same .remove() function we use to remove objects, with one difference: it needs to target the child with the ID of the item we're removing:

removeItem(itemId: string): void {
  this.itemRef.child(itemId).remove();
}

Be very careful: if you call .remove() on the list reference itself without going through the child ID, it will remove the entire list! Under the hood what the .remove() function is doing is going into the database reference and setting it to null, which deletes it from the list.
https://javebratt.com/firebase-list-ionic-2/
OOoPy 1.9

OOoPy: Modify OpenOffice.org documents in Python

OpenOffice.org (OOo) documents are ZIP archives containing several XML files. Therefore it is easy to inspect, create, or modify OOo documents. OOoPy is a library in Python for these tasks with OOo documents. To not reinvent the wheel, OOoPy uses an existing XML library, ElementTree by Fredrik Lundh. OOoPy is a thin wrapper around ElementTree using Python's ZipFile to read and write OOo documents.

In addition to being a wrapper for ElementTree, OOoPy contains a framework for applying XML transforms to OOo documents. Several Transforms for OOo documents exist, e.g., for changing OOo fields (OOo Insert-Fields menu) or using OOo fields for a mail merge application. Some other transformations for modifying OOo settings and meta information are also given as examples. This comes in handy in applications where calling native OOo is not an option, e.g., in server-side Web applications.

If the mailmerge transform doesn't work for your document: The OOo format is well documented but there are ordering constraints in the body of an OOo document. I've not yet figured out all the tags and their order in the OOo body. Individual elements in an OOo document (like e.g., frames, sections, tables) need to have their own unique names. After a mailmerge, there are duplicate names for some items. So far I'm renumbering only frames, sections, and tables. See the renumber objects at the end of ooopy/Transforms.py. So if you encounter missing parts of the mailmerged document, check if there are some renumberings missing or send me a bug report.

There is currently not much documentation except for a python doctest in OOoPy.py and Transformer.py and the command-line utilities.
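Since the opening point is that OOo documents are just ZIP archives of XML files, the core idea behind a wrapper like OOoPy can be sketched with nothing but the standard library. This toy example builds a minimal odt-like archive in memory and parses an XML member back out; the file names are real ODF conventions, but the tiny XML payload is simplified for illustration (real ODF content.xml is much richer):

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Build a minimal "document": a zip with a mimetype entry and one XML member.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("mimetype", "application/vnd.oasis.opendocument.text")
    zf.writestr("content.xml", "<body><p>Hello</p></body>")

# Read it back the way a wrapper like OOoPy conceptually does:
with zipfile.ZipFile(buf) as zf:
    mimetype = zf.read("mimetype").decode("ascii")
    tree = ET.fromstring(zf.read("content.xml"))

print(mimetype)             # application/vnd.oasis.opendocument.text
print(tree.find("p").text)  # Hello
```

A transform in the OOoPy sense is then just a function that rewrites this ElementTree before the archive is zipped up again.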
For running these tests, after installing ooopy (assuming here you installed using python into /usr/local):

cd /usr/local/share/ooopy
python run_doctest.py /usr/local/lib/python2.X/site-packages/ooopy/Transformer.py
python run_doctest.py /usr/local/lib/python2.X/site-packages/ooopy/OOoPy.py

Both should report no failed tests. For running the doctest on python2.3 with the metaclass trickery of autosuper, see the file run_doctest.py. For later versions of python the bug in doctest is already fixed.

Usage

There were some slight changes to the API when supporting the open document format introduced with OOo 2.0. See below if you get a traceback when upgrading from an old version. See the online documentation, e.g.:

% python
>>> from ooopy.OOoPy import OOoPy
>>> help (OOoPy)
>>> from ooopy.Transformer import Transformer
>>> help (Transformer)

Help, I'm getting an AssertionError traceback from Transformer, e.g.:

Traceback (most recent call last):
  File "./replace.py", line 17, in ?
    t = Transformer(Field_Replace(replace = replace_dictionary))
  File "/usr/local/lib/python2.4/site-packages/ooopy/Transformer.py", line 1226, in __init__
    assert (mimetype in mimetypes)
AssertionError

The API changed slightly when implementing handling of different versions of OOo files. Now the first parameter you pass to the Transformer constructor is the mimetype of the OpenOffice.org document you intend to transform. The mimetype can be fetched from another opened OOo document, e.g.:

ooo = OOoPy (infile = 'test.odt', outfile = 'test_out.odt')
t = Transformer(ooo.mimetype, ...

Usage of Command-Line Utilities

Ah, well, there are command-line utilities now:
- ooo_cat for concatenating several OOo files into one
- ooo_grep to do the equivalent of grep -l on OOo files – only runs on Unix-like operating systems, probably only with the GNU version of grep (it's a shell-script using ooo_as_text :-)
- ooo_prettyxml for pretty-printing the XML nodes of one of the XML files inside an OOo document.
Mainly useful for debugging.

All utilities take a --help option.

Resources

Project information and download from Sourceforge main page. You need at least version 2.3 of python. For using OOoPy with Python versions below 2.5, you need to download and install the ElementTree library by Fredrik Lundh. For documentation about the OOo XML file format, see the book by J. David Eisenberg called OASIS OpenDocument Essentials, which is under the GNU Free Documentation License and is also available in print.

Version 1.9: Add Picture Handling for Concatenation

Now ooo_cat supports pictures; thanks to Antonio Sánchez for reporting that this wasn't working.

- Add a list of filenames + contents to Transformer
- Update this file-list in Concatenate
- Add Manifest_Append transform to update META-INF/manifest.xml with added filenames
- Add hook in OOoPy for adding files
- Update tests
- Update ooo_cat to use new transform
- This is the first release after migration of the version control from Subversion to GIT

Version 1.8: Minor bugfixes

Distribute a missing file that is used in the doctest. Fix directory structure. Thanks to Michael Nagel for suggesting the change and reporting the bug.

- The file testenum.odt was missing from MANIFEST.in
- All OOo files and other files needed for testing are now in the subdirectory testfiles.
- All command line utilities are now in subdirectory bin.

Version 1.7: Minor feature additions

Add –newlines option to ooo_as_text: With this option the paragraphs in the office document are preserved in the text output. Fix assertion error with python2.7, thanks to Hans-Peter Jansen for the report. Several other small fixes for python2.7 vs. 2.6.
- add –newlines option to ooo_as_text
- fix assertion error with python2.7 reported by Hans-Peter Jansen
- fix several deprecation warnings with python2.7
- remove zip compression sizes from regression test: the compressor in python2.7 is better than the one in python2.6

Version 1.6: Minor bugfixes

Fix compression: when writing new XML-files these would be stored instead of compressed in the OOo zip-file, resulting in big documents. Thanks to Hans-Peter Jansen for the patch. Add copyright notice to command-line utils (SF Bug 2650042). Fix mailmerge for OOo 3.X lists (SF Bug 2949643).

- fix compression flag, patch by Hans-Peter Jansen
- add regression test to check for compression
- now release ooo_prettyxml – I've used this for testing for quite some time, may be useful to others
- Add copyright (LGPL) notice to command-line utilities, fixes SF Bug 2650042
- OOo 3.X adds xml:id tags to lists, we now renumber these in the mailmerge app., fixes SF Bug 2949643

Version 1.5: Minor feature enhancements

Add ooo_grep to search for OOo files containing a pattern. Thanks to Mathieu Chauvinc for reporting the problems with modified manifest.xml. Support python2.6, thanks to Erik Myllymaki for reporting and anonymous contributor(s) for confirming the bug.

- New shell-script ooo_grep (does the equivalent of grep -l on OOo files)
- On deletion of an OOoPy object close it explicitly (uses __del__)
- Ensure mimetype is the first element in the resulting archive, seems OOo is picky about this.
- When modifying the manifest the resulting .odt file could not be opened by OOo. So when modifying the manifest make sure the manifest namespace is named "manifest", not something auto-generated by ElementTree. I consider this a bug in OOo to require this. This now uses the _namespace_map of ElementTree and uses the same names as OOo for all namespaces. The META-INF/manifest.xml is now in the list of files to which Transforms can be applied.
- When modifying (or creating) archive members, we create the OOo archive as if it was a DOS system (type fat) and ensure we use the current date/time (UTC). This also fixes problems with file permissions on newer versions of pythons ZipFile.
- Fix for python2.6 behavior that __init__ of object may not take any arguments. Fixes SF Bug 2948617.
- Finally – since OOoPy is in production in some projects – change the development status to "Production/Stable".

Version 1.4: Minor bugfixes

Fix Doctest to hopefully run on windows. Thanks to Dani Budinova for testing thoroughly under windows.

- Open output-files in "wb" mode instead of "w" in doctest to not create corrupt OOo documents on windows.
- Use double quotes for arguments when calling system, single quotes don't seem to work on windows.
- Don't use redirection when calling system, use -i option for input file instead. Redirection seems to be a problem on windows.
- Explicitly call the python-interpreter, running a script directly is not supported on windows.

Version 1.3: Minor bugfixes

Regression-test failed because some files were not distributed. Fixes SF Bugs 1970389 and 1972900.

- Fix MANIFEST.in to include all files needed for regression test (doctest).

Version 1.2: Major feature enhancements

Add ooo_fieldreplace, ooo_cat, ooo_mailmerge command-line utilities. Fix ooo_as_text to allow specification of output-file. Note that handling of non-seekable input/output (pipes) for command-line utils will work only starting with python2.5. Minor bug-fix when concatenating documents.

- Fix _divide (used for dividing body into parts that must keep sequence). If one of the sections was empty, body parts would change sequence.
- Fix handling of cases where we don't have a paragraph (only list) elements
- Implement ooo_cat
- Fix ooo_as_text to include more command-line handling
- Fix reading/writing stdin/stdout for command-line utilities, this will work reliably (reading/writing non-seekable input/output like, e.g., pipes) only with python2.5
- implement ooo_fieldreplace and ooo_mailmerge

Version 1.1: Minor bugfixes

Small documentation changes

- Fix css stylesheet
- Link to SF logo for Homepage
- Link to other information updated
- Version numbers in documentation fixed
- Add some checks for new API – first parameter of Transformer is checked now
- Ship files needed for running the doctest and explain how to run it
- Usage section

Version 1.0: Major feature enhancements

Now works with version 2.X of OpenOffice.org. Minor API changes.

- Tested with python 2.3, 2.4, 2.5
- OOoPy now works for OOo version 1.X and version 2.X
- New attribute mimetype of OOoPy – this is automatically set when reading a document, and should be set when writing one.
- renumber_all, get_meta, set_meta are now factory functions that take the mimetype of the open office document as a parameter.
- Since renumber_all is now a function it will (correctly) restart numbering for each new Attribute_Access instance it returns.
- Built-in elementtree support from python2.5 is used if available
- Fix bug in optimisation of original document for concatenation

- Author: Ralf Schlatterbeck
- Download URL:
- License: GNU Library or Lesser General Public License (LGPL)
- Platform: Any
- Categories: Development Status :: 5 - Production/Stable
https://pypi.python.org/pypi/OOoPy/1.9
COBRApy is a package for constraint-based modeling of biological networks.

Project description

cobrapy - constraint-based reconstruction and analysis in python

What is cobrapy? Our aim with cobrapy is to provide useful, efficient infrastructure for:
- creating and managing metabolic models
- accessing popular solvers
- analyzing models with methods such as FVA, FBA, pFBA, MOMA etc.
- inspecting models and drawing conclusions on gene essentiality, testing consequences of knock-outs etc.

Our goal with cobrapy is for it to be useful on its own, and for it to be the natural choice of infrastructure for developers that want to build new COBRA-related python packages for e.g. visualization, strain design and data-driven analysis. By re-using the same classes and design principles, we can make new methods both easier to implement and easier to use, thereby bringing the power of COBRA to more researchers.

The documentation is browseable online at readthedocs and can also be downloaded. Please use the Google Group for help. By writing a well-formulated question, with sufficient detail, you are much more likely to quickly receive a good answer! Please refer to these StackOverflow guidelines on how to ask questions. Alternatively, you can use gitter.im for quick questions and discussions about cobrapy (faster response times). Please keep in mind that answers are provided on a volunteer basis.

More information about opencobra is available at the website. If you use cobrapy in a scientific publication, please cite doi:10.1186/1752-0509-7-74

Installation

Use pip to install cobrapy from PyPI (we recommend doing this inside a virtual environment):

pip install cobra

In case you downloaded the source code, run:

pip install -e .

in the cobrapy directory. For further information, please follow the detailed installation instructions.

Contributing

Contributions are always welcome! Please read the contribution guidelines to get started.
License

The cobrapy source is released under both the GPL and LGPL licenses. You may choose which license to use the software under. However, please note that binary packages which include GLPK (such as the binary wheels distributed at) will be bound by its license as well.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License or the GNU Lesser General Public License.

Installation of cobrapy

For installation help, please use the Google Group. For usage instructions, please see the documentation.

All releases require Python 2.7+ or 3.4+ to be installed before proceeding. Mac OS X (10.7+) and Ubuntu ship with Python. Windows users without python can download and install python from the python website. Please note that though Anaconda and other python distributions may work with cobrapy, they are not explicitly supported (yet!).

Stable version installation

cobrapy can be installed with any recent installation of pip. Instructions for several operating systems are below:

Mac OS X or Linux
- Install pip.
- In a terminal, run sudo pip install cobra

We highly recommend updating pip beforehand (pip install pip --upgrade).

Microsoft Windows

The preferred installation method on Windows is also to use pip. The latest Windows installers for Python 2.7 and 3.4 include pip, so if you use those you will already have pip.
- In a terminal, run C:\Python27\Scripts\pip.exe install cobra (you may need to adjust the path accordingly).

To install without pip, you will need to download and use the appropriate installer for your version of python from the python package index.

Installation for development

Get the detailed contribution instructions for contributing to cobrapy.

Installation of optional dependencies

Optional dependencies

On windows, these can be downloaded from [this site] (). On Mac/Linux, they can be installed using pip, or from the OS package manager (e.g. brew, apt, yum).
- libsbml >= 5.10 to read/write SBML level 2 files
  - Windows libsbml installer
  - Use sudo pip install python-libsbml on Mac/Linux
- lxml to speed up read/write of SBML level 3 files.
- scipy >= 0.11 for MOMA and saving to *.mat files.
  - Windows scipy installer
- pytest and pytest-benchmark are required for testing

You can install all packages directly by

pip install "cobra[all]"

Solvers

cobrapy uses optlang to interface the mathematical solvers used to optimize the created COBRA models, which at the time of writing include:
- ILOG/CPLEX (available with Academic and Commercial licenses)
- gurobi
- glpk

Testing your installation

While it is not a hard requirement for using cobrapy, you need pytest and pytest-benchmark to run its tests. First do

pip install pytest pytest-benchmark

or install cobrapy directly with the test dependencies:

pip install "cobra[test]"

Then start python and type the following into the Python shell:

from cobra.test import test_all
test_all()

You should see some skipped tests and expected failures, and the function should return 0. If you see a value other than 0 please file an issue report.
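As a flavour of what constraint-based modeling means, independent of the cobrapy API: the central steady-state constraint is S·v = 0, where S is the stoichiometric matrix and v is the flux vector. A toy check in plain Python (the two-reaction network here is invented for illustration; cobrapy itself solves such systems as linear programs with a real solver):

```python
# Toy network:  -> A  (r1, produces A),  A ->  (r2, consumes A)
# Stoichiometric matrix S: rows = metabolites, columns = reactions.
S = [[1, -1]]  # metabolite A: +1 from r1, -1 from r2

def is_steady_state(S, v, tol=1e-9):
    """Return True if S.v == 0 for every metabolite row."""
    return all(abs(sum(s * f for s, f in zip(row, v))) < tol for row in S)

print(is_steady_state(S, [5.0, 5.0]))  # True: production balances consumption
print(is_steady_state(S, [5.0, 3.0]))  # False: A would accumulate
```

Methods like FBA then pick, among all v satisfying S·v = 0 and the flux bounds, the one maximizing an objective such as biomass production.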
https://pypi.org/project/cobra/0.9.0/
Hi,

This should be easy; for some reason I can't remember how to do it. I have in /lib a file called "joe_converter.rb" that I want to be able to run from the terminal. Here's the beginning of "joe_converter.rb":

require 'active_record'

# Takes in a URL, creates all needed objects, and ends up creating an LqSnapshot for that URL
class JoeConverter
  def self.do_the_conversion url, scores, user_id

How can I run this from the terminal correctly? I tried with "script/runner", and put in the necessary parameters, but I get back this:

wrong number of arguments (0 for 5) (ArgumentError)

Anyone have any ideas about this?
https://www.ruby-forum.com/t/how-to-pass-parameters-to-a-script-in-lib-from-terminal/157358
Hi, is there a restriction on the number of news items we can retrieve using the Python Eikon get_news_headlines API function? I seem to only be retrieving 10 items, even though the date_from and date_to arguments are from 2000 to 2019. Appreciate your help. Regards, Leben

Hello @lebenjohnson.mannariat

Based on the Eikon Data APIs for Python - Reference Guide, you can set the number of news items via the count value in get_news_headlines(). Please add count (max is 100) to the get_news_headlines method.

import eikon as ek
ek.set_app_key('Your App Key')
try:
    headlines = ek.get_news_headlines("NRG", count=20, raw_output=False)
    print(headlines['text'])
except ek.EikonError as error:
    print(error)

Thanks, much appreciated. However the max count limit of 100 is very small, as for any topic you would end up getting more hits in a year. Is there any way to increase it?

@lebenjohnson.mannariat You cannot retrieve more than 100 headlines in a single request using this interface. You can however submit multiple requests and retrieve news headlines in batches of 100 headlines each in a loop, using the timestamp on the earliest headline returned in the previous batch as the value for the date_to parameter in the next request.

Eikon is an end-user product. Any data retrieved from Eikon is licensed for an individual user's use only. You should also be aware that the depth of news history available through Eikon Data APIs is 15 months. You cannot retrieve news headlines or stories older than 15 months using Eikon Data APIs. If you need deeper history or need to consume news on an industrial scale, consider the Refinitiv Machine Readable News proposition.

Thanks Alex, much appreciated. I was needing the news for some individual research. Regards, Leben
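The batching advice above can be sketched generically. The eikon calls are replaced here with a stand-in fetch function, since the real API needs a licensed terminal; the mock data, function names, and parameters are invented for illustration. The key point is that the earliest timestamp of each batch becomes date_to for the next request:

```python
from datetime import datetime, timedelta

# Stand-in for ek.get_news_headlines: returns up to `count` items, newest first,
# each represented by its timestamp, honouring the date_to upper bound.
ALL_ITEMS = [datetime(2019, 1, 1) - timedelta(hours=i) for i in range(250)]

def fetch_batch(date_to, count=100):
    return [t for t in ALL_ITEMS if t < date_to][:count]

def fetch_all(start=datetime(2019, 1, 2)):
    collected, date_to = [], start
    while True:
        batch = fetch_batch(date_to)
        if not batch:
            break
        collected.extend(batch)
        # Earliest timestamp of this batch becomes date_to for the next request.
        date_to = min(batch)
    return collected

items = fetch_all()
print(len(items))  # 250: everything retrieved, 100 items at a time
```

Note the strict less-than comparison in the mock avoids duplicating the boundary headline; with a real feed you may need to handle headlines sharing the exact same timestamp.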
https://community.developers.refinitiv.com/questions/45559/get-news-headline-returning-only-10-news-items-any.html
How do I increase the cell width of the Jupyter/ipython notebook in my browser?

I would like to increase the width of the ipython notebook in my browser. I have a high-resolution screen, and I would like to expand the cell width/size to make use of this extra space. Thanks!

edit: 5/2017 I now use jupyterthemes: and this command:

jt -t oceans16 -f roboto -fs 12 -cellw 100%

which sets the width to 100% with a nice theme.

If you don't want to change your default settings, and you only want to change the width of the current notebook you're working on, you can enter the following into a cell:

from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))

From: stackoverflow.com/q/21971449
https://python-decompiler.com/article/2014-02/how-do-i-increase-the-cell-width-of-the-jupyter-ipython-notebook-in-my-browser
Originally published in my blog:

Have you ever tried to:
- Create complex generic types in your own project?
- Write distributable stubs for your library?
- Create a custom mypy plugin?
- Write a custom type-checker?

In case you try to do any of these, you will soon find out that you need to test your types. Wait, what? Let me explain this paradox in detail.

The first tests for types in Python

Let's start with a history lesson. The first time I got interested in mypy, I found their testing technique unique and interesting. This is how it looks:

[case testNestedListAssignmentToTuple]
from typing import List
a, b, c = None, None, None # type: (A, B, C)

a, b = [a, b]
a, b = [a] # E: Need more than 1 value to unpack (2 expected)
a, b = [a, b, c] # E: Too many values to unpack (2 expected, 3 provided)

This looks familiar:
- [case] defines a new test, like def test_ does
- The contents inside are raw python source code lines that are processed with mypy
- # E: comments are assert statements that tell what mypy output is expected at each line

So, we can write this kind of tests for our libraries as well, right? That was the question when I started writing the returns library (which is a typed monad implementation in Python). So, I needed to test what is going on inside and what types are revealed by mypy. Then I tried to reuse these test cases from mypy. Long story short, it is impossible. This little helper is built into the mypy source code and cannot be reused. So, I started to look for other solutions.

Modern approach
Let's create a yaml file and place it as ./typesafety/test_compose.yml:

# ./typesafety/test_compose.yml
- case: compose_two_functions
  main: |
    from myapp import first, second

    reveal_type(second(first(1)))  # N: Revealed type is 'builtins.str*'
  files:
    - path: myapp.py
      content: |
        def first(num: int) -> float:
            return float(num)

        def second(num: float) -> str:
            return str(num)

What do we have here?

- the case definition, which is basically the test's name
- the main section, which contains the Python source code required for the test
- a # N: comment, which indicates a note from mypy
- the files section, where you can create temporary helper files to be used in this test

Nice! How can we run it? Since pytest-mypy-plugins is a pytest plugin, we only need to run pytest as usual and specify our mypy configuration file (it defaults to mypy.ini):

pytest --mypy-ini-file=setup.cfg

You can have two mypy configurations: one for your project, one for tests. Just saying. Let's have a look at our setup.cfg contents:

[mypy]
check_untyped_defs = True
ignore_errors = False
ignore_missing_imports = True
strict_optional = True

That's the invocation result:

typesafety/test_compose.yml .                                                   [100%]

================================= 1 passed in 2.00s ==================================

It works! Let's complicate our example a little bit.

Checking for errors

We can also use pytest-mypy-plugins to enforce and check constraints on our complex type specs. Imagine you have a type definition with complex generics and you want to make sure that it works correctly. That's actually very helpful, because while you can check success cases with raw mypy, you cannot tell mypy to expect an error for a specific expression or call.
Let's begin with our complex type definition:

# returns/functions.py
from typing import Callable, TypeVar

# Aliases:
_FirstType = TypeVar('_FirstType')
_SecondType = TypeVar('_SecondType')
_ThirdType = TypeVar('_ThirdType')

def compose(
    first: Callable[[_FirstType], _SecondType],
    second: Callable[[_SecondType], _ThirdType],
) -> Callable[[_FirstType], _ThirdType]:
    """Allows typed function composition."""
    return lambda argument: second(first(argument))

This code takes two functions and checks that their types match, so they can be composed. Let's test it:

# ./typesafety/test_compose.yml
- case: compose_two_wrong_functions
  main: |
    from returns.functions import compose

    def first(num: int) -> float:
        return float(num)

    def second(num: str) -> str:
        return str(num)

    reveal_type(compose(first, second))
  out: |
    main:9: error: Cannot infer type argument 2 of "compose"
    main:9: note: Revealed type is 'def (Any) -> Any'

In this example I changed how we make the type assertion: out is easier for multi-line output than inline comments. Now we have two passing tests:

typesafety/test_compose.yml ..                                                  [100%]

================================= 2 passed in 2.65s ==================================

Let's test one more complex case.

Extra mypy settings

We can change the mypy configuration on a per-test basis. Let's add some new values to the existing configuration:

- case: compose_optional_functions
  mypy_config: |  # appends options for this test
    no_implicit_optional = True
  main: |
    from returns.functions import compose

    def first(num: int = None) -> float:
        return float(num)

    def second(num: float) -> str:
        return str(num)

    reveal_type(compose(first, second))
  out: |
    main:3: error: Incompatible default for argument "num" (default has type "None", argument has type "int")
    main:9: note: Revealed type is 'def (builtins.int*) -> builtins.str*'

We added the no_implicit_optional configuration option, which requires an explicit Optional[] type on arguments where we set None as a default value.
And our test got it from the mypy_config section, which appends options to the base mypy settings from the --mypy-ini-file setting.

Custom DSL

pytest-mypy-plugins also allows you to create custom yaml-based DSLs to make your testing process easier and your test cases shorter. Imagine that we want to have reveal_type as a top-level key. It will just reveal the type of a source code line that is passed to it. Like so:

- case: reveal_type_extension_is_loaded
  main: |
    def my_function(arg: int) -> float:
        return float(arg)
  reveal_type: my_function
  out: |
    main:4: note: Revealed type is 'def (arg: builtins.int) -> builtins.float'

Let's have a look at what it takes to achieve it:

# reveal_type_hook.py
from pytest_mypy.item import YamlTestItem

def hook(item: YamlTestItem) -> None:
    parsed_test_data = item.parsed_test_data
    main_source = parsed_test_data['main']
    obj_to_reveal = parsed_test_data.get('reveal_type')
    if obj_to_reveal:
        for file in item.files:
            if file.path.endswith('main.py'):
                file.content = f'{main_source}\nreveal_type({obj_to_reveal})'

What do we do here?

- We get the source code from the main: key
- Then we append a reveal_type() call from the reveal_type: key

As a result, we have a custom DSL that fulfills our initial idea. Running:

» pytest --mypy-ini-file=setup.cfg --mypy-extension-hook=reveal_type_hook.hook
================================ test session starts =================================
platform darwin -- Python 3.7.4, pytest-5.1.1, py-1.8.0, pluggy-0.12.0
rootdir: /code, inifile: setup.cfg
plugins: mypy-plugins-1.0.3
collected 1 item

typesafety/test_hook.yml .                                                      [100%]

================================= 1 passed in 0.87s ==================================

We pass a new flag, --mypy-extension-hook, which points to our own DSL implementation. And it works perfectly! That's how one can reuse large amounts of code in yaml-based tests.

Conclusion

pytest-mypy-plugins is an absolute must for people who work a lot with types or mypy plugins in Python.
It simplifies the process of refactoring and distributing types. You can have a look at the real world example usage of these tests in: Share what your use-cases are! We are still in a pretty early stage of this project and we would like to find out what our users are thinking.
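As a closing aside, the compose helper discussed earlier is not just a typing exercise: the very same definition works at runtime. Here is a self-contained sketch repeating the article's definition, so it can be run outside the test suite:

```python
from typing import Callable, TypeVar

_FirstType = TypeVar('_FirstType')
_SecondType = TypeVar('_SecondType')
_ThirdType = TypeVar('_ThirdType')


def compose(
    first: Callable[[_FirstType], _SecondType],
    second: Callable[[_SecondType], _ThirdType],
) -> Callable[[_FirstType], _ThirdType]:
    """Allows typed function composition (same body as in the article)."""
    return lambda argument: second(first(argument))


def first(num: int) -> float:
    return float(num)


def second(num: float) -> str:
    return str(num)


# The composed callable pipes int -> float -> str:
int_to_str = compose(first, second)
print(int_to_str(1))  # '1.0'
```

So the yaml tests above verify the static story, while a plain pytest test can cover this runtime behavior.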
https://dev.to/wemake-services/testing-mypy-stubs-plugins-and-types-1b71
Lingua::Stem::En - Porter's stemming algorithm for 'generic' English

use Lingua::Stem::En;
my $stems = Lingua::Stem::En::stem({
    -words      => $word_list_reference,
    -locale     => 'en',
    -exceptions => $exceptions_hash,
});

1999.06.15 - Changed to '.pm' module, moved into Lingua::Stem namespace, optionalized the export of the 'stem' routine into the caller's namespace, added named parameters
1999.06.24 - Switched core implementation of the Porter stemmer to the one written by Jim Richardson <jimr@maths.usyd.edu.au>
2000.08.25 - 2.11 Added stemming cache
2000.09.14 - 2.12 Fixed *major* :( implementation error of Porter's algorithm. Error was entirely my fault - I completely forgot to include rule sets 2, 3, and 4 starting with Lingua::Stem 0.30. -- Benjamin Franz
2003.09.28 - 2.13 Corrected documentation error pointed out by Simon Cozens.
2005.11.20 - 2.14 Changed rule declarations to conform to Perl style convention for 'private' subroutines. Changed Exporter invocation to more portable 'require' vice 'use'.
2006.02.14 - 2.15 Added ability to pass word list by 'handle' for in-place stemming.
2009.07.27 - 2.16 Documentation fix

Stems a list of passed words using the rules of US English. Returns an anonymous array reference to the stemmed words. Example:

my @words = ( 'wordy', 'another' );
my $stemmed_words = Lingua::Stem::En::stem({
    -words      => \@words,
    -locale     => 'en',
    -exceptions => \%exceptions,
});

If the first element of @words is a list reference, then the stemming is performed 'in place' on that list (modifying the passed list directly instead of copying it to a new array). This is only useful if you do not need to keep the original list. If you do need to keep the original list, use the normal semantics of having 'stem' return a new list instead - that is faster than making your own copy and using the 'in place' semantics, since the primary difference between 'in place' and 'by value' stemming is the creation of a copy of the original list.
If you don't need the original list, then the 'in place' stemming is about 60% faster. Example of 'in place' stemming:

my $words = [ 'wordy', 'another' ];
my $stemmed_words = Lingua::Stem::En::stem({
    -words      => [$words],
    -locale     => 'en',
    -exceptions => \%exceptions,
});

The 'in place' mode returns a reference to the original list with the words stemmed.

Sets the level of stem caching. '0' means 'no caching'; this is the default level. '1' means 'cache per run': this caches stemming results during a single call to 'stem'. '2' means 'cache indefinitely': this caches stemming results until either the process exits or the 'clear_stem_cache' method is called.

Clears the cache of stemmed words.

This code is almost entirely derived from the Porter 2.1 module written by Jim Richardson.

See also: Lingua::Stem

Jim Richardson, University of Sydney <jimr@maths.usyd.edu.au>
Integration in Lingua::Stem by Benjamin Franz, FreeRun Technologies <snowhare@nihongo.org>

Jim Richardson, University of Sydney
Benjamin Franz, FreeRun Technologies

This code is freely available under the same terms as Perl.
http://search.cpan.org/dist/Lingua-Stem/lib/Lingua/Stem/En.pm
Damon Payne

In order to increase the prestige of the Microsoft MVP program and the practice of .NET user group presentations, I have purchased a new automobile. This is my 2010 Audi S4 - there are many like it but this one is mine. After driving the WRX for almost 9 years, I thought I could splurge a little bit.

This may be obvious to some, but by calling out to JavaScript within the document hosting your Silverlight application, you can do some powerful things. Back in the day, we used to sometimes use an href with "mailto" as a poor man's contact form - hosting solutions with databases and backups weren't always as cheap and plentiful as they are today. If you lack a service-layer email solution in your environment, you can always try the following:

private void Mailto_Click(object sender, RoutedEventArgs e)
{
    var svc = new Modules.BrowserHostService();
    svc.CallHostFunction("MailTo",
        "damon@damonpayne.com",
        "Damon",
        "Try this Wine!",
        "Roger Sabon Chateauneuf du Pape 2005",
        "11F16974-8EE1-4c0c-8AB4-21EF0B7BF087");
}

Where BrowserHostService is part of a demo that I'm working on. The implementation is very simple:

public class BrowserHostService : IHostService
{
    public void CallHostFunction(string func, params string[] args)
    {
        HtmlPage.Window.CreateInstance(func, args);
    }
}

This method does everything needed to invoke a JavaScript method on the document containing the Silverlight application. For invoking the user's default mail program, you could write a function like what's shown below. The only item of note is that the parameter orders must match up. If you've done much Reflection coding you should be used to this.

<script type="text/javascript">
    function MailTo(toEmail, fromUser, subject, suggestTopic, suggestCode) {
        var body = fromUser + ' thought you would really enjoy ' + suggestTopic +
            ' and there is some info about that on TastingProject.com. ' +
            'Why not head over to ' + suggestCode + ' and check it out?';
        window.open('mailto:' + toEmail + '?subject=' + subject + '&body=' + body);
    }
</script>

The preceding C# and JavaScript produces the following result on my machine in Outlook. Of course you would have to rely on your end user to hit the Send button, but this may be a quick and dirty way to get email functionality if you don't have complete control over your environment.

On Saturday, October 24th I will be speaking at the Madison .NET Users Group's annual Code Camp: MadCamp. You can find general information about MadCamp here. My presentation will be on Advanced Silverlight Topics. They are going to be giving away a lot of cool stuff, particularly around Windows 7. The group has combined the code camp event with a Microsoft-sponsored Windows 7 launch party, so it should be pretty sweet. I will be tweeting on the subject using #madcamp09 and I hope to see you there.

If you have worked in the Windows Forms designer, ASP.NET ViewState, ASP.NET Control Templates, or ever used a design surface of any kind, you are likely familiar with the concepts of a Naming Container and Name Providers. The first Label I drag onto a form might get a name of "Label0", then "Label1" and so on. Many visual technologies will throw an exception if two items inside the same container have the same name. We will need this idea for AGT very soon, so now is the time to implement it. The design goals for this next step are fairly simple: The only remaining decisions are small but important. I'm going with "yes", and "no", respectively. I'm going to create a simple INamingContainer interface which can be implemented by containers (like a design surface) that want their users to have a unique name. I could make this part of the IDesigner interface but I'd like this to be optional. I am still in my IDesignableControl-centric world though.
namespace DamonPayne.AGT.Design
{
    /// <summary>
    /// Represents something that has children needing unique names
    /// </summary>
    public interface INamingContainer
    {
        IList<IDesignableControl> Children { get; }
    }
}

This is all we need; we can now enumerate the children and generate a unique name. The interface for INameProvider is equally simple.

namespace DamonPayne.AGT.Design.Contracts
{
    public interface INameProvider
    {
        string GetUniqueName(INamingContainer container, IDesignableControl newChild);
    }
}

As I mentioned before, I'd like to follow the familiar convention of ControlType[number], so I'm ready to build a default implementation for INameProvider. This is pretty easy using LINQ.

namespace DamonPayne.AGT.Design.Services
{
    /// <summary>
    /// A name provider that uses the last part of the type name plus a count
    /// </summary>
    public class TypeScopedNameProvider : INameProvider
    {
        public string GetUniqueName(INamingContainer container, IDesignableControl newChild)
        {
            var type = newChild.GetType();
            string namePart = type.Name;
            int existingCount = (from c in container.Children
                                 where c.GetType() == type
                                 select c).Count();
            return namePart + existingCount;
        }
    }
}

In order to finish up, I only need to hook my implementation up using Unity and call the name provider whenever a new item is added to the DesignSurface.

[Dependency]
public INameProvider NameProvider { get; set; }

private void EnsureName(IDesignableControl dc)
{
    if (string.IsNullOrEmpty(dc.DesignTimeName))
    {
        string name = NameProvider.GetUniqueName(this, ((DesignSite)dc).HostedContent);
        dc.DesignTimeName = name;
    }
}

I can drag things out without a name now, and everything looks the way it should with the generated naming. My glorious purple couches and poorly drawn speakers have names! Now that I've been learning Expression Design I should really take another crack at drawing those speakers. I've really got to fix the way that PropertyGrid looks too.
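For readers who don't use LINQ, the counting logic inside TypeScopedNameProvider is tiny; here is the same idea sketched in Python (these names are illustrative, not part of the AGT codebase):

```python
def get_unique_name(children, new_child):
    """Mirror of TypeScopedNameProvider.GetUniqueName: count how many
    existing children share the new child's exact type, then append
    that count to the type name (Label0, Label1, Button0, ...)."""
    child_type = type(new_child)
    existing = sum(1 for c in children if type(c) is child_type)
    return child_type.__name__ + str(existing)


class Label:
    pass


class Button:
    pass


surface = []  # stands in for the naming container's Children collection
for widget in (Label(), Label(), Button()):
    print(get_unique_name(surface, widget))  # Label0, Label1, Button0
    surface.append(widget)
```

Note that the count is per-type, so adding a Button never bumps the number used for the next Label, exactly as in the C# version.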
The Service/Module based approach continues to prove itself out as the correct approach for making it easy to add new functionality to the system. Adding a name provider was a small feature, but the next step won't be so easy. You can get the source code from codeplex. This article represents change set 28479. The live demo has been updated and can be viewed here.

Damon Payne is a Microsoft MVP specializing in Smart Client solution architecture.
http://www.damonpayne.com/2009/10/default.aspx
Creating Simpler Middleware with WSGI Lite

Table of Contents

Introducing WSGI Lite

If you've seen the Latinator example from the WSGI PEP, you may recall that it's about three times longer than the example shown above, and it needs two classes* to do the same job. And, if you've ever tried to code a piece of middleware like that, you'll know just how hard it is to do it correctly. (In fact, as the author of the WSGI PEPs, I have almost never seen a single piece of WSGI middleware that didn't break the WSGI protocol in some way I could find with a minute or two of code inspection!)

But the latinator middleware example shown above is actually a valid piece of WSGI 1.0 middleware, that can also be called with a simpler, Rack-like protocol. And all of the hard parts are abstracted away into two decorators: @lite and lighten().

The @lite decorator says, "this function is a WSGI application, but it expects to be called with an environ dictionary, and return a (status, headers, body) triplet. And it doesn't use start_response(), write(), or expect to have a close() method called." The @lite decorator then wraps the function in such a way that if it's called by a WSGI 1 server or middleware, it will act like a WSGI 1 application. But if it's called with just an environ (i.e., without a start_response), it'll be just like you called the decorated function directly: that is, you'll get back a (status, headers, body) triplet (similar to the "Rack" protocol, aka Ruby's version of WSGI).

Pretty neat, eh? But the real magic comes in with the second decorator, lighten(). lighten() accepts either a @lite application or a WSGI 1 application, and returns a similarly flexible application object.
Just like the output of the @lite decorator, the resulting app object can be called with or without a start_response, and the return protocol it follows will vary accordingly. This means that you can either pass a @lite app or a standard WSGI app to our latinator() middleware, and it'll work either way. And, you can supply a @lite or lighten()-ed app to any standard WSGI server or middleware, and it'll Just Work.

For efficiency, both @lite and lighten() are designed to be idempotent: calling them on already-converted applications returns the app you passed in, with no extra wrapping. And, if you call a wrapped application via its native protocol, no protocol conversion takes place - the original app just gets called without any conversion overhead. So, feel free to use both decorators early and often!

WSGI Extensions and Environment Keys

One of the subtler edge cases that can arise in writing correct middleware is that when you call another WSGI app, it's allowed to change the environ you pass in. And what most people don't realize is that this means it's not safe to pull things out of the environment after you call another WSGI app! For example, take a look at this middleware example:

def middleware(environ, start_response):
    response = some_app(environ, start_response)
    if environ.get('PATH_INFO','').endswith('foo'):
        # ... etc.

Think it'll work correctly? Think again. If some_app is a piece of routing middleware, it could already have changed PATH_INFO, or any other environment key. Likewise, if this middleware looks for server extensions like wsgi.file_wrapper or wsgiorg.routing_args, it might end up reading the child application's extensions, rather than those intended for the middleware itself.
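This failure mode is easy to reproduce in a few lines. In the sketch below, routing_app and the two middleware functions are hypothetical stand-ins (using a simplified calling convention, not full WSGI) that show why values must be captured from environ before calling down:

```python
def routing_app(environ):
    # Simulates routing middleware that consumes the path -- which
    # WSGI explicitly permits, since callees may mutate environ.
    environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + environ.get('PATH_INFO', '')
    environ['PATH_INFO'] = ''
    return '200 OK', [], [b'hello']


def broken_middleware(environ):
    response = routing_app(environ)
    # BUG: reads PATH_INFO *after* the sub-app may have rewritten it.
    return environ.get('PATH_INFO', ''), response


def safe_middleware(environ):
    path = environ.get('PATH_INFO', '')  # capture before calling down!
    response = routing_app(environ)
    return path, response


print(broken_middleware({'PATH_INFO': '/foo'})[0])  # '' -- the path is gone
print(safe_middleware({'PATH_INFO': '/foo'})[0])    # '/foo'
```

The manual fix is simply "read first, call second"; the next section shows how @lite automates exactly that.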
To help handle these cases, the @lite decorator can bind a function's keyword arguments to values based on the contents of the environ argument:

@lite(path='PATH_INFO', routing='wsgiorg.routing_args')
def middleware(environ, path='', routing=((),{})):
    response = some_app(environ)
    if path.endswith('foo'):
        # ... etc.

When @lite is called with keyword arguments whose argument names match argument names on the decorated function, it wraps the function in such a way that the matching keys from the environ are passed in as keyword arguments. This automatically ensures that you aren't using possibly-corrupted keys from your child app(s), and lets you specify default values (via your function's argument defaults, as shown above).

As a convenience for frequently used extensions or keys, you can save calls to lite() and give them names, for example:

>>> with_routing = lite(routing='wsgiorg.routing_args')

And the resulting decorator is precisely equivalent to invoking @lite() directly:

>>> @with_routing
... def middleware(environ, routing=((),{})):
...     """Some sort of middleware"""

You can even stack multiple @lite() calls (direct or saved), or give them names, docstrings, and specify what module you defined them in:

>>> with_path = lite(
...     'with_path', "Add a `path` arg for ``PATH_INFO``", "__main__",
...     path='PATH_INFO'
... )

>>> help(with_path)
Help on function with_path in module __main__:

with_path(func)
    Add a `path` arg for ``PATH_INFO``

>>> @with_routing
... @with_path
... def middleware(environ, path='', routing=((),{})):
...     """Some combined middleware"""

By the way, the underlying decorator is smart enough to tell when it's being stacked, and automatically merges the wrappings so there's only one level of calling overhead added, no matter how many of them you stack. (As long as they're not intermingled with other decorators, of course!)
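The heart of this keyword-binding trick fits in a few lines of plain Python. The following bind_environ is a hypothetical, stripped-down stand-in (not the real @lite implementation, which also handles the WSGI protocol conversion and binding rules):

```python
import functools


def bind_environ(**rules):
    """Toy version of @lite's keyword binding: each rule maps an
    argument name to an environ key; matching keys are snapshotted
    and passed as keyword arguments before the function body runs."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(environ):
            kwargs = {
                arg: environ[key]
                for arg, key in rules.items()
                if key in environ
            }
            return func(environ, **kwargs)
        return wrapper
    return decorate


@bind_environ(path='PATH_INFO')
def app(environ, path=''):
    # `path` was captured at call time, so later mutations of
    # environ by sub-apps can't corrupt it.
    return path.upper()


print(app({'PATH_INFO': '/foo'}))  # '/FOO'
print(app({}))                     # '' -- the argument default applies
```

Missing keys simply fall back to the function's argument defaults, which is the same behavior the article describes for @lite.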
Sometimes, an extension may be known under more than one name - for example, an x-wsgiorg. extension vs. a wsgiorg. one, or a similar extension provided by different servers. You could of course bind them to different arguments, but it's generally simpler to just bind a single argument, using a tuple:

>>> @lite(routing=('wsgiorg.routing_args', 'x-wsgiorg.routing_args'))
... def middleware(environ, routing=((),{})):
...     """Some sort of middleware"""

This will check the environment for the named extensions in the order listed, and replace routing with the first one matched.

These argument specifications are called "binding rules", by the way. A rule is either a string (i.e. an instance of basestring), a callable object, or an iterable of rules (recursively). Strings are looked up in the environ, and iterables are tried in sequence until a lookup succeeds.

Callable rules, on the other hand, are looked up by being called with a single positional argument: the environ dictionary. They must return an iterable (or sequence) yielding zero or more items. Returning an empty sequence or yielding zero items means the lookup failed, and a default value should be used instead (or the next alternative binding rule provided for that keyword argument). Otherwise, the first item yielded is passed in as the matching keyword argument.

Here's an example of using a classmethod as a callable binding rule:

>>> class MyRequest(object):
...     def __init__(self, environ):
...         self.environ = environ
...
...     @classmethod
...     def bind(cls, environ):
...         yield cls(environ)

>>> with_request = lite(request=MyRequest.bind)

Now, @with_request will create a MyRequest instance wrapping the environ of the decorated function, and provide it via the request keyword argument. The same approach can also be used to do things like accessing environment-cached objects, such as sessions:

>>> class MySession(object):
...     def __init__(self, environ):
...         self.environ = environ
...
...     @classmethod
...     def bind(cls, environ):
...         session = environ.get('myframework.MySession')
...         if session is None:
...             session = environ['myframework.MySession'] = cls(environ)
...         yield session

>>> with_session = lite(session=MySession.bind)

The possibilities are pretty much endless -- and much more in keeping with my original vision for how WSGI was supposed to help dissolve web frameworks into web libraries. (That is, things you can easily mix and match without every piece of code you use having to come from the same place.)

Callables that you use as bindings don't even have to return something from the environment or wrap the environment, by the way - they can just be things that use something from the environment. For example, you could bind parameters to temporary files that will be automatically closed when the request is finished:

>>> def mktemp(environ):
...     closing = environ['wsgi_lite.closing']
...     yield closing(tempfile(etc[...]))

>>> @lite(tmp1=mktemp, tmp2=mktemp)
... def do_something(environ, tmp1, tmp2):
...     """Write stuff to tmp1 and tmp2"""

You can even use argument bindings in your binding functions, using the @bind decorator from the wsgi_bindings module:

>>> from wsgi_bindings import bind

>>> @bind(closing = 'wsgi_lite.closing')
... def mktemp(environ, closing):
...     yield closing(tempfile(etc[...]))

@bind() is just like @lite() with keyword arguments (including the ability to save and stack calls), except that it doesn't turn the decorated function into a WSGI-compatible app. (Which is a good thing, since a binding rule is not a WSGI app!)

Now, given the above examples, you might be wondering what all that wsgi_lite.closing stuff is about. Well, that's what we're going to talk about in the next two sections...

close() and Resource Cleanups

So, there's some good news and some bad news about close() and resource cleanups in WSGI Lite.
The good news is, @lite middleware is not required to call a body iterator's close() method. And if your app or middleware doesn't need to do any post-request resource cleanup, or if it just returns a body sequence instead of an iterator or generator, then you don't need to worry about resource cleanup at all. Just write the app or middleware and get on with your life. ;-)

Now, if you are yielding body chunks from your WSGI apps, you might want to consider just not doing that. That's because, if you don't yield chunks, you can write normal, synchronous code that won't have any of the problems I'm about to introduce you to... problems that your existing WSGI apps already have, but you probably don't know about yet! (People often object when I say that typical application code should never produce its output incrementally... but the hard problem of proper resource cleanup when doing so is one of the reasons I'm always saying it.)

Anyway, if you must produce your response in chunks, and you need to release some resources as soon as the response is finished, you need to use the wsgi_lite.closing extension, e.g.:

@lite(closing='wsgi_lite.closing')
def my_app(environ, closing):
    def my_body():
        try:
            # allocate some resources
            ...
            yield chunk
            ...
        finally:
            # release the resources
    return status, headers, closing(my_body())

This protocol extension (accessed as closing() in the function body above) is used to register an iterator (or other resource) so that its close() method will be called at the end of the request, even if the browser disconnects or a piece of middleware throws away your iterator to use its own instead.

An important note: items registered with closing() are closed in reverse registration order. This means that if the my_body() iterator above is looping over a sub-app's response, then its finally block may be run before any similar finally block in the sub-app. Therefore, your finally block must not close any resources the sub-app might be using!
So, if you are passing any resources down to another WSGI application, be sure to call closing() on them before calling the other application, and then don't close them in your body iterator. Example:

@lite(closing='wsgi_lite.closing')
def my_app(environ, closing):
    environ['some.key'] = closing(some_resource())
    return subapp(environ)

In other words, you should only close resources in your iterator if that's where they were opened, or you are 100% positive they can't be accessed from a sub-app. Otherwise, just call closing() on them as soon as you allocate them.

Don't, however, call closing() on objects that don't belong to your function. If you didn't allocate it, closing it is somebody else's job. In particular, you don't need to call closing() on any WSGI or WSGI Lite response bodies, because lighten() takes care of that for you, and you'll end up double-closing things.

Okay, so that was the bad news. Not that bad, though, is it? You just need to add an extra argument to @lite, pay a little bit of attention to the order of resource closing, and register your own objects (but only your own objects) for closing. That's it!

Really, the rest of this section is all about what will happen if you don't use the extension, or if you try to do resource cleanup in a standard WSGI app without the benefit of WSGI Lite. As long as you use the extension, your app's resource cleanup will work at least as well as -- and probably much better than! -- it would work under plain WSGI. (And you can make it work even better still if you wrap your entire WSGI stack with a lighten() call... but more on that will have to wait until the end of this section.)

So, just to be clear, the rest of this section is about flaws and weaknesses that exist in standard WSGI's resource management protocol, and what WSGI Lite is doing to work around them. What flaws and weaknesses? Well, consider the example above. Why does it need the closing() extension?
After all, doesn't Python guarantee that the finally block will be executed anyway? Well, yes and no. First off, if the generator is called but never iterated over, the try block won't execute, and so neither will the finally. So, it depends on what the caller does with the generator. For example, if the browser disconnects before the body is fully generated, the server might just stop iterating over it.

Okay, but won't garbage collection take care of it, then? Well, yes and no. Eventually, it'll be garbage collected, but in the meantime, your app has a resource leak that might be exploitable to deny service to the app: just start up a resource-using request, then drop the connection over and over until the server runs out of memory or file handles or database cursors or whatever.

Now, under the WSGI standard, middleware and servers are supposed to call close() on a response iterator (if it has one), whenever they stop iterating -- regardless of whether the iteration finished normally, with an error, or due to a browser disconnect. In practice, however, most WSGI middleware is broken and doesn't call close(), because 1) doing so usually makes your middleware code really really complicated, and 2) nobody understands why they need to call close(), because everything appears to work fine without it. (At least, until some black-hat finds your latent denial-of-service bug, anyway.)

So, WSGI Lite works around this by giving you a way to be sure that close() will be called, using a tiny extension of the WSGI protocol that I'll explain in the next section... but only if you care about the details. Otherwise, just use the wsgi_lite.closing extension if you need resource cleanup in your body iterator, and be happy that you don't need to know anything more. ;-)

Well, actually, you do need to know ONE more thing...
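(As an aside, the close()-runs-finally behavior described above is easy to demonstrate in plain Python, with no WSGI machinery involved:)

```python
cleaned = []


def body():
    """A chunked response body, like my_body() in the earlier example."""
    try:
        yield b'chunk'
    finally:
        # stands in for "release the resources"
        cleaned.append('closed')


it = body()
print(cleaned)  # [] -- generator created, try block not yet entered

next(it)        # first chunk produced; the try block is now active
print(cleaned)  # [] -- abandoning `it` here would leak until GC runs

it.close()      # what a compliant server does on disconnect:
                # raises GeneratorExit inside the generator,
                # which runs the finally block immediately
print(cleaned)  # ['closed']

it.close()      # generator close() is also idempotent -- no error
```

This is exactly the cleanup that broken middleware skips by never calling close(), leaving the finally block to run only whenever garbage collection gets around to it.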
If your outermost @lite application is wrapped by any off-the-shelf WSGI middleware, you probably want to wrap the outermost piece of middleware with a lighten() call. This will let WSGI Lite make sure that your close() methods get called, even if the middleware that wraps you is broken. (Technically speaking, of course, there's no way to be sure you're not being wrapped by middleware, so it's not really a cure-all unless your WSGI server natively supports the extension described in the next section. Hopefully, though, we'll put the extension into a PEP soon, and all the popular servers will provide it in a reasonable time period.)

The wsgi_lite.closing Extension

WSGI Lite uses a WSGI server extension called wsgi_lite.closing, that lives in the application's environ variable. The @lite and lighten() decorators automatically add this extension to the environment if they're called from a WSGI 1 server or middleware and the key doesn't already exist. (This is why you don't need a default value for the closing argument, by the way: the key will always be available to a @lite app or middleware, or any sub-app or sub-middleware that inherits the same environment.)

The value for this key is a callback function that takes one argument: an object whose close() method is to be called at the end of the request. For convenience, the passed-in object is returned back to the caller, so you can use it in a way that's reminiscent of with closing(file('foo')) as f:.

Anyway, the idea here is that a server (or middleware component) accepts these registrations, and then closes all the resources (or generators) when the request is finished. Objects are closed in the reverse order from which they're registered, so that inner apps' resources are released prior to middleware-provided resources being released.
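The registration semantics just described -- register-and-return, LIFO closing order, idempotent close() methods -- can be sketched as a small registry. This is an illustration of the protocol only, not the actual wsgi_lite implementation:

```python
class ClosingRegistry:
    """Toy model of the wsgi_lite.closing callback's behavior."""

    def __init__(self):
        self._stack = []

    def __call__(self, resource):
        # Register and hand the resource straight back, enabling the
        # `closing(obj)` idiom described above.
        self._stack.append(resource)
        return resource

    def close_all(self):
        # Reverse registration order; also tolerant of close()
        # methods that push more resources while we drain.
        while self._stack:
            self._stack.pop().close()


class Resource:
    def __init__(self, name, log):
        self.name, self.log, self._closed = name, log, False

    def close(self):
        if self._closed:  # idempotent, as the spec requires
            return
        self._closed = True
        self.log.append(self.name)


log = []
closing = ClosingRegistry()
closing(Resource('middleware-resource', log))  # registered first...
closing(Resource('inner-app-resource', log))   # ...registered last
closing.close_all()
print(log)  # ['inner-app-resource', 'middleware-resource']
```

The printed order shows why a middleware-provided resource is still usable while the inner app's cleanup runs: the inner app's resource goes away first.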
(In other words, if an app is using a resource that it received from middleware via its environ, that resource will still be usable during the app's close() processing or finally blocks.) Objects registered with this extension must have close() methods, and the methods must be idempotent: that is, calling close() a second time must not raise an error. close() methods are explicitly allowed to register additional objects to be closed: such objects are effectively "pushed" onto the stack of objects to be closed, with the last added object being closed first. (Note that this implies that a close() method must not directly or indirectly re-register itself, as this would create an infinite loop of closing calls.) Currently, the handling of errors raised by close() methods is undefined, in that WSGI Lite doesn't yet handle them. ;-) (When I have some idea of how best to handle this, I'll update this bit of the spec.) I would like to encourage WSGI server developers to support this extension if they can. While WSGI Lite implements it via middleware (in both the @lite and lighten() decorators), it's best if the WSGI origin server does it, in order to bypass any broken middleware in between the server and the app. (And, if a @lite or lighten() app is invoked from a server or middleware that already implements this extension, it'll make use of the provided implementation, instead of adding its own.) Now, if for some reason you want to use this extension directly in your code without using a @lite() binding, please remember that the WSGI spec allows called applications to modify the environ. This means that you must retrieve the extension before you pass the environ to another app. (That's why we have keyword binding in @lite(), remember?) Other Protocol Details Technically, WSGI Lite is a protocol as well as an implementation.
And there's still one more thing to cover (besides the Rack-style calling convention and closing extension) that distinguishes it from standard WSGI. Applications supporting the "lite" invocation protocol (i.e. being called without a start_response and returning a status/header/body triplet) are identified by a __wsgi_lite__ attribute with a True value. (@lite and lighten() add this for you automatically.) Any app without the attribute, however, is assumed to be a standard WSGI 1 application, and thus in need of being lighten()-ed before it can be called via the WSGI Lite protocol. (If you want to check for this attribute, or add it to an object that natively supports WSGI Lite, you can use the wsgi_lite.is_lite() and wsgi_lite.mark_lite() APIs, respectively. But even if you want to, you probably don't need to, because if you call @lite or lighten() on an object that's already "lite", it's returned unchanged. So it's easier to just always call the appropriate decorator, rather than trying to figure out whether to call it. Idempotence == good!) Anyway, the rest of the protocol is defined simply as a stripped down WSGI, minus start_response(), write(), and close(), but with the addition of the wsgi_lite.closing key. That's pretty much it. Known Limitations You knew there had to be a catch, right? Well, in this case, there are three. First, if you lighten() a standard WSGI app that uses write() calls instead of returning a response iterator, you must have the greenlet library installed, or you'll get an error when write() is called. Why? Well, it's complicated. But the chances are pretty good that you don't have any code that uses write(), and if you do, well, greenlet works on lots of platforms and Python versions. Anyway, that's the first limitation. The second limitation is that WSGI Lite cannot work around broken WSGI 1 middleware that lives above your application in the call stack!
That is, if your code runs under a middleware component that alters your response, but forgets to make sure your app's response's close() method gets called, then none of the fancy resource closing features in WSGI Lite will work properly. So, until standard WSGI servers support the wsgi_lite.closing extension, you can (and should) work around this by wrapping your entire WSGI stack with a lighten() call. This way, as long as your server isn't broken, it'll call WSGI Lite's closer, and all will be well with your resource closing. Third and finally, the lighten() wrapper doesn't support broken WSGI apps that call write() from inside their returned iterators. While some servers allow it, the WSGI specification explicitly forbids it, and to support it in WSGI Lite would force all wrapped WSGI 1 apps to pay in the form of unnecessary greenlet context switches, even if they never used write() at all. Since the current "word on the street" says that very few WSGI apps use write() at all, I figure it's okay to blow up on the even smaller number that are also spec violators, rather than burden all apps with extra overhead just to support the ill-behaved ones. However, if you feel otherwise, let me know about it via the Web-SIG. (Especially if you have a workable suggestion for how to work around it without making things slower for the apps that don't call write()!) Current Status The code in this repository is experi-mental, and possibly very-mental or just plain detri-mental. It has not been seriously used or battle-hardened as yet, even though test coverage is now at 100%, and there are some fairly exhaustive WSGI compliance tests that exercise many obscure corners of the WSGI protocol. Ironically enough, however, that may well mean that there is important "WSGI" code out there that won't work with this module yet, precisely because that other code is not compliant with the spec! 
So, while this project's code should work quite well for compliant code, this doesn't mean it will play well with all the code you're using in all your project(s). Exercise it carefully, and don't assume that because it works great for one of your apps or middleware components, it'll therefore work great with all of them! In general, though, this is still alpha software, and things may change or break. It might even be that the whole thing was a really stupid idea that won't actually work in the real world for some reason. So, I've really just thrown this out there for people to see and play with, so I can get some feedback on its actual usability. Feel free to drop me an email via the Web-SIG mailing list, to let me know what you think. Hopefully, we'll soon get any glitches sorted out, and nail this down to something that's less of a moving target, and maybe even turn it into a PEP and a stdlib contribution! (Oh, and last, but not least... this package is under the Apache license, since that's what the PSF uses for software contributed to Python, and hopefully that's where this is headed, assuming we don't find some sort of glaring hole in the protocol or concept, of course, and it's in sufficiently high demand.)
https://bitbucket.org/chrism/wsgi_lite/overview
Benchmarking Nearest Neighbor Searches in Python

I recently submitted a scikit-learn pull request containing a brand new ball tree and kd-tree for fast nearest neighbor searches in python. In this post I want to highlight some of the features of the new ball tree and kd-tree code that's part of this pull request, compare it to what's available in the scipy.spatial.cKDTree implementation, and run a few benchmarks showing the performance of these methods on various data sets.

My first-ever open source contribution was a C++ Ball Tree code, with a SWIG python wrapper, that I submitted to scikit-learn. A Ball Tree is a data structure that can be used for fast high-dimensional nearest-neighbor searches: I'd written it for some work I was doing on nonlinear dimensionality reduction of astronomical data (work that eventually led to these two papers), and thought that it might find a good home in the scikit-learn project, which Gael and others had just begun to bring out of hibernation. After a short time, it became clear that the C++ code was not performing as well as it could be. I spent a bit of time writing a Cython adaptation of the Ball Tree, which is what currently resides in the sklearn.neighbors module. Though this implementation is fairly fast, it still has several weaknesses:

- It only works with a Minkowski distance metric (of which Euclidean is a special case). In general, a ball tree can be written to handle any true metric (i.e. one which obeys the triangle inequality).
- It implements only the single-tree approach, not the potentially faster dual-tree approach in which a ball tree is constructed for both the training and query sets.
- It implements only nearest-neighbors queries, and not any of the other tasks that a ball tree can help optimize: e.g. kernel density estimation, N-point correlation function calculations, and other so-called Generalized N-body Problems.
I had started running into these limits when creating astronomical data analysis examples for astroML, the Python library for Astronomy and Machine Learning that I released last fall. I'd been thinking about it for a while, and finally decided it was time to invest the effort into updating and enhancing the Ball Tree. It took me longer than I planned (in fact, some of my first posts on this blog last August came out of the benchmarking experiments aimed at this task), but just a couple weeks ago I finally got things working and submitted a pull request to scikit-learn with the new code.

The new code is actually more than simply a new ball tree: it's written as a generic N-dimensional binary search tree, with specific methods added to implement a ball tree and a kd-tree on top of the same core functionality. The new trees have a lot of very interesting and powerful features:

The ball tree works with any of the following distance metrics, which match those found in the module scipy.spatial.distance:

['euclidean', 'minkowski', 'manhattan', 'chebyshev', 'seuclidean', 'mahalanobis', 'wminkowski', 'hamming', 'canberra', 'braycurtis', 'matching', 'jaccard', 'dice', 'kulsinski', 'rogerstanimoto', 'russellrao', 'sokalmichener', 'sokalsneath', 'haversine']

Alternatively, the user can specify a callable Python function to act as the distance metric. While this will be quite a bit slower than using one of the optimized metrics above, it adds nice flexibility. The kd-tree works with only the first four of the above metrics. This limitation is primarily because the distance bounds are less efficiently calculated for metrics which are not axis-aligned.

Both the ball tree and kd-tree implement k-neighbor and bounded neighbor searches, and can use either a single tree or dual tree approach, with either a breadth-first or depth-first tree traversal. Naive nearest neighbor searches scale as $\mathcal{O}[N^2]$; the tree-based methods here scale as $\mathcal{O}[N \log N]$.
Both the ball tree and kd-tree have their memory pre-allocated entirely by numpy: this not only leads to code that's easier to debug and maintain (no memory errors!), but means that either data structure can be serialized using Python's pickle module. This is a very important feature in some contexts, most notably when estimators are being sent between multiple machines in a parallel computing framework.

Both the ball tree and kd-tree implement fast kernel density estimation (KDE), which can be used within any of the valid distance metrics. The supported kernels are

['gaussian', 'tophat', 'epanechnikov', 'exponential', 'linear', 'cosine']

The combination of these kernel options with the distance metric options above leads to an extremely large number of effective kernel forms. Naive KDE scales as $\mathcal{O}[N^2]$; the tree-based methods here scale as $\mathcal{O}[N \log N]$.

Both the ball tree and kd-tree implement fast 2-point correlation functions. A correlation function is a statistical measure of the distribution of data (related to the Fourier power spectrum of the density distribution). Naive 2-point correlation calculations scale as $\mathcal{O}[N^2]$; the tree-based methods here scale as $\mathcal{O}[N \log N]$.

As mentioned above, there is another nearest neighbor tree available in SciPy: scipy.spatial.cKDTree. There are a number of things which distinguish the cKDTree from the new kd-tree described here:

Like the new kd-tree, cKDTree implements only the first four of the metrics listed above.

Unlike the new ball tree and kd-tree, cKDTree uses explicit dynamic memory allocation at the construction phase. This means that the trained tree object cannot be pickled, and must be re-constructed in place of being serialized.

Because of the flexibility gained through the use of dynamic node allocation, cKDTree can implement a more sophisticated building method: it uses the "sliding midpoint rule" to ensure that nodes do not become too long and thin.
One side-effect of this, however, is that for certain distributions of points, you can end up with a large proliferation of the number of nodes, which may lead to a huge memory footprint (even memory errors in some cases) and potentially inefficient searches.

The cKDTree builds its nodes covering the entire $N$-dimensional data space. This leads to relatively efficient build times because node bounds do not need to be recomputed at each level. However, the resulting tree is not as compact as it could be, which potentially leads to slower query times. The new ball tree and kd tree code shrinks nodes to only cover the part of the volume which contains points.

With these distinctions, I thought it would be interesting to do some benchmarks and get a detailed comparison of the performance of the three trees. Note that the cKDTree has just recently been re-written and extended, and is much faster than its previous incarnation. For that reason, I've run these benchmarks with the current bleeding-edge scipy. But enough words. Here we'll create some scripts to run these benchmarks. There are several variables that will affect the computation time for a neighbors query:

- The number of points $N$: for a brute-force search, the query will scale as $\mathcal{O}[N^2]$. Tree methods usually bring this down to $\mathcal{O}[N \log N]$.
- The dimension of the data, $D$: both brute-force and tree-based methods will scale approximately as $\mathcal{O}[D]$. For high dimensions, however, the curse of dimensionality can make this scaling much worse.
- The desired number of neighbors, $k$: $k$ does not affect build time, but affects query time in a way that is difficult to quantify
- The tree leaf size, leaf_size: The leaf size of a tree roughly specifies the number of points at which the tree switches to brute-force, and encodes the tradeoff between the cost of accessing a node, and the cost of computing the distance function.
- The structure of the data: though data structure and distribution do not affect brute-force queries, they can have a large effect on the query times of tree-based methods.
- Single/Dual tree query: A single-tree query searches for neighbors of one point at a time. A dual tree query builds a tree on both sets of points, and traverses both trees at the same time. This can lead to significant speedups in some cases.
- Breadth-first vs Depth-first search: This determines how the nodes are traversed. In practice, it seems not to make a significant difference, so it won't be explored here.
- The chosen metric: some metrics are slower to compute than others. The metric may also affect the structure of the data, the geometry of the tree, and thus the query and build times.

In reality, query times depend on all seven of these variables in a fairly complicated way. For that reason, I'm going to show several rounds of benchmarks where these variables are modified while holding the others constant. We'll do all our tests here with the most common Euclidean distance metric, though others could be substituted if desired. We'll start by doing some imports to get our IPython notebook ready for the benchmarks. Note that at present, you'll have to install scikit-learn off my development branch for this to work. In the future, the new KDTree and BallTree will be part of a scikit-learn release.

%pylab inline

Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline]. For more information, type 'help(pylab)'.

import numpy as np
from scipy.spatial import cKDTree
from sklearn.neighbors import KDTree, BallTree

For spatial tree benchmarks, it's important to use various realistic data sets. In practice, data rarely looks like a uniform distribution, so running benchmarks on such a distribution will not lead to accurate expectations of the algorithm performance.
For this reason, we'll test three datasets side-by-side: a uniform distribution of points, a set of pixel values from images of hand-written digits, and a set of flux observations from astronomical spectra.

# Uniform random distribution
uniform_N = np.random.random((10000, 4))
uniform_D = np.random.random((1797, 128))

# Digits distribution
from sklearn.datasets import load_digits
digits = load_digits()
print digits.images.shape

(1797, 8, 8)

# We need more than 1797 digits, so let's stack the central
# regions of the images to inflate the dataset.
digits_N = np.vstack([digits.images[:, 2:4, 2:4],
                      digits.images[:, 2:4, 4:6],
                      digits.images[:, 4:6, 2:4],
                      digits.images[:, 4:6, 4:6],
                      digits.images[:, 4:6, 5:7],
                      digits.images[:, 5:7, 4:6]])
digits_N = digits_N.reshape((-1, 4))[:10000]

# For the dimensionality test, we need up to 128 dimensions, so
# we'll combine some of the images.
digits_D = np.hstack((digits.data,
                      np.vstack((digits.data[:1000],
                                 digits.data[1000:]))))

# The edge pixels are all basically zero. For the dimensionality tests
# to be reasonable, we want the low-dimension case to probe interior pixels
digits_D = np.hstack([digits_D[:, 28:], digits_D[:, :28]])

# The spectra can be downloaded with astroML: see
from astroML.datasets import fetch_sdss_corrected_spectra
spectra = fetch_sdss_corrected_spectra()['spectra']
spectra.shape

(4000, 1000)

# Take sections of spectra and stack them to reach N=10000 samples
spectra_N = np.vstack([spectra[:, 500:504],
                       spectra[:, 504:508],
                       spectra[:2000, 508:512]])

# Take a central region of the spectra for the dimensionality study
spectra_D = spectra[:1797, 400:528]

print uniform_N.shape, uniform_D.shape
print digits_N.shape, digits_D.shape
print spectra_N.shape, spectra_D.shape

(10000, 4) (1797, 128)
(10000, 4) (1797, 128)
(10000, 4) (1797, 128)

We now have three datasets with similar sizes.
Just for the sake of visualization, let's visualize two dimensions from each as a scatter-plot:

titles = ['Uniform', 'Digits', 'Spectra']
datasets_D = [uniform_D, digits_D, spectra_D]
datasets_N = [uniform_N, digits_N, spectra_N]

fig, ax = plt.subplots(1, 3, figsize=(12, 3.5))
for axi, title, dataset in zip(ax, titles, datasets_D):
    axi.plot(dataset[:, 1], dataset[:, 2], '.k')
    axi.set_title(title, size=14)

We can see how different the structure is between these three sets. The uniform data is randomly and densely distributed throughout the space. The digits data actually comprise discrete values between 0 and 16, and more-or-less fill certain regions of the parameter space. The spectra display strongly-correlated values, such that they occupy a very small fraction of the total parameter volume.

Now we'll create some scripts that will help us to run the benchmarks. Don't worry about these details for now -- you can simply scroll down past these and get to the plots.

from time import time

def average_time(executable, *args, **kwargs):
    """Compute the average time over N runs"""
    N = 5
    t = 0
    for i in range(N):
        t0 = time()
        res = executable(*args, **kwargs)
        t1 = time()
        t += (t1 - t0)
    return res, t * 1. / N

TREE_DICT = dict(cKDTree=cKDTree, KDTree=KDTree, BallTree=BallTree)
colors = dict(cKDTree='black', KDTree='red', BallTree='blue',
              brute='gray', gaussian_kde='black')

def bench_knn_query(tree_name, X, N, D, leaf_size, k,
                    build_args=None, query_args=None):
    """Run benchmarks for the k-nearest neighbors query"""
    Tree = TREE_DICT[tree_name]

    if build_args is None:
        build_args = {}

    if query_args is None:
        query_args = {}

    NDLk = np.broadcast(N, D, leaf_size, k)
    t_build = np.zeros(NDLk.size)
    t_query = np.zeros(NDLk.size)

    for i, (N, D, leaf_size, k) in enumerate(NDLk):
        XND = X[:N, :D]

        if tree_name == 'cKDTree':
            build_args['leafsize'] = leaf_size
        else:
            build_args['leaf_size'] = leaf_size

        tree, t_build[i] = average_time(Tree, XND, **build_args)
        res, t_query[i] = average_time(tree.query, XND, k, **query_args)

    return t_build, t_query

def plot_scaling(data, estimate_brute=False, suptitle='', **kwargs):
    """Plot the scaling comparisons for different tree types"""
    # Find the iterable key
    iterables = [key for (key, val) in kwargs.iteritems() if hasattr(val, '__len__')]
    if len(iterables) != 1:
        raise ValueError("A single iterable argument must be specified")
    x_key = iterables[0]
    x = kwargs[x_key]

    # Set some defaults
    if 'N' not in kwargs:
        kwargs['N'] = data.shape[0]
    if 'D' not in kwargs:
        kwargs['D'] = data.shape[1]
    if 'leaf_size' not in kwargs:
        kwargs['leaf_size'] = 15
    if 'k' not in kwargs:
        kwargs['k'] = 5

    fig, ax = plt.subplots(1, 2, figsize=(10, 4),
                           subplot_kw=dict(yscale='log', xscale='log'))

    for tree_name in ['cKDTree', 'KDTree', 'BallTree']:
        t_build, t_query = bench_knn_query(tree_name, data, **kwargs)
        ax[0].plot(x, t_build, color=colors[tree_name], label=tree_name)
        ax[1].plot(x, t_query, color=colors[tree_name], label=tree_name)

        if tree_name != 'cKDTree':
            t_build, t_query = bench_knn_query(tree_name, data,
                                               query_args=dict(breadth_first=True,
                                                               dualtree=True),
                                               **kwargs)
            ax[0].plot(x, t_build, color=colors[tree_name], linestyle='--')
            ax[1].plot(x, t_query, color=colors[tree_name], linestyle='--')

    if estimate_brute:
        Nmin = np.min(kwargs['N'])
        Dmin = np.min(kwargs['D'])
        kmin = np.min(kwargs['k'])

        # get a baseline brute force time by setting the leaf size large,
        # ensuring a brute force calculation over the data
        _, t0 = bench_knn_query('KDTree', data, N=Nmin, D=Dmin,
                                leaf_size=2 * Nmin, k=kmin)

        # use the theoretical scaling: O[N^2 D]
        if x_key == 'N':
            exponent = 2
        elif x_key == 'D':
            exponent = 1
        else:
            exponent = 0

        t_brute = t0 * (np.array(x, dtype=float) / np.min(x)) ** exponent
        ax[1].plot(x, t_brute, color=colors['brute'], label='brute force (est.)')

    for axi in ax:
        axi.grid(True)
        axi.set_xlabel(x_key)
        axi.set_ylabel('time (s)')
        axi.legend(loc='upper left')
        axi.set_xlim(np.min(x), np.max(x))

    info_str = ', '.join([key + '={' + key + '}'
                          for key in ['N', 'D', 'k'] if key != x_key])
    ax[0].set_title('Tree Build Time ({0})'.format(info_str.format(**kwargs)))
    ax[1].set_title('Tree Query Time ({0})'.format(info_str.format(**kwargs)))

    if suptitle:
        fig.suptitle(suptitle, size=16)

    return fig, ax

Now that all the code is in place, we can run the benchmarks. For all the plots, we'll show the build time and query time side-by-side. Note the scales on the graphs below: overall, the build times are usually a factor of 10-100 faster than the query times, so the differences in build times are rarely worth worrying about. A note about legends: we'll show single-tree approaches as a solid line, and we'll show dual-tree approaches as dashed lines. In addition, where it's relevant, we'll estimate the brute force scaling for ease of comparison. We will start by exploring the scaling with the leaf_size parameter: recall that the leaf size controls the minimum number of points in a given node, and effectively adjusts the tradeoff between the cost of node traversal and the cost of a brute-force distance estimate.
leaf_size = 2 ** np.arange(10)
for title, dataset in zip(titles, datasets_N):
    fig, ax = plot_scaling(dataset, N=2000, leaf_size=leaf_size, suptitle=title)

Note that with larger leaf size, the build time decreases: this is because fewer nodes need to be built. For the query times, we see a distinct minimum. For very small leaf sizes, the query slows down because the algorithm must access many nodes to complete the query. For very large leaf sizes, the query slows down because there are too many pairwise distance computations. If we were to use a less efficient metric function, the balance between these would change and a larger leaf size would be warranted. This benchmark motivates our setting the leaf size to 15 for the remaining tests.

Here we'll plot the scaling with the number of neighbors $k$. This should not affect the build time, because $k$ does not enter there. It will, however, affect the query time:

k = 2 ** np.arange(1, 10)
for title, dataset in zip(titles, datasets_N):
    fig, ax = plot_scaling(dataset, N=4000, k=k, suptitle=title, estimate_brute=True)

Naively you might expect linear scaling with $k$, but for large $k$ that is not the case. Because a priority queue of the nearest neighbors must be maintained, the scaling is super-linear for large $k$. We also see that brute force has no dependence on $k$ (all distances must be computed in any case). This means that if $k$ is very large, a brute force approach will win out (though the exact value for which this is true depends on $N$, $D$, the structure of the data, and all the other factors mentioned above). Note that although the cKDTree build time is a factor of ~3 faster than the others, the absolute time difference is less than two milliseconds: a difference which is orders of magnitude smaller than the query time. This is due to the shortcut mentioned above: the cKDTree doesn't take the time to shrink the bounds of each node.
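The priority-queue bookkeeping behind that super-linear scaling is easy to see in a pure-Python brute-force sketch (for illustration only -- the benchmarked trees use optimized Cython internals):

```python
import heapq

def k_nearest_brute(points, query, k):
    """Brute-force k-NN with a bounded max-heap of size k."""
    heap = []  # entries are (-distance, point); heap[0] is the farthest kept
    for p in points:
        d = sum((a - b) ** 2 for a, b in zip(p, query)) ** 0.5
        if len(heap) < k:
            heapq.heappush(heap, (-d, p))
        elif d < -heap[0][0]:
            # closer than the farthest of the current k: replace it
            heapq.heapreplace(heap, (-d, p))
    # return (distance, point) pairs sorted nearest-first
    return sorted((-nd, p) for nd, p in heap)
```

Each candidate point costs an extra $\mathcal{O}[\log k]$ heap operation when it displaces an existing neighbor, which is where the extra dependence on $k$ comes from.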
This is where things get interesting: the scaling with the number of points $N$:

N = (10 ** np.linspace(2, 4, 10)).astype(int)
for title, dataset in zip(titles, datasets_N):
    plot_scaling(dataset, N=N, estimate_brute=True, suptitle=title)

We have set D = 4 and k = 5 in each case for ease of comparison. Examining the graphs, we see some common traits: all the tree algorithms seem to be scaling as approximately $\mathcal{O}[N\log N]$, and both kd-trees are beating the ball tree. Somewhat surprisingly, the dual tree approaches are slower than the single-tree approaches. For 10,000 points, the speedup over brute force is around a factor of 50, and this speedup will get larger as $N$ further increases. Additionally, the comparison of datasets is interesting. Even for this low dimensionality, the tree methods tend to be slightly faster for structured data than for uniform data. Surprisingly, the cKDTree performance gets worse for highly structured data. I believe this is due to the use of the sliding midpoint rule: it works well for evenly distributed data, but for highly structured data can lead to situations where there are many very sparsely-populated nodes.

As a final benchmark, we'll plot the scaling with dimension.

D = 2 ** np.arange(8)
for title, dataset in zip(titles, datasets_D):
    plot_scaling(dataset, D=D, estimate_brute=True, suptitle=title)

As we increase the dimension, we see something interesting. For more broadly-distributed data (uniform and digits), the dual-tree approach begins to out-perform the single-tree approach, by as much as a factor of 2. In the bottom-right panel, we again see a strong effect of the cKDTree's shortcut in construction: because it builds nodes which span the entire volume of parameter space, most of these nodes are quite empty, especially as the dimension is increased. This leads to queries which are quite a bit slower for sparse data in high dimensions, and overwhelms by a factor of 100 any computational savings at construction.
In a lot of ways, the plots here are their own conclusion. But in general, this exercise convinces me that the new Ball Tree and KD Tree in scikit-learn are at the very least equal to the scipy implementation, and in some cases much better:

- All three trees scale in the expected way with the number and dimension of the data
- All three trees beat brute force by orders of magnitude in all but the most extreme circumstances.
- The cKDTree seems to be less optimal for highly-structured data, which is the kind of data that is generally of interest.
- The cKDTree has the further disadvantage of using dynamically allocated nodes, which cannot be serialized. The pre-allocation of memory for the new ball tree and kd tree solves this problem.

On top of this, the new ball tree and kd tree have several other advantages, including more flexibility in traversal methods, more available metrics, and more available query types (e.g. KDE and 2-point correlation). One thing that still puzzles me is the fact that the dual tree approaches don't offer much of an improvement over single tree. The literature on the subject would make me expect otherwise (FastLab, for example, quotes near-linear-time queries for dual tree approaches), so perhaps there's some efficiency I've missed. In a later post, I plan to go into more detail and explore and benchmark some of the new functionalities added: the kernel density estimation and 2-point correlation function methods. Until then, I hope you've found this post interesting, and I hope you find this new code useful! This post was written entirely in the IPython notebook. You can download this notebook, or see a static view here.
http://jakevdp.github.io/blog/2013/04/29/benchmarking-nearest-neighbor-searches-in-python/
22 May 2013 19:56 [Source: ICIS news] HOUSTON (ICIS)--Ford Motor Co said on Wednesday that it will increase its North American manufacturing capacity by 200,000 vehicles to meet increased demand for its cars and light trucks. The American Chemistry Council (ACC) estimates that each automobile contains an average of $3,297 (€2,571) worth of chemicals, such as acrylonitrile-butadiene-styrene (ABS), nylon, polycarbonate (PC) and others. Ford built 2.8m vehicles in According to the company, it is already working at near full capacity, and cutting its annual shutdown period by just one week will add 40,000 units. It also plans to speed up production at other plants to meet demand. Ford recently said it would add a third shift to
http://www.icis.com/Articles/2013/05/22/9671370/US-Ford-to-increase-North-American-production-by-200000-units.html
On Mon, 2002-11-18 at 16:58, Stuart Donaldson wrote:
> It looks like the MixIn approach would work, but it doesn't feel like a good
> solution. Perhaps I just don't have enough of the python religion, but it
> feels like this kind of solution should be reserved for extreme cases.
>
> The well designed solution seems to encourage the approach of overriding the
> methods through subclassing. This works better for documentation tools, and
> just plain old following the code. The idea of the MixIn is pretty cool for
> cases where subclassing can't be implemented. But in the case of Webware,
> we have control over the modules in question, and the design of the rest of
> the system seems to support subclassing.

It's not so much a Python thing. I originally thought subclassing would be best, but I realized that it can be awkward in some ways. If you want to use local subclassing you could always just change the import statement to use your local subclass. This won't be entirely seamless for upgrading, but I don't think that's a big problem. Of course, you couldn't easily distribute the enhanced class -- but I think that's a problem anyway with subclassing. You can distribute a subclassing of a single class, but that doesn't leave a good way to use two customizations (even though they might not otherwise conflict). Even if it seems kind of weird, I think the mixin approach solves most of these -- if you make your customization into a plug-in, it can change the classes when it's initialized. This should make it easy to distribute customizations.

>? I dunno... everyone who uses Webware is a developer, after all, so it's a vague distinction. But I suppose since we're talking about changes to Webware itself it's a -devel sort of discussion... just a slightly smaller group of readers.

Ian
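[For concreteness, here's roughly how a plug-in could apply a mixin to an existing class at initialization time -- the class and method names below are hypothetical illustrations, not actual Webware APIs:]

```python
class GreetingServlet:
    # Stand-in for an existing class the plug-in wants to customize.
    def respond(self):
        return "hello"

class ShoutMixIn:
    # The customization: wraps the original behavior via the MRO.
    def respond(self):
        return super().respond().upper() + "!"

def with_mixin(base, mixin):
    # Build a combined class so the mixin's methods run first; a
    # plug-in could install this in place of the original class
    # when it's initialized.
    return type(mixin.__name__ + base.__name__, (mixin, base), {})

LoudServlet = with_mixin(GreetingServlet, ShoutMixIn)
print(LoudServlet().respond())  # HELLO!
```

Because the mixin sits in front of the base class in the MRO, two independent customizations can be layered onto the same class without either one needing to know about the other -- which is the distribution advantage discussed above.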
http://sourceforge.net/p/webware/mailman/message/3314943/
10 May 2012 11:40 [Source: ICIS news] LONDON (ICIS)--World oil demand has halted its decline and has started to show growth following stabilisation in the The cartel’s global oil demand growth forecast for 2012 is set at 900,000 bbl/day to average 88.7m bbl/day, a minor upward change from the last estimate. “World oil demand has – at least, for the short term – stopped its decline and has begun to show growth,” OPEC said in its May monthly oil report. Oil use in both the And in the This would cut the current 2012 world demand growth forecast by between 200,000 bbl/day and 300,000 bbl/day, said OPEC. Also, if Projected demand for OPEC crude in 2012 was unchanged from the previous report, at an average of 30m bbl/day. Non-OPEC supply is forecast to grow by 600,000 bbl/day in 2012, a slight upward revision of 50,000 bbl/day from the previous report. OPEC’s 2012 world economic growth expectations were unchanged
http://www.icis.com/Articles/2012/05/10/9558137/world-oil-demand-halts-decline-starts-to-show-growth-opec.html
Although Raspberry Pi and Arduino are two different kinds of hardware in terms of their applications and structure, they are both considered competing open-source hardware platforms, and both have very strong communities and support. Today we will change things slightly and show you how we can take advantage of both of them. If you have both an Arduino and a Raspberry Pi board, this article will show you how to use the Raspberry Pi and Python to control the Arduino.

We will use the PyFirmata firmware to give commands to the Arduino from a Raspberry Pi Python script. PyFirmata is a Python library that talks to an Arduino running the Firmata firmware, allowing serial communication between a Python script on any computer and the Arduino. The package gives the script read and write access to any pin on the Arduino. So here we will drive the Arduino from a Python program running on the Raspberry Pi, taking advantage of this library to control the Arduino board.

Requirements
- Raspberry Pi with Raspbian OS installed in it
- Arduino Uno or any other Arduino board
- Arduino USB cable
- LED

In this tutorial I am using an external monitor connected to the Raspberry Pi over HDMI. If you don’t have a monitor, you can use an SSH client (PuTTY) or a VNC server to connect to the Raspberry Pi from a laptop or computer. If you find any difficulty, follow our Getting Started with Raspberry Pi guide.

Installing PyFirmata in Arduino using Raspberry Pi

To upload the PyFirmata firmware to the Arduino, we have to install the Arduino IDE on the Raspberry Pi. Follow these steps to install:

Step 1:- Connect the Raspberry Pi to the internet. Open a command terminal, type the following command and hit enter:

sudo apt-get -y install arduino python-serial mercurial

Wait for a few minutes; it will take time. This command will install the Arduino IDE on your Raspberry Pi.
Step 2:- Now, we will install the pyFirmata files from the given GitHub repository:

git clone

Then run the following commands:

cd pyFirmata
sudo python setup.py install

Step 3:- We have installed all the required files and setups. Now, connect your Arduino board to the Raspberry Pi using the USB cable and launch the Arduino IDE by typing arduino in a terminal window.

Step 4:- Then type the lsusb command to check whether the Arduino is connected to your Raspberry Pi. In the Arduino IDE, go to Tools and choose your board and serial port.

Step 5:- Upload the PyFirmata firmware to the Arduino by clicking File -> Examples -> Firmata -> Standard Firmata and then clicking the upload button, as shown below.

We have successfully installed the pyFirmata firmware on the Arduino board. Now, we can control our Arduino using the Raspberry Pi. For demonstration we will blink and fade an LED on the Arduino by writing Python code on the Raspberry Pi.

Code Explanation

For the coding part, you should read the documentation of pyFirmata for better understanding. We will use pyFirmata functions to write our code. You can find the pyFirmata documentation by following the link. So let’s start writing the code.

Open your favorite text editor on the Raspberry Pi and import the pyFirmata library:

import pyfirmata

Define the pin on the Arduino to connect the LED:

led_pin = 9

Now, we have to give the name of the serial port on which the Arduino board is connected to the pyfirmata.Arduino() function, and make an instance by assigning the result to the board variable:

board = pyfirmata.Arduino("/dev/ttyACM0")
print "Code is running"

In a while loop, drive the LED pin HIGH and LOW using the board.digital[].write() function, and add a delay using the board.pass_time() function:

while True:
    board.digital[led_pin].write(0)
    board.pass_time(1)
    board.digital[led_pin].write(1)
    board.pass_time(1)

Our code is ready; save this code by putting the .py extension in the file name. Open a command terminal and type python blink.py to run the code on the Arduino board.
Make sure your Arduino board is connected to your Raspberry Pi board using the USB cable. Now, you can see the LED blinking on the Arduino board. The complete code for blinking an LED using pyFirmata is given at the end.

Fading LED on Arduino using pyFirmata

Now, we will write code for fading the LED to make you more familiar with the pyFirmata functions. This code is as easy as the previous one. You have to use two for loops, one to increase brightness and the other to decrease it. In this code, we have defined the pin in a different way, like led = board.get_pin('d:9:p'), where d means digital pin. This is a function of the pyFirmata library; read the documentation for more details. The complete code for fading an LED using pyFirmata is given at the end.

Now, you can add more sensors to your system and make it even cooler; check our other Arduino projects and try building them using a Raspberry Pi and a Python script.

Python code for LED blink:

import pyfirmata

led_pin = 9
board = pyfirmata.Arduino("/dev/ttyACM0")

while True:
    board.digital[led_pin].write(0)
    board.pass_time(1)
    board.digital[led_pin].write(1)
    board.pass_time(1)

Python code for fading LED:

import time
import pyfirmata

delay = 0.3
brightness = 0
board = pyfirmata.Arduino("/dev/ttyACM0")
led = board.get_pin('d:9:p')

while True:
    # increase
    for i in range(0, 10):
        brightness = brightness + 0.1
        print "Setting brightness to %s" % brightness
        led.write(brightness)
        board.pass_time(delay)
    # decrease
    for i in range(0, 10):
        print "Setting brightness to %s" % brightness
        led.write(brightness)
        brightness = brightness - 0.1
        board.pass_time(delay)
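If you want to preview the duty-cycle values the fade loops step through without any hardware attached, the same loop logic can be run in plain Python. This is only an illustration of the ramp, not part of the tutorial's code; round() is used here because repeatedly adding 0.1 accumulates floating-point error (0.30000000000000004 and so on), which would otherwise make the printed values messy.

```python
brightness = 0.0

ramp_up = []
for i in range(10):                          # same shape as the "increase" loop
    brightness = round(brightness + 0.1, 1)  # round() tames float drift
    ramp_up.append(brightness)

ramp_down = []
for i in range(10):                          # same shape as the "decrease" loop
    ramp_down.append(brightness)
    brightness = round(brightness - 0.1, 1)

print(ramp_up)    # [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
print(ramp_down)  # [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
```

Each of these values is what led.write() would receive for the PWM pin, with 0.0 meaning fully off and 1.0 fully on.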
https://circuitdigest.com/microcontroller-projects/controlling-arduino-with-raspberry-pi-using-pyfirmata
Ok I am writing a program with arrays and could use a little help. I have to build a two-dimensional array with data from a file. The file has data that is first two numbers followed by a list of characters. So the first line of data is 2 4ABCDEFGH, which is 2 rows and 4 columns, and an array that looks like

A B C D
E F G H

I need to have a function to read in the array and then a function to print the array. I am having trouble reading the rows and columns from the file and getting them into the array. Where R is 2 and C is 4, I tried

infile>>R>>C
Array[R][C]={ }; // It kept saying I had an array of value 0

I am also not sure my function prototype and function header are right, and this is really giving me some problems. The only way I can get it to work is by making R and C global and setting them equal to an integer, not reading the value from the file. I am also having trouble filling the array with the characters.

#include "stdafx.h"
#include <iostream>
#include <string>
#include <iomanip>
#include <fstream>
using namespace std;

const int C=4;
const int R=2;

void showarray(char [][C], int, char);

int _tmain(int argc, _TCHAR* argv[])
{
    int rows, cols;
    char X;
    ifstream infile;
    ofstream outfile;
    infile.open("Infile.txt");
    infile>>rows>>cols;
    infile>>X;
    char Array[R][C]={ };
    showarray(Maze,R,X);
    return 0;
}

void showarray(char Array[][C], int R, char X)
{
    for(int row=0; row<R; row++)
    {
        for(int col=0; col<C; col++)
        {
            Array[row][col]=X;
            cout<<Array[row][col];
        }
    }
    cout<<endl;
}
https://www.daniweb.com/programming/software-development/threads/262020/arrays-and-functions
API Access in KACE1000 Version 12.0.149

Hi everyone, until the last versions of KACE 1000 we used WSAPI to connect our Linux-based machines to KACE. Now we got the information that this way is out of date and we need to use the "regular" API. I think there is a fundamental problem, because when we run the url: It shows:

{"error": "API disabled."}

But I can not find a way to enable the API support. Can you help me with this? We use the version:

Model: K1000
Hardware Model: Virtual (VMware/Hyper-V/Nutanix)
Version: 12.0.149

-> we also set up two organisations

Thanks
Alex

Answers (2)

You start with a POST request to authenticate to the SMA server. See the API documentation for more details. Save the x-kace-csrf token to be used in subsequent API calls. You may want to search previously posted questions regarding the API, as many have code examples. What programming language do you plan to use with the API?

- Thanks for your help! We used Python in the past with WSAPI, so we would like to keep Python :-) I saw your other examples and tried it with these but still get error messages. I created a new user in KACE for the API usage with admin rights in /systemui/settings_users_list.php (Settings -> Administrators). Error messages:

{ "errorCode": -1, "errorDescription": "Invalid CSRF Token" }
{ "errorCode": -1, "errorDescription": "Invalid user or password" }

The code is:

import requests
import json
from requests.sessions import Session

session = Session()

# Start of POST to authenticate to SMA server.
# The x-dell-csrf-token is saved to be used in subsequent API calls.
# Change the URL IP and credentials for your environment.
url = "
payload = json.dumps({
    "userName": "...",
    "organizationName": "..."
})
headers = {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
    'x-dell-api-version': '5'
}

# To avoid SSL error messages:
session.verify = False

response = session.post(url, headers=headers, data=payload)

# The following line will display the JSON returned from the response.
print(json.dumps(response.json(), indent=3))

csrf_token = response.headers.get('x-dell-csrf-token')

# You should see the x-dell-csrf-token in the print output
print(csrf_token)

# The following API call will retrieve the list of devices on the SMA
url = "
headers = {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
    'x-dell-api-version': '5',
    'x-dell-csrf-token': csrf_token
}

response = session.get(url, headers=headers)
print(json.dumps(response.json(), indent=3))

session.close()

- Alex-2k19 3 months ago

- ok.... the script is good! It's all about my user! KACE doesn't use the user I created under "/systemui/settings_users_list.php (Settings -> Administrators)". Even with a new user I created under "/adminui/user_list.php (Settings -> User)" it was not working! We do authenticate the KACE users against Active Directory, so I tried it with an AD user and it was working like a charm! - Alex-2k19 3 months ago

I would suggest that you download the latest API guide here. If, once you have verified that the commands you are using are correct, you still come back with an error, then I suggest you log a call with Support.

Let's just hope that Quest are not returning to the bad old days from a few years ago when they didn't bother to keep the API up to date......... Good Luck
https://www.itninja.com/question/api-access-in-kace1000-version-12-0-149
In my first three articles in this series, I named my picks for the most important contributions to C++ in the categories of books, non-book publications, and software: In this fourth installment, I focus on people. C++ is a technology, but behind the technology are the people who invent it, shape it, popularize it, and use it. This week, I explain who I consider to be the five most important people who’ve been involved in C++. The people on the list all have a strong public presence. There are two reasons for this. First, such “front men” (and I’m sorry, but they’re all men, and I really am sorry about that) have a direct effect on more people, simply by virtue of being more visible. The more people you affect, the more important you are. That’s just the way it is. The second reason why the list is filled with public figures is that, as I explained in the opening article in this series, my perspective is largely that of an outsider. There may be or have been people who’ve had an enormous impact on C++ who operate or have operated behind the scenes, out of my view. Maybe Stroustrup is just a pretty-boy stage presence for somebody else who does or did all the technical work. Maybe the accomplishments of the standardization committee are in fact the dictates of a cabal of individuals who choose to remain unrecognized. If so, they’ve succeeded: I don’t recognize them. And they’re not on the list. As I worked on the list entries, I realized that (1) it helps to have been working with C++ for a long time, and (2) it also helps to be working with C++ today. Lots of people have been important, but the most important ones have been around a long time, have been consistent contributors to C++, and continue to work with it even now. That said, here’s my list of the most important names in C++, ordered by when they published something related to C++ in a form more formal than a newsgroup posting. 
(This is always later than when they started working with the language, because it takes a while to know enough to have something worth saying to others.) As with my other lists, I limit myself to five names here: no ties, no honorable mentions. Because I’m now dealing with people instead of inanimate objects, that makes this list the most difficult to write. Still, the rules are the rules, and I’m determined to stick to them. Bjarne Stroustrup, 1985-present. Well, duh. He invented the language, he wrote the first compiler for it, he’s published extensively about it (see his publications page for details), he’s been actively involved in its dissemination and standardization, and he continues to work with it to this day. (See, for example, his recent paper on SELLs [PDF] and the STAPL research project, on which he is a collaborator.) Stroustrup could have retired from C++ activities years ago and settled back to bask in the breathless accolades to which all successful inventors are entitled. That he has instead chosen to continue working on the research project he began nearly 30 years ago is a tribute to his dedication to what is now C++ (and was originally “C with Classes”). Though I don’t order by importance any of list entries in this series of articles, it’s hard to imagine anybody being more important to C++ than the person who invented it, initially implemented it, and has since helped guide it into the software development force it is today. Andrew Koenig, 1988-present. Andrew Koenig is the only person on this list who’d probably still be on it even if he’d never published anything about C++. For whatever reason, I tend to think of him as a low-profile “insider,” but his publication history belies this image. He’s written two C++ books (with collaborator Barbara Moo), one C book (which doesn’t really count for C++ purposes, but even so...) and gads of magazine columns (his home page links to lists of them). 
Still, what strikes me most about Koenig is the frequency with which his name is mentioned by others, especially in the context of C++ standardization. For example, he’s largely responsible for recognizing the significance of the STL and shepherding Alexander Stepanov in his work on adding it to the standard at a time when something of that magnitude shouldn’t even have been considered. I can’t count the number of times I’ve heard or read words from members of the standardization committee to the effect that “Well, we were considering that, but then Andrew pointed out that...” or “that was a problem, but then Andy suggested....” In fact, Koenig is, as far as I know, the only person with a C++ language feature named after him. When, during standardization, it was discovered that the namespace-related name lookup rules sometimes failed to allow code like this to compile, std::cout << someObject; Koenig suggested a modification to the rules that quickly became known as “Koenig lookup.”1 In the standard, this rule is officially known as argument-dependent lookup (“ADL” to fans of casual chic), but the section of the Standard that describes it (section 3.4.2, if you must know) bears the label “[basic.lookup.koenig].” Scott Meyers, 1991-present. As it happens, writing about why you think you’re important is not the thrill you might imagine. Still, I’m doing my best to be objective here, and there’s a fair amount of evidence that I’ve left a mark—possibly a scar—on the world of C++. I’ve written either 3 or 6 books on C++ (depending on whether, like my wife, you don’t count revised editions or, like the person who did the work writing them, you do), and they’ve achieved wide circulation. I’ve written nearly 50 columns and articles about C++ and its application, and over the years I’ve given presentations at conferences and corporate events to, collectively, thousands of developers. 
Many of the guidelines I’ve published have become part of the accepted wisdom regarding “good” C++ programming, 2 and vendors of lint-like tools for C++ commonly support them, often referring to my writings as their justification. I’ve apparently even played a small role in C++ standardization by, er, making mistakes in public. It’s my understanding that at least two standardization proposals have offered arguments to the effect that “this is a problem we need to address, because even Scott Meyers can’t get it right.” (For one example, check out the proposal for adding smart pointers to TR1, and search for my name.) Herb Sutter, 1997-present. Sutter entered the C++ scene nearly a decade ago, and “prolific” hardly begins to describe the scale of his activities. Start with the three books he authored and the one he co-authored. Continue to the more than 200 columns and articles he’s written or co-written (most by him alone), and let your mind boggle at the fact that for a fair amount of time, he was sole or co- author of three columns simultaneously.3 I can’t imagine how many proposals and other documents he’s written for the C++ standardization committee, but did I mention that he’s the chair for that committee? He’s also a former Editor-in-Chief for C++ Report, a frequent speaker at conferences and similar events, and the most consistently enthusiastic advocate for C++ I’ve ever known. Currently, he’s waging a high-profile campaign arguing that multithreading will soon become critical for performance-sensitive applications, and he’s working on new ways—possible future C++ language or library extensions?—to make it easier for developers to work with multithreaded code. 4 Andrei Alexandrescu, 1998-present. Because of the revolution in our thinking about templates following publication of his Modern C++ Design, Alexandrescu’s name is strongly associated with templates. In some circles, it’s synonymous with templates, but that’s unfortunate. 
His contributions to C++ encompass far more than just interesting new ways to apply angle brackets (though, as far as I know, he was the first to demonstrate the utility of template template parameters, i.e., template parameters to templates). Even setting aside the book he co-authored with Herb Sutter (C++ Coding Standards, Addison-Wesley, 2005), a quick survey of the 40+ articles he’s published shows contributions in the areas of object copying, alignment constraint enforcement, multithreaded programming, exception-safety, and searching, often with an eye towards improving performance over “standard” approaches. For my money, the only person whose work has been consistently worth paying attention to these last few years is Alexandrescu’s. Most other writers and speakers (including me) periodically return to well-tilled fields from time to time and claim to glean new insights, but with unrivaled frequency, Alexandrescu not only breaks new ground, he plants entirely new crops in it.5 I’ve now listed my picks for the five most important C++-related books, non-book publications, software, and people in the history of the language. I noted at the outset of this series that these lists are inherently subjective, but for my final article in this series, I want to go beyond subjective to downright personal. Next week, I’ll identify my five most important “Aha!” moments in C++—five moments when something suddenly clicked, and I reached a new level of understanding about some aspect of the language, its workings, or its application. Discuss this article in the Articles Forum topic, The Most Important C++ People...Ever. 1. Loosely speaking, Koenig lookup says that when the type of an argument to a function call comes from a namespace, the function to be called should be looked up in that namespace, in addition to all the other places that the name would normally be looked up. 
For example, given the call “ std::cout << someObject”, operator<< would be looked up in the namespace where someObject is defined, in addition to all the usual places where operator<< would be looked up. This is helpful when, as is typically the case, functions like operator<< are defined in the same namespace as the types they operate on. 2. Most of the guidelines I’ve published over the years were already “common knowledge” in some parts of the C++ community. My primary contribution wasn’t to invent such guidelines, it was to popularize them. 3. The immensity of this accomplishment is easier to appreciate if you’ve been a columnist yourself, as I have been. I had trouble coming up with something worth reading six times a year. Sutter has been known to do it three times a month. 4. He’s also an architect at Microsoft working on C++/CLI, something I mention only in a note, because I consider C++/CLI a dialect of C++ rather than C++ itself. And I still have no idea how he finds the time to work on all the things he does. 5. This doesn’t mean he invents everything he writes or speaks about. Especially since becoming a doctoral student in 2001, he’s often brought academic research results to the attention of the broader C++ community. This has especially been the case with his writings and presentations about lock-free programming [PDF]. Part I in this series, “The Most Important C++ Books...Ever”: Part II, “The Most Important C++ Non-Book Publications...Ever”: Part III, “The Most Important C++ Software...Ever”: Bjarne Stroustrup’s home page: A list of Bjarne Stroustrup’s publications: “A rationale for semantically enhanced library languages” (SELLS), by Bjarne Stroustrup: [PDF] The Standard Template Adaptive Parallel Library (STAPL) research project: Andrew Koenig’s home page: The proposal for adding smart pointers to TR1: Herb Sutter’s home page: Andrei Alexandrescu’s home page A list of Andrei Alexandrescu’s publications:.
https://www.artima.com/cppsource/top_cpp_peopleP.html
Wiki django-piston / Documentation

Piston Documentation
- Getting Started
- Resources
- Emitters
- Mapping URLs
- Working with Models
- Configuring Handlers
- Authentication
- Form Validation
- Helpers, utils & @decorators
- Throttling
- Generating Documentation
- Tests
- Receiving data
- Streaming
- Configuration variables

Getting Started

Getting started with Piston is easy. Your API code will look and behave just like any other Django application. It will have a URL mapping and handlers defining resources. To get started, it is recommended that you place your API code in a separate folder, e.g. 'api'. Your application layout could look like this:

urls.py
settings.py
myapp/
    __init__.py
    views.py
    models.py
    api/
        __init__.py
        urls.py
        handlers.py

Then, define a "namespace" where your API will live in your top-level urls.py, like so:

urlpatterns = patterns('',
    # all my other url mappings
    (r'^api/', include('mysite.api.urls')),
)

This will include the API's urls.py for anything beginning with 'api/'. Next up we'll look at how we can create resources and how to map URLs to them.

Resources

A "Resource" is an entity mapping some kind of resource in code. This could be a blog post, a forum or even something completely arbitrary. Let's start out by creating a simple handler in handlers.py:

from piston.handler import BaseHandler
from myapp.models import Blogpost

class BlogpostHandler(BaseHandler):
    allowed_methods = ('GET',)
    model = Blogpost

    def read(self, request, post_slug):
        ...

Piston lets you map a resource to a model, and by doing so, it will do a lot of the heavy lifting for you. A resource can be just a class, but usually you would want to define at least 1 of 4 methods:

- read is called on GET requests, and should never modify data (idempotent.)
- create is called on POST, and creates new objects, and should return them (or rc.CREATED.)
- update is called on PUT, and should update an existing object and return it (or rc.ALL_OK.)
- delete is called on DELETE, and should delete an existing object. Should not return anything, just rc.DELETED.

In addition to these, you may define any other methods you want. You can use these by including their names in the fields directive, and by doing so, the function will be called with a single argument: the instance of the model. It can then return anything, and the return value will be used as the value for that key.

NB: These "resource methods" should be decorated with the @classmethod decorator, as they will not always receive an instance of itself. For example, if you have a UserHandler defined, and you return a User from another handler, you will not receive an instance of that handler, but rather the UserHandler.

Since a single handler can be responsible for both single- and multiple-object data sets, you can differentiate between them in the read() method like so:

from piston.handler import BaseHandler
from myapp.models import Blogpost

class BlogpostHandler(BaseHandler):
    allowed_methods = ('GET',)
    model = Blogpost

    def read(self, request, blogpost_id=None):
        """
        Returns a single post if `blogpost_id` is given,
        otherwise a subset.
        """
        base = Blogpost.objects

        if blogpost_id:
            return base.get(pk=blogpost_id)
        else:
            return base.all()  # Or base.filter(...)

Emitters

Emitters are what spew out the data, and are the things responsible for speaking YAML, JSON, XML, Pickle and Django. They currently reside in emitters.py as XMLEmitter, JSONEmitter, YAMLEmitter, PickleEmitter and DjangoEmitter.

Writing your own emitters is easy. All you have to do is create a class that subclasses Emitter and has a render method. The render method will receive 1 argument, 'request', which is a copy of the request object, which is useful if you need to look at request.GET (for example to define callbacks, like the JSON emitter does.) To get the data to serialize/render, you can call self.construct(), which always returns a dictionary.
From there, you can do whatever you want with the data and return it (as a unicode string.)

NB: New in 23ebc37c78e8: Emitters can now be registered with the Emitter.register function, and can be removed (in case you want to remove a built-in emitter) via the Emitter.unregister function. The built-in emitters are registered like so:

class JSONEmitter(Emitter):
    ...

Emitter.register('json', JSONEmitter, 'application/json; charset=utf-8')

If you write your own emitters, you can import Emitter and call 'register' on it to put your emitter into action. You can also overwrite built-in, or existing, emitters by using the same name (the first argument.) This makes it very easy to add support for extended formats, like protocol buffers or CSV.

Emitters are accessed via the ?format GET argument, e.g. '/api/blogposts/?format=yaml', but since 23ebc37c78e8 it is also possible to access them via a special keyword argument in your URL mapping. This keyword is called 'emitter_format' (to not clash with your own 'format' keyword), and can be used like so:

urlpatterns = patterns('',
    url(r'^blogposts(\.(?P<emitter_format>.+))$', ...),
)

Now a request for /blogposts.json will use the JSON emitter, etc. Additionally, you may specify the format in your URL mapping, via the keyword arguments shortcut:

urlpatterns = patterns('',
    url(r'^blogposts$', resource_here, { 'emitter_format': 'json' }),
)

Mapping URLs

URL mappings in Piston work just like they do in Django. Let's map our BlogpostHandler.

In urls.py:

from django.conf.urls.defaults import *
from piston.resource import Resource
from mysite.myapp.api.handlers import BlogpostHandler

blogpost_handler = Resource(BlogpostHandler)

urlpatterns = patterns('',
    url(r'^blogpost/(?P<post_slug>[^/]+)/', blogpost_handler),
    url(r'^blogposts/', blogpost_handler),
)

Now any request coming in to /api/blogpost/some-slug-here/ or /api/blogposts/ will map to BlogpostHandler, with the two different data sets being differentiated in the handler itself.
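Returning to the emitter interface described above, here is a sketch of what a custom CSV emitter might look like: subclass Emitter, implement render, call self.construct() for the data, and register under a format name. A tiny stub stands in for piston.emitters.Emitter so the snippet runs on its own; only the register signature and the render/construct contract are taken from the docs, everything else is invented for illustration.

```python
class Emitter:
    """Minimal stand-in for piston.emitters.Emitter (illustrative only)."""
    EMITTERS = {}

    def __init__(self, payload):
        self.payload = payload

    def construct(self):
        # The real method turns handler output into a dict; per the docs,
        # construct() always returns a dictionary.
        return self.payload

    @classmethod
    def register(cls, name, klass, content_type):
        cls.EMITTERS[name] = (klass, content_type)


class CSVEmitter(Emitter):
    def render(self, request):
        data = self.construct()
        header = ",".join(data.keys())
        row = ",".join(str(v) for v in data.values())
        return "%s\n%s" % (header, row)

# Registered just like the built-ins, reachable as ?format=csv.
Emitter.register('csv', CSVEmitter, 'text/csv; charset=utf-8')

print(CSVEmitter({'title': 'Hello', 'id': 1}).render(request=None))
# → title,id
#   Hello,1
```

In a real project you would import Emitter from piston.emitters instead of stubbing it, and render would receive the actual Django request.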
Note that a single handler can be used both for single-object and multiple-object resources.

Anonymous Resources

Resources can also be "anonymous". What does this mean? This is a special type of resource you can instantiate, and it will be used for requests that aren't authorized (via OAuth, Basic or any authentication handler.)

For example, if we look at our BlogpostHandler from earlier, it might be interesting to offer anonymous access to posts, although we don't want to allow anonymous users to create/update/delete posts. Also, we don't want to expose all the fields authorized users see. This can be done by creating another handler, inheriting AnonymousBaseHandler (instead of BaseHandler.) This also takes care of the heavy lifting for you. Like so:

from piston.handler import AnonymousBaseHandler, BaseHandler

class AnonymousBlogpostHandler(AnonymousBaseHandler):
    model = Blogpost
    fields = ('title', 'content')

class BlogpostHandler(BaseHandler):
    anonymous = AnonymousBlogpostHandler
    # same stuff as before

You don't need a "proxy handler" subclassing BaseHandler to use anonymous handlers; you can just point directly at an anonymous resource as well.

Working with Models

Piston allows you to tie to a model, but does not require it. The benefits you get from doing so will become obvious when you work with it:

- If you don't override read/create/update/delete, it provides sensible defaults (if the method is allowed by allowed_methods, of course.)
- You don't have to specify fields or exclude (but you still can; they aren't mutually exclusive!)
- By using a model in a handler, Piston will remember your fields/exclude directives and use them in other handlers that return objects of that type (unless overridden.)

As we've seen earlier, tying to a model is as simple as setting the model class variable on a handler.

Also see: Why does Piston use fields from previous handlers

Configuring Handlers

Handlers can be configured with 4 different variables.

Model

The model to tie to.
See Working with Models.

Fields/Exclude

A list of fields to include or exclude. Accepts nested listing, and follows foreign keys and many-to-many fields. Also accepts compiled regular expressions. E.g.:

import re

class FooHandler(BaseHandler):
    fields = ('title', 'content', ('author', ('username', 'first_name')))
    exclude = ('id', re.compile('^private_'))

If a User can access posts via a ManyToMany/ForeignKey field, then:

class UserHandler(BaseHandler):
    model = User
    fields = ('name', ('posts', ('title', 'date')))

will show the title and date from a user's posts. To use the default handler for a nested resource, specify an empty list of fields:

class PostHandler(BaseHandler):
    model = Post
    exclude = ('date',)

class UserHandler(BaseHandler):
    model = User
    fields = ('name', ('posts', ()))

This UserHandler shows all fields for all posts for a user, excluding the date. Neither fields nor exclude are required, and either one can be used by itself.

Anonymous

A pointer to an alternate anonymous resource. See Anonymous Resources.

Authentication

Piston supports pluggable authentication through a simple interface. Resources can be initialized to use any authentication handler that implements the interface. The default is to use the NoAuthentication handler. Adding to the Blogpost example, you could require Basic Authentication as follows:

from django.conf.urls.defaults import *
from piston.resource import Resource
from piston.authentication import HttpBasicAuthentication
from mysite.myapp.api.handlers import BlogpostHandler

auth = HttpBasicAuthentication(realm="Django Piston Example")
blogpost_handler = Resource(BlogpostHandler, authentication=auth)

urlpatterns = patterns('',
    url(r'^blogpost/(?P<post_slug>[^/]+)/', blogpost_handler),
    url(r'^blogposts/', blogpost_handler),
)

Piston comes with 2 built-in authentication mechanisms, namely piston.authentication.HttpBasicAuthentication and piston.authentication.OAuthAuthentication.
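If you want to roll your own handler, the interface is just two methods: is_authenticated(request), which must return True or False and set request.user on success, and challenge(), which must return the response that asks the client to authenticate. The sketch below is a dependency-free illustration of that shape, not Piston's actual HttpBasicAuthentication: the class name and the single accepted credential pair are invented, a plain dict stands in for the 401 HttpResponse, and request can be any object with a META dict, so the snippet runs outside Django.

```python
import base64

class StaticBasicAuthentication:
    """Toy auth handler accepting exactly one username/password pair."""

    def __init__(self, username, password, realm="API"):
        self.username, self.password, self.realm = username, password, realm

    def is_authenticated(self, request):
        header = request.META.get('HTTP_AUTHORIZATION', '')
        if not header.startswith('Basic '):
            return False
        try:
            decoded = base64.b64decode(header[6:]).decode('utf-8')
        except Exception:
            return False
        user, _, pwd = decoded.partition(':')
        if (user, pwd) == (self.username, self.password):
            request.user = user   # a real handler sets a django.contrib.auth User
            return True
        return False

    def challenge(self):
        # A real handler returns an empty HttpResponse with status code 401
        # and this WWW-Authenticate header; a dict stands in here.
        return {'status': 401,
                'WWW-Authenticate': 'Basic realm="%s"' % self.realm}
```

A handler like this would be passed to Resource(..., authentication=...) exactly like the built-in HttpBasicAuthentication in the example above.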
The Basic auth handler is very simple, and you should use this for reference if you want to roll your own.

Note: using piston.authentication.HttpBasicAuthentication with Apache and mod_wsgi requires you to add the WSGIPassAuthorization On directive to the server or vhost config; otherwise django-piston cannot read the authentication data from HTTP_AUTHORIZATION in request.META.

An authentication handler is a class which must have 2 methods: is_authenticated and challenge.

is_authenticated will receive exactly 1 argument, a copy of the request object Django receives. This object will hold all the information you will need to authenticate a user, e.g. request.META.get('HTTP_AUTHORIZATION'). Upon successful authentication, this function must set request.user to the correct django.contrib.auth.models.User object. This allows subsequent handlers to identify who is logged in. It must return either True or False, indicating whether the user was logged in.

For cases where authentication fails, challenge comes in. challenge will receive no arguments, and must return an HttpResponse containing the proper challenge instructions. For Basic auth, it will return an empty response, with the header WWW-Authenticate set, and status code 401. This will tell the receiving end that they need to supply us with authentication.

For anonymous handlers, there is a special class, NoAuthentication in piston.authentication, that always returns True for is_authenticated.

OAuth

OAuth is the preferred means of authorization, because it distinguishes between "consumers", i.e. the approved applications on your end which are using the API. Piston knows and respects this, and makes good use of it; for example, when you use the @throttle decorator, it will limit on a per-consumer basis, keeping services operational even if one service has been throttled.

Form Validation

Django has an excellent built-in form validation facility, and Piston can make good use of this.
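Before moving on to validation, here is a sketch that makes the is_authenticated/challenge interface described above concrete. Everything except the two method names is invented for illustration (the class name, the "Key" scheme, the key table), and challenge returns a plain (status, headers) pair instead of building a real django.http.HttpResponse, to keep the sketch framework-free:

```python
class ApiKeyAuthentication(object):
    """Sketch of a Piston-style authentication handler (illustrative)."""

    def __init__(self, keys, realm='API'):
        self.keys = keys    # maps api key -> user object (assumed)
        self.realm = realm

    def is_authenticated(self, request):
        # Piston hands us the Django request; only request.META and
        # request.user are touched here, so any object with those works.
        header = request.META.get('HTTP_AUTHORIZATION', '')
        if not header.startswith('Key '):
            return False
        user = self.keys.get(header[len('Key '):])
        if user is None:
            return False
        request.user = user  # so subsequent handlers see who is logged in
        return True

    def challenge(self):
        # A real handler would wrap this in an HttpResponse with
        # status_code 401, analogous to Basic auth's WWW-Authenticate.
        return 401, {'WWW-Authenticate': 'Key realm="%s"' % self.realm}
```

A handler like this would be wired in the same way as HttpBasicAuthentication above, via Resource(BlogpostHandler, authentication=...).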
You can decorate your actions with the @validate decorator, which receives 1 required argument, and one optional. The first argument is the form it will use for validation, and the second argument is the place to look for data. For the create action, this is 'POST' (default), and for update, it's 'PUT'. For example:

```python
from django import forms
from piston.utils import validate
from mysite.myapp.models import Blogpost

class BlogpostForm(forms.ModelForm):
    class Meta:
        model = Blogpost

...
@validate(BlogpostForm)
def create(...
```

Or with a normal form:

```python
from django import forms
from piston.utils import validate

class DataForm(forms.Form):
    data = forms.CharField(max_length=128)
    is_private = forms.BooleanField(initial=True, required=False)

...
@validate(DataForm, 'PUT')
def update(...
```

If data sent to an action that is decorated with @validate does not pass the form's is_valid method, Piston will return an error to the client, and will not execute the action. If the validation passes, then the form object is attached to the request object. Thus you can get to the form (and thus the cleaned_data) via request.form, as in this example:

```python
@validate(EchoForm, 'GET')
def read(self, request):
    return {'msg': request.form.cleaned_data['msg']}
```

Helpers, utils & @decorators

For your convenience, there's a set of helpers and utilities you can use. One of those is rc from piston.utils. It contains a set of standard returns that you can return from your actions to indicate a certain situation to the client. Since 26293e3884f4, these return a fresh instance of HttpResponse, so you can use something like this:

```python
resp = rc.CREATED
resp.write("Everything went fine!")
return resp

resp = rc.CREATED
resp.write("This will not have the previous 'fine' text in it.")
return resp
```

This change is backwards compatible, as it overrides __getattr__ to return a new instance rather than a singleton.

Throttling

Sometimes you may not want people to call a certain action many times in a short period of time.
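The core idea — allow at most N calls per rolling time window, tracked per key — can be sketched framework-free like this (illustrative only; this Throttle class is not Piston's implementation, which lives behind the @throttle decorator and keys on consumer/user/IP via Django's cache):

```python
import time

class Throttle(object):
    """Allow at most `limit` calls per `seconds` for each identity."""

    def __init__(self, limit, seconds, extra=''):
        self.limit = limit
        self.seconds = seconds
        self.extra = extra   # mimics @throttle's optional third argument
        self.hits = {}       # (ident, extra) -> recent call timestamps

    def allow(self, ident, now=None):
        now = time.time() if now is None else now
        key = (ident, self.extra)
        # keep only calls still inside the window
        recent = [t for t in self.hits.get(key, []) if now - t < self.seconds]
        if len(recent) >= self.limit:
            self.hits[key] = recent
            return False     # throttled
        recent.append(now)
        self.hits[key] = recent
        return True
```

With Throttle(5, 10*60), a sixth call from the same identity within ten minutes is denied, which is the behavior the @throttle(5, 10*60) example below describes.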
Piston allows you to throttle requests on a global basis, effectively denying them access until the throttle has expired. Piston will respect OAuth (if used) and limit on a per-consumer basis. If OAuth is not used, Piston will resort to the logged-in user, and for anonymous requests, it will fall back to the client's IP address.

Throttling can be enabled via the @throttle decorator. It takes 2 required arguments, and an optional third argument. The first argument is the number of requests allowed to be made within a certain amount of seconds. The second argument is the number of seconds. The third argument is optional, and should be a string, which will be appended to the cache key, effectively allowing you to do special throttling for a single action, or group several actions together. If omitted, the throttle will be global. For example:

```python
@throttle(5, 10*60)
def create(...
```

This will throttle if the client calls 'create' more than 5 times within 10 minutes. You can do grouping like so:

```python
@throttle(5, 10*60, 'user_writes')
def create(...

@throttle(5, 10*60, 'user_writes')
def update(...
```

Generating Documentation

Chances are, if you intend to publicly expose your API, that you want to supply documentation. Writing documentation is a tedious process, and even more so if you change things in your code. Luckily, Piston can do a lot of the heavy lifting for you here as well.

In piston.doc there is a set of methods, allowing you to easily generate documentation using standard Django views and templates. The function generate_doc returns a HandlerDocumentation instance, which has a few methods:

- .name returns the name of the handler,
- .model (get_model) returns the model the handler is tied to,
- .doc (get_doc) returns the docstring for the given handler.
- .get_methods returns a list of methods available. The optional keyword argument include_defaults (False by default) will also include the fallback methods, if you haven't overloaded them.
This may be useful if you want to use these, and still include them in your documentation.

get_methods yields a set of HandlerMethods, which are more interesting:

- .signature (get_signature) will return the method's signature, stripping the first two arguments, which are always 'self' and 'request'. The client will not specify these two, so they are not interesting. Takes an optional argument, parse_optional (default True), which turns keyword arguments defaulting to None into "<optional>".
- .doc (get_doc) returns the docstring for an action, so you should keep your handler/action-specific documentation there.
- .iter_args() will yield a 2-tuple with the argument name, and the default argument (or None.) If the default argument is None, the default argument will be 'None' (string). This will allow you to distinguish whether there is a default argument (even if it's None), or if it's empty.

For example:

```python
from piston.handler import BaseHandler
from piston.doc import generate_doc

class BlogpostHandler(BaseHandler):
    model = Blogpost

    def read(self, request, post_slug=None):
        """
        Reads all blogposts, or a specific blogpost
        if `post_slug` is supplied.
        """
        ...

    @staticmethod
    def resource_uri():
        return ('api_blogpost_handler', ['id'])

doc = generate_doc(BlogpostHandler)

print doc.name # -> 'BlogpostHandler'
print doc.model # -> <class 'Blogpost'>
print doc.resource_uri_template # -> '/api/post/{id}'

methods = doc.get_methods()

for method in methods:
    print method.name # -> 'read'
    print method.signature # -> 'read(post_slug=<optional>)'

    sig = ''
    for argn, argdef in method.iter_args():
        sig += argn
        if argdef:
            sig += "=%s" % argdef
        sig += ', '

    sig = sig.rstrip(', ')
    print sig # -> 'post_slug=None'
```

Resource URIs

Each resource can have a URI. They can be accessed in the handler via its .resource_uri() method. Also read FAQ: What is a URI Template.

Tests

zerok wrote an initial testsuite for Piston, located in tests/.
It uses zc.buildout to run the tests, and isolates an environment with Django, etc. The suite comes with two testrunners, tests/bin/test-1.0 and tests/bin/test-1.1, which run the tests against the respective version of Django and are made available after you're finished with the first two steps as described below. Running the tests is very easy:

```
$ python bootstrap.py
Creating directory './bin'.
[snip]
Generated script './bin/buildout'.

$ ./bin/buildout -v
Develop: 'tests/..'
Getting distribution for 'djangorecipe'.
Got djangorecipe 0.17.3.
Getting distribution for 'zc.recipe.egg'.
Got zc.recipe.egg 1.2.2.
Uninstalling django-1.0.
Installing django-1.0.
django: Downloading Django from:
Generated script './bin/django-1.0'.
Generated script './bin/test-1.0'.

$ ./bin/test-1.0
Creating test database...
[snip]
...
----------------------------------------------------------------------
Ran 6 tests in 0.283s

OK
Destroying test database...
```

When running buildout, make sure to pass it the -v option. There is currently a small problem with djangorecipe, which is used to create the testscripts etc., that causes the script to hang unless you use the "-v" option.

If you'd like to contribute, more tests are always welcome. There is coverage for many of the basic operations, but not 100%.

Receiving data

Piston, being layered on HTTP, works well with post-data (form data), but also works well with more expressive formats such as JSON and YAML. This allows you to receive structured data easily, rather than just key-value pairs. Piston will attempt to deserialize incoming non-form data via a set of "loaders", depending on the Content-type specified by the client. For example, if we send JSON to a handler giving the content-type "application/json", Piston will do 2 things:

- Place the deserialized data in request.data, and
- Set request.content_type to application/json. For form data, this will always be None.
You can use it like so (from testapp/handlers.py):

```python
def create(self, request):
    if request.content_type:
        data = request.data

        em = self.model(title=data['title'], content=data['content'])
        em.save()

        for comment in data['comments']:
            Comment(parent=em, content=comment['content']).save()

        return rc.CREATED
    else:
        super(ExpressiveTestModel, self).create(request)
```

If we send the following JSON structure into that, it will handle it appropriately:

```
{"content": "test",
 "comments": [{"content": "test1"}, {"content": "test2"}],
 "title": "test"}
```

It should be noted that sending anything that deserializes to this handler will also work, so you can send equally formatted YAML or XML, and the handler won't care.

If your handler doesn't accept post data (maybe it requires more verbose data), there's an easy way to require a specific type of data, via the utils.require_mime decorator. This decorator takes a list of types it requires, and you can use the shorthand too, like 'yaml', 'json', etc. There's also a shortcut for requiring 'json', 'yaml', 'xml' and 'pickle' all in one, called 'require_extended'. E.g.:

```python
class SomeHandler(BaseHandler):
    @require_mime('json', 'yaml')
    def create(...

    @require_extended
    def update(...
```

Streaming

Since b0a1571ff61a, Piston supports streaming its output to the client. This is disabled per default, for one reason:

- Django's support for streaming breaks with ConditionalGetMiddleware and CommonMiddleware.

To get around this, Piston ships with two "proxy middleware classes" that won't execute during a streaming scenario, and hence won't look at (and exhaust) the data before sending it to the client. Without these, Django will look at the contents (to figure out E-Tags and Content-Length), and by doing so, the next peek it takes will result in nothing. In piston.middleware there are two classes you can effectively replace these with. In settings.py:

```python
MIDDLEWARE_CLASSES = (
    # ...
    'piston.middleware.ConditionalMiddlewareCompatProxy',
    'piston.middleware.CommonMiddlewareCompatProxy',
    # ...
)
```

Remove any mentions of ConditionalGetMiddleware and CommonMiddleware, or it won't work. If you have any other middleware that looks at the content prior to streaming, you can wrap those in the conditional middleware proxy too:

```python
from piston.middleware import compat_middleware_factory

class MyMiddleware(...):
    ...

MyMiddlewareCompatProxy = compat_middleware_factory(MyMiddleware)
```

And then install MyMiddlewareCompatProxy instead.

Configuration variables

Piston is configurable in a couple of ways, which allows more granular control of some areas without editing the code.
https://bitbucket.org/jespern/django-piston/wiki/Documentation?_escaped_fragment_=mapping-urls
Hello everyone, hope you all are fine and having fun with your lives. Today, I am going to share a new project named as Interfacing of Multiple Temperature Sensors DS18B20 with Arduino. I have already shared a tutorial on Interfacing of Temperature Sensor DS18B20 with Arduino, but in that tutorial, we have seen how to connect a single sensor.

```cpp
#include <LiquidCrystal.h>
#include <OneWire.h>

OneWire ds(2); // 1-Wire bus on pin 2
LiquidCrystal lcd(13, 12, 11, 10, 9, 8);

void setup(void) {
  lcd.begin(20, 4);
  lcd.print("Temp 1 = ");
  lcd.setCursor(0, 1);
  lcd.print("Temp 2 = ");
  lcd.setCursor(1, 2);
  lcd.print("");
  lcd.setCursor(4, 3);
  lcd.print("Projects.com");
}

void loop(void) {
  byte i = 0;
  byte data[9];
  byte addr[8];
  int temp;
  boolean type;

  // get the addresses of the temperature sensors
  if (!ds.search(addr)) {
    ds.reset_search(); // restart the scan once all devices have been read
    return;
  }

  switch (addr[0]) {
    case 0x10: type = 1; break; // DS18S20
    case 0x22: type = 0; break; // DS1822
    case 0x28: type = 0; break; // DS18B20
    default: break;
  }

  ds.reset();
  ds.select(addr);
  ds.write(0x44); // start temperature conversion
  delay(750);

  ds.reset();
  ds.select(addr);
  ds.write(0xBE); // read scratchpad

  // read the 9 scratchpad bytes
  for (i = 0; i < 9; i++) {
    data[i] = ds.read();
  }

  if (!type) { // DS18B20 or DS1822
    lcd.setCursor(9, 1);
    if ((data[1] >> 7) == 1) {
      data[1] = ~data[1];
      data[0] = (~data[0]) + 1;
      lcd.print("-");
    } else {
      lcd.print("+");
    }
    temp = (data[1] << 4) | (data[0] >> 4);
    lcd.print(temp);
    lcd.print(".");
    temp = (data[0] & 0x0F) * 625;
    if (temp > 625) {
      lcd.print(temp);
    } else {
      lcd.print("0");
      lcd.print(temp);
    }
  } else { // DS18S20
    lcd.setCursor(9, 0);
    if ((data[1] >> 7) == 1) {
      data[0] = ~data[0];
      lcd.print("-");
    } else {
      lcd.print("+");
    }
    temp = data[0] >> 1;
    lcd.print(temp);
    lcd.print(".");
    lcd.print((data[0] & 0x01) * 5);
  }

  lcd.print(" ");
  lcd.write(223); // degree symbol
  lcd.print("C ");
}
```
https://www.theengineeringprojects.com/2016/03/interfacing-multiple-ds18b20-arduino.html
One of the benefits of inheritance is that you can treat a derived object as though it is an object of the inherited class: a Fooby is-a Foo. But as you saw in my previous post, treating a Fooby as a Foo when you're comparing two objects for equality can result in inconsistencies.

This is a problem because there are times when you really do want a collection of objects that are derived from class Foo, but limit that collection so that no two objects, regardless of their type, have identical Bar values. For example, you might want the code below to keep only the first and third items.

```csharp
var foos = new HashSet<Foo>();
foos.Add(new Foo(0));
foos.Add(new Fooby(0, "Hello"));
foos.Add(new Fooby(1, "Jim"));
foos.Add(new Foo(1));
Console.WriteLine(foos.Count);
```

If you run that code, you'll see that it outputs '4', meaning that all four items were added to the collection.

Many programmers will attempt to solve this problem by implementing IEquatable<T> on the classes in question. My previous article showed why that approach can't work. There's simply no good way to supply default behavior that works well in all circumstances. The solution is to stop trying to make the classes understand that you're changing the meaning of equality. If you want to compare all Foo-derived instances as though they're actually Foo instances, then supply a new equality comparer that does exactly what you want.
The easiest way to create a new equality comparer is to derive from the generic EqualityComparer<T> class, like this:

```csharp
public class FooEqualityComparer : EqualityComparer<Foo>
{
    public override bool Equals(Foo x, Foo y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (ReferenceEquals(x, null) || ReferenceEquals(y, null)) return false;
        return x.Bar == y.Bar;
    }

    public override int GetHashCode(Foo obj)
    {
        if (ReferenceEquals(obj, null)) return -1;
        return obj.Bar.GetHashCode();
    }
}
```

EqualityComparer<T> is a base class implementation of the IEqualityComparer<T> generic interface. You could create your own class that implements IEqualityComparer<T> directly, but the documentation recommends deriving a class from EqualityComparer<T>.

By now you should be familiar with the two methods that IEqualityComparer<T> requires: Equals and GetHashCode. Implementations of these two methods are subject to all of the rules I mentioned in my previous article, including those rules about handling null values.

We can then change the example code above so that the collection uses an instance of this class to do its equality comparisons, rather than depending on whatever possibly incorrect default behavior exists in the classes themselves.

```csharp
var foos = new HashSet<Foo>(new FooEqualityComparer());
foos.Add(new Foo(0));
foos.Add(new Fooby(0, "Hello"));
foos.Add(new Fooby(1, "Jim"));
foos.Add(new Foo(1));
Console.WriteLine(foos.Count);
```

Now, when the collection wants to compute a hash code, it always calls the GetHashCode method in the supplied equality comparer. And when it needs to compare two items, it always calls that Equals method. The collection is no longer at the mercy of objects' individual GetHashCode and Equals methods.

You don't need to limit this to collections, by the way.
You can create an equality comparer and call its Equals method directly:

```csharp
var myEquals = new FooEqualityComparer();
var f1 = new Foo(0);
var fb1 = new Fooby(0, "Hello");
var areEqual = myEquals.Equals(f1, fb1);
```

By supplying your own equality comparer, you control exactly how hash codes are computed and how items are compared. You are no longer hampered by the default meaning of equality as implemented by the individual types.

It's interesting to note that much of what I've said about equality comparisons in this and the previous post applies equally to comparisons in general (is object a less than, equal to, or greater than object b). Perhaps I'll talk about that soon.

Do not, under any circumstances, implement the IEquatable<T> interface on a non-sealed class. I'm serious. If you do it, you will be sorry. You cannot make it work. Think I'm crazy? People who know a lot more about this stuff than I do, agree. Let me explain what they apparently knew some time back, and I just recently discovered.

Here's the short version: A class that implements IEquatable<T> is saying, "I know how to compare two instances of type T or any type derived from T for equality." After all, if type B derives from A, then B is-a A. That's what inheritance means! The problem with implementing IEquatable<T> on a non-sealed type is that a base class cannot know how to compare instances of derived types. Trying to implement IEquatable<T> on a non-sealed class ends up producing inconsistent results when comparing instances of derived types.

Now the long version. I assume here that you understand the difference between reference equality and value equality, know how to properly override the Object.Equals(Object) method on a class, and you're at least passingly familiar with IEquatable<T>.

To review, you implement the IEquatable<T> generic interface on your class to provide a type-safe method for determining equality of instances.
The type-safe Equals(T) method is called by certain methods of the generic collection classes, and is in general slightly faster (according to the documentation) than an overridden Object.Equals method. The documentation recommends that you implement the interface on any class that you expect will be stored in an array or a generic collection, "so that the object can be easily identified and manipulated." It also recommends that if you implement IEquatable<T>, you should override the base class implementations of Equals(Object) and GetHashCode.

To implement IEquatable<T>, a class merely has to define a public Equals method that accepts a single parameter of type T, and returns a Boolean value indicating whether the object passed is considered equal. IEquatable<T>.Equals is subject to all of the same rules as Object.Equals(Object). It's really just a type-safe version of that method, and in fact the documentation recommends that if you implement IEquatable<T>, then you should implement the overridden Object.Equals(object) method in terms of IEquatable<T>.Equals.

In the code below, I've created a base class, Foo, and a derived class, Fooby. Both override the Equals method as recommended, and both conditionally implement IEquatable<T>. The conditional implementation is there so that I can illustrate why implementing IEquatable<T> on non-sealed classes is a bad idea.
```csharp
//#define UseIEquatableFoo
//#define UseIEquatableFooby

using System;

namespace EqualsFoo
{
    public class Foo
#if UseIEquatableFoo
        : IEquatable<Foo>
#endif
    {
        public readonly int Bar;

        public Foo(int bar)
        {
            Bar = bar;
        }

#if UseIEquatableFoo
        // IEquatable<Foo>.Equals
        public
#else
        private
#endif
        bool Equals(Foo other)
        {
            return !ReferenceEquals(other, null) && Bar == other.Bar;
        }

        public override bool Equals(object obj)
        {
            return Equals(obj as Foo);
        }

        public override int GetHashCode()
        {
            return Bar;
        }
    }

    public class Fooby : Foo
#if UseIEquatableFooby
        , IEquatable<Fooby>
#endif
    {
        public readonly string Barby;

        public Fooby(int bar, string by) : base(bar)
        {
            Barby = by;
        }

#if UseIEquatableFooby
        // IEquatable<Fooby>.Equals
        public
#else
        private
#endif
        bool Equals(Fooby other)
        {
            return !ReferenceEquals(other, null)
                && base.Equals(other)
                && (Barby == other.Barby);
        }

        public override bool Equals(object obj)
        {
            return Equals(obj as Fooby);
        }

        public override int GetHashCode()
        {
            return base.GetHashCode() ^ Barby.GetHashCode();
        }
    }
}
```

The conditional code for the Equals(Foo) method looks a little weird, I'll admit. I wrote it this way to eliminate some code duplication. If UseIEquatableFoo is defined, then Equals(Foo) is public, as required for interface implementations. Otherwise it's private so that all equality checks go through the overridden Equals(Object) method. The Fooby.Equals(Fooby) method is similarly decorated.

Leave those two symbols undefined for now.

The rules for Object.Equals(Object) say that x.Equals(y) == y.Equals(x). Similarly, the rules for Object.Equals(Object, Object) say that Object.Equals(x, y) == Object.Equals(y, x). I can't point you to anything that specifically says Object.Equals(x, y) == x.Equals(y), but it seems like a reasonable thing to expect.

Given the above, the following code should output false four times.
```csharp
// fb1 and fb2 are two Fooby instances with equal Bar values
// but different Barby values.
Console.WriteLine(fb1.Equals(fb2));
Console.WriteLine(fb2.Equals(fb1));
Console.WriteLine(Object.Equals(fb1, fb2));
Console.WriteLine(Object.Equals(fb2, fb1));
```

If you run that code, you'll see that it does indeed work as expected.

Now implement IEquatable<Foo>. Uncomment the conditional symbol UseIEquatableFoo, and run the program again. The output might surprise you:

```
True
True
False
False
```

To borrow a line from one of my favorite movies: "What the schnitzel?"

The problem is overload resolution. When the compiler sees the expression, fb1.Equals(fb2), it has to decide what method call to generate code for. It has the choice of four methods:

- Foo.Equals(Object)
- Fooby.Equals(Object)
- Foo.Equals(Foo)
- IEquatable<Foo>.Equals

The overload resolution rules pick the "best" function to call based on the types of the arguments. The compiler decided that the best overload is Foo.Equals(Foo). After all, fb1 is-a Foo, and so is fb2. (If you're interested in the exact rules used to make this determination, see the specification linked above.)

The two instances of Fooby are compared as though they are of type Foo. The Barby fields aren't ever compared, so the result of the comparison is true.

This doesn't happen when we call Object.Equals(fb1, fb2), because Object.Equals(Object, Object) doesn't know the static types. All it can do is call the virtual Object.Equals(Object) method on fb1. That call is resolved at runtime.

If fb1.Equals(fb2) and Object.Equals(fb1, fb2) don't agree, you have a problem. Clients using your class will rightfully suspect your competence and the wisdom of continuing to trust anything else you've written.

If you're not yet convinced that implementing IEquatable<T> on a non-sealed type is a bad idea, then your only solution to this problem is to make class Fooby implement IEquatable<Fooby>. That will give the compiler a better overload to bind. Just uncomment the UseIEquatableFooby conditional, and re-run the test. Once again, all is right with the world.

Or so you think.
Let’s move the equality test to a separate method: static void TestEquality(Foo f1, Foo f2) { Console.WriteLine(f1.Equals(f2)); Console.WriteLine(f2.Equals(f1)); Console.WriteLine(Object.Equals(f1, f2)); Console.WriteLine(Object.Equals(f2, f1)); } And add a call to that method after the original tests:)); TestEquality(fb1, fb2); Now what happens when you run the test? False False False False True True False False The problem, once again, is overload resolution. But there’s nothing you can do about it this time. Implementing IEquatable<T> makes the definition of equality dependent on the compile time static type. In this particular case, the compiler sees that the f1 and f2 parameters in the TestEquality method have a static type Foo, so it generates code to call Foo.Equals(Foo), even though their runtime types are Fooby. f1 f2 TestEquality The problem occurs in collections, too, although it’s more difficult to show. To illustrate the problem, I have to generate two Fooby instances that produce the same hash code. The code below generates 8-character hexadecimal strings until it finds two that produce the same hash code. It then creates two Fooby instances and tries to add them to a HashSet<Foo>. HashSet<Foo> // Find two strings that have identical hash codes string clash1 = null; string clash2 = null; var d = new Dictionary<int, string>(); for (var i = 0; i < int.MaxValue; ++i) { string s = i.ToString("X8"); int hash = s.GetHashCode(); string val; if (d.TryGetValue(hash, out val)) { Console.WriteLine("string '{0}' clashes with '{1}'", s, val); clash1 = s; clash2 = val; break; } d.Add(hash, s); } // Add the two items to a HashSet<Foo> var foos = new HashSet<Foo>(); foos.Add(new Fooby(0, clash1)); foos.Add(new Fooby(0, clash2)); Console.WriteLine("foos.Count={0}", foos.Count); On my system, it tells me, "string '000B020A' clashes with '00070101'". 
"string '000B020A' clashes with '00070101'" If I run the program without implementing IEquatable<T> on the classes, foos.Count returns 2, which is correct. If I enable implementation of IEquatable<T>, foos.Count returns 1. It thinks that Fooby(0, "000B020A") is equal to Fooby(0, "00070101"), which obviously is not true. You can take issue with my GetHashCode implementation if you like, but the nature of hash codes is that you’re mapping an essentially infinite number of objects (how many possible strings are there?) onto a code that can hold a finite (2^32) number of unique values. No matter how good your hash code generation is, there will be duplicates! foos.Count Fooby(0, "000B020A") Fooby(0, "00070101") There is no way to make this work correctly. If you implement IEquatable<T> on a non-sealed class, equality comparisons of derived types will produce inconsistent results. Don’t do it! IEquatable<T> Most likely, the behavior you’re looking for is properly implemented by deriving a class from EqualityComparer<T>. I’ll give you a closer look at that next time. I think this is the first cartoon I’ve ever attempted to draw. It’s certainly the first one I’ve ever published in any form. And, yes, I do need to improve my drawing ability. I’m working.
http://blog.mischel.com/2013/01/
Refactoring CSS (47:48) with John Long
- 0:00 [MUSIC]
- 0:04 [SOUND] We're gonna be talking a little bit
- 0:07 about programming principles that you can apply to your own CSS.
- 0:12 And we'll be jumping into that in just a second.
- 0:16 For you to get started, let me just introduce myself.
- 0:19 My name is John Long.
- 0:20 This is me and my wife Sarah.
- 0:23 I just got married.
- 0:24 >> Whoo!
- 0:25 >> Congratulations.
- 0:26 >> Whoo!
- 0:27 >> This is us on our honeymoon in Hawaii.
- 0:30 So it was a pretty great time.
- 0:33 I'm a pretty lucky guy, so.
- 0:38 So this is me and my wife.
- 0:41 You might have heard of me from the Sass Way.
- 0:45 I'm the managing editor for The Sass Way.
- 0:47 So if you do Sass and you're wondering how to do stuff
- 0:50 thesassway.com has a lot of helpful articles, tutorials, things like that.
- 0:56 A lot of my stuff is up there.
- 0:58 A lot of helpful stuff from other people in the Sass community.
- 1:02 Well worth checking out.
- 1:04 And then I work at UserVoice and UserVoice provides feedback tools for companies.
- 1:12 We also compete with companies like ZenDesk, Get Satisfaction.
- 1:16 We offer full help desk, that kind of stuff.
- 1:19 And I do UX design.
- 1:22 And we call it, like, full stack UX design.
- 1:24 Like, I do the Illustrator comps all the way down to the JavaScript and
- 1:31 HTML and CSS and stuff like that.
- 1:34 I'm really kind of one of those people who's, like, part programmer and
- 1:38 part designer in the sense that, like, I actually enjoy both parts.
- 1:43 I love the, the coding aspect because it supports the design.
- 1:48 And it's, sometimes when you're working on a design the only way to
- 1:52 make sure it's done well is to do it all the way down.
- 1:55 So so I really, really enjoy the CSS part of it and
- 2:02 one of my passions is just trying to figure out how, how to do that well.
- 2:09 So why refactor your CSS?
- 2:14 Why do this at all?
- 2:17 Well, let's, let's back up just a little bit.
- 2:20 And I'm gonna get to a definition of what,
- 2:22 what refactoring is all about in a second here.
- 2:25 But CSS is as Dan sort of alluded
- 2:32 to in his talk has gone through a lot of changes in the last couple of years.
- 2:39 It is it started off you know,
- 2:43 being a language for how we style documents and things like that.
- 2:48 And as the web has kinda evolved, we've gone away from just documents to, like,
- 2:53 building full-fledged applications now in CSS.
- 2:58 And it's primarily been the domain of designers, and
- 3:03 these are people that, like, spend a lot of time working on the pixels.
- 3:08 I don't know who stacks wood like this, but they're obviously a designer, right?
- 3:13 We, where we are passionate about the pixels and designers, you know,
- 3:18 from a print background traditionally are willing to spend hours and hours just
- 3:23 perfecting, you know, the line heights, the the spacing in between the letters,
- 3:29 all that kind of stuff, to ensure that when their stuff goes to print, like,
- 3:35 it's pixel perfect in print terms.
- 3:38 And so we took a lot of that same sort of knowledge and stuff to CSS and
- 3:43 we've been brute forcing our CSS for years and years now.
- 3:48 So make sure that there's there has been sort of a an opposition
- 3:55 in a way to making things easier a little bit in CSS.
- 4:01 It's not that that as
- 4:05 designers we we don't want things to be easier, it's just that we spent a lot of
- 4:10 time getting to a point where we have like our, our methods and our systems down.
- 4:16 And so when somebody else comes along and
- 4:18 tries to tell us how to do things it, it can be a little bit harder.
- 4:21 So we, we, we end up with this sort of paradox here with design versus
- 4:26 programming kinda thing, and the front end, in particular CSS is where
- 4:32 a lot of this is sort of, you know, taking place right now, where we
- 4:37 have the designers and the programmers, and kinda, kind of a struggle.
- 4:42 Sometimes we have programmers that are writing the CSS,
- 4:46 and sometimes we have the designers that are writing the CSS.
- 4:50 So it's kinda interesting.
- 4:51 This is a quote from Andy Hume he did a talk called CSS for
- 4:57 Grownups, which I highly recommend.
- 4:59 Does anybody listen to this or seen it online?
- 5:02 This is one of the best talks on CSS that,
- 5:08 that you can find on the internet right now.
- 5:12 And one of the things that he says in there is it's almost a challenge to
- 5:14 find a development team that's working on a codebase that's more than a couple of
- 5:18 years old where the CSS isn't the most frightening and hated part of the system.
- 5:24 Has anybody worked on a long running project like this?
- 5:26 Maybe you inherited CSS from somebody else.
- 5:30 And they just, they have no idea how to write CSS, do they, right?
- 5:36 Like, it, this should be obvious.
- 5:39 Why do they name it this way, you know.
- 5:41 All, all of this sort of stuff.
- 5:42 And, and yet, so we have these like differing methodologies even
- 5:46 among ourselves in terms of the way that we, we deal with stuff.
- 5:50 And when you get multiple people working on a codebase it becomes an issue.
- 5:55 Or even if you're just working on your own codebase for a long period of times it
- 5:58 becomes an issue cuz your opinions about how things work change.
- 6:03 So, CSS is 20 years old as of last Friday this article was published.
- 6:10 And we still don't know how to write it.
- 6:14 20 years old,
- 6:14 we're still talking about, like, how to write CSS the right way kind of thing.
- 6:20 It's time for us to really own up and admit that CSS is programming.
- 6:24 I mean, look at our editors, the things that we use to write CSS.
- 6:29 It's the same thing that people use for programming.
- 6:34 I'm not saying that CSS is a Turing-complete programming language,
- 6:38 if you're from a computer science background.
- 6:40 >> But Sass is.
- 6:43 >> Yeah, Sass certainly gives you a lot more tools from
- 6:46 the programming community, for sure.
- 6:49 But we're
- 6:51 writing code, is the thing I want you to understand.
- 6:56 And the interesting thing is that programmers have been doing this stuff
- 7:00 since the 50s, and
- 7:01 we haven't learned a thing from them about how to structure our CSS as designers.
- 7:06 You know, instead we're still preferring complicated selectors to
- 7:11 do things, and various things like that.
- 7:12 And we think we're cool because we can make it work, you know, and
- 7:16 we can do things that the programmers can't do, right?
- 7:19 Cuz we got that domain knowledge,
- 7:21 but there's still a lot that we can learn from programming.
- 7:24 So, the thing I want each one of you to recognize today is that
- 7:29 you're a programmer if you're writing CSS, okay?
- 7:32 And you need to learn how to write code.
- 7:37 So we have a lot to learn.
- 7:38 All right, so let's get back to that question.
- 7:41 What is refactoring?
- 7:45 All right, so refactoring is a disciplined approach to changing code that makes it
- 7:50 easy to understand and cheaper to modify in the future, and here's the key thing:
- 7:55 it's without changing the observed behavior.
- 7:59 So,
- 8:00 to put it simply, refactoring's just a disciplined way of improving your code.
- 8:05 Okay?
- 8:07 This is not about changing the design of something in terms of
- 8:10 the way that it looks, its appearance, kind of thing.
- 8:13 And it's more about making stuff reusable and
- 8:18 easy to understand, and that kind of thing.
- 8:21 And what's interesting to me is, if you look at a lot of CSS projects,
- 8:24 the selectors are long, all of this kind of stuff,
- 8:29 it's very hard to tell what's being done.
- 8:31 And you see designers, like, putting a lot of comments in there, things like that.
- 8:36 Refactoring, the more you work at it,
- 8:39 is about trying to make your code self-evident.
- 8:41 It's really spending time asking the questions,
- 8:45 why are we doing what we are doing, how can we make this easier to understand?
- 8:50 So, let's talk about some of the goals of refactoring.
- 8:54 We've already talked about clean, maintainable code.
- 8:58 Easy to understand.
- 9:00 One of the things that's really big is trying to break things down into smaller parts.
- 9:05 So, composable parts.
- 9:07 So the parts make up things that are bigger, and
- 9:10 they make up things that are bigger than that.
- 9:15 Another really big goal of refactoring is that it's reusable without modification.
- 9:20 How many of you have ever been on a project where they named a section of the site something
- 9:25 like product, and then they named another section of the site something like sales?
- 9:31 And they've got, like, combinatory selectors kind of thing where everything,
- 9:36 you know, comma separated, trying to join these things together to use
- 9:40 the same styles in both places, right?
- 9:45 You shouldn't have to add another class in order to use it in a new section of
- 9:49 the site, or change the way something is in order to use it.
- 9:53 It should be reusable without modification.
- 9:56 The other aspect of it is that parts behave the same in all contexts, okay?
- 10:02 And this
- 10:03 >> [COUGH] >> is really important, because a lot of
- 10:06 the way that CSS is written is contextual, okay?
- 10:09 So in this context I want this element to look this way.
- 10:12 In this context I want it to look differently, okay?
- 10:15 And this is bad.
- 10:18 Most CSS is written this way today.
- 10:22 But what you wanna move towards, if you want it to be reusable, is
- 10:26 this: that in every context this thing looks the same, okay?
- 10:31 And there's some strategies for dealing with that.
- 10:36 And then the last thing is that it's hard to break.
- 10:38 How many of you have gone in and changed one small section of a
- 10:43 CSS rule and
- 10:44 all of a sudden, headings are orange or something strange like that, right?
- 10:49 Yeah, I mean, we've all done that.
- 10:51 CSS is very easy to break, and so we
- 10:54 need to pay more attention to what we're doing.
- 10:56 Now, I have been talking about a lot of things here.
- 11:04 But the point is, this is actually modular CSS.
- 11:07 Okay.
- 11:08 And there are a couple of methodologies out there today.
- 11:12 How many of you are familiar with SMACSS?
- 11:16 Okay, Jonathan Snook is actually doing his talk right now.
- 11:20 If you went to that talk and didn't come to mine, I'm not offended.
- 11:23 You're in a good place.
- 11:25 You have the BEM architecture.
- 11:28 There's also object-oriented CSS.
- 11:32 In my opinion, if you're writing CSS today you
- 11:35 should be using a variation of one of these methodologies.
- 11:41 Or you should at least be familiar with it, cuz it will save you so much work.
- 11:49 So we'll look back at our goals real quick.
- 11:53 These are what it means to write modular code.
- 11:59 And this is what these types of methodologies are trying to achieve.
- 12:07 So let's talk about modular CSS a minute.
- 12:12 So this is from my own modular CSS.
- 12:16 I've got a section on The Sass Way.
- 12:19 Here's the URL.
- 12:22 Where I sort of break things down.
- 12:23 In my opinion, mine's
- 12:26 just trying to make the concept of modular CSS as simple as possible.
- 12:32 And the way that I divide things,
- 12:34 unfortunately I can't go into everything about modular CSS in this talk,
- 12:38 but the way that I divide things is this: you have objects, okay.
- 12:42 So where you might have a class called button, so that's a very simple object.
- 12:48 A lot of times objects map very cleanly to some sort of component.
- 12:52 Or maybe it's a header or a footer or something like that.
- 12:56 So that would just be an object.
- 12:58 You would have child objects, things that are inside of that object.
- 13:03 And I use a naming convention to keep my selectors simple.
- 13:06 You can do this a couple of different ways.
- 13:10 Then you have subclasses, which is where you change an object
- 13:16 to be a particular way, you know?
- 13:20 And this allows you to modify things.
- 13:22 So for instance, in this case maybe my primary action buttons are blue and
- 13:28 all other buttons are gray.
- 13:30 So I might use a subclass called primary-button to
- 13:34 style those blue buttons.
- 13:36 And then you might have modifiers that also get applied to those same elements.
- 13:41 These four things are called different things in BEM, SMACSS, and
- 13:46 whatnot, but this is the basic idea if you learn these things.
- 13:51 So this is essentially what I mean when I say modular CSS.
- 13:57 It's learning to think in these small, composable parts and
- 14:03 the variations on them.
- 14:06 So let's talk a little bit about an example and kind of how I
- 14:10 personally woke up to the need to be more modular in my CSS.
- 14:16 Has anybody ever styled a menu that's like a drop-down menu type thing?
- 14:23 And so if you look anywhere on the internet they say that
- 14:28 you're supposed to style this with an unordered list, right, of some kind.
- 14:34 So this is supposed to be a series of unordered lists.
- 14:36 Even the top part is an unordered list.
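The slide itself isn't captured in the transcript, but the nested-list markup being described probably looks something like this (item names and classes are illustrative, not the speaker's actual slide):

```html
<!-- A menu bar: an unordered list whose items each contain
     their own nested unordered list (the drop-down menu). -->
<ul class="menu-bar">
  <li>File
    <ul>
      <li><a href="#">New</a></li>
      <li><a href="#">Open</a></li>
      <li class="separator"></li>
      <li><a href="#">Quit</a></li>
    </ul>
  </li>
  <li>Edit
    <ul>
      <li><a href="#">Undo</a></li>
    </ul>
  </li>
</ul>
```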
- 14:39 And then each one of these items is supposed to be,
- 14:42 or the menu itself is, an unordered list with list items and things like that.
- 14:47 And what's interesting about this example is that it actually leads to some
- 14:51 really gnarly CSS if you style it in more of a traditional way.
- 14:57 So this is using Sass over here on the side,
- 15:02 so that, you know, I've got my nesting; Sass is being interchanged with this.
- 15:07 This would be a very traditional approach to styling this.
- 15:11 One of the things that you quickly realize is that because we
- 15:14 have lists nested within lists, and those nested,
- 15:18 the second level of nesting we style in a completely different way.
- 15:22 We need to use those child selectors.
- 15:25 You guys familiar with child selectors in that regard?
- 15:28 So this basically says that only the list item that's immediate do I style this way.
- 15:35 And what's interesting about this code when I look at it is,
- 15:41 we've got list elements, we've got anchor tags, things like that.
- 15:47 The HTML is relatively clean.
- 15:49 You know, we've only got one class up here, but this code here,
- 15:53 the CSS, is actually kinda hard to read.
- 15:56 It's kinda hard to know what this a is that I'm referring to here.
- 16:00 And I have to, like, sit here and look at it for
- 16:03 a little while until I figure out what that a element actually is.
- 16:08 So this CSS itself was actually pretty hard to write, because of the
- 16:15 child selector sort of problem, where we're styling that list
- 16:21 within lists, [SOUND] you end up with a situation where for
- 16:26 the menu part of it, you're styling those elements differently than for
- 16:32 the menu bar that goes across the top.
- 16:35 And trying to write the selectors, and I ended up with child selectors that
- 16:39 selected the right thing, took a little bit of time to figure out what I needed.
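The slide's code isn't in the transcript, but a traditional, tag-based version with the child-selector problem the speaker describes might look roughly like this (a hypothetical reconstruction, not the actual slide):

```scss
// Style everything off the tags, using child selectors
// to keep the top bar and the nested drop-downs apart.
.menu-bar {
  list-style: none;
  > li {
    display: inline-block;
    position: relative;
    > a { padding: 0 1em; }   // which <a> is this again?
    ul {                      // the drop-down menu
      position: absolute;
      display: none;
      list-style: none;
      > li > a { display: block; padding: 0.25em 1em; }
    }
    &:hover > ul { display: block; }
  }
}
```

The `> li > a` selectors are exactly the hard-to-read part: you have to trace the nesting to know which anchor each rule hits.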
- 16:45 And so I thought, well, I'll break this down just a little bit more, and
- 16:51 what I ended up with was, you know, I've got menu bar now, and then I've got menu
- 16:56 bar item, and I've got menu a little bit further down, if you look here you can see.
- 17:01 Menu bar, menu bar item, menu, menu items.
- 17:04 So it's starting to break things down a little bit.
- 17:06 I've got menu separator here.
- 17:10 Instead of just styling off of the tags alone here,
- 17:14 now I'm using the classes to kind of break stuff up.
- 17:18 And the classes themselves make it immediately obvious what I'm referring to.
- 17:23 Now, so if we backed up, in this case,
- 17:28 looking at a long CSS file, it's hard to know which LI I'm
- 17:32 referring to or whatnot.
- 17:36 Now I'm getting a little bit
- 17:39 closer to what I want by having additional classes in there.
- 17:43 And then if I go even further in this and break it down even further,
- 17:51 I can remove the tags entirely from it, and now it just speaks to me
- 17:56 about the different things that, you know, it's referring to.
- 17:59 In this case, the menu item targets the link in this scenario.
- 18:07 A lot of people don't like to go and completely
- 18:11 decompose their stuff to this degree.
- 18:14 And there is certainly a balance here as you're working on your own stuff.
- 18:19 I tend to favor more of this approach, because I
- 18:24 like the total separation from my markup.
- 18:28 But other people like something more along the lines of
- 18:33 what I've done here in the middle step.
- 18:36 The point is that by breaking things down just a little bit you make your
- 18:40 code much, much easier to understand.
- 18:44 And this is essentially what the goals of modular CSS are:
- 18:49 to make your code easy to understand, easy to compose.
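For comparison, a fully class-based version along the lines the speaker describes might look like this (class names come from the talk; the property values are illustrative):

```scss
.menu-bar {
  list-style: none;
  margin: 0;
  padding: 0;
}
.menu-bar-item {
  display: inline-block;
  position: relative;
}
.menu {
  position: absolute;
  display: none;
  list-style: none;
}
.menu-bar-item:hover .menu { display: block; }

// Applied to the link itself, per the talk.
.menu-item {
  display: block;
  padding: 0.25em 1em;
}
.menu-separator {
  border-top: 1px solid #ccc;
  margin: 0.25em 0;
}
```

Every selector now names a thing instead of a path, so there's no puzzling over which `li` or `a` a rule targets.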
- 18:55 So, I've written a series of articles, if you're not familiar with modular CSS,
- 19:00 that you can get up and running
- 19:03 with. I'm gonna show this URL at
- 19:05 the end as well, but this is on The Sass Way.
- 19:09 Highly recommend it.
- 19:10 Also check out Jonathan Snook's SMACSS approach.
- 19:15 He's got an e-book that he's written which is very helpful.
- 19:22 The key thing, the sort of takeaway from the modular approaches, and
- 19:27 this is really why the naming becomes so important, and
- 19:31 starting to learn to use just classes to represent things, is that they
- 19:35 help you think in terms of objects, not so much in terms of selectors.
- 19:40 Selectors are all about how do I get to that thing.
- 19:43 And they actually make CSS much more complicated than if
- 19:46 you're thinking in terms of objects.
- 19:49 [BLANK_AUDIO]
- 19:52 All right, so let's talk a little bit about some refactoring patterns.
- 19:59 As you get into programming, you can go deep on this;
- 20:03 this is actually taken straight from the Wikipedia page on code refactoring.
- 20:13 There are a couple of categories of refactorings.
- 20:16 And part of the reason that it's helpful to identify what these
- 20:20 refactorings are is that then we can talk about it; we have a common language for
- 20:26 describing how to change code so that it's better.
- 20:32 So, refactoring patterns are this idea that we
- 20:36 want to create a common language to describe stuff, so
- 20:40 I can say refactor that into an object, or refactor that into a superclass.
- 20:45 And you know what I mean, because we have this common language to
- 20:49 describe things.
- 20:51 So, refactoring typically falls into one of three categories.
- 20:54 You have techniques for
- 20:56 abstraction. Abstraction's kind of a big programming word.
- 21:04 It's basically this idea of, like, characterization, giving a name to things.
- 21:11 Grouping related functionality into one thing.
- 21:16 So, we have techniques for abstraction. We also have techniques for
- 21:20 breaking up code, just trying to get it into small, reusable parts.
- 21:28 And then thirdly, we have techniques for improving names. So there's a whole
- 21:32 series of refactorings where it's like, rename this into something else, and
- 21:36 what's interesting is that, so, we have common names as an aspect, or common
- 21:44 patterns as an aspect of refactoring, but particularly on the naming side
- 21:49 of things you can write tools that will do this for you automatically.
- 21:54 And if you use a programming language like Java, for
- 21:58 instance, you can select code and say hey, apply this refactoring to it.
- 22:03 And it rewrites your code for you.
- 22:07 Cuz those patterns are so well defined in Java that it can just do that.
- 22:14 So refactoring is pretty cool in that regard, once you
- 22:17 start identifying what some of these things are.
- 22:20 So you can say, like, rename this variable, and it'll rename the variable for you.
- 22:26 Cuz it understands your code.
- 22:29 So let's talk a little bit about abstraction patterns.
- 22:34 We talked a lot about objects.
- 22:37 So we have this idea of extracting something into an object.
- 22:41 You might be renaming a selector.
- 22:42 You might be taking a few of the CSS properties and
- 22:48 pulling them into their own objects, so you have the separating of objects.
- 22:53 So you have extracting objects.
- 22:55 You have extracting child objects.
- 22:58 Might be a pattern.
- 23:01 We're gonna go over a few of these in a little bit more detail.
- 23:05 We also have extracting things into subclasses; we'll talk about that.
- 23:11 And extracting modifiers.
- 23:13 So we talked about the different types of modular
- 23:16 things, so each one of these things here is a different type of abstraction in CSS.
- 23:22 And these are four different patterns there.
- 23:29 The other major category is breaking your code up into logical pieces, so
- 23:33 a couple of ways to do that: decomposing objects,
- 23:38 so breaking them up into different objects; assembling larger
- 23:44 components from several objects, so larger components from smaller parts.
- 23:51 If you're using Sass, extracting parts of your code into,
- 23:55 like, mixins and functions, that kinda stuff.
- 23:59 And then, just in general, removing duplication through
- 24:02 one of those things, you know, whether it's classes or mixins or
- 24:07 functions, whatever it is, keeping things DRY.
- 24:10 How many people know what DRY means?
- 24:13 Okay, so, DRY stands for don't repeat yourself.
- 24:19 Okay.
- 24:20 And there's a rule called the rule of threes, I think?
- 24:26 Which is essentially that when something occurs three times in your code,
- 24:30 then you should find a way to only write that once.
- 24:34 And that's what don't repeat yourself is about.
- 24:38 It's when you recognize that you have duplication in your code,
- 24:43 you wanna try and remove that.
- 24:47 So improving names.
- 24:49 A couple of different kinds of refactorings under this category.
- 24:52 You have things like simplifying a selector.
- 24:56 I would recommend that you consider removing IDs in your code.
- 25:00 And removing element selectors as much as you can.
- 25:05 Just because you make your selectors speak for themselves when you do that.
- 25:10 When you use class names instead of, oh, this is a list, this is, you know.
- 25:14 Instead you can talk about menus and
- 25:17 menu items, and things that map well visually to what you understand.
- 25:22 So you have a vocabulary to explain what you're talking about from your code.
- 25:27 So renaming objects, this is where you realize that
- 25:30 you've named something incorrectly, you want to change it to something else:
- 25:34 objects, mixins, functions, variables.
- 25:38 You have this concept where you might select some code and
- 25:42 turn it into a superclass.
- 25:45 So move into a superclass, or move into a subclass.
- 25:48 In programming terms this is called pushing up to
- 25:53 a superclass, or pulling down into a subclass.
- 25:56 So they have this concept of pushing up and pulling down.
- 25:59 This is about just moving code into different
- 26:01 parts of your object hierarchy.
- 26:06 And then you have refactoring [COUGH] to comment.
- 26:12 Or refactor; I wrote this wrong.
- 26:16 You have this idea, and I'm gonna demonstrate this in
- 26:19 a second, where you might have some code that's really difficult, and
- 26:22 so you write a comment about it.
- 26:24 Well, instead you can make this a mixin or something like that.
- 26:28 And then you don't need the comment, because the mixin's name explains what it
- 26:32 is you're doing.
- 26:34 So let's talk about a few examples here.
- 26:39 So commonly in CSS, we see these combinatory selectors,
- 26:45 right, where we say content form button, comma, sidebar button should do this.
- 26:52 In the modular system, you're going to want to try and
- 26:58 take these and turn them into simpler selectors.
- 27:01 Any time you see a combinatory selector,
- 27:03 it should kinda be a red flag in your mind.
- 27:06 Because essentially
- 27:07 you're saying that I have this thing that I don't know what to call,
- 27:10 but when it appears in these two locations, it should look like this.
- 27:14 Well, just think of a name for this,
- 27:17 and you'll be in a lot better shape.
- 27:18 So, in this case we have content form button and sidebar button.
- 27:22 Let's just refactor this to a class called button,
- 27:26 right, over here on the side.
- 27:29 Immediately now we have a good definition for what button should be like.
- 27:35 I like to do this as a class.
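The slide's before/after isn't reproduced in the transcript; a sketch of the refactoring being described, with hypothetical property values:

```css
/* Before: one unnamed thing, styled wherever it happens to appear */
.content form button,
.sidebar button {
  background: #2ecc40;
  border-radius: 3px;
}

/* After: give it a name; now it's reusable in any section
   without touching the selector again */
.button {
  background: #2ecc40;
  border-radius: 3px;
}
```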
- 27:37 Because if I'd styled the tag button, and
- 27:41 I wanted to have a button that looked like something else,
- 27:45 I couldn't do that very easily without turning off all of these styles.
- 27:50 And one of the red flags that you should be looking out for
- 27:53 is, if you have to turn off styles, you're probably doing it wrong, okay?
- 27:58 [COUGH] And what do I mean by that?
- 28:02 When you write selectors where you're turning off styles, you're
- 28:06 duplicating the logic for the way something should look,
- 28:10 in a negative way, somewhere else in your code, okay?
- 28:14 So you have logic now in two places.
- 28:17 And whenever you change that in a higher place,
- 28:21 right, now you have to go and
- 28:23 change all the negative versions of that as well.
- 28:25 And that's why you end up with these orange heading situations,
- 28:28 right?
- 28:30 It's because you changed something higher up that's affecting something lower down.
- 28:35 So simplifying this to a button class would be recommended.
- 28:40 Of course, there are all kinds of ways to, like,
- 28:43 rename selectors, rename objects. This is just one basic kind of refactoring.
- 28:50 So, in this case, we end up with, you know, simple markup.
- 28:53 We did have to add class button to it.
- 28:57 But now our buttons are styled and green and look pretty.
- 29:05 So another idea here: we have those green buttons.
- 29:08 Maybe we started out with all of our buttons green, and
- 29:11 we realized green is a little overpowering.
- 29:14 And so we want to make green kind of a subclass
- 29:19 that we only reserve for our primary buttons, those buttons that
- 29:22 are the primary action on a form, like Save Changes or something like that.
- 29:27 So in this case, extracting a subclass, what I'm gonna do is take those rules for
- 29:32 making it green and changing the box shadow,
- 29:35 and I'm gonna pull those down into a subclass.
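A sketch of the subclass extraction being set up here (the values are illustrative, not the slide's):

```scss
// Before: every button is green.
.button {
  padding: 0.5em 1em;
  background-color: #2ecc40;
  box-shadow: 0 2px 0 #1e8e2e;
}

// After: button keeps neutral gray defaults; the green
// moves down into a primary-button subclass.
.button {
  padding: 0.5em 1em;
  background-color: #aaa;
  box-shadow: 0 2px 0 #777;
}
.primary-button {
  background-color: #2ecc40;
  box-shadow: 0 2px 0 #1e8e2e;
}
```

In the markup, the subclass is applied alongside the object: `<button class="button primary-button">Save Changes</button>`.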
- 29:38 So you can see here on the other side that we have button.
- 29:41 And it has a grey and a dark grey by default for its coloring, and
- 29:47 then primary button just redefines those things that it's gonna use, okay?
- 29:53 The background color and the box-shadow.
- 29:55 So this is the pattern of extracting a subclass from an existing object.
- 30:04 And this changes our markup just a little bit.
- 30:06 Instead of just saying button for something, now we say button primary.
- 30:10 So you can see the difference between these two in terms of
- 30:14 the way that they look.
- 30:15 In this case I've chosen to duplicate the class names, so we have button and
- 30:21 primary button.
- 30:22 There's a couple of ways of doing this, particularly if you're using Sass. This is
- 30:25 my preferred way, and one of the things that's nice about this is greppability:
- 30:31 searching across your entire project, I can now find all the buttons and
- 30:37 all the primary buttons,
- 30:39 if I'm looking for the subclasses of things as well.
- 30:47 Here's another pattern.
- 30:48 This is called extracting into a superclass.
- 30:51 This is fairly typical code for,
- 30:54 like, alert banners going across the top of the screen sometimes.
- 30:57 Maybe your form saves something, and you wanna show a success message.
- 31:02 Has a green background on it, or something like that.
- 31:04 We have three different kinds.
- 31:06 We've got an info, a success, and a failure version, and quite simply, one's
- 31:13 got a blue background, one's got a green background, one's got a red background.
- 31:18 So the way to extract a superclass out of this is to say hey,
- 31:23 we've got common stuff.
- 31:24 Let's put that in message.
- 31:27 And then the subclasses now are info message, success message, and
- 31:31 failure message.
- 31:33 So both classes would get applied to our final markup.
- 31:39 So we end up with something like this.
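The message example might be sketched like this (colors are hypothetical; the class names follow the talk):

```css
/* Common rules extracted into the superclass */
.message {
  padding: 1em;
  border-radius: 3px;
  color: #fff;
}

/* Subclasses only define what differs */
.info-message    { background-color: #0074d9; }
.success-message { background-color: #2ecc40; }
.failure-message { background-color: #ff4136; }
```

Markup then carries both classes, e.g. `<div class="message success-message">Saved!</div>`.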
- 31:40 Class message info message, that's going to make it blue.
- 31:45 Success message, green.
- 31:48 Failure message would be red.
- 31:53 So backing up again,
- 31:56 we've added a fourth class that we're using that has that common code.
- 32:01 That's extracting into a superclass.
- 32:03 [BLANK_AUDIO]
- 32:07 If you're using Sass, Sass has something called a placeholder selector,
- 32:13 which some people prefer because it kind of lets you use those
- 32:19 single classes. You get to keep using info, success, and
- 32:24 failure, and those are the only classes you get on there.
- 32:29 Because what it does, we use a, what do you call this?
- 32:35 This is a placeholder selector.
- 32:38 It starts with a percent in Sass, so it indicates that it's not a class or
- 32:42 something like that.
- 32:43 And this is gonna get compiled. Let me show you the compiled version.
- 32:46 So this is what the actual CSS generated looks like from this.
- 32:50 And you see that when we take this selector and we extend below, so
- 32:56 this extends message, what it does is it's basically collecting these classes, and
- 33:01 it puts them in place of the placeholder kind of thing over there on the right.
- 33:07 So we get info, success, and failure in a combinatory selector.
- 33:11 Should have those things.
- 33:13 And then the specific things, like the background color in
- 33:17 this case, are getting overwritten.
- 33:19 So this is, in Sass, using a placeholder superclass in this case.
- 33:25 And this is useful if your placeholder, or
- 33:30 your superclass, is never gonna be used on its own,
- 33:32 and therefore you just want to keep your markup clean.
- 33:41 So in this we've got info, success, and failure.
- 33:44 Our classes are all still the same.
- 33:47 This is a little bit simpler markup, but it's still abstract in that regard.
- 33:56 It's still been factored out into a superclass.
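A sketch of the placeholder version and what Sass compiles it to (illustrative values; the single-class names follow the talk):

```scss
// Sass source: %message is a placeholder, never output on its own.
%message {
  padding: 1em;
  border-radius: 3px;
  color: #fff;
}

.info    { @extend %message; background-color: #0074d9; }
.success { @extend %message; background-color: #2ecc40; }
.failure { @extend %message; background-color: #ff4136; }

// Compiled CSS: the extending selectors get collected into one
// combinatory selector where the placeholder was:
//
//   .info, .success, .failure {
//     padding: 1em; border-radius: 3px; color: #fff; }
//   .info    { background-color: #0074d9; }
//   .success { background-color: #2ecc40; }
//   .failure { background-color: #ff4136; }
```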
- 34:00 So that's extracting a placeholder into a superclass.
- 34:05 Another kind of refactoring is decomposing objects.
- 34:08 So this is where you have one object that has a lot of functionality in it, and
- 34:12 you want to turn it into, like, a couple of objects.
- 34:14 So in this case, this is a button with a class of next on it.
- 34:19 Okay?
- 34:20 And he has all of the button styles on him, in addition to having a next
- 34:25 icon in his after pseudo-element kind of thing.
- 34:30 Not only is this kind of hard to reason about in code,
- 34:33 like we don't know what after is being used for,
- 34:37 but it's hard to reuse. So
- 34:38 if we wanted to have another type of button with a different icon, we have to,
- 34:43 like, copy over these styles for the button part of it, or something like that.
- 34:49 So, one way of dealing with this is just to decompose the objects,
- 34:53 turn it into two, so that we have one for the icon, and one for the button itself.
- 34:58 So you can see here, we've decomposed the object.
- 35:02 We've got two classes.
- 35:04 And it turns our markup into something more like this, where we have the icon.
- 35:08 I should have put this afterwards, sorry.
- 35:10 The icon and the button itself.
- 35:13 So we have two objects with two elements,
- 35:16 instead of one, which is what we had before.
- 35:21 So before we had this after pseudo-selector; after, we've got
- 35:25 two different selectors, two different objects that we can work with.
- 35:30 That's the decomposing objects pattern.
- 35:41 And I already mentioned this, but this is refactor to comment.
- 35:47 This kind of refactoring.
- 35:50 It's interesting, a lot of times you'll find yourself in your
- 35:53 code, like, writing some crazy long comment to explain why you're doing something.
- 35:57 And sometimes a better way of dealing with that
- 36:03 is turning that into a mixin that just explains exactly what you're doing.
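The mixin the speaker shows next isn't reproduced in the transcript; a hedged sketch of the shape of such a refactor, using the mixin name from the talk but a generic old-IE workaround as the body (not necessarily what was on the slide):

```scss
// Before: a weird rule plus a long comment explaining why it exists.
// After: the comment becomes the mixin's name.
@mixin crazy-ie-min-width-hack($width) {
  min-width: $width;
  // Old IE ignores min-width; the underscore-prefixed property
  // is parsed only by IE6, giving it a fixed width instead.
  _width: $width;
}

.video { @include crazy-ie-min-width-hack(480px); }
```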
- 36:08 So, here's what I did here.
- 36:11 This would be a Sass refactor, and you'd have to have Sass for this to work.
- 36:15 But I called my mixin crazy-ie-min-width-hack.
- 36:19 Right?
- 36:20 And now this is self-documenting anywhere that I use it.
- 36:23 It's actual code that's running, instead of just having this weird comment,
- 36:29 you know, talking about what I'm doing here.
- 36:33 So this is called refactor to comment:
- 36:37 the comment gets refactored into something
- 36:38 instead.
- 36:42 Refactor to comment also, weirdly, has a way of
- 36:48 making your code much more succinct and readable, I guess you could say.
- 36:51 So in this case, where this actually would probably appear,
- 36:57 it's only two lines now, instead of having this crazy long thing.
- 37:00 And this was really only meant to do, like, one thing;
- 37:04 this is the meat of the video guy,
- 37:06 in terms of where he was.
- 37:11 We'd want him to be more what you're focusing on,
- 37:15 not the crazy min-width hack in an object.
- 37:19 Another refactoring would be to namespace something.
- 37:25 This can be really helpful if, perhaps, you've got some
- 37:32 theming going on, or maybe you are working on a long-running codebase and you need
- 37:37 to make some styles legacy, or something like that, on a certain section of the site.
- 37:43 You could use, like, a legacy namespace, so legacy button.
- 37:49 Of course, in this case, you need to have a surrounding div named legacy put
- 37:53 around all of that code, so that's where the namespace comes from.
- 37:56 But this is a namespace refactoring.
- 38:00 You're taking something that's in the global area and
- 38:03 you're making it only accessible inside of a legacy thing.
- 38:08 Another way of doing a similar thing would be to prefix it.
- 38:14 So, let's say that you have some old button styles and
- 38:18 you wanna keep using those, so you call them legacy button or something like that.
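Both variants sketched side by side (selectors and properties are illustrative):

```css
/* Namespaced: the old styles only apply inside a .legacy wrapper */
.legacy .button {
  border-radius: 0;
  background: #ddd;
}

/* Prefixed: the class itself is renamed, no wrapper needed */
.legacy-button {
  border-radius: 0;
  background: #ddd;
}
```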
- 38:23 So you'd use a prefix style.
- 38:26 This would be the same kind of refactoring across your application.
- 38:30 What would be awesome is if we had programming tools that would rewrite our
- 38:35 code for us, to do these types of refactorings.
- 38:38 Right now they're pretty hard to do, because you have to go modify all of
- 38:41 your markup in order to change something like a class name.
- 38:48 So one of the points I want to make, and
- 38:52 it can be confusing when you're talking about, do I make my CSS modular or not?
- 38:58 A lot of the modular CSS people can come across as being very dogmatic:
- 39:02 you should never do this, you should never do that.
- 39:04 I'm pretty dogmatic, but there are pros and
- 39:09 cons to each kind of refactoring and each kind of way of dealing with CSS,
- 39:15 and this list that I've just showed you here is in no way exhaustive.
- 39:20 There's a lot of different ways of changing your code and working with stuff.
- 39:24 One of the things that I'd like to just collectively challenge the CSS community
- 39:30 to do is, let's document more what these types of refactorings are;
- 39:35 I think that would help a lot.
- 39:38 And document the pros and cons of these refactorings, rather than, you have to do it
- 39:42 in the modular CSS way or the non-modular way.
- 39:46 If we document them,
- 39:48 it's gonna be a whole lot easier to know how to apply them in different situations.
- 39:55 So,
- 39:56 refactoring in general: what are some best practices for how to do it?
- 40:03 So let's look again at our definition.
- 40:06 Refactoring is a disciplined approach to changing code to
- 40:09 make it easy to understand.
- 40:12 A disciplined approach.
- 40:16 It's so easy on a CSS project to jump in and be like, oh, this is completely wrong.
- 40:22 And then you start making changes and you, like, break the whole site, right?
- 40:28 So we have to figure out ways of being disciplined about what we're doing.
- 40:37 So here's some principles.
- 40:40 Number one, don't try to refactor and add functionality at the same time.
- 40:44 This is taken from the book The Pragmatic Programmer.
- 40:47 If you're new to programming, this is the one book you should read
- 40:53 in terms of how to be a good programmer.
- 40:57 Don't try to refactor and add functionality at the same time.
- 41:01 So a lot of times I make the mistake of,
- 41:06 I'm creating a new thing, and so, in one git commit or
- 41:12 whatever, I'm refactoring something and creating something new.
- 41:18 And I end up breaking old stuff.
- 41:20 So, one way of dealing with that is to just break it in two, you know?
- 41:26 Do your refactoring first, change around the way things work.
- 41:29 Get it out to your tester, or
- 41:31 whoever looks at your site, to make sure that everything's correct.
- 41:35 Get that into production.
- 41:37 And then go back and add the additional thing that you're looking for.
- 41:40 So try and break it into two.
- 41:45 Next, make sure you have good tests before refactoring.
- 41:49 This is something that we're still working on in the CSS community,
- 41:53 getting good tests.
- 41:55 I'm gonna talk a little bit more about tests.
- 41:58 But the tests essentially allow you to modify your code with confidence, okay?
- 42:04 And if you can't modify your code with confidence,
- 42:06 you're not gonna modify your code.
- 42:08 Okay.
- 42:09 And [COUGH] so,
- 42:12 if you're able to get good tests in place, you'll be able to do that.
- 42:17 Thirdly, you wanna take short, deliberate steps, and test after each step.
- 42:23 Okay.
- 42:24 So small steps, and then test after each step.
- 42:30 The last thing they recommend is that you refactor early and often.
- 42:35 It's so easy to sort of push off refactorings that you know you need to do, - 42:39 but the longer you push them off the harder they are to do, - 42:42 cuz the code just accumulates, right? - 42:46 So, put these things into practice. - 42:51 So the last thing, which I haven't talked a lot about, is just testing your CSS. - 42:57 And I don't have a lot of material here on this. - 43:01 I primarily do this method, which is manual testing. - 43:07 I would recommend, you know, most of us, we're looking through our site by hand. - 43:13 If you can make some example pages, or a style guide or something like that, - 43:18 just so that you can look at these components as you're developing them - 43:21 in isolation. - 43:23 That's really, really helpful. - 43:28 There's a couple of tools that do exist already for automated testing. - 43:34 If you look down here at the bottom, CSS test, that's CSSte.st, is - 43:40 a website where you can find a lot of these tools. - 43:45 They've got different reviews on different types of things. - 43:48 But they generally fall into, I think, three different categories. - 43:51 The first is regression testing. - 43:55 In regression testing the, the most prominent right now is this idea that - 44:02 you create a bunch of example pages showing your components. - 44:05 And then you use like Phantom CSS, for instance. - 44:10 It generates images using a browser that's kind of in memory, and - 44:15 it generates images of each one of your pages. - 44:17 So it would use like the same engine that Chrome runs off of or something like that. - 44:22 And then it tells you if those images change, okay? - 44:26 So if a border color changes, or something like that, - 44:29 then you approve those changes if they are, if they are correct, kind of thing. - 44:34 And this is one method of doing automated testing, - 44:39 the image regression testing. There is also linting.
- 44:43 Linting can be helpful. The idea behind linting is, - 44:48 is that it tells you if your CSS is correct, okay? - 44:52 And there's a couple of different levels. - 44:54 One is just basic syntax checking, making sure you don't have a stray semicolon - 44:59 or curly bracket, you know, that type of stuff. - 45:05 And then the other is where you're actually enforcing some - 45:09 kind of methodology. - 45:12 In SCSS Lint, for people that are doing Sass, - 45:15 I think there might be another version of the tool. - 45:17 Is there another, another linting tool for Sass? - 45:19 Does anybody know? - 45:21 >> There's Grunt. - 45:22 You could do it with Grunt. - 45:24 >> Okay. - 45:25 Okay, so there's something with Grunt that would allow you to do the same thing. - 45:28 SCSS Lint, you can actually write your own rules. - 45:32 So it can tell you when certain things are being violated. - 45:36 Maybe your object system is not being observed, or, or something like that. - 45:40 So linting is another way of helping with your testing. - 45:44 It's not gonna help you on the visual side, but - 45:46 it's gonna help you to make sure that your code is correct. - 45:49 And then there are coverage tools. This would be the, the third type, and - 45:54 coverage basically tells you what CSS is still active in your project. - 46:01 Which can be really helpful if you're trying to remove - 46:05 old CSS from your project. - 46:08 And [COUGH] there is a Firefox extension called CSS Coverage. - 46:13 And basically you load it up and then I think you page through your site a little - 46:17 bit, and it will show you which rules are being used and which rules aren't. - 46:23 So this is a little bit of a by-hand type tool. - 46:26 I think there are other automated solutions as well. But testing. - 46:35 Testing's just so helpful in getting you to a point where you - 46:38 can make changes with confidence. - 46:43 Right?
- 46:44 Knowing that your site's not gonna break. - 46:45 [SOUND] So finally, sorta wrapping up, - 46:49 if you haven't seen it yet, go check out CSS For Grownups. - 46:56 Some really great advice on improving your CSS; he covers some similar examples. - 47:05 And then I also have a presentation if you're interested in my - 47:08 modular CSS approach. - 47:10 I gave this last year; it's called Defensive Sass. - 47:16 And again, it's really just trying to get you to a point where - 47:22 the styles that you're writing are bulletproof. - 47:26 Able to be reused across your application - 47:31 in small modular components. - 47:34 So that's Defensive Sass, I highly recommend checking that out. - 47:37 And that's it. - 47:41 [APPLAUSE] Thank you.
https://teamtreehouse.com/library/refactoring-css
C++ Program to Calculate the Sum of Natural Numbers

In this post you will learn how to calculate the sum of natural numbers. This lesson will teach you how to calculate the sum of natural numbers with a mathematical formula, using the C++ language. Let's look at the source code below.

How to Calculate the Sum of Natural Numbers?

Source Code

#include<iostream>
using namespace std;
int main()
{
    int n, sum;
    cin >> n;
    cout << " Enter a value : " << n << endl;
    sum = n*(n+1)/2;
    cout << "\n Sum of first " << n << " natural numbers is : " << sum;
    return 0;
}

Input

5

Output

Enter a value : 5
Sum of first 5 natural numbers is : 15

To find the sum of natural numbers we use the mathematical formula sum = n*(n+1)/2.

- First we declare the variables n and sum as integers. We then collect the number from the user and store it in n using cin >>, and display the value using cout << (the extraction '>>' and insertion '<<' operators).
- Using the formula sum = n*(n+1)/2 and the value n, we calculate the sum of natural numbers and store it in the integer sum.
- Using the output statement cout we display the answer.

We can also find the sum using a for loop instead of the formula. Let's look at the source code below.

#include<iostream>
using namespace std;
int main()
{
    int n, i, sum = 0;
    cin >> n;
    cout << "Enter the n value: " << n << endl;
    for (i = 1; i <= n; i++)
    {
        sum += i;
    }
    cout << "\n Sum of first " << n << " natural numbers is : " << sum;
    return 0;
}

Input

5

Output

Enter the n value: 5
Sum of first 5 natural numbers is : 15

- In this source code, after declaring the integers we will be using and reading the value from the user, we use the loop for (i = 1; i <= n; i++) { sum += i; }.
- In the for loop, first is the initialization of the variable i = 1, next the condition to be satisfied i <= n, followed by the increment operator i++.
- The statement inside the for loop is sum += i, which is executed on each pass through the loop.
- The loop statement expands to sum = sum + i, where the initial value of sum is 0 and i starts at 1.
- Each time the loop runs, if the condition is satisfied, i.e. i is less than or equal to n, the loop statement is executed and i is then incremented by one.
- The value of sum changes on each pass as per the loop statement. Execution moves back to the for loop condition and repeats until i is no longer less than or equal to n. The loop then terminates, and the final value of sum is the sum of the first n natural numbers.
- The rest of the statements are the same as in the formula version.
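As a quick aside to the lesson above: the formula and the loop should always agree, and a small self-contained check (the function names here are my own, not part of the original lesson) makes that easy to verify:

```cpp
#include <cassert>

// Closed form: n*(n+1)/2
long long sumFormula(long long n) {
    return n * (n + 1) / 2;
}

// Loop version, mirroring the lesson's for loop
long long sumLoop(long long n) {
    long long sum = 0;
    for (long long i = 1; i <= n; i++) {
        sum += i;
    }
    return sum;
}
```

For n = 5 both return 15, matching the lesson's output.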
https://developerpublish.com/academy/courses/c-programming-examples-2/lessons/c-program-to-calculate-the-sum-of-natural-numbers/
img_decode_validate() Find a codec for decoding Synopsis: #include <img/img.h> int img_decode_validate( const img_codec_t* codecs, size_t ncodecs, io_stream_t* input, unsigned* codec ); Arguments: - codecs - A pointer to an array of img_codec_t handles providing a list of codecs to try. The function will try each codec in order until it finds one that validates the data in the stream. - ncodecs - The number of items in the codecs array. - input - The input source. - codec - The address of an unsigned value where the function stores the index of the codec that validated the datastream. This memory is left untouched if no such codec is found. Library: libimg Use the -l img option to qcc to link against this library. Description: This function finds a suitable codec for decoding. Returns: Status of the operation: - IMG_ERR_OK - Success; an appropriate codec was found. Check codec for the index of the codec in the codecs array which validated the datastream. - IMG_ERR_DLL - An error occurred processing the DLL that handles the file type. Check to make sure that the DLL is not missing or corrupt. - IMG_ERR_FORMAT - No installed codec recognized the input data as a format it supports. This could mean the data is of a format that's not supported, or the datastream is corrupt. - IMG_ERR_NOTIMPL - The codec that recognized the input data as the format it supports doesn't have a validate method. Classification: Image library
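A calling sketch may help. Since the QNX img library itself isn't available here, the snippet below only mirrors the documented behavior with stand-in types — try each codec in order, store the index of the first one that validates, return an error otherwise. Real code would use img_codec_t, io_stream_t, and img_decode_validate() from <img/img.h>; everything named here (pick_codec, validate_fn, the magic-byte checks) is a hypothetical stand-in:

```c
#include <stddef.h>

/* Stand-in for a codec's validate entry point. */
typedef int (*validate_fn)(const unsigned char *data, size_t len);

/* Mimics img_decode_validate(): returns 0 (success) and stores the index
 * of the first codec whose validator accepts the data; returns -1 (think
 * IMG_ERR_FORMAT) if no codec recognizes the data. The output parameter
 * is left untouched on failure, as the documentation describes. */
int pick_codec(const validate_fn *codecs, size_t ncodecs,
               const unsigned char *data, size_t len, unsigned *codec)
{
    for (size_t i = 0; i < ncodecs; i++) {
        if (codecs[i](data, len)) {
            *codec = (unsigned)i;
            return 0;
        }
    }
    return -1;
}

/* Example validators: check magic bytes, as a real codec would. */
int looks_like_png(const unsigned char *d, size_t n) {
    return n >= 4 && d[0] == 0x89 && d[1] == 'P' && d[2] == 'N' && d[3] == 'G';
}
int looks_like_jpeg(const unsigned char *d, size_t n) {
    return n >= 2 && d[0] == 0xFF && d[1] == 0xD8;
}
```

The real function works the same way over an io_stream_t rather than a byte buffer, which is why the library can stop at the first codec that claims the stream.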
http://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/i/img_decode_validate.html
Hi all and Happy New Year, we're still running a rather old version of the channel archiver and our pull-down menu has run out of years again :) I changed the code and, with some additions of #include <stdlib.h>, got it compiled, copied CGIExport to where we want to use it, and still get the pull-down menu ending in 2012. Stopping and restarting from the web interface did not do the trick. It's getting too late in the day for me to do a restart from the command line, so I can't check that. Any and all help appreciated. Thanks in advance, Maren
http://www.aps.anl.gov/epics/tech-talk/2013/msg00001.php
Implementing A* Pathfinding in Java Last modified: May 23, 2022 1. Introduction Pathfinding algorithms are techniques for navigating maps, allowing us to find a route between two different points. Different algorithms have different pros and cons, often in terms of the efficiency of the algorithm and the efficiency of the route that it generates. 2. What Is a Pathfinding Algorithm? A Pathfinding Algorithm is a technique for converting a graph – consisting of nodes and edges – into a route through the graph. This graph can be anything at all that needs traversing. For this article, we're going to attempt to traverse a portion of the London Underground system: (“London Underground Overground DLR Crossrail map” by sameboat is licensed under CC BY-SA 4.0) This has a lot of interesting components to it: - We may or may not have a direct route between our starting and ending points. For example, we can go directly from “Earl's Court” to “Monument”, but not to “Angel”. - Every single step has a particular cost. In our case, this is the distance between stations. - Each stop is only connected to a small subset of the other stops. For example, “Regent's Park” is directly connected to only “Baker Street” and “Oxford Circus”. All pathfinding algorithms take as input a collection of all the nodes – stations in our case – and connections between them, and also the desired starting and ending points. The output is typically the set of nodes that will get us from start to end, in the order that we need to go. 3. What Is A*? A* is one specific pathfinding algorithm, first published in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael. It is generally considered to be the best algorithm to use when there is no opportunity to pre-compute the routes and there are no constraints on memory usage. Both memory and performance complexity can be O(b^d) in the worst case, so while it will always work out the most efficient route, it's not always the most efficient way to do so. 
A* is actually a variation on Dijkstra's Algorithm, where there is additional information provided to help select the next node to use. This additional information does not need to be perfect – if we already have perfect information, then pathfinding is pointless. But the better it is, the better the end result will be. 4. How Does A* Work? The A* Algorithm works by iteratively selecting what is the best route so far, and attempting to see what the best next step is. When working with this algorithm, we have several pieces of data that we need to keep track of. The “open set” is all of the nodes that we are currently considering. This is not every node in the system, but instead, it's every node that we might make the next step from. We'll also keep track of the current best score, the estimated total score and the current best previous node for each node in the system. As part of this, we need to be able to calculate two different scores. One is the score to get from one node to the next. The second is a heuristic to give an estimate of the cost from any node to the destination. This estimate does not need to be accurate, but greater accuracy is going to yield better results. The only requirement is that both scores are consistent with each other – that is, they're in the same units. At the very start, our open set consists of our start node, and we have no information about any other nodes at all. At each iteration, we will: - Select the node from our open set that has the lowest estimated total score - Remove this node from the open set - Add to the open set all of the nodes that we can reach from it When we do this, we also work out the new score from this node to each new one to see if it's an improvement on what we've got so far, and if it is, then we update what we know about that node. This then repeats until the node in our open set that has the lowest estimated total score is our destination, at which point we've got our route. 4.1. 
Worked Example For example, let's start from “Marylebone” and attempt to find our way to “Bond Street”. At the very start, our open set consists only of “Marylebone”. That means that this is implicitly the node that we've got the best “estimated total score” for. Our next stops can be either “Edgware Road”, with a cost of 0.4403 km, or “Baker Street”, with a cost of 0.4153 km. However, “Edgware Road” is in the wrong direction, so our heuristic from here to the destination gives a score of 1.4284 km, whereas “Baker Street” has a heuristic score of 1.0753 km. This means that after this iteration our open set consists of two entries – “Edgware Road”, with an estimated total score of 1.8687 km, and “Baker Street”, with an estimated total score of 1.4906 km. Our second iteration will then start from “Baker Street”, since this has the lowest estimated total score. From here, our next stops can be either “Marylebone”, “St. John's Wood”, “Great Portland Street”, Regent's Park”, or “Bond Street”. We won't work through all of these, but let's take “Marylebone” as an interesting example. The cost to get there is again 0.4153 km, but this means that the total cost is now 0.8306 km. Additionally the heuristic from here to the destination gives a score of 1.323 km. This means that the estimated total score would be 2.1536 km, which is worse than the previous score for this node. This makes sense because we've had to do extra work to get nowhere in this case. This means that we will not consider this a viable route. As such, the details for “Marylebone” are not updated, and it is not added back onto the open set. 5. Java Implementation Now that we've discussed how this works, let's actually implement it. We're going to build a generic solution, and then we'll implement the code necessary for it to work for the London Underground. We can then use it for other scenarios by implementing only those specific parts. 5.1. 
Representing the Graph Firstly, we need to be able to represent our graph that we wish to traverse. This consists of two classes – the individual nodes and then the graph as a whole. We'll represent our individual nodes with an interface called GraphNode: public interface GraphNode { String getId(); } Each of our nodes must have an ID. Anything else is specific to this particular graph and is not needed for the general solution. These classes are simple Java Beans with no special logic. Our overall graph is then represented by a class simply called Graph: public class Graph<T extends GraphNode> { private final Set<T> nodes; private final Map<String, Set<String>> connections; public T getNode(String id) { return nodes.stream() .filter(node -> node.getId().equals(id)) .findFirst() .orElseThrow(() -> new IllegalArgumentException("No node found with ID")); } public Set<T> getConnections(T node) { return connections.get(node.getId()).stream() .map(this::getNode) .collect(Collectors.toSet()); } } This stores all of the nodes in our graph and has knowledge of which nodes connect to which. We can then get any node by ID, or all of the nodes connected to a given node. At this point, we're capable of representing any form of graph we wish, with any number of edges between any number of nodes. 5.2. Steps on Our Route The next thing we need is our mechanism for finding routes through the graph. The first part of this is some way to generate a score between any two nodes. We'll use the Scorer interface for both the score to the next node and the estimate to the destination: public interface Scorer<T extends GraphNode> { double computeCost(T from, T to); } Given a start and an end node, we then get a score for traveling between them. We also need a wrapper around our nodes that carries some extra information.
Instead of being a GraphNode, this is a RouteNode – because it's a node in our computed route instead of one in the entire graph: class RouteNode<T extends GraphNode> implements Comparable<RouteNode<T>> { private final T current; private T previous; private double routeScore; private double estimatedScore; RouteNode(T current) { this(current, null, Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY); } RouteNode(T current, T previous, double routeScore, double estimatedScore) { this.current = current; this.previous = previous; this.routeScore = routeScore; this.estimatedScore = estimatedScore; } } As with GraphNode, these are simple Java Beans used to store the current state of each node for the current route computation. We've given this a simple constructor for the common case, when we're first visiting a node and have no additional information about it yet. These also need to be Comparable though, so that we can order them by the estimated score as part of the algorithm. This means the addition of a compareTo() method to fulfill the requirements of the Comparable interface: @Override public int compareTo(RouteNode<T> other) { if (this.estimatedScore > other.estimatedScore) { return 1; } else if (this.estimatedScore < other.estimatedScore) { return -1; } else { return 0; } } 5.3. Finding Our Route Now we're in a position to actually generate our routes across our graph. This will be a class called RouteFinder: public class RouteFinder<T extends GraphNode> { private final Graph<T> graph; private final Scorer<T> nextNodeScorer; private final Scorer<T> targetScorer; public List<T> findRoute(T from, T to) { throw new IllegalStateException("No route found"); } } We have the graph that we are finding the routes across, and our two scorers – one for the exact score for the next node, and one for the estimated score to our destination. We've also got a method that will take a start and end node and compute the best route between the two. This method is to be our A* algorithm.
All the rest of our code goes inside this method. We start with some basic setup – our "open set" of nodes that we can consider as the next step, and a map of every node that we've visited so far and what we know about it: Queue<RouteNode<T>> openSet = new PriorityQueue<>(); Map<T, RouteNode<T>> allNodes = new HashMap<>(); RouteNode<T> start = new RouteNode<>(from, null, 0d, targetScorer.computeCost(from, to)); openSet.add(start); allNodes.put(from, start); Our open set initially has a single node – our start point. There is no previous node for this, there's a score of 0 to get there, and we've got an estimate of how far it is from our destination. The use of a PriorityQueue for the open set means that we automatically get the best entry off of it, based on our compareTo() method from earlier. Now we iterate until either we run out of nodes to look at, or the best available node is our destination: while (!openSet.isEmpty()) { RouteNode<T> next = openSet.poll(); if (next.getCurrent().equals(to)) { List<T> route = new ArrayList<>(); RouteNode<T> current = next; do { route.add(0, current.getCurrent()); current = allNodes.get(current.getPrevious()); } while (current != null); return route; } // ... When we've found our destination, we can build our route by repeatedly looking at the previous node until we reach our starting point.
Next, if we haven't reached our destination, we can work out what to do next: graph.getConnections(next.getCurrent()).forEach(connection -> { RouteNode<T> nextNode = allNodes.getOrDefault(connection, new RouteNode<>(connection)); allNodes.put(connection, nextNode); double newScore = next.getRouteScore() + nextNodeScorer.computeCost(next.getCurrent(), connection); if (newScore < nextNode.getRouteScore()) { nextNode.setPrevious(next.getCurrent()); nextNode.setRouteScore(newScore); nextNode.setEstimatedScore(newScore + targetScorer.computeCost(connection, to)); openSet.add(nextNode); } }); throw new IllegalStateException("No route found"); } Here, we're iterating over the connected nodes from our graph. For each of these, we get the RouteNode that we have for it – creating a new one if needed. We then compute the new score for this node and see if it's cheaper than what we had so far. If it is then we update it to match this new route and add it to the open set for consideration next time around. This is the entire algorithm. We keep repeating this until we either reach our goal or fail to get there. 5.4. Specific Details for the London Underground What we have so far is a generic A* pathfinder, but it's lacking the specifics we need for our exact use case. This means we need a concrete implementation of both GraphNode and Scorer. Our nodes are stations on the underground, and we'll model them with the Station class: public class Station implements GraphNode { private final String id; private final String name; private final double latitude; private final double longitude; } The name is useful for seeing the output, and the latitude and longitude are for our scoring. In this scenario, we only need a single implementation of Scorer. 
We're going to use the Haversine formula for this, to compute the straight-line distance between two pairs of latitude/longitude: public class HaversineScorer implements Scorer<Station> { @Override public double computeCost(Station from, Station to) { double R = 6372.8; // Earth's Radius, in kilometers double dLat = Math.toRadians(to.getLatitude() - from.getLatitude()); double dLon = Math.toRadians(to.getLongitude() - from.getLongitude()); double lat1 = Math.toRadians(from.getLatitude()); double lat2 = Math.toRadians(to.getLatitude()); double a = Math.pow(Math.sin(dLat / 2),2) + Math.pow(Math.sin(dLon / 2),2) * Math.cos(lat1) * Math.cos(lat2); double c = 2 * Math.asin(Math.sqrt(a)); return R * c; } } We now have almost everything necessary to calculate paths between any two pairs of stations. The only thing missing is the graph of connections between them. This is available in GitHub. Let's use it for mapping out a route. We'll generate one from Earl's Court up to Angel. This has a number of different options for travel, on a minimum of two tube lines: public void findRoute() { List<Station> route = routeFinder.findRoute(underground.getNode("74"), underground.getNode("7")); System.out.println(route.stream().map(Station::getName).collect(Collectors.toList())); } This generates a route of Earl's Court -> South Kensington -> Green Park -> Euston -> Angel. The obvious route that many people would have taken would likely be Earl's Court -> Monument -> Angel, because that's got fewer changes. Instead, this has taken a significantly more direct route even though it meant more changes. 6. Conclusion In this article, we've seen what the A* algorithm is, how it works, and how to implement it in our own projects. Why not take this and extend it for your own uses? Maybe try to extend it to take interchanges between tube lines into account, and see how that affects the selected routes? And again, the complete code for the article is available over on GitHub.
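As a standalone sanity check of the Haversine computation (the class name and the coordinates below are mine, picked for illustration — roughly Earl's Court and Angel), the result should land in the 7–8 km range:

```java
class HaversineCheck {
    // Great-circle distance in km, same formula as the HaversineScorer
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double R = 6372.8; // Earth's radius, in kilometers
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.pow(Math.sin(dLon / 2), 2)
                   * Math.cos(Math.toRadians(lat1))
                   * Math.cos(Math.toRadians(lat2));
        return R * 2 * Math.asin(Math.sqrt(a));
    }
}
```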
https://www.baeldung.com/java-a-star-pathfinding
On 10/19/2010 4:59 PM, jonathan wood wrote: > Greetings, > This is almost certainly a namespace issue. I'd suggest the > following 3 things: > - Check and/or post your parse sequence using SAXSVGDocumentFactory. > (less likely if unchanged during upgrades) I don't know how to do that, never worked with Batik until now. The source of the cocoon-batik-block is at > - Check and/or post (if of appropriate size) your raw input document. > (less likely if unchanged during upgrades) > The input document I'm using for testing is tiger.svg (from) works with cocoon 2.1 and fop-0.20.5. > - Check your classpath for duplicate (xmlparsing?) jars with > differing versions. The xercesImpl 2.8.1 was being included in the webapp; afaik the jdk 1.6 has it included. I changed the pom.xml to exclude xerces, compiled and still get the same error. > Happy to help further if needed... more detail always useful. All the detail is in the file. SVGBuilder.java is the SAXSVGDocumentFactory derived object. SVGSerializerNG.java is where the transcoding code is (notify function). And thanks for the help, I've been pulling my hair out with this for over a month. The change to cocoon 2.2 was painful enough (I don't like maven); I hope I can solve this last problem. :) --------------------------------------------------------------------- To unsubscribe, e-mail: batik-users-unsubscribe@xmlgraphics.apache.org For additional commands, e-mail: batik-users-help@xmlgraphics.apache.org
http://mail-archives.apache.org/mod_mbox/xmlgraphics-batik-users/201010.mbox/%3C4CBF44EE.3050607@spectron-msim.com%3E
Analyzing an SSTV recording…

Inspired by this webpage, I decided to write a simple zero-crossing analyzer, just like his. The code turns out to be remarkably simple, and would allow me to reverse engineer modes that aren't adequately documented. I called this program "analyze":

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sndfile.h>

int main(int argc, char *argv[])
{
    SNDFILE *sf;
    SF_INFO sfinfo;
    float *inp;
    float t, dt;
    float x;
    int i;
    float pcross = 0, cross, freq;

    if ((sf = sf_open(argv[1], SFM_READ, &sfinfo)) == NULL) {
        perror(argv[1]);
        exit(1);
    }
    fprintf(stderr, "%s: %d channel%s\n", argv[1], sfinfo.channels, sfinfo.channels > 1 ? "s" : "");
    fprintf(stderr, "%s: %dHz\n", argv[1], sfinfo.samplerate);
    fprintf(stderr, "%s: %lld samples\n", argv[1], sfinfo.frames);
    inp = (float *) calloc(sfinfo.frames, sizeof(float));
    fprintf(stderr, "::: reading %lld frames\n", sfinfo.frames);
    sf_read_float(sf, inp, sfinfo.frames);
    sf_close(sf);

    dt = 1.0 / sfinfo.samplerate;
    for (i = 0, t = 0; i < sfinfo.frames - 1; i++, t += dt) {
        if (inp[i] * inp[i+1] < 0) {
            /* we have a zero crossing */
            x = -inp[i] / (inp[i+1] - inp[i]);
            cross = t + x * dt;
            freq = 1.0 / (2 * (cross - pcross));
            printf("%f %f\n", cross, freq);
            pcross = cross;
        }
    }
    return 0;
}

The code is dead simple. It loads the sound file into memory and then figures out, via simple linear interpolation, the location of all the places where the signal crosses zero. Each time we have a zero crossing, we compute the frequency of the signal, which is just the reciprocal of twice the difference between the two latest crossing times. This program dumps out the time and frequency of each zero crossing, which you can then easily visualize with gnuplot. Like this:

Next step: generate some example sound files, and use them to reverse engineer some of the less well documented modes.
http://brainwagon.org/2014/03/12/analyzing-an-sstv-recording/
Handle UTF8 file with BOM
Tag(s): IO

From Wikipedia, the byte order mark (BOM) is a Unicode character used to signal the endianness (byte order) of a text file or stream. Its code point is U+FEFF. BOM use is optional, and, if used, it should appear at the start of the text stream. Beyond its specific use as a byte-order indicator, the BOM character may also indicate which of the several Unicode representations the text is encoded in. The common BOMs are: EF BB BF (UTF-8), FE FF (UTF-16 big-endian), FF FE (UTF-16 little-endian), 00 00 FE FF (UTF-32 big-endian) and FF FE 00 00 (UTF-32 little-endian).

UTF8 files are a special case because it is not recommended to add a BOM to them, since it can break other tools like Java. In fact, Java assumes a UTF8 file doesn't have a BOM, so if the BOM is present it won't be discarded and it will be seen as data.

To create an UTF8 file with a BOM, open Windows Notepad, create a simple text file and save it as utf8.txt with the encoding UTF-8. Now if you examine the file content as binary, you see the BOM at the beginning.

If we read it with Java:

import java.io.*;

public class x {
  public static void main(String args[]) {
    try {
      FileInputStream fis = new FileInputStream("c:/temp/utf8.txt");
      BufferedReader r = new BufferedReader(new InputStreamReader(fis, "UTF8"));
      for (String s = ""; (s = r.readLine()) != null;) {
        System.out.println(s);
      }
      r.close();
      System.exit(0);
    }
    catch (Exception e) {
      e.printStackTrace();
      System.exit(1);
    }
  }
}

?helloworld

This behaviour is documented in the Java bug database, here and here. There will be no fix for now because it would break existing tools like javadoc or xml parsers.

Apache Commons IO provides some tools to handle this situation. The BOMInputStream class detects the BOM and, if required, can automatically skip it and return the subsequent byte as the first byte in the stream. Or you can do it manually. The next example converts an UTF8 file to ANSI. We check the first line for the presence of the BOM and if present, we simply discard it.
import java.io.*; public class UTF8ToAnsiUtils { // FEFF because this is the Unicode char represented by the UTF-8 byte order mark (EF BB BF). public static final String UTF8_BOM = "\uFEFF"; public static void main(String args[]) { try { if (args.length != 2) { System.out .println("Usage : java UTF8ToAnsiUtils utf8file ansifile"); System.exit(1); } boolean firstLine = true; FileInputStream fis = new FileInputStream(args[0]); BufferedReader r = new BufferedReader(new InputStreamReader(fis, "UTF8")); FileOutputStream fos = new FileOutputStream(args[1]); Writer w = new BufferedWriter(new OutputStreamWriter(fos, "Cp1252")); for (String s = ""; (s = r.readLine()) != null;) { if (firstLine) { s = UTF8ToAnsiUtils.removeUTF8BOM(s); firstLine = false; } w.write(s + System.getProperty("line.separator")); w.flush(); } w.close(); r.close(); System.exit(0); } catch (Exception e) { e.printStackTrace(); System.exit(1); } } private static String removeUTF8BOM(String s) { if (s.startsWith(UTF8_BOM)) { s = s.substring(1); } return s; } }
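For readers who prefer to work at the byte level rather than on decoded strings, an equivalent check (my own helper, not from the original tip) looks for the literal UTF-8 BOM bytes EF BB BF:

```java
class Utf8Bom {
    // The UTF-8 encoding of U+FEFF is the byte sequence 0xEF 0xBB 0xBF
    static boolean hasBom(byte[] bytes) {
        return bytes.length >= 3
            && bytes[0] == (byte) 0xEF
            && bytes[1] == (byte) 0xBB
            && bytes[2] == (byte) 0xBF;
    }

    // Returns the payload with the BOM stripped, if present
    static byte[] stripBom(byte[] bytes) {
        if (!hasBom(bytes)) {
            return bytes;
        }
        return java.util.Arrays.copyOfRange(bytes, 3, bytes.length);
    }
}
```

Stripping at the byte level, before any Reader is involved, avoids the \uFEFF-as-data problem entirely.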
https://rgagnon.com/javadetails/java-handle-utf8-file-with-bom.html
Video Metadata Caching at Vimeo We at Vimeo offer tools and technology to host, distribute, and monetize videos. And with so many videos being created every day, our storage requirements are always on the increase. Did you ever wonder how we tackle this problem? Like you (probably), we look to the cloud; that’s where we store our video source files. Additionally, we include the location of the source file in a database with the metadata of every video on Vimeo. Which means that serving a video is a two-step process. The first step is to query the metadata database so that we know where to look for the video file. The second step is to access that file in cloud storage. The problem is, as the number of requests goes up, we need to query the metadata database more and more. You’d hardly notice it on a smaller scale, but as the load increases, performance goes down. Furthermore, to handle the load, we’d eventually need to increase the number of database nodes, which is an expensive proposition. One way to overcome this dilemma is to cache the metadata, because it rarely changes. Caching not only lowers our latency, but it also increases our capacity, because we can serve more requests in a shorter amount of time. To implement our latency and capacity caching needs, we chose groupcache. Groupcache Groupcache is a caching and cache-filling library that is available for Go. Written by Brad Fitzpatrick, the author of memcached, groupcache is an alternative to memcache that outperforms memcached in a number of scenarios. Like memcached, groupcache is a distributed cache, and it shards by keys to determine which peer is responsible for the key. However, unlike memcached, groupcache has a cache-filling mechanism during cache misses. The cache filling is coordinated with peers so that only the peer responsible for the key populates the cache and multiplexes the data to all the callers. 
The singleflight package helps with the cache filling by coalescing multiple requests for the same key, so only one call is made to the database, mitigating the thundering herd problem.

Another important feature of groupcache is that it doesn't need to run on a separate set of servers. It acts as a client library as well as a server and connects to its own peers. This reduces the number of configurations required to deploy an application, which makes deployments easier. However, this presents a new challenge when applications are running in a dynamically provisioned environment: discovering and monitoring groupcache peers.

Peer discovery and monitoring

The groupcache instances need to communicate with each other to request data from peers who are responsible for the specified key. We use Kubernetes for container orchestration, and Kubernetes assigns an IP address to every Pod (the smallest deployable Kubernetes object). We can't pass the addresses of all the Pods to the application, because IP addresses are dynamically assigned every time that we deploy. Furthermore, we wouldn't be able to inform the instances of changes in the peers (namely, the peers that were added or removed) if we passed the IP addresses to the application at startup.

How can the groupcache instances discover other running instances and monitor for changes in peers? We leveraged Kubernetes features to track groupcache peers and overcome this obstacle. Each Kubernetes Pod is an instance of the application, and since groupcache runs within the application, each Pod also serves as a groupcache peer. We deploy the Pods with application-specific labels and namespaces, which enables us to track changes (creates, updates, and deletes) with the watch API. By tracking the Pods, we can add the newly created or updated Pods to our groupcache peers and remove the deleted Pods. Each instance scans for changes in the Pods and gets the IP addresses of these Pods.
After acquiring the IP addresses, groupcache instances can request data from peers and access the distributed cache. With groupcache running without separate servers and Kubernetes tracking Pod changes, we have one more benefit: we can horizontally scale the application. Groupcache uses consistent hashing, so we can easily add more peers without significantly impacting performance and keep using the cache that already exists. Additionally, the allocated memory for the groupcache cache is divided and distributed across all instances. As we get more requests and need to handle the increased load, we can deploy additional Pods without incrementing the memory for each Pod. This keeps Pod size to a minimum while still affording us the ability to cache the data and handle the increased volume.

Caching keys

While groupcache has quite a few improvements over memcached, there is one area where it falls short: groupcache doesn't support versioned values. Without versioned values, a key/value pair is immutable. While this is great for ensuring that the cache isn't overwritten, it becomes a problem when the cached data changes; we would keep serving stale data until the cache is evicted from groupcache using the least recently used (LRU) policy.

While we can temporarily serve stale data when the metadata occasionally changes, we need a technique to invalidate the keys based on time so that we don't serve stale data forever. We came up with a cache key creation technique that involves adding a caching window to the cache key to invalidate the keys over time. This solution is based on an expiration period constant, which acts as the maximum amount of time that the cache key would be used in the cache before expiring. We added jitter to the cache window to spread out the cache expiration and prevent all the requests from going to the database at the same time, because otherwise all keys would be invalidated at the beginning of each cache window.
The jitter and the cache window number are calculated as follows:

jitter = key % expiration_period
cache_window_number = (time.Now() + jitter) / expiration_period

We can perform mathematical operations on our key, because it is a universally unique identifier (UUID) consisting of 32 hexadecimal digits. We use the modulus operation to calculate the jitter, so that we always get a number less than the maximum caching time. This technique calculates the same jitter every time that we get a request for the same cache key and changes the jitter based on the key. The jitter is then used to determine the caching window number.

The cache window number is appended to the end of the original key (key_cache-window-number), and the appended string acts as the cache key. By utilizing the current time, we ensure that the same cache window number is calculated for some period of time. By design, the cache window number quotient changes after enough time has passed, resulting in a new cache key (key_cache-window-number-2). The new cache key causes a cache miss, forcing the request to go to the database for data. The constant periodic expiration of cache keys ensures that we are continually renewing our cache and not serving stale data.

Results

You might be wondering how our performance changed after implementing groupcache, inserting the groupcache peer discovery, and utilizing the cache window solution to create cache keys. To measure the difference in performance, we decided to use the orijtech fork of groupcache, because this fork has OpenCensus metrics and other improvements.

The figure above captures the moments after deploying our application with groupcache to production, when we first started seeing requests with latency below 1 ms. Following the traces for these submillisecond requests confirmed that the data being served was from the cache instead of the database. Cache misses still took place, as seen by the higher latency requests.
However, the cache-filling mechanism stored the data from the cache misses in the cache and served that data for future requests. We also witnessed the cache-hit ratio rise as the overall cache size increased. Observing this behavior over a few hours also proved that the cache keys weren't expiring at the same time with our cache window solution. We also noticed that when the request volume decreased, the cache size decreased, as the LRU policy evicted the cache that we were no longer using.

Next steps

Overall, groupcache is a great solution for implementing our capacity and latency caching needs and maintaining our performance with increasing request volume. But we also see where the implementation can stand a little improvement. We believe in contributing to open-source projects, so we decided to fork from orijtech and create our very own Galaxycache. Be on the lookout for a post about this, which adds new features to groupcache and improves usability and configurability.

Join us

Do you like working on intriguing projects? We're hiring!
https://medium.com/vimeo-engineering-blog/video-metadata-caching-at-vimeo-a54b25f0b304
Hi,

I am converting my project to use an image resource file as hinted at in the changes.txt file. I have created a resource.py file with a header and images via img2py; the top few lines are at the end of this message. In the file dialog, Boa even shows that it recognizes the image resource file... so I guess my tag at the top is correct.

When I load up with the image resource module opened, I get the "no module named embeddedimage" message. I have hand-edited a file to use the resource file and this works fine from another IDE... so the Python's good, but Boa is not impressed. What have I missed? Does the answer lie on this link, which I cannot seem to access?

>>> Actually, I've posted example here:

I know it will be something simple.

python 2.5
wx 2.8

thanks
timb

#Boa:PyImgResource:

from wx.lib.embeddedimage import PyEmbeddedImage

# ***************** Catalog starts here *******************

catalog = {}
index = []

#----------------------------------------------------------------------
_1PH_Core_1_72x72Icon = PyEmbeddedImage(
    "iVBORw0KGgoAAAANSUhEUgAAAEgAAABICAYA
http://sourceforge.net/p/boa-constructor/discussion/127782/thread/1f99e489
#include <streams.h>

Definition at line 457 of file streams.h.

Definition at line 472 of file streams.h.

Read the specified number of bits from the stream. The data is returned in the nbits least significant bits of a 64-bit uint.
Definition at line 477 of file streams.h.

Buffered byte read in from the input stream. A new byte is read into the buffer when m_offset reaches 8.
Definition at line 464 of file streams.h.

Definition at line 460 of file streams.h.

Number of high order bits in m_buffer already returned by previous Read() calls. The next bit to be returned is at this offset from the most significant bit position.
Definition at line 469 of file streams.h.
https://doxygen.bitcoincore.org/class_bit_stream_reader.html
Opened 7 years ago
Closed 6 years ago

#17566 closed Uncategorized (fixed)

RegexURLResolver.__repr__(self) is not unicode safe.

Description

This is a sister to the (fixed) tickets #15795, #16158. RegexURLResolver.__repr__(self) can throw a UnicodeDecodeError. The problem comes from the parameter self.urlconf_name. The exception is "UnicodeDecodeError: 'ascii' codec can't decode byte... (etc.)."

A simple (and possibly naive) solution is to replace the occurrence of self.urlconf_name with str(self.urlconf_name).decode('utf-8') in __repr__(self). This seems to fix it for now, but someone who is working on this module might want to take a closer look.

Attachments (3)

Change History (15)

comment:1 follow-up: 2 Changed 7 years ago by

comment:2 Changed 6 years ago by

I have noticed the same bug, but I believe it is a consequence of having an app_name that has non-ascii characters in it. Maybe it is not supposed to, but then, I couldn't find another way to display a localised app_name in Django's admin page. It is triggered only when I try to open a missing link during the generation of technical_404_response. Here is a log:

Django version 1.4, using settings 'eprod.settings'
Development server is running at
Quit the server with CONTROL-C.
Traceback (most recent call last):
  File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
    self.result = application(self.environ, self.start_response)
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/handlers.py", line 67, in __call__
    return self.application(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/wsgi.py", line 241, in __call__
    response = self.get_response(request)
  File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 146, in get_response
    response = debug.technical_404_response(request, e)
  File "/usr/local/lib/python2.7/dist-packages/django/views/debug.py", line 432, in technical_404_response
    'reason': smart_str(exception, errors='replace'),
  File "/usr/local/lib/python2.7/dist-packages/django/utils/encoding.py", line 116, in smart_str
    return str(s)
  File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 235, in __repr__
    return smart_str(u'<%s %s (%s:%s) %s>' % (self.__class__.__name__, self.urlconf_name, self.app_name, self.namespace, self.regex.pattern))
  File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 235, in __repr__
    return smart_str(u'<%s %s (%s:%s) %s>' % (self.__class__.__name__, self.urlconf_name, self.app_name, self.namespace, self.regex.pattern))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 18: ordinal not in range(128)

I hope this helps to solve the problem. In case app_name has to be ascii, then what is the proper way to force the django admin application to show non-ascii application names? In fact, I would prefer to have plain ascii urls but a display name, verbose_name or similar displayed in admin pages.

Vitalije

comment:3 follow-up: 4 Changed 6 years ago by

If app_name contains non-ascii characters, it has to be a Unicode string, which I suspect was not the case in your app.
Changed 6 years ago by

test django project that demonstrates this bug

comment:4 follow-up: 5 Changed 6 years ago by

> If app_name contains non-ascii characters, it has to be a Unicode string, which I suspect was not the case in your app.

Well, you suspect wrong. It is indeed unicode. Here is one of my models:

class Dobavljac(models.Model):
    class Meta:
        app_label = u'Продавница'
        db_table = u'prodavnica_dobavljac'
        verbose_name = u'Добављач'
        verbose_name_plural = u'Добављачи'
    ...

As a matter of fact, I just started a new django project and a new application in it using the django-admin.py script. Then I added two very simple models with a unicode app_name like the one above, created admin.py, registered those two models, and tested that it produces the same error. I attached the code if you would like to test it yourself. Try to open some url that is not in urlpatterns, like "/admin/Продавница/rwerew", and django will try to produce a technical_404_response but will throw an exception like the one mentioned before.

HTH
Vitalije

comment:5 follow-up: 6 Changed 6 years ago by

app_label in the Meta options is only used to connect the model to the right app instance. In this case, the only valid value is "test_app_name", and here it is not required because the models are located in a models.py file at the first level of the test_app_name package. This is largely unrelated to the original ticket: the app_name is derived from the import path of the app, and assigning a different Meta.app_label does not actually change the app_name, but in this case the resolver isn't even getting that far.

comment:6 Changed 6 years ago by

> This is largely unrelated to the original ticket: the app_name is derived from the import path of the app, and assigning a different Meta.app_label does not actually change the app_name, but in this case the resolver isn't even getting that far.

No, I don't think it is unrelated to the original ticket. But I must also admit that I was wrong to believe that app_name is the problem.
After thorough investigation, I found that app_name is plain ascii = 'admin'. But I found that self.regex.pattern is unicode. When I replaced the definition of the __repr__ method

def __repr__(self):
    return smart_str(u'<%s %s (%s:%s) %s>' % (self.__class__.__name__, self.urlconf_name, self.app_name, self.namespace, self.regex.pattern))

with the following definition

def __repr__(self):
    return '<%s %s (%s:%s) %s>' % tuple(map(smart_str, (self.__class__.__name__, self.urlconf_name, self.app_name, self.namespace, self.regex.pattern)))

the problem was solved.

PS: it is the __repr__ method of the django.core.urlresolvers.RegexURLResolver class.

comment:7 follow-up: 8 Changed 6 years ago by

If you have a URL pattern with Unicode characters, I think it should also be prefixed with 'u'. I still think that the current Django code is correct.

comment:8 follow-up: 9 Changed 6 years ago by

> If you have a URL pattern with Unicode characters, I think it should also be prefixed with 'u'. I still think that the current Django code is correct.

Have you tested the given django project or looked at the attached code, or are you just guessing that I forgot to put u before a string literal? I didn't add any unicode url pattern, but the admin application did, because it uses app_name as part of its url patterns. So the problem is generated inside django and IMHO should be fixed there.

Besides, I placed

print self.regex.pattern, type(self.regex.pattern)

right before the original return statement, and in the console log it was written as type unicode. So it actually has the proper leading "u", i.e. it IS really unicode, but the other arguments are strings. I don't know why python has a problem with coercing all those strings into unicode, but I have verified that in my test application the only source of the reported error is the above __repr__ method of the RegexURLResolver class, and inside it, the only problem is regex.pattern being unicode. I changed that method locally in my own installation of django and it works for me.
If it is not important for other users or developers to make that correction part of their own installations / repository, that's fine with me. If it is, then here is my 2 cents contribution.

Vitalije

PS: I feel like my reporting a bug is not very welcomed. I am grateful for all the time and energy that django developers have put into it, and I want to give something back to the community. It took me some time and effort to register, to write a report in English (which is not my primary language), and to debug the code and report back the solution I found, and I don't complain about that effort and time. But I would appreciate (and I believe some other programmers like me would also appreciate) if our bug reports were investigated more thoroughly before being replied to. The next time I (or someone like me) find a bug in django, it can happen very easily that we wouldn't care so much to report it.

I am not being impatient while waiting for a response from the developers, so let them take as much time as they need. But when they decide to respond, please let them take our reports seriously. Let them read our reports, test them, and test their own hypotheses before making a reply. It would make us more willing to report. If there is something wrong with my report, then please educate me on how to make a better report, and by doing so you would make me feel that I owe you more good reports. That way there will soon be a great number of good reports for every single bug we users find. It is very easy to turn people away (not me, I believe), but not so easy to inspire them to get involved. Having said that, I do hope that we will understand each other better in the future.

comment:9 Changed 6 years ago by

> PS: I feel like my reporting a bug is not very welcomed. (...)

I'm very sorry if I gave you the impression that your report was not welcome. It's not the case.
In written communication, it's so easy to overlook the tone of responses, and I probably did not address your issue with enough care. I will continue to investigate, but if any other reader feels like working on the issue, I have no monopoly on it :-)

comment:10 Changed 6 years ago by

I could (at last) reproduce the issue. The culprit is self.urlconf_name, whose __repr__ already returns a utf-8-encoded string (when self.urlconf_name is a list of RegexURLPattern), which then cannot be re-inserted into a unicode string without being decoded first. It is even reproducible in master. Could you please test the following patch?

Changed 6 years ago by

Apply force_unicode to self.urlconf_name

Changed 6 years ago by

Patch for master with tests

comment:11 Changed 6 years ago by

comment:12 Changed 6 years ago by

The fix committed for #17892 [28fd876bae15df747d164dcae229840d2cf135ca] should have fixed this issue also. How did you encounter this error? urlconf_name is the import path of the URLconf module, and Python doesn't support unicode module names (at least until Python 3.2), so it should always be an ASCII string.
https://code.djangoproject.com/ticket/17566
Recently, I read the article "A/B Testing Instant.Page With Netlify and Speedcurve" by Tim Kadlec. He measured whether instant.page speeds up his website for real users, showcasing how Speedcurve and Netlify features made this very easy.

I decided to recreate this experiment for our documentation site because I was curious whether using those small scripts could make a difference on our already very fast site. We are not hosted on Netlify, and we don't use Speedcurve, so I had to write it from scratch.

Hypothesis: Adding instant.page or Quicklinks will significantly lower page load times for users.

Hint: If you are not interested in the technical implementation, jump to the end of the article to see the charts and conclusions.

I used the simplest, naive method of A/B testing:

- When a user enters the site, decide if this is going to be a user with the test script or not (50%/50%). Save this value in a session cookie so that this session will be consistent.
- Send the measured values to the server.
- Draw a chart to visualize results.

1. Assign the user to a test group

platformOS uses Liquid markup as a templating engine, so this is where I perform that logic. There is no native filter to randomize numbers, so I used a snippet I found on the internet:

{%- assign min = 1 -%}
{%- assign max = 3 -%}
{%- assign diff = max | minus: min -%}
{%- assign
</script>
<script>
  window.instantpage = true;
  window.addEventListener('load', () => {
    quicklink.listen();
  });
</script>
{%- endif %}

2.
Save the results to a database

First, let's create a model that will hold all data:

name: performance
properties:
- name: everything
  type: integer
- name: instantpage
  type: boolean

Its name is performance and it has two properties:

- Everything - Integer - holds the value from the point in time where the request started to the DOM becoming interactive
- Instantpage - Boolean - holds information on whether it is a version with the instant.page/quicklinks script or not

Now we need an endpoint where the browser will send the data:

---
layout: ''
method: post
slug: __performance-report
response_headers: '{ "Content-Type": "application/json" }'
---
{% comment %} Parse arguments from JSON to a Hash, so we can iterate over it etc. {% endcomment %}
{% parse_json arguments %}
  {{ context.post_params }}
{% endparse_json %}

{% comment %} Using a GraphQL mutation, forward data to the database. {% endcomment %}
{% graphql g, args: arguments %}
  mutation create_performance_report($everything: Int!, $instantpage: Boolean!) {
    model_create(
      model: {
        model_schema_name: "performance"
        properties: [
          { name: "everything", value_int: $everything },
          { name: "instantpage", value_boolean: $instantpage }
        ]
      }
    ) {
      id
    }
  }
{% endgraphql %}

{% comment %}
  Try to assign errors to the errors variable.
  Try to get the ID of the record saved in the DB.
{% endcomment %}
{% assign errors = g | fetch: "errors" | json %}
{% assign id = g | fetch: "model_create" | fetch: "id" | plus: 0 %}

{% comment %}
  If there is an ID returned by the server, return it as a response.
  If there is no ID, return errors as a response.
{% endcomment %}
{% if id %}
  { "id": {{ id }} }
{% else %}
  { "errors": {{ errors }} }
{% endif %}

To send the observed performance values to the above page, we used a simple AJAX request.
const nav = performance.getEntriesByType('navigation')[0];
const DELAY = 100;

const report = (data) => {
  fetch('/__performance-report', {
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(data),
  })
    .then((res) => res.json())
    .catch(console.log);
};

// Make sure it gets readings after the page is ready by pushing it out of the main thread
setTimeout(() => {
  const perf = {
    instantpage: !!window.instantpage,
    everything: nav.domInteractive - nav.requestStart,
  };
  if (nav.type === 'navigate') {
    report(perf);
  }
}, DELAY);

And that's it. After deployment, data collection from users started. I let it run for a couple of weeks, and now it's time to see the results.

3. Visualizing the results

First, we need to pull out the data from the DB. As usual, GraphQL will do the heavy lifting:

query get_performance_report($instantpage: Boolean!) {
  models(
    per_page: 1000
    sort: { created_at: { order: ASC } }
    filter: {
      properties: [
        { name: "instantpage", value_boolean: $instantpage }
        { name: "everything", range: { gt: "0", lt: "4000" } }
      ]
    }
  ) {
    results {
      x: created_at
      y: property(name: "everything")
    }
  }
}

Why am I not pulling anything above 4000? Because I saw that some outliers would skew the scale on the charts too much and make them much less readable when plotted. I decided to remove extreme values from the dataset.

Now we need a page to show it on. I decided to use the Chart.js library to draw a chart. It's small, simple, and fast. The dashboard page is pretty long; you can see the source code on our GitHub.
And the last step: initialize Chart.js, which was pure pleasure 😃

var el = document.getElementById('comparison');
var ctx = el.getContext('2d');
var chart = new Chart(ctx, {
  type: 'scatter',
  data: {
    datasets: window.datasets,
  },
  options: {
    scales: {
      yAxes: [{ ticks: { suggestedMin: 100, suggestedMax: 4000 } }],
      xAxes: [
        {
          type: 'time',
          time: {
            unit: 'day',
          },
        },
      ],
    },
  },
});

Conclusions

All results on one scatter chart:

It does not look like there is a clear winner here. Let's look at the largest groups of points where there is only one variant on the chart.

Only clean data points:

Quicklinks data points only:

It looks like in both cases, everything takes around 500 ms and spreads up to 2000 ms.

Our hypothesis was that instant.page (tested in the first week, then switched to quicklink.js) makes websites faster. In our case, it doesn't look like it works as well as advertised. We decided not to go forward with either script. Sending less JS down the wire and making sure your website is just fast seems like a better bet.

We have reached the point of diminishing returns on our documentation website. It is so fast that it is hard to squeeze more out of it, especially on the frontend, without sacrificing features. Nonetheless, I'm glad I did the experiment, because it was on my mind for a long time (long before I saw that Tim Kadlec did it) and now I finally know.
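Eyeballing a scatter plot only goes so far. A small helper (hypothetical, not part of the original dashboard) can compare the two variants numerically, using the { x, y } points the GraphQL query already returns:

```javascript
// Hypothetical helper, not from the original dashboard: compares the median
// "everything" timing of the two variants from { x, y } data points.
const median = (values) => {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
};

const compareVariants = (control, variant) => {
  const controlMedian = median(control.map((p) => p.y));
  const variantMedian = median(variant.map((p) => p.y));
  return {
    control: controlMedian,
    variant: variantMedian,
    delta: variantMedian - controlMedian, // negative means the variant is faster
  };
};

// Example with made-up timings (milliseconds):
const result = compareVariants(
  [{ y: 520 }, { y: 610 }, { y: 480 }],
  [{ y: 500 }, { y: 590 }, { y: 470 }],
);
console.log(result); // { control: 520, variant: 500, delta: -20 }
```

Medians resist the same outliers that were excluded from the chart query, so a delta close to zero is another way of saying "no clear winner".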
https://dev.to/platformos/we-couldn-t-go-faster-using-quicklinks-or-instant-page-9o2
David Kastrup <address@hidden> writes:

> Nathan <address@hidden> writes:
>
>> \version "2.14.2"
>>
>> contempPath = #'((moveto 0.0 0.0)
>>                  (curveto -1.1 1.1 -0.5 1.5 0.5 0.5)
>>                  (lineto 1.1 1.1)
>>                  (closepath))
>>
>> #(define-markup-command (contempSignMarkup layout props) ()
>>    (interpret-markup layout props
>>      (markup #:override '(filled . #t) #:path 0.25 contempPath)))
>>
>> contempSign = \markup \contempSignMarkup
>>
>> \relative c'' {
>>   c16-.^\contempSign r8
>> }
>>
>> Feel free to adjust the points in \contempPath so the shape is more to
>> your liking.
>>
>> Note that you can't do c16\contempPath. You have to attach it using -,
>> ^, or _.
>
> Two notes: why don't you call the markup command just contempSign?
> Since markup commands have a different namespace, contempSignMarkup is
> quite redundant.
>
> And why don't you define
>
> contempSign = -\markup \contempSignMarkup
>
> if contempSign is supposed to be used as postevent anyway? Then you
> _can_ do c16\contempSign [sic].

Two more notes: StudlyCaps are not the style used for markup commands; you'd use contemp-sign for the markup. And putting the path in a separate variable to keep the markup macro from messing with it seems awkward. There is also no need for an option-less markup command.

And putting this together with some goodness from the current release candidate for the next stable version (and assuming that the sign belongs up by default as a postevent) we get

\version "2.15.30"

#(define-markup-command (contemp-sign layout props) ()
   (interpret-markup layout props
     #{ \markup \override #'(filled . #t)
        \path #'0.25
              #'((moveto 0.0 0.0)
                 (curveto -1.1 1.1 -0.5 1.5 0.5 0.5)
                 (lineto 1.1 1.1)
                 (closepath)) #}))

contempSign = ^\markup \contemp-sign

\relative c'' {
  c16-.\contempSign r8
}

--
David Kastrup
http://lists.gnu.org/archive/html/lilypond-user/2012-02/msg00549.html
SMS notifications are widely used in modern web and mobile applications. Applications can engage with clients effectively and programmatically to facilitate fundamental user interactions in your app, such as two-factor authentication, password resets, user phone number verification, etc. You can use an SMS API to enable SMS notifications in your applications. There are multiple options available, and this article will discuss four different ways of sending SMS with Node.js to help you decide which option is best for your needs.

1. Using Twilio API

Twilio's Programmable SMS API allows you to integrate powerful messaging capabilities into your apps with Node.js. You can send and receive SMS messages, track their delivery, schedule SMS messages to transmit at a later time, and access and edit message history using this REST API.

As prerequisites to use Twilio with Node.js, you will need:

- A Twilio account, either free or paid.
- Node.js installed on your machine.
- An SMS-enabled phone number (you can search for and buy one in the Twilio console).

Twilio uses webhooks to notify your app when events like receiving an SMS message occur. Twilio sends an HTTP request (typically a POST or a GET) to the URL you specified for the webhook when an event happens. The event's information, such as the incoming phone number or the body of an incoming message, will be included in Twilio's request. Some webhooks are just informative, while others expect your web app to respond. In all of these cases, you must tell the Twilio API to which URL these webhooks are sent.

Advantages of using Twilio SMS API
- Support for multiple programming languages.
- On-demand billing.
- Good documentation and community support.

Disadvantages of using Twilio SMS API
- Not mobile-friendly.
- Expensive compared to other APIs.
- Hard to get started without a software development background.

Tutorial: How to send SMS using Twilio SMS API

The following is the base for all URLs in the Twilio documentation.
Sending a text message with the Twilio API is straightforward, requiring roughly ten lines of code. Twilio simplifies the process by providing a helper library. To begin, use npm to install the twilio-node library from the terminal.

npm install twilio

Next, create a file (sms.js) and open it in the text editor. Load the twilio-node helper library at the top of the file and use it to create a Twilio REST client. dotenv is then used to populate your environment variables with your account credentials. You can find the credentials you need for this step in the Twilio Console.

const twilio = require('twilio');
require('dotenv').config();

const accountSid = process.env.TWILIO_ACCOUNT_SID;
const authToken = process.env.TWILIO_AUTH_TOKEN;
const client = twilio(accountSid, authToken);

Then all you have to do is use the client to send an SMS from your Twilio phone number to your cell phone number. Make sure the phone numbers are replaced with your Twilio and cell phone numbers.

client.messages.create({
  from: process.env.TWILIO_PHONE_NUMBER,
  to: process.env.CELL_PHONE_NUMBER,
  body: "Send SMS using Twilio Api in Node.js!"
}).then((message) => console.log(message.sid));

Then, by running the following command in the terminal, you can receive the SMS in seconds.

node sms.js

And that's it: you can now successfully send messages with the Twilio Programmable SMS API and the Node.js helper library.

2. Using MessageBird API

MessageBird provides a variety of SMS delivery options. You can, for example, send SMS via email or an API connection. In addition, users can always check the status of their messages, because each one is assigned a unique random ID. The SMS API has an HTTP, RESTful endpoint structure and an access key for API authorization. The request and response payloads are formatted as JSON with UTF-8 encoding and URL-encoded values.

As prerequisites, you should have:

- A Node.js development environment.
- A project directory with the MessageBird SDK (this helper library handles interactions between your Node.js application's code and the MessageBird API).
- A MessageBird account.

Advantages of using MessageBird API
- User-friendly dashboard with features like a flow builder.
- Cost-effective pricing.
- Can be easily integrated with native apps.

Disadvantages of using MessageBird API
- Deliverability problems with some phone numbers.
- Lack of plugins.

Tutorial: How to send SMS using MessageBird API

The base URL for all URLs in MessageBird's SMS API documentation is as follows.

As the next step, create a file (sms.js) and open it in a text editor. Then include the MessageBird SDK using require().

var messagebird = require('messagebird')('YOUR-API-KEY');

The SDK only accepts one argument, which is your API key; replace the string YOUR-API-KEY accordingly. Then you can send a message via messagebird.messages.create() and feed it the originator (the sender), recipients (the phone numbers that will receive the message), and body (the message's content). You can use the number you purchased as the originator. The message's body is limited to 160 characters; otherwise, it will be divided into many pieces. Make sure to replace the values in the sample code with actual data when testing.

messagebird.messages.create({
  originator : '31970XXXXXXX',
  recipients : [ '31970YYYYYYY' ],
  body : 'Hello World, I am a text message and I was hatched by Javascript code!'
}

If everything goes properly, you'll get a response; if something goes wrong, you'll get an error response. As shown below, a callback function is used to manage this.

function (err, response) {
  if (err) {
    console.log("ERROR:");
    console.log(err);
  } else {
    console.log("SUCCESS:");
    console.log(response);
  }
});

3.
Using Plivo API Plivo API makes it easier to integrate communications into your code by allowing you to send messages to any phone number using HTTP verbs and standard HTTP status codes. Other use cases include sending alerts and receiving delivery status updates. Advantages of using Vonage SMS API - Anyone can easily get started. - User-friendly UI/UX. - Seamless customization. Disadvantages of using Vonage SMS API - Documentation is a bit challenging to understand. - Lack of error messages. Tutorial: How to send SMS using Vonage SMS API 1{version}/ Prerequisites for Plivo are: - Plivo account. - Plivo phone number - NodeJS development environment. - Plivo's Node SDK. To begin, create a new js file (sms.js) and use npm to install Express and the Plivo Node.js SDK. 1 $ npm install express plivo Then you can create a new instance of the Plivo client via messages.create() and feed it the src(the sender id ), dst (the phone numbers who will receive the message), and text (the content of the message). You can use the Plivo console to replace the auth placeholders with your authentication credentials. Actual phone numbers should be substituted for the placeholders (src and dst). Make sure that it's in E.164 format. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 var plivo = require('plivo'); (function main() { 'use strict'; var client = new plivo.Client("<auth_id>", "<auth_token>"); client.messages.create( { src: "<sender_id>", dst: "<destination_number>", text: "Hello, from Node.js!", } ).then(function (response) { console.log(response); }); })(); Keeping your credentials in the auth id and auth token environment variables will be much safer. It will allow you to start the client with no arguments, and Plivo will automatically fetch the information from the environment variables. Finally, save and execute the program to see if you received an SMS on your phone. 1 $ node SMS.js You can also keep track of the status of your outbound messages with Plivo. 
First, set up a server endpoint and register its URL and HTTP method with Plivo. Plivo will then call this endpoint with the latest message details whenever the message status changes.

Multichannel Notification Services

For this demonstration, I will be using Courier, since it supports well-known SMS providers like Twilio, Plivo, and MessageBird.

Creating a Courier account

You can easily create a free Courier account on the Courier website.

Creating a New Channel

After logging in, run the npm install @trycourier/courier command and copy the auto-generated code to your project.

import { CourierClient } from "@trycourier/courier";

const courier = CourierClient({ authorizationToken: "pk_prod_Y2Y3BK4MAM40YTN029MVR64YFKR0" });

const { requestId } = await courier.send({
    message: {
        to: {
            phone_number: "5558675309",
        },
        template: "NEDRQK62TWM70MHT84S6FKSXM4VT",
        data: {
            test: "test",
        },
    },
});

Note: You can customize the above code based on your requirements. Please refer to the Courier documentation for more details.

Conclusion

This article serves as a compilation of four summarized tutorials on how to send SMS using Node.js. With a basic understanding of Node.js, you should be able to choose between the four options above and enhance the functionality of your web application by sending SMS notifications. Thank you for reading.
https://www.courier.com/guides/nodejs-send-sms/
This article is a sponsored article. Articles such as these are intended to provide you with information on products and services that we consider useful and of value to developers.

Did you know that optical character recognition (OCR) can bring forth your artistry? Have you seen the newspaper blackout poems? Using OCR I was able to create a simple program that allowed me to create just that. This whitepaper shows how I used OCR Xpress for Java to OCR a scanned newspaper, redact the key words, and accomplish my goal.

Redaction is a very popular means of removing sensitive material from any document. However, it can also be entertaining. There were a few functionalities that I required before I channeled my inner Walt Whitman. The first was OCR, the second was word/phrase search, the third was redaction or, in my case, negated redaction, and the final thing was saving it to a file. In this article, I will briefly go over how I was able to create a small application using OCR Xpress.

Getting OCR up and running was my first step to becoming a modern-day poet. OCR is the conversion of an image (or images) containing typed or printed text to an electronic output. OCR has many use cases for many different companies and also has a practical need for the blind and visually impaired. My intent, however, was much more whimsical.

//required accusoft dependencies
package com.accusoft.ocrxpress.samples;
import com.accusoft.ocrxpress.*;
//java dependencies
import java.awt.Color;
import java.awt.image.*;
import javax.imageio.*;
import java.io.*;
import java.util.*;
//Display Image
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.ImageIcon;
import java.awt.FlowLayout;

I took the Memory sample that OCR Xpress was packaged with and added a NewspaperBlackout class. Above are the imports and the package that I needed within my class. There were a few different ways to code this application, but I decided to create a couple of helper functions.
Below I used InputString, DisplayImage, ConvertToBlack, and ConvertRect.

public class NewspaperBlackout {

    public static String InputString() {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Type your poem. <Press Enter for default>");
        return scanner.nextLine();
    }

This helper function allows you to either hit "Enter" for the default poem, or type your own poem within the console window.

    public static void DisplayImage(BufferedImage bi) {
        ImageIcon icon = new ImageIcon(bi);
        JFrame frame = new JFrame();
        frame.setLayout(new FlowLayout());
        frame.setSize(bi.getWidth(), bi.getHeight());
        JLabel lbl = new JLabel();
        lbl.setIcon(icon);
        frame.add(lbl);
        frame.setVisible(true);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }

This helper function displays the final output in a JFrame.

    public static void ConvertToBlack(BufferedImage bi) {
        Color c = new Color(0, 0, 0); //black pixel
        for (int x = 0; x < bi.getWidth(); x++) {
            for (int y = 0; y < bi.getHeight(); y++) {
                bi.setRGB(x, y, c.getRGB());
            }
        }
    }

This helper function converts the image to completely black. There are many different ways to accomplish this, and I recommend using the method you feel most comfortable with.

    public static java.awt.Rectangle ConvertRect(com.accusoft.ocrxpress.Rectangle accusoftRect) {
        java.awt.Rectangle javaRect = new java.awt.Rectangle();
        javaRect.x = accusoftRect.getLeft();
        javaRect.y = accusoftRect.getTop();
        javaRect.width = accusoftRect.getRight() - accusoftRect.getLeft();
        javaRect.height = accusoftRect.getBottom() - accusoftRect.getTop();
        return javaRect;
    }

This helper function takes in an Accusoft rectangle, which has Top, Bottom, Right, and Left, and converts it to the standard Java rectangle with x, y, width, and height.
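As the author notes, there are many ways to black out the image. One alternative to the per-pixel setRGB loop is to paint the whole image in a single call with Graphics2D. This is a sketch under an assumed class name, not part of the original sample:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class BlackoutAlt {
    // Equivalent in effect to the article's ConvertToBlack, but fills the
    // image with one rectangle instead of setting each pixel individually.
    public static void convertToBlack(BufferedImage bi) {
        Graphics2D g = bi.createGraphics();
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, bi.getWidth(), bi.getHeight());
        g.dispose(); // release the graphics context
    }
}
```

For large images this tends to be faster than the nested loop, and it reads more clearly.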
    public static void main(String[] args) throws OcrxException {
        String inputImagePath = "images/NP01.bmp";
        BufferedImage originalImg = null, outputImg = null;
        //Load and display original image
        try {
            originalImg = ImageIO.read(new File(inputImagePath));
        } catch (IOException e) {
            e.printStackTrace();
            return;
        }

Once I had the newspaper scanned and saved as a BMP image, I was ready to load my image into a BufferedImage by calling ImageIO.read.

        OcrXpress ocrx = new OcrXpress();
        initializeLicensing(ocrx);
        RecognitionParameters parameters = new RecognitionParameters();
        parameters.setLanguage(Language.ENGLISH);
        Document document = ocrx.recognizeToMemory(parameters, originalImg);

Image scanned to BMP? Check. BMP loaded as a BufferedImage? Check. Now onto the next step of my project. OCR Xpress provided a few options: output to file, output to PDF, and output to memory. For my needs, I wanted to use recognizeToMemory. I needed to initialize the OCR Xpress engine and the RecognitionParameters. Then I simply called recognizeToMemory to OCR the image to memory.

Now that I had my newspaper in an electronic form, I could search for words/phrases. This also allowed me to find the rectangle area of each word/phrase that I searched for. I was able to add these to a list of RECTs, which I then merged with my "blackout" page.

        String searchString = InputString();
        if (searchString.isEmpty()) {
            searchString = "You give something possible by psychic sign passed forth on networks?";
        }
        String[] searchWords = searchString.split("\\s+");

Using the helper function InputString, all I needed to do was pass in a default value and populate the string array with Java's split function, using either the user's input or the default value. This part was the most difficult; I actually had to stop and think of a poem to write, using the words given in the article.
"You give something possible by psychic sign passed forth on networks?"

OK, so not the best, but writing poetry is a lot harder than writing code. The next step (which seemed to be the most difficult) was also made easy. First, let me explain redaction and what it's used for. Redaction is the process of censoring or obscuring part of a text for legal or security purposes. When someone says redaction, you might think top secret "your eyes only" documents. However, every company has sensitive material that it may wish to redact, such as customer data, company data, or employee data, to name a few. However, my redaction needs are more of the "I love you" variety. Once the text was redacted, I was then able to merge the redacted areas with the original image that was converted to black.

        try {
            outputImg = ImageIO.read(new File(inputImagePath));
            ConvertToBlack(outputImg);
        } catch (IOException e) {
            e.printStackTrace();
            return;
        }

There are a few different ways to copy the original image, but for simplicity I created a new BufferedImage, outputImg. For this image, I needed to convert all pixels to black. This is where the ConvertToBlack helper function comes into play.

        int curWord = 0;
        for (Word word : document.getWords()) {
            if (curWord >= searchWords.length) break;
            if (!word.getText().equals(searchWords[curWord])) continue;
            Raster img = originalImg.getData(ConvertRect(word.getArea()));
            outputImg.setData(img);
            curWord++;
        }
        DisplayImage(outputImg);
    }
}

I needed to search each word and extract the area corresponding to the image's coordinates. OCR Xpress provided a getArea() function that did just that. Once I had the area of the word I was searching for, I needed to convert the RECTs, get the pixel data, and then set the data on the outputImg image before moving to the next word. Then I could display my newspaper blackout poem.

Before you run off and create your own newspaper blackout poems, let's quickly summarize what was mentioned above.
I described what OCR is and how many companies find it useful in everyday practice. I also went over redaction and how companies and individuals benefit from its use. Lastly, I showed you how I used OCR and redaction for an interesting and fun project. I would encourage you to try this project for yourself, and share your "artsy" side with the world.

You can download OCR Xpress and this example from the following links:

OCR Xpress:

To obtain an evaluation license, you will need to contact Accusoft's support. They will also be able to answer any questions that you might have.

OCR Xpress & NewspaperBlackout Application:

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://codeproject.freetls.fastly.net/Articles/1121238/Programmable-Poetry-Using-OCR?pageflow=FixedWidth
When organizations purchase Microsoft software via a volume licensing agreement, they often receive Microsoft Software Assurance Training Vouchers, which can be redeemed for free training. Webucator offers volume discounts for public classes when you buy 25 or more class days, up to 30+% off! Microsoft Word is used to create, revise, and save documents for printing and future retrieval. This Word training class is the first in a series of three Microsoft 2007 courses. It will provide you with the basic concepts required to produce basic business documents. Each student in our Live Online and our Onsite classes receives a comprehensive set of materials, including course notes and all the class examples. Our computer technical requirements and setup process is easy, with support just a click away.
http://www.webucator.com/microsoft/course/introduction-microsoft-word-2007-training.cfm
Every time a program opens a resource, such as a file or network connection, it is important to release the resource once you are done with it. Before Java 7, the resource had to be closed in a finally section of a try-catch block:

private static void printFileJava6() throws IOException {
    FileInputStream input = null;
    try {
        input = new FileInputStream("file.txt");
        int data = input.read();
        while (data != -1) {
            System.out.print((char) data);
            data = input.read();
        }
    } finally {
        if (input != null) {
            input.close();
        }
    }
}

Since Java 7, the try-with-resources statement closes the resource automatically:

private static void printFileJava7() throws IOException {
    try (FileInputStream input = new FileInputStream("file.txt")) {
        int data = input.read();
        while (data != -1) {
            System.out.print((char) data);
            data = input.read();
        }
    }
}

The try-with-resources statement can be used with any object that implements the Closeable or AutoCloseable interface. It ensures that each resource is closed by the end of the statement. The difference between the two interfaces is that the close() method of Closeable throws an IOException, which has to be handled in some way.

In cases where the resource has already been opened but should be safely closed after use, one can assign it to a local variable inside the try-with-resources:

private static void printFileJava7(InputStream extResource) throws IOException {
    try (InputStream input = extResource) {
        ... //access resource
    }
}

The local resource variable created in the try-with-resources constructor is effectively final.

A common mistake for Java beginners is to use the == operator to test if two strings are equal. For example:

public class Hello {
    public static void main(String[] args) {
        if (args.length > 0) {
            if (args[0] == "hello") {
                System.out.println("Hello back to you");
            } else {
                System.out.println("Are you feeling grumpy today?");
            }
        }
    }
}

The above program is supposed to test the first command line argument and print different messages depending on whether it is the word "hello". But the problem is that it won't work. The program will output "Are you feeling grumpy today?" no matter what the first command line argument is. In this particular case the String "hello" is put in the string pool, while the String args[0] resides on the heap.
This means there are two objects representing the same literal, each with its own reference. Since == tests references, not actual equality, the comparison will yield false most of the time. This doesn't mean it will always do so. When you use == to test strings, what you are actually testing is whether two String objects are the same Java object. Unfortunately, that is not what string equality means in Java. In fact, the correct way to test strings is to use the equals(Object) method. For a pair of strings, we usually want to test if they consist of the same characters in the same order.

public class Hello2 {
    public static void main(String[] args) {
        if (args.length > 0) {
            if (args[0].equals("hello")) {
                System.out.println("Hello back to you");
            } else {
                System.out.println("Are you feeling grumpy today?");
            }
        }
    }
}

But it actually gets worse. The problem is that == will give the expected answer in some circumstances. For example:

public class Test1 {
    public static void main(String[] args) {
        String s1 = "hello";
        String s2 = "hello";
        if (s1 == s2) {
            System.out.println("same");
        } else {
            System.out.println("different");
        }
    }
}

Interestingly, this will print "same", even though we are testing the strings the wrong way. Why is that? Because the Java Language Specification (Section 3.10.5: String Literals) stipulates that any two string >>literals<< consisting of the same characters will actually be represented by the same Java object. Hence, the == test will give true for equal literals. (The string literals are "interned" and added to a shared "string pool" when your code is loaded, but that is actually an implementation detail.)

To add to the confusion, the Java Language Specification also stipulates that when you have a compile-time constant expression that concatenates two string literals, that is equivalent to a single literal.
Thus:

public class Test1 {
    public static void main(String[] args) {
        String s1 = "hello";
        String s2 = "hel" + "lo";
        String s3 = " mum";
        if (s1 == s2) {
            System.out.println("1. same");
        } else {
            System.out.println("1. different");
        }
        if (s1 + s3 == "hello mum") {
            System.out.println("2. same");
        } else {
            System.out.println("2. different");
        }
    }
}

This will output "1. same" and "2. different". In the first case, the + expression is evaluated at compile time and we compare one String object with itself. In the second case, it is evaluated at run time and we compare two different String objects.

In summary, using == to test strings in Java is almost always incorrect, but it is not guaranteed to give the wrong answer.

Some people recommend that you should apply various tests to a file before attempting to open it, either to provide better diagnostics or to avoid dealing with exceptions. For example, this method attempts to check if path corresponds to a readable file:

public static File getValidatedFile(String path) throws IOException {
    File f = new File(path);
    if (!f.exists()) throw new IOException("Error: not found: " + path);
    if (!f.isFile()) throw new IOException("Error: Is a directory: " + path);
    if (!f.canRead()) throw new IOException("Error: cannot read file: " + path);
    return f;
}

You might use the above method like this:

File f = null;
try {
    f = getValidatedFile("somefile");
} catch (IOException ex) {
    System.err.println(ex.getMessage());
    return;
}
try (InputStream is = new FileInputStream(f)) {
    // Read data etc.
}

The first problem is in the signature of FileInputStream(File): the compiler will still insist we catch IOException here, or further up the stack. The second problem is that the checks performed by getValidatedFile do not guarantee that the FileInputStream will succeed. Race conditions: another thread or a separate process could rename the file, delete the file, or remove read access after getValidatedFile returns.
That would lead to a "plain" IOException without the custom message. There are also edge cases not covered by those tests. For example, on a system with SELinux in "enforcing" mode, an attempt to read a file can fail despite canRead() returning true. The third problem is that the tests are inefficient. For example, the exists, isFile and canRead calls will each make a syscall to perform the required check. Another syscall is then made to open the file, which repeats the same checks behind the scenes. In short, methods like getValidatedFile are misguided. It is better to simply attempt to open the file and handle the exception:

try (InputStream is = new FileInputStream("somefile")) {
    // Read data etc.
} catch (IOException ex) {
    System.err.println("IO Error processing 'somefile': " + ex.getMessage());
    return;
}

If you wanted to distinguish IO errors thrown while opening and reading, you could use a nested try / catch. If you wanted to produce better diagnostics for open failures, you could perform the exists, isFile and canRead checks in the handler.

No Java variable represents an object.

String foo; // NOT AN OBJECT

Neither does any Java array contain objects.

String bar[] = new String[100]; // No member is an object.

If you mistakenly think of variables as objects, the actual behavior of the Java language will surprise you. For Java variables which have a primitive type (such as int or float), the variable holds a copy of the value. All copies of a primitive value are indistinguishable; i.e. there is only one int value for the number one. Primitive values are not objects and they do not behave like objects. For Java variables which have a reference type (either a class or an array type), the variable holds a reference. All copies of a reference are indistinguishable. References may point to objects, or they may be null, which means that they point to no object. However, references are not objects and they don't behave like objects.
Variables are not objects in either case, and they don't contain objects in either case. They may contain references to objects, but that is saying something different.

The examples that follow use this class, which represents a point in 2D space.

public final class MutableLocation {
    public int x;
    public int y;

    public MutableLocation(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public boolean equals(Object other) {
        if (!(other instanceof MutableLocation)) {
            return false;
        }
        MutableLocation that = (MutableLocation) other;
        return this.x == that.x && this.y == that.y;
    }
}

An instance of this class is an object that has two fields x and y of type int. We can have many instances of the MutableLocation class. Some will represent the same location in 2D space; i.e. the respective values of x and y will match. Others will represent different locations.

MutableLocation here = new MutableLocation(1, 2);
MutableLocation there = here;
MutableLocation elsewhere = new MutableLocation(1, 2);

In the above, we have declared three variables here, there and elsewhere that can hold references to MutableLocation objects. If you (incorrectly) think of these variables as being objects, then you are likely to misread the statements as saying:

- Copy the location "[1, 2]" to here
- Copy the location "[1, 2]" to there
- Copy the location "[1, 2]" to elsewhere

From that, you are likely to infer we have three independent objects in the three variables. In fact there are only two objects created by the above. The variables here and there actually refer to the same object. We can demonstrate this.
Assuming the variable declarations as above:

System.out.println("BEFORE: here.x is " + here.x + ", there.x is " + there.x +
        ", elsewhere.x is " + elsewhere.x);
here.x = 42;
System.out.println("AFTER: here.x is " + here.x + ", there.x is " + there.x +
        ", elsewhere.x is " + elsewhere.x);

This will output the following:

BEFORE: here.x is 1, there.x is 1, elsewhere.x is 1
AFTER: here.x is 42, there.x is 42, elsewhere.x is 1

We assigned a new value to here.x and it changed the value that we see via there.x. They are referring to the same object. But the value that we see via elsewhere.x has not changed, so elsewhere must refer to a different object. If a variable was an object, then the assignment here.x = 42 would not change there.x.

Applying the equality (==) operator to reference values tests if the values refer to the same object. It does not test whether two (different) objects are "equal" in the intuitive sense.

MutableLocation here = new MutableLocation(1, 2);
MutableLocation there = here;
MutableLocation elsewhere = new MutableLocation(1, 2);

if (here == there) {
    System.out.println("here is there");
}
if (here == elsewhere) {
    System.out.println("here is elsewhere");
}

This will print "here is there", but it won't print "here is elsewhere". (The references in here and elsewhere are for two distinct objects.)

By contrast, if we call the equals(Object) method that we implemented above, we are going to test if two MutableLocation instances have an equal location.

if (here.equals(there)) {
    System.out.println("here equals there");
}
if (here.equals(elsewhere)) {
    System.out.println("here equals elsewhere");
}

This will print both messages. In particular, here.equals(elsewhere) returns true because the semantic criteria we chose for equality of two MutableLocation objects has been satisfied.
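One caveat worth noting about equals overrides like the one above: MutableLocation overrides equals(Object) but not hashCode(), so hash-based collections will generally not treat "equal" locations as equal. The sketch below uses my own class names, not the article's, and shows the contract working when both methods are overridden:

```java
import java.util.HashSet;
import java.util.Set;

public class HashContractDemo {
    // A location type with BOTH equals and hashCode overridden,
    // as the equals/hashCode contract requires.
    static final class Location {
        final int x, y;
        Location(int x, int y) { this.x = x; this.y = y; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Location && ((Location) o).x == x && ((Location) o).y == y;
        }

        @Override
        public int hashCode() { return 31 * x + y; }
    }

    public static boolean setContainsEqualLocation() {
        Set<Location> set = new HashSet<>();
        set.add(new Location(1, 2));
        // A distinct but equal instance is found, because equal objects
        // hash to the same bucket and then compare equal.
        return set.contains(new Location(1, 2));
    }
}
```

If hashCode() were left out, the same lookup would almost always fail, because the two instances would inherit identity hash codes and land in different buckets.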
When you pass a reference value to a method, you're actually passing a reference to an object by value, which means that it is creating a copy of the object reference. As long as both object references are still pointing to the same object, you can modify that object from either reference, and this is what causes confusion for some. However, you are not passing an object by reference2. The distinction is that if the object reference copy is modified to point to another object, the original object reference will still point to the original object. void f(MutableLocation foo) { foo = new MutableLocation(3, 4); // Point local foo at a different object. } void g() { MutableLocation foo = MutableLocation(1, 2); f(foo); System.out.println("foo.x is " + foo.x); // Prints "foo.x is 1". } Neither are you passing a copy of the object. void f(MutableLocation foo) { foo.x = 42; } void g() { MutableLocation foo = new MutableLocation(0, 0); f(foo); System.out.println("foo.x is " + foo.x); // Prints "foo.x is 42" } 1 - In languages like Python and Ruby, the term "pass by sharing" is preferred for "pass by value" of an object / reference. 2 - The term "pass by reference" or "call by reference" has a very specific meaning in programming language terminology. In effect, it means that you pass the address of a variable or an array element, so that when the called method assigns a new value to the formal argument, it changes the value in the original variable. Java does not support this. For a more fulsome description of different mechanisms for passing parameters, please refer to. Occasionally. New Java programmers often forget, or fail to fully comprehend, that the Java String class is immutable. 
This leads to problems like the one in the following example:

public class Shout {
    public static void main(String[] args) {
        for (String s : args) {
            s.toUpperCase();
            System.out.print(s);
            System.out.print(" ");
        }
        System.out.println();
    }
}

The above code is supposed to print the command line arguments in upper case. Unfortunately, it does not work: the case of the arguments is not changed. The problem is this statement:

s.toUpperCase();

You might think that calling toUpperCase() is going to change s to an uppercase string. It doesn't. It can't! String objects are immutable. They cannot be changed. In reality, the toUpperCase() method returns a String object which is an uppercase version of the String that you call it on. This will probably be a new String object, but if s was already all uppercase, the result could be the existing string. So in order to use this method effectively, you need to use the object returned by the method call; for example:

s = s.toUpperCase();

In fact, the "strings never change" rule applies to all String methods. If you remember that, then you can avoid a whole category of beginner's mistakes.
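When you need to build up a string across many steps, the mutable companion class StringBuilder avoids creating an intermediate String for every operation. A sketch reworking the shouting loop (the class name is mine, not from the text):

```java
public class ShoutBuilder {
    // Accumulates the upper-cased words in a mutable buffer,
    // then takes one immutable String snapshot at the end.
    public static String shout(String[] words) {
        StringBuilder sb = new StringBuilder();
        for (String w : words) {
            if (sb.length() > 0) {
                sb.append(' ');
            }
            sb.append(w.toUpperCase()); // toUpperCase returns a NEW String; append stores it
        }
        return sb.toString();
    }
}
```

Note that the immutability rule still applies inside the loop: toUpperCase() returns a new String, and it is the returned value that gets appended.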
https://sodocumentation.net/java/topic/4388/common-java-pitfalls
JSP pages are converted to servlets by the JSP container in a process that is transparent to the developer. We'll author a simple JSP page that prints out the current date to demonstrate how this works:

<html>
<head>
<title>Date</title>
</head>
<body>
<h3>
The date is
<% out.println((new java.util.Date()).toString()); %>
</h3>
</body>
</html>

Notice how easy it is to combine HTML and Java code in a JSP page. It would be a lot easier to alter the way the page is presented than it would be if the same page were written as a servlet. Save this JSP page as %CATALINA_HOME%/webapps/jsp/date.jsp, start Tomcat, and navigate to:

Quite a lot of things happen behind the scenes when a JSP page is deployed in a web container and is first served to a client request. Deploying and serving a JSP page involves two distinct phases:

Translation phase - In this phase the JSP page is transformed into a Java servlet and then compiled. This phase occurs only once for each JSP page and must be executed before the JSP page is served. The generated servlet class is known as a JSP page implementation class. The translation phase results in a delay when a JSP page is requested for the first time. To avoid this delay, JSP pages can be precompiled before they are deployed using tools that perform the translation phase when the server starts up.

Execution phase - This phase (also known as the request processing phase) is executed each time the JSP page is served by the web container. Requests for the JSP page result in the execution of the JSP page implementation class.

The actual form of the source code for the JSP page implementation class depends on the web container. Normally, the class implements javax.servlet.jsp.HttpJspPage or javax.servlet.jsp.JspPage, both of which extend javax.servlet.Servlet. The following code (tidied up a bit) is the source for the page implementation class of date.jsp as generated by Tomcat 4, which stores these files in %CATALINA_HOME%/webapps/work.
The class extends HttpJspBase, which is a class provided by Tomcat that implements the HttpJspPage interface.

package org.apache.jsp;

import javax.servlet.*;
import javax.servlet.http.*;
import javax.servlet.jsp.*;
import org.apache.jasper.runtime.*;

public class date$jsp extends HttpJspBase {

    static {}

    public date$jsp() {}

    private static boolean _jspx_inited = false;

When a request comes in for the JSP page, the container creates an instance of the page and calls the jspInit() method on the page implementation object. It's worth noting that it is up to the container's implementation whether a new instance of the page is used each time, or whether a single instance is serviced by multiple threads (which is the most common scenario):

    public final void _jspx_init() throws org.apache.jasper.runtime.JspException {}

The generated class overrides the _jspService() method. This method is called each time the JSP is accessed by a client browser. All the Java code and template text we have in our JSP pages normally go into this method.
It is executed each time the JSP page is served by the web container:

    public void _jspService(HttpServletRequest request, HttpServletResponse response)
            throws java.io.IOException, ServletException {

        JspFactory _jspxFactory = null;
        PageContext pageContext = null;
        HttpSession session = null;
        ServletContext application = null;
        ServletConfig config = null;
        JspWriter out = null;
        Object page = this;
        String _value = null;

        try {
            if (_jspx_inited == false) {
                synchronized (this) {
                    if (_jspx_inited == false) {
                        _jspx_init();
                        _jspx_inited = true;
                    }
                }
            }
            _jspxFactory = JspFactory.getDefaultFactory();
            response.setContentType("text/html;charset=ISO-8859-1");
            pageContext = _jspxFactory.getPageContext(this, request, response, "", true, 8192, true);

Here we initialize the implicit variables (which we cover shortly):

            application = pageContext.getServletContext();
            config = pageContext.getServletConfig();
            session = pageContext.getSession();
            out = pageContext.getOut();

Write the template text stored in the text file to the output stream:

            // HTML
            // begin [file="/date.jsp";from=(0,0);to=(7,6)]
            out.write("<html>\r\n<head>\r\n<title>Date</title>\r\n" +
                    "</head>\r\n<body>\r\n<h3>\r\n" +
                    "The date is\r\n");
            // end
            // begin [file="/date.jsp";from=(7,8);to=(7,57)]
            out.println((new java.util.Date()).toString());
            // end
            // HTML
            // begin [file="/date.jsp";from=(7,59);to=(10,7)]
            out.write("\r\n </h3>\r\n </body>\r\n</html>");
            // end
        } catch (Throwable t) {
            if (out != null && out.getBufferSize() != 0) {
                out.clearBuffer();
            }
            if (pageContext != null) {
                pageContext.handlePageException(t);
            }
        } finally {
            if (_jspxFactory != null) {
                _jspxFactory.releasePageContext(pageContext);
            }
        }
    }
}

Now that we've covered the basic objectives and working principles of JSP pages, we'll take a closer look at the various features of JSP that enable the building of powerful enterprise-class presentation components.
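The one-time initialization followed by per-request service calls can be modelled in plain Java. The toy "container" below mirrors the jspInit()/_jspService() split described above; it is a simplified illustration of the lifecycle, not the real javax.servlet API:

```java
public class LifecycleModel {
    // Simplified stand-in for the page contract.
    interface Page {
        void init();                 // called once, like jspInit()
        String service(String req);  // called per request, like _jspService()
    }

    static class DatePage implements Page {
        private boolean initialized;
        public void init() { initialized = true; }
        public String service(String req) {
            if (!initialized) throw new IllegalStateException("init() was never called");
            return "served:" + req;
        }
    }

    // The "container": initializes the page on the first request only,
    // then reuses the same instance for every subsequent request.
    static class Container {
        private final Page page = new DatePage();
        private boolean inited;
        String handle(String req) {
            if (!inited) {            // one-time initialization
                page.init();
                inited = true;
            }
            return page.service(req); // execution phase, every request
        }
    }
}
```

A real container also guards the initialization with synchronization, as the generated _jspService code above does with its double-checked block.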
https://flylib.com/books/en/1.123.1.65/1/
08 November 2010 22:10 [Source: ICIS news]

HOUSTON (ICIS)--Spot margins for ethane-sourced ethylene were assessed at 17.98 cents/lb last week, up from 13.49 cents/lb in the week ended 29 October.

The key driver in the margin increase was a 6.88 cent/lb increase, or 17%, in the weekly spot ethylene range, compared with a 2.40 cent/lb increase in ethane costs.

Spot margins for naphtha-based ethylene slipped by less than 1 cent to 4.87 cents/lb, as naphtha costs rose by almost 8%.

Ethane prices have risen in nine of the past 12 weeks, while naphtha prices have risen in eight of the past 12 weeks.

($1 = €0.71)
http://www.icis.com/Articles/2010/11/08/9408379/us-ethylene-margins-rise-33-as-surging-spot-values-outpace-ethane.html
I can see my playlists, but not the tracks in them

Spotlight shows all of my playlists, but it won't show me the tracks inside of them. I kept adding xbmc.log statements until I found the culprit. Simply commenting out two lines makes it work for me (not sure if it breaks anything else):

    def create_track_list_items(self, tracks, page = Page()):
        xbmc.log("tracks are: %s" % ",".join(t.track for t in tracks))
        if len(tracks) > 0:
            indexes = range(0, len(tracks))
            xbmc.log("indexes are: %s" % ",".join(str(i) for i in indexes))
            #if not page.is_infinite():
            #    indexes = page.current_range()
            #    xbmc.log("indexes are: %s" % ",".join(str(i) for i in indexes))
            for index in indexes:
                track = tracks[index - page.start]
                xbmc.log("track is: %s" % track.track)
                path, item = self.list_item_factory.create_list_item(track, index + 1)
                xbmc.log("path is: %s" % path)
                xbmc.log("item is: %s" % item)
                self.add_context_menu(track, path, item)
                xbmcplugin.addDirectoryItem(handle=self.addon_handle, url = path, listitem=item)
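For what it's worth, the subtraction `index - page.start` only makes sense if `tracks` holds just the current page's items while `current_range()` yields absolute playlist indexes. A small standalone sketch of that mapping (the `Page` API here is assumed from the snippet, not taken from the plugin source):

```python
# Toy model of the paging logic in the snippet above. The real Page class
# comes from the plugin; .start, .is_infinite() and .current_range() are
# assumed from how the snippet uses them.
class Page:
    def __init__(self, start=0, size=None):
        self.start = start
        self.size = size          # None means "infinite" (no paging)

    def is_infinite(self):
        return self.size is None

    def current_range(self):
        # absolute indexes into the whole playlist for this page
        return range(self.start, self.start + self.size)

def page_to_list_positions(tracks, page):
    """Map the page's absolute indexes to positions in `tracks`,
    mirroring `tracks[index - page.start]` from the plugin code."""
    if page.is_infinite():
        indexes = range(len(tracks))
    else:
        indexes = page.current_range()
    return [i - page.start for i in indexes]
```

If `current_range()` ever disagrees with `page.start` (for example, by returning page-relative indexes), the subtraction produces wrong positions, which would be consistent with the reporter's workaround of using `range(0, len(tracks))` instead.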
https://bitbucket.org/re/spotlight/issues/58/i-can-see-my-playlists-but-not-the-tracks
Unit Testing 101: Creating Flexible Test Code

Introduction

When going for test-driven development, we all know that the unit test code becomes a critical part of the code base. On top of this, it often ends up representing a bigger percentage of the overall code base than the actual business code. It is because of this that it is really important to have a solid unit test source code structure: flexible, intuitive, and maintainable enough that as the number of unit tests grows, it does not become an uncontrollable mess. Always remember: unit testing is not done because it is nice to have, or because test-driven development is trendy and cool; it is done because it is needed, and because it is critical to easing the development cycle and ensuring product quality.

Academic vs. Real Life

Most articles we find on the internet about how to build unit tests use small or reduced scopes in order to keep the concepts easy to digest and understand. For example, they demonstrate how to create one or two unit tests for a simple application feature, like a calculator's ability to add two numbers. The problem arises when we sit down at our computers at work the next day and try to apply our newly acquired knowledge to a more realistic scope: we realize that, in real life, we would have thousands of unit tests for a thousand features, not one or two like in the articles we learned from. So, the question is: what do we do when we go beyond examples and get into testing a real-life application?

The Scenario

Before we continue, let's set up the scenario mentioned previously. Assume we are working on an online shopping application. We are about to create a REST service that allows web clients to handle orders from customers. That service will eventually feature several methods to place new orders, cancel orders, track orders, etc. We'll need to create a bunch of unit tests for that service and all of its methods.
A Flexible Structure

Now we have a scenario. Since the orders service will eventually feature a bunch of methods, unit tests must cover all of those methods and all of their possible success and failure scenarios. This could result in a couple dozen test methods in no time. What do we do? Do we go down the usual path and put all of these test methods in a single test class, since they test a single component?

Short answer: no, you don't do that. We are starting to see that grouping tests per component is not enough to keep test classes small, so we need to break the grouping down one more level. In the case of the orders service, our next grouping level would be per component feature. We now have two levels of unit test grouping:

- Component tests: Top-level group of unit test suites that target a single component. In this case, the orders service.
- Component feature tests: Sub-level that groups the test suites targeting a single component feature. In this case, the orders service method for placing orders.

Now, how does this look in code?

The Unit Test Code

In C#, we can create a single class composed of multiple source files. This is done by using partial classes. So, we will start by defining a new partial class that will represent a part of the component test suite class.

    [TestFixture]
    partial class OnOrdersService
    {
        // More code here in a second...
    }

The component test suite class uses the following naming convention:

    public [partial] class On[ComponentName] { }

So, when reading the name, we have a clear idea of the target component to be tested by the suite. Right after the component test suite class declaration, we start defining the component feature test suite class. This will be a class nested inside the component test suite class.

    [TestFixture]
    partial class OnOrdersService
    {
        [TestFixture]
        public class WhenPlacingOrders
        {
            // Component feature test methods to be added here soon...
        }
    }

The component feature test suite class uses the following naming convention:

    public class When[ActionPerformed][FeatureName] { }

Of course, the next thing to do is add some test methods. Let's expand our example:

    [TestFixture]
    partial class OnOrdersService
    {
        [TestFixture]
        public class WhenPlacingOrders
        {
            [Test]
            public void ShouldReturnOrderIdOnValidOrderPlaced()
            {
                // Test code for a success scenario...
            }

            [Test]
            public void ShouldThrowErrorOnInvalidOrderPlaced()
            {
                // Test code for a failure scenario...
            }
        }
    }

Success scenario test methods use the following convention:

    public void Should[ExpectedOutcome]On[ScenarioCondition]();

...and failure scenario test methods use:

    public void Should[ExpectedFailure]On[FailureConditions]();

Before you stop reading and yell in anger at those funky class names, let's try to read textually down our hierarchy: On orders service... when placing orders... should return order ID on valid order placed.

So, by now you should be getting a clearer picture of what we are trying to achieve here. By manipulating class structures (partial and nested classes), we can organize our tests by component and then by component feature, while keeping an intuitive and readable structure. This will allow us to keep test classes small and easily maintainable. If you don't trust me on the previous example, this is how the Visual Studio test runner looks when using our handy class structure.

Now, let's say we would like to add tests to cover the service method that cancels orders. In this case, we would create a new partial class using the same OnOrdersService class name. We would then create a new nested class called WhenCancellingOrders. Now we have a component test suite class split across two files. Each file will contain a single component feature test suite. I think it is time we discussed what naming convention these files should follow. Our test runner looks pretty now, but our file structure, which has nothing to do with the runner, has to look pretty as well.
We will use the following convention:

    On[ComponentName]When[ActionPerformed][FeatureName].cs

For example:

    OnOrdersServiceWhenPlacingOrders.cs
    OnOrdersServiceWhenCancellingOrders.cs

There you go. Now it looks good enough. We could go further and play a little with the namespaces, but that is out of the scope of this article.

Conclusion

That would be it. We now have a consistent test code structure that can grow gradually as requirements and features are added. In future articles, we will go through unit test method structuring and test contexts in order to delegate test context setup code, such as mocking objects.

Opinions expressed by DZone contributors are their own.
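Following the article's own conventions, the second file (OnOrdersServiceWhenCancellingOrders.cs) would contain something along these lines; the test method names are illustrative, not taken from the original article:

```csharp
// OnOrdersServiceWhenCancellingOrders.cs (hypothetical second part of the suite)
using NUnit.Framework;

[TestFixture]
partial class OnOrdersService
{
    [TestFixture]
    public class WhenCancellingOrders
    {
        [Test]
        public void ShouldMarkOrderAsCancelledOnValidCancellation()
        {
            // Test code for a success scenario...
        }

        [Test]
        public void ShouldThrowErrorOnUnknownOrderId()
        {
            // Test code for a failure scenario...
        }
    }
}
```

Both files compile into the single OnOrdersService class, so the test runner shows one component node with two feature groups under it.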
https://dzone.com/articles/unit-testing-101-creating
SAP ABAP - Data Elements

Step 1 − Go to transaction SE11.

Step 2 − Select the radio button for Data type in the initial screen of the ABAP Dictionary, and enter the name of the data element as shown below.

Step 3 − Click the CREATE button. You may create data elements under the customer namespaces; the name of the object always starts with 'Z' or 'Y'.

Step 4 − Check the Data element radio button on the CREATE TYPE pop-up that appears with three radio buttons.

Step 5 − Click the green checkmark icon. You are directed to the maintenance screen of the data element.

Step 6 − Enter the description in the short text field of the maintenance screen of the data element. In this case, it is "Customer Data Element".

Note − You cannot enter any other attribute until you have entered this attribute.

Step 7 − Assign the data element a type. You can create an elementary data element by checking Elementary Type, or a reference data element by checking Reference Type. You can assign a data element to a Domain or a Predefined Type within Elementary Type, and to a Name of Reference Type or a Reference to Predefined Type within Reference Type.

Step 8 − Enter the fields for short text, medium text, long text, and heading in the Field Label tab. You can press Enter and the length is automatically generated for these labels.

Step 9 − Save your changes. The Create Object Directory Entry pop-up appears and asks for a package. You may enter the package name in which you are working. If you do not have a package, you can create one in the Object Navigator, or you can save your data element using the Local Object button.

Step 10 − Activate your data element. Click the Activate icon (matchstick icon) or press CTRL + F3 to activate the data element. A pop-up window appears, listing the two currently inactive objects, as shown in the following screenshot.

Step 11 − At this point, the top entry, labeled 'DTEL' with the name Z_CUST, is to be activated.
As this entry is highlighted, click the green tick button. The window disappears and the status bar displays the message 'Object activated'. If error messages or warnings occur when you activate the data element, the activation log is displayed automatically. The activation log shows information about the activation flow. You can also call the activation log with Utilities(M) → Activation log.
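Once activated, the data element can be used as a type in ABAP programs. A minimal sketch (assuming the Z_CUST element created above is character-like):

```abap
REPORT z_use_data_element.

* Declare a variable typed by the custom data element Z_CUST.
* The field labels maintained in Step 8 are picked up automatically
* wherever this field appears on screens and in list output.
DATA lv_customer TYPE z_cust.

lv_customer = 'ACME'.
WRITE: / 'Customer:', lv_customer.
```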
https://www.tutorialspoint.com/sap_abap/sap_abap_data_elements.htm
package soot.grimp;

import soot.*;
import soot.jimple.*;

/** Provides static helper methods to indicate if parenthesization is
 * required.
 *
 * If your sub-expression has strictly higher precedence than you,
 * then no brackets are required: 2 + (4 * 5) = 2 + 4 * 5 is
 * unambiguous, because * has precedence 800 and + has precedence 700.
 *
 * If your subexpression has lower precedence than you, then
 * brackets are required; otherwise you will bind to your
 * grandchild instead of the subexpression. 2 * (4 + 5) without
 * brackets would mean (2 * 4) + 5.
 *
 * For a binary operation, if your left sub-expression has the same
 * precedence as you, no brackets are needed, since binary operations
 * are all left-associative. If your right sub-expression has the
 * same precedence as you, then brackets are needed to reproduce the
 * parse tree (otherwise, parsing will give e.g. (2 + 4) + 5 instead
 * of the 2 + (4 + 5) that you had to start with.) This is OK for
 * integer addition and subtraction, but not OK for floating point
 * multiplication. To be safe, let's put the brackets on.
 *
 * For the high-precedence operations, I've assigned precedences of
 * 950 to field reads and invoke expressions (.), as well as array reads ([]).
 * I've assigned 850 to cast, newarray and newinvoke.
 *
 * The Dava DCmp?Expr precedences look fishy to me; I've assigned DLengthExpr
 * a precedence of 950, because it looks like it should parse like a field
 * read to me.
 *
 * Basically, the only time I can see that brackets should be required
 * seems to occur when a cast or a newarray occurs as a subexpression of
 * an invoke or field read; hence 850 and 950.
 *
 * -PL
 */
public class PrecedenceTest
{
    public static boolean needsBrackets( ValueBox subExprBox, Value expr ) {
        Value sub = subExprBox.getValue();
        if( !(sub instanceof Precedence) ) return false;
        Precedence subP = (Precedence) sub;
        Precedence exprP = (Precedence) expr;
        return subP.getPrecedence() < exprP.getPrecedence();
    }

    public static boolean needsBracketsRight( ValueBox subExprBox, Value expr ) {
        Value sub = subExprBox.getValue();
        if( !(sub instanceof Precedence) ) return false;
        Precedence subP = (Precedence) sub;
        Precedence exprP = (Precedence) expr;
        return subP.getPrecedence() <= exprP.getPrecedence();
    }
}
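To see the bracketing rules from the class comment in isolation, here is a toy illustration outside of Soot. The precedence values follow the comment above (+ and - are 700, * and / are 800); everything else here is made up for the demo:

```java
// Toy illustration of PrecedenceTest's bracketing rules (not part of Soot).
public class BracketDemo {

    // precedence values loosely following the comment: + - = 700, * / = 800
    static int prec(char op) {
        return (op == '+' || op == '-') ? 700 : 800;
    }

    // A left child needs brackets only if its precedence is strictly lower.
    static boolean needsBracketsLeft(char child, char parent) {
        return prec(child) < prec(parent);
    }

    // A right child needs brackets on lower OR equal precedence, because
    // binary operators are printed as left-associative.
    static boolean needsBracketsRight(char child, char parent) {
        return prec(child) <= prec(parent);
    }

    public static void main(String[] args) {
        // (2 + 3) * 4 : the + subtree under * keeps its brackets
        System.out.println(needsBracketsLeft('+', '*'));
        // 2 + (3 + 4) : equal precedence on the right keeps its brackets
        System.out.println(needsBracketsRight('+', '+'));
        // 2 * 3 + 4 : * under + on the left needs no brackets
        System.out.println(needsBracketsLeft('*', '+'));
    }
}
```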
http://kickjava.com/src/soot/grimp/PrecedenceTest.java.htm
Hey there,

So I've been fighting with this kind of problem for a while now. It seems to come and disappear with no clear reason. The arrays in my game (which I edit in the editor) sometimes serialize and sometimes not. They are always public; sometimes Unity or C# native types, and sometimes custom types of mine. Here is a little example:

    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;
    using System.Linq;

    public class Mapbuilder : BaseClass {
        public int[] stateProbs;
    }

I edit this script here:

    using UnityEngine;
    using UnityEditor;
    using System.Collections;

    public class Window_MapBuilder : EditorWindow {

        private int width { get { return Screen.width; } }
        private int height { get { return Screen.height - 22; } }

        private Mapbuilder builder;

        [MenuItem ("Custom/MapBuilder")]
        private static void Init () {
            EditorWindow.GetWindow<Window_MapBuilder> ().Show ();
        }

        void Update () {
            Repaint ();
        }

        void OnGUI () {
            Undo.RecordObject ((Object)this, "Edit Builder Window");
            if (builder == null) {
                builder = FindObjectOfType<Mapbuilder> ();
            }
            minSize = new Vector2 (500, 400);
            if (builder.stateProbs == null || builder.stateProbs.Length != 5)
                builder.stateProbs = new int[5];
            // sy and w are defined elsewhere in the full script
            for (int i = 0; i < 5; i++) {
                builder.stateProbs [i] = EditorGUI.IntSlider (new Rect (10, sy, w - 40, 20),
                    MapObjects.Tree.TreeObject.growStates [i] + ":", builder.stateProbs [i], 0, 100);
                sy += 30;
            }
            EditorUtility.SetDirty (builder);
        }
    }

I have cut down the scripts to only the parts that seem important. If you want the full ones, just tell me.

Now: I can edit the array in my window, and it also saves for when I hit Play. It does, however, reset when I close Unity. A similar thing happens with an array of a custom class I made; only that one does not get reset completely, but rather gets reverted to a state it had two days ago. As you can see, I have already tried things like SetDirty. Now I'm fresh out of ideas. Do you have any?
:D Help would be appreciated :D

EDIT: I should mention that BaseClass, which Mapbuilder inherits from, inherits from MonoBehaviour, so Mapbuilder is basically a MonoBehaviour.

Answer by Adam-Mechtley · Mar 19, 2017 at 09:14 AM

I'm not sure why you call Undo.RecordObject() passing the EditorWindow itself, but you should call it passing in the builder right before you modify the properties on it. For example:

    if (builder.stateProbs == null || builder.stateProbs.Length != 5) {
        Undo.RecordObject(builder, "Set State Probs");
        builder.stateProbs = new int[5];
    }
    for (int i = 0; i < 5; i++) {
        EditorGUI.BeginChangeCheck();
        int stateProb = EditorGUI.IntSlider (new Rect (10, sy, w - 40, 20),
            MapObjects.Tree.TreeObject.growStates [i] + ":", builder.stateProbs [i], 0, 100);
        sy += 30;
        if (EditorGUI.EndChangeCheck()) {
            Undo.RecordObject(builder, "Set State Probs");
            builder.stateProbs [i] = stateProb;
        }
    }
    EditorUtility.SetDirty (builder);

That said, I generally advise people to use SerializedObject/SerializedProperty for editing a serialized field on something, as it automatically handles Undo/Redo/dirtying/etc. It would look something like this in your case:

    // ideally cache this and store it in a field on the EditorWindow
    // when builder is first found
    var serializedBuilder = new SerializedObject(builder);
    serializedBuilder.Update();
    var stateProbs = serializedBuilder.FindProperty("stateProbs");
    stateProbs.arraySize = 5;
    serializedBuilder.ApplyModifiedProperties();
    for (int i = 0; i < 5; i++) {
        var element = stateProbs.GetArrayElementAtIndex(i);
        EditorGUI.IntSlider (new Rect (10, sy, w - 40, 20), element, 0, 100,
            new GUIContent (MapObjects.Tree.TreeObject.growStates [i] + ":"));
        sy += 30;
    }
    serializedBuilder.ApplyModifiedProperties();

Oh my god, I cannot believe it was that simple. Thanks a bunch, my friend. I now know why it did serialize in older versions of the game. At that time, I had a custom inspector for my Mapbuilder, and thus passed IT into Undo.RecordObject instead of the window. I'm so stupid XD. Again, thanks a lot.
https://answers.unity.com/questions/1328012/arrays-not-serializing.html