I want the while at the end to say "while JOptionPane says yes", meaning that when the user is asked if they want to play again, it will loop and play again if they click Yes. I have looked and I can't find how to do it. I also used an int called selection, but it says I need a boolean value, which kind of makes sense, but I don't know how. Thanks.

Code:

package lab3;

// Import
import java.util.Scanner;
import javax.swing.JOptionPane;

// Start
public class Problem4 {
    public static void main(String[] args) {
        // It's time to play the lottery!
        int selection;
        do {
            int lottery = (int) (Math.random() * 1000);

            // Prompt the user to enter a guess
            Scanner input = new Scanner(System.in);
            System.out.println("Enter your lottery pick (three digits): ");
            int guess = input.nextInt();

            // Get digits from lottery
            int lotteryDigit1 = lottery / 100;
            int lotteryDigit2 = lottery / 10 % 10;
            int lotteryDigit3 = lottery % 10;

            // Get digits from guess
            int guessDigit1 = guess / 100;
            int guessDigit2 = guess / 10 % 10;
            int guessDigit3 = guess % 10;

            System.out.println("The lottery number is " + lottery);

            // Check the guess
            if (guess == lottery)
                System.out.println("Exact match: You win $10,000!");
            else if (guessDigit2 == lotteryDigit1 && guessDigit1 == lotteryDigit2)
                System.out.println("Match all digits: You win $3,000!");
            else
                System.out.println("Sorry, no match. Try again!");

            selection = JOptionPane.showConfirmDialog(null,
                    "Would you like to run another?", "Confirmation",
                    JOptionPane.YES_NO_OPTION);
        } while (selection = Yes_Option); // this line does not compile
    }
}
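For reference, JOptionPane.showConfirmDialog returns an int, and the do-while condition needs a boolean, which you get by comparing that int against the JOptionPane.YES_OPTION constant with == (not =). The sketch below isolates just that comparison so it runs without opening a dialog; the wantsReplay helper is made up for illustration and is not part of the original code:

```java
import javax.swing.JOptionPane;

public class ReplayLoop {
    // Comparing the dialog's int result with == against
    // JOptionPane.YES_OPTION yields exactly the boolean
    // that a do-while condition needs.
    static boolean wantsReplay(int dialogResult) {
        return dialogResult == JOptionPane.YES_OPTION;
    }

    public static void main(String[] args) {
        // In the real game the value would come from:
        //   selection = JOptionPane.showConfirmDialog(null,
        //           "Would you like to run another?", "Confirmation",
        //           JOptionPane.YES_NO_OPTION);
        // Here we just feed in the two possible return values.
        System.out.println(wantsReplay(JOptionPane.YES_OPTION)); // prints true  -> loop again
        System.out.println(wantsReplay(JOptionPane.NO_OPTION));  // prints false -> exit loop
    }
}
```

Applied to the original program, the loop condition becomes `} while (selection == JOptionPane.YES_OPTION);` — note the double equals, and that the constant is spelled YES_OPTION, so selection can stay an int.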
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/5672-while-joptionpane-equal-yes-printingthethread.html
Date: Mon, 28 Aug 2000 12:02:33 -0700
From: FreeBSD Security Advisories <security-advisories@FREEBSD.ORG>
Subject: FreeBSD Security Advisory: FreeBSD-SA-00:41.elf
To: BUGTRAQ@SECURITYFOCUS.COM

-----BEGIN PGP SIGNED MESSAGE-----

=============================================================================
FreeBSD-SA-00:41                                           Security Advisory
                                                                FreeBSD, Inc.

Topic:          Malformed ELF images can cause a system hang
Category:       core
Module:         kernel
Announced:      2000-08-28
Credits:        Adam McDougall <bsdx@looksharp.net>
Affects:        FreeBSD 3.x, 4.x and 5.x prior to the correction date
Corrected:      2000-07-25 (FreeBSD 5.0-CURRENT)
                2000-07-23 (FreeBSD 4.0-STABLE)
FreeBSD only:   Yes

I.   Background

The ELF binary format is used for binary executable programs on modern
versions of FreeBSD.

II.  Problem Description

The ELF image activator did not perform sufficient sanity checks on the
ELF image header, and when confronted with an invalid or truncated header
it suffered a sign overflow bug which caused the CPU to enter into a very
long loop in the kernel. The result of this is that the system will appear
to lock up for an extended period of time before control returns. This bug
can be exploited by unprivileged local users.

This vulnerability is not present in FreeBSD 4.1-RELEASE, although
3.5-RELEASE and 3.5.1-RELEASE are vulnerable.

III. Impact

Local users can cause the system to lock up for an extended period of time
(15 minutes or more, depending on CPU speed), during which time the system
is completely unresponsive to local and remote users.

IV.  Workaround

None available.

V.   Solution

One of the following:

1) Upgrade your vulnerable FreeBSD system to 4.1-RELEASE, 4.1-STABLE or
5.0-CURRENT after the respective correction dates.

FreeBSD 3.5-STABLE has not yet been fixed due to logistical difficulties
(and the patch below does not apply cleanly). Consider upgrading to
4.1-RELEASE if this is a concern - this advisory will be reissued once
the patch has been applied to the 3.x branch.
2) Apply the patch below and recompile your kernel.

Either save this advisory to a file, or download the patch and detached
PGP signature from the following locations, and verify the signature
using your PGP utility.

# cd /usr/src/sys/kern
# patch -p < /path/to/patch_or_advisory

[ Recompile your kernel as described in and reboot the system ]

--- imgact_elf.c	2000/04/30 18:51:39	1.75
+++ imgact_elf.c	2000/07/23 22:19:49	1.78
@@ -190,6 +190,21 @@
 	object = vp->v_object;
 	error = 0;
 
+	/*
+	 * It's necessary to fail if the filsz + offset taken from the
+	 * header is greater than the actual file pager object's size.
+	 * If we were to allow this, then the vm_map_find() below would
+	 * walk right off the end of the file object and into the ether.
+	 *
+	 * While I'm here, might as well check for something else that
+	 * is invalid: filsz cannot be greater than memsz.
+	 */
+	if ((off_t)filsz + offset > object->un_pager.vnp.vnp_size ||
+	    filsz > memsz) {
+		uprintf("elf_load_section: truncated ELF file\n");
+		return (ENOEXEC);
+	}
+
 	map_addr = trunc_page((vm_offset_t)vmaddr);
 	file_addr = trunc_page(offset);
@@ -341,6 +356,12 @@
 	}
 	error = exec_map_first_page(imgp);
+	/*
+	 * Also make certain that the interpreter stays the same, so set
+	 * its VTEXT flag, too.
+	 */
+	if (error == 0)
+		nd.ni_vp->v_flag |= VTEXT;
 	VOP_UNLOCK(nd.ni_vp, 0, p);
 	if (error)
 		goto fail;
@@ -449,6 +470,17 @@
 	/*
 	 * From this point on, we may have resources that need to be freed.
 	 */
+
+	/*
+	 * Yeah, I'm paranoid. There is every reason in the world to get
+	 * VTEXT now since from here on out, there are places we can have
+	 * a context switch. Better safe than sorry; I really don't want
+	 * the file to change while it's being loaded.
+	 */
+	simple_lock(&imgp->vp->v_interlock);
+	imgp->vp->v_flag |= VTEXT;
+	simple_unlock(&imgp->vp->v_interlock);
+
 	if ((error = exec_extract_strings(imgp)) != 0)
 		goto fail;
@@ -610,9 +642,6 @@
 	imgp->auxargs = elf_auxargs;
 	imgp->interpreted = 0;
 
-	/* don't allow modifying the file while we run it */
-	imgp->vp->v_flag |= VTEXT;
-
 fail:
 	return error;
 }

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2

iQCVAwUBOaq1hlUuHi5z0oilAQGpvgQAoaeqjoU1QppgQ+yXF7KOL6EfTQ9mrdEe
zKQ6vU//hc1ejKx9C4zmQybflQIpkHS2TMNAfXuvFG74hvETwa8cpVqolJU29CCf
FKlGTCAGCSzosWrndBuvakKqjeVvvQR4JydVhkO04neVEfbUXkich/2PT+3h3dKW
GuW3coG8nYE=
=2w2A
-----END PGP SIGNATURE-----
http://lwn.net/2000/0907/a/fb-kernel.php3
Mission briefing

In this article, we focus on the physics engine. We will build a basketball court where the player needs to shoot the ball into the hoop. A player shoots the ball by keeping the mouse button pressed and then releasing it. The direction is visualized by an arrow, and the power is proportional to the duration of the mouse press-and-hold event. There are obstacles present between the ball and the hoop. The player either avoids the obstacles or makes use of them to put the ball into the hoop. Finally, we use CreateJS to render the physics world onto the canvas.

You may visit to play a dummy game in order to have a better understanding of what we will be building throughout this article. The following screenshot shows a player shooting the ball towards the hoop, with a power indicator:

Why is it awesome?

When we build games without a physics engine, we create our own game loop and reposition each game object in every frame. For instance, if we move a character to the right, we manage the position and movement speed ourselves. Imagine that we are coding ball-throwing logic now. We need to keep track of several variables. We have to calculate the x and y velocity based on the time and force applied. We also need to take gravity into account, not to mention the different angles and materials we need to consider while calculating the bounce between two objects.

Now, let's think of a physical world. We just define how objects interact, and all the collisions happen automatically. It is similar to a real-world game: we focus on defining the rules, and the world handles everything else. Take basketball as an example. We define the height of the hoop, the size of the ball, and the distance of the three-point line. Then, the players just need to throw the ball. We never worry about the flying parabola or the bouncing on the board; the laws of physics take care of them.
This is exactly what happens in the simulated physics world; it allows us to apply physics properties to game objects. The objects are affected by gravity, and we can apply forces to them, making them collide with each other. With the help of the physics engine, we can focus on defining the gameplay rules and the relationships between the objects. Without the need to worry about collision and movement, we save time that we can use to explore different gameplay ideas, and then elaborate and develop the most promising prototypes further.

We define the position of the hoop and the ball. Then, we apply an impulse force to the ball in the x and y dimensions. The engine handles everything in between. Finally, we get an event trigger if the ball passes through the hoop. It is worth noting that some blockbuster games are also made with a physics engine. This includes games such as Angry Birds, Cut the Rope, and Where's My Water.

Your Hotshot objectives

We will divide the article into the following eight tasks:

- Creating the simulated physics world
- Shooting a ball
- Handling collision detection
- Defining levels
- Launching a bar with power
- Adding a cross obstacle
- Visualizing graphics
- Choosing a level

Mission checklist

We create a project folder that contains the index.html file and the scripts and styles folders. Inside the scripts folder, we create three files: physics.js, view.js, and game.js.

The physics.js file is the most important file in this article. It contains all the logic related to the physics world, including creating level objects, spawning dynamic balls, applying force to the objects, and handling collisions. The view.js file is a helper for the view logic, including the scoreboard and the ball-shooting indicator. The game.js file, as usual, is the entry point of the game. It also manages the levels and coordinates between the physics world and the view.

Preparing the vendor files

We also need a vendors folder that holds the third-party libraries.
This includes the CreateJS suite (EaselJS, MovieClip, TweenJS, PreloadJS) and Box2D. Box2D is the physics engine that we are going to use in this article. We need to download the engine code from. It is a port from ActionScript to JavaScript. We need the Box2dWeb-2.1.a.3.min.js file, or its non-minified version for debugging. We put this file in the vendors folder.

Box2D is an open source physics-simulation engine that was created by Erin Catto. It was originally written in C++. Later, it was ported to ActionScript because of the popularity of Flash games, and then it was ported to JavaScript. There are different versions of the ports. The one we are using is called Box2DWeb, which was ported from ActionScript's Box2D 2.1. Using an old version may cause issues. Also, it will be difficult to find help online, because most developers have switched to 2.1.

Creating a simulated physics world

Our first task is to create a simulated physics world and put two objects inside it.

Prepare for lift off

In the index.html file, the core part is the game section. We have two canvas elements in this game. The debug-canvas element is for the Box2D engine and canvas is for the CreateJS library:

<section id="game" class="row">
  <canvas id="debug-canvas" width="480" height="360"></canvas>
  <canvas id="canvas" width="480" height="360"></canvas>
</section>

We prepare a dedicated file, physics.js, for all the physics-related logic, with the following code:

;(function(game, cjs, b2d){
  // code here later
}).call(this, game, createjs, Box2D);

Engage thrusters

The following steps create the physics world as the foundation of the game:

The Box2D classes are put in different modules. We will need to reference some common classes as we go along.
We use the following code to create aliases for these Box2D classes:

// alias
var b2Vec2 = Box2D.Common.Math.b2Vec2
  , b2BodyDef = Box2D.Dynamics.b2BodyDef
  , b2Body = Box2D.Dynamics.b2Body
  , b2FixtureDef = Box2D.Dynamics.b2FixtureDef
  , b2World = Box2D.Dynamics.b2World
  , b2PolygonShape = Box2D.Collision.Shapes.b2PolygonShape
  , b2CircleShape = Box2D.Collision.Shapes.b2CircleShape
  , b2DebugDraw = Box2D.Dynamics.b2DebugDraw
  , b2RevoluteJointDef = Box2D.Dynamics.Joints.b2RevoluteJointDef
  ;

We prepare a variable that states how many pixels define 1 meter in the physics world. We also define a Boolean to determine if we need to draw the debug draw:

var pxPerMeter = 30; // 30 pixels = 1 meter. Box2D uses meters and we use pixels.
var shouldDrawDebug = false;

All the physics methods will be put into the game.physics object. We create this object literal before we code our logic:

var physics = game.physics = {};

The first method in the physics object creates the world:

physics.createWorld = function() {
  var gravity = new b2Vec2(0, 9.8);
  this.world = new b2World(gravity, /*allow sleep= */ true);

  // create two temporary bodies
  var bodyDef = new b2BodyDef;
  var fixDef = new b2FixtureDef;

  bodyDef.type = b2Body.b2_staticBody;
  bodyDef.position.x = 100/pxPerMeter;
  bodyDef.position.y = 100/pxPerMeter;
  fixDef.shape = new b2PolygonShape();
  fixDef.shape.SetAsBox(20/pxPerMeter, 20/pxPerMeter);
  this.world.CreateBody(bodyDef).CreateFixture(fixDef);

  bodyDef.type = b2Body.b2_dynamicBody;
  bodyDef.position.x = 200/pxPerMeter;
  bodyDef.position.y = 100/pxPerMeter;
  this.world.CreateBody(bodyDef).CreateFixture(fixDef);
  // end of temporary code
}

The update method is the game loop's tick event for the physics engine. It calculates the world step and refreshes the debug draw. The world step advances the physics world; we'll discuss it later:

physics.update = function() {
  this.world.Step(1/60, 10, 10);
  if (shouldDrawDebug) {
    this.world.DrawDebugData();
  }
  this.world.ClearForces();
};

Before we can refresh the debug draw, we need to set it up.
We pass a canvas reference to the Box2D debug draw instance and configure the drawing settings:

physics.showDebugDraw = function() {
  shouldDrawDebug = true;
  // set up debug draw
  var debugDraw = new b2DebugDraw();
  debugDraw.SetSprite(document.getElementById("debug-canvas").getContext("2d"));
  debugDraw.SetDrawScale(pxPerMeter);
  debugDraw.SetFillAlpha(0.3);
  debugDraw.SetLineThickness(1.0);
  debugDraw.SetFlags(b2DebugDraw.e_shapeBit | b2DebugDraw.e_jointBit);
  this.world.SetDebugDraw(debugDraw);
};

Let's move to the game.js file. We define the game-starting logic that sets up the EaselJS stage and Ticker. It creates the world and sets up the debug draw. The tick method calls the physics.update method:

;(function(game, cjs){
  game.start = function() {
    cjs.EventDispatcher.initialize(game); // allow the game object to listen for and dispatch custom events.
    game.canvas = document.getElementById('canvas');
    game.stage = new cjs.Stage(game.canvas);
    cjs.Ticker.setFPS(60);
    cjs.Ticker.addEventListener('tick', game.stage); // adding game.stage to the ticker makes the stage.update call automatic.
    cjs.Ticker.addEventListener('tick', game.tick); // gameloop
    game.physics.createWorld();
    game.physics.showDebugDraw();
  };
  game.tick = function(){
    if (cjs.Ticker.getPaused()) {
      return;
    }
    // run when not paused
    game.physics.update();
  };
  game.start();
}).call(this, game, createjs);

After these steps, we should have a result as shown in the following screenshot. It is a physics world with two bodies. One body stays in position and the other one falls to the bottom.

Objective complete – mini debriefing

We have defined our first physical world with one static object and one dynamic object that falls to the bottom. A static object is an object that is not affected by gravity or any other forces. On the other hand, a dynamic object is affected by all the forces.

Defining gravity

In reality, we have gravity on every planet. It's the same in the Box2D world. We need to define gravity for the world.
This is a ball-shooting game, so we will follow the rules of gravity on Earth. We use 0 for the x-axis and 9.8 for the y-axis. It is worth noting that we do not need to use the 9.8 value. For instance, we can set a smaller gravity value to simulate other planets in space, or even the moon; or, we can set the gravity to zero to create a top-down ice hockey game, where we apply force to the puck and rely on the collisions.

Debug draw

The physics engine focuses purely on the mathematical calculation. It doesn't care how the world will ultimately be presented, but it does provide a visual method to make debugging easier. This debug draw is very useful before we use our own graphics to represent the world. We won't use the debug draw in production. In fact, we can decide how we want to visualize this physics world. We have learned two ways to visualize the game: the first is by using DOM objects, and the second is by using the canvas drawing method. We will visualize the world with our own graphics in later tasks.

Understanding body definition and fixture definition

In order to define objects in the physics world, we need two definitions: a body definition and a fixture definition. The body is in charge of the physical properties, such as its position in the world, taking and applying force, moving speed, and the angular speed when rotating. We use fixtures to handle the shape of the object. The fixture definition also defines the properties of how the object interacts with others while colliding, such as friction and restitution.

Defining shapes

Shapes are defined in a fixture. The two most common shapes in Box2D are the rectangle and the circle. We define a rectangle with the SetAsBox function by providing half of its width and height, and a circle shape is defined by its radius. It is worth noting that the position of the body is at the center of the shape.
This is different from EaselJS, where the default origin point is set at the top-left corner.

Pixels per meter

When we define the dimension and location of a body, we use the meter as the unit. That's because Box2D uses metric units in its calculations to make the physics behavior realistic. But we usually calculate in pixels on the screen, so we need to convert between pixels on the screen and meters in the physics world. That's why we need the pxPerMeter variable. The value of this variable might change from project to project.

The update method

In the game tick, we update the physics world. The first thing we need to do is advance the world to the next step. Box2D calculates objects based on steps, just as time passes in the physical world. If a ball is falling, then at any fixed instant the ball occupies one position and carries a falling velocity; a millisecond, or a nanosecond, later, it has fallen to a new position. This is exactly how steps work in the Box2D world. In every single step, the objects are static with their physics properties. When we go a step further, Box2D takes the properties into consideration and applies them to the objects.

The Step method takes three arguments. The first argument is the time passed since the last step. Normally, it follows the frames-per-second parameter that we set for the game. The second and third arguments are the iterations for velocity and position. This is the maximum number of iterations Box2D tries when resolving a collision. Usually, we set them to a low value.

The reason we clear the forces is that a force would otherwise be applied indefinitely: the object keeps receiving the force on each frame until we clear it. Clearing forces on every frame makes the objects more manageable.

Classified intel

We often need to represent a 2D vector in the physics world. Box2D uses b2Vec2 for this purpose. Like b2Vec2, we use quite a lot of Box2D functions and classes.
They are modularized into namespaces. We alias the most common classes to make our code shorter.

Shooting the ball

In this task, we create a hoop and allow the player to throw the ball by clicking the mouse button. The ball may or may not pass through the hoop, depending on the throwing angle and power.

Prepare for lift off

We remove the two bodies that were created in the first task. Those two bodies were just an experiment and we don't need them anymore.

Engage thrusters

In the following steps, we will create the core part of this article: shooting the ball.

We will create a hoop and spawn a ball in the physics world. We create a function for these two tasks:

physics.createLevel = function() {
  this.createHoop();
  // the first ball
  this.spawnBall();
};

We are going to spawn many balls, so we define the following method for this task. In this task, we hardcode the position, ball size, and fixture properties. The ball is spawned as a static object until the player throws it:

physics.spawnBall = function() {
  var positionX = 300;
  var positionY = 200;
  var radius = 13;

  var bodyDef = new b2BodyDef;
  var fixDef = new b2FixtureDef;
  fixDef.density = 0.6;
  fixDef.friction = 0.8;
  fixDef.restitution = 0.1;

  bodyDef.type = b2Body.b2_staticBody;
  bodyDef.position.x = positionX/pxPerMeter;
  bodyDef.position.y = positionY/pxPerMeter;
  fixDef.shape = new b2CircleShape(radius/pxPerMeter);

  this.ball = this.world.CreateBody(bodyDef);
  this.ball.CreateFixture(fixDef);
};

We will need the ball position to calculate the throwing angle. We define the following method to get the ball position and convert it into screen coordinates:

physics.ballPosition = function(){
  var pos = this.ball.GetPosition();
  return {
    x: pos.x * pxPerMeter,
    y: pos.y * pxPerMeter
  };
};

By using the cursor and ball positions, we can calculate the angle.
This is the Math function that returns the angle, which will be explained later:

physics.launchAngle = function(stageX, stageY) {
  var ballPos = this.ballPosition();
  var diffX = stageX - ballPos.x;
  var diffY = stageY - ballPos.y;

  // Quadrant I
  var degreeAddition = 0;
  if (diffX < 0 && diffY > 0) {
    // Quadrant II
    degreeAddition = Math.PI;
  } else if (diffX < 0 && diffY < 0) {
    // Quadrant III
    degreeAddition = Math.PI;
  } else if (diffX > 0 && diffY < 0) {
    // Quadrant IV
    degreeAddition = Math.PI * 2;
  }
  var theta = Math.atan(diffY / diffX) + degreeAddition;
  return theta;
};

We have prepared the Math methods and can finally throw the ball with the following method:

physics.shootBall = function(stageX, stageY, ticksDiff) {
  this.ball.SetType(b2Body.b2_dynamicBody);
  var theta = this.launchAngle(stageX, stageY);
  var r = Math.log(ticksDiff) * 50; // power
  var resultX = r * Math.cos(theta);
  var resultY = r * Math.sin(theta);
  this.ball.ApplyTorque(30); // rotate it
  // shoot the ball
  this.ball.ApplyImpulse(new b2Vec2(resultX/pxPerMeter, resultY/pxPerMeter),
      this.ball.GetWorldCenter());
  this.ball = undefined;
};

We need a target for the throwing action, so we create the hoop with the following code.
A hoop is constructed using a board and two squares:

physics.createHoop = function() {
  var hoopX = 50;
  var hoopY = 100;

  var bodyDef = new b2BodyDef;
  var fixDef = new b2FixtureDef;

  // default fixture
  fixDef.density = 1.0;
  fixDef.friction = 0.5;
  fixDef.restitution = 0.2;

  // hoop
  bodyDef.type = b2Body.b2_staticBody;
  bodyDef.position.x = hoopX/pxPerMeter;
  bodyDef.position.y = hoopY/pxPerMeter;
  bodyDef.angle = 0;
  fixDef.shape = new b2PolygonShape();
  fixDef.shape.SetAsBox(5/pxPerMeter, 5/pxPerMeter);
  var body = this.world.CreateBody(bodyDef);
  body.CreateFixture(fixDef);

  bodyDef.type = b2Body.b2_staticBody;
  bodyDef.position.x = (hoopX+45)/pxPerMeter;
  bodyDef.position.y = hoopY/pxPerMeter;
  bodyDef.angle = 0;
  fixDef.shape = new b2PolygonShape();
  fixDef.shape.SetAsBox(5/pxPerMeter, 5/pxPerMeter);
  body = this.world.CreateBody(bodyDef);
  body.CreateFixture(fixDef);

  // hoop board dimension: 10x80 (5x40 in half value)
  bodyDef.type = b2Body.b2_staticBody;
  bodyDef.position.x = (hoopX-5)/pxPerMeter;
  bodyDef.position.y = (hoopY-40)/pxPerMeter;
  bodyDef.angle = 0;
  fixDef.shape = new b2PolygonShape();
  fixDef.shape.SetAsBox(5/pxPerMeter, 40/pxPerMeter);
  fixDef.restitution = 0.05;
  var board = this.world.CreateBody(bodyDef);
  board.CreateFixture(fixDef);
};

Now, we can initialize the world by calling the createLevel method. At the same time, we should remove the creation of the two test objects that we added in the last task:

game.physics.createLevel();

In the game.js file, we handle the mousedown and mouseup events to get the position of the cursor and the duration for which the mouse button was kept pressed.
The cursor's position determines the angle and the duration determines the power:

var isPlaying = true;
game.tickWhenDown = 0;
game.tickWhenUp = 0;
game.stage.on('stagemousedown', function(e){
  if (!isPlaying) { return; }
  game.tickWhenDown = cjs.Ticker.getTicks();
});
game.stage.on('stagemouseup', function(e){
  if (!isPlaying) { return; }
  game.tickWhenUp = cjs.Ticker.getTicks();
  var ticksDiff = game.tickWhenUp - game.tickWhenDown;
  game.physics.shootBall(e.stageX, e.stageY, ticksDiff);
  setTimeout(game.spawnBall, 500);
});

Finally, we spawn another ball after the last ball is thrown:

game.spawnBall = function() {
  game.physics.spawnBall();
};

After performing these steps, we should get the result as shown in the following screenshot. We can click anywhere on the screen; once the mouse button is released, the ball is thrown towards the position of the cursor. When the angle and power are right, the ball lands in the hoop.

Objective complete – mini debriefing

Thanks to the physics engine, we only have to define the position of the objects, the throwing angle, and the power to create the entire ball-throwing logic. The engine automatically calculates the path of the throw and the bounce of the ball.

Shooting the ball

We change the state of the ball from static to dynamic before applying force to it. Forces only affect dynamic bodies.

Once we know any two points on the screen, we can calculate the rotation angle using basic trigonometry. The following figure shows the relationship between the edges of the triangle and the angles between them:

The calculated angle is correct only if it is in the first quadrant. We determine the quadrant of the mouse cursor by using the ball's position as the origin. The following figure shows the four quadrants:

We need to add additional degrees to the calculated result if the cursor is not in the first quadrant.
The angles to be added, depending on the quadrant, are as follows:

- Quadrant 1: add 0
- Quadrants 2 and 3: add 180 degrees (PI)
- Quadrant 4: add 360 degrees (PI * 2)

After we get the angle, we need the value of the power. The power is based on the duration for which the mouse button is pressed: the shorter the press, the less power is applied; the longer the press, the more power is applied.

You may find that the power is hard to control if we use a linear scale; it is very difficult to find the right timing. Either the power is too little to be noticeable, or it is so much that the ball flies straight off the screen. The duration is too sensitive. We use the logarithm to solve this problem. The logarithm makes the shooting power much smoother. Imagine holding the mouse button: pressing for 500 milliseconds does not feel very different from pressing for 1 second, yet the raw value has doubled. That's why the timing is difficult when the power is determined on a linear millisecond scale. With the logarithm, the power value grows with the logarithm of the duration: 100 milliseconds might map to a value of 1, and 1 second to a value of 2. A longer press is still more powerful, but the difference is no longer so sensitive. The following figure shows how the logarithm decreases the sensitivity:

After we have the angle and the power, we decompose the vector into x-axis and y-axis components using trigonometry. This is the impulse force that we apply to the ball.

Applying the force

There are two ways to apply force in Box2D: ApplyForce and ApplyImpulse. In Box2D, a force is usually applied to a body consistently over a period of time; for example, we speed up a car by applying force. An impulse, by contrast, is a one-time application of force. Throwing a ball is more like an impulse than a constant force.
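The two calculations described above, the logarithmic power scale and the decomposition of the launch vector, can be tried outside the engine. This is a standalone sketch, not part of the book's code; the launchPower and launchVector names are made up for illustration:

```javascript
// Convert how long the button was held (in ticks) into a launch power.
// Math.log dampens the raw duration so that holding twice as long does
// not produce twice the force (the same log-scale trick used in shootBall).
function launchPower(ticksDiff) {
  return Math.log(ticksDiff) * 50;
}

// Decompose a power r at angle theta (radians) into x/y components,
// the kind of vector that would then be passed to ApplyImpulse.
function launchVector(r, theta) {
  return { x: r * Math.cos(theta), y: r * Math.sin(theta) };
}

// Holding 60 ticks (~1 s at 60 fps) vs 120 ticks: power grows gently.
var p1 = launchPower(60);   // ~204.7
var p2 = launchPower(120);  // ~239.4 -- far less than double
var v = launchVector(100, Math.PI / 4); // 45 degrees

console.log(p2 / p1 < 1.5);                    // true
console.log(Math.round(v.x), Math.round(v.y)); // 71 71
```

Doubling the hold time raises the power by only about 17 percent here, which is exactly why the log scale makes the shot controllable.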
Explaining the construction of the physics world

The hoop is constructed using three static bodies: a board and two squares, as illustrated in the following figure:

Classified intel

In the real world, when basketball players throw the ball, they spin it, as shown in the following figure:

We make the ball spin by applying torque to it as it is thrown.

Summary

In this article, we learned how to create a simulated physics world, put two objects and a hoop inside it, and allow the player to throw the ball by clicking the mouse button.

Resources for Article:

Further resources on this subject:
- HTML5 Games Development: Using Local Storage to Store Game Data [article]
- Basic use of Local Storage [article]
- Video conversion into the required HTML5 Video playback [article]
https://www.packtpub.com/books/content/html5-game-development-%E2%80%93-ball-shooting-machine-physics-engine
User Details

User Since: Fri, Sep 17, 11:45 AM (4 w, 12 h)

Today
- Cherry-pick to fix extraneous delete in patch, minor style / formatting changes.
- Parse Content-Length header to avoid HTTP response buffer reallocations, convert HTTP Body to MemoryBuffer.
- Rebase against main.

Yesterday
- Add *- C++ -*- header indicator to Caching.h.
- Replaced lit.util.pythonize_bool with llvm_canonicalize_cmake_booleans in lit.site.cfg.py.in.
- Rebase against main.
- Rebase against main.
- Refactor HTTP Client to use std::vector & improve code formatting.

Wed, Oct 13
- Rebase against main.
- Replace bash-based lit tests with python subprocess based tests.
- Add debuginfod client tests to unit test suite.
- Remove .123 file added inadvertently.
- Replace caching with LTO's localCache and refactor.
- Expose an optional callback function parameter of fetchInfo to process debugging assets in memory.
- Update code and documentation for llvm style guidelines.
- Replace the hex encoding of asset keys with xxhash.

Tue, Oct 12
- @tejohnson Thanks for your comments! From my perspective this patch is done. Please let me know if you have any further changes to suggest before accepting. Thanks!
- Make cache dir parameter a Twine and avoid unnecessary copy in conversion to SmallString.
- Rebase against main.
- Remove unnecessary use of #ifdef LLVM_ENABLE...
- Improve conformance to llvm coding standards. Use namespace qualifiers in debuginfod library implementation.
- Replace LLVM_ENABLE_DEBUGINFOD_CLIENT with LLVM_ENABLE_CURL.

Mon, Oct 11
- Thanks @phosek. I don't have commit access, could you land this patch for me? Please use "Noah Shutty <shutty@google.com>" to commit the change.
- Rebase against main.
- Changed header file from curl.h to curl/curl.h to pass check_symbol_exists test.
- Update to reflect modified Debuginfod / Curl Cmake vars.
- Remove extraneous change to llvm/CMakeLists.txt.
- Move CURL dependency logic to config-ix.cmake. Also update to the same model used for zlib as suggested by phosek.
- Rebase against main.
- Rebase against main.
- Make copies of string reference parameters outside the returned lambda.
- Move Debuginfod client library out of Support. Moved into separate library folder.

Fri, Oct 8
- Add documentation for new parameters to localCache.
- Adjust cache name/prefix customization. Move customization arguments to be first. Convert to twines, remove unnecessary copy. Separate customized file prefix from customized cache name (to avoid changing any behavior of the existing ThinLTO cache).
- Remove debuginfod namespace use from llvm-debuginfod-find.
- Rebase against main.
- Make file prefix and error prefix configurable. This sets the prefix for existing use to "Thin" to avoid changing the behavior.

Thu, Oct 7
- @MaskRay Thank you for pointing this out. I just added this! The use case is a Debuginfod client implementation. The AssetCache in the debuginfod client revision will be replaced by the localCache that was implemented for ThinLTO, after this revision moves the caching to Support. ThinLTO's caching code has advantages over our own, such as the tweaks for Windows compatibility (example).
- Eliminate http and debuginfod namespaces. Put everything in the llvm:: namespace and rename symbols (e.g. get -> httpGet) for clarity.
- A few miscellaneous style updates.
- Updating to meet conventions. Added license + descriptive headers to new source files, replaced enum with enum class, removed namespace nesting.

Wed, Oct 6
- Add missing newlines. Added needed newlines at end of files.
- Add lit test requirement. Add REQUIRES: debuginfod_client so that llvm-debuginfod-find only runs when the client is built.
https://reviews.llvm.org/p/noajshu/
CC-MAIN-2021-43
refinedweb
546
61.63
In order to fully understand this article, you must first read the article: An OO Persistence Approach using Reflection and Attributes in C#.

I am quite new to .NET. In the past I used J2EE and I was quite happy with it. I must admit that there are a few very nice things in .NET, and ASP.NET is a very big step ahead. The first time I saw some colleagues making an ASP.NET web application, I noticed something that I didn't quite like. Let me explain. Let's consider that we need a page that can create or update user information, considering that the user has an ID, name, username and password. For each of these, we should have a TextBox and a Label.

Now, if we want to make a new user, we display this form empty, and when Save is pressed, we get the Text property for each field and use it to insert a new record in the database. If we want to use the same form to update a user, we need to write a method to check if the user ID is in the request query string. If it is, we load the values for the user with id = Request.Params["userId"], and set the values in each TextBox. Then when the Save button is clicked, we must check whether it is an insert or an update (we could use a bool IsUpdate field) and perform the needed operation. This is simple, but it is far too much to write, especially if you have many forms. Could we automate this in some way?

Using the Persistence Framework explained in this article, we can make a web form that knows whether it is creating or updating, and that loads and saves itself very easily. For this, I chose to extend System.Web.UI.UserControl, so that a form like this can be easily integrated in more pages. For this I use the form class:

public class Form : System.Web.UI.UserControl

Now let's begin. Add a new user control to your web application, called UserForm. Drag from the toolbox 3 Labels, 3 TextBoxes and a Button. Then double-click to go to the code-behind of the user control. We need to make some modifications to it.

First, it must extend the Form class instead of System.Web.UI.UserControl. Now we need to override the Page_Load method from the form, in order to make the bindings between controls and User BO properties. This is accomplished using AddSimpleDataBinding(Control c, string controlProperty, string BOproperty). This tells the form that if it is an update, it should load the user object and set (by reflection) the value of each bound control with the value of the BOproperty, and at save time, it sets each BOproperty of the BO with the value of the controlProperty from the bound control.

protected override void Page_Load(object sender, System.EventArgs e)
{
    // Put user code to initialize the page here
    base.Page_Load(sender, e);
    if (!Page.IsPostBack)
    {
        this.AddSimpleDataBinding(this.TextBox1, "Text", "Name");
        this.AddSimpleDataBinding(this.TextBox2, "Text", "Username");
        this.AddSimpleDataBinding(this.TextBox3, "Text", "Password");
    }
}

Each form must know the BO type that it is used for. This is done using SetDataObjectType(Type BOtype). We just integrate this into the OnInit method generated by Visual Studio .NET.

#region Web Form Designer generated code
override protected void OnInit(EventArgs e)
{
    InitializeComponent();
    this.SetDataObjectType(typeof(User));
    base.OnInit(e);
}

private void InitializeComponent()
{
    this.Save.Click += new System.EventHandler(this.SaveClicked);
    this.Load += new System.EventHandler(this.Page_Load);
}
#endregion

If "Save" is pressed, it should save the data to the database. FormSave() performs the insert or the update as needed.

public void SaveClicked(object sender, System.EventArgs e)
{
    this.FormSave();
}

The difference between insert and update is made by checking the request query string to see if it contains a parameter with the same name as the property of the BO that is mapped as the primary key.

If you look into the User BO class code, you will find that:

[DBPrimaryKeyField("ID_User", DbType.Int16)]
[DBColumn("ID_User", DbType.Int16, false)]
public long UserID
{
    //...
}

So in this case it is UserID. If the query string is something like "....aspx?UserID=50&...", the form gets the data from the database and, according to the bindings declared, the controls are loaded with values. Also, an IsUpdate flag is set to true, and when FormSave is called, the form updates the user with UserID = 50 with the new values that the user has modified in the TextBoxes. If the query string did not contain the UserID parameter, the form would have been displayed empty, and when FormSave was called, a new User BO would be saved to the database.

Now let's complicate things a little bit. Let's add two new properties to the User BO:

[DBTable("Users", "dbo")]
[Serializable]
public class User
{
    public User() {}

    private long _id;
    private string _name;
    private string _username;
    private string _password;
    private bool _active;
    private int _role;

    //... code for the other properties

    [DBColumn("Active", DbType.String, false)]
    public bool Active
    {
        //set, get
    }

    [DBColumn("IDRole", DbType.Int16, false)]
    public int IDRole
    {
        //set, get
    }
}

Let's adapt the form: we have added a new CheckBox named CheckBox1 and a new DropDownList, called DropDownList1.

//new controls
protected System.Web.UI.WebControls.CheckBox CheckBox1;
protected System.Web.UI.WebControls.DropDownList DropDownList1;

//a new dataset
protected DataSet dataSet1 = new DataSet("mydataset");

We must bind them to User BO properties, so the code becomes:

protected override void Page_Load(object sender, System.EventArgs e)
{
    // Put user code to initialize the page here
    base.Page_Load(sender, e);
    if (!Page.IsPostBack)
    {
        // First, load all roles for the user from the Roles table:
        this.ps.LoadDataSet(this.dataSet1,
            "select ID,Name from Roles order by Name", "roles");

        this.AddSimpleDataBinding(this.TextBox1, "Text", "Name");
        this.AddSimpleDataBinding(this.TextBox2, "Text", "Username");
        this.AddSimpleDataBinding(this.TextBox3, "Text", "Password");

        // ... and the new bindings:
        this.AddSimpleDataBinding(this.CheckBox1, "Checked", "Active");
        this.AddListDataBinding(this.DropDownList1, "IDRole",
            this.dataSet1, "roles", "Name", "{0}", "ID");
    }
}

We use the following method to bind the DropDownList:

AddListDataBinding(Control c, string BOproperty, DataSet dataSource,
    string dataMemberTable, string dataTextField,
    string formattingExpression, string dataValueField)

Simpler, don't you think? You do not have to code the check for whether it is an insert or an update, nor the data loading for an update, and especially you do not have to code the insert or update of the data sent by the user.
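The binding calls above boil down to reflection: read a property by name from one object, write it by name on another. Outside .NET, the same mechanism can be sketched in a few lines of Python (the Binding, TextBox and User classes here are illustrative stand-ins, not part of the article's framework):

```python
class Binding:
    """One control-property <-> BO-property pair, resolved by name at runtime."""
    def __init__(self, control, control_prop, bo_prop):
        self.control = control
        self.control_prop = control_prop
        self.bo_prop = bo_prop

    def load(self, bo):
        # Update mode: copy the BO property value into the bound control property.
        setattr(self.control, self.control_prop, getattr(bo, self.bo_prop))

    def save(self, bo):
        # Save: copy the control property value back into the BO.
        setattr(bo, self.bo_prop, getattr(self.control, self.control_prop))


class TextBox:
    def __init__(self):
        self.text = ""


class User:
    def __init__(self, name=""):
        self.name = name
```

load mirrors what the form does when it detects an update (BO to control), and save mirrors FormSave (control to BO); no per-field glue code is needed.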
http://www.codeproject.com/KB/aspnet/transparentforms.aspx
crawl-002
refinedweb
1,083
56.25
Odoo Help

How to make a sequence field renumber lines according to the current filter automatically

I created a sequence field as a function field to show the number of records in the browse starting from '1', regardless of the 'id' value of the record. I used the following code:

def _get_sequence(self, cr, uid, ids, fields, args, context=None):
    res = {}
    line_o = self.pool.get('budget.transaction.line')
    for o in self.browse(cr, uid, ids):
        res[o.id] = 1 + line_o.search_count(cr, uid, [('id', '<', o.id)])
    return res

_columns = {
    ...
    ondelete='cascade', select=True, required=True),
    'sequence': fields.function(_get_sequence, string='Sequence', type='integer',
        help="Gives the sequence order when displaying a list of lines."),
}

The issue now is that I need this numbering to be recalculated if I change the filter applied on the browse from the advanced search option. How can I achieve this? Any help is appreciated.

You need to declare a field, preferably a dummy one that has no calculation method (or simply skip the calculation). Then you need to override the methods read and read_group to assign the value to each record that you will display. A little more tricky is read_group, which applies when you "group by" the data. The overridden methods need to assign the value of the field by incrementing the sequence value.
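The suggested approach — number the records in the order they are read, instead of deriving the sequence from stored ids — can be sketched in plain Python, with no Odoo API (the function names here are invented for illustration):

```python
def renumber(record_ids):
    """Assign 1-based sequence numbers in the order records pass the
    current filter, independent of their database ids."""
    return {rec_id: seq for seq, rec_id in enumerate(record_ids, start=1)}


def read_with_sequence(records, filtered_ids):
    """Mimic an overridden read(): attach the display sequence to each
    record that survived the current filter, in filter order."""
    seq = renumber(filtered_ids)
    return [dict(rec, sequence=seq[rec["id"]])
            for rec in records if rec["id"] in seq]
```

Because the numbering is computed from whatever id list the current search produced, changing the filter automatically changes the numbering — which is exactly why the answer recommends doing this in read/read_group rather than in a stored function field.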
https://www.odoo.com/forum/help-1/question/how-to-make-a-sequence-field-renumber-lines-according-to-the-current-filter-automatically-90196
CC-MAIN-2017-34
refinedweb
280
53.81
feature layers for the Santa Monica Mountains in California.

Update the Visual Studio solution

To start the tutorial, complete the Display a scene tutorial. These steps are not required; your code will still work if you keep the original name. The tutorial instructions and code use the name DisplayAScene for the solution, project, and namespace. You can choose any name you like, but it should be the same for each of these.

Update the name for the solution and the project.
- In Visual Studio, in the Solution Explorer, right-click the solution name and choose Rename. Provide the new name for your solution.
- In the Solution Explorer, right-click the project name and choose Rename. Provide the new name for your project.

Rename the namespace used by classes in the project.
- In the Solution Explorer, expand the project node.
- Double-click SceneViewModel.cs in the Solution Explorer to open the file.
- In the SceneViewModel class, double-click the namespace name (DisplayAScene) to select it, and then right-click and choose Rename....
- Provide the new name for the namespace.
- Click Apply in the Rename: DisplayAScene window that appears in the upper-right of the code window. This will rename the namespace throughout your project.

Build the project.
- Choose Build > Build solution (or press <F6>).

Display the web scene

You can display a web scene using the web scene's item ID. Create a scene from the web scene portal item, and display it in your app.

In Visual Studio, in the Solution Explorer, double-click SceneViewModel.cs to open the file. The project uses the Model-View-ViewModel (MVVM) design pattern to separate the application logic (view model) from the user interface (view). SceneViewModel.cs contains the view model class for the application, called SceneViewModel. See the Microsoft documentation for more information about the Model-View-ViewModel pattern.

Add the additional required using statements at the top of SceneViewModel.cs:

using System;
using System.Collections.Generic;
using System.Text;
using Esri.ArcGISRuntime.Geometry;
using Esri.ArcGISRuntime.Mapping;
using System.ComponentModel;
using System.Runtime.CompilerServices;
using Esri.ArcGISRuntime.Portal;
using System.Threading.Tasks;

In the SceneViewModel class, remove all the existing code in the SetupScene() function:

private void SetupScene()
{
    // Create a new scene with an imagery basemap.
    Scene scene = new Scene(BasemapStyle.ArcGISImageryStandard);

    // Set the view model "Scene" property.
    this.Scene = scene;
}

Modify the signature of the SetupScene() function to include the async keyword and return Task rather than void:

private async Task SetupScene()
{
}

When calling methods asynchronously inside a function (using the await keyword), the async keyword is required in the signature. Although a void return type would continue to work, this is not considered best practice. Exceptions thrown by an async void method cannot be caught outside of that method, are difficult to test, and can cause serious side effects if the caller is not expecting them to be asynchronous. The only circumstance where async void is acceptable is when using an event handler, such as a button click. See the Microsoft documentation for more information about Asynchronous programming with async and await.

Modify the call to SetupScene() (in the SceneViewModel constructor) to avoid a compilation warning. After changing SetupScene() to an asynchronous method, the following warning appears in the Visual Studio Error List:

Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the 'await' operator to the result of the call.

Because your code does not anticipate a return value from this call, the warning can be ignored.

To be more specific about your intentions with this call and to address the warning, add the following code to store the return value in a discard:

public SceneViewModel()
{
    _ = SetupScene();
}

From the Microsoft documentation: "[Discards] are placeholder variables that are intentionally unused in application code. Discards are equivalent to unassigned variables; they don't have a value. A discard communicates intent to the compiler and others that read your code: You intended to ignore the result of an expression."

Add code to the SetupScene() function to create a PortalItem for the web scene. To do this, provide the web scene's item ID and an ArcGISPortal referencing ArcGIS Online:

ArcGISPortal portal = await ArcGISPortal.CreateAsync();
PortalItem sceneItem = await PortalItem.CreateAsync(portal, "...");  // web scene item ID (elided in the source)

ArcGISPortal.CreateAsync() is a static method that returns a new instance of the ArcGISPortal class.

Create a Scene using the PortalItem. To display the scene, set the SceneViewModel.Scene property to this new Scene:

// Create the scene from the item.
Scene scene = new Scene(sceneItem);

// To display the scene, set the SceneViewModel.Scene property, which is bound to the scene view.
this.Scene = scene;

Click Debug > Start Debugging (or press <F5> on the keyboard) to run the app. Your app should display the scene that you viewed earlier in the Scene Viewer.

What's Next?

Learn how to use additional API features, ArcGIS location services, and ArcGIS tools in these tutorials:
https://developers.arcgis.com/net/scenes-3d/tutorials/display-a-web-scene/
CC-MAIN-2022-40
refinedweb
871
59.09
Iterator::IO - Filesystem and stream iterators.

This documentation describes version 0.02 of Iterator::IO.pm, August 23, 2005.

This module provides filesystem and stream iterator functions. See the Iterator module for more information about how to use iterators.

$iter = idir_listing ($path);
Iterator that returns the names of the files in the $path directory. If $path).

$iter = idir_walk ($path);
Returns the files in a directory tree, one by one. It's sort of like File::Find in slow motion. Requires IO::Dir and Cwd.

$iter = ifile ($filename, \%options);
Opens a file, generates an iterator to return the lines of the file. \%options is a reference to a hash of options. Currently, two options are supported:

chomp => boolean
Indicates whether lines should be chomped before being returned by the iterator. The default is true.

'$/' => value
Specifies what string to use as the record separator. If not specified, the current value of $/ is used. "rs" or "input_record_separator" may be used as option names instead of "$/", if you find that to be more readable. See the English module. Option names are case-insensitive.

$iter = ifile_reverse ($filename, \%options);
Exactly the same as "ifile", but reads the lines of the file backwards. The input_record_separator option values undef (slurp whole file) and scalar references (fixed-length records) are not currently supported.

This module exports all function names to the caller's namespace by default. See the Iterator module for how to trap and handle these exception objects.

Class: Iterator::X::Exhausted
You called value on an iterator that is exhausted; that is, there are no more values in the sequence to return. As a string, this exception is "Iterator is exhausted." $!.

Class: Iterator::X::Internal_Error
Something happened that I thought couldn't possibly happen. I would appreciate it if you could send me an email message detailing the circumstances of the error.

Requires the following additional modules:
- IO::Dir and Cwd are required if you use "idir_listing" or "idir_walk".
- IO::File is required if you use "ifile" or "ifile_reverse".

Higher Order Perl, Mark Jason Dominus, Morgan Kaufmann, 2005.

Much thanks to Will Coleda and Paul Lalli (and the RPI lily crowd in general) for suggestions for the pre-release version.
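The lazy directory walk that idir_walk provides — "File::Find in slow motion" — maps naturally onto a generator in other languages. A Python sketch of the same idea (the name is borrowed from the Perl module for illustration; this is not part of it):

```python
import os

def idir_walk(path):
    """Yield file paths under `path` one at a time, depth-first,
    doing no work beyond what the consumer has asked for so far."""
    for entry in sorted(os.listdir(path)):
        full = os.path.join(path, entry)
        if os.path.isdir(full):
            # Descend lazily: subdirectories are only read when reached.
            yield from idir_walk(full)
        else:
            yield full
```

Like the Perl iterator, nothing past the last requested value is read from disk, so even a huge tree can be consumed one file at a time.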
http://search.cpan.org/~roode/Iterator-IO-0.02/IO.pm
CC-MAIN-2017-47
refinedweb
361
59.9
Opened 8 years ago
Closed 8 years ago
Last modified 6 years ago

#12101 closed (fixed)

django.contrib.gis.gdal.OGRGeometry leaking memory

Description

gdal.OGRGeometry from WKB and WKT leaks memory like crazy (using GDAL 1.6.0, on Debian). Reference counts do not appear to go up, but memory climbs rapidly.

From WKB:

from django.contrib.gis.geos import Point
from django.contrib.gis import gdal

geom = Point(1, 1)
geom.srid = 4326
for i in xrange(1000000):
    g = gdal.OGRGeometry(geom.wkb, geom.srid)
    del g

From WKT:

from django.contrib.gis.geos import Point
from django.contrib.gis import gdal

geom = Point(1, 1)
geom.srid = 4326
for i in xrange(1000000):
    g = gdal.OGRGeometry(geom.wkt, geom.srid)
    del g

Attachments (1)

Changed 8 years ago by — Fixes memory leak when assigning SpatialReference to OGRGeometry

Change History (5)

comment:1 Changed 8 years ago by

comment:2 Changed 8 years ago by
I've come up with a patch that works, and all tests pass with it. However I can't fully explain yet why it works over the previous version, other than that somehow the cloned SpatialReference object references (within OGR, not Python) were not being released. As soon as I can explain why those references were not being released, I'll commit the patch.

comment:3 Changed 8 years ago by

comment:4 Changed 6 years ago by
Milestone 1.2 deleted

I've confirmed this is a bug; the leak is real. I think it's within the SpatialReference code because there's no balloon in memory usage when I change to this:
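The general shape of the fix for this class of leak — native references that Python's garbage collector cannot see must be released explicitly when the wrapper dies — can be sketched generically (fake_destroy stands in for a C-level destroy call such as a ctypes binding; nothing below is GeoDjango code):

```python
class NativeHandle:
    """Wrap an opaque native pointer and guarantee its release."""
    def __init__(self, ptr, destroy):
        self._ptr = ptr
        self._destroy = destroy  # e.g. a ctypes binding to the C destroy function

    def close(self):
        if self._ptr is not None:
            self._destroy(self._ptr)
            self._ptr = None     # makes double-close a no-op

    def __del__(self):
        # Last-resort release when the Python wrapper is collected.
        self.close()


released = []

def fake_destroy(ptr):
    """Stand-in for the native API: record which pointers were freed."""
    released.append(ptr)
```

A loop like the ticket's reproduction then frees one native object per iteration instead of accumulating them, which is why Python-side reference counts looked normal while memory climbed: the references that leaked lived on the C side.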
https://code.djangoproject.com/ticket/12101
CC-MAIN-2017-51
refinedweb
271
59.4
Notary Service support for JWT

Project description

ns_jwt: JSON Web Tokens for Notary Service

We will use the RS256 (public/private key) variant of JWT signing. For signing, NS is assumed to be in possession of a public-private keypair. Presidio can access the public key through static configuration or, possibly, by querying an endpoint on NS that is specified in the token.

NS tokens carry the following claims: For dates,.

Setup and configuration

No external configuration except for dependencies (PyJWT, cryptography, python-dateutil). As above, use a virtual environment:

virtualenv -p $(which python3) venv
source venv/bin/activate
pip install --editable ns_jwt
pip install pytest

Testing

Simply execute the command below. The test relies on having public.pem and private.pem (the public and private portions of an RSA key) present in the tests/ directory. You can generate new pairs using tests/gen-keypair.sh (relies on an openssl installation).

pytest -v ns_jwt

Teardown and Cleanup

None needed.

Troubleshooting

CI Logon or other JWTs may not decode outright using PyJWT due to binascii.Error: Incorrect padding and jwt.exceptions.DecodeError: Invalid crypto padding. This is due to the lack of base64 padding at the end of the token. Read it in as a string, then add the padding prior to decoding:

import jwt
with open('token_file.jwt') as f:
    token_string = f.read()
jwt.decode(token_string + "==", verify=False)

Any number of = can be added (at least 2) to fix the padding. If the token is read in as a byte string, convert it to utf-8 first: jwt_str = str(jwt_bin, 'utf-8'), then add the padding.
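The same padding repair can be done without guessing how many = signs to append: a base64url segment only needs to be padded out to a multiple of four characters. A sketch independent of PyJWT (helper names are mine):

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    """Decode a base64url JWT segment, restoring any stripped padding."""
    pad = -len(segment) % 4            # 0..3 '=' characters needed
    return base64.urlsafe_b64decode(segment + "=" * pad)

def jwt_payload(token: str) -> dict:
    """Read the payload of a JWT string. No signature check is done here."""
    header, payload, signature = token.split(".")
    return json.loads(b64url_decode(payload))
```

This is only for inspecting claims; verification of the RS256 signature still requires a JWT library and the public key.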
https://pypi.org/project/ns-jwt/
CC-MAIN-2021-04
refinedweb
292
58.48
FAQ How do I create an image registry for my plug-in?

If you're writing a plug-in with UI components, it should be a subclass of AbstractUIPlugin. This superclass already provides you with an empty image registry accessible by calling getImageRegistry. When the registry is first accessed, the hook method initializeImageRegistry will be called. You should override this method to populate your image registry with the image descriptors you need. You don't have to use this registry if you don't need it, and because it is created lazily on first access, there is no performance overhead if you never use it. Here is an example of a plug-in that adds a sample.gif image to its image registry:

public class ExamplesPlugin extends AbstractUIPlugin {
    public static final String PLUGIN_ID = "org.eclipse.faq.examples";
    public static final String IMAGE_ID = "sample.image";
    ...
    protected void initializeImageRegistry(ImageRegistry registry) {
        Bundle bundle = Platform.getBundle(PLUGIN_ID);
        IPath path = new Path("icons/sample.gif");
        URL url = Platform.find(bundle, path);
        ImageDescriptor desc = ImageDescriptor.createFromURL(url);
        registry.put(IMAGE_ID, desc);
    }
}
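The lazy-creation behavior described above is the ordinary lazy-initialization idiom; stripped of the Eclipse types, it can be sketched in plain Java (LazyRegistry and the String-for-ImageDescriptor stand-in are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

class LazyRegistry {
    private Map<String, String> images; // stands in for the real ImageRegistry

    // Analogous to initializeImageRegistry: called once, on first access.
    protected void initialize(Map<String, String> registry) {
        registry.put("sample.image", "icons/sample.gif");
    }

    public Map<String, String> getImageRegistry() {
        if (images == null) {           // nothing is built if this is never called
            images = new HashMap<>();
            initialize(images);
        }
        return images;
    }
}
```

Subclasses override initialize to add their own entries, and a plug-in that never touches getImageRegistry pays no cost — the same trade the AbstractUIPlugin design makes.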
http://wiki.eclipse.org/index.php?title=FAQ_How_do_I_create_an_image_registry_for_my_plug-in%3F&direction=prev&oldid=255686
CC-MAIN-2017-39
refinedweb
167
50.53
I am trying to run the example in this page … n_variable — you find it at the end of the post. I am compiling it with g++ 4.7.2 and everything goes fine, but when I run it I get:

$ g++ -std=c++11 -Wall test.cpp
$ ./a.out
terminate called after throwing an instance of 'std::system_error'
  what(): Operation not permitted
Cancelled (core dumped)

It's a bit strange because on with the same compiler it runs well. Could this be an Arch-related problem?

Source code:

#include <condition_variable>
#include <mutex>
#include <thread>
#include <iostream>
#include <queue>
#include <chrono>

int main()
{
    std::queue<int> produced_nums;
    std::mutex m;
    std::condition_variable cond_var;
    bool done = false;
    bool notified = false;

    std::thread producer([&]() {
        for (int i = 0; i < 5; ++i) {
            std::this_thread::sleep_for(std::chrono::seconds(1));
            std::unique_lock<std::mutex> lock(m);
            std::cout << "producing " << i << '\n';
            produced_nums.push(i);
            notified = true;
            cond_var.notify_one();
        }
        notified = true;
        done = true;
        cond_var.notify_one();
    });

    std::thread consumer([&]() {
        std::unique_lock<std::mutex> lock(m);
        while (!done) {
            while (!notified) { // loop to avoid spurious wakeups
                cond_var.wait(lock);
            }
            while (!produced_nums.empty()) {
                std::cout << "consuming " << produced_nums.front() << '\n';
                produced_nums.pop();
            }
            notified = false;
        }
    });

    producer.join();
    consumer.join();
}

Last edited by DarioP (2013-03-06 12:04:16)
Offline

g++ -std=c++11 -Wall -g -pthread test.cpp

'What can be asserted without evidence can also be dismissed without evidence.' - Christopher Hitchens
'There's no such thing as addiction, there's only things that you enjoy doing more than life.' - Doug Stanhope
GitHub Junkyard
Offline

-pthread Pfff silly me I lost a morning on this! And I even knew that option!!! That site should add the option automatically, that was confusing me. Thank you very much
Offline
https://bbs.archlinux.org/viewtopic.php?pid=1240576
CC-MAIN-2016-26
refinedweb
286
58.89
Hi all, I am very new so excuse the messy code. I am just trying to get this code to work properly first before cleaning it :p . For now I want it to: if user types "no" exit, if user types "yes" then run the next part, or if the user inputs anything else, it repeats the previous step (asking yes or no). At the moment, if the user types anything but "yes" it still tries to give it a price for some reason. I also want it to say the quantity of the apples, but unsure how to make a function return more than 1 thing.

#include <iostream>
#include <string>
using namespace std;

int quant(int qnty, int price)
{
    int sum;
    sum = qnty * price;
    return (sum);
}

int apples(int a)
{
    string yn;
    int answer, qnt;
    int appleprice = 1.2;
    cout << "\nWould you like some apples?";
    cin >> yn;
    if (yn == "yes" || "y" || "yeah" || "yep")
        answer = 1;
    if (yn == "no")
        answer = 2;
    switch (answer)
    {
    case 1:
        {
            cout << "\nHow many apples would you like? They are $1.20 ea.";
            cin >> qnt;
            quant(qnt, appleprice);
        }
        break;
    case 2:
        {
            cout << "\nOkay";
        }
        break;
    default:
        {
            cout << "\nlolwut?";
            return 0;
        }
    }
    cout << "\nThe price for " << qnt << " is: " << appleprice;
    return (a);
}

int main()
{
    int a;
    cout << "\nWelcome to the fruit shop!";
    cout << "\nWe only have apples at the moment.";
    apples(a);
    cout << a;
    //It keeps telling me "a" is uninitialized, which I dont care, is
    //there a better way to word this?
    main();
    return 0;
}
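Two of the problems in the question can be pinned down with a small sketch. The condition `yn == "yes" || "y" || "yeah" || "yep"` compares `yn` only against "yes"; the remaining string literals decay to non-null pointers, which are always true, so every input falls into case 1 — each alternative needs its own `yn ==`. And a function can return "more than one thing" by bundling the values in a struct. A hedged sketch (the helper names are invented here, and money is kept in integer cents because `int appleprice = 1.2;` silently truncates to 1):

```cpp
#include <string>

// Compare against each accepted answer explicitly; a bare literal like
// "y" converts to a non-null pointer and would always count as true.
bool is_yes(const std::string& yn) {
    return yn == "yes" || yn == "y" || yn == "yeah" || yn == "yep";
}

// Returning a struct hands back several values at once.
struct Order {
    int quantity;
    int total_cents;  // $1.20 each -> 120 cents, avoiding int truncation
};

Order make_order(int quantity, int price_cents) {
    return Order{quantity, quantity * price_cents};
}
```

The caller can then print both fields from the one returned Order instead of recomputing the price at the call site.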
https://www.daniweb.com/programming/software-development/threads/357499/beginner-help
CC-MAIN-2018-30
refinedweb
246
79.7
Extension methods, introduced in Orcas, seem very interesting. First of all, here is a brief look at what they are. Extension methods can be created for existing framework classes, or for the classes that you created yourself. For instance - one fine day, I might think that for my console application, the Framework's String class should have a Dump method for allowing me to dump the string directly to the console. So I'll go ahead and create an extension method called Dump. Using the this keyword in the parameter signature tells the compiler to apply our Dump extension method to the String class. Also, note that both the Dump extension method and the StringExtensions class where we define the extension method are static. Now, I can import the namespace ExtensionDemo.Extensions and our Dump extension method will be 'mapped' to the type you specified using the this keyword in the parameter signature of your extension method - i.e., the String class. You can go ahead and call your extension method, much like you call any other method defined in the String class, as shown below. Note that there is a small arrow in intellisense near the Dump method, which represents that it is an extension method. Also, as you might know, even this is possible. Run the application, and you see the string in the console. Remember that Extension methods are a compiler feature. The compiler will expand an extension method call to a static method call when the application is compiled. Hence, you might say an extension method is a new way of writing static methods that can be called using instance method invocation syntax.

The IL generated for the above extension method and method invocation will be equivalent to having a static method Dump() in the StringExtensions class, and invoking the same using conventional static invocation, like this:

//Conventional way to do what we've done earlier
//using extension methods

//StringExtensions.cs
using System;

namespace ExtensionDemo.Extensions
{
    public static class StringExtensions
    {
        //A simple old style static method
        public static void Dump(String stringToDump)
        {
            Console.WriteLine(stringToDump);
        }
    }
}

//Program.cs
//Old way of invoking the static method
using System.Text;
using ExtensionDemo.Extensions;

namespace ExtensionDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            string myString = "hello";
            StringExtensions.Dump(myString);
        }
    }
}

Here are a few more pointers I can put together.

- If you already have an instance method in your class with the same signature as an extension method you apply, the compiler will bind your call against the instance method.
- Extension methods are brought into scope at the namespace level. In simple words, you can't call the method Dump() on a string if you don't import the namespace where we have the static class with the extension method Dump() in it.
- You should not use Extension methods to extend a class if you can do that using some other means, like inheritance.
- As you might assume, Extension methods can't access private members of the type (because the type instance is passed as a parameter).
- If the type you are applying the Extension method to changes, your code might break. For example, assume that you are accessing a property PropertyA in your extension method, and tomorrow someone changes the property name to PropertyB.
- Use Extension methods sparingly and judiciously.

Will be posting more on combining Lambda expressions with Extension methods, and finally some insights on LINQ as well.

"As you might assume, Extension methods can't access private members of the type"

Damn, I guess it's not really "extending" then is it? I can't think of too many situations where this feature would be useful over existing methods. I suppose it can clean the syntax up a bit though.

Yes. But think about this. It is the only way to really 'extend' a compiled class :). As I explained in my example, you can use this to write custom methods for Framework classes. For example, LINQ uses extension methods to add methods like Where, Select etc. to the existing IEnumerable interface.
http://www.amazedsaint.com/2008/07/extension-methods-in-c-first-look.html
CC-MAIN-2019-51
refinedweb
658
54.93
Matt Lundin <address@hidden> writes: > Hi Eric and Christian, > > Christian Moe <address@hidden> writes: > >> The *conclusion* (where Eric Schulte's new bibtex functions should go) >> is not a big concern to me, but FWIW, the *premise* strikes me as >> unnecessarily restrictive. >> >> I submit that, for any non-Org format or application "foo", the module >> org-foo.el does not have to be restricted to providing an Org link >> type for foo. It seems a sensible namespace for e.g. foo-Org/Org-foo >> conversion functions as well. The fact that several modules so named >> *at present* only provide link functionality does not, I think, amount >> to a convention that this is all they should do. > > Christian, you are right. I stand corrected. I agree that the namespace > can accommodate import/export/conversion features in addition to > hyperlinking. > > Apologies (especially to Eric) for my wavering on where to put this. > This functionality is indeed not a generic bib backend, but rather > tightly integrated with bibtex-mode and the bibtex format. So a full +1 > for adding this to org-bibtex.el. And that's my final answer... :) > No problem here, this is the sort of question I'm happy to defer on, especially as it shouldn't really affect the final functionality. Given that I hadn't yet started to extract the code this is no skin off my nose. I'll go ahead and leave this code where it is. Cheers -- Eric -- Eric Schulte
http://lists.gnu.org/archive/html/emacs-orgmode/2011-04/msg00758.html
CC-MAIN-2016-50
refinedweb
244
66.54
This page contains an archived post to the Java Answers Forum made prior to February 25, 2002. If you wish to participate in discussions, please visit the new Artima Forums.

use of "final" in variable declaration

Posted by Jody Brown on October 30, 2001 at 4:41 AM

Yuan,

The important thing to remember here is that although a variable declared final can't be changed, the key here is that it can't be changed "once it has been initialised". Take the following example:

public class MyClass
{
    public final String myVariable;

    public String myMethod()
    {
        // here, myVariable can be initialised
        myVariable = "a value";
        // here, myVariable has been set and cannot be changed.
    }
}
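One correction worth noting: a blank final instance field may only be assigned in a constructor or an instance initializer, so the assignment inside an ordinary method like myMethod above would be rejected by the compiler. A compilable version of the same idea (class name changed for illustration):

```java
class FinalDemo {
    public final String myVariable; // blank final: declared without a value

    public FinalDemo(String value) {
        // A blank final must be definitely assigned exactly once,
        // and only in a constructor or instance initializer.
        myVariable = value;
    }

    // Assigning myVariable anywhere else is a compile-time error.
}
```

After construction the field is fixed for the lifetime of the object, which is exactly the "can't be changed once it has been initialised" behavior the post describes.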
http://www.artima.com/legacy/answers/Oct2001/messages/268.html
CC-MAIN-2014-49
refinedweb
110
57.81
Dask Development Log This work is supported by Anaconda Inc To increase transparency I’m trying to blog more often about the current work going on around Dask and related projects. Nothing here is ready for production. This blogpost is written in haste, so refined polish should not be expected. Current efforts for June 2018 in Dask and Dask-related projects include the following: - Yarn Deployment - More examples for machine learning - Incremental machine learning - HPC Deployment configuration Yarn deployment Dask developers often get asked How do I deploy Dask on my Hadoop/Spark/Hive cluster?. We haven’t had a very good answer until recently. Most Hadoop/Spark/Hive clusters are actually Yarn. Unfortunately Yarn has really only been accessible through a Java API, and so has been difficult for Dask to interact with. That’s changing now with a few projects, including: - dask-yarn: an easy way to launch Dask on Yarn clusters - skein: an easy way to launch generic services on Yarn clusters (this is primarily what backs dask-yarn) - conda-pack: an easy way to bundle together a conda package into a redeployable environment, such as is useful when launching Python applications on Yarn This work is all being done by Jim Crist who is, I believe, currently writing up a blogpost about the topic at large. Dask-yarn was soft-released last week though, so people should give it a try and report feedback on the dask-yarn issue tracker. If you ever wanted direct help on your cluster, now is the right time because Jim is working on this actively and is not yet drowned in user requests so generally has a fair bit of time to investigate particular cases. 
    from dask_yarn import YarnCluster
    from dask.distributed import Client

    # Create a cluster where each worker has two cores and eight GB of memory
    cluster = YarnCluster(environment='environment.tar.gz',
                          worker_vcores=2,
                          worker_memory="8GB")

    # Scale out to ten such workers
    cluster.scale(10)

    # Connect to the cluster
    client = Client(cluster)

More examples for machine learning

Previously we had a single example for arrays, dataframes, delayed, machine learning, etc. Now Scott Sievert is expanding the examples within the machine learning section. He has submitted the following two so far:

I believe he's planning on more. If you use dask-ml and have recommendations or want to help, you might want to engage in the dask-ml issue tracker or dask-examples issue tracker.

Incremental training

The incremental training mentioned as an example above is also new-ish. This is a Scikit-Learn style meta-estimator that wraps around other estimators that support the partial_fit method. It enables training on large datasets in an incremental or batchwise fashion.

Before

    from sklearn.linear_model import SGDClassifier
    sgd = SGDClassifier(...)

    import pandas as pd
    for filename in filenames:
        df = pd.read_csv(filename)
        X, y = ...
        sgd.partial_fit(X, y)

After

    from sklearn.linear_model import SGDClassifier
    from dask_ml.wrappers import Incremental
    sgd = SGDClassifier(...)
    inc = Incremental(sgd)

    import dask.dataframe as dd
    df = dd.read_csv(filenames)
    X, y = ...
    inc.fit(X, y)

There's ongoing work on how best to combine this with other work like pipelines and hyper-parameter searches to fill in the extra computation. This work was primarily done by Tom Augspurger with help from Scott Sievert

Dask User Stories

Dask developers are often asked "Who uses Dask?". This is a hard question to answer because, even though we're inundated with thousands of requests for help from various companies and research groups, it's never fully clear who minds having their information shared with others.
We're now trying to crowdsource this information in a more explicit way by having users tell their own stories. Hopefully this helps other users in their field understand how Dask can help and when it might (or might not) be useful to them. We originally collected this information in a Google Form but have since then moved it to a Github repository. Eventually we'll publish this as a proper web site and include it in our documentation. If you use Dask and want to share your story this is a great way to contribute to the project. Arguably Dask needs more help with spreading the word than it does with technical solutions.

HPC Deployments

The Dask Jobqueue.
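Looping back to the incremental-training section above, the partial_fit pattern it builds on can be exercised end to end on synthetic data. This is a hedged sketch, not dask-ml's Incremental wrapper itself: the dataset, batch layout, and model settings are illustrative assumptions, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Synthetic "batches" standing in for the per-file loop in the post.
rng = np.random.RandomState(0)
sgd = SGDClassifier(random_state=0)

classes = np.array([0, 1])
for _ in range(5):                          # five batches, e.g. five files
    X = rng.randn(200, 3)
    y = (X[:, 0] > 0).astype(int)           # label depends on the first feature
    sgd.partial_fit(X, y, classes=classes)  # incremental update, never all data at once

# The model learned the rule despite only ever seeing one batch at a time.
X_test = rng.randn(500, 3)
accuracy = (sgd.predict(X_test) == (X_test[:, 0] > 0)).mean()
print(accuracy)
```

The same loop is what Incremental automates over the blocks of a Dask array or dataframe.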
https://matthewrocklin.com/blog/work/2018/07/08/dask-dev
Working with Time Series

Pandas was developed in the context of financial modeling, so as you might expect, it contains a fairly extensive set of tools for working with dates, times, and time-indexed data. Date and time data comes in a few flavors, which we will discuss here:

- Time stamps reference particular moments in time (e.g., July 4th, 2015 at 7:00am).
- Time intervals and periods reference a length of time between a particular beginning and end point; for example, the year 2015. Periods usually reference a special case of time intervals in which each interval is of uniform length and does not overlap (e.g., 24 hour-long periods comprising days).
- Time deltas or durations reference an exact length of time (e.g., a duration of 22.56 seconds).

In this section, we will introduce how to work with each of these types of date/time data in Pandas. This short section is by no means a complete guide to the time series tools available in Python or Pandas, but instead is intended as a broad overview of how you as a user should approach working with time series. We will start with a brief discussion of tools for dealing with dates and times in Python, before moving more specifically to a discussion of the tools provided by Pandas. After listing some resources that go into more depth, we will review some short examples of working with time series data in Pandas.

Dates and Times in Python¶

Native Python dates and times: datetime and dateutil¶

Python's basic objects for working with dates and times reside in the built-in datetime module. Along with the third-party dateutil module, you can use it to quickly perform a host of useful functionalities on dates and times.
For example, you can manually build a date using the datetime type:

    from datetime import datetime
    datetime(year=2015, month=7, day=4)

    datetime.datetime(2015, 7, 4, 0, 0)

Or, using the dateutil module, you can parse dates from a variety of string formats:

    from dateutil import parser
    date = parser.parse("4th of July, 2015")
    date

    datetime.datetime(2015, 7, 4, 0, 0)

Once you have a datetime object, you can do things like printing the day of the week:

    date.strftime('%A')

    'Saturday'

In the final line, we've used one of the standard string format codes for printing dates ("%A"), which you can read about in the strftime section of Python's datetime documentation. Documentation of other useful date utilities can be found in dateutil's online documentation. A related package to be aware of is pytz, which contains tools for working with the most migraine-inducing piece of time series data: time zones.

The power of datetime and dateutil lies in their flexibility and easy syntax: you can use these objects and their built-in methods to easily perform nearly any operation you might be interested in. Where they break down is when you wish to work with large arrays of dates and times: just as lists of Python numerical variables are suboptimal compared to NumPy-style typed numerical arrays, lists of Python datetime objects are suboptimal compared to typed arrays of encoded dates.
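As a brief aside on the pytz package mentioned above: the standard library's zoneinfo module (Python 3.9+) now fills the same role, and is used here as a stand-in so the example stays self-contained; the zone names are standard IANA identifiers:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib stand-in for the role the text assigns to pytz

# 7am on July 4th, 2015, in New York (EDT, i.e. UTC-4 on that date)
t = datetime(2015, 7, 4, 7, 0, tzinfo=ZoneInfo("America/New_York"))
utc = t.astimezone(ZoneInfo("UTC"))
print(utc)  # 2015-07-04 11:00:00+00:00
```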
The datetime64 requires a very specific input format: import numpy as np date = np.array('2015-07-04', dtype=np.datetime64) date array(datetime.date(2015, 7, 4), dtype='datetime64[D]') Once we have this date formatted, however, we can quickly do vectorized operations on it: date + np.arange(12) array(['2015-07-04', '2015-07-05', '2015-07-06', '2015-07-07', '2015-07-08', '2015-07-09', '2015-07-10', '2015-07-11', '2015-07-12', '2015-07-13', '2015-07-14', '2015-07-15'], dtype='datetime64[D]') Because of the uniform type in NumPy datetime64 arrays, this type of operation can be accomplished much more quickly than if we were working directly with Python's datetime objects, especially as arrays get large (we introduced this type of vectorization in Computation on NumPy Arrays: Universal Functions). One detail of the datetime64 and timedelta64 objects is that they are built on a fundamental time unit. Because the datetime64 object is limited to 64-bit precision, the range of encodable times is $2^{64}$ times this fundamental unit. In other words, datetime64 imposes a trade-off between time resolution and maximum time span. For example, if you want a time resolution of one nanosecond, you only have enough information to encode a range of $2^{64}$ nanoseconds, or just under 600 years. NumPy will infer the desired unit from the input; for example, here is a day-based datetime: np.datetime64('2015-07-04') numpy.datetime64('2015-07-04') Here is a minute-based datetime: np.datetime64('2015-07-04 12:00') numpy.datetime64('2015-07-04T12:00') Notice that the time zone is automatically set to the local time on the computer executing the code. 
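The resolution/span trade-off described above is easy to check with plain arithmetic: 2^64 nanosecond-sized steps cover roughly 584 years, the "just under 600 years" quoted in the text. The constants below are ordinary arithmetic, not NumPy API calls:

```python
# 2**64 nanoseconds, expressed in (Julian) years
ns_per_year = 365.25 * 24 * 60 * 60 * 1e9
span_years = 2**64 / ns_per_year
print(int(span_years))  # 584
```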
You can force any desired fundamental unit using one of many format codes; for example, here we'll force a nanosecond-based time:

    np.datetime64('2015-07-04 12:59:59.50', 'ns')

    numpy.datetime64('2015-07-04T12:59:59.500000000')

The following table, drawn from the NumPy datetime64 documentation, lists the available format codes along with the relative and absolute timespans that they can encode (abridged here to the codes and their relative spans):

    Code    Meaning         Time span (relative)
    Y       year            +/- 9.2e18 years
    M       month           +/- 7.6e17 years
    W       week            +/- 1.7e17 years
    D       day             +/- 2.5e16 years
    h       hour            +/- 1.0e15 years
    m       minute          +/- 1.7e13 years
    s       second          +/- 2.9e11 years
    ms      millisecond     +/- 2.9e8 years
    us      microsecond     +/- 2.9e5 years
    ns      nanosecond      +/- 292 years
    ps      picosecond      +/- 106 days
    fs      femtosecond     +/- 2.6 hours
    as      attosecond      +/- 9.2 seconds

For the types of data we see in the real world, a useful default is datetime64[ns], as it can encode a useful range of modern dates with a suitably fine precision. Finally, we will note that while the datetime64 data type addresses some of the deficiencies of the built-in Python datetime type, it lacks many of the convenient methods and functions provided by datetime and especially dateutil. More information can be found in NumPy's datetime64 documentation.
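Closing out the NumPy discussion, the timedelta64 companion type mentioned above supports the same style of typed arithmetic; a small sketch:

```python
import numpy as np

a = np.datetime64('2015-07-04')
b = np.datetime64('2015-07-11')

delta = b - a          # a timedelta64, measured in days here
print(delta)           # 7 days
print(a + 2 * delta)   # 2015-07-18
```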
We can parse a flexibly formatted string date, and use format codes to output the day of the week: import pandas as pd date = pd.to_datetime("4th of July, 2015") date Timestamp('2015-07-04 00:00:00') date.strftime('%A') 'Saturday' Additionally, we can do NumPy-style vectorized operations directly on this same object: date + pd.to_timedelta(np.arange(12), 'D') DatetimeIndex(['2015-07-04', '2015-07-05', '2015-07-06', '2015-07-07', '2015-07-08', '2015-07-09', '2015-07-10', '2015-07-11', '2015-07-12', '2015-07-13', '2015-07-14', '2015-07-15'], dtype='datetime64[ns]', freq=None) In the next section, we will take a closer look at manipulating time series data with the tools provided by Pandas. index = pd.DatetimeIndex(['2014-07-04', '2014-08-04', '2015-07-04', '2015-08-04']) data = pd.Series([0, 1, 2, 3], index=index) data 2014-07-04 0 2014-08-04 1 2015-07-04 2 2015-08-04 3 dtype: int64 Now that we have this data in a Series, we can make use of any of the Series indexing patterns we discussed in previous sections, passing values that can be coerced into dates: data['2014-07-04':'2015-07-04'] 2014-07-04 0 2014-08-04 1 2015-07-04 2 dtype: int64 There are additional special date-only indexing operations, such as passing a year to obtain a slice of all data from that year: data['2015'] 2015-07-04 2 2015-08-04 3 dtype: int64 Later, we will see additional examples of the convenience of dates-as-indices. But first, a closer look at the available time series data structures. Pandas Time Series Data Structures¶ This section will introduce the fundamental Pandas data structures for working with time series data: - For time stamps, Pandas provides the Timestamptype. As mentioned before, it is essentially a replacement for Python's native datetime, but is based on the more efficient numpy.datetime64data type. The associated Index structure is DatetimeIndex. - For time Periods, Pandas provides the Periodtype. This encodes a fixed-frequency interval based on numpy.datetime64. 
The associated index structure is PeriodIndex. - For time deltas or durations, Pandas provides the Timedeltatype. Timedeltais a more efficient replacement for Python's native datetime.timedeltatype, and is based on numpy.timedelta64. The associated index structure is TimedeltaIndex. The most fundamental of these date/time objects are the Timestamp and DatetimeIndex objects. While these class objects can be invoked directly, it is more common to use the pd.to_datetime() function, which can parse a wide variety of formats. Passing a single date to pd.to_datetime() yields a Timestamp; passing a series of dates by default yields a DatetimeIndex: dates = pd.to_datetime([datetime(2015, 7, 3), '4th of July, 2015', '2015-Jul-6', '07-07-2015', '20150708']) dates DatetimeIndex(['2015-07-03', '2015-07-04', '2015-07-06', '2015-07-07', '2015-07-08'], dtype='datetime64[ns]', freq=None) Any DatetimeIndex can be converted to a PeriodIndex with the to_period() function with the addition of a frequency code; here we'll use 'D' to indicate daily frequency: dates.to_period('D') PeriodIndex(['2015-07-03', '2015-07-04', '2015-07-06', '2015-07-07', '2015-07-08'], dtype='int64', freq='D') A TimedeltaIndex is created, for example, when a date is subtracted from another: dates - dates[0] TimedeltaIndex(['0 days', '1 days', '3 days', '4 days', '5 days'], dtype='timedelta64[ns]', freq=None) Regular sequences: pd.date_range()¶ To make the creation of regular date sequences more convenient, Pandas offers a few functions for this purpose: pd.date_range() for timestamps, pd.period_range() for periods, and pd.timedelta_range() for time deltas. We've seen that Python's range() and NumPy's np.arange() turn a startpoint, endpoint, and optional stepsize into a sequence. Similarly, pd.date_range() accepts a start date, an end date, and an optional frequency code to create a regular sequence of dates. 
By default, the frequency is one day: pd.date_range('2015-07-03', '2015-07-10') DatetimeIndex(['2015-07-03', '2015-07-04', '2015-07-05', '2015-07-06', '2015-07-07', '2015-07-08', '2015-07-09', '2015-07-10'], dtype='datetime64[ns]', freq='D') Alternatively, the date range can be specified not with a start and endpoint, but with a startpoint and a number of periods: pd.date_range('2015-07-03', periods=8) DatetimeIndex(['2015-07-03', '2015-07-04', '2015-07-05', '2015-07-06', '2015-07-07', '2015-07-08', '2015-07-09', '2015-07-10'], dtype='datetime64[ns]', freq='D') The spacing can be modified by altering the freq argument, which defaults to D. For example, here we will construct a range of hourly timestamps: pd.date_range('2015-07-03', periods=8, freq='H') DatetimeIndex(['2015-07-03 00:00:00', '2015-07-03 01:00:00', '2015-07-03 02:00:00', '2015-07-03 03:00:00', '2015-07-03 04:00:00', '2015-07-03 05:00:00', '2015-07-03 06:00:00', '2015-07-03 07:00:00'], dtype='datetime64[ns]', freq='H') To create regular sequences of Period or Timedelta values, the very similar pd.period_range() and pd.timedelta_range() functions are useful. Here are some monthly periods: pd.period_range('2015-07', periods=8, freq='M') PeriodIndex(['2015-07', '2015-08', '2015-09', '2015-10', '2015-11', '2015-12', '2016-01', '2016-02'], dtype='int64', freq='M') And a sequence of durations increasing by an hour: pd.timedelta_range(0, periods=10, freq='H') TimedeltaIndex(['00:00:00', '01:00:00', '02:00:00', '03:00:00', '04:00:00', '05:00:00', '06:00:00', '07:00:00', '08:00:00', '09:00:00'], dtype='timedelta64[ns]', freq='H') All of these require an understanding of Pandas frequency codes, which we'll summarize in the next section. The monthly, quarterly, and annual frequencies are all marked at the end of the specified period. 
By adding an S suffix to any of these, they instead will be marked at the beginning: Additionally, you can change the month used to mark any quarterly or annual code by adding a three-letter month code as a suffix: Q-JAN, BQ-FEB, QS-MAR, BQS-APR, etc. A-JAN, BA-FEB, AS-MAR, BAS-APR, etc. In the same way, the split-point of the weekly frequency can be modified by adding a three-letter weekday code: W-SUN, W-MON, W-TUE, W-WED, etc. On top of this, codes can be combined with numbers to specify other frequencies. For example, for a frequency of 2 hours 30 minutes, we can combine the hour ( H) and minute ( T) codes as follows: pd.timedelta_range(0, periods=9, freq="2H30T") TimedeltaIndex(['00:00:00', '02:30:00', '05:00:00', '07:30:00', '10:00:00', '12:30:00', '15:00:00', '17:30:00', '20:00:00'], dtype='timedelta64[ns]', freq='150T') All of these short codes refer to specific instances of Pandas time series offsets, which can be found in the pd.tseries.offsets module. For example, we can create a business day offset directly as follows: from pandas.tseries.offsets import BDay pd.date_range('2015-07-01', periods=5, freq=BDay()) DatetimeIndex(['2015-07-01', '2015-07-02', '2015-07-03', '2015-07-06', '2015-07-07'], dtype='datetime64[ns]', freq='B') For more discussion of the use of frequencies and offsets, see the "DateOffset" section of the Pandas documentation. Resampling, Shifting, and Windowing¶ The ability to use dates and times as indices to intuitively organize and access data is an important piece of the Pandas time series tools. The benefits of indexed data in general (automatic alignment during operations, intuitive data slicing and access, etc.) still apply, and Pandas provides several additional time series-specific operations. We will take a look at a few of those here, using some stock price data as an example. Because Pandas was developed largely in a finance context, it includes some very specific tools for financial data. 
For example, the accompanying pandas-datareader package (installable via conda install pandas-datareader), knows how to import financial data from a number of available sources, including Yahoo finance, Google Finance, and others. Here we will load Google's closing price history: from pandas_datareader import data goog = data.DataReader('GOOG', start='2004', end='2016', data_source='google') goog.head() For simplicity, we'll use just the closing price: goog = goog['Close'] %matplotlib inline import matplotlib.pyplot as plt import seaborn; seaborn.set() goog.plot(); Resampling and converting frequencies¶ One common need for time series data is resampling at a higher or lower frequency. This can be done using the resample() method, or the much simpler asfreq() method. The primary difference between the two is that resample() is fundamentally a data aggregation, while asfreq() is fundamentally a data selection. Taking a look at the Google closing price, let's compare what the two return when we down-sample the data. Here we will resample the data at the end of business year: goog.plot(alpha=0.5, style='-') goog.resample('BA').mean().plot(style=':') goog.asfreq('BA').plot(style='--'); plt.legend(['input', 'resample', 'asfreq'], loc='upper left'); Notice the difference: at each point, resample reports the average of the previous year, while asfreq reports the value at the end of the year. For up-sampling, resample() and asfreq() are largely equivalent, though resample has many more options available. In this case, the default for both methods is to leave the up-sampled points empty, that is, filled with NA values. Just as with the pd.fillna() function discussed previously, asfreq() accepts a method argument to specify how values are imputed. 
Here, we will resample the business day data at a daily frequency (i.e., including weekends): fig, ax = plt.subplots(2, sharex=True) data = goog.iloc[:10] data.asfreq('D').plot(ax=ax[0], marker='o') data.asfreq('D', method='bfill').plot(ax=ax[1], style='-o') data.asfreq('D', method='ffill').plot(ax=ax[1], style='--o') ax[1].legend(["back-fill", "forward-fill"]); The top panel is the default: non-business days are left as NA values and do not appear on the plot. The bottom panel shows the differences between two strategies for filling the gaps: forward-filling and backward-filling. Time-shifts¶ Another common time series-specific operation is shifting of data in time. Pandas has two closely related methods for computing this: shift() and tshift() In short, the difference between them is that shift() shifts the data, while tshift() shifts the index. In both cases, the shift is specified in multiples of the frequency. Here we will both shift() and tshift() by 900 days; fig, ax = plt.subplots(3, sharey=True) # apply a frequency to the data goog = goog.asfreq('D', method='pad') goog.plot(ax=ax[0]) goog.shift(900).plot(ax=ax[1]) goog.tshift(900).plot(ax=ax[2]) # legends and annotations local_max = pd.to_datetime('2007-11-05') offset = pd.Timedelta(900, 'D') ax[0].legend(['input'], loc=2) ax[0].get_xticklabels()[2].set(weight='heavy', color='red') ax[0].axvline(local_max, alpha=0.3, color='red') ax[1].legend(['shift(900)'], loc=2) ax[1].get_xticklabels()[2].set(weight='heavy', color='red') ax[1].axvline(local_max + offset, alpha=0.3, color='red') ax[2].legend(['tshift(900)'], loc=2) ax[2].get_xticklabels()[1].set(weight='heavy', color='red') ax[2].axvline(local_max + offset, alpha=0.3, color='red'); We see here that shift(900) shifts the data by 900 days, pushing some of it off the end of the graph (and leaving NA values at the other end), while tshift(900) shifts the index values by 900 days. A common context for this type of shift is in computing differences over time. 
For example, we use shifted values to compute the one-year return on investment for Google stock over the course of the dataset: ROI = 100 * (goog.tshift(-365) / goog - 1) ROI.plot() plt.ylabel('% Return on Investment'); This helps us to see the overall trend in Google stock: thus far, the most profitable times to invest in Google have been (unsurprisingly, in retrospect) shortly after its IPO, and in the middle of the 2009 recession. Rolling windows¶ Rolling statistics are a third type of time series-specific operation implemented by Pandas. These can be accomplished via the rolling() attribute of Series and DataFrame objects, which returns a view similar to what we saw with the groupby operation (see Aggregation and Grouping). This rolling view makes available a number of aggregation operations by default. For example, here is the one-year centered rolling mean and standard deviation of the Google stock prices: rolling = goog.rolling(365, center=True) data = pd.DataFrame({'input': goog, 'one-year rolling_mean': rolling.mean(), 'one-year rolling_std': rolling.std()}) ax = data.plot(style=['-', '--', ':']) ax.lines[0].set_alpha(0.3) As with group-by operations, the aggregate() and apply() methods can be used for custom rolling computations. Where to Learn More¶ This section has provided only a brief summary of some of the most essential features of time series tools provided by Pandas; for a more complete discussion, you can refer to the "Time Series/Date" section of the Pandas online documentation. Another excellent resource is the textbook Python for Data Analysis by Wes McKinney (OReilly, 2012). Although it is now a few years old, it is an invaluable resource on the use of Pandas. In particular, this book emphasizes time series tools in the context of business and finance, and focuses much more on particular details of business calendars, time zones, and related topics. 
As always, you can also use the IPython help functionality to explore and try further options available to the functions and methods discussed here. I find this often is the best way to learn a new Python tool. Example: Visualizing Seattle Bicycle Counts¶ As a more involved example of working with some time series data, let's take a look at bicycle counts on Seattle's Fremont Bridge. This data comes from an automated bicycle counter, installed in late 2012, which has inductive sensors on the east and west sidewalks of the bridge. The hourly bicycle counts can be downloaded from; here is the direct link to the dataset. As of summer 2016, the CSV can be downloaded as follows: # !curl -o FremontBridge.csv Once this dataset is downloaded, we can use Pandas to read the CSV output into a DataFrame. We will specify that we want the Date as an index, and we want these dates to be automatically parsed: data = pd.read_csv('FremontBridge.csv', index_col='Date', parse_dates=True) data.head() For convenience, we'll further process this dataset by shortening the column names and adding a "Total" column: data.columns = ['West', 'East'] data['Total'] = data.eval('West + East') Now let's take a look at the summary statistics for this data: data.dropna().describe() %matplotlib inline import seaborn; seaborn.set() data.plot() plt.ylabel('Hourly Bicycle Count'); The ~25,000 hourly samples are far too dense for us to make much sense of. We can gain more insight by resampling the data to a coarser grid. Let's resample by week: weekly = data.resample('W').sum() weekly.plot(style=[':', '--', '-']) plt.ylabel('Weekly bicycle count'); This shows us some interesting seasonal trends: as you might expect, people bicycle more in the summer than in the winter, and even within a particular season the bicycle use varies from week to week (likely dependent on weather; see In Depth: Linear Regression where we explore this further). 
Another way that comes in handy for aggregating the data is to use a rolling mean, utilizing the pd.rolling_mean() function. Here we'll do a 30 day rolling mean of our data, making sure to center the window: daily = data.resample('D').sum() daily.rolling(30, center=True).sum().plot(style=[':', '--', '-']) plt.ylabel('mean hourly count'); The jaggedness of the result is due to the hard cutoff of the window. We can get a smoother version of a rolling mean using a window function–for example, a Gaussian window. The following code specifies both the width of the window (we chose 50 days) and the width of the Gaussian within the window (we chose 10 days): daily.rolling(50, center=True, win_type='gaussian').sum(std=10).plot(style=[':', '--', '-']); Digging into the data¶ While these smoothed data views are useful to get an idea of the general trend in the data, they hide much of the interesting structure. For example, we might want to look at the average traffic as a function of the time of day. We can do this using the GroupBy functionality discussed in Aggregation and Grouping: by_time = data.groupby(data.index.time).mean() hourly_ticks = 4 * 60 * 60 * np.arange(6) by_time.plot(xticks=hourly_ticks, style=[':', '--', '-']); The hourly traffic is a strongly bimodal distribution, with peaks around 8:00 in the morning and 5:00 in the evening. This is likely evidence of a strong component of commuter traffic crossing the bridge. This is further evidenced by the differences between the western sidewalk (generally used going toward downtown Seattle), which peaks more strongly in the morning, and the eastern sidewalk (generally used going away from downtown Seattle), which peaks more strongly in the evening. We also might be curious about how things change based on the day of the week. 
Again, we can do this with a simple groupby: by_weekday = data.groupby(data.index.dayofweek).mean() by_weekday.index = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'] by_weekday.plot(style=[':', '--', '-']); This shows a strong distinction between weekday and weekend totals, with around twice as many average riders crossing the bridge on Monday through Friday than on Saturday and Sunday. With this in mind, let's do a compound GroupBy and look at the hourly trend on weekdays versus weekends. We'll start by grouping by both a flag marking the weekend, and the time of day: weekend = np.where(data.index.weekday < 5, 'Weekday', 'Weekend') by_time = data.groupby([weekend, data.index.time]).mean() Now we'll use some of the Matplotlib tools described in Multiple Subplots to plot two panels side by side: import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 2, figsize=(14, 5)) by_time.ix['Weekday'].plot(ax=ax[0], title='Weekdays', xticks=hourly_ticks, style=[':', '--', '-']) by_time.ix['Weekend'].plot(ax=ax[1], title='Weekends', xticks=hourly_ticks, style=[':', '--', '-']); The result is very interesting: we see a bimodal commute pattern during the work week, and a unimodal recreational pattern during the weekends. It would be interesting to dig through this data in more detail, and examine the effect of weather, temperature, time of year, and other factors on people's commuting patterns; for further discussion, see my blog post "Is Seattle Really Seeing an Uptick In Cycling?", which uses a subset of this data. We will also revisit this dataset in the context of modeling in In Depth: Linear Regression.
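The hour-of-day grouping works the same way on synthetic data; here is a sketch with an artificially planted rush-hour pattern (all numbers invented):

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2015-01-01', periods=24 * 14, freq='H')  # two weeks, hourly
base = np.full(len(idx), 10.0)
base[idx.hour == 8] += 40    # planted morning peak
base[idx.hour == 17] += 40   # planted evening peak
traffic = pd.Series(base, index=idx)

# Group by time of day, exactly as with the bridge data above
by_time = traffic.groupby(traffic.index.time).mean()
print(by_time.idxmax())      # the first of the two planted peaks
```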
https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html
Qt was created by Haavard Nord and Eirik Chambe-Eng (the founders of Trolltech) in the early 90s. In 2008, it was acquired by Nokia and in 2012, by Digia. Google Earth, VLC, and Autodesk Maya are some of the applications built with Qt.

KDE's mascot Konqi showing his Qt heart.

Installing Qt Creator IDE

- Go to:
- Go Open Source > Download

Create your first project

1) Open Qt Creator and click New Project.
2) Choose Application and then Qt Widgets Application.
3) Name your project.
4) Select the default desktop kit, which may vary from system to system.
5) Click Next and leave these parameters at their defaults.
6) Click Finish.
7) Now, you're going to see something like this.
8) Right click your 'NewProject' or the project name given by you and click Build.
9) Press the green Run button to see the final window.

Empty qmake Project

1) We are going to create an application without any default classes. Go to New Project.
2) Select Other Project > Empty qmake Project.
3) Follow the same steps as we did in our first project.
4) It's going to create a project for you with a .pro file inside, which is the project file. It's just like a makefile for your C++.
5) Right click your project > Add New…
6) Create a C++ source file > Name the file (e.g., file.cpp).
7) You'll see that file.cpp (or your cpp file) is added to the current project.
8) Define some fields in the .pro file:

    QT += core gui
    greaterThan(QT_MAJOR_VERSION, 4): QT += widgets

    SOURCES += \
        file.cpp

9) Line 1 adds the core and gui modules; line 2 says that if the Qt major version is greater than 4, the widgets module should be added as well.
10) Save your project and go to your cpp file (file.cpp in my case).
11) Add the following lines in your code.

    #include <QApplication>

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);

        // Whatever you write between these lines will get added to your gui application

        return app.exec();
    }

12) Create a HelloWorld Label.
    #include <QApplication>
    #include <QLabel>

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);
        QLabel *label = new QLabel("Hello World!");
        label->show();
        return app.exec();
    }

13) Build and Run the code to see Hello World!
14) Set a window title with label->setWindowTitle("New Title");
15) Set the window size with label->resize(600, 600);

Creating a GUI Application

1) Create a New Project.
2) Application > Qt Widgets Application > Follow the similar process as above.
3) Go to Forms > mainwindow.ui, which is going to look something like this.
4) Drag and drop a Push Button from the Buttons section.
5) You can change the objectName for the PushButton by changing the value of that property at the bottom right and pressing enter (I changed it to PushMe).
6) Change the text on the PushButton by double clicking the button and entering the text.
7) Build and Run to see this window.
8) Now, let's perform an action once you press the push button. Click Edit Signals/Slots, which is the second button on the left hand side of the mainwindow.ui tab, or press F4.
9) Now, click on the push button and drag the mouse out to see a ground-like symbol.
10) Check the box which says Show signals and slots inherited from QWidget and select the appropriate action to perform.

Display some text after button click

1) Create a New Project.
2) Application > Qt Widgets Application > Follow the similar process as above.
3) Add a push button and a text label in mainwindow.ui.
4) Right click on your push button > Select Go to slot… > choose clicked() > Press OK.
5) It's going to create a function in your main window class.
6) Check the object name of the label in mainwindow.ui, which is label in my case.
7) Enter the following inside on_pushButton_clicked():

    ui->label->setText("Clicked!");

8) Build and Run. Now, when you click the button the text changes.
Displaying Messages

1) Create a New Project.
2) Application > Qt Widgets Application > Follow the similar process as above.
3) Go to mainwindow.ui > Drag and drop a push button.
4) Right click the push button > Go to slot… > choose the clicked() option > Press OK, which will redirect you to the clicked function.
5) Add the MessageBox header:

    #include <QMessageBox>

In the on_pushButton_clicked() function, add:

    QMessageBox::about(this, "Hello!", "Stop saying Hello!");

6) Build and Run to see the following output.

Yes/No Quit Application

1) Follow the exact above steps and edit the code in the following manner.

    #include "mainwindow.h"
    #include "ui_mainwindow.h"
    #include <QMessageBox>
    #include <QDebug>

    MainWindow::MainWindow(QWidget *parent) :
        QMainWindow(parent),
        ui(new Ui::MainWindow)
    {
        ui->setupUi(this);
    }

    MainWindow::~MainWindow()
    {
        delete ui;
    }

    void MainWindow::on_pushButton_clicked()
    {
        QMessageBox::StandardButton reply =
            QMessageBox::question(this, "Hello", "Please, no hello!",
                                  QMessageBox::Yes | QMessageBox::No);
        if (reply == QMessageBox::Yes) {
            QApplication::quit();
        } else {
            qDebug() << "No is clicked";
        }
    }

Spacers and Tabs

1) Create a New Project.
2) Application > Qt Widgets Application > Follow the similar process as above.
3) Go to mainwindow.ui > Drag and drop a label and a line edit.
4) Select both the objects and arrange them in horizontal order in the menu above, or Ctrl+L.
5) Add two more push buttons and make it look like the following image.
6) Select all of them and arrange in vertical order.
7) Use the horizontal spacer by dragging and dropping it on the left hand side of the OK button to give some space.
8) Drag and drop three push buttons and apply a vertical layout.
9) Select all > right click > Lay out > Lay Out Horizontally in Splitter.
10) Right click anywhere on the working area (not the objects) > Lay out > Lay Out Horizontally.
11) Build and Run.
New Window from MainWindow
1) Create a New Project
2) Application > Qt Widgets Application > Follow the similar process as above
3) Right click on your project > Add New… > Qt > Qt Designer Form Class > Next > Dialog without Buttons, because we need to create a Qt Designer form.
4) Go to mainwindow.ui > Add a push button; whenever you press the button, you want to open the new dialog.
5) Right click the button > "Go to slot…" > clicked(), which opens up the code.
6) Include the dialog's header in the code:
   #include "dialog.h" // see the header file
7) Add the following lines to on_pushButton_clicked():
   Dialog hello; // should be as per your class name
   hello.setModal(true);
   hello.exec();
8) Build and Run

Login Form
1) Create a New Project
2) Application > Qt Widgets Application > Follow the similar process as above
3) Open mainwindow.ui > Drag and drop a group box > change the title to Login
4) Drag and drop two labels and two line edits and change the label names to username and password.
5) Add a push button to verify the username and password after entry.
6) Right click the push button > "Go to slot…" > clicked() and include the QMessageBox library:
   #include <QMessageBox>
7) Inside on_pushButton_clicked(), enter the following:

   QString username = ui->lineEdit->text();
   QString password = ui->lineEdit_2->text();
   if (username == "Hello" && password == "NoHello") {
       QMessageBox::information(this, "Login", "Correct");
   } else {
       QMessageBox::information(this, "Login", "Incorrect");
   }

8) Build and Run

Displaying Images
1) Create a New Project
2) Application > Qt Widgets Application > Follow the similar process as above
3) Open mainwindow.ui > Drag and drop a label > Remove the text inside it > Go to Edit > Sources > mainwindow.cpp
4) Remove the existing lines and paste the following (this code also includes the previous push button code, so you might want to remove it):

#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "dialog.h"
#include "QPixmap"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    QPixmap pix("/home/serverprocessor/Pictures/a.png"); // copy the path of your image
    ui->label->setPixmap(pix.scaled(100, 100, Qt::KeepAspectRatio));
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::on_pushButton_clicked()
{
    Dialog hello;
    hello.setModal(true);
    hello.exec();
}

5) Build and Run to see the image resized.

Action, Menu, Toolbar
1) Create a New Project
2) Application > Qt Widgets Application > Follow the similar process as above
3) Open mainwindow.ui > go to "Type Here" at the top
4) Enter File > Press Enter > Write New > Press Enter > Write Open > Press Enter > Write Exit > Press Enter, which adds menu items to your menu bar. You'll notice some actions created for New, Open and Exit at the bottom.
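The credential comparison in the login example above can be factored into a plain function, which makes it easy to test away from the GUI. This is an illustrative sketch, not part of the tutorial's generated code: the function name is hypothetical, and std::string stands in for QString.

```cpp
#include <string>

// Hypothetical helper mirroring the check in on_pushButton_clicked().
// The hard-coded credentials are the tutorial's example values.
bool isLoginValid(const std::string& username, const std::string& password)
{
    return username == "Hello" && password == "NoHello";
}
```

In the slot you would call it with the line edits' contents, e.g. isLoginValid(ui->lineEdit->text().toStdString(), ui->lineEdit_2->text().toStdString()), and show the appropriate QMessageBox based on the result.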
5) Go to Edit > Right click your project > Add New… > Qt > Qt Resource File > Name it Resource > Finish
6) Open resource.qrc > Add > Add Prefix > Name it /rec > Go to Add > Add Files > Add the images
7) Now, go to mainwindow.ui > Double click any menu item > Choose File… > Choose the image

Final Comments
Thank you for visiting the website. In case of any doubt, feel free to contact me.
Saumitra Kapoor
I think one should consider adding something like a GLUT_INCLUDE_GL3 define to tell the freeglut header to include GL3/gl3.h instead of GL/gl.h. I am using something like

#include <GL3/gl3.h>
#define __gl_h_
#include <freeglut.h>

to avoid gl3.h colliding with gl.h, but this is a pretty ugly workaround, and adding such a define to freeglut.h would be easy and would not break any compatibility!

Daniel Kirchner 2010-06-26

(I was not sure whether to file this as a bug or as a feature request, but it causes freeglut's headers to produce compile-time errors in conjunction with OpenGL 3 headers, so for me it's rather a bug)
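A sketch of what the proposed guard could look like; the macro name GLUT_INCLUDE_GL3 comes from the report, but this is not actual freeglut source. The idea is to let the application choose the modern header before freeglut pulls in the legacy one:

```cpp
// Hypothetical guard inside a freeglut header:
//
//   #ifdef GLUT_INCLUDE_GL3
//   #    include <GL3/gl3.h>   /* modern core-profile header            */
//   #    define __gl_h_        /* block a later <GL/gl.h> inclusion     */
//   #else
//   #    include <GL/gl.h>     /* legacy header, the current behaviour  */
//   #endif
//
// The same #ifdef selection, demonstrated without the real GL headers:
#define GLUT_INCLUDE_GL3

#ifdef GLUT_INCLUDE_GL3
static const int kUsingGl3Header = 1;   /* would include <GL3/gl3.h> */
#define __gl_h_                         /* the guard GL/gl.h defines */
#else
static const int kUsingGl3Header = 0;   /* would include <GL/gl.h>   */
#endif
```

Defining __gl_h_ works because that is the include guard GL/gl.h uses to protect itself, so any later inclusion of the legacy header becomes a no-op.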
ISSUE 26 | WINTER 2012

How good is your online library service?
What makes a great online library service.

VIDEO SAVED THE LIBRARY STAR
Video is now very much part of our everyday life, but how can libraries make best use of this media vehicle?

TRANSFORMING THE LIBRARY WEBSITE
Amy York from MTSU reflects on the library website being an extension of the library, not just a marketing tool.

LIVERPOOL HOPE UNIVERSITY
Refurbishment of The Sheppard-Worlock Library

2 Panlibus Magazine | Winter 2012 |

WELCOME TO PANLIBUS
The winter issue 2012

Welcome to the final issue of Panlibus Magazine for 2012. The theme for this issue is 'libraries online', focusing on tools and media that public and academic libraries can utilise.

4 Liverpool Hope University
Summer 2012 has seen a major refurbishment of The Sheppard-Worlock Library with an investment of over £1.5 million.

6 Warwickshire Library Service
David Carter, Strategic Director, Resources Group, Warwickshire County Council discusses the transformation of Warwickshire's libraries service and how it might have created a blueprint for other services to follow.

8-10 Video saved the library star
Video is now very much part of our everyday life, but how can libraries make best use of this media vehicle?

12-13 Transforming the library website
Amy York from MTSU reflects on the library website being an extension of the library, not just a marketing tool.

14-15 How good is your online library service?
Aideen Flynn from Socitm evaluates what makes a great online library service.

16-17 Case study
The Hive is a state-of-the-art library jointly run by Worcestershire County Council and the University of Worcester; it provides students and members of the public with access to a quarter of a million books and twelve miles of archive collections. Capita's Library Management System supports the entire operation, helping everything to run like clockwork.

18 EBSCO
Increasing value and usage of information resources through Discovery.

20 Nielsen LibScan data
Period eleven library title lendings chart.

22 Talis Education
Running an effective reading list service – it's all about collaboration at Nottingham Trent University.

24 Bretford
Having a loan service of laptops for students is highly desirable. But how can the logistical problems be overcome?

25 Keep kids reading - making the most of digital opportunities
Neil Wishart, Director of Solus UK Ltd, explores how libraries can connect with lost audiences.

26 Product update: Prism
Terry Willan provides an update on Capita's resource discovery solution, featuring all the latest social features.

You can contact me on the email address below.

Mark Travis
Editor, Panlibus Magazine
mark.travis@capita.co.uk

Panlibus Magazine is a Capita production
ISSN 1749-1002
Knights Court, Solihull Parkway, Birmingham Business Park, B37 7YB, United Kingdom
Telephone: +44 (0)121 717 3500
Web site:

The views expressed in this magazine are those of the contributors, for which Capita accepts no responsibility. Readers should take appropriate advice before acting on any issue raised. Reproduction in whole or part without written permission is strictly prohibited. ©Capita. All rights reserved. Capita and the Capita logo are trademarks of Capita or its licensors in the United Kingdom and/or other countries. Other companies and products mentioned may be the trademarks of their respective owners.

Library refurbishment at The Sheppard-Worlock Library
Susan Murray, Head of Library Service, The Sheppard-Worlock Library, Liverpool Hope University

Summer 2012 has seen a major refurbishment of The Sheppard-Worlock Library with an investment of over £1.5 million.
The main driving force was to improve the range of study spaces in the library to include silent study (with and without laptop use), quiet study and a range of group work areas, from bookable group spaces to informal study pods and relaxed reading areas. The IT study spaces have also been improved to provide more room at each workstation, with wi-fi and electrical power at all desks for students to plug in their own devices. A café has also been created with long opening hours, so students don't need to leave the library to get something to eat and drink or to go to for a break.

There are dedicated spaces for the use of postgraduate and final year undergraduate students which provide both study spaces and PCs. The postgraduate space contains four individual study rooms which can be booked for up to a month by research postgraduates or academic staff who need a space to work in the library.

Student involvement was vital to the planning process, so we sought their input through a range of activities including:
• a post-it board for simple questions, such as what is the best thing about the library and what could be improved in the library
• a survey on what learning spaces the students need
• photos of furniture which students could put coloured spots next to, indicating which they preferred.

Facebook was also an important tool to share information and involve our users, especially in the run up to the re-opening, when we put up daily photos to show the new spaces and developments. This continued after the re-opening with the use of post-its for students to say what they thought about the refurbishment, and the reaction in the main has been extremely positive. A key element has been to ensure that regular feedback on questions asked or concerns raised was answered, and this has been done through a Response Board to suggestions, 'You Said … We Did' posters and meetings such as the SU Forum.
One of our aims is to take support to the students, so our Faculty Librarians are based at Subject Support Points in prominent locations on the open floor. This is where they can be found when not delivering sessions, which makes them very accessible to staff and students. We are developing a training room from what was previously a staff office to allow more complex support, or support for a small group of students, to be provided.

We are taking this approach one step further as we start to offer a 'pop up library' service around the campus, into faculties and learning spaces. Different subjects are taking different approaches, so some are at a regular time; others are linked to a specific activity or piece of work. We see this as providing support but also as an opportunity to market Library Services. Detailed records will be kept so we can evaluate which approaches work the best for future planning.

Our Roving Library Assistants have been provided with iPads, so when they are out on the floor they have access to Prism, online resources and Library web pages to support the students where they are working, without having to take them to a service point or an OPAC terminal. We are in discussion with Capita about the options that would be available by having Alto on the iPads using Soprano, and how this could increase the support to students and the flexibility of staff undertaking tasks such as reservations.

There is still some work going on to create a British Standard vault for our Special Collections, with its own Reading Room, which should be completed shortly, so everything is in place for a re-launch of The Sheppard-Worlock Library on our Foundation Day in January.
There has been a lot of hard work and flexibility from Library staff and colleagues around the University to achieve this refurbishment, but this has been rewarded by responses such as "simply amazing and awesome", "great to see so many dedicated student work spaces" and, from a recent graduate, "It looks amazing, I want to come back".

FIND OUT MORE
Web:

Warwickshire County Council
David Carter, Strategic Director, Resources Group, Warwickshire County Council discusses the transformation of Warwickshire's libraries service and how it might have created a blueprint for other services to follow.

The challenge
Last February, Warwickshire Library and Information Service was charged with reducing its budget by £2million from £7.5m, nearly 30%, over the following three years. Clearly, we were no longer able to operate a network of 34 public libraries across Warwickshire, so our aim was to minimise the damage to the service. We reviewed the whole network looking at
• numbers of visits
• loans
• customer feedback
and identified 16 libraries that could no longer be sustained in their existing form. We proposed to run the county's 18 most widely used libraries, which research showed accounted for 90% of all library visits, and to establish community-managed libraries (CMLs) elsewhere.

Addressing the challenge
60 public meetings were organised, each one attended by myself and/or the head of service and other senior library managers, as well as the portfolio holder or deputy portfolio holder. Nearly 2,400 members of the public attended these, and questionnaires and a blog helped us gather the thoughts of over 5,000 residents. Working with the public, we focused on channelling their initial emotions, energies and even anger towards finding a solution to ensure that the service they cared so passionately about could continue in another form.
We helped community groups put together business cases for running their libraries. Sometimes these would involve staying in the existing building, sometimes it meant modifying the existing building — or it might mean sharing a building such as a community centre or village hall. Upon approving the business cases, Warwickshire County Council agreed to release £100,000 in capital funding to the groups and one-off revenue grants in excess of £75,000. Grant applications were submitted by each community group which would, for example, fund building work or open cafés. Libraries staff worked with the volunteers even after management was transferred to the community groups, carrying out training in the library management system. We still provide the bookstock and the core systems.

And so… what of the present?
At the time of writing, we have 12 community-managed libraries (CMLs) which have been live for between five and ten months. Where there was no sustainable business case, we have provided a mobile library service in two instances, an honesty library and another library which is maintained with the support of another council service. We now have a total of 448 volunteers (at the last count) in the 12 community-managed libraries across the county. This workforce has enabled community-managed libraries to open for 241 hours per week in total, a 4.3% rise in opening hours. Before these were community-managed libraries, the total opening hours for the 12 libraries was 231.25 per week.

Is it all about issuing books?
Not at all. Community groups knew that the way to survive was to branch out and make sure that their communities supported them. And so they have worked to provide a raft of services, making their libraries the focal point of their communities and encouraging more and more new visitors. Here is a sample, by no means comprehensive, of just some of the activities taking place in our CMLs.
• Dordon Community Library shares its premises with a dance school, sharing costs and attracting young dancers — who are doing their homework at the library before dance classes.
• Dunchurch Community Library helps to finance itself, with volunteers also running a charity shop from the former premises of the parish council, which has moved in with the community library.
• Harbury Community Library has a well-attended café which generates a considerable income. Volunteers bake cakes and staff the café.
• Baddesley Ensor Community Library sells produce on site from a local grower, saving residents the journey, helping the local business, generating income and bringing new users – grocery shoppers – into the library building.

The limit of a word count does not allow me to mention every single CML. But the work that has gone into each one, from county council staff as well as volunteers, has given us real hope that they will continue to flourish. We are proud of how every single one has responded to the challenge of keeping a service in their communities, and we are proud of the dedication of staff who worked with the groups to pass on their expertise. The county council is adopting the community-managed models in how it runs household waste recycling plants and some of our youth services. Libraries, traditionally held as the bastion of quiet studiousness, are making very loud noises about how local authorities can effectively work with their communities in the new financial climate.

FIND OUT MORE
Web:

Offering an equipment loan service is a great way to increase student satisfaction. However, many libraries are worried about the potential drain on valuable staff resources to effectively manage such a facility.
The Myritrac™ system from Bretford, a member of the Capita Additions Partner Program, offers the security, self-service accessibility and equipment tracking solution needed to manage such a facility with the most efficient use of staff time.

Intelligent laptop management & tracking
The Myritrac™ system allows you to take control of all aspects of managing your IT equipment: securely storing it; charging it; running software updates; assigning user access rights and even tracking the IT equipment's use on a daily basis. Each cabinet is linked to the building's network and feeds back information about when, at what time and by whom a storage compartment was opened and the allocated item either removed or replaced.

• Allows the potential to provide 24/7 access to IT equipment
• Compatible with existing library management software
• Compatible with standard student interface systems, i.e. RFID, swipe card, key pad or biometric access control
• User access rights can be set and equipment tracked either locally or remotely via an intranet or web-based connection
• Gives responsibility of the IT loan to the student, helping to reduce damage & loss
• Offers total control over the equipment loan process as ALL transactions are monitored and logged
• Integrated alarms to monitor and advise of improper use or abuse
• Integrated power management system ensures optimum charging time for stored IT equipment
• Booking in/out periods are constantly monitored and logged
• Due back times can be set and late alerts issued
• Audit reports can be run on equipment stock
• Software updates can be made
• Laptops are charged and protected

All units are built to cater for the specific project needs. Tell us what you want to do and Bretford will show you how the Myritrac™ system can facilitate it.
Tel: 01753 539955 Email: sales@bretforduk.com

Video saved the library star
Andy Tattersall, Information Specialist at ScHARR at the University of Sheffield
@andy_tattersall
a.tattersall@sheffield.ac.uk
Claire Beecroft, University Teacher/Information Specialist at ScHARR at the University of Sheffield
@beakybeecroft
c.beecroft@sheffield.ac.uk

Video, like photography, has become a very affordable platform in recent years, with smartphones becoming increasingly accessible and usable for recording and uploading video. Hosting video has also become very cheap or even free, with sites like YouTube and Vimeo allowing users to upload countless videos and providing easy access to them.

The evidence is clear that video is starting to be the dominant medium on the web. Whilst we have seen countless videos go viral featuring talented cats, Gangnam dances and smiling babies, there has been a slow uptake by library and public organisations in utilising this format. In 2011, Cisco predicted that video would make up over 50% of all consumer Internet traffic by 2012, and over 70% of all mobile data traffic by 2016, whilst YouTube recently reported that one hour of content is uploaded per second. So it is a good time to start using video within your library and information service - regardless of the setting, there are no limits to what you can make a video of or about.

The only real considerations a library has to make before creating a video are:
• Why make a video; is the topic relevant?
• Who is your audience?
• Where do you host it? And in turn, where do you embed it?
• Do you go beyond the basics afforded by a camera phone and free video editing and hosting tools?

At ScHARR, University of Sheffield, we have identified a range of ways to employ video in our research, teaching and marketing.
In turn, we have employed a range of tools and technologies to make these videos, using everything from our own smartphones to an HD video camera and tripod.

Instruction
We have used screencasts to instruct students, researchers and NHS staff on topics including literature searching and reference management. Content such as this would have been supported by face-to-face workshops or via paper-based workbooks. These are by no means defunct as options, but the workshop is a one-off event for a finite number of students, whilst video can be accessed for some time and has no limit on the number of viewers. Whilst written instructional material can date very quickly and take time to compile, short instructional videos can take minutes and be replaced equally quickly. In addition, at ScHARR we have delivered information study skills via our 3eLearning series of three-minute, multi-format videos to NHS researchers, utilising a Google Site as a simple and effective way of delivering them in one location on a site that works well on mobile devices.

Enquiries
Using video to answer enquiries might seem a rather long-winded approach, but increasingly we receive many enquiries via email. It can take as long, or even longer, to explain a response to a complex query, for instance about the technicalities of literature searching, in words than it can via video. Using a web-based screencasting tool such as screencastomatic can enable us to give a clear and easy-to-follow response to an enquiry, and can take as little as 10 minutes to record and publish. Such videos can also be used again to resolve similar enquiries.

Promotion
Video can say much, much more than just plain old text. For example, replacing photographs on staff profile pages with short videos can give a more personal introduction and convey staff and their activities in a more friendly and engaging way.
At ScHARR we have marketed our research and Masters courses via video and screencasts, including using feedback from successful graduates. Using video within our virtual learning environment, including module introductions, updates and ice-breaker videos for distance learners based around the world, has been very effective. We have used video to promote events we are involved in, including creating a one-minute promotional video recorded on a smartphone for a conference presentation and making a video abstract of a workshop we ran at another conference.

Getting others involved
Making videos can be very simple, but can also be very time consuming, especially if you want to start adding music, text and images. There is only so much a single person can achieve when producing them. We are empowering our colleagues to produce their own content by running video and screencasting workshops. To really embrace video at an organisational level you need allies to help generate content - trying to do it all yourself can be an uphill battle. Once you have learned the dos and don'ts of making and hosting, it is crucial you start to share your knowledge with your colleagues.

Hosting
There is no shortage of sites where you can host your videos on the web. If you work at a large organisation such as a university, you may already have your own video hosting server in addition to those freely available on the web. These platforms can be employed to embed the content in other locations, including our virtual learning environment, blogs, social platforms and organisational web pages. If you do have an institutional video server, the chances are that it is not of the same quality and standard as the likes of Vimeo and YouTube. Nevertheless, it is in your interest to consider the privacy of your video whichever platform you host it on. This particularly applies to external hosts such as YouTube and Vimeo.
Once you use these tools you are placing your content in the public domain, beyond your intranet. When hosting on these sites you have to consider these issues above all else:
• Who do you want to share this with?
• Do you want people to comment on the video?
• Do you want people to embed the video?
• How will you tag it?
• Does it contain content that infringes copyright?

It is a good idea to turn comments off within a video when you are not monitoring it beyond the upload. Remember, not everyone on the web is constructive or polite with their comments, especially on a site like YouTube. Whilst the embedding and sharing of a video is crucial to getting it out there, you have to think about whether your content could be taken and placed into another site that could ultimately undermine your own organisation. You have to be sure that allowing embedding options is the right thing to do. Making a video public is the default, but if it is one directed purely at your colleagues and is of strategic importance to your organisation, then making it private is important. This will at least still allow you to share the URL secretly with your colleagues.

When uploading a video to some of these sites you will need to give it a description and tags that will help others find it. Tags are essential for this reason, especially when there are hundreds of potentially similar videos; but this can also be your downfall when it comes to the host picking similar videos to place alongside your own. Terms relating to sex, religion, conflict and politics may bring up videos that you will not want to be associated with, so it is important you think carefully when tagging. As is the case with the material within your video, it is essential you consider what visuals and sounds you use and whether you have the permission to use them. The University of Sheffield have some fantastic examples.
Attribution
As mentioned above, choosing to use other people's content can make or break your video. There is no shortage of tools and resources to help you make a legal video. The various Creative Commons licences not only allow you to share your videos with others so that they can apply them within their organisations and sites, but allow you, as the creator, to enhance your own creations. Creative Commons' search engine () is a great resource when trying to source images to embed into your videos. In July 2012, Creative Commons announced that four million videos had been uploaded to YouTube on a Creative Commons licence. This is over forty years' worth of footage that you can remix and reuse under the "CC By Attribution" licence. There are plenty of audio resources you can use to enhance your videos, including the hundreds of songs on YouTube that you can use via their pre-approved audio tracks. Beyond that you can search for freely available music via netlabels, field recordings or the Internet Archive. The important thing is to pay attention to each medium's licence. Is your video for commercial purposes? If so, does the music artist allow it to be used commercially? Whatever you do, it is essential you add credits to your video to say where the music and images originated from. It is not good enough to add text in the description when hosting it, especially if the video gets embedded elsewhere.

A video paints a thousand words
Making an arresting video is simpler than you think

Kit
Most of the tools we use for our videos are free, such as the screencasting tool screencastomatic, and tools that many of us have as part of our computer operating systems (such as Windows Movie Maker or Apple's iMovie). In terms of hardware, a headset with a microphone is essential, but can be bought very cheaply (we have actually seen them in a well-known pound store, though couldn't comment on the quality).
A web-cam is also helpful, and a good quality HD one can be purchased for £30 or less. And never forget the video recording studio in your pocket - smartphone cameras are increasing in their quality, with many now able to produce excellent quality HD video with good audio as well. Do not rule out using your smartphone for recording 'to camera' videos, where you can simply talk straight at your phone and upload directly to YouTube.

Conclusion
To quote The Buggles' 1979 hit 'Video Killed the Radio Star', "...we can't rewind, we've gone too far" - video is now at the heart of much of our work and has become a very 'normal' way of working, providing excellent solutions to many of the challenges of working in a fast-paced academic environment. There is no shortage of tutorials and resources out there to employ when making, editing and hosting videos. We have just captured a few useful resources below - this is by no means comprehensive - but they are a good place to start making your own videos for promotion, instruction and communication within your library.

FIND OUT MORE
Software for screencasting
Software for video editing
movie-maker-get-started
Useful phone Apps to make videos, take images and share socially.
id486367216?mt=8
id377298193?mt=8
Music and image resources for your videos.
py?hl=en-GB&answer=94316
ScHARR 3eLearning website
ScHARR Library Vimeo page
ScHARR Library YouTube page
Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2011–2016
ns525/ns537/ns705/ns827/white_paper_c11-520862.html
YouTube launches OneHourPerSecond to visualise how much video is uploaded each second

Innovative products, expert service and over 20 years library experience
Call us or visit our website for service and product information
RFID • EM • self-service • security • stock control • promotion • software • consultancy • installation • maintenance
t: 0845 88 22 778 e: info@2cqr.com w:
Iesse Gates (RFID): Enable 3D detection range up to 1,200 cm (books & CD/DVD)
Wonderwall (RFID): Making the book do more
Baby (RFID): The original and most popular desktop self-service unit
Pop-up library (RFID): A library wherever and whenever you want it.
Unit 2 Long Bennington Business Park, Long Bennington, Lincolnshire NG23 5JR
T: 0845 88 22 778 F: 0845 88 22 779 info@2CQR.com
Thinking Libraries
SOLUTIONS / SUPPORT / CONSULTANCY / RFID & EM PRODUCTS

Transforming the library website

A library website is not just a marketing tool, it is an extension of the library. For some users, it is the library. Many of our patrons download e-books and locate scholarly articles, get reading recommendations and log in to online test prep courses, watch streaming videos and access data sets, all without setting foot in the library building. To once again revise Ranganathan[1], library websites are for use. For this reason, we must make sure that they are usable.
But beyond our own shortcomings, there is a more insidious threat to website usability: campus IT. Or for public libraries, the council IT department. The level of control that an external IT department wields over your web presence impacts how well you can respond to the needs of your remote users. Some of us in the library world, particularly those of us in academic libraries, are lucky enough to have our own internal IT departments. They maintain our public computers, they investigate new technology services, and they run the servers that house our websites. Everything that library IT does supports the mission of the library. They get us. They root for us.

This is not always the case with an institutional IT department. If a library is not able to maintain its own servers, it will turn to campus IT for server space, and in most cases, this will mean joining the campus content management system (CMS) and enduring any limits native to it or built into it. At many colleges and universities these days, the focus is on increasing enrolment[2]. Competition for tuition dollars is fierce, so universities have deployed full-on marketing strategies for attracting students, and a large part of that marketing effort is played out on the university website. Campus IT, usually in conjunction with a campus marketing department, is constantly reinventing the university web template to appeal to prospective students. We have seen header areas become larger and larger, packed with links for people who don't attend the school - prospective students, but also alumni who might be moved by all the attention to donate some money. A link to the library site, once a mainstay on university home pages, has by and large disappeared, relegated to the ghetto of the A-Z links list. I advise students in my library to "just Google us."

Another trend in university web design is the huge header image. To be fair, this is a trend across the web, and the newest generation of college students is highly visual, for sure, but the effect is that the main content is pushed so far down the page that the user must scroll to find it. Anyone who even dabbles in web design knows that getting important content "above the fold" is a big concern. Perhaps it's not the big deal that it was in the 1990s when the concept of scrolling was brand new, but according to usability expert Jakob Nielsen (2010), the content above the fold still captures 80% of our attention. Users will scroll, but they won't pay as much attention to the stuff further down. Our eyes will linger longest on information 300-400 pixels down the page[3].

If a library is locked into a CMS template that buries its content below a large university header, will students find the library resources they seek, or will they give up and move on to the web? The problem with these templates is that once you're in the CMS, you're locked in. IT administrators set user permissions for the CMS, so if they don't want you editing header links or images, you won't be able to touch them, or even if you're allowed to change the image, it may still need to be a certain size and location. Some libraries have successfully played off the huge header trend by embedding library search boxes in header graphics, but these tend to be libraries that host their own websites, and thus have more control over layout.

There are other problems that arise from the limits imposed by someone else's CMS. You may not be able to use some server-side scripts (eg PHP), and you probably won't have access to the <head> section of your pages, so you won't be able to link javascripts there. This is not really a big problem and, in fact, many people recommend calling scripts at the bottom of a page so as not to slow down the load time, but a big problem can arise when scripts that you don't want have been placed in the <head> section. For instance, the director of a community college close to me was surprised when I told him that when I try to go to his library site from my iPhone, I get redirected to the college's mobile site, which does not include a link to the library. The javascript redirect is called between the <head> tags, which he can't alter on his library web pages. Speaking of mobile sites, a library that does not have full website control has limited options for deploying a mobile friendly web presence. Will IT offer non-templated server space for a mobile site? When locked in a university CMS, libraries are especially incapable of employing a smart responsive
design strategy (ie without two screen lengths of campus material at the top), which some argue is the future of the web.

So what is a library to do? I believe that libraries should maintain as much control as possible over their websites, preferably by hosting them on their own servers. At my university, IT and the library have come to a compromise. Until last year, we had a different look than the rest of the campus sites. They were adamant that we conform to university design standards, a position that we understood, though we weren't thrilled with the real estate we would lose to university branding and recruitment at the top of the page. We even tried joining the campus CMS, but when it became clear that there were too many technical limits for our content, they agreed that we should stay on our own servers. We use the university's branding and template, but we maintain full control.

Now my university has a new CMS, and there are rumours of a new design in the works. IT is once again talking to us about joining the CMS, and they have offered us a greater degree of control than most departments on campus receive ("we understand that libraries are different", they say), but I don't know. I'm more than a little wary. It's not that IT folks are bad people, it's just that they have different priorities. And maybe the next template will have a 500 pixel height header graphic. And maybe someone higher up the administrative food chain will decide that we have too much freedom and reduce our permissions. I shudder at the idea of such helplessness.

So what do you do if you don't have a choice? Maybe you don't have your own library IT department, or maybe you don't have the space or the budget for a server room. What do you do? Do you completely submit your will? Nah, you talk to IT. You make a case for the features and functions you need. You find examples from other libraries. You promise not to break anything if they give you a little more freedom, and then you don't, under any circumstances, break anything! In the spirit of co-operation, let me end with a happy tale. That community college director whose library site was redirected to the campus mobile page… guess what? He talked to someone in his IT department, and they offered to build the library a mobile site and link it to theirs. Win! So to sum up, libraries can have a successful website partnership with campus IT, but it's nice to have the option not to have one at all.

References:
1. Noruzi, Alireza (2004). "Application of Ranganathan's laws to the web." Webology 1(2). org/2004/v1n2/a8.html
2. Jon Marcus. "College enrollment shows signs of slowing." May 31, 2012. http://hechingerreport.org/content/college-enrollment-shows-signs-of-slowing_8688/
3. Jakob Nielsen. "Scrolling and Attention." March 22, 2010. alertbox/scrolling-attention.html

FIND OUT MORE
Email: Amy.York@mtsu.edu
How good is your online library service?
By Aideen Flynn, Socitm Insight Associate

These days it seems that most things can be done on the internet. Whether it's shopping, banking, booking tickets or sharing photos, it's easy to do it online, at a time convenient to you. So it should be the same for library services. If I want to renew my books, look for a DVD, reserve the latest blockbuster or download an ebook, I should be able to do it quickly and easily. And it's not just me that wants to be able to do this stuff online. In 2011, library services were the 3rd most popular reason for citizens to visit their council website.

Last year, Socitm Insight carried out its annual survey of UK local authority websites and the results were published in Better connected 2012: a snapshot of all local authority websites. The survey looks at a range of the most popular council services from the perspective of the customer and included a library service task. The reviewers that carry out the survey were asked to visit all local authority websites that have a library service and try to renew a library book online. This test covered simple issues such as:
• Can I find the library system login page easily?
• Are there clear instructions about how to login so I can renew my loans?
• Is there any help if I get stuck, eg forget my PIN/password?

The reviewers were not able to test the entire process, as that would involve setting up library accounts at 206 different local authorities. But they were able to test as far as the login page. There were 11 questions for reviewers to answer, each of which tested a specific part of the process of renewing a library book. The first five questions examined the most likely start points that people use in order to carry out this task and how well library renewals are promoted at each.
The five start points are Google, the council home page, the council site search, the council's A to Z of services list and the main library web page on the council site.

There were two questions about the customer journey. This is the sequence of pages that people will have to navigate in order to carry out the task, including:
• Was I taken straight to the library account log-in page (from Google, home page link, library web page link etc.)? In many cases the answer to this question was 'No'. Frequently reviewers found that when they followed a link for 'Library renewals', they were misdirected to the catalogue search page instead of landing on the library account login page.
• Were all the relevant pieces of information/pages linked together to make a smooth, coherent journey? Nearly 40% of websites got a 'No' response to this question, with reviewers getting lost on the website and having difficulty getting easily to the library account login page.

The third set of questions examined the library account login page:
• Is it clear that I have to enter the number on my library card to log in?
• If a PIN/password is required to access my account, am I told how to get one?
• If a PIN/password is required to access my library account, is there a 'Forgotten your password' link?

The first of these questions was given a 'No' in over 30% of cases, usually because the form field where people are supposed to type in their library card number was given an obscure name, like 'user ID'. There was no explanation that the number was on the library card. Less than 20% of sites provided a 'Forgotten your password' facility. Many library suppliers did not have this provision, but some library services had set up their own online form for requesting a password reminder or re-set. Not as quick as an automated reminder, but still better than providing nothing at all.
In addition to these questions, reviewers were asked to rate their experience of both promotion of the task (how easy it was to find) and the customer journey (the whole journey required to carry out the task) on a scale of 1 (poor) to 3 (very good). In order to identify those sites that were providing a good online service for this task, a threshold was set. To pass, 8 out of 11 questions should get a 'yes' and the reviewer should score the site at least satisfactory (2) or very good (3) for both promotion and customer journey. Only 51% managed to reach this threshold.

So what can be done to improve? Testing your own library web pages is a good place to start. Staff can do it themselves and changes can usually be made quite easily by liaising with your organisation's web manager. Here are some essential checks you should make.

Open your main library web page (the one that you usually find if you type in www.yourcouncilname.gov.uk/libraries) and see if there is a prominent link on that page for
You could make it even easier by providing an A to Get inspired – a few websites providing easy to use online renewals: Wirral has integrated the library account login fields into their main library web page: has good promotion of library renewals online.htm has well presented promotion of online library facilities. Z entry called ‘Library renewals’ (or something similar) that takes people directly to the library account login page. There may be parts of the library account login page provided by your library system supplier that you would like to improve, eg using friendlier terms for labelling login fields or perhaps adding a link to tell people how to get a password or PIN number. The supplier of your library system should be open to making minor improvements such as adding text or changing form field names. The process for doing this is likely to be different depending on the supplier, your contract arrangements and the version of the library system. The supplier may have to make changes to the page on your behalf or they may provide a tool to enable you or someone in your IT team to do it. If your supplier is not open to making minor, non-technical improvements, then getting involved with the user group for your supplier may help. Get help – download guidance from Socitm Insight: Most local authorities are subscribers of Socitm Insight and will be able to get free access to two useful publications: • Better connected 2012 (includes council scores and recommendations for improvements to library web pages) • Mystery shopping in six London libraries (includes analysis of different presentation of 6 online library services and advice for improvement) The trend towards online self-service is unstoppable. The expectation is that online services will work well and be comparable with commonly used commercial websites. 
Very often we hear that there is no money to fix problems with online services, but so often, the amount of fixing needed is very small and just requires thought, testing and tweaking rather than cash. So wouldn't it be good to improve the online experience for your library users? It's not just about saving money by reducing contact, but giving people a new way to get the most out of their library service and bringing libraries into the virtual world.

FIND OUT MORE
Email: aideen.flynn@socitm.net
Web: These can be found at downloads, (please register on the site first if you haven't done so before). Check if your authority or supplier is a Socitm Insight subscriber here: www.socitm.net/info/214/socitm_insight/91/paid_subscribers/1

Case study: The Hive

Key benefits
• Allows users to access a single database of resources
• Different borrowing rules automatically applied based on user profiles
• Ability to use shared PCs within the Hive with different service options depending on whether it is academic or public use
• Books can be ordered online and returned to any council or university library
• Self service issuing and returning of resources via kiosks and the Hive sorter

Back in 2004, both the University of Worcester and Worcestershire County Council independently reached the conclusion that their library buildings and services would not support their future plans for development. accompany the University's existing study facilities. Library staff and users would need access to resources in other libraries in the area and be able to return books to and from other libraries.

"From a technical point of view, what we required was not the norm so we could not simply buy an off-the-shelf product," explains Paul. After studying the market and the options available, the Hive chose Capita's Library Management System (LMS).
Complex services
To add to the complexities, the Hive was not to be an isolated library but would be part of the wider public library service network of 21 libraries across Worcestershire. "Capita had the right products and also understood and bought into the shared service vision," comments Stephen Mobley, who is the quality and standards manager for Worcestershire libraries and learning.

Transition
The implementation was complex because both organisations had to run and provide library services right up until the moment of the opening of the Hive building. The old system shut down on Thursday and the new one was fully functional by the following Monday, with no break in service, as Paul explains. "Capita ensured that users didn't experience any down time when switching from the old to the new system by maintaining a functioning service during the three day switch period."

Joining resources
The opening of the Hive was a milestone in shared library services and Capita's LMS has allowed the two catalogues to be searched as a single integrated database. While the collections of both organisations have been combined from the public's point of view, the ownership of books is still accounted for behind the scenes. "Our users tell us that they really value the new range and depth of the materials that having the joint stock has enabled us to provide," comments Kathy. "The software has helped us deliver a shared library that manages to meet the needs of both sets of users and both organisations." – Kathy Kirk, Strategic Libraries and Learning Manager, Worcestershire County Council

Differences accommodated
When a user logs on to any of the shared PCs, the LMS recognises whether they are a student or a member of the public. This determines the view of the catalogue that is displayed to them. Likewise, students get access to more applications when they log on to any of the shared PCs on site.
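Profile-driven rules like these amount to a lookup keyed on who the user is and what they are borrowing. A minimal sketch, assuming a hypothetical rule table (the loan figures mirror the ones quoted in this case study, but the data structure is invented, not Capita's, and the 'standard' public loan period is a placeholder the article does not give):

```python
# Hypothetical rule table: (user profile, stock category) -> borrowing terms.
# Figures follow the case study: students take restricted texts for 24 hours
# (the public use them on site); for core texts the public borrow one at a
# time while students can take up to 12 for two weeks.
LOAN_RULES = {
    ("student", "restricted"): {"max_items": 1,  "loan_period": "24 hours"},
    ("public",  "restricted"): {"max_items": 0,  "loan_period": "on-site only"},
    ("student", "core"):       {"max_items": 12, "loan_period": "2 weeks"},
    ("public",  "core"):       {"max_items": 1,  "loan_period": "standard"},
}

def loan_terms(profile, category):
    """Look up the borrowing terms that apply to this user and stock type."""
    return LOAN_RULES[(profile, category)]

print(loan_terms("student", "core"))       # students: up to 12 core texts
print(loan_terms("public", "restricted"))  # public: on-site use only
```

The point of keeping the rules in data rather than scattered through code is the one the case study makes: the same catalogue and stock can serve two organisations, with the differences confined to a single table.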
“Although all material is available to all customers, there is recognition built into the software that certain texts are of key importance to students,” says Paul. For example, these differences allow students to loan out restricted texts for 24 hours, whereas members of the public must use them on site. Equally, with core university texts, the public can only borrow them one at a time while students can borrow up to 12 for two weeks. “It is important that the LMS accommodates such nuanced distinctions and ensures students can access the information they need, when they need it.”

Pooling staff
The new joint LMS is helping staff provide better services too. Users can now phone in to the local authority’s call centre for information and, as the data is on one system, the call centre can also be used to chase overdue books from the public. This is possible as Capita have integrated the LMS with the council’s telephony provider. “We have been able to pool staffing resources so that our services and staff are now available seven days a week until 10pm,” says Kathy.

Streamlined services
“The information the system gives us also allows us to identify any stock gaps in the joint collections so we can fill them. Our acquisitions are now more targeted, cost efficient and effective,” explains Kathy. “We are also making it easier to access resources,” says Paul. Students and the public can use whichever library they choose in the Worcestershire authority to collect and return books. “Some students may live 20 miles from the Hive in the centre of Worcester. If they can simply go to their local library to return books, it is a huge benefit to them. Our new LMS and RFID connector from Capita identifies where the book belongs – whether that is the Hive, one of the university’s other facilities or one of the council’s 20 local libraries. The library courier system takes care of the rest,” says Paul.

“The library provision is now one of the best in the country. Capita’s Library Management System has been a core part of us being able to deliver this innovative and complex project,” comments Kathy.

FIND OUT MORE
Web:
Email: libraries-enquiries@capita.co.uk
Tel: 0870 400 5000

EBSCO Publishing
Increasing value and usage of information resources through Discovery
Sam Brooks, Executive Vice President, EBSCO Publishing

Discovery is evolving from simply being a faster and easier way to search a library’s collection through a single search box, to a search that is able to access the highest-quality information available in a library’s collection for all levels of users. Discovery that can maximise the available content can increase usage, improve the end-user experience and showcase the value of a library’s collection. User adoption of a discovery tool can increase as users become more familiar with it, but if that tool produces better results, adoption will take place rapidly.

An example of this is the data collected by Bournemouth University, which implemented EBSCO Discovery Service (EDS) in September of 2010. User sessions increased by 88 percent from the 2010-11 academic year to the 2011-12 academic year. The success of those sessions is indicated by a 132 percent increase in the combined number of abstracts viewed, full-text downloads through EDS and linkouts to full-text from EDS. Additionally, there was a similar increase in linking to the library’s catalogue (88 percent). See Table 1.

Successful searches demonstrate the benefits of shifting from directly searching databases to relying on the discovery service, which can expose users to resources that they may otherwise not find. At Bournemouth, use of the most prevalent full-text databases, either through direct access or through EDS, increased by 40 percent from 2010-11 to 2011-12.
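For reference, the percentage increases quoted throughout these usage figures follow the usual formula: change = (new - old) / old * 100. A small helper, with invented raw counts for illustration (only the 88 percent figure itself comes from the Bournemouth data; the underlying session numbers are not published here):

```python
def percent_change(old, new):
    """Percentage change from one year's usage count to the next."""
    return (new - old) / old * 100

# Invented counts: 10,000 sessions rising to 18,800 would be reported
# as the 88 percent increase quoted for Bournemouth's user sessions.
print(round(percent_change(10_000, 18_800)))  # → 88
```

The same helper reproduces any of the other quoted figures once the raw counts for the two years are known.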
Full-text downloads from direct databases decreased from 2010-11 to 2011-12 by more than 23 percent, while full-text downloads from EDS increased more than 104 percent over the same time period, showing an increase in usage of the resources as well as the value of the Discovery Service tool overall. The increase for some databases was even more pronounced. For example, full-text downloads from the university's primary business database accessed through EDS increased by 139 percent from 2010-11 to 2011-12. See Table 2.

In the second year of EDS use at Bournemouth, there was a 1362 percent increase in JSTOR linking and a 357 percent increase in ScienceDirect linking. Because EDS allows for the infusion of high-end subject indexes, the statistics related to use of these critical resources can be illuminating. For example, usage records from A&I service CAB Abstracts increased by 81 percent from 2010-11 to 2011-12.* See Table 3.

[*Note: Because Bournemouth subscribes to CAB Abstracts on EBSCOhost, the University takes advantage of the EDS "platform blending" technology, which allows for infusion of results from subject indexes that don't otherwise participate in discovery services.]

EDS implementation at Bournemouth and other academic institutions has been successful because it caters to the needs of undergraduates, graduates, postgraduates and faculty. While familiarity with the system and the ease of the single search box has an impact on overall usage, the increased downloads and linking activity indicates that the quality of the searches has also improved significantly. The single search box, with the metadata and robust ability to find the best available content behind it, have to work in combination to improve the user experience and ensure that users are confident in the tool they are using.

FIND OUT MORE
Web:

'Our D-Tech equipment means students can self navigate the resources and take ownership of their borrowing.
''
RFID / SECURITY / RFIQ / PEOPLE COUNTERS / EM / STOCK MANAGEMENT / RF / LAPTOP SECURITY / VENDING / SELF SERVICE

Nielsen LibScan data
Nielsen LibScan, Period 11, 2012: Library lendings chart

Period 11, which covers the period from 7 October until 3 November 2012, saw E L James's Fifty Shades trilogy arrive to feature prominently in the top ten most borrowed titles chart, below. The three books, which have sold over 10 million copies in total in the UK TCM, had combined lendings of 5,043 in period 11. However, this is not enough to propel James into even the top 100 most borrowed authors for the period, where authors with a broad backlist dominate. James Patterson continues as the most borrowed author, with over 41,000 lendings, and is followed by children's authors Daisy Meadows, Julia Donaldson, Francesca Simon and Jacqueline Wilson.

The top ten titles include some other big-name new entries. Lee Child's latest Jack Reacher novel, A Wanted Man, joins The Affair in the chart and is one of five titles from the Crime, Thriller & Adventure category in the top ten. J K Rowling's first adult novel, The Casual Vacancy, was only published in late September but is already proving popular with library users. Kathy Reichs, Tess Gerritsen and James Patterson supply the other thrillers in the top ten, whilst the only children's title is Aliens Love Underpants.

The best-selling titles in the UK TCM for period 11 are led by Reflected in You by Sylvia Day, which is no doubt finding an audience amongst Fifty Shades' fans and may follow its popularity in libraries too. E L James and J K Rowling both still feature among the top ten best-selling titles in the UK, whilst October saw the publication of new titles from James Patterson and Kathy Reichs which may in due course take their place amongst heavily borrowed library titles.
FIND OUT MORE
Web:

Position | Title | Author | Volume
1 | Fifty Shades of Grey | James, E L | 2080
2 | Wanted Man, A: Jack Reacher | Child, Lee | 1994
3 | Affair, The: Jack Reacher | Child, Lee | 1868
4 | Casual Vacancy, The | Rowling, J K | 1720
5 | Bones are Forever | Reichs, Kathy | 1626
6 | Fifty Shades Darker | James, E L | 1516
7 | Last to Die: (Rizzoli & Isles 10) | Gerritsen, Tess | 1491
8 | Aliens Love Underpants! | Freedman, Claire | 1449
9 | Fifty Shades Freed | James, E L | 1447
10 | 11th Hour | Patterson, James | 1433

Period eleven (4 weeks from 7 October to 3 November). ©2012 Nielsen Book Services Limited [trading as: Nielsen BookScan]

we are your txt people
Our fully library integrated txt system allows you to send and receive txt messages to multiple people in an instant using your computer. simple :)
2 way communication that is quick, to the person, to the point, secure and cost effective. clever ;)
• Capita certified
• Inform customers of reservations in seconds
• Overdue Reminders managed and sent instantly
• Save staff time and reduce cost
scheduled messages: set the day and time and let the system deliver
groups: sort your address book into handy groups
inbox rules: filter and re-direct txts to other phones, e-mail and auto-reply to sender
delivery report: know if a message has been received
templates: recycle commonly used messages again and again
2 way: true communication is only ever 2 way
mail merge: personalise messages when sending to groups
secure: we use the same security protocol as online banking
weapon of mass communication
message archive: store all your previous messages for easy reference
export data: need copies of message data for a meeting, no problem!
integration: we're compatible with almost every software system out there, put us to the test
import data: import data from any existing database
free training: every customer gets training until they are happy and confident
support: real people here whenever you need us
e: txttoolstellmemore@blackboard.com w: t: +44 (0)113 234 2111

Nottingham Trent University
Teaching and Learning, Connected
Running an effective reading-list service – it's all about collaboration at Nottingham Trent University
Karen Halliday, Senior Product Marketing Manager, Talis Education Limited

Now running for its third full academic year, the reading-list management service at Nottingham Trent University provides library-supported lists and content for all taught courses and modules. Adoption of the reading-list service at the university, the UK's third largest provider of undergraduate education, now approaches 100 per cent of the university's academics across all subject areas. In acquisitions, throughput time has reduced from 72 days to 35 days and the provision of an improved service continues to ensure resource availability for over 26,000 students.

But it wasn't always that way. Only three years ago, negative feedback from student surveys pointed to problems in library workflows. A traditional budget and acquisitions approach meant that stock selection was done by academic liaison staff with varying levels of success and little involvement from academics. Students frequently complained that even 'essential reading' items were not available, and the library itself had only a partial view of what books it needed to provide. In addition, the university's 'e-first' strategy was becoming increasingly difficult to deliver.

A critical element of the library's challenge was its interactions with academics, and insufficient technology to support the service. Reading-lists were often passed directly from the academic to the student, bypassing library and academic liaison staff and making planning and budgeting difficult. All staff involved were suffering from 'spreadsheet overload'. Student concerns revealed a pressing need to engage academics in a collaborative acquisitions strategy, led by course requirements rather than by guesswork, and supported by integrated and scalable technology.

❝ Librarians became involved only after the list had been produced, at which point the librarian's role was largely administrative. ❞ – Mike Berrington, Deputy University Librarian at Nottingham Trent University

The library was certain that change had to start with academics – where the acquisitions workflow starts. Rather than acting as administrators, the library staff would join academics to form a partnership in the management of lists and the subsequent development of course-driven acquisitions policies. Layers of relationships would form across the university to bring about this cultural change and support the requirement for a 'basic onlineness' for all taught courses. This would be supported by the robust systems integration, internal adoption and adaptation of the Talis Aspire Reading-List software.

After two years, the Nottingham Trent University reading-list service reached 70 per cent of list coverage. And in its third year, the service has now achieved 100 per cent adoption. Today, only items on the reading-lists are approved for acquisition by the library and provided to students, making budgeting and stock management more predictable. This sort of improvement doesn't need large budgets, or increased human capital investment. Instead, it enables service efficiency, and enhances learning experiences through active collaboration between academics, library and students – and software service providers. Something that the Talis Aspire Reading-List module both facilitates and requires.

❝ NTU has sought to make the most of the 'openness' of Talis Aspire to develop local service enhancements which benefit all three cohorts of list users. ❞ – Dr Richard Cross, Nottingham Trent University

The selection of Talis as the university's reading-list software partner has enabled the university library to transform its services. In turn, the university, and a cohort of Talis Aspire Reading-List customers in over 45 other universities and colleges, has helped shape the development of the software through an active user community and agile development processes. This participation in innovation, combined with the dedication and expertise of the library staff, and the clear support of the university's senior management team and academic departments, has delivered substantial changes at the university – course-driven acquisitions and seamless processes to support learners. And what's next for the university library? It's now continuing its collaboration with Talis and is a beta partner for the next major development of Talis Aspire – Digitised Content.

❝ Resource lists are inextricably linked to directed student reading, which is a core activity in all universities, especially in those with large numbers of first-year undergraduates. ❞ – Mike Berrington, Deputy University Librarian at Nottingham Trent University

FIND OUT MORE
Tel: 0121 374 2740
Email: kh@talis.com
Web:

DO YOU NEED TO SAVE TIME, MONEY AND RESOURCES? A SUBSCRIPTION TO BOWKER BOOKS IN PRINT® COULD BE THE COST-EFFECTIVE SOLUTION YOUR LIBRARY NEEDS.
This online English-language database from Bowker® is the one-stop resource offering a wealth of bibliographic and enhanced data, as well as many other value-added services, for both librarians and library users. The data is sourced from our ISBN agencies and publisher relations team in the UK, so you can be assured of comprehensive coverage and up-to-date information.

In one resource, you can offer your library users:
• Access to information about e-books, audio books and videos, as well as traditional books
• An easy-to-use search interface with countless routes for investigation and many ways to refine results
• Cover images, tables of contents, professional reviews, full-text previews, author biographies and more, to help make informed selections

Whilst your librarians can benefit from:
• List management tools to save titles and streamline workflows
• Access stock availability information and links to vendors
• The option to restrict market access to UK only, via the Administrator settings

TRY BOOKS IN PRINT® TODAY TO SEE IF IT CAN HELP YOUR LIBRARY! Free trials are available, contact Bowker at sales@bowker.co.uk to find out more.

Laptop loans – how to make it work!

Having a loan service of laptops for students is highly desirable. But how can the logistical problems be overcome?

Vikki Stapley, Marketing Manager, Bretford Manufacturing Ltd

With the rise in tuition fees of up to £9000 per year, a student’s expectation of the services provided by a university has grown significantly. With this in mind, recent surveys show that up to a third of students want the facility to borrow IT equipment including laptops, netbooks or tablets and, as a result, this is now a fundamental requirement for prospective students when choosing a university.
However, implementing an equipment loan facility in your college or university raises a number of logistical problems, and many universities are worried about the potential costs and drain on valuable staff resources to effectively manage such a facility. They also have concerns about how to reduce the risk of theft or loss of equipment, and how to keep all the equipment charged and updated ready for use.

Swansea University had a desire to implement a self-service laptop loan solution as part of their move towards more flexible campus buildings and services, and away from reliance on large areas of fixed PCs. Self-service library books have been in place for a number of years, and Swansea University were keen to expand on this to implement a self-service laptop loan solution in their main library building, and also in a newly developed study area.

“We envisaged the use of self-service lockers as a supplementary service to our fixed, open access PCs – the laptops would cater for students who didn’t need the ’full fat’ service as supplied on our PCs, whilst also allowing them to bring the device to where they were studying in the building (rather than move to a fixed PC),” explains Glen Donnachie, IT Support Manager, Swansea University.

The desired solution needed to be a low maintenance and secure service that could be operated 24 hours a day, crucially, allowing a student to self-issue a laptop in the quickest possible time and then use the netbook just as quickly. “Obviously, we were also keen to ensure that we only loan laptops to known, valid students and that the students have to return them in a secure manner, too,” says Glen.

It was a fair collection of requirements: students to have 24 hour access to the laptops, getting to use the equipment quickly, low maintenance, available to valid students only, keeping track of returned laptops and having a solution that fitted in with the university’s existing systems.
The university found one solution that met all these requirements, and that was the Bretford Myritrac system. This system allowed them to take control of all aspects of managing the loan process: allowing them to control who can access the equipment, identifying each user, monitoring usage on a daily basis and providing a secure place to store, charge and simultaneously update all the laptops ready for use. The Myritrac™ system facilitates the self-service deployment Swansea were looking for, saving valuable staff time and helping to reduce damage, loss and operational costs significantly.

“The Bretford Myritrac solution from the outset was the clear and obvious choice”, explains Glen. “Our security requirements were fully met, and the ability to read our existing student cards for verification was also key.”

The University then came up with one more challenge. They wanted to launch the service with little or no publicity or instructions. And, in fact, students started using the system and found it quite intuitive; there was no need to introduce extra directions or instructions or carry out training. Staff support training requirements were minimal too. “It’s innovative solutions like this that will help us towards providing a reactive, student focused service and help enhance their experience,” Glen says.

The success of this solution at Swansea University can easily be gauged by the fact that they have now placed another order to increase the numbers of lockers across their campus. In addition to the lockers installed in the Library and study areas, Swansea are now working with their College of Business, Economics & Law to install the system in student-facing areas in academic buildings.
FIND OUT MORE
Tel: 01753 53 99 55
Web:

Keep kids reading – making the most of digital opportunities

Neil Wishart, Director, Solus UK Ltd

During dinner at this year’s Library Association of Ireland, Public Libraries Section Conference (7th – 9th November 2012) we were involved in an interesting debate regarding whether children in 100 years would be able to read or write and whether the use of physical books was critical to this. The consensus from the librarians at the table was, perhaps understandably, that there should always be a place for physical books in the library. At our lightning session the next day, we referred to the conversation and offered the opinion that whilst children in 2112 would be able to read... they may find traditional writing and spelling more challenging. What’s key is the ability to read; the medium doesn’t matter.

Libraries Online is the subject of this Panlibus edition - perhaps it should have been Digital Opportunity? Far from being a negative, the quick shift towards touch devices which are intuitive to use for young and old alike makes libraries accessible like never before.

The Reading Trust 2011 Survey highlighted the fact that three in ten children lived in houses without books. When asked by a teacher to bring in a book to discuss in class, one boy in London produced an Argos catalogue, explaining: “it’s the only book my family have”. The same survey found that 85% of children (8–15 yrs) owned at least one games console and 81% owned a mobile phone. Fast forward less than one year to October 2012 and 53% of teenagers in the UK own a smartphone (what will it be after Christmas?) and one in ten three year olds uses a tablet. In short, it soon won’t matter if the boy’s family doesn’t own a book (no matter how tragic that is).
If the library or school can appeal to him enough, he’ll give reading a chance and rather than downloading yet another free game for his phone or tablet, he’ll use it to access the library and read a free book! Libraries have an opportunity to connect to audiences that would otherwise have been lost to them and help with the ultimate requirement, that children in ten years (never mind 100) can and do read! Being online, releasing apps, creating “Community Spaces”, adopting social media, building games, etc, either alone or combined won’t make the difference. Connecting with people, appealing to them, showing them what is accessible and possible, will. It is also important to let people know that services exist. Too many times recently we’ve been showing library apps to non-library users and whilst they have been senior managers in local authorities, or partner organisations such as the NHS, they’ve been amazed that you can borrow eBooks and magazines free of charge from the library. If people working for the same or partner organisations don’t know the secret, it’s easy to assume the wider community don’t know what’s available. It’s important to note that smartphones and tablets, whilst mobile by nature and used for accessing data remotely, can be used to connect to other hardware, both inside and outside the library. Mobile devices and apps can connect to self-issue machines, to digital signage and to other products that can be used to engage users. Remote access is convenient; linking devices to hardware or stock using RFID is cool and provides another reason for people to be physical rather than virtual users. To close, we’d like to share some really positive news. 
Edinburgh Libraries recently announced that their strategy of adopting and combining technology, space and social media has led to a significant upturn in their statistics:
• Visits - up 9.5% year on year
• Issues - up 3.9% year on year
• Virtual visits and issues - up 251% year on year

This is a fantastic achievement and shows that digital opportunities can yield significant results.

FIND OUT MORE
Email: neilw@solus.co.uk
Web:

Prism product update

Terry Willan, Business Specialist, Capita

Prism is the resource discovery solution serving the users of Capita customer libraries, both academic and public. Integration with the local system, Alto, also gives Prism users the full range of availability, delivery and account services, through their web browser and mobile devices. Features are continually added and enhanced through frequent releases. Furthermore, continuous improvement to the infrastructure ensures that Prism remains fit to grow in features, users and data. Substantial changes in the Prism infrastructure will soon be powering some major developments for an exciting 2013.

Renewing loans and reserving items online through the catalogue are important activities for users and Prism has always supported them. Recently, a new renewal workflow shows users renewal outcomes immediately in context, and a clear message is given where renewal is disallowed. For reservations, users can now see immediately whether an item is reservable, and if it is not, they can view an explanation. The convenience of self-service is enhanced in the latest release where users can set a new PIN if they have forgotten it, allowing them to quickly continue with what they want to do. There is also an option to enable users who have charges on their account to pay them online through the library’s payment service provider.
Other recent developments include browsing through a set of results at the item detail level with buttons to view the next or previous record.

More social features are in development, including the ability to add reviews and to tag resources. Social features enable users to actively engage and contribute to the catalogue by sharing their experience of the resources to the benefit of each other. This has been one of the areas of focus in recent Prism development, starting with star ratings. Where appropriate, user-contributed content is shared across libraries of the same type (academic or public), giving each library much more content. The library can activate each social feature separately through the administration console.

The ability to create and manage lists of resources in Prism has been popular with both public and academic library users. This has recently been enhanced by allowing users to make lists discoverable by others and to annotate them with a description and tags. Users can now discover such shared lists. Librarians have a range of moderation options but are finding that a relaxed approach works well and helps to encourage extensive and enthusiastic use.

Recent integration with the Open Graph API means that rich content from the catalogue is surfaced in Facebook, an important part of the outreach of libraries. This can be seen in the Facebook pages of numerous customer libraries.

By default, enrichments in Prism are available from BDS and the textual enrichments are now more integrated, with indexing to improve discovery as well as display of the table of contents, author notes, descriptions, reviews and back cover copy.

Library users increasingly expect to be able to search and discover all of the library’s diverse resources in one place. This is a key area of development for Prism which will begin to benefit libraries in the new year.
Libraries will be able to augment the catalogue to give users unified search results across the range of content available through the library. This is beginning with archival and local history data and will extend to e-journal articles, e-books, digital content, institutional repositories, dissertations and more.

A richer discovery experience for users can also be achieved by providing recommendations and search suggestions. Development is in progress to aggregate loan data by library type to drive recommendations for resources similar to the one being viewed. Data from other sources such as lists will be added later, as well as personalisation to continually improve the relevance of the recommendations. The data model is being developed to provide author data with persistent IDs across catalogues to improve author searching and to enable Prism to suggest similar authors. An autosuggest feature will provide options to auto-complete users’ queries as they type, and related searches through features such as ‘Did you mean...’ will provide users with more options to find what they are looking for.

The ability to provide more for library consortia is another outcome of the Prism infrastructure developments. It will be easier for libraries to combine their data, enabling users to expand the scope of their search to the holdings of neighbouring or associated institutions as well as supporting tighter consortial arrangements with combined holdings and agreed usage rights.

Integration with the local system is vital for providing users with the rich range of services they expect. Prism is continuing to deliver more in this area, soon allowing users to amend their contact details and to set their preferences for how they are notified by the library about nearly dues, overdues, reservations ready and other communications.
As changes continue in libraries, technology and the way people discover and interact with content and each other, Prism is well placed to fulfil the evolving needs of libraries and their users.

FIND OUT MORE
Web:
Blog: blogs.capita-libraries.co.uk/prism

Reach your audience – digital signage, desktop messaging, mobile applications: solus.co.uk/app
Solus UK Ltd, James Watt Building, James Watt Avenue, East Kilbride G75 0QD
Tel. 01355 813600 | Email. info@solus.co.uk | Web.
https://issuu.com/panlibus/docs/panlibus26
If I want to know the length of a tuple, list or dict, currently I have to #include a certain file with a long path. On top of that, if I am not in namespace boost::python, I have to type "boost::python::len(o)".

Points:
- Having to #include <boost/python/whatever.hpp> is cumbersome and brings more code into scope than I really need.
- boost::python::len() is quite a bit of clutter.
- .size() is what one expects to have in C++.
- .size() could directly look at sq_length and thus generate less code.

David, would you be very upset if I added .size() to tuple.hpp, list.hpp and dict.hpp?

Ralf
https://mail.python.org/pipermail/cplusplus-sig/2002-August/001658.html
This is the second part of the series. In this part, we are going to see how to consume an external HTTP API and how to integrate the API’s searching capabilities into our application. For the movies and actor look-ups we will be using the TMDb API. From the official site:

“The TMDb API is a powerful resource for any developers that want to integrate movie & cast data along with posters or movie fanart. All of the API methods are available in XML, YAML and JSON.”

As with most available APIs, you are going to need a valid key in order to be able to use the API. The first step for that is to create a free account at TMDb’s Sign-Up page. After signing up, log into your account and find the link for generating an API key.

The list of the available API methods can be found at the TMDb API documentation page and the most important ones are the following:

- Movie.search: Provides the easiest and quickest way to search for a movie.
- Person.search: Is used to search for an actor, actress or production member.

For movies search, the example URL is the following:

For people search, the example URL is the following:

(where APIKEY has to be replaced with a valid API key)

As you can see, the API is pretty easy and straightforward to use. It only involves performing HTTP GET requests at a specific URL and then retrieving the responses into a predefined format. Next, we are going to see how to leverage Android’s networking capabilities in order to consume the API and provide a presentation of the provided data. Note that we will use the XML format for the responses, but this will be showcased in a next tutorial.

For manipulating HTTP requests/responses in an Android environment, the standard classes from the java.net package can be used. Thus, classes such as URL, URLConnection, HttpURLConnection etc. can all be used in the known way. However, you can avoid dealing with the low level details by using the Apache HTTP Client libraries.
This library is based on the well known Apache Commons HTTP Client framework. Let’s get started with the code. We will create a class named “HttpRetriever”, which will be responsible for performing all the HTTP requests and will return the responses both in text format and as a stream (for image manipulation). The code for this class is the following:

```java
package com.javacodegeeks.android.apps.moviesearchapp.services;

import java.io.IOException;
import java.io.InputStream;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.HttpStatus;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.util.Log;

import com.javacodegeeks.android.apps.moviesearchapp.io.FlushedInputStream;
import com.javacodegeeks.android.apps.moviesearchapp.util.Utils;

public class HttpRetriever {

    private DefaultHttpClient client = new DefaultHttpClient();

    // Executes a GET request and returns the response body as text.
    public String retrieve(String url) {
        HttpGet getRequest = new HttpGet(url);
        try {
            HttpResponse getResponse = client.execute(getRequest);
            final int statusCode = getResponse.getStatusLine().getStatusCode();
            if (statusCode != HttpStatus.SC_OK) {
                Log.w(getClass().getSimpleName(), "Error " + statusCode + " for URL " + url);
                return null;
            }
            HttpEntity getResponseEntity = getResponse.getEntity();
            if (getResponseEntity != null) {
                return EntityUtils.toString(getResponseEntity);
            }
        } catch (IOException e) {
            getRequest.abort();
            Log.w(getClass().getSimpleName(), "Error for URL " + url, e);
        }
        return null;
    }

    // Executes a GET request and returns the response body as a byte stream.
    public InputStream retrieveStream(String url) {
        HttpGet getRequest = new HttpGet(url);
        try {
            HttpResponse getResponse = client.execute(getRequest);
            final int statusCode = getResponse.getStatusLine().getStatusCode();
            if (statusCode != HttpStatus.SC_OK) {
                Log.w(getClass().getSimpleName(), "Error " + statusCode + " for URL " + url);
                return null;
            }
            HttpEntity getResponseEntity = getResponse.getEntity();
            return getResponseEntity.getContent();
        } catch (IOException e) {
            getRequest.abort();
            Log.w(getClass().getSimpleName(), "Error for URL " + url, e);
        }
        return null;
    }

    // Downloads an image and decodes it directly into a Bitmap.
    public Bitmap retrieveBitmap(String url) throws Exception {
        InputStream inputStream = null;
        try {
            inputStream = this.retrieveStream(url);
            final Bitmap bitmap = BitmapFactory.decodeStream(new FlushedInputStream(inputStream));
            return bitmap;
        } finally {
            Utils.closeStreamQuietly(inputStream);
        }
    }
}
```

For the actual execution of the HTTP requests, we are using an instance of the DefaultHttpClient class, which, as its name implies, is the default implementation of an HTTP client, i.e. the default implementation of the HttpClient interface. We also use the HttpGet class (in order to represent a GET request) and provide the target URL as its constructor argument. The HTTP client executes the request and provides an HttpResponse object which contains the actual server response along with any other information. For example, we can retrieve the response status code and compare it against the code for successful HTTP requests (HttpStatus.SC_OK). For successful requests, we take a reference to the enclosed HttpEntity object and from that we have access to the actual response data. For textual responses, we convert the entity to a String using the static toString method of the EntityUtils class. If we wish to retrieve the data as a byte stream (for example, in order to handle binary downloads), we use the getContent method of the HttpEntity class, which creates a new InputStream object of the entity.

Note that there is also a third method for directly returning Bitmap objects. This will be helpful in the later parts of the tutorial series, where we will be downloading images from the internet. In that method, we execute the GET request and retrieve an InputStream as usual. Then, we use the decodeStream method of the BitmapFactory class to create a new Bitmap object. Note that we do not directly provide the downloaded InputStream, but we first wrap it with a FlushedInputStream class. As mentioned in the official Android developers blog post, there is a bug in the previous versions of the decodeStream method that may cause problems when downloading an image over a slow connection. The custom class FlushedInputStream, which extends FilterInputStream, is used instead in order to fix the problem. The code for that class is the following:

```java
package com.javacodegeeks.android.apps.moviesearchapp.io;

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FlushedInputStream extends FilterInputStream {

    public FlushedInputStream(InputStream inputStream) {
        super(inputStream);
    }

    @Override
    public long skip(long n) throws IOException {
        long totalBytesSkipped = 0L;
        while (totalBytesSkipped < n) {
            long bytesSkipped = in.skip(n - totalBytesSkipped);
            if (bytesSkipped == 0L) {
                int b = read();
                if (b < 0) {
                    break; // we reached EOF
                } else {
                    bytesSkipped = 1; // we read one byte
                }
            }
            totalBytesSkipped += bytesSkipped;
        }
        return totalBytesSkipped;
    }
}
```

Finally, we use the closeStreamQuietly method of a custom Utils class in order to handle exceptions which might occur when closing an InputStream.
The code is as follows:

```java
package com.javacodegeeks.android.apps.moviesearchapp.util;

import java.io.IOException;
import java.io.InputStream;

public class Utils {

    public static void closeStreamQuietly(InputStream inputStream) {
        try {
            if (inputStream != null) {
                inputStream.close();
            }
        } catch (IOException e) {
            // ignore exception
        }
    }
}
```

Finally, in order to be able to perform HTTP requests, the corresponding permission has to be granted. Thus, add the android.permission.INTERNET permission to the project’s AndroidManifest.xml file, which now is as follows:

```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.javacodegeeks.android.apps.moviesearchapp">
    <application ...>
        <activity ...>
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
    <uses-sdk ... />
    <uses-permission android:name="android.permission.INTERNET"></uses-permission>
</manifest>
```

So, we have prepared the infrastructure for executing HTTP GET requests. In the following tutorials, we will use that in order to retrieve XML data and images for our application’s needs. You can download here the Eclipse project created so far.

I’m going over this now; lines 17 and 18 do not compile for me, and while I believe this is due to the name of the projects, renaming them to the same as my own doesn’t work. I’m also making a new class file, and see there’s also “services” at the top of this one, not on the first tutorial? It seems like we’re missing a step or something.

You say “Let’s get started with the code. We will create a class named “HttpRetriever””. But where do we do that? Is it a new package or what?

Hi, I’m getting “in” is an unknown entity on line 17 of FlushedInputStream? Any ideas?

Hi Ilias, I have been programming Android for the last 10 months, but I have never used Web or internet services until now, so I am new to this. I have only just stumbled on your great tutorial and find it excellent! Well done.
My current problem is the error message “.DefaultRequestDirector(11207): Authentication error: Unable to respond to any of these challenges: {}”. I don’t understand the cause, or how to fix it. Can you possibly help?

Hey, any updates on this part regarding their new API version? I’d be very thankful if you could provide me with the changed URLs :)

Anyone still interested, I have the updated sample at
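For anyone wiring this up themselves: the request URLs described in the tutorial can be assembled with a small helper before handing them to HttpRetriever.retrieve(). This is only a sketch — the TmdbUrls class name is made up, and the base URL below is an assumption following the v2.1 Movie.search/Person.search pattern the article describes (the article’s own example URLs did not survive), so double-check it against the current TMDb documentation:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class TmdbUrls {

    // Assumed base for the v2.1 XML API discussed in the tutorial.
    private static final String API_BASE = "http://api.themoviedb.org/2.1";

    // Builds a search URL for the given API method, e.g.
    // searchUrl("Movie.search", "APIKEY", "star wars")
    // -> http://api.themoviedb.org/2.1/Movie.search/en/xml/APIKEY/star+wars
    public static String searchUrl(String method, String apiKey, String query) {
        try {
            return API_BASE + "/" + method + "/en/xml/" + apiKey + "/"
                    + URLEncoder.encode(query, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is guaranteed to be available on every JVM.
            throw new IllegalStateException(e);
        }
    }
}
```

The same helper covers both Movie.search and Person.search, since only the method segment of the path changes.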
https://www.javacodegeeks.com/2010/10/android-full-app-part-2-using-http-api.html
Description

In an effort to adapt Pig to work using Apache Tez (), I made some changes to allow for a cleaner ExecutionEngine abstraction than existed before. The changes are not that major, as Pig was already relatively abstracted out between the frontend and backend. The changes in the attached commit are essentially the barebones changes – I tried to not change the structure of Pig's different components too much. I think it will be interesting to see in the future how we can refactor more areas of Pig to really honor this abstraction between the frontend and backend.

Some of the changes were to reinstate an ExecutionEngine interface to tie together the frontend and backend, making the changes in Pig to delegate to the EE when necessary, and creating an MRExecutionEngine that implements this interface. Other work included changing ExecType to cycle through the ExecutionEngines on the classpath and select the appropriate one (this is done using Java ServiceLoader, exactly how MapReduce does for choosing the framework to use between local and distributed mode). Also, I tried to make ScriptState, JobStats, and PigStats as abstract as possible in their current state. I think in the future some work will need to be done here to perhaps re-evaluate the usage of ScriptState and the responsibilities of the different statistics classes. I haven't touched the PPNL, but I think more abstraction is needed here, perhaps in a separate patch.

Activity

Hi Achal, That's a large patch. Can you give us a roadmap for reading it – what are the changes, at a high level? It looks like you had to change a bunch of stuff that's not (at first glance) directly related to exec mode.
Procedurally:
- please generate the patch using 'git diff --no-prefix' since the apache pig master is on svn
- please post the complete patch to Review Board, for ease of commenting
- please make sure that all new files have the apache license headers at the top

Thanks
-D

Oh, 3 more things:
- I thought you found your way around the -y argument? I still see that in there.
- Don't comment out blocks of code, just delete them.
- Add some documentation about creating new Exec Engines to the xml-based docs, or at least post it here. Just having it in javadocs is not sufficient.

Hi Achal, for large patches, please create a review here:

Sorry my bad! Was trying to get it out as soon as possible and brushed over some stuff too fast. Dmitriy V. Ryaboy, I will make these changes later today and post them up. I did actually get around the -y argument, just totally forgot to go back and get rid of that. In the meantime, the Review Board is located here for the current patch:

Once I update the patch later today I will post it here with the ReviewBoards as well.

- Achal

I have regenerated the patches taking into account some of the suggestions. I think this should cover most of the cases, although I may have missed certain things. I wanted to post these versions of the patch as early as I could so that people can still consume most of the changes while I continue to review the different pieces. I have separated the patches into 4 main areas – the major execution engine changes, the changes to the MR codebase to make it happen, the changes to further abstract ScriptState/other statistics related stuff, and the testing changes. Hopefully this will make things clearer and easier to consume.

And here are the reviewboards for each one. Main ExecEngine changes: MR ExecEngine changes: ScriptState/Statistics changes: Testing Suite changes:

- Achal
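To make the ServiceLoader-based selection mentioned in the description concrete, here is a rough sketch of the idea. The names below are invented for illustration, and a plain iterable stands in for ServiceLoader.load(...) so the snippet is self-contained; in the real patch the candidate engines would be discovered from META-INF/services provider files on the classpath — which is also why a missing META-INF later surfaces as "Unknown exec type: local":

```java
// Invented stand-in for the pluggable exec-type contract.
interface PluggableExecType {
    boolean accepts(String modeName); // e.g. "local", "mapreduce", "tez"
    String name();
}

public class ExecTypeSelector {

    // Small factory so the sketch is easy to exercise.
    public static PluggableExecType named(final String n) {
        return new PluggableExecType() {
            public boolean accepts(String modeName) { return n.equalsIgnoreCase(modeName); }
            public String name() { return n; }
        };
    }

    // Picks the first engine that claims the requested mode. In Pig this
    // iterable would be ServiceLoader.load(PluggableExecType.class).
    public static PluggableExecType select(Iterable<PluggableExecType> candidates, String modeName) {
        for (PluggableExecType type : candidates) {
            if (type.accepts(modeName)) {
                return type;
            }
        }
        throw new IllegalArgumentException("Unknown exec type: " + modeName);
    }
}
```

The appeal of this shape is that adding a new backend needs no change to Pig itself — a jar on the classpath with its own provider entry is enough.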
Although I am not a Pig old-timer, I have been playing with your patch and have a few high-level comments as follows: - I love your ExecType interface. What I don't like is that there are two things called ExecType: org.apache.pig.ExecType org.apache.pig.backend.executionengine.ExecType Since you're introducing the ServiceLoader framework, the enum seems no longer needed at all. Furthermore, eliminating it helps simplify the constructor code of PigContext and PigServer. - Currently, it is a bit hard to review your patch because there are too many changes. To get it committed faster, I suggest we should avoid unnecessary changes and minimize its scope. For example, having the PigServer constructor signature unchanged helps avoid a lot of changes in Test*.java files. - Julien already pointed out this in the RB, but your patch accidentally reverts a couple of previous commits. I took them out. I went ahead and made these changes myself - here. If everyone thinks this is a step forward, I will upload it in a new patch. Please let me know. I have two more comments in ExecutionEngine and MRExecutionEngine as follows: - Can you simplify the checked exceptions in the ExecutionEngine interface? For example, From: public PigStats launchPig(LogicalPlan lp, String grpName, PigContext pc) throws PlanException, VisitorException, IOException, ExecException, JobCreationException, FrontendException, Exception; To: public PigStats launchPig(LogicalPlan lp, String grpName, PigContext pc) throws Exception; Looks like there's no point of throwing them again in ExecutionEngine because they will be caught as Exception in PigServer anyway. If needed, we should take specific actions per exception in ExecutionEngine. - As for the setProperty method in ExecutionEngine, do we need to pass a properties? Can we construct a properties with the given key/value pair and call recomputeProperties() internally? 
public void setProperty(Properties properties, String property, String value);

Also, as for the setProperty method in MRExecutionEngine, it looks like it's mostly a duplicate of recomputeProperties(). Can you just reuse recomputeProperties()?

Julien said you're working on a new patch. It would be nice if you could incorporate these (of course if you agree with me). Thank you a lot!

Cheolsoo Park:
1. Do we really throw Exception? If yes, then let's just throw that. If not, then let's instead have FrontendException, ExecException, IOException — i.e. let's remove the exceptions that are already included by the highest exception level.
2. Agreed with you. I would expect the execution engine to handle the Properties internally and the signature of this method to be:

public void setProperty(String property, String value);

Do we really throw Exception? No, we don't throw Exception to the end user. But currently, PigServer catches them all in a single catch block and sorts them out using instanceof calls (see below). Probably we should make ExecutionEngine throw FEE, EE, and IOE and replace instanceof calls with catch blocks in PigServer.

```java
try {
    stats = pigContext.getExecutionEngine().launchPig(lp, jobName, pigContext);
} catch (Exception e) {
    // There are a lot of exceptions thrown by the launcher. If this
    // is an ExecException, just let it through. Else wrap it.
    if (e instanceof ExecException) {
        throw (ExecException) e;
    } else if (e instanceof FrontendException) {
        throw (FrontendException) e;
    } else {
        int errCode = 2043;
        String msg = "Unexpected error during execution.";
        throw new ExecException(msg, errCode, PigException.BUG, e);
    }
}
```
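For illustration, the catch-block style being proposed could look roughly like this. The exception classes and wrapper below are toy stand-ins, not Pig's real FrontendException/ExecException (the error-code bookkeeping is omitted), so treat it as a sketch of the control flow only:

```java
// Toy stand-ins for Pig's checked exception types.
class FrontendException extends Exception {
    FrontendException(String msg) { super(msg); }
}

class ExecException extends Exception {
    ExecException(String msg) { super(msg); }
    ExecException(String msg, Throwable cause) { super(msg, cause); }
}

interface Launcher {
    Object launchPig() throws Exception;
}

public class LaunchWrapper {

    // The specific types are rethrown as themselves; only truly
    // unexpected exceptions get wrapped -- no instanceof probing needed.
    public static Object execute(Launcher launcher) throws ExecException, FrontendException {
        try {
            return launcher.launchPig();
        } catch (ExecException e) {
            throw e;
        } catch (FrontendException e) {
            throw e;
        } catch (Exception e) {
            throw new ExecException("Unexpected error during execution.", e);
        }
    }
}
```

With the engine declaring the narrower throws clause, the compiler enforces what the instanceof chain was checking at runtime.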
Cheolsoo Park does it look good to you? Once Achal has updated his patch I'm willing to commit. Julien Le Dem, I haven't looked at it yet, but I will review it tonight. I will also run full unit tests. Btw, I was meeting Mark, Olga, Rohini, and Daniel at LinkedIn this morning. We decided to create a tez branch. Rohini suggested that this patch should go into that branch instead of trunk. Can we agree where we should commit this patch first? Personally, I think this can go into trunk directly since it's quite general. But there were some concerns. The point is to be able to implement alternate execution engines without having to fork Pig. I think it should go in trunk. I'd like this patch in trunk since it's not Tez-specific, and allows people to experiment with other runtimes (for example, Spark or Drill). I'd also be in favor putting this in trunk as opposed to a Tez branch. Although the motivation for this is Tez, I think we would want this patch in Pig even if there wasn't Tez support. A couple short comments for Achal: - It looks like the build targets that include the META-INF are only executed when building against hadoopversion=23. The META-INF also don't seem to be included in the pig.jar and pig-withouthadoop.jar that go in the root directory. I tried copying in the correct jars, but it seems like something is still off. - The changes to the try/catch blocks in MapReduceLauncher break on 23, because HadoopShims for 23 doesn't throw an exception where 20 does. Maybe that should be fixed in HadoopShims though. I see 295 failures with "ant clean test". The number looks scary, but all the failures seem to boil down to two reasons: - As Mark already mentioned, META-INF is not executed. 
So many test cases fail with the following error: error msg: Unknown exec type: local - The removal of PigServer.compilePp() makes many test cases fail with the following error: java.lang.NoSuchMethodException: org.apache.pig.PigServer.compilePp() As for indentation, can you please use 4-spaces instead of tabs? Tabs make indentation look funny at several places. A couple of awk/sed commands should do the job. Otherwise, looks great to me too. Thank you Achal for the wonderful work! Attached test_failures.txt. Hi all, I've reuploaded the patch with all of the changes that Julien suggested as well as accounting for the comments from Mark. The test cases should be ok now (hopefully!) as we changed build.xml to package the META-INF folder and I changed the compilePp() issue. Cheolsoo Park Can you give this a look when you have time, and run the test suite? I think everything should be fine now. Also if you can tell me what exactly I should be doing for indentation/in what files that'd be great. I seem to have some problems with the whitespace/indentation aspect so some pointers here would be awesome. Let me know if anything else seems wrong. Achal Here is the ReviewBoard for the new patch : I will kick off the unit tests with the new patch now. if you can tell me what exactly I should be doing for indentation/in what files that'd be great. This is not a big deal. 
Basically, you can run the following command to replace tab chars in your patch:

sed -i .orig '/^+/,/$/ s/<tab>/<4 whitespaces>/g' updated-8-22-2013-exec-engine.patch

Then, the modified patch can be applied with the "-l" option (--ignore-whitespace):

patch -l < updated-8-22-2013-exec-engine.patch

+1 LGTM If test-commit passes I think we can commit to TRUNK

All, so here is the list of failing tests:

org.apache.pig.test.TestGrunt.testScriptMissingLastNewLine
org.apache.pig.test.TestGrunt.testCheckScriptSyntaxWithSemiColonUDFErr
org.apache.pig.test.TestGrunt.testExplainDot
org.apache.pig.test.TestGrunt.testExplainOut
org.apache.pig.test.TestGrunt.testExplainBrief
org.apache.pig.test.TestGrunt.testExplainEmpty
org.apache.pig.test.TestGrunt.testExplainScript
org.apache.pig.test.TestInputOutputMiniClusterFileValidator.testValidationNeg
org.apache.pig.test.TestJobStats.testOneTaskReport
org.apache.pig.test.TestJobStats.testGetOuputSizeUsingNonFileBasedStorage1
org.apache.pig.test.TestJobStats.testGetOuputSizeUsingNonFileBasedStorage2
org.apache.pig.test.TestJobStats.testGetOuputSizeUsingNonFileBasedStorage3
org.apache.pig.test.TestJobStats.testGetOuputSizeUsingNonFileBasedStorage4
org.apache.pig.test.TestJobStats.testMedianMapReduceTime
org.apache.pig.test.TestJobStats.testGetOuputSizeUsingFileBasedStorage
org.apache.pig.test.TestMRExecutionEngine.testJobConfGeneration
org.apache.pig.test.TestMRExecutionEngine.testJobConfGenerationWithUserConfigs
org.apache.pig.test.TestMacroExpansion.test20
org.apache.pig.test.TestMacroExpansion.test21
org.apache.pig.test.TestMacroExpansion.test22
org.apache.pig.test.TestMacroExpansion.test23
org.apache.pig.test.TestMacroExpansion.test32
org.apache.pig.test.TestMacroExpansion.test33
org.apache.pig.test.TestMacroExpansion.test34
org.apache.pig.test.TestMacroExpansion.test35
org.apache.pig.test.TestMacroExpansion.testCommentInMacro
org.apache.pig.test.TestMacroExpansion.testNegativeNumber
org.apache.pig.test.TestMacroExpansion.typecastTest
org.apache.pig.test.TestMacroExpansion.testFilter
org.apache.pig.test.TestMapSideCogroup.testFailure2
org.apache.pig.test.TestMergeJoinOuter.testFailure
org.apache.pig.test.TestPigRunner.testEmptyFile
org.apache.pig.test.TestScriptLanguage.testSysArguments
org.apache.pig.test.TestShortcuts.testExplainShortcutNoAlias
org.apache.pig.test.TestShortcuts.testExplainShortcutNoAliasDefined

I prefer fixing them beforehand to fixing them afterward. Although none of these failures is serious (I believe), can we have a couple more days before committing Achal's patch? I will make sure it gets committed into trunk because I definitely need it for a Tez branch. Thoughts?

Cheolsoo Park Thanks a lot for running the test suite! It's good to see where the patch is failing. I definitely agree that all of these need to be investigated before the patch gets anywhere. I have some ideas about a few of the test cases; it looks to be some minor stuff with JobStats and the way Explain works now, which I have to look into. The rest I can't really think of off the top of my head, but I'll give it a shot. I'll report back with some more findings as soon as possible.

In the Pig-on-Tez meeting at LinkedIn we decided to do Tez work on a branch, and that Cheolsoo will initiate a conversation thread on the mailing list for it and take up the task of creating the branch. Tez is relatively new and unstable, so it will be wise to not start with code directly on trunk. Hive is also doing their Tez work on a branch. Cheolsoo had a question as to whether we should commit this to trunk and branch after that. I would prefer PIG-3419 to also be put in the branch and not checked into trunk. It makes a lot of changes to the exceptions thrown, removes public methods, etc., and that might cause backward incompatibility during runtime with code compiled with previous versions of Pig. All that needs to be figured out and fixed. So it might not be a good idea to get this patch directly into trunk. Thoughts?
Rohini, I want to reiterate that this patch has NO Tez dependencies (if it does, that's a bug). The intention is not to make Tez possible. It's to make pluggable execution engines possible; and I do not want that functionality to be tied to a Tez branch that will be unstable and in heavy development for the foreseeable future. This work will be immediately useful for the Spork (Pig on Spark) branch, for example. Also, it allows people to work with new runtimes without modifying Pig. So Pig-on-Tez doesn't even have to be done as a branch of this project; someone can go and experiment completely independently. For these reasons, I would like it in trunk.

You make a great point about the danger of changing exceptions, public methods, etc. I believe that most of these are project-public, and annotated as such. Do you have specific methods you are concerned about? Ideally we would change as little as possible for the end user. Dmitriy

I think the reason we wanted it on the Tez branch is that it might evolve with the Tez implementation, and so we would merge the updated code back when Tez is ready. Since there are no plans for any additional backend, is there a need to apply this to trunk sooner rather than later?

Olga, the first commit to the spork branch is from 2012. (the default branch on my github is "spork"). The advantage of having the execution engine abstraction in trunk is that it allows running experimental Pig execution engine implementations like Tez or Spark on an official release of Pig without having to build from a specific branch. The execution engine implementations themselves are fairly independent of Pig and do not need to be maintained in a Pig branch. If the ExecutionEngine abstraction evolves over time, that can be done in trunk and can be merged independently of the Tez implementation itself.

I am uploading a new patch that includes the following changes:

- Fixes most test cases (issues with JobStats and Explain).
- Removes "src/META-INF/services/org.apache.pig.backend.executionengine.ExecType" because it's a duplicate. (Probably it was added by mistake.)
- Renames TestJobStats.java to TestMRJobStats.java since it tests MRJobStats.
- Fixes a bunch of Java warnings.

The diff from Achal's last patch can be viewed here. I just kicked off the unit tests again and will let you know how it goes. Thanks!

I don't have a very strong opinion on this, so whatever you guys decide is fine with me. I think if it does evolve as part of Tez, at least some changes are likely to sneak into the Tez branch without going to trunk, so they might diverge for a while, but if we are ok to take that chance, that's fine.

Kicked off full unit tests now.

It makes a lot of changes to the exceptions thrown, removes public methods, etc., and that might cause backward incompatibility during runtime with code compiled with previous versions of Pig.

I am listing all the backward-incompatible changes made to the public API. Hopefully, this helps us estimate the impact.

1. PigServer constructor

-    public PigServer(String execTypeString) throws ExecException, IOException {
-        this(ExecType.fromString(execTypeString));
+    public PigServer(String execTypeString) throws PigException {
+        this(addExecTypeProperty(PropertiesUtil.loadDefaultProperties(), execTypeString));
+    }

We can revert PigException back to EE and IOE for this constructor (and other new constructors). Not hard to fix.

2. PigServer.explain()

     public void explain(String alias,
                         String format,
                         boolean verbose,
                         boolean markAsExecute,
                         PrintStream lps,
-                        PrintStream pps,
-                        PrintStream eps) throws IOException {
+                        PrintStream eps,
+                        File dir,
+                        String suffix) throws IOException {

The method signature changed. Although it's possible that someone uses this method in their applications, there is another method that wraps this one (i.e. public void explain(String alias, PrintStream stream)), and that one is more likely to be used.
3. Name changes of several public classes:

- JobStats to MRJobStats
- PigStatsUtil to MRPigStatsUtil
- ScriptState to MRScriptState
- HExecutionEngine to MRExecutionEngine

4. PigContext.getExecutionEngine()

-    public HExecutionEngine getExecutionEngine() {
+    public ExecutionEngine getExecutionEngine() {

Result of the class name change.

5. SimplePigStats class is moved from org.apache.pig.tools.pigstats to org.apache.pig.tools.pigstats.mapreduce.

6. Launcher class is moved from org.apache.pig.backend.hadoop.executionengine.mapReduceLayer to org.apache.pig.backend.hadoop.executionengine.

7. JobControlCompiler constructor

-    public JobControlCompiler(PigContext pigContext, Configuration conf) throws IOException {
+    public JobControlCompiler(PigContext pigContext, Configuration conf) {

The IOE was redundant in the first place. So we should remove it.

Here is my estimation:

1. Major - But we can fix it.
2. Minor - Unlikely used in user code.
3. Minor - Unlikely used in user code.
4. Minor - Unlikely used in user code.
5. Minor - Unlikely used in user code.
6. Minor - Unlikely used in user code.
7. Minor - Unlikely used in user code.

As long as we fix #1, I think we can go ahead and commit the patch to trunk. What do you think?

All unit tests pass.

Cheolsoo thanks so much for helping with this work! I think #1 and #3 are the issues (#3 will affect Ambrose and probably Lipstick). We can take care of updating Ambrose if we need to. Julien Le Dem do you think this is an important enough semantic change to force advanced clients such as Ambrose to rewrite / recompile? Or should we roll that part back?

Bikas Saha thanks for the heads up, we'll need to update the pig-on-tez branch. Fortunately it doesn't affect this patch, since it's framework-independent and has no Tez references.

Looks like this jira wasn't the appropriate one to comment on. Is there a different umbrella jira for Pig on Tez that I can track and post comments on? Bikas Saha

Thanks for the heads-up Bikas!
This JIRA is not concerned with the Tez integration for Pig and is simply the abstraction in Pig to allow for alternate ExecutionEngines in Pig. But we will certainly change this on the Tez integration side of things.

Thanks a lot Cheolsoo Park for continuing this! I think everything looks good from my end. I can certainly see why we may want to keep this on a different branch until everything is finalized. Certain things may still need more work. For example, OutputStats is not completely abstracted out, as it still has references to POStore, which is an MR implementation construct. ScriptState/PPNL/JobStats may still need more abstraction (especially PPNL) and reworking to incorporate a new ExecutionEngine abstraction. I think what we have done here is the minimum foundation for an abstraction though, and it would be appropriate to put into trunk, but these are not my decisions to make.

With regard to public methods that were changed, I don't think most of them are a big deal, besides, as Cheolsoo said, the PigServer throwing PigException. I never thought IOException was a good exception to throw, but I think reverting PigServer back to IOException, as it is user-facing code, is not a big deal. The rest of the method signature changes shouldn't be worrisome because most of them are internal to the project. However, the change from JobStats to MRJobStats, while necessary (as each ExecutionEngine would have its own type of JobStats it would present to the end user), could be problematic because it is user-facing code and would probably break people who were previously using JobStats. That I think is the most important thing to keep in mind. The task of making the PPNL and JobStats clearly tied to the ExecutionEngine should be thought through also.

Cheolsoo Park: thanks a lot for looking into this. Here are my thoughts:

1. Let's change it back.
2. 4. 5. 6. 7. are either internal to Pig or necessary to add the execution engine abstraction.
3.
JobStats still exists, but the MR-specific part is split into MRJobStats, which extends JobStats. Same thing for PigStatsUtil and ScriptState. Those classes are not disappearing, but the MR-specific part is abstracted out. HExecutionEngine could be renamed back to what it was, but this is again what is becoming the new abstraction. Unfortunately, tools like Ambrose and Lipstick depend on the MR-specific parts of Pig and look at the internals. This patch is a necessary change so that those tools can work independently of the execution engine in the future. The changes to Ambrose and Lipstick should be minimal though with this patch. But yes, they would suffer from some incompatibility; then again, there is no way around it when a tool looks inside the execution engine internals. I think we should revert 1. and commit the patch.

I agree with all that is said, but there is no need to rename HExecutionEngine back. It doesn't semantically make sense and I don't think that anybody was directly interacting with it outside of the test cases? Whatever changes to Ambrose and Lipstick there are should be communicated clearly also. I have noted some issues with PPNL before with regard to abstraction – namely, Pig provides the MROperPlan to the listeners, which is not relevant in a different execution engine. Julien suggested this should be fixed in a follow-up patch. This will most certainly affect Ambrose and Lipstick, so we should be cautious in that regard. Bill Graham looping you in for Ambrose.

+1

Cheolsoo Park LGTM!

Committed to trunk: Thank you Achal!

This breaks the Oozie build, which uses PigStatsUtil.{HDFS_BYTES_WRITTEN, MAP_INPUT_RECORDS, MAP_OUTPUT_RECORDS, REDUCE_INPUT_RECORDS, REDUCE_OUTPUT_RECORDS}. We need to provide a backward-compatible way.

I am having second thoughts about having this patch in 0.12 and am wondering whether we should revert it and keep it only in the Tez branch. Two reasons for that:

- Seeing PIG-3457, which was my initial concern.
- Changing interfaces to be backward compatible is very tricky, and the workarounds are hacky or ugly. Faced that with PIG-3255. And this patch introduces a lot of changes and new interfaces for the purpose of future work which is yet to take off from the POC stage. The interfaces are bound to evolve when actual implementations are done, or become different from what is in this patch if we end up finding cleaner abstractions. Putting something in a release which we are not very sure of does not seem like a good idea. Someone who wants to do experimental work can start off with the Tez branch since it is experimental work anyways. Basically I just want to keep experimentation code separate from production code since we are talking about releasing Pig 0.12. Thoughts?

This is not just for Tez. The point is to enable POC work (in branches, forks, etc.) and not have each such attempt redo all the work in this ticket. It's the same reason we provide things like pluggable LoadFuncs: to let people work on things they want to load that we didn't think of loading. We should certainly work to stabilize 0.12 and fix issues like PIG-3457. We should at least annotate the new interfaces as evolving so we don't need to evolve them in a backward-compatible way just yet.

Dmitriy, I am not arguing about adding interfaces to enable other frameworks. If you take the LoadFuncs case, a lot of design and work went into that () in coming up with and finalizing the interfaces, and there were Loaders and Storers changed or written with the design of the new interfaces before the new interfaces were released. In this case we have added the interfaces without doing something like that and are putting it in a release. That was my concern. As Bill suggests, if we mark and agree that the new interfaces are unstable and bound to change and no backward compatibility will be provided, then I am good.
Because when we actually get to the Tez or Spork implementation (not POC), I am sure these are bound to change, or more new methods will be required.

+1 to marking the interfaces as evolving.

Reverted in 0.12.

Here is the current patch as-is, with the changes to /src and /test. Please let me know if there are any questions or other requests. I expect these changes will incite some good discussion about how to achieve a pluggable execution engine!
https://issues.apache.org/jira/browse/PIG-3419
This article will describe how to utilise Windows Forms controls outside of .NET. In a recent MSDN magazine article on .NET Interop available here, various ways of exposing .NET objects to 'legacy' environments are discussed, including the exposure of Windows Forms controls as ActiveX controls. The problem is that the goalposts have moved since the article was written, as Beta 2 is now available, and unfortunately this support has been removed - see this posting on the .NET list at. The following image shows a control, written purely within .NET, hosted within an ActiveX control container - in this instance tstcon32.exe.

As Beta 1 supported this facility, and being somewhat inquisitive, I decided to see if I could find a way to expose controls anyway. The attached project creates the 'Prisoner' control, which won't set the world on fire but does show the main things you need to do in order to get a .NET control up & running within VB6. CAVEAT: As this support has been dropped from Beta 2 of .NET, don't blame me if it fries your PC or toasts the cat. Now that's out of the way, how's it done?

using System.Runtime.InteropServices;
using System.Text;
using System.Reflection;
using Microsoft.Win32;

[ProgId("Prisoner.PrisonerControl")]
[ClassInterface(ClassInterfaceType.AutoDual)]

This assigns the ProgID, and also defines that the interface exposed should be 'AutoDual' - this crufts up a default interface for you from all public, non-static members of the class. If this isn't what you want, use one of the other options. If you're using VB.NET, you also need a strong-named assembly. Curiously in C# you don't - and it seems to be a feature of the environment rather than a feature of the compiler or CLR.

CodeBase is interesting - not only for .NET controls. It defines a URL path to where the code can be found, which could be an assembly on disk as in this instance, or a remote assembly on a web server somewhere.
When the runtime attempts to create the control, it will probe this URL and download the control as necessary. This is very useful when testing .NET components, as the usual caveat of residing in the same directory (etc) as the .EXE does not apply.

The second function will remove the registry entries added when (if) the class is unregistered - it's always a good suggestion to tidy up as you go.

Now you are ready to compile & test your control. For this example I have chosen tstcon32.exe, which is available with the installation of .NET. The main reason I've used this rather than VB6 is that I don't have VB6 anymore. First up you need to insert your control, so to do that choose Edit -> Insert New Control, and choose your control from the dropdown... This will result in a display as shown at the top of the article, if you're following along with my example code.

The example control only includes one method, 'Question'. To test this within TstCon32, choose Control -> InvokeMethods from the menu, and select the method you want to call. Note that because I defined the interface as AutoDual, I get gazillions of methods. If you implement an interface and expose this as the default interface then the list of methods will be more manageable. Clicking the 'Invoke' button will execute the method, which in this instance displays the obligatory message box.

Dropping support for creating ActiveX controls from Windows Forms controls is a pain, and one decision I wish Microsoft had not made. This article presents one way of exposing .NET controls as ActiveX controls, and it seems to work OK. Having said that, I've not exhaustively tested this, and who knows what bugs might be lurking in there. I haven't delved into events yet, nor property change notifications, so there's some fun to be had there if you like that sort of thing.
The .NET framework truly is the best thing since sliced bread, but the lack of support for creating ActiveX controls from Windows Forms controls is inconvenient. There are many applications out there (ours included) which can be extended with ActiveX controls. It would be nice, given the rest of the support in the framework, to be able to expose Windows Forms controls to ActiveX containers, and maybe someday the support will be available.

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here

From the comments, regarding IOleInPlaceObjectImpl, DomoG wrote: "This unsupported quasi-solution is to call CoEEShutDown to force shutdown of the CLR. As actually visible in the debugger, calling CoEEShutDown forces the release of the problematic references, along with all managed and unmanaged resources. Therefore the correct reference count is restored, and no native objects assert."
https://www.codeproject.com/Articles/1256/Exposing-Windows-Forms-Controls-as-ActiveX-control?PageFlow=FixedWidth
2.8 Using library header files

When using a library it is essential to include the appropriate header files, in order to declare the function arguments and return values with the correct types. Without declarations, the arguments of a function can be passed with the wrong type, causing corrupted results. The following example shows another program which makes a function call to the C math library. In this case, the function pow is used to compute the cube of two (2 raised to the power of 3):

#include <stdio.h>

int main (void)
{
  double x = pow (2.0, 3.0);
  printf ("Two cubed is %f\n", x);
  return 0;
}

However, the program contains an error--the #include statement for ‘math.h’ is missing, so the prototype double pow (double x, double y) given there will not be seen by the compiler. Compiling the program without any warning options will produce an executable file which gives incorrect results:

$ gcc badpow.c -lm
$ ./a.out
Two cubed is 2.851120   (incorrect result, should be 8)

The results are corrupted because the arguments and return value of the call to pow are passed with incorrect types.(6) This can be detected by turning on the warning option -Wall:

$ gcc -Wall badpow.c -lm
badpow.c: In function `main':
badpow.c:6: warning: implicit declaration of function `pow'

This example shows again the importance of using the warning option -Wall to detect serious problems that could otherwise easily be overlooked.
http://www.network-theory.co.uk/docs/gccintro/gccintro_19.html
Student class with a course list

Budget: $5-10 USD

We want to keep track of students and the classes they have taken. For this assignment write a program for keeping a course list for a student. (Next assignment, we will add a class that keeps a collection of students). The solution to this problem will require implementation of several classes. A Student has a name and a list of courses. Each Course is represented by another class. A Course class has the name of the course, the number of units, and the grade the student received. Provide a toString method for this class. (You can look at the Employee class to see what a toString method should look like). The courses for a student are kept in a CourseList. A CourseList has an array list of Course objects and methods to add a course, to get the ith course, to get the number of courses, and to calculate the grade point average of the courses. Student has a method to "print" the student name, gpa and the list of courses.

It is not good object-oriented programming to have methods in an object that actually print to the console window. So your print method should return a formatted string. The caller of the method can then decide what to do. It could print to the console window or to a JOptionPane, for instance. Write an application to test the Student class. Call it StudentTest. You may need to add other methods to these classes or create other classes. Here is a sample application and its output. But write your own and be sure to test all the methods.
/** An application to test the student class */
public class StudentTest {
    public static void main (String[] args) {
        Student s = new Student("Bill");
        // NOTE: the job board stripped every dotted identifier from the
        // original listing ("[url removed, login to view]"); the calls below
        // are reconstructed from the spec and the output shown.
        s.addCourse("cis27C", 4, "A");
        s.addCourse("pe10", 1, "a");
        Course c = new Course("eng1a", 3, "b");
        s.addCourse(c);
        System.out.println(s.print());
    }
}

Output:

Student:Bill gpa: 3.625
Course[name:CIS27C units:4 grade:A]
Course[name:PE10 units:1 grade:A]
Course[name:ENG1A units:3 grade:B]

Use an object-oriented
CC-MAIN-2018-05
refinedweb
370
73.07
Compares two floating-point values without risking an exception

#include <math.h>
int isgreater ( x , y );
int isgreaterequal ( x , y );

The macro isgreater( ) tests whether the argument x is greater than the argument y, but without risking an exception. Both operands must have real floating-point types. The result of isgreater( ) is the same as the result of the operation (x) > (y), but that operation could raise an "invalid operand" exception if either operand is NaN ("not a number"), in which case neither is greater than, equal to, or less than the other.

The macro isgreater( ) returns a nonzero value (that is, true) if the first argument is greater than the second; otherwise, it returns 0. The macro isgreaterequal( ) functions similarly, but corresponds to the relation (x) >= (y), returning true if the first argument is greater than or equal to the second; otherwise 0.

/* Can a, b, and c be three sides of a triangle? */
double a, b, c, temp;

/* First get the longest "side" in a. */
if ( isgreater( b, a ) ) {
    temp = a;  a = b;  b = temp;
}
if ( isgreater( c, a ) ) {
    temp = a;  a = c;  c = temp;
}

/* Then see if a is longer than the sum of the other two sides: */
if ( isgreaterequal( a, b + c ) )
    printf( "The three numbers %.2lf, %.2lf, and %.2lf "
            "are not sides of a triangle.\n", a, b, c );

See also: isless( ), islessequal( ), islessgreater( ), isunordered( )
CC-MAIN-2018-43
refinedweb
228
67.69
For data that is not normally distributed, the equivalent test to the ANOVA test (for normally distributed data) is the Kruskal-Wallis test. This tests whether all groups are likely to be from the same population.

import numpy as np
from scipy import stats

grp1 = np.array([69, 93, 123, 83, 108, 300])
grp2 = np.array([119, 120, 101, 103, 113, 80])
grp3 = np.array([70, 68, 54, 73, 81, 68])
grp4 = np.array([61, 54, 59, 4, 59, 703])

h, p = stats.kruskal(grp1, grp2, grp3, grp4)

print ('P value of there being a significant difference:')
print (p)

OUT:

P value of there being a significant difference:
0.013911742382969793
https://pythonhealthcare.org/2018/04/13/56-statistics-multiple-comparison-of-non-normally-distributed-data-with-the-kruskal-wallace-test/
CC-MAIN-2018-51
refinedweb
187
58.08
import "android.googlesource.com/platform/tools/gpu/atom" Package atom provides the fundamental types used to describe a capture stream. const NoID = ^ID(0) NoID is used when you have to pass an ID, but don't have one to use. func Register(ty TypeInfo) Register registers the atom type ty with the atom registry. If another atom is already registered with the same type identifer then Register will panic. type Atom interface { binary.Object // API returns the graphics API this atom belongs to. API() gfxapi.API // TypeID returns the identifier of this atom's type. TypeID() TypeID // Flags returns the flags of the atom. Flags() Flags // Mutate mutates the State using the atom. Mutate(*gfxapi.State) error } Atom is the interface implemented by all objects that describe an single event in a capture stream. Typical implementations of Atom describe an application's call to a graphics API function or provide meta-data describing observed memory or state at the time of capture. Each implementation of Atom should have a unique and stable TypeID to ensure binary compatibility with old capture formats. Any change to the Atom's binary format should also result in a new TypeID. func New(id TypeID) (Atom, error) New builds a new instance of the atom with type identifier id. The type must have previously been registered with Register. type Flags uint32 Flags is a bitfield describing characteristics of an atom. const ( DrawCall Flags = 1 << iota EndOfFrame ) func (f Flags) IsDrawCall() bool IsDrawCall returns true if the atom is a draw call. func (f Flags) IsEndOfFrame() bool IsEndOfFrame returns true if the atom represents the end of a frame. type Group struct { binary.Generate Name string // Name of this group. Range Range // The range of atoms this group (and sub-groups) represents. SubGroups GroupList // All sub-groups of this group. } Group represents a named, contiguous span of atoms with support for sparse sub-groups. 
Groups are ideal for expressing nested hierarchies of atoms. Groups have the concept of items. An item is either an immediate sub-group, or an atom identifier that is within this group's span but outside of any sub-group. For example a Group spanning the atom identifier range [0 - 9] with two sub-groups spanning [2 - 4] and [7 - 8] would have the following tree of items: Group │ ├─── Item[0] ─── Atom[0] │ ├─── Item[1] ─── Atom[1] │ ├─── Item[2] ─── Sub-group 0 │ │ │ ├─── Item[0] ─── Atom[2] │ │ │ ├─── Item[1] ─── Atom[3] │ │ │ └─── Item[2] ─── Atom[4] │ ├─── Item[3] ─── Atom[5] │ ├─── Item[4] ─── Atom[6] │ ├─── Item[5] ─── Sub-group 1 │ │ │ ├─── Item[0] ─── Atom[7] │ │ │ └─── Item[1] ─── Atom[8] │ └─── Item[6] ─── Atom[9] func (*Group) Class() binary.Class func (g Group) Count() uint64 Count returns the number of immediate items this group contains. func (g Group) Index(index uint64) (baseAtomID ID, subgroup *Group) Index returns the item with the specified index. If the item refers directly to an atom identifier then the atom identifier is returned in baseAtomID and subgroup is assigned nil. If the item is a sub-group then baseAtomID is returned as the lowest atom identifier found in the sub-group and subgroup is assigned the sub-group pointer. func (g Group) IndexOf(atomID ID) uint64 IndexOf returns the item index that refers directly to, or contains the given atom identifer. func (g *Group) Insert(atomID ID, count int) Insert adjusts the spans of this group and all subgroups for an insertion of count elements at atomID. func (g Group) String() string String returns a string representing the group's name, range and sub-groups. type GroupList []Group GroupList is a list of Groups. Functions in this package expect the list to be in ascending atom identifier order, and maintain that order on mutation. func (l *GroupList) Add(start, end ID, name string) Add inserts a new atom group into the list with the specified range and name. 
If the new group does not overlap any existing groups in the list then it is inserted into the list, keeping ascending atom-identifier order.

If the new group sits completely within an existing group then this new group will be added to the existing group's sub-groups.

If the new group completely wraps one or more existing groups in the list then these existing groups are added as sub-groups to the new group and then the new group is added to the list, keeping ascending atom-identifier order.

If the new group partially overlaps any existing group then the function will panic.

func (l GroupList) Copy(to, from, count int)

Copy copies count groups within the list.

func (l GroupList) GetSpan(index int) interval.U64Span

GetSpan returns the atom identifier span for the group at index in the list.

func (l *GroupList) IndexOf(atomID ID) int

IndexOf returns the index of the group that contains the atom identifier or -1 if not found.

func (l GroupList) Length() int

Length returns the number of groups in the list.

func (l *GroupList) Resize(length int)

Resize adjusts the length of the list.

func (l GroupList) SetSpan(index int, span interval.U64Span)

SetSpan sets the atom identifier span for the group at index in the list.

func (l GroupList) String() string

String returns a string representing all groups in the group list.

type ID uint64

ID is the index of an atom in an atom stream.

type IDSet map[ID]struct{}

IDSet is a set of IDs.

func (s *IDSet) Add(id ID)

Add adds id to the set. If the id was already in the set then the call does nothing.

func (s IDSet) Contains(id ID) bool

Contains returns true if id is in the set, otherwise false.

func (s *IDSet) Remove(id ID)

Remove removes id from the set. If the id was not in the set then the call does nothing.

type List []Atom

List is a list of atoms.

func (l *List) Add(a Atom)

Add adds a to the end of the atom list.

func (l *List) AddAt(a Atom, id ID)

AddAt adds a to the list before the atom at id.
func (l *List) Clone() List

Clone makes and returns a shallow copy of the atom list.

func (l *List) Decode(d binary.Decoder) error

Decode decodes the atom list using the specified decoder.

func (l *List) Encode(e binary.Encoder) error

Encode encodes the atom list using the specified encoder.

func (l *List) WriteTo(w Writer)

WriteTo writes all atoms in the list to w, terminating with a single EOS atom.

type Observation struct {
    binary.Generate
    Range      memory.Range // The memory range that was observed.
    ResourceID binary.ID    // The resource identifier holding the memory that was observed.
}

Observation is an Atom describing a region of application space memory that was observed at capture time.

func (a *Observation) API() gfxapi.API

Atom compliance

func (*Observation) Class() binary.Class

func (a *Observation) Flags() Flags

func (a *Observation) Mutate(s *gfxapi.State) error

func (a *Observation) String() string

func (a *Observation) TypeID() TypeID

type Range struct {
    binary.Generate
    Start ID // The first atom within the range.
    End   ID // One past the last atom within the range.
}

Range describes an interval of atoms in a stream.

func (*Range) Class() binary.Class

func (i Range) Contains(atomID ID) bool

Contains returns true if atomID is within the range, otherwise false.

func (i Range) First() ID

First returns the first atom identifier within the range.

func (i Range) Last() ID

Last returns the last atom identifier within the range.

func (i Range) Length() uint64

Length returns the number of atoms in the range.

func (i Range) Range() (start, end ID)

Range returns the start and end of the range.

func (i *Range) SetSpan(span interval.U64Span)

SetSpan sets the start and end of the range using a U64Span.

func (i Range) Span() interval.U64Span

Span returns the start and end of the range as a U64Span.

func (i Range) String() string

String returns a string representing the range.

type RangeList []Range

RangeList is a list of atom ranges.
func (l RangeList) Copy(to, from, count int)

Copy copies count ranges within the list.

func (l RangeList) GetSpan(index int) interval.U64Span

GetSpan returns the atom identifier span for the range at index in the list.

func (l RangeList) Length() int

Length returns the number of ranges in the list.

func (l *RangeList) Resize(length int)

Resize adjusts the length of the list.

func (l RangeList) SetSpan(index int, span interval.U64Span)

SetSpan sets the atom identifier span for the range at index in the list.

type Resource struct {
    binary.Generate
    ResourceID binary.ID // The resource identifier holding the memory that was observed.
    Data       []byte    // The resource data
}

Resource is an Atom that embeds a blob of memory into the stream. These atoms are typically only used for .gfxtrace files as they are stripped from the stream on import and placed into the database.

func (a *Resource) API() gfxapi.API

Atom compliance

func (*Resource) Class() binary.Class

func (a *Resource) Flags() Flags

func (a *Resource) Mutate(s *gfxapi.State) error

func (a *Resource) String() string

func (a *Resource) TypeID() TypeID

type Transformer interface {
    // Transform takes a given atom and identifier and writes out a new atom and
    // identifier to the output atom Writer. Transform must not modify the atom in
    // any way.
    Transform(id ID, atom Atom, output Writer)

    // Flush is called at the end of an atom stream to cause Transformers that
    // cache atoms to send any they have stored into the output.
    Flush(output Writer)
}

Transformer is the interface that wraps the basic Transform method.

func Transform(name string, f func(id ID, atom Atom, output Writer)) Transformer

Transform is a helper for building simple Transformers that are implemented by function f. name is used to identify the transform when logging.

type Transforms []Transformer

Transforms is a list of Transformer objects.
func (l *Transforms) Add(t ...Transformer)

Add is a convenience function for appending the list of Transformers t to the end of the Transforms list.

func (l Transforms) Transform(atoms List, out Writer)

Transform sequentially transforms the atoms by each of the transformers in the list, before writing the final output to the output atom Writer.

type TypeID uint16

TypeID is an atom type identifier. Each implementation of the Atom interface must have a unique type identifier. Any changes to the binary format of an atom must result in a new type identifier to maintain binary compatibility.

const TypeIDEos TypeID = 0xffff

TypeIDEos is used as a special end of stream marker.

const TypeIDObservation TypeID = 0xfffe

const TypeIDResource TypeID = 0xfffd

type TypeInfo struct {
    ID   TypeID      // The type identifier for the atom.
    New  func() Atom // The function for creating new instances of the atom type.
    Name string      // The name of the atom.
    Docs string      // The URL to the atom's documentation.
}

TypeInfo is the type information for a single Atom implementation.

type Writer interface {
    Write(id ID, atom Atom)
}

Writer is the interface that wraps the basic Write method. Write writes or processes the given atom and identifier. Write must not modify the atom in any way.
Splitting code into several files You have probably been noticing that your source code files have grown somewhat large and include a diverse range of features: class definitions, class method implementations, other functions, and a main() function. In future assignments you will define several classes, some of which may have nothing to do with each other. At some point, an organization scheme will be needed to reduce complexity. C++ offers a very simplistic technique for splitting code into files (sometimes called “modules” or “libraries” in other languages). The idea is simply to write code in several files, then “include” them all into the same file, giving the appearance (to the compiler) that all the code was written in one file. We have been doing this for some time, using #include <xyz>. Except for one minor change, this is the same technique we’ll use to split our programs into multiple files. A word of advice: Do not write your code in one file, then try to split it into several files later; students often try this and find it frustrating or impossible. Begin your project by creating several files… Understanding #include First, it’s important to see how simple #include really is. When the compiler is reading a file (e.g. myprogram.cpp) and it sees #include <xyz> (which may appear anywhere, though it must be found on a line all by itself), the compiler literally reads the file xyz (wherever that file is on your computer) and pastes its contents exactly where #include <xyz> appeared. So it’s a simple substitution: substitute #include <xyz> with the actual contents of the file xyz. The form of #include we will be using for our own files is #include "abc.h" where abc.h is a file we created. The format #include <abc.h> (with angle brackets) means the file abc.h will be found in some system directory, known to the compiler. The quoted format means the file will be found somewhere locally, probably in the same directory as our program code myprogram.cpp. 
Common practice for splitting code

Since #include does a simple substitution, we could (but won't) split a large program myprogram.cpp into two smaller files myprogramA.cpp and myprogramB.cpp, with myprogramB.cpp including myprogramA.cpp by placing the statement #include "myprogramA.cpp" at the top of the file myprogramB.cpp. I don't see anything particularly bad about this approach, but it is virtually unseen in real C++ programming.

Instead, we will include only "header files" into our "source files." Header files usually have the ending .h and source files usually have the ending .cpp. A header file contains only class and function declarations (a declaration is a statement that a class exists and has certain properties and methods, or that a function exists; neither the class methods nor the functions will be defined; that is, their implementations will not be provided in the header file).

Some included files may include other files, and may attempt to include files already included; we need to prevent repeated includes because the compiler is not happy when a class or function or variable of the same name is declared twice. Thus, in every header file (named blah.h in this example), we write the following at the top and bottom:

#ifndef BLAH_H
#define BLAH_H

...

#endif

This means "if the name BLAH_H is not already defined, then define it." If a file is included twice, then BLAH_H will be defined (by the first inclusion) so the entire "if-endif" will be skipped (which is the whole file, because the whole file is between the "if" and "endif"). Of course, BLAH_H can be anything; it could be FOO; we usually write FILE_H for the file named file.h so that we don't reuse names.

Example

Here is the Shape class and subclasses split into multiple files, plus a main file.
shape.h:

#ifndef SHAPE_H
#define SHAPE_H

class Shape {
public:
    double x;
    double y;
    virtual double area() = 0;
};

#endif

rectangle.h:

#ifndef RECTANGLE_H
#define RECTANGLE_H

#include "shape.h"

class Rectangle : public Shape {
public:
    double width;
    double height;
    double area();
};

#endif

rectangle.cpp:

#include "rectangle.h"

double Rectangle::area() {
    return width * height;
}

ellipse.h:

#ifndef ELLIPSE_H
#define ELLIPSE_H

#include "shape.h"

class Ellipse : public Shape {
public:
    double major_axis;
    double minor_axis;
    double area();
};

#endif

ellipse.cpp:

#include "ellipse.h"

double Ellipse::area() {
    return 3.1415926 * major_axis * minor_axis;
}

main.cpp:

#include <iostream>
#include "rectangle.h"
#include "ellipse.h"

using namespace std;

int main() {
    Rectangle r;
    r.width = 10;
    r.height = 15;
    r.x = 3;
    r.y = 2;
    cout << r.area() << endl;

    Ellipse e;
    e.major_axis = 3;
    e.minor_axis = 5;
    e.x = 14;
    e.y = 68;
    cout << e.area() << endl;

    return 0;
}

Compiling a project that has multiple files

The .h files don't need to be compiled; they will be included by the .cpp files. But the .cpp files do need to be compiled, each separately, and then "linked" together into a grand final program. How this is done depends on which tools you are using. If you are on the "command line" and using g++, you can do this:

g++ -c rectangle.cpp
g++ -c ellipse.cpp
g++ -c main.cpp
g++ -o myprogram rectangle.o ellipse.o main.o

The first three lines compile each .cpp file separately (producing a corresponding .o file). The fourth line links all the .o files together to create the final program.
Hi, On Mon, May 4, 2009 at 3:17 PM, Thomas Müller <thomas.mueller@day.com> wrote: > I propose to add a new 'generics generator class' called 'New' to > jackrabbit-jcr-commons. It would contain constructor methods for the > most commonly used collection objects. > > public class New { > public static <K, V> HashMap<K, V> hashMap() { > return new HashMap<K, V>(); > } > public static <T> ArrayList<T> arrayList() { > return new ArrayList<T>(); > } > public static <T> WeakReference<T> weakReference(T t) { > return new WeakReference<T>(t); > } > ... > } > > What do you think? Are there better solutions? I like the idea, removing clutter is good. A related common annoyance I run into is about creating an initialized collection: List<String> abc = Arrays.asList(new String[] { "a", "b", "c" }); or List<String> abc = new ArrayList() {{ add("a"); add("b"); add("c"); }}; This would be quite a bit more convenient with a utility method like this: List<String> abc = New.list("a", "b", "c"); Felix has a point about the drawbacks of such utility code, but in this case I tend to think that it's probably worth it. The potential savings in extra typing/reading are probably quite a bit larger than the amount of extra utility code needed. PS. This might also be a good idea for inclusion in the upcoming Commons Lang 3.0 version that'll be based on Java 5. BR, Jukka Zitting
Hi

The following stylesheet:

<xsl:stylesheet xmlns:xsl="";
<xsl:template
<xsl:copy>
<xsl:apply-templates
</xsl:copy>
</xsl:template>
<xsl:template
<xsl:namespace
<xsl:fallback>
<fallback/>
</xsl:fallback>
</xsl:namespace>
</xsl:template>
</xsl:stylesheet>

produces the following error:

$ saxon xsl-nam*.xml xsl-nam*.xsl
Error at xsl:namespace on line 12 of xsl-namespace.xsl:
XTSE0910: An xsl:namespace element with a select attribute must be empty
Failed to compile stylesheet. 1 error detected.

But XTSE0910 says: . It seems to be a bug?

Regards,

--drkm

___________________________________________________________________________

I've done a few experiments and I have to say I'm pretty shocked by the fact that the behaviour of the various Java routines seems to be completely unpredictable when given URIs containing invalid percent-encoding sequences. The java.net.URI constructors apparently don't complain; conversion to a URL doesn't complain; but dereferencing the URL using openConnection() or openStream() throws arbitrary errors such as StringOutOfBoundsException, IndexOutOfBounds, or IllegalArgumentException, etc.

The lazy solution would be to do a general catch of these unchecked exceptions and assume that they mean "invalid URI". I can't say I like that approach much.

There doesn't seem to be any method in Java for validating the percent-encoding, either. There's a URLDecoder class, but the specification explicitly says that you can't rely on it to detect invalid escape sequences. I think I may have to write my own decoder.

Please note that 'file%20%C1' should be an error. If characters are percent-encoded, then they should be encoded using the hex representation of the octets of the UTF-8 encoding of the character. C1 is not the UTF-8 encoding of any character, so it is not allowed.

It doesn't particularly worry me that there are some URIs which when dereferenced give you a directory listing.
The mapping of URIs to the contents of filestore is entirely platform-dependent. It would be nice if it were well documented, but beyond that, I think one has to accept that the system can do what it likes. The use of "+" to mean space is not a general feature of URI encoding. It's specific, I believe, to the HTTP protocol, and therefore has no place in this interface. Michael Kay > -----Original Message----- > From: saxon-help-bounces@... > [mailto:saxon-help-bounces@...] On Behalf > Of Abel Braaksma > Sent: 12 January 2007 16:49 > To: Mailing list for SAXON XSLT queries > Subject: Re: [saxon] More meaningful error for "Fatal error > during transformation: null" > > Michael Kay wrote: > > Looks to me as if java.net.URL.openStream() has returned an > > IllegalArgumentException, which isn't really supposed to > happen when > > you call a method with no arguments; there's certainly no > clue in this > > exception as to what's actually wrong, it's not an error condition > > documented in the spec of the Java method. In this > situation, there's > > very little Saxon can do other than showing the stackTrace. If it > > can't read the file, then it's supposed to throw an IOException. > > > > I don't think it makes sense for Saxon to catch unchecked > exceptions > > thrown by Java system calls unless it's clearly documented > that this > > is an expected behaviour. For the moment, let's concentrate on > > discovering (a) what the circumstances are under which this > happens, > > and (b) whether the Java runtime is actually behaving as it should. > > > > The "null" in the final message is purely because > getMessage() on the > > IllegalArgumentException returns null. I can only improve that > > cosmetically, I can't actually provide any extra > information because > > Java hasn't supplied any. 
> > I found a hidden feature in your Saxon, I believe: retrieving > directory contents does not require an extension function, > you can do that easily (see below) from plain legal XSLT (in > Saxon, that is). > > After spending some hours on this, I came to realize that it > must be something very tiny and very odd. Then I decided to > do some comparisons of my CVS history and Local History and > came to the conclusion that I used an incorrect escape > sequence (simple typo) in the url. > > I got curious and did some tests. You may be suprised of the > results (I was!). Note that none of the urls exist in reality. > > <xsl:stylesheet > xmlns: > > <xsl:template > > <!-- (A) index out of range: 99 --> > <xsl:value-of > <xsl:value-of > > <!-- (B)index out of range: 102 --> > <xsl:value-of > <xsl:value-of > > <!-- (C) null --> > <xsl:value-of > > <xsl:value-of > <xsl:value-of > > <!-- (D) returns true()!!! --> > <xsl:value-of > <xsl:value-of > > <!-- (E) returns content of current directory!!! --> > <xsl:value-of > <xsl:value-of > > <!-- (F) returns content of parent-parent directory --> > <xsl:value-of > > <!-- (G) Failed to load document %20%00 --> > <xsl:value-of > > <!-- (H) The filename, directory name, or volume > label syntax is incorrect --> > <xsl:value-of > > </xsl:template> > </xsl:stylesheet> > > > From what I know of the specs, the input to these functions > must be a valid URI. Some of the above clearly are not. I > propose a little improvement where the input argument to > these functions is checked for being a legal URI. If not, a > formal error can be raised, like : "First argument of xxx is > not a valid URI". > > I was also a little surprised that "legal" escape sequences > above xA0 did not work and raise an Index Out Of Bound error. > I thought these were allowed in URIs. But perhaps these are > excluded for security reasons? > > Apparently %00 is allowed but ignored. Not sure why that is. > Security again? 
I'd vote for disallowing, like any other below %20. > > Finally, I missed the feature that a '+' sign should be > interpreted as a space (which wasn't) and that a space in > itself is allowed (which shouldn't be). > > Regards, > -- Abel Braaksma > > > -------------------------------------------------------------- > ----------- > Take Surveys. Earn Cash. Influence the Future of IT Join > SourceForge.net's Techsay panel and you'll get the chance to > share your opinions on IT & business topics through brief > surveys - and earn cash > > &CID=DEVDEV > _______________________________________________ > saxon-help mailing list > saxon-help@... >
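[Not part of the archived thread.] As a language-neutral illustration of the strict decoder Kay describes — reject malformed percent escapes, then insist that the decoded octets form valid UTF-8 (which catches the 'file%C1' case) — here is a minimal sketch in Python; `strict_percent_decode` is a hypothetical helper, not an API from Saxon or the JDK:

```python
def strict_percent_decode(s):
    """Percent-decode s, raising on malformed escapes or non-UTF-8 octets.

    Sketch only: this mirrors the behaviour the thread asks for, not any
    existing library routine.
    """
    hexdigits = '0123456789abcdefABCDEF'
    out = bytearray()
    i = 0
    while i < len(s):
        if s[i] == '%':
            escape = s[i + 1:i + 3]
            # A percent sign must be followed by exactly two hex digits.
            if len(escape) != 2 or any(c not in hexdigits for c in escape):
                raise ValueError('malformed percent escape at index %d' % i)
            out.append(int(escape, 16))
            i += 3
        else:
            out.extend(s[i].encode('utf-8'))
            i += 1
    # The decoded octet sequence must be valid UTF-8; 0xC1 never is,
    # so this raises UnicodeDecodeError for inputs like 'file%C1'.
    return out.decode('utf-8')
```

With this, 'file%20name' decodes to 'file name', while 'file%C1' raises UnicodeDecodeError and a truncated escape like 'file%2' raises ValueError — well-defined failures instead of the arbitrary unchecked exceptions described above.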
While working in an academic research lab, I was often tasked with generating figures for myself or colleagues for use in publications and presentations. In looking for an alternative to the unwieldy Excel and the rather sterile GraphPad, I came upon Python's popular plotting library matplotlib. Not only is matplotlib free and open source, but I've found it can accomplish almost any plotting task I throw at it. I decided to put this notebook together to share with others what I've learned and to serve as a personal reference. This notebook covers how to take your processed data and turn it into a publication-ready plot using Python and matplotlib. It is meant to guide a user with no knowledge of matplotlib through the process of creating reasonably styled plots and figures that can be tweaked and adjusted as desired. To follow along, this notebook assumes you have Python and matplotlib installed. If you're running linux (I am running Arch Linux) then you can find these packages in your distro's repositories. If you are on Windows or OS X, I'd recommend using the Anaconda Scientific Python Distribution which provides either Python 2 or 3 bundled nicely with commonly used scientific packages and an easy-to-use package manager called conda. As of this writing I'm using Python 3.4.3 and matplotlib 1.4.3. You can also use pip to install the required packages from the Python Package Index if you'd rather go that route. Before starting any plotting task--especially one centered around creating a figure for others to interpret--it is extremely useful to first sketch out what you want the end product to look like. I do this for everything from a multi-panel figure to a simple bar chart. It helps you map out the story you want to tell with your data. Sketching it out also forces you to deal with layout and space requirements up front. In the academic realm, the best plots are usually the most simple ones. 
From experience, I've found there are two reasons for this: Therefore, the goal of the plots outlined here is to present data in a straightforward and understandable manner.

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

You can set global parameters using the rc module in matplotlib. This will keep our plots looking uniform. The module can be used to set other default parameters of your choosing. Here we'll set the font to DejaVu Sans at 10 pt size. We also set the default text for mathematical expressions to be the same as normal text. How this is useful will become apparent later.

from matplotlib import rc
# Set the global font to be DejaVu Sans, size 10 (or any other sans-serif font of your choice!)
rc('font',**{'family':'sans-serif','sans-serif':['DejaVu Sans'],'size':10})
# Set the font used for MathJax - more on this later
rc('mathtext',**{'default':'regular'})

IPython notebooks can output plots directly in line, a useful feature when you're iterating on and stylizing them. We'll be using this feature throughout the notebook.

%matplotlib inline
# The following %config line changes the inline figures to have a higher DPI.
# You can comment out (#) this line if you don't have a high-DPI (~220) display.
%config InlineBackend.figure_format = 'retina'

Move into your working directory. When you eventually export your plots this is the directory they will be in.

cd ~/Projects/Notebooks
/home/jon/Projects/Notebooks

The first step is to obtain a figure and axes object from matplotlib. There is more than one way to do this but I find the pyplot.subplots() function to be the most useful. Calling subplots() with no arguments returns a tuple consisting of a matplotlib.figure.Figure object and a matplotlib.axes.Axes object. We will use the methods of these two objects to configure the layout of our plots as well as plot the actual data. The idea here is that the figure object contains the axes object.
A single figure object can contain multiple axes objects, but each axes object can only be within one figure object. Make sense?

fig, ax = plt.subplots()

The pyplot.subplots() function can also be passed keyword arguments to change the number of axes (or "subplots", hence the function name) within the figure object. You can also specify the size of the figure object in inches or set other parameters. Take note however, that specifying more than one axes within the figure object makes pyplot.subplots() return a tuple consisting of the figure object and a 2x2 numpy.ndarray of axes object instances. The dimensions of the returned object mirror the layout, with the upper left having the indices (0,0).

fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(8,6))
# Iterate through the 2x2 array and set titles of each
# subplot to their location in the figure object
for (n_row, n_col), axes in np.ndenumerate(ax):
    axes.set_title('(%d, %d)' % (n_row, n_col))

# Here ax is a 2x2 numpy.ndarray of matplotlib.axes.Axes objects
type(ax), ax.shape

(numpy.ndarray, (2, 2))

Before we start plotting, we will generate some mock data using the numpy.random.normal function which returns random values from a normal distribution as defined by an input mean ($\mu$) and standard deviation ($\sigma$). You can think of this data as five experimental groups with 10 samples per group. We'll then place this data into a pandas.DataFrame object in which each column is a different experimental group and the indices represent sample number. These data frame objects have useful methods for summarizing the data contained within.
# For repeatable "random" data
np.random.seed(0)

# Specify the mean and standard deviation for each mock data group
data_specs = [(2, 2), (7, 1), (4, 2.5), (10, 0.5), (5.5, 0.1)]

# Generate data and place into a pandas DataFrame
data = [np.random.normal(mu, sigma, 10) for mu, sigma in data_specs]
data = pd.DataFrame(data).T
data.columns = ['Group_%s' % n for n in range(1,6)]

Here's a table of our data as we defined above. Five experimental groups (columns) consisting of ten samples each. Notice how the pandas.DataFrame can be displayed as a convenient table in an IPython notebook.

data

On to plotting! Here we'll plot the same data but in four different ways: as a line plot, as a scatter plot, as a bar chart, and as a box plot.

To do this we can use the various plotting methods of the axes objects. Since we have five different experimental groups here, we'll first plot the means of each group as a function of its group number (except for the boxplot, which automatically plots the median, IQR, range, and outliers). Most of matplotlib's plotting functions take at minimum x and y coordinates. Note that boxplot takes a list-like object in which each element is an array or list containing the raw data values. To get these values we use the pandas.DataFrame.values property.

fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(8,6))

# Get the means of each group
means = data.mean()

# Generate some mock x coordinates
x = np.arange(len(means))

# Makes a line plot of the means of each group
ax[0][0].plot(x, means)

# Makes a scatter plot of the means of each group
ax[0][1].scatter(x, means)

# Makes a bar plot of the means of each group
ax[1][0].bar(x, means)

# Makes a boxplot of the data values in each group
ax[1][1].boxplot(data.values, positions=x);

In the above figure, each plot shows one way of representing the same data. You'll notice that even though the same data is plotted, the x and y axis limits are not all the same. Let's change that and make them all have the same x and y limits.
We can do this by using the Axes.set_xlim() and Axes.set_ylim() methods of each axes object. For the sake of convenience, we can also flatten the array of axes for more compact code when iterating through it using the numpy.ndarray.flat method which returns a python iterator. Thank you numpy arrays. Let's set the x-axes to go from -1 to 5 and the y-axes to go from -5 to 15.

for axes in ax.flat:
    axes.set_xlim(-1, 5)
    axes.set_ylim(-5, 15)

Okay that looks decent for some plots but a little goofy for others like the bar chart. Let's fix that and only change the y limits for everything except the bar chart.

# Define variables for limits
xlims = (-1, 5)
ylims = (-5, 15)

# Change x and y limits
for i, axes in enumerate(ax.flat):
    if i == 2:  # Special case for the bar chart
        axes.set_ylim(0, 15)
    else:
        axes.set_ylim(ylims)
    axes.set_xlim(xlims)

That looks better. Now, let's break out each plot type to illustrate how we can manipulate some basic plot aspects:

The most basic plotting method Axes.plot() will simply draw a line between the points. To add some flair we can add some color, change the line style and thickness, and add markers at each data point. The clearest way to do this is by passing keyword arguments to Axes.plot(). Note: some keyword arguments can also be abbreviated for compactness (e.g. linestyle can be replaced with ls).

fig, ax = plt.subplots()
ax.plot(x, means, color='red', ls='--', marker='o')
ax.set_xlim(xlims)
ax.set_ylim(ylims)

(-5, 15)

Now let's throw in some errorbars. To do this we need to use a different plotting function: Axes.errorbar(). While axes.plot() will get you markers at your data points and lines between them, it does not provide functionality for creating errorbars. Passing the yerr keyword argument to Axes.errorbar() defines the errorbars to be added in the y-direction (there's also xerr which does what you'd expect). Your errorbar data should be 1:1 with your plotted data points.
Alternatively it can be structured as a 2 x n array where n is the number of data points being plotted (here n = 5). In this case, the first dimension of 2 represents the positive and negative values for your errorbars. There are also plenty of other keyword arguments that you can pass to errorbar to get your line plot looking nice. Here we'll use capsize (the width of the caps), capthick (the line thickness of the error bar caps) and ecolor (the color of the errorbars). fig, ax = plt.subplots() stdev = data.std() ax.errorbar(x, means, yerr=stdev, color='red', ls='--', marker='o', capsize=5, capthick=1, ecolor='black') ax.set_xlim(xlims) ax.set_ylim(ylims) (-5, 15) If we want to get fancy we can wrap these keyword arguments for styling the errorbars into a dict that we will use later with the other plots for consistency. error_kw = {'capsize': 5, 'capthick': 1, 'ecolor': 'black'} Scatter plots can be created using Axes.scatter(), which will simply place a marker at each x-y pair you pass into it. Again, you can style the plot by passing in keyword arguments. One of the cool things you can do with this plotting method is change the size of the markers (in points squared) using the s keyword argument. In our example we'll scale the marker size proportionally with our error measure. You can also change the transparency of the markers with the alpha keyword argument. Note that Axes.scatter() does not take some keyword arguments like linestyle or yerr. If you want to create a scatter plot with errorbars, you can do so by using Axes.errorbar() but omit the keyword argument for linestyle. 
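As a quick illustration of that combination (my own sketch, not a cell from the original notebook — the data here is freshly mocked), setting the line style to 'none' in Axes.errorbar() gives scatter-style markers with error bars and no connecting line:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Mock data standing in for the notebook's `means` and `stdev`
np.random.seed(0)
x = np.arange(5)
means = np.random.normal(5, 2, 5)
stdev = np.abs(np.random.normal(1, 0.3, 5))

fig, ax = plt.subplots()
# ls='none' suppresses the connecting line, leaving markers plus error bars
ax.errorbar(x, means, yerr=stdev, ls='none', marker='o', color='green',
            capsize=5, capthick=1, ecolor='black')
```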
fig, ax = plt.subplots() stdev = data.std() # Scale our marker size in square points with our error measure markersize = stdev * 100 ax.scatter(x, means, color='green', marker='o', s=markersize, alpha=0.5) ax.set_xlim(xlims) ax.set_ylim(ylims) (-5, 15) Bar charts are made using the Axes.bar() method and depart somewhat from the previous two plotting methods we've described in that there are a few more parameters we're going to want to set. The first is how wide to make each bar, which is set by the width keyword argument and is measured in x-coordinates. Next, if we're plotting error bars, we might only want to plot the positive ones, in which case the variable we pass to yerr will be a 2xn matrix with the first element being only zeros. Remember the keyword arguments we passed into Axes.errorbar()? We can use those here as Axes.bar() has an error_kw keyword argument for error styling that it passes to Axes.errorbar() under the hood. Finally, the default behavior of Axes.bar() is to place the left edge of each bar at its corresponding x-coordinate. To center the bar over the x-coordinate, we need to set the align keyword argument to 'center'. fig, ax = plt.subplots() # Width of each bar in x-coordinates width = 0.75 stdev = data.std() # Positive error bars only error = [np.zeros(len(stdev)), stdev] ax.bar(x, means, color='lightblue', width=width, yerr=error, error_kw=error_kw, align='center') ax.set_xlim(xlims) ax.set_ylim(0, 15) (0, 15) Box plots are made using the Axes.boxplot() method and are nice in that they show a great deal more information about the data you're plotting than a simple mean and standard deviation. Similar to bar charts, the width of each box plot can also be specified using the width keyword argument. The location of the boxplots are set with the positions keyword argument. In our example, we'll pass this argument our x-coordinates. 
Boxplots are slightly more complicated to style as they are made up of several components: the median, the IQR (or box), the whiskers (range of the data), the caps of the whiskers, and fliers (a.k.a. outliers). In matplotlib, each of these is treated as a matplotlib.lines.Line2D instance. Each of these can be styled by passing a dict of Line2D properties to Axes.boxplot() as specific keyword arguments (see the documentation for an exhaustive list). Here, we will make our boxes, whiskers, and caps solid colored lines while making our medians thicker lines of a different color. We'll also change our fliers to be an 'x' instead of a plus-sign.
fig, ax = plt.subplots()
# Define styling for each boxplot component
medianprops = {'color': 'magenta', 'linewidth': 2}
boxprops = {'color': 'black', 'linestyle': '-'}
whiskerprops = {'color': 'black', 'linestyle': '-'}
capprops = {'color': 'black', 'linestyle': '-'}
flierprops = {'color': 'black', 'marker': 'x'}
ax.boxplot(data.values, positions=x, medianprops=medianprops, boxprops=boxprops,
           whiskerprops=whiskerprops, capprops=capprops, flierprops=flierprops)
ax.set_xlim(xlims)
ax.set_ylim(ylims)
(-5, 15)
Now that we've styled some plots, let's wrap the stylings into some small helper functions to keep things tidy. This way we can call one function on each of the axes objects in the figure and not have to worry about plot-specific styling.
def custom_lineplot(ax, x, y, error, xlims, ylims, color='red'):
    """Customized line plot with error bars."""
    ax.errorbar(x, y, yerr=error, color=color, ls='--', marker='o', capsize=5, capthick=1, ecolor='black')
    ax.set_xlim(xlims)
    ax.set_ylim(ylims)
    return ax

def custom_scatterplot(ax, x, y, error, xlims, ylims, color='green', markerscale=100):
    """Customized scatter plot where marker size is proportional to error measure."""
    markersize = error * markerscale
    ax.scatter(x, y, color=color, marker='o', s=markersize, alpha=0.5)
    ax.set_xlim(xlims)
    ax.set_ylim(ylims)
    return ax

def custom_barchart(ax, x, y, error, xlims, ylims, error_kw, color='lightblue', width=0.75):
    """Customized bar chart with positive error bars only."""
    error = [np.zeros(len(error)), error]
    ax.bar(x, y, color=color, width=width, yerr=error, error_kw=error_kw, align='center')
    ax.set_xlim(xlims)
    ax.set_ylim(ylims)
    return ax

def custom_boxplot(ax, x, y, error, xlims, ylims, mediancolor='magenta'):
    """Customized boxplot with solid black lines for box, whiskers, caps, and outliers."""
    medianprops = {'color': mediancolor, 'linewidth': 2}
    boxprops = {'color': 'black', 'linestyle': '-'}
    whiskerprops = {'color': 'black', 'linestyle': '-'}
    capprops = {'color': 'black', 'linestyle': '-'}
    flierprops = {'color': 'black', 'marker': 'x'}
    ax.boxplot(y, positions=x, medianprops=medianprops, boxprops=boxprops,
               whiskerprops=whiskerprops, capprops=capprops, flierprops=flierprops)
    ax.set_xlim(xlims)
    ax.set_ylim(ylims)
    return ax

If the default matplotlib axes is not what you're looking for, you can change the appearance with a few lines of code. You can set the properties of each spine through Axes.spines which is a dict of each of the four spines (top, bottom, left, right). Similarly you can access the tick properties of each axis through Axes.xaxis and Axes.yaxis. Personally, I prefer having the axes spines only on the left and bottom of my plots with the x and y ticks on the outside edges.
def stylize_axes(ax): ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.xaxis.set_tick_params(top='off', direction='out', width=1) ax.yaxis.set_tick_params(right='off', direction='out', width=1) fig, ax = plt.subplots() stylize_axes(ax) Now let's generate our four plots using our custom plot and axes stylings.) for axes in ax.flat: stylize_axes(axes) Looks pretty good, although there are a few obvious things missing: Let's address these one by one. The label for a specific axis can be easily set using the Axes.set_xlabel() and Axes.set_ylabel() methods. Tick location and labeling are handled by Axes.set_xticks() and Axes.set_xticklabels() respectively (there are analogous methods for the y axis as well. Now let's update our styling helper function to update these settings. def stylize_axes(ax, title, xlabel, ylabel, xticks, yticks, xticklabels, yticklabels): """Customize axes spines, title, labels, ticks, and ticklabels.""" ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.xaxis.set_tick_params(top='off', direction='out', width=1) ax.yaxis.set_tick_params(right='off', direction='out', width=1) ax.set_title(title) ax.set_xlabel(xlabel) ax.set_ylabel(ylabel) ax.set_xticks(xticks) ax.set_yticks(yticks) ax.set_xticklabels(xticklabels) ax.set_yticklabels(yticklabels)) Hmm...what's up with the overlap? We can use Figure.tight_layout() to automatically resize the elements of our plots to eliminate overlap.): # Customize y ticks on a per-axes basis) fig.tight_layout() Much better! Now we're ready to export the figure object so we can use it in our publication. We can do this using Figure.savefig() and pass it the filename, resolution (in DPI), padding whitespace, and transparency (if desired). fig.savefig('Stylized Plots.png', dpi=300, bbox_inches='tight', transparent=True)
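The figure-assembly code elided above can be reconstructed as a sketch of the same four-panel pattern; the data, titles, and axis limits below are placeholders rather than the notebook's originals:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless rendering
import matplotlib.pyplot as plt

np.random.seed(0)
data = np.random.normal(5, 2, (10, 5))   # placeholder data
x = np.arange(5)
means = data.mean(axis=0)
stdev = data.std(axis=0)
xlims, ylims = (-1, 5), (-5, 15)

fig, ax = plt.subplots(2, 2, figsize=(8, 6))

# One plot type per panel, mirroring the four custom_* helpers
ax[0, 0].errorbar(x, means, yerr=stdev, color='red', ls='--', marker='o',
                  capsize=5, capthick=1, ecolor='black')
ax[0, 1].scatter(x, means, color='green', marker='o', s=stdev * 100, alpha=0.5)
ax[1, 0].bar(x, means, color='lightblue', width=0.75,
             yerr=[np.zeros(len(stdev)), stdev],
             error_kw={'capsize': 5, 'capthick': 1, 'ecolor': 'black'},
             align='center')
ax[1, 1].boxplot(data, positions=x)

titles = ['Line', 'Scatter', 'Bar', 'Boxplot']   # placeholder titles
for axes, title in zip(ax.flat, titles):
    axes.set_xlim(xlims)
    axes.set_ylim(ylims)
    axes.set_title(title)
    axes.spines['top'].set_visible(False)
    axes.spines['right'].set_visible(False)

fig.tight_layout()
```

Swapping the inline plotting calls for the custom_* helpers defined earlier gives the same result with less repetition.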
https://jonchar.net/notebooks/matplotlib-styling/
To run the several examples, simply click in the code cell, and type shift-ENTER. This file implements several examples of the de-noising algorithms Cadzow rQRd and urQRd. First example runs a test of the Cadzow de-noising method, based on the SVD decomposition of a matrix derived from the data-set (see theory). From the experimental data $X_i$, a Hankel matrix $H$ is built:$$ (H_{ij}) = (X_{i+j-1}) $$ $H$ is rank-limited to $P$ in the absence of noise. In noisy data-sets, this matrix becomes full-rank because of the partial decorrelation of the data-points induced by the noise. Cadzow (Cadzow JA (1988) IEEE Trans. ASSP 36 49–62.) proposed to perform the Singular Value Decomposition ( _see SVD on Wikipedia_ ) of the matrix $H$, and compute a matrix $\tilde{H}$ by truncating to the $K$ largest singular values $\sigma_k$.$$ H = U \Lambda V $$ where $\Lambda$ is a diagonal matrix containing the singular values of $H$.$$ \tilde{H} = U \Lambda_k V $$ where $\Lambda_k$ contains only the $k$ largest singular values. $\tilde{H}$ is not strictly Hankel-structured anymore, but a de-noised signal $\tilde{X}$ can be reconstructed by taking the average of all its antidiagonals.$$ \tilde{X_{l}} = < \tilde{H_{ij}} >_{i+j=l+1} $$ Figure presents the data-set on the left, and its Fourier transform on the right. The data-set is composed of 8 equidistant frequencies of different intensities. Gaussian noise is added to the data. call is and default values are : def test_Cadzow( lendata = 4000, # length of the simulated data rank = 10, # rank at which the analysis is performed orda = 1600, # order of the analysis, the Hankel matrix is (lendata-orda) X orda noise = 200.0, # level of noise in the simulated data iteration=1, # the algorithm can be iterated several times for better (slower!) 
results noisetype = "additive") # possible noisetype are 'additive' 'multiplicative' 'sampling' 'missing points' you can test_Cadzow() with modifying any parameter %pylab inline params = {'savefig.dpi':120,'legend.fontsize':6,'xtick.labelsize':'small','ytick.labelsize':'small','figure.subplot.wspace':0.5} rcParams.update(params) from Algorithms import Cadzow Cadzow.test_Cadzow() Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline]. For more information, type 'help(pylab)'.Initial Noisy Data SNR: -3.72 dB - noise type : additive === Running Cadzow algo === lendata: 4000 orda: 1600 rank: 10 === Result === Denoised SNR: 14.43 dB - processing gain : 18.15 dB processing time for Cadzow : 5.65 sec rQRd is based on a random sampling of $H$ by a random matrix $\Omega$ : $$ \underset{(M \times K)}{Y} = \underset{(M \times N)}{H} \underset{(N \times K)}{\Omega} $$ The matrix $Y$ is much smaller than $H$, it can rapidly by factorized by QR ( _see QR on Wikipedia_ ):$$ Y = QR $$ from which a rank-truncated $\tilde{H}$ is built by projection of the subspace defined by $Q$ : $$ \tilde{H} = QQ^*H $$ $\tilde{X}$ is then computed as before. This second example runs the same test using rQRd. Note that test test here is run on 10.000 points rather than 4.000 default values are : def test_rQRd( lendata = 10000, rank=100, orda=4000, noise = 200.0, iteration = 1, noisetype = "additive") from Algorithms import rQRd rQRd.test_rQRd() Initial Noisy Data SNR: -3.89 dB - noise type : additive === Running rQR algo === lendata: 10000 orda: 4000 rank: 100 === Result === Denoised SNR: 13.45 dB - processing gain : 17.34 dB processing time for rQRd : 2.27 sec urQRd is based on the same grounds as rQRd but contrarily to rQRd, it makes use of fast matrix-matrix products. Those fast products are made possible thanks to classical FFT techniques using convolution products. For urQRd, the product $H\Omega$ has a cost of $KL$log$(L)$. 
The matrix $Y$ is much smaller than $H$, it can rapidly be factorized by QR ( _see QR on Wikipedia_ ) : $Y = QR$ A second product is necessary for urQRd : $Q^*H$. In this case the product has a cost close to $KL$log$(L)$ when using the fast structured matrix vector product. $\tilde{X}$ the denoised signal is then computed from $Q$ and $Q^*H$ using FFT again and indices inversion. Let $U = Q^*H$, each $\tilde{X_i}$ is obtained from the $i^{th}$ (${n_i}$ elements long) antidiagonal by :$$ \tilde{X_i} = \frac{1}{n_i}\sum_{j=max(i-M+1,1)}^{min(i,N)} H_{i-j+1,j}\approx \frac{1}{n_i} \sum_{k=1}^{K} \sum_{j=max(i-M+1,1)}^{min(i,N)} Q_{i-j+1,k}U_{k,j} $$$$ \;\;\;\;\;\;\;\;\; = \frac{1}{n_i} \sum_{k=1}^{K} \sum_{j=max(i-M+1,1)}^{min(i,N)} Q_{i,j}^{(k)}U_{j}^{(k)} = \frac{1}{n_i} \sum_{k=1}^{K} (Q^{(k)}.U^{(k)})_i $$ $Q^{(k)}$ is the $L \times N$ Toeplitz matrix formed with the vector $[0,....0,X_{0},X_{1},..X_{L},0,....,0]^T$ with $(N-1)$ zeros at each extremity of the vector. $U^{(k)}$ is the matrix $U_{i,j}$ This last operation has a cost of $K(L+N)$log$(L+N)$. urQRd scales globally in $KN$log$(N)$. All those fast calculations are based on the FFT fast matrix-vector product. Fast Hankel Matrix vector product Let $H$ be a Hankel matrix of dimensions $M \times N$, $g$ its generator vector of size $M+N-1$ and $v$ a vector of size $M$. $w$ is a vector of size $M+N-1$ defined from $v$ as : $w_i = v_{i}$ for $1 \leq i \leq N$ and $w_i = 0$ for $N+1 \leq i \leq M+N-1$ let $P$ be the swapping matrix that swaps vector element $i$ with element $M+N-i$. 
The product Hankel matrix H with vector v, $Hv$ is made faster by performing the calculation: $Hv = FFT^{-1}(FFT(g).FFT(P.w))$ default values are : def test_urQRd( lendata = 10000, rank = 100, orda = 4000, noise = 200.0, iteration = 1, noisetype = "additive") from Algorithms import urQRd urQRd.test_urQRd() Initial Noisy Data SNR: -4.00 dB - noise type : additive === Running urQR algo === lendata: 10000 orda: 4000 rank: 100 === Result === Denoised SNR: 13.22 dB - processing gain : 17.22 dB processing time for urQRd : 0.42 sec prgm = "rQRd.py" # "rQRd.py" "urQRd.py" "Cadzow.py" # depending on your version of IPython, one of the two following line might be used, choose your own import IPython.core.display as disp # see remark above #import IPython.display as disp # see remark above # from pygments import highlight from pygments.lexers import PythonLexer from pygments.formatters import HtmlFormatter formatter = HtmlFormatter(noclasses=True) css = formatter.get_style_defs('.highlight') code = ''.join(list(open("Algorithms/"+prgm))) disp.HTML(highlight(code, PythonLexer(), formatter))
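The fast Hankel matrix-vector product described above can be sketched in NumPy. Here v has length N (the column count of H, which is what the product Hv requires), and reversing the zero-padded vector plays the role of the swapping matrix P:

```python
import numpy as np

def hankel_matvec(g, v, M, N):
    """Fast product H @ v for the M x N Hankel matrix H[i, j] = g[i + j],
    where g is the generator vector of length M + N - 1."""
    L = M + N - 1
    w = np.zeros(L)
    w[:N] = v[::-1]                       # reversed, zero-padded v (the P.w of the text)
    full = np.fft.ifft(np.fft.fft(g) * np.fft.fft(w))
    return full[N - 1 : N - 1 + M].real   # valid part of the circular convolution

# Quick check against an explicit Hankel matrix
M, N = 6, 4
g = np.arange(M + N - 1, dtype=float)
H = np.array([[g[i + j] for j in range(N)] for i in range(M)])
v = np.array([1.0, -2.0, 0.5, 3.0])
assert np.allclose(hankel_matvec(g, v, M, N), H @ v)
```

The inverse FFT of the element-wise product is the circular convolution of g with the reversed, padded v; slicing its last M entries recovers exactly the M antidiagonal sums that define Hv, which is why the whole product costs O((M+N) log(M+N)) instead of O(MN).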
https://nbviewer.ipython.org/gist/anonymous/7515023
partition() – Partial Sort in NumPy
Hi Enthusiastic Learners! In this article we will be studying "partition() – partial sort in NumPy". Have you ever wondered, if you are given an array, whether you could get only the n smallest numbers from it instead of completely sorting it? Well, that is exactly the case where we should use partition(), a partial sort. We call partition() a partial sort because we will actually be sorting only part of the given array, or you could say getting a partition of the n smallest values. Before jumping on to partition() – partial sort in NumPy, please check out other articles related to sorting in NumPy:
Watch video tutorial here:
partition() – Partial Sort Syntax
np.partition(arr, pos, axis=-1, kind='introselect', order=None)
Where,
- arr is the array which has to be sorted
- pos is the position in the array.
- axis defines the axis along which you need to do sorting; the value -1 means the last axis.
- kind gives you the liberty to choose the sorting algorithm. The default algorithm is 'introselect'.
- order — if arr is a structured array, this argument specifies which fields to compare first, second, etc.
Let's begin with creating an array arr.
import numpy as np
arr = np.array([3, 7, 4, 2, 8, 9, 0, 18, 1])
print("-- Base Array --")
print(arr)
-- Base Array --
[ 3 7 4 2 8 9 0 18 1]
Sorting the array for pos = 3. partition() will create a copy of the original array and arrange the numbers so that all values up to and including pos are the smallest values in the array, with the remaining values after that position. Note that neither part is internally ordered. Up to pos = 3 (that is, 4 elements, since indexing in arrays starts from 0) we will be getting the 4 smallest values, and the remaining values fill the rest of the array in no particular order.
Syntax for it will be — np.partition(arr, 3)
sort_for_3 = np.partition(arr, 3)
print("-- Sorted Array till 3rd Position --")
print(sort_for_3)
-- Sorted Array till 3rd Position --
[ 2 1 0 3 7 9 4 18 8]
As we can clearly see from the above example, the first four elements (up to pos = 3) are the smallest in the complete array, while the other values are simply shifted towards the right, with no guaranteed order on either side of the partition.
partition() on 2-dimensional array
We can do a partial sort on 2-D arrays as well, and we can choose the axis along which we want values to be sorted. Let's begin with creating a 2-D array.
arr_2d = np.random.randint(0, 10, (4, 6))
print("-- 2D Array --\n")
print(arr_2d)
-- 2D Array --
[[2 5 6 1 6 3]
 [7 0 2 6 0 0]
 [1 6 2 6 4 3]
 [9 5 9 6 9 3]]
Get the 2 smallest values for each row of the array. Syntax for it will be — np.partition(arr_2d, 1, axis=1)
sorted_2d = np.partition(arr_2d, 1, axis=1)
print("-- Partially Sorted 2D Array -- \n")
print(sorted_2d)
-- Partially Sorted 2D Array --
[[1 2 6 5 6 3]
 [0 0 2 6 7 0]
 [1 2 6 6 4 3]
 [3 5 9 6 9 9]]
From the above example it is clear that for each row we get the smallest values up to index (pos) 1, that is, the 2 smallest elements in each row.
Stay tuned & keep learning! In our next post we will be covering more sorting techniques that can be achieved in NumPy.
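A companion function not covered above is np.argpartition, which performs the same partial sort but returns indices instead of values. This is handy when you need to know where the k smallest elements live. A small sketch on the same example array:

```python
import numpy as np

arr = np.array([3, 7, 4, 2, 8, 9, 0, 18, 1])

# Indices whose first 4 entries point at the 4 smallest values
idx = np.argpartition(arr, 3)[:4]
smallest_four = arr[idx]

# The partition itself carries no ordering, so sort the small slice if needed
print(sorted(smallest_four.tolist()))  # [0, 1, 2, 3]
```

Sorting only the k-element slice afterwards is much cheaper than sorting the whole array when k is small.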
https://mlforanalytics.com/2020/04/14/partition-partial-sort-in-numpy/
Going with this approach means that maps can safely be used in lists. This is commonly done as a kind of "pseudobean" for unit testing – it's how I first encountered this behavior. The problem with recursively adding map content (or the superset of that functionality, recursively adding any iterable thing in the list), is that it's really going to wreak havoc on unit tests. If you "flatten" maps in some way, this means that you're getting different behavior if you use a bean or a hash map in its place: this is exceedingly common in unit testing, including making it onto the website as a recommended pattern. Any patch which pushes the "flatten maps" approach also needs to include a patch to create a map#flatten which has parallel behavior. If you're going to flatten arrays, then (Object[]).flatten() needs to work too. If you're gonna do that you have to implement DGM.flatten(Object) and DGM.flatten(Object, Closure). This: [[[1, 2, 3, [1, 2, 3, [1, 2, 3] as Object[]] as Object[]] as Object[]] as Object[]].flatten() ==> [1, 2, 3, 1, 2, 3, 1, 2, 3] is far too inconsistent with this: ([1, 2, 3] as Integer[]).flatten() ==> groovy.lang.MissingMethodException: No signature of method: [Ljava.lang.Integer;.flatten() is applicable for argument types: () values: {} A significant case on why that is important is that a user wondering whether arrays are flattened or not will try the simple (second) case first and conclude that arrays are not flattened. They will be greatly and not pleasantly surprised when they discover that is just a special case and that Groovy does flatten arrays, but only when the top-level is a Collection. So yes, the result is a List, which is what createSimilarOrDefaultCollection creates. And flatten as it is currently implemented is flattening multidimensional arrays (as my sample showed), so I don't understand what you're saying there. 
Hmmm, I suppose this createSimilar jazz for arrays could be extended to consider element type for generics, but that goes beyond this issue (and there is Collection.flatten(Collection) which could be used). Unless there's been something awesome that I've missed out on, generic code is type-erased in Java, which means that the run-time has no idea what the generic type originally was, and there's no value in trying to discern or enforce it in "createSimilarOrDefaultCollection".. Your interpretation is exactly the one I would expect; but you like I know about createSimilarOrDefaultCollection. I am just playing devil's advocate and wondering what our users think. Perhaps we just haven't defined createSimilarOrDefaultCollection fully yet to take into account arrays. If we say flatten is type preserving, why wouldn't I expect it to return an array? I think we can argue either way and I think we will end up with your interpretation, just not wanting to jump the gun. And flatten as it is currently implemented is flattening multidimensional arrays (as my sample showed), so I don't understand what you're saying there. Yes, but I can also imagine it would be useful to have for example a flatten which took int[][][] and returned int[]. That said, I'm fairly sure that the only sensible interpretation is that Groovy coerces arrays to collections. To suppose that flatten might return an array rather than a collection also means that you would have functions like collect, find, and grep return an array rather than a collection when performed on an array. That wouldn't be a very good approach because any actual implementation would have to use variable sized collections and then convert to an array at the end, which is exactly the thing that the user will do if that is what he wants. 
To reiterate your last comment and reinforce the need for further work on this issue, if findAll already converts arrays to lists as shown below, why not flatten: def nums = 1..10 as int[] assert nums instanceof int[] def evens = nums.findAll{ it % 2 == 0 } assert evens == [2, 4, 6, 8, 10] assert evens instanceof List I think that (along with earlier examples) seems pretty convincing to me. Glad to see that we have arrived at consensus.
http://jira.codehaus.org/browse/GROOVY-2904
There is a blog post by Tim Hentenaar that says that people should not read my book, Learn C The Hard Way. It has the title “Don’t Learn C The Wrong Way” and it asserts that I am teaching C the wrong way, with a few examples as to why. The problem with Tim’s post, is that Tim actually doesn’t know how to teach much of anything, and is completely uninformed of the security defects that his own code has. In fact, Tim successfully demonstrates that he is actually a beginning coder who has no business telling others how to code. In this blog post I will simply take down Tim’s supposedly expert opinion by using his replies to me in an email exchange where he demonstrates his lack of understanding, and then tries to cover for it in the most laughable way. First, let’s establish how much of an expert Tim thinks he is, and what he’s advising you, my reader to do: “Recently, I came across an e-book written by Zed A. Shaw entitled Learn C The Hard Way, and while I can commend the author for spending the time and energy to write it, I would NOT recomend (sic) it to anyone seriously interested in learning the C programming language. In fact, if you were one of the unlucky souls who happened to have purchased it. Go right now and at least try to get your money back!” That’s a very serious condemnation of my book, especially from someone who has never taught C, never written a book, can’t even spell “recommend”, and later demonstrates that he doesn’t have a clue about security defects inherent in C. So what are Tim’s complaints about my book? Tim Has No Teaching Experience The majority of his complaints about my book, Learn C The Hard Way stem from a lack of understanding in my (very successful) teaching method. To Tim, and most old school programmers, the way to teach something is to teach all of the topic at once in one huge chunk. 
You teach Make by writing a chapter on Make that tells the reader every single little thing about Make possible, and then demonstrate with some code. Here’s Tim’s statement to that effect: “At this point, the only thing I can think is, “I’d just love for you to show me a damn working Makefile!” A novice will be thinking, “What the hell’s a Makefile?” as the concept of a Makefile has not yet been introduced.” Then later he says: “I don’t know how to set-up my environment, this “Makefile” thing pulled a Jimmy Hoffa, and now I have to use this Valgrind thing, after I go download it and build it from source. Great…” The problem is, Tim didn’t read far enough to where I do explain how to make an environment, and misunderstood my purpose at that point in my book. I’m not teaching the reader to write a Makefile and start a project. I’m teaching them to quickly get their very simple C code to compile. My target readers are people who have a language like Python or Ruby but haven’t dealt with a compiled language before. But to Tim, this is insufficient because he thinks a beginner is like him and needs to know all of the Make to be able to use it. This lack of understanding of an actual beginner is exactly why so many programmers are so terrible at teaching, or even writing basic software for non-developers. It’s not that a programmer is somehow emotionless or a “robot” like obnoxious nerd haters say. It’s that the majority of programmers have a far more advanced understanding of computing, and specifically the software they create. Through their path to that understanding have forgotten what it was like to be a beginner. This leads them to assume many things that just aren’t true. Such as, “Unless a beginner is taught every single aspect of Makefile construction they cannot use Make’s implicit build rules to build a basic C file.” This means that Tim’s statements about how I teach are mostly invalid because he doesn’t understand how people learn to code. 
He’s never had to teach someone who’s just starting out so he thinks blasting them with a treatise on Makefiles is what they need 4 exercises into a course of study. By contrast, I actually sit with real people and have them go through my books, and then adapt the exercises based on where they get stuck. I also used to have comment sections on every page to gather information on how to improve exercises. Tim basically read K&R and wrote some crappy C code, which we’ll see shortly. However, Tim’s rabid and obnoxious condemnation of my book isn’t his actual opinion. In private emails he says this: “I don’t doubt the seriousness of your offer. In fact, one of my colleagues also read my article, and he and I were discussing it this evening, and he told me that he’s a fan of your writing style, and would love to see you write a really good book on C.” Tim doesn’t believe my book is entirely irreparable and a failure as he states, and in private he says there’s only a few problems with it. He even offered to help me make it better despite his lack of experience writing or teaching. What he actually thinks is I should write it the way he would write it, then it’d be a good book for you to buy. Despite Tim’s complete lack of qualifications in programming, writing, education, or anything other than having a blog, he thinks that his opinion is so superior that I should rewrite my book to fit his ideas of education, not a student’s model of learning based on actually sitting with readers and helping them. This kind of arrogance and hubris leads me Tim’s largest failing in his post, this code right here: void copy(char from[], char to[], size_t n) { size_t i = 0; if (!from || !to) return; while (i < n && (to[i] = from[i]) != '\0') i++; to[n] = '\0'; } Tim’s claim is that this function here is superior to a function I had written called “safercopy”, but it has a critical buffer overflow that he actually attempts to defend in the most laughable way. 
To understand Tim's failure you need to see my original "safercopy":

int safercopy(int from_len, char *from, int to_len, char *to)
{
    int i = 0;
    int max = from_len > to_len - 1 ? to_len - 1 : from_len;

    // to_len must have at least 1 byte
    if(from_len < 0 || to_len <= 0) return -1;

    for(i = 0; i < max; i++) {
        to[i] = from[i];
    }

    to[to_len - 1] = '\0';

    return i;
}

What sends most C coders into a tizzy about this code is it came from a thought experiment I was doing where I did code analysis on the K&R C book (the book by the authors of C). Many programmers took this as an offense to them (so rational), and so they would focus on how I said this function here (safercopy) was better than a similar string copy function in the K&R C book. The problem is, to discredit my claims that mine is better, they would play this little semantic shell game:
- "Your function is vulnerable to Undefined Behavior (UB) just like the K&R function."
- They then write some example that uses a totally different UB from the hundreds available, not the buffer overflow UB from a malformed C string.
- Then proclaim that, since both functions are vulnerable to UB, my claim of mine being safe (notice, not safER) is invalid.
This is a lot like you buying a new lock for your front door that's really great, so you tell your friend about it. Your friend goes, "Pfft, your lock is no better than leaving your door open, I could totally break into it." Your friend then shows up with a SWAT team battering ram and smashes the door in like butter and says, "See? Your lock is pointless. Just leave your door open." You, and I, aren't saying a better lock is completely foolproof and perfect. We are saying it is safer, not totally safe. Doors are easily bashed in using countless methods, right down to setting your house on fire. When we talk safety of the lock, we mean against lock picking compared to the other lock. To say I should leave my door open because there's a thousand ways to get into my house is insane. However, my function is more resistant to a common externally accessible vulnerability. This is something I would love to research, but UB has different levels of exploit surface that are accessible to an attacker from outside the running process.
A C string is fairly trivial to clobber so that it is missing the ‘\0’ terminator. It’s a bit more difficult to make random pointers go wherever you want, but still possible. It’s nearly impossible to rewrite the C code for a running process to cause a math error and make a compiler skipped a portion that was considered UB. When studying the security of C code we tend to just assume all UB is the same and don’t make this distinction of accessibility to an attacker. Bad C coders then use this UB to simultaneously defend bad code (“All code is breakable with UB”) and condemn other’s code (“Haha, you’re triggering UB”). When I say my function is safER, I do not mean it is totally invincible. That is impossible in C, and one of the reasons I tell people to not use C anymore. I now firmly believe that C is impossible to write securely and is designed with flaws that are irreparable, mostly because of the huge number of UB that can easily be triggered externally. I mean that the code in this simple function protects against this one buffer overflow that is often externally exploited, while the original K&R code does not. That’s all. Which leads me to Tim’s lack of understanding of his own code. Clearly, he thinks his code is even safer than mine, but if you look at it again: void copy(char from[], char to[], size_t n) { size_t i = 0; if (!from || !to) return; while (i < n && (to[i] = from[i]) != '\0') i++; to[n] = '\0'; } You’ll see that he only has one size, so if that size is invalid for the to variable then you get a buffer overflow. 
Here’s a trivial demonstration of it: #include <stdio.h> void copy(char from[], char to[], size_t n) { size_t i = 0; if (!from || !to) return; while (i < n && (to[i] = from[i]) != '\0') { printf("to[i]=%c, i=%zu\n", to[i], i); i++; } printf("i=%zu, n=%zu\n", i, n); to[n] = '\0'; } int main(int argc, char *argv[]) { // thanks to @mistahzip for pointing out this // is a better demonstration code char to[] = {'A','A','A','A'}; char from[] = "XXXXXX"; copy(to, from, 6); printf("Final byte is: %x\n", to[3]); } UPDATE: I had my original analysis wrong and I apologize for that. This is a better demonstration of the problem, and a new analysis showing the buffer overflow. Thanks for @mistahzip for setting me straight and putting up with me being an asshole. Just goes to show you, this shit is hard. Tim’s code works as long as the strings are valid, however it’s incredibly common for C strings to be invalid, and that’s how you get the buffer overflows from C strings. In this example, I’ve added printing so you can see what’s going on. I use a malformed to array so that you can see, if it’s wrong then it gets overwritten with garbage. In addition, he does to[n] which will always set the wrong byte if from is larger than to. Any C coder worth their salt would realize this, and in many ways this is worse than even the K&R version since it is more complicated. When you do this on many systems you just get a bus error of some sort, but not all. Many times you’ll have the end of one string still be inside a valid region of memory, and operating systems aren’t even close to foolproof on protecting buffer overflows. If you’re using a system that allocates stacks on the heap (such as in greenthreads), then you’ll typically blast right past this variable and into another function’s code. That’s very dangerous and creates remote code execution vulnerabilities. 
You may be thinking, “Yeah but I could write code that breaks your safercopy too!” Yes, like I said, C has so much UB it’s an entirely unsafe language and you can destroy anything. The point though is that this is an insanely common and trivial programming error that is just bad math for one parameter. Mine you have the size for both so you don’t make this error as easily. You can still make the error, but it’s harder than with Tim’s. With Tim’s you’ll make this error all the time. Arrogance and Hubris I told Tim about this really silly error in his blog post and did he do the right thing and at least admit publicly that I demonstrated a trivial error in his code? Nope, not only has he not updated his code, further demonstrating that he doesn’t know what he’s talking about at all, but he proceeded to defend his code with the most asinine of defenses: “That’s why strncpy() / strlcpy() were written, but of course with all such things, there’s a performance penalty to pay. Even with length checking, it’s still possible to trigger UB, for example via integer promotion (i.e. strncpy() with a negative length, which I did point out) or having dest and src overlap. … It’s much harder to carry out a buffer overflow attack with SSP, DEP, and ASLR these days. Although there are always ways around the best intentioned restrictions.” His function, in his own words, isn’t wrong because, again, you can use a totally different set of UB to cause problems so this easily externally accessible one isn’t a problem. And there’s also strncpy/strlcpy, so his function is still valid (what?). Oh, and also there’s, like, uhhh oh SSP and DEP that totally protect against these problems (even though they don’t and we see it all the time). These are the words of someone stumbling to still be right to protect their ego, and demonstrates Tim’s lack of intellectual honesty and integrity. 
Tim Is An Unqualified Beginner

This is your classic defense from an arrogant programmer who refuses to admit that he actually doesn’t know what he’s talking about. When I receive complaints that my code isn’t working, even if it’s been run through the wringer over and over, I still go and double- and triple-check that it’s working. If Tim had sent me this kind of trivial defect, I would have fixed my code and worked to find out why I caused the error.

To programmers like Tim, who think they know C but are totally clueless about computer security, it’s inconceivable that his code could be wrong. This is a sign of a beginner. A beginning programmer assumes his code is right even in the face of all evidence to the contrary, like Tim does here. They defend it to the end, because they are personally attached to their creation and not objective. An expert assumes his code could be wrong at any moment and adds as many defenses as possible.

This shows that you should not listen to Tim about C coding, and definitely not learn anything from him. He is entirely unqualified and should be ignored.

Conclusion

Tim Hentenaar wrote a confused screed about my book being terrible, claiming nobody should buy it. However, he lacks the expertise to make that determination, his code has defects in it, and he arrogantly refused to admit that it had problems. He also defends his security defects with confused logic about UB and the existence of other functions that have nothing to do with his own code. Listening to Tim about how to learn C is therefore a dangerous thing to do.

No book is perfect, and let me tell you, that first printing of mine had loads of problems, but until Tim writes a better C book you’d do well to ignore his advice and him. In fact, this is the problem with the majority of the detractors of my book. None of them have written books, and many of them don’t even code C or have C in production.
Writing books and teaching people is incredibly difficult, much more difficult than hanging out in IRC yelling at beginners about Undefined Behavior or writing blog posts. Over this next week I’m going to systematically take down more of my detractors, as I’ve collected a large amount of information on them, their actual skill levels, and how they treat beginners. Stay tuned for more.
https://zedshaw.com/2015/09/28/taking-down-tim-hentenaar/
On Thu, Sep 03, 2009 at 11:48:53AM +0100, Mark McLoughlin wrote:
> > @@ -290,10 +305,22 @@ int qemudLoadDriverConfig(struct qemud_driver *driver,
> >          }
> >      }
> >
> > +    p = virConfGetValue (conf, "hugetlbfs_mount");
> > +    CHECK_TYPE ("hugetlbfs_mount", VIR_CONF_STRING);
> > +    if (p && p->str) {
> > +        VIR_FREE(driver->hugetlbfs_mount);
> > +        if (!(driver->hugetlbfs_mount = strdup(p->str))) {
> > +            virReportOOMError(NULL);
> > +            virConfFree(conf);
> > +            return -1;
> > +        }
> > +    }
> > +
>
> How come you probe for a hugetlbfs mount even when the config file
> contains it?

That's just the most convenient way with the way the config file loading
is structured. The previous patch of John's had the probing down in this
part of the code in the else {} clause here. The trouble with that is
that if there is no config file on disk at all, it'll never be run.
Moving it to the top ensures it always has a sensible default.

> > diff --git a/src/util.c b/src/util.c
> > index 0d4f3fa..35efee2 100644
> > --- a/src/util.c
> > +++ b/src/util.c
> > @@ -60,7 +60,9 @@
> >  #if HAVE_CAPNG
> >  #include <cap-ng.h>
> >  #endif
> > -
> > +#ifdef HAVE_MNTENT_H
> > +#include <mntent.h>
> > +#endif
> >
> >  #include "virterror_internal.h"
> >  #include "logging.h"
> > @@ -1983,3 +1985,36 @@ int virGetGroupID(virConnectPtr conn,
> >      return 0;
> >  }
> >  #endif
> > +
> > +
> > +#ifdef HAVE_MNTENT_H
>
> Hmm, if mntent.h isn't found, the qemu driver will fail to link
>
> Is the idea here that anywhere mntent.h isn't available, the qemu driver
> won't be built so we don't need to check HAVE_MNTENT_H in qemu_conf.c?

That's an oversight - should have conditionalized the caller, though I
wouldn't be surprised if the QEMU driver is already broken on non-UNIX
since I don't think anyone's ever really tested it. Not that I ever
really expect people to use Windows for the QEMU driver :|
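For context, probing for a hugetlbfs mount with the <mntent.h> interface generally looks something like the following sketch (illustrative only; this is not the actual libvirt code, and the function name is mine):

```c
#include <mntent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: return a copy of the first hugetlbfs mount point listed in
 * /proc/mounts, or NULL if none is found. Caller frees the result. */
char *find_hugetlbfs_mount(void)
{
    FILE *f = setmntent("/proc/mounts", "r");
    if (!f)
        return NULL;

    struct mntent *m;
    char *ret = NULL;
    while ((m = getmntent(f)) != NULL) {
        if (strcmp(m->mnt_type, "hugetlbfs") == 0) {
            ret = strdup(m->mnt_dir);
            break;
        }
    }
    endmntent(f);
    return ret;
}
```

A caller would use the probed value only as a default, letting an explicit hugetlbfs_mount config setting win, which matches the patch's behavior of probing first and then overriding from the config file.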
https://www.redhat.com/archives/libvir-list/2009-September/msg00090.html
sensor_event_is_holstered()

Retrieve whether the device is holstered.

Synopsis:

#include <bps/sensor.h>

BPS_API int sensor_event_is_holstered(bps_event_t *event, bool *is_holstered)

Since:

BlackBerry 10.2.0

Arguments:

- event: The SENSOR_HOLSTER_READING event to get the value from.
- is_holstered: This is set to true if the device is holstered, false otherwise.

Library:

libbps (For the qcc command, use the -l bps option to link against this library)

Description:

The sensor_event_is_holstered() function sets is_holstered to true when the device is holstered. When the device is removed from the holster, is_holstered will be set to false. The value is retrieved from a SENSOR_HOLSTER_READING event.

Returns:

BPS_FAILURE will be returned if the event passed in is not a SENSOR_HOLSTER_READING, or if is_holstered is NULL.

Last modified: 2014-09-30
https://developer.blackberry.com/native/reference/core/com.qnx.doc.bps.lib_ref/topic/sensor_event_is_holstered.html
In part 1 of this series of Daily Drill Downs, I introduced KiXtart and examined a few constructs and commands available in the language. In part 2, I’ll discuss some more of the basic elements.

Loops

First, I’ll take a look at Do, While, GoSub, and GoTo. I’ll provide examples as I go along.

Like the if...else...endif construct, do...until works exactly as you’d expect. The syntax is:

do
  code
until "expression"

So the code, which can be one or more lines, is executed until the result of expression is true. The code will always execute at least once. As with if..., expression may be a test or a command. Of course, if expression never equates to true, the code will keep executing forever. If your program gets stuck in a loop, you may be able to use [Ctrl][Break] to stop it. Most times, you’ll need to use [Ctrl][Alt][Del] and click the End Task button.

The while...loop construct is identical to do...until, except that the test is carried out first. So, where the code between do and until will always be executed at least once, the code between while and loop will only be executed if and for as long as the expression equates to true. The syntax for while...loop is:

while "expression"
  code
loop

If expression is initially false, the code will never execute.

While not exactly a loop construct, GoSub allows you to run a section of code as many times as you need. It also allows you to write more structured code and to reduce the duplication of code segments (reusable code). The syntax is:

gosub expression

where expression must resolve to a label name. For example:

$number = 1
gosub "label" + $number
exit

:label1
subroutine code
return

The above code also shows that when "adding" a string and a number, the number is converted to a string and the "addition" becomes concatenation. The Return command returns control to the line following the GoSub command.
Although GoTo should be avoided where possible (if you’re writing structured code), there are times when it can be useful or maybe even unavoidable. So, for completeness I am including it here. The syntax is:

goto "expression"

As with the While construct, expression must resolve to a label name.

Shelling out

There may be points in your script where you want to use a DOS command, run a batch file, execute an application, or run another KiXtart script. The following commands allow you to do all of these things.

The Shell command is used to load and run a program. Its syntax is:

shell "command"

The command parameter can be any 16-bit or 32-bit application. If you’re going to run command-interpreter commands, specify the correct command interpreter as part of the command (see my examples below). With Shell, the KiXtart script execution is stopped until the external program exits. If the program you want to run needs to set environment variables, you may need to specify additional environment space by using the /E parameter. The Shell command sets the value of @ERROR to the exit code of the program that’s run. Here are some examples:

SHELL @LDRIVE + "\UPDATE.EXE"
SHELL "%COMSPEC% /e:1024 /c DIR C:"
SHELL "SETW USERNAME=@USERID"
SHELL "CMD.EXE /C COPY " + @LDRIVE + "\FILE.TXT C:\"
SHELL "%COMSPEC% /C COPY Z:\FILE.TXT C:\"
SHELL "C:\WINNT\SYSTEM32\CMD /E:1024 /C " + @LDRIVE + "\SMSLS.BAT"

The Run command is exactly the same as Shell except that KiXtart script execution will continue while the external program is running. The syntax for Run is the same as for Shell.

The Call command is used to call another KiXtart script. When the called script exits, or a return is encountered in the called script, execution continues in the original script following the line containing the Call command. The syntax is:

call "expression"

Theoretically, there’s no limit to the number of scripts that can be nested.
Obviously, the practical limit on the number of scripts you can call is determined by such factors as:

- The amount of available memory at the time KiXtart runs.
- The size of the scripts.
- The number of variables defined.

Don't call us; we'll call you

Seeing the Shell and Run commands above, you may be tempted to use DOS commands wherever you feel you need them. Before you do, have a look at the commands that are available from inside KiXtart. These commands are all available from within KiXtart and are the exact equivalent of their namesakes in DOS (except Use, which is identical to Net Use). There are some slight differences in the syntax between the KiXtart commands and the DOS equivalents, so check the documentation before you use them. Also, if you need to change the current drive in DOS, you’d enter the drive letter; in KiXtart, you can do exactly the same, but to make it tidier, you can, optionally, precede the drive letter with Go (for example, go a:).

More than one result

There are times when you may need to use a sequence of If statements to take different actions, depending on the result of a particular expression. In some languages, that’s exactly what you have to do. So you’d write something like this:

if $x=1
  ? "One"
endif
if $x=2
  ? "Two"
endif
if $x=3
  ? "Three"
endif
if $x < 1 OR $x > 3
  ? "Out of range"
endif

Fortunately, KiXtart has a neater way of doing this. The Select construct is available in many advanced programming languages. It’s very similar to the list of If statements above, except that you do away with all the Endif statements and you have a simpler "catchall" than the last If in the example above. Using Select, you can write the above example as:

SELECT
CASE $x=1
  ? "One"
CASE $x=2
  ? "Two"
CASE $x=3
  ? "Three"
CASE 1
  ? "Out of range"
ENDSELECT

I think you’ll agree that this is much neater and easier to read. Notice that you don’t need to specify the out-of-range expression.
The reason for this is that only one of the Case statements will be executed, so if $x doesn’t equate to 1, 2, or 3, the final Case statement will be executed. Only one Case statement is executed, so in the following example, the code following CASE $x>0 AND $x<2 will never be executed. Why?

SELECT
CASE $x=1
  ? "One"
CASE $x=2
  ? "Two"
CASE $x=3
  ? "Three"
CASE $x>0 AND $x<2
  ? "This line is never reached"
CASE 1
  ? "Out of range"
ENDSELECT

The expression following Case doesn’t have to be the same on each line. The first Case could be $x=1; the next could be InGroup("Administrators"), although it’s more likely that the expressions will be of a similar type.

Registry

One of the most powerful features available with KiXtart is the ability to work with the registry. You can read, change, delete, and add keys and values. This allows KiXtart to be used to enforce certain user-interface configurations so that, even if users know what they need to change to get around your standards, they’ll have to make those changes after each logon. This isn’t difficult for a technically competent user, but it is annoying and, therefore, a reasonable deterrent nonetheless.

Read

If you want to check the value of a registry key, there are two functions you need. First, you may simply need to know if the key actually exists. For this you use ExistKey. The syntax is:

existkey ("subkey")

It is important to note that if the subkey exists, the function returns 0 or false. This is because for any other result, the function returns an error code indicating what went wrong. For example:

$Result = ExistKey("HKEY_CURRENT_USER\Console\Configuration")
If $Result = 0
  ? "Key exists...."
Endif

Once you know that the subkey exists, or if you’re happy to skip that check, you can find the value contained in an entry under the subkey.
For this, you use the function ReadValue, whose syntax is:

readvalue ("subkey", "entry")

where subkey identifies the subkey containing the entry, and entry identifies the entry whose value you want to discover. To read the default entry of a key, specify an empty string as the entry name (""). If @ERROR is set to 0, then the result of the function is the value of the entry; otherwise, it is the error code representing what went wrong. REG_MULTI_SZ (multistring) variables are returned with the pipe symbol (|) used as the separator between strings. If a string contains a pipe symbol character, it is represented by two pipe symbol characters (||). REG_DWORD variables are returned in decimal format. Here’s an example:

$Rows = ReadValue("HKEY_CURRENT_USER\Console\Configuration", "WindowRows")
If @ERROR = 0
  ? "Number of window-rows: $Rows"
Endif

Change

Once you know an entry exists and you’ve checked its value, you may want to change it. To do so, you use WriteValue, whose syntax is:

writevalue ("subkey", "entry", "expression", "data type")

where subkey identifies the subkey where you want to write a value entry. (This subkey MUST exist.) entry represents the name of the entry. (To write to the default entry of a key, specify an empty string as the entry name; if the entry does not exist, it will be created.) expression is the data to store as the value of the entry, and data type identifies the data type of the entry. The following data types are supported:

- REG_NONE
- REG_SZ
- REG_EXPAND_SZ
- REG_BINARY
- REG_DWORD
- REG_DWORD_LITTLE_ENDIAN
- REG_DWORD_BIG_ENDIAN
- REG_LINK
- REG_MULTI_SZ
- REG_RESOURCE_LIST
- REG_FULL_RESOURCE_DESCRIPTOR

@ERROR is set to 0 if the value is written to the registry successfully. Here’s an example:

WriteValue("HKEY_CURRENT_USER\Console\Configuration", "WindowRows", 24, "REG_DWORD")
If @ERROR = 0
  ? "Value written to the registry"
Endif

If you want to delete a registry key, use DelKey, or to delete an entry from a key, use DelValue.
KiXtart does not ask for confirmation when registry values are overwritten or when subkeys are deleted. Always be very careful when changing the registry, and preferably back up your system before changing registry values. You can back up parts of the registry in KiXtart using SaveHive (see the KiXtart documentation).

The syntax of DelKey is:

delkey ("subkey")

Here’s an example:

$SUCCESS=0
If DelKey("HKEY_CURRENT_USER\MyRegKey") = $SUCCESS
  ? "Key deleted...."
Endif

Here I set a variable called $SUCCESS and test the result of the function rather than using a separate test of @ERROR. This makes the code more readable and the reason for the test more obvious.

The syntax of DelValue is:

delvalue ("subkey", "entry")

and here’s an example:

$SUCCESS=0
If DelValue("HKEY_CURRENT_USER\MyRegKey", "TestValue") = $SUCCESS
  ? "Value deleted...."
Endif

Add

KiXtart includes functions to add keys and values to the registry. I’ve already covered the method for adding a value (WriteValue). To add a key, you use AddKey. The syntax is:

addkey("subkey")

Here’s an example:

$SUCCESS = 0
If AddKey("HKEY_CURRENT_USER\MyRegKey") = $SUCCESS
  ? "Key added...."
Endif

It is only necessary for the hive part of the subkey to be present; the subkey tree will be created from scratch if need be. For example, the code AddKey("HKEY_CURRENT_USER\MyRegKey\MySubkey\ThirdLevel") will work just fine, even if MyRegKey and MySubkey don’t exist.

Registry examples

I’ve shown you a few short examples to demonstrate the use of each registry function. Below I have a slightly longer example to show how registry functions would be used in practice:

1. $SUCCESS = 0
2. If ExistKey("HKEY_CURRENT_USER\MyRegKey\MySubkey") <> $SUCCESS
3.   If AddKey("HKEY_CURRENT_USER\MyRegKey\MySubkey") <> $SUCCESS
4.     ? "AddKey failed with error code (" + @ERROR + ")"
5.     Exit 100000+@ERROR
6.   Endif
7. Endif
8. If WriteValue("HKEY_CURRENT_USER\MyRegKey\MySubkey","TestValue","Test","REG_SZ") = $SUCCESS
9.   ? "TestValue written to registry successfully"
10. Else
11.   ? "WriteValue failed with error code (" + @ERROR + ")"
12.   Exit 200000+@ERROR
13. Endif
14. ; rest of code

In line 2 I check to see whether the key I want to add a value to exists. If not, in line 3 I attempt to add it. If that fails, in line 4 I display an error message and exit. Note that I add 100000 to the error code so that in a calling program, I can tell where the failure occurred and what the error code was. If the key already existed, or has been successfully added, I end up at line 8. Here we attempt to write a value with the name TestValue. If that fails, I display an error message and exit. This time I add 200000 to the error code. I reach line 14 only if everything went well.

This is how I’d handle the failure exits:

CALL "AddMyValue"
SELECT
CASE @ERROR < 200000
  ? "AddKey failed – "
CASE @ERROR < 300000
  ? "WriteValue failed – "
ENDSELECT
SELECT
CASE @ERROR = 200002
  "not found" ?
CASE @ERROR = 200005
  "access denied" ?
CASE @ERROR = 200012
  "invalid access" ?
CASE 1
  @ERROR ?
ENDSELECT

The first Select construct displays where the failure occurred; the second tags on the reason for the failure. The second Select list may well be longer if you want to take into account all possible failure codes for registry editing. Remember that the question mark (?) equates to printing a new line.

Conclusion

In this installment of my series of Daily Drill Downs, I’ve covered loops, subroutine calls, and the ubiquitous GoTo. I’ve also shown how KiXtart can execute external commands and scripts, and I’ve described the Select construct. A major part of this Daily Drill Down has concentrated on registry manipulation, because that’s one of the most powerful uses of a logon script. In my next installment, I’ll show you how to manipulate the DOS window that KiXtart will be running in, how to communicate with a user, and how to read and write files.
http://www.techrepublic.com/article/creating-logon-scripts-with-kixtart-part-2-basic-elements/
This HTML version of Think Stats is provided for convenience, but it is not the best format for the book. In particular, some of the symbols are not rendered correctly. You might prefer to read the PDF version, or you can buy a hardcopy from Amazon.

So far we have only looked at one variable at a time. In this chapter we look at relationships between variables. Two variables are related if knowing one gives you information about the other. For example, height and weight are related; people who are taller tend to be heavier. Of course, it is not a perfect relationship: there are short heavy people and tall light ones. But if you are trying to guess someone’s weight, you will be more accurate if you know their height than if you don’t.

The code for this chapter is in scatter.py. For information about downloading and working with this code, see Section 0.2.

The simplest way to check for a relationship between two variables is a scatter plot, but making a good scatter plot is not always easy. As an example, I’ll plot weight versus height for the respondents in the BRFSS (see Section 5.4). Here’s the code that reads the data file and extracts height and weight:

    df = brfss.ReadBrfss(nrows=None)
    sample = thinkstats2.SampleRows(df, 5000)
    heights, weights = sample.htm3, sample.wtkg2

SampleRows chooses a random subset of the data:

    def SampleRows(df, nrows, replace=False):
        indices = np.random.choice(df.index, nrows, replace=replace)
        sample = df.loc[indices]
        return sample

df is the DataFrame, nrows is the number of rows to choose, and replace is a boolean indicating whether sampling should be done with replacement; in other words, whether the same row could be chosen more than once.

thinkplot provides Scatter, which makes scatter plots:

    thinkplot.Scatter(heights, weights)
    thinkplot.Show(xlabel='Height (cm)',
                   ylabel='Weight (kg)',
                   axis=[140, 210, 20, 200])

The result, in Figure 7.1 (left), shows the shape of the relationship. As we expected, taller people tend to be heavier.
Figure 7.1: Scatter plots of weight versus height for the respondents in the BRFSS, unjittered (left), jittered (right).

But this is not the best representation of the data, because the data are packed into columns. The problem is that the heights are rounded to the nearest inch, converted to centimeters, and then rounded again. Some information is lost in translation. We can’t get that information back, but we can minimize the effect on the scatter plot by jittering the data, which means adding random noise to reverse the effect of rounding off. Since these measurements were rounded to the nearest inch, they might be off by up to 0.5 inches or 1.3 cm. Similarly, the weights might be off by 0.5 kg.

    heights = thinkstats2.Jitter(heights, 1.3)
    weights = thinkstats2.Jitter(weights, 0.5)

Here’s the implementation of Jitter:

    def Jitter(values, jitter=0.5):
        n = len(values)
        return np.random.uniform(-jitter, +jitter, n) + values

The values can be any sequence; the result is a NumPy array. Figure 7.1 (right) shows the result. Jittering reduces the visual effect of rounding and makes the shape of the relationship clearer. But even with jittering, this is not the best way to represent the data: there are many overlapping points, which hides data in the dense parts of the figure and gives disproportionate emphasis to outliers. This effect is called saturation.

Figure 7.2: Scatter plot with jittering and transparency (left), hexbin plot (right).

We can solve this problem with the alpha parameter, which makes the points partly transparent:

    thinkplot.Scatter(heights, weights, alpha=0.2)

Figure 7.2 (left) shows the result. Overlapping data points look darker, so darkness is proportional to density. In this version of the plot we can see two details that were not apparent before: vertical clusters at several heights and a horizontal line near 90 kg or 200 pounds. Since this data is based on self-reports in pounds, the most likely explanation is that some respondents reported rounded values.

Using transparency works well for moderate-sized datasets, but this figure only shows the first 5000 records in the BRFSS, out of a total of 414,509.
To handle larger datasets, another option is a hexbin plot, which divides the graph into hexagonal bins and colors each bin according to how many data points fall in it. thinkplot provides HexBin:

    thinkplot.HexBin(heights, weights)

Figure 7.2 (right) shows the result. An advantage of a hexbin is that it shows the shape of the relationship well, and it is efficient for large datasets, both in time and in the size of the file it generates. A drawback is that it makes the outliers invisible.

The point of this example is that it is not easy to make a scatter plot that shows relationships clearly without introducing misleading artifacts.

Scatter plots provide a general impression of the relationship between variables, but there are other visualizations that provide more insight into the nature of the relationship. One option is to bin one variable and plot percentiles of the other. NumPy and pandas provide functions for binning data:

    df = df.dropna(subset=['htm3', 'wtkg2'])
    bins = np.arange(135, 210, 5)
    indices = np.digitize(df.htm3, bins)
    groups = df.groupby(indices)

dropna drops rows with nan in any of the listed columns. arange makes a NumPy array of bins from 135 to, but not including, 210, in increments of 5. digitize computes the index of the bin that contains each value in df.htm3. The result is a NumPy array of integer indices. Values that fall below the lowest bin are mapped to index 0. Values above the highest bin are mapped to len(bins).

Figure 7.3: Percentiles of weight for a range of height bins.

groupby is a DataFrame method that returns a GroupBy object; used in a for loop, groups iterates the names of the groups and the DataFrames that represent them.
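The digitize/groupby pipeline can be exercised end to end on synthetic data (a sketch, not the book's code; the column names follow the BRFSS example):

```python
import numpy as np
import pandas as pd

# Sketch: bin synthetic heights and summarize weight per bin, mirroring
# the digitize/groupby steps described above.
rng = np.random.default_rng(0)
df = pd.DataFrame({'htm3': rng.uniform(140, 200, 1000)})
df['wtkg2'] = 0.5 * df.htm3 + rng.normal(0, 5, 1000)

bins = np.arange(135, 210, 5)
indices = np.digitize(df.htm3, bins)   # bin index per row
groups = df.groupby(indices)           # one group per occupied bin

# mean height and median weight per bin; both rise together here
mean_heights = [group.htm3.mean() for _, group in groups]
median_weights = [group.wtkg2.median() for _, group in groups]
```

Because groupby iterates group keys in sorted order, mean_heights comes out in increasing order of height bin, which is what makes the percentile-versus-height plots in the text well-ordered.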
So, for example, we can print the number of rows in each group like this:

    for i, group in groups:
        print(i, len(group))

Now for each group we can compute the mean height and the CDF of weight:

    heights = [group.htm3.mean() for i, group in groups]
    cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]

Finally, we can plot percentiles of weight versus height:

    for percent in [75, 50, 25]:
        weights = [cdf.Percentile(percent) for cdf in cdfs]
        label = '%dth' % percent
        thinkplot.Plot(heights, weights, label=label)

Figure 7.3 shows the result. Between 140 and 200 cm the relationship between these variables is roughly linear. This range includes more than 99% of the data, so we don’t have to worry too much about the extremes.

A correlation is a statistic intended to quantify the strength of the relationship between two variables. A challenge in measuring correlation is that the variables we want to compare are often not expressed in the same units. And even if they are in the same units, they come from different distributions. There are two common solutions to these problems: transform each value to a standard score, which leads to Pearson’s product-moment correlation, or transform each value to its rank, which leads to Spearman’s rank correlation.

If X is a series of n values, xi, we can convert to standard scores by subtracting the mean and dividing by the standard deviation: zi = (xi − µ) / σ. The numerator is a deviation: the distance from the mean. Dividing by σ standardizes the deviation, so the values of Z are dimensionless and their distribution has mean 0 and variance 1. Alternatively, if we compute a new variable, R, so that ri is the rank of xi, the distribution of R is uniform from 1 to n, regardless of the distribution of X.

Covariance is a measure of the tendency of two variables to vary together. If we have two series, X and Y, their deviations from the mean are

    dxi = xi − x̄
    dyi = yi − ȳ

where x̄ is the sample mean of X and ȳ is the sample mean of Y. If X and Y vary together, their deviations tend to have the same sign, so the products dxi · dyi tend to be positive. Covariance is the mean of these products:

    Cov(X, Y) = (1/n) Σ dxi dyi

where n is the length of the two series (they have to be the same length). If you have studied linear algebra, you might recognize that Cov is the dot product of the deviations, divided by their length. So the covariance is maximized if the two vectors are identical, 0 if they are orthogonal, and negative if they point in opposite directions.
thinkstats2 uses np.dot to implement Cov efficiently:

    def Cov(xs, ys, meanx=None, meany=None):
        xs = np.asarray(xs)
        ys = np.asarray(ys)

        if meanx is None:
            meanx = np.mean(xs)
        if meany is None:
            meany = np.mean(ys)

        cov = np.dot(xs-meanx, ys-meany) / len(xs)
        return cov

By default Cov computes deviations from the sample means, or you can provide known means. If xs and ys are Python sequences, np.asarray converts them to NumPy arrays. If they are already NumPy arrays, np.asarray does nothing.

This implementation of covariance is meant to be simple for purposes of explanation. NumPy and pandas also provide implementations of covariance, but both of them apply a correction for small sample sizes that we have not covered yet, and np.cov returns a covariance matrix, which is more than we need for now.

Covariance is useful in some computations, but it is seldom reported as a summary statistic because it is hard to interpret. Among other problems, its units are the product of the units of X and Y. For example, the covariance of weight and height in the BRFSS dataset is 113 kilogram-centimeters, whatever that means.

One solution to this problem is to divide the deviations by the standard deviation, which yields standard scores, and compute the product of standard scores:

    pi = ((xi − x̄) / SX) · ((yi − ȳ) / SY)

where SX and SY are the standard deviations of X and Y. The mean of these products is

    ρ = (1/n) Σ pi

Or we can rewrite ρ by factoring out SX and SY:

    ρ = Cov(X, Y) / (SX SY)

This value is called Pearson’s correlation after Karl Pearson, an influential early statistician. It is easy to compute and easy to interpret. Because standard scores are dimensionless, so is ρ. Here is the implementation in thinkstats2:

    def Corr(xs, ys):
        xs = np.asarray(xs)
        ys = np.asarray(ys)

        meanx, varx = MeanVar(xs)
        meany, vary = MeanVar(ys)

        corr = Cov(xs, ys, meanx, meany) / math.sqrt(varx * vary)
        return corr

MeanVar computes mean and variance slightly more efficiently than separate calls to np.mean and np.var.

Pearson’s correlation is always between −1 and +1 (including both). If ρ is positive, the correlation is positive, which means that when one variable is high, the other tends to be high; if ρ is negative, when one variable is high, the other tends to be low. The magnitude of ρ indicates the strength of the correlation. If ρ is 1 or −1, the variables are perfectly correlated, which means that if you know one, you can make a perfect prediction about the other.
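The two formulations of ρ (the mean of products of standard scores, and the covariance divided by the product of the standard deviations) can be checked numerically with a quick sketch (not the book's code):

```python
import numpy as np

# Sketch: verify that mean(zx * zy) equals Cov(X, Y) / (Sx * Sy)
# on random data.
rng = np.random.default_rng(1)
xs = rng.normal(size=1000)
ys = xs + rng.normal(size=1000)

# rho as the mean of products of standard scores...
zx = (xs - xs.mean()) / xs.std()
zy = (ys - ys.mean()) / ys.std()
rho1 = np.mean(zx * zy)

# ...equals covariance over the product of standard deviations
cov = np.mean((xs - xs.mean()) * (ys - ys.mean()))
rho2 = cov / (xs.std() * ys.std())

print(rho1, rho2)  # identical up to floating-point rounding
```

The normalization factors cancel the same way in np.corrcoef, so both values also match NumPy's built-in correlation coefficient.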
Most correlation in the real world is not perfect, but it is still useful. The correlation of height and weight is 0.51, which is a strong correlation compared to similar human-related variables.

If Pearson’s correlation is near 0, it is tempting to conclude that there is no relationship between the variables, but that conclusion is not valid. Pearson’s correlation only measures linear relationships. If there’s a nonlinear relationship, ρ understates its strength.

Figure 7.4: Examples of datasets with a range of correlations.

Figure 7.4 is from Wikipedia’s article on correlation and dependence. It shows scatter plots of generated datasets with a range of correlations; in the bottom row the variables are clearly related, but because the relationship is nonlinear, the correlation coefficient is 0. The moral of this story is that you should always look at a scatter plot of your data before blindly computing a correlation coefficient.

Pearson’s correlation works well if the relationship between variables is linear and if the variables are roughly normal. But it is not robust in the presence of outliers. Spearman’s rank correlation is an alternative that mitigates the effect of outliers and skewed distributions. To compute Spearman’s correlation, we have to compute the rank of each value, which is its index in the sorted sample. For example, in the sample [1, 2, 5, 7] the rank of the value 5 is 3, because it appears third in the sorted list. Then we compute Pearson’s correlation for the ranks.

thinkstats2 provides a function that computes Spearman’s rank correlation:

    def SpearmanCorr(xs, ys):
        xranks = pandas.Series(xs).rank()
        yranks = pandas.Series(ys).rank()
        return Corr(xranks, yranks)

I convert the arguments to pandas Series objects so I can use rank, which computes the rank for each value and returns a Series. Then I use Corr to compute the correlation of the ranks.

I could also use Series.corr directly and specify Spearman’s method:

    def SpearmanCorr(xs, ys):
        xs = pandas.Series(xs)
        ys = pandas.Series(ys)
        return xs.corr(ys, method='spearman')

The Spearman rank correlation for the BRFSS data is 0.54, a little higher than the Pearson correlation, 0.51.
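The robustness claim can be checked with a quick sketch: add one wild outlier to otherwise linear data and compare the two statistics (pure NumPy, with no tie handling, unlike the pandas-based versions above):

```python
import numpy as np

# Sketch: Pearson vs. Spearman on data with one wild outlier.
def pearson(xs, ys):
    return np.corrcoef(xs, ys)[0, 1]

def ranks(xs):
    # rank of each value (1-based; ties not handled in this sketch)
    order = np.argsort(xs)
    r = np.empty(len(xs))
    r[order] = np.arange(1, len(xs) + 1)
    return r

def spearman(xs, ys):
    return pearson(ranks(xs), ranks(ys))

xs = np.arange(20.0)
ys = 2 * xs + 1
ys[-1] = 1000.0              # one outlier at the end

print(pearson(xs, ys))       # dragged well below 1 by the outlier
print(spearman(xs, ys))      # still 1.0: the rank order is unchanged
```

The outlier wrecks the linear fit, so Pearson's correlation drops sharply, but it is still the largest value, so the ranks are untouched and Spearman's correlation stays at 1.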
There are several possible reasons for the difference, including:

- If the relationship is nonlinear, Pearson’s correlation tends to underestimate its strength.
- Pearson’s correlation can be affected, in either direction, if one of the distributions is skewed or contains outliers; Spearman’s correlation is more robust.

In the BRFSS example, we know that the distribution of weights is roughly lognormal; under a log transform it approximates a normal distribution, so it has no skew. So another way to eliminate the effect of skewness is to compute Pearson’s correlation with log-weight and height:

    thinkstats2.Corr(df.htm3, np.log(df.wtkg2))

The result is 0.53, close to the rank correlation, 0.54. So that suggests that skewness in the distribution of weight explains most of the difference between Pearson’s and Spearman’s correlation.

If variables A and B are correlated, there are three possible explanations: A causes B, or B causes A, or some other set of factors causes both A and B. These explanations are called “causal relationships”. Correlation alone does not distinguish between these explanations, so it does not tell you which ones are true. This rule is often summarized with the phrase “Correlation does not imply causation,” which is so pithy it has its own Wikipedia page.

So what can you do to provide evidence of causation? One option is a randomized controlled trial; when that is not possible, an alternative is regression analysis, which is the topic of Chapter 11.

A solution to this chapter’s exercise is in chap07soln.py.
http://greenteapress.com/thinkstats2/html/thinkstats2008.html
CC-MAIN-2017-47
refinedweb
2,207
65.93
Hello, I'm a beginner at Java and trying to get to know programming and one day work my way up developing android apps. Well I was trying to make this program that would read in data from a txt file. I wanted the firstname, lastname, id# and dob each to have their own variables and then only to have the lastname and dob be printed to an outputfile (that the user would name). So I decided the best way to do that would be in arrays. I ran into a problem during my first for loop after the trybox where it says that the first array has no source so I tried to move the for loop to include the trybox and that still wouldn't work. I received these errors:

Exception in thread "main" java.util.NoSuchElementException
    at java.util.Scanner.throwFor(Unknown Source)
    at java.util.Scanner.next(Unknown Source)
    at HealthDriver.main(HealthDriver.java:38)

After commenting out some code I took out the for loop completely and just left the arrays initialized to [0] and it worked until the 2nd for loop and would instead give me the trybox exception error message (in the 2nd trybox)... Could anyone tell me what's wrong here? Or maybe someone has an idea how to do this without arrays? Any tips, nudges in the right direction, constructive criticism is welcomed. Thank you all in advance. :D!

import java.util.*;
import java.io.*; //Include header

public class HealthDriver {
    public static void main(String[] args) {
        String writeword; //used to write filename to outputfile
        String[] aryfirst = new String[4];
        String[] arylast = new String[4];
        int[] aryid = new int[4];
        String[] arydob = new String[4];

        //Prompt to enter in data.
        System.out.printf("Please enter in the name of the file: ");
        Scanner s = new Scanner(System.in);
        //Reads in the string name for the file.
        writeword = s.next();

        //Reads in from the file Patient.txt
        String filename = "Patient.txt";
        Scanner sf = null;
        try {
            sf = new Scanner(new FileReader(filename));
        } catch (Exception e) {
            System.out.printf("ERROR...There is no Patient.txt");
        }

        //Loop that reads each piece of data from txt and arranges them in each array.
        int i = 0;
        for (i = 0; i <= 4; i++) {
            aryfirst[i] = sf.next();
            arylast[i] = sf.next();
            aryid[i] = sf.nextInt();
            arydob[i] = sf.next();
        }

        //Declare a PrintStream variable
        PrintStream ps = null;
        try {
            //This creates a new file or wipes the old one. Based on what user types
            ps = new PrintStream(writeword);

            //Loop that should print out each last name and date of birth..
            for (i = 0; i <= 4; i++) {
                ps.printf("%-20s", arylast[i]);
                ps.printf("%15d", arydob[i]);
            }
            /*
            aryfirst[i] = sf.next();
            arylast[i] = sf.next();
            aryid[i] = sf.nextInt();
            arydob[i] = sf.next();
            ps.printf("%-20s", lastname);
            */
            //ps.printf("%15d", dateofbirth);
        } catch (Exception e) {
            System.out.println("ERROR! Something went WRONG!");
        }
    }
}
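For anyone hitting the same errors: the loop runs `i <= 4` over arrays of length 4 (valid indices 0 to 3), so the fifth iteration asks the Scanner for tokens that do not exist, which is the NoSuchElementException. And `arydob` holds Strings, so printing it with `%d` blows up in the second try block; `%s` is needed. A minimal corrected sketch of the read-and-format logic, written against any Scanner so it is easy to test (class name and sample data here are invented):

```java
import java.util.Scanner;

public class PatientReport {
    static final int RECORDS = 4;

    // Reads RECORDS records of "first last id dob" from the scanner and
    // returns the "lastname  dob" report as one string.
    static String buildReport(Scanner sf) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < RECORDS; i++) {   // i < RECORDS, not i <= RECORDS
            String first = sf.next();
            String last = sf.next();
            int id = sf.nextInt();
            String dob = sf.next();           // dob is a String...
            out.append(String.format("%-20s%15s%n", last, dob));  // ...so %s, not %d
        }
        return out.toString();
    }
}
```

In the original program this would be called with the file-backed Scanner (`buildReport(sf)`) and the result written with `ps.print(...)`.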
https://www.daniweb.com/programming/software-development/threads/410190/java-program-file-trouble
CC-MAIN-2018-39
refinedweb
486
76.52
Comparing Objects with IComparable and IComparer
compitionpoint, June 8, 2018, .Net framework

There are two interfaces that are useful for comparing user-defined objects. These are the IComparable<T> and IComparer<T> interfaces. The IComparable interface is implemented in a class to allow comparison between it and another object. The IComparer<T> is implemented by a separate class which does the comparison of two objects. Note that older, non-generic versions are available to use but the generic versions are much easier and you won't be needing to convert the objects to be compared.

The IComparable<T> Interface

Let's first take a look at how we can use the IComparable interface. The following example shows a class which implements the IComparable<T> interface.

public class Person : IComparable<Person>
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }

    public int CompareTo(Person other)
    {
        if (this.Age > other.Age)
            return 1;
        else if (this.Age < other.Age)
            return -1;
        else
            return 0;
    }
}

Example 1

When a class uses the IComparable<T> interface, you need to create an implementation for its single method which is the CompareTo() method. The CompareTo() method returns an integer value. It also accepts one argument which is the object to be compared to the current object. Inside our CompareTo() method in Example 1, we compared the age of the current person to the age of the other person passed to the method. By convention, if the current object is considered greater than the other object, then a value greater than 0 should be returned. We simply used 1 as the value but you can use any value greater than 0. If the current object is considered less than the other object, then a value less than 0 should be used as seen in line 12. If the two objects are considered equal, then the value 0 should be returned.
The following code shows the Person object that implemented the IComparable<T> in action. The program will determine the youngest and the oldest among three Persons.

public class Program
{
    static void Main(string[] args)
    {
        Person person1 = new Person { FirstName = "John", LastName = "Smith", Age = 21 };
        Person person2 = new Person { FirstName = "Mark", LastName = "Logan", Age = 19 };
        Person person3 = new Person { FirstName = "Luke", LastName = "Adams", Age = 20 };

        Person youngest = GetYoungest(person1, person2, person3);
        Person oldest = GetOldest(person1, person2, person3);

        Console.WriteLine("The youngest person is {0} {1}.",
            youngest.FirstName, youngest.LastName);
        Console.WriteLine("The oldest person is {0} {1}.",
            oldest.FirstName, oldest.LastName);
        Console.ReadKey();
    }

    private static Person GetYoungest(Person person1, Person person2, Person person3)
    {
        Person youngest = person1;

        if (person2.CompareTo(youngest) == -1)
            youngest = person2;

        if (person3.CompareTo(youngest) == -1)
            youngest = person3;

        return youngest;
    }

    private static Person GetOldest(Person person1, Person person2, Person person3)
    {
        Person oldest = person1;

        if (person2.CompareTo(oldest) == 1)
            oldest = person2;

        if (person3.CompareTo(oldest) == 1)
            oldest = person3;

        return oldest;
    }
}

Example 2

The youngest person is Mark Logan.
The oldest person is John Smith.

Lines 5-7 create three Person objects with sample values that we can use. Lines 9-10 create variables that will hold references to the youngest and oldest Person. In line 9, we called the GetYoungest() method which is defined in lines 19-30. It accepts the three Persons that we will be comparing against each other. Line 21 assumes that the first person is the youngest. We test if person2 is younger than the current youngest person by using the implemented CompareTo() method of the IComparable<T> interface. Inside that method, the age of person2 is compared to the age of the current youngest person.
If person2's age is lower, then -1 should be returned and person2 will be considered as the new youngest person as seen in line 24. Lines 26-27 use the same technique for person3. After the comparisons, the youngest person is returned in line 29. Line 10 calls the GetOldest() method which is defined in lines 32-43. The code is similar to the GetYoungest() method except that it compares whether a person's age is greater than the current oldest person's. Therefore, we are expecting a return value of 1 instead of -1 when we call the CompareTo() method. Lines 12-15 print the youngest and oldest person's name.

The IComparer<T> Interface

The IComparer<T> is implemented by a separate class. As the name of the interface suggests, implementing it makes a comparer class. For example, you can create multiple comparers for a Person class. You can make a comparer which compares the age, or a comparer which compares the FirstName or LastName of every person. The following example uses the Person class created in Example 1. We will be creating three comparer classes for the FirstName, LastName, and Age of a person.

public class FirstNameComparer : IComparer<Person>
{
    public int Compare(Person person1, Person person2)
    {
        return person1.FirstName.CompareTo(person2.FirstName);
    }
}

public class LastNameComparer : IComparer<Person>
{
    public int Compare(Person person1, Person person2)
    {
        return person1.LastName.CompareTo(person2.LastName);
    }
}

public class AgeComparer : IComparer<Person>
{
    public int Compare(Person person1, Person person2)
    {
        return person1.CompareTo(person2);
    }
}

Example 3

Using the IComparer<T> interface requires you to implement one method named Compare() which accepts two T objects and returns an int result. Since we used the IComparer<Person> interface, the Compare() method will automatically have two Person parameters. The FirstNameComparer compares the first names of the two persons being compared.
Inside its Compare() method, we simply used the already defined CompareTo() method of the String class (since the FirstName property is a string) for simplicity and return the value it returns. We do the same for the LastNameComparer class but we compare the LastName of the two persons instead. The Compare() method of the AgeComparer class uses the CompareTo() method of the actual Person class we defined in Example 1 to save us from repeating the same code again.

The Compare() method should return 0 if both parameters are considered equal, a value greater than 0 if the first parameter is greater than the second parameter, and a value less than 0 if the first parameter is less than the second parameter.

The following program asks a user which property to use to sort a list of persons.

public class Program
{
    static void Main(string[] args)
    {
        List<Person> persons = new List<Person> {
            new Person { FirstName = "John", LastName = "Smith", Age = 21 },
            new Person { FirstName = "Mark", LastName = "Logan", Age = 19 },
            new Person { FirstName = "Luke", LastName = "Adams", Age = 20 }};

        Console.WriteLine("Original Order");
        foreach (Person p in persons)
            Console.WriteLine("{0} {1}, Age: {2}", p.FirstName, p.LastName, p.Age);

        Console.WriteLine(" Sort persons based on their:");
        Console.WriteLine("[1] FirstName [2] LastName [3]Age");

        Console.Write("Enter your choice: ");
        int choice = Int32.Parse(Console.ReadLine());

        ReorderPersons(choice, persons);

        Console.WriteLine("New Order");
        foreach (Person p in persons)
            Console.WriteLine("{0} {1}, Age: {2}", p.FirstName, p.LastName, p.Age);
    }

    private static void ReorderPersons(int choice, List<Person> persons)
    {
        IComparer<Person> comparer;

        if (choice == 1)
            comparer = new FirstNameComparer();
        else if (choice == 2)
            comparer = new LastNameComparer();
        else
            comparer = new AgeComparer();

        persons.Sort(comparer);
    }
}

Example 4

Original Order
John Smith, Age: 21
Mark Logan, Age: 19
Luke Adams, Age: 20

Sort persons based on their:
[1] FirstName [2] LastName
[3]Age
Enter your choice: 1

New Order
John Smith, Age: 21
Luke Adams, Age: 20
Mark Logan, Age: 19

Original Order
John Smith, Age: 21
Mark Logan, Age: 19
Luke Adams, Age: 20

Sort persons based on their:
[1] FirstName [2] LastName [3]Age
Enter your choice: 2

New Order
Luke Adams, Age: 20
Mark Logan, Age: 19
John Smith, Age: 21

Original Order
John Smith, Age: 21
Mark Logan, Age: 19
Luke Adams, Age: 20

Sort persons based on their:
[1] FirstName [2] LastName [3]Age
Enter your choice: 3

New Order
Mark Logan, Age: 19
Luke Adams, Age: 20
John Smith, Age: 21

Lines 5-8 create a List of Person objects containing three Persons with predefined values for each property. Lines 11-12 show the original order of the persons. Lines 14-15 show the list of options that the user can choose as the basis of the sorting. Lines 17-18 ask the user for his/her choice. Line 20 calls the ReorderPersons() method defined in lines 27-39. The method accepts the choice and the list of persons to sort. Inside the method, we define a variable that will hold the type of comparer class to use. We used IComparer<Person> as the type so it can hold any comparer class, since they all implement this interface. We look at the different possible values of choice in lines 31-36 and assign the proper comparer based on the number value. In line 38, we use the Sort() method of the List<T> class. The Sort() method has an overloaded version which accepts an IComparer<T> object. We pass in the comparer created in line 29, which holds whichever type of comparer was chosen. The Sort() method will then sort the Person objects using the comparer class that we provided.
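As a side note beyond the article's examples: for one-off orderings, List<T>.Sort also accepts a Comparison<T> delegate, so a dedicated comparer class is not always necessary. A small sketch using the same Person type (the lambda forms here are standard .NET APIs, not code from the article):

```csharp
// Sorts in place by Age without defining an AgeComparer class.
persons.Sort((a, b) => a.Age.CompareTo(b.Age));

// Equivalent LINQ form: returns a new ordered list instead of sorting
// in place; requires "using System.Linq;".
var byLastName = persons.OrderBy(p => p.LastName).ToList();
```

The comparer-class approach from the article still wins when the same ordering is reused in many places or needs its own unit tests; the lambda forms are handy for local, one-off sorts.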
https://compitionpoint.com/system-icomparable/
CC-MAIN-2021-31
refinedweb
1,527
54.32
What will we cover? You want to start you first OpenCV project in PyCharm. import cv2 And you get. And you are ready. If it worked (no read line under cv2) then skip ahead to Step 5 to try it out. Step 4: Method 2 (if Method 1 fails) Install the OpenCV library in your virtual environment Use pip is the package manager system for Python. You want to ensure you use the pip from the above library. ./pip install opencv-python Download the above image and save it as Castle.png in your project folder. import cv2 img = cv2.imread("Castle.png") gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) cv2.imshow("Over the Clouds", img) cv2.imshow("Over the Clouds - gray", gray) cv2.waitKey(0) cv2.destroyAllWindows() Which should result in something like this.
https://www.learnpythonwithrune.org/install-opencv-4-in-pycharm/
CC-MAIN-2021-25
refinedweb
134
88.23
Geolocation is a crucial aspect of mobile development. Fortunately, finding the location of users on Windows Phone 8 is easy! This tutorial will show you how it's done by demonstrating the Geolocator class. Tutorial Overview The Windows Phone SDK is a very powerful development platform that lets developers create great applications with the Silverlight framework. The recent upgrade of the SDK to version 8.0 (Windows Phone 8) brings about a host of changes that makes some generic tasks easier to perform. One of the areas that saw an improvement in Windows Phone 8 is the Location API. In previous versions of the windows phone SDK, getting a user’s current location was a bit untidy, but the current technique for doing this has been polished and made a bit more direct. The new technique uses Microsoft's ASYNC method call, which improves performance of applications while gaining access to a user's location. Let's dive in and take a look at this new way of accessing the current location on Windows Phone 8. Prerequisites To attempt this tutorial, I will assume you have a basic understanding of the Windows Phone platform. I would also like to believe you have some basic understanding of XAML and C# as these are the language we are going to write our application with. You will also need to have Visual Studio 2012 or higher with the Windows Phone 8 SDK and a working Windows Phone emulator installed on your local machine. You can alternatively use an actual Windows Phone device in place of the emulator. 1. Create a New Project Open Visual Studio and create a new Windows Phone project with File -> New -> Project. On the left hand pane of the new project window make sure you select the Windows Phone option under the Visual C# sub category. Choose Windows Phone App on the main window and name the project anything you would like and then click OK when you are done. 2. 
Setup the User Interface

Now that we have created our application, we can focus on our main goal: getting the user's current location and displaying it on the screen. In simple terms, our application is going to have only 2 elements displayed, a Button and a TextBlock. The Button would be what we would click to tell our app to grab a user's current location, and the TextBlock would display the geo coordinates of our current location.

Let's go ahead and create our simple user interface. Use the Visual Studio ToolBox to drag and drop a Button and a TextBlock anywhere on the screen. At this point, your application should have a TextBlock and a Button as part of its User Interface. It should look similar to this:

I would strongly recommend you change the name property of both your Button and TextBlock in order to be consistent with this tutorial. Set the name of the Button to be MyButton and of the TextBlock to be MyTextBlock from your XAML code window.

<Button Name="MyButton" Content="Button" HorizontalAlignment="Left" Margin="0,76,0,0" ... />
<TextBlock x:Name="MyTextBlock" ... />

3. Add Geolocation Logic

At this point, we have our UI ready and we can go ahead and start writing the logic for our application. Double-click the Button from the designer view and Visual Studio should automatically take you to the code view with a method already created.

private void MyButton_Click(object sender, RoutedEventArgs e)
{
}

This generated method is a delegate or callback method for the click event of our button. This means the code within this method will only execute when our button is clicked. To align with our aim, we would put the code that gets our current location within this method since we want our current location determined whenever our Button is clicked. To achieve this, we are going to use the GeoLocator and the GeoPosition classes. The GeoLocator class helps get our current location and does all the interaction with the GPS/Network and returns a GeoPosition object.
On the other hand, the GeoPosition class provides us with a way of consuming the returned data that the GeoLocator returns. Basically, think of the GeoLocator as a request tool and the GeoPosition object as a response tool. These classes also provide room for customizing our requests and responses. For example, we can tell the GeoLocator how accurate (to the nearest meter) we want our current location to be, and how fast we want our current location polled.

I have written a method that helps get our current location and I will explain it in detail a bit later. For now, add the following namespace reference to your code file: using Windows.Devices.Geolocation;. Next, copy the code snippet below and paste it in:

private async void GetCurrentLocation()
{
    Geolocator locationFinder = new Geolocator
    {
        DesiredAccuracyInMeters = 50,
        DesiredAccuracy = PositionAccuracy.Default
    };

    try
    {
        Geoposition currentLocation = await locationFinder.GetGeopositionAsync(
            maximumAge: TimeSpan.FromSeconds(120),
            timeout: TimeSpan.FromSeconds(10));

        String longitude = currentLocation.Coordinate.Longitude.ToString("0.00");
        String latitude = currentLocation.Coordinate.Latitude.ToString("0.00");

        MyTextBlock.Text = "Long: " + longitude + " Lat: " + latitude;
    }
    catch (UnauthorizedAccessException)
    {
        MessageBox.Show("An exception occurred");
    }
}

This method does all the work for us and even goes ahead to set our TextBlock text property for us. Let's closely examine what the method is doing. Firstly, we create a new GeoLocator object called locationFinder. We first tell it how accurate in meters we want our location to be and we set how accurate we want the result.

Geolocator locationFinder = new Geolocator
{
    DesiredAccuracyInMeters = 50,
    DesiredAccuracy = PositionAccuracy.Default
};

Next, we instantiate a GeoPosition object called currentLocation within a try/catch block in case of any exceptions.
We then assign it the GeoPosition object that our GeoLocator returns from the GetGeopositionAsync method.

Geoposition currentLocation = await locationFinder.GetGeopositionAsync(
    maximumAge: TimeSpan.FromSeconds(120),
    timeout: TimeSpan.FromSeconds(10));

String longitude = currentLocation.Coordinate.Longitude.ToString("0.00");
String latitude = currentLocation.Coordinate.Latitude.ToString("0.00");

Finally, we get our returned longitude and latitude and set our TextBlock to display these values.

MyTextBlock.Text = "Lon: " + longitude + " Lat: " + latitude;

It is that simple! There are a few more things to do before we test our application. First, we need to call our GetCurrentLocation method within the delegate method of our Button that was created for us initially.

private void MyButton_Click(object sender, RoutedEventArgs e)
{
    GetCurrentLocation();
}

This means that whenever we click our button, our GetCurrentLocation method will execute and our current location will be retrieved for us.

Finally, we need to request permission to use the Location API on Windows Phone. We do this by editing our manifest file. In the Solution Explorer, look for an entry titled Properties, and toggle it to see its subentries. One of the subentries' filenames should be WMAppManifest.xml. Double-click this and you will see a GUI with four tabs, one of which is titled Capabilities. Select that tab and you should see something like this:

Now, check the option ID_CAP_LOCATION if unchecked and save your project (CTRL + S). With that done, you can close the WMAppManifest.xml window. What we have just done is explicitly requested permission to allow our app to use the Windows Phone GPS/Location tool.

With that done, we can now run our application for the first time! If you are using a physical Windows Phone device for testing, make sure you have Location turned on in the settings and have a valid internet connection over a WiFi or mobile network.
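Under the hood, ticking that checkbox just adds a capability entry to WMAppManifest.xml. The relevant fragment looks roughly like this (a sketch of the Windows Phone 8 manifest format, trimmed to the one capability discussed here):

```xml
<Capabilities>
  <!-- Grants the app access to the phone's location services -->
  <Capability Name="ID_CAP_LOCATION" />
</Capabilities>
```

If this entry is missing, the GetGeopositionAsync call fails with the UnauthorizedAccessException that the code above catches.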
This is very important and mandatory for our application to work. If you are using a windows phone emulator, also make sure the Location is enabled as well on the Emulator and that the Internet connection is a working one. To launch the application, look for the Green Play Button on the visual studio menu and select your emulator or Device option if you are using a handset. Make sure the solution configuration on the right side of the button is set to Debug. 4. Test the App Click the Green Button to launch the application. The app should launch and display the page we drew our UI upon. Now, click the button to command the app to get our current location coordinates and you should be presented with a result that looks like this: Conclusion By now you may have noticed that we have successfully achieved what we set out to achieve and our current location is being displayed to us. You can see how easy it was to achieve this with such minimal programming! This is a very simple yet important operation in Windows Phone development Feel free to play around with the customizable settings of the GeoLocator and GeoPosition classes. Thanks for reading! Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/windows-phone-8-sdk-geolocation-services--mobile-20164
CC-MAIN-2018-39
refinedweb
1,471
54.73
When I type in the date it is putting the Sept. in the date spot but it is putting 30, 2007 in the pay to the order slot. Any ideas why? Also, when it asks for my input it asks for the date just fine but then all on the same line it asks Enter first name: Enter last name: Enter amount: Any ideas would be helpful.

Code:
#include "stdafx.h"
#include <iostream>
#include <string>
using namespace std;

string todaysDate;
string firstName;
string lastName;
double amount;

void enterData();
void printCheck();

int _tmain(int argc, _TCHAR* argv[])
{
    enterData();
    printCheck();
    return 0;
}

void enterData()
{
    cout << "Enter today's date: ";
    cin >> todaysDate;
    cout << "Enter the first name: ";
    cin >> firstName;
    cout << "Enter the last name: ";
    cin >> lastName;
    cout << "Enter the amount: ";
    cin >> amount;
    cout << endl << endl;
}

void printCheck()
{
    cout << "Zzyz Corp                Date: " << todaysDate << endl;
    cout << "1164 Sunrise Avenue" << endl;
    cout << "Kalispell, Montana\n" << endl;
    cout << "Pay to the order of:" << firstName << lastName << "$" << amount << endl;
    cout << "UnderSecurity Bank" << endl;
    cout << "Missoula, MT" << endl;
    cout << " ____________________" << endl;
    cout << " Authorized Signature";
    cout << endl << endl;
}
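The behavior follows from how `cin >> todaysDate` works: operator>> extracts a single whitespace-delimited token, so typing "Sept. 30, 2007" stores only "Sept." in the date and leaves "30," and "2007" sitting in the stream, where the next two extractions (first and last name) pick them up. That leftover input also satisfies the later reads immediately, which is why all the prompts appear to print on one line without pausing. Reading the whole line with std::getline avoids this; a minimal sketch (function names invented, demonstrated against string streams so the behavior is easy to check):

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// operator>> stops at the first whitespace, so "Sept. 30, 2007"
// yields only "Sept."
std::string firstToken(std::istream& in) {
    std::string tok;
    in >> tok;
    return tok;
}

// std::getline consumes the entire line, spaces included.
std::string wholeLine(std::istream& in) {
    std::string line;
    std::getline(in, line);
    return line;
}
```

In the original program, replacing `cin >> todaysDate;` with `getline(cin, todaysDate);` fixes the date. One caveat: if you mix `>>` with getline, discard the leftover newline first (for example with `cin.ignore()`), or the next getline will read an empty line.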
https://cboard.cprogramming.com/cplusplus-programming/94095-trying-finish-up-program.html
CC-MAIN-2017-30
refinedweb
177
62.14
RationalWiki talk:What is going on in the world? Archives for this talk page: Archive list Oldest archives: 0001 -- 0002 -- 0003 -- 0004 -- 0005 [edit] Bob Jones University Why put the emphasis on "they"? Its not as if the investigative agency was supposed to be on their side really. [edit] What happens >9999? So, um, what happens when the WIGO counter goes past 9999? Does is break the voting app or not? It's probably going to happen this year. Sterile (talk) 21:29, 3 January 2015 (UTC) - I predict a huge anticlimax. 141.134.75.236 (talk) 21:31, 3 January 2015 (UTC) - There have been vote numbers larger than 10k inserted by mistake and nothing broke, so...--ZooGuard (talk) 21:32, 3 January 2015 (UTC) - The above is by an anti-P10K government shill, please ignore it. As a representative of the new "Poll 10K" movement, I predict something between a colossal space whale invasion and instant nuclear annihilation. TH3 31331351 FU22YC47P07470 ☢ (reprimand) 00:42, 4 January 2015 (UTC) - Computers don't think like you or I, sterile. You can get worried at 2,147,483,647. Ikanreed (talk) 20:22, 6 January 2015 (UTC) [edit] "Salon": blog, clog, or world? I was unaware that Salon is considered a sketchy source. Comment? Sprocket J Cogswell (talk) 16:57, 4 January 2015 (UTC) - It's not the greatest nor the worst, but opinion columns really belong in WIGO:Blogs rather than World. The same goes for the Newsweek piece about the Bible. ₩€₳$€£ΘĪÐ Methinks it is a Weasel 17:55, 4 January 2015 (UTC) - Thanks, Weaseloid. Yeah, I wondered about the placement, but it seemed like it was legitimately reporting on this article more than anything else. Yes, there was background spin provided... Apparently the Quanta article was republished by Business Insider, but somehow I don't care to link to that. Yet another site on the edge of respectability? IDK. Sprocket J Cogswell (talk) 18:19, 4 January 2015 (UTC) - It's a fuzzy distinction but I would say it looks more comment & analysis (i.e. 
blogging, or the journalistic equivalent) than reporting, & not really 'news', given that it was about an article from about a year ago. Wėąṣėḷőįď Methinks it is a Weasel 18:25, 4 January 2015 (UTC) - Salon has occasional straight-up news pieces, but its most-clicked stuff is blogs/clogs. You might have to apply editorial judgement or something - David Gerard (talk) 19:13, 4 January 2015 (UTC) [edit] There's nothing wrong with teaching the law as it exists More of a debate namespace thing, but since an item here brought it up, I'm gonna engage here. I can't condemn anyone who opposes gun control on strictly legalistic grounds. Rule of law matters, and constitutionally recognized rights are important. Teaching children about what the constitution says is good. A healthier electorate might recognize the constitution as a living document a little better, but the tone of the headline here is essentially denialism. It's no different than fundamentalists opposing teaching evolution because they don't like the implications. To me, as a rather radical gun-control proponent(I believe it's evidentially supported) the implications of the second amendment are grim. But those implications don't mean I should ignore the reality of what is law or should oppose teaching that law to students at appropriate education levels. Ikanreed (talk) 20:20, 6 January 2015 (UTC) - PArt of the issue with it comes from different ideas of what the amendment means though. Teaching the context of the amendment, what its historicaly meant and means now can vary across states or even cities/districts within cities, and not all of them are going to have gun control-style ideas about what to be saying about the second amendment. --Miekal 20:31, 6 January 2015 (UTC) - Sure, but that goes for other things they teach in school, like free speech or due process as well. I can't bring myself to say that the second amendment is somehow special in its flexibility. 
I'm also all for teachers who wish it taking the opportunity to tell students that gun ownership increases your annual death rate by a couple percent during the lecture. Ikanreed (talk) 20:35, 6 January 2015 (UTC) - Considering, as the article says, the curriculum on gun rights is going to be all grades, a minimum of three weeks, and decided by a combination of State board of education on the recommendation or approval of the NRA, i don't see anything but "Yay guns pro gun hate the control" coming from it.--Miekal 20:38, 6 January 2015 (UTC) - Yeah, and that's a point worth mocking. Teaching extant rights isn't. I can criticize the tone of a headline, right? Ikanreed (talk) 20:45, 6 January 2015 (UTC) - I agree the headline itself is crappy, i was just pointing out that the bills themselves aren't so innocent "teach us about the amendment" as your post seems to want it to be. --Miekal 20:47, 6 January 2015 (UTC) - I was raging specifically about the headline. Another failure of clarity on my part. Ikanreed (talk) 20:51, 6 January 2015 (UTC) - Do you mean the article headline or the WIGO entry? - Schools should teach about key amendments & what legal rights they provide, including the 2nd amendment, but don't they already do that? What these bills are trying to set up, on the other hand, is very much politicised teaching that would push students to one side of what's a very divisive issue. Wėąṣėḷőįď Methinks it is a Weasel 20:59, 6 January 2015 (UTC) [edit] Faux news 'expert' Talks shit on faux news, forced to appologise to the entire city of Birmingham, UK and much of London as well. Oldusgitus (talk) 08:44, 12 January 2015 (UTC) - Added to WIGO:Clogs. Wéáśéĺóíď Methinks it is a Weasel 20:46, 12 January 2015 (UTC) [edit] My county has 580 goats! Anyone do any better? Ikanreed (talk) 22:06, 13 January 2015 (UTC) [edit] Hey David Cameron You know it's possible to send encrypted messages over unencrypted messaging services, right? 
Frederick♠♣♥♦ 07:48, 14 January 2015 (UTC) [edit] Pope Francis Well, he isn't a hypocrite when it comes to satirical commentary of religion within free speech.— Unsigned, by: 74.108.228.157 / talk / contribs [edit] Look If we wanted to use non-black mugshots for target practice we'd have to start arresting people who aren't black. And frankly, that just doesn't make much sense, does it? We're police. We arrest the right kind of people. Ikanreed (talk) 16:00, 16 January 2015 (UTC) - Seriously, are US police departments competing for some Most Openly Racist Corps Award or something? 141.134.75.236 (talk) 16:13, 16 January 2015 (UTC) - It's Florida. Blech. I'm normally the optimistic "kumbaya" type that assumes the best intentions in everyone before leveling accusations, and even I can see that this was an extremely terrible idea. Noir LeSable (talk) 16:52, 16 January 2015 (UTC) - Well if the mug shot was selected at random, it's not really racist. What I don't understand is why they are an actual person's face. At least use a dead person. And don't cops usually shoot for the center of mass instead of the face (where the bullet holes are)?TheriziπosaurusG (talk) 20:25, 16 January 2015 (UTC) - Remember that this is Florida, a place with kinda' a low bar for police departments. The state was, after all, home to the incident where the Key West Police Department was declared a criminal enterprise for acting as a racket for cocaine smugglers. It is easy to get by when you can just say "I know that what we did was stupid, but we have not crossed our 'the police department is literally a racketeering operation' threshold yet, so I think we're still normal by Floridian standards." Crow7878 (talk) 20:49, 16 January 2015 (UTC) [edit] Al Sharpton story I've just come in from a ride, and haven't had my second coffee of the morning yet, so maybe I missing the irony. 
A tabloid article, equating racial politics in the US with entertainment in the headline, presented on here in a snarky tone which suggests some uppity folks are over-entitled? I'm not a fan of Hollywood and I neither know nor care about the Oscars. I can, however, see that racial politics in the US are still fucked up and I can see a racist subtext from 100 yards. Someone needs to fix up. London Grump (talk) 09:30, 20 January 2015 (UTC) - Seems rather tame in comparison to the John Boehner entry. But then maybe I'm just guilty of having skewered a sacred pig. Slings and Arrows (talk) 10:04, 20 January 2015 (UTC) - More like puking out the same old reactionary bullshit...London Grump (talk) 14:16, 20 January 2015 (UTC) - Indeed, the way the article shoehorns in the phrase "master race" time and time again speaks volumes. Doxys Midnight Runner (talk) 14:22, 20 January 2015 (UTC) - I think we should move that story to CLOGs. Zero (talk - contributions) 17:05, 20 January 2015 (UTC) - Yeah, no kidding. "Most annoying person on the planet" wut? >.> 141.134.75.236 (talk) 17:15, 20 January 2015 (UTC) - Is there a term for people believing silly things about their political foes with respect to who they look up to etc.? You see it a lot with creationists and Darwin and Dawkins, and teabaggers with Gore, and racists with, well, Al Sharpton. Queexchthonic murmurings 17:28, 20 January 2015 (UTC) - Well, portraying the enemies as mindless fanboys/sheeple (which is pretty similar to portraying them as mindless fanatics) is a common strawmanning tactic. 141.134.75.236 (talk) 17:37, 20 January 2015 (UTC) Sharpton in general seems to be defined by people vaguely arguing he's incendiary, without ever, ever citing something he's said. Ikanreed (talk) 17:32, 20 January 2015 (UTC) - True enough. I've seen him accused of being a black supremacist with possibly genocidal ambitions but when I ask for evidence or give them Sharpton quotes which contradict this people shut up. 
It's like the whitey tape, except there's no timeframe, an eternally regenerating rumor mill. Al is, at worst, a slightly skeezy attention whore. But hey, in the interest of fairness I'll keep the challenge open ended. If anyone can produce a verifiable quote of Sharpton saying he hates white people or calling for black supremacy link it here.--Zipperback (talk) 02:15, 21 January 2015 (UTC) - The post and article are still racist, even if there's film of Al Sharpton and Louis Farrakhan in a milk bar with their feet on the backs of white women looking like Alex and his droogs, planning the revolution. London Grump (talk) 06:47, 21 January 2015 (UTC) - Here are a few of the more controversial quotes from the Reverend Al Sharpton. Now whether they are racist, anti-Semitic, homophobic, or just incoherent babbling is open to interpretation. - "If the Jews want to get it on, tell them to pin their yarmulkes back and come over to my house.” – Quoted in Newsday (August 18, 1991) - “White folks was in caves while we was building empires. We taught philosophy and astrology and mathematics before Socrates and them Greek homos ever got around to it.” – 1994 Sharpton appearance at Kean College - (20 March 2000). - “They tried to say that being gay is a sin, and I said that adultery is a sin. Adultery is responsible for breaking up more marriages, but do we put that in the Constitution? It’s absurd.” -- Remarks regarding homosexual marriage (3 August 2005). - “Jim Crow is old. That's not who I'm mindful of today. The problem is that Jim Crow has sons. The one we've got to battle is James Crow Jr. He's a little more educated. He's a little slicker. He's a little more polished, but the results are the same.” -- Remarks at the funeral of Rosa Parks (3 November 2005). - Slings and Arrows (talk) - So... the most recent of these is from 10 years ago and is completely fucking reasonable statement of modern racism's manifestation. 
Now, you aren't going to find me approving of anyone using the phrase "the Jews" to say anything at all, but even that isn't nearly as racist or incendiary as what people attribute to him, even today. Ikanreed (talk) 22:04, 21 January 2015 (UTC) - The Jews seem like nice people. 141.134.75.236 (talk) 00:49, 22 January 2015 (UTC) The link to the Sharpton story (which has now been deleted from RationalWiki), was an article that appeared in the New York Daily News; a newspaper that is owned by media mogul Mort Zuckerman (who also owns US News and World Report). Zuckerman is a Jewish businessman, with a predominantly Jewish staff of writers; which is altogether irrelevant, but perhaps worth noting in light of the recent Charlie Hebdo incident. The Daily News is the fourth most widely circulated newspaper in the United States. To suggest that it is guilty of distributing racist material, borders on insanity. You can usually find the newspaper on display at your local supermarket. Slings and Arrows (talk) 00:53, 22 January 2015 (UTC) - Maybe I'm doing something wrong (I'm damn good with Google though) but after 15 minutes of searching, I can't seem to find source for the second one anywhere. I have a couple Yahoo answers search results and page upon page of right-wing blogs. Not a single reliable source for that quote. As far as I can tell, that's the only real problematic quote in the bunch and it seems to not actually be real. Wikipedia has the quote linking back to a google book that can't be accessed and then has a link to another article supposedly appologizing for it but doesn't. Even if it is in the book, why can't I find a single news article commenting on it or a transcript of the speech. 1994 isn't all that long ago. AyzmoCheers 06:19, 22 January 2015 (UTC) - White folks was [sic] in caves while we were building empires…. We taught philosophy and astrology and mathematics before Socrates and those Greek homos ever got around to it. 
Speech at Kean College (1994), transcribed in The Forward (December 1995), as quoted in Foolish Words : The Most Stupid Words Ever Spoken (2003) by Laura Ward, p. 192. Slings and Arrows (talk) 06:29, 22 January 2015 (UTC) - I think it's at least understandable to look with a jaundiced eye upon anyone who did what he did during the Tawana Brawley fiasco. He was responsible for throwing an enormous amount of gasoline on the fire, taking some remarkably crass actions in the process, and engaged in defamation which irreparably damaged the life of a completely innocent person. 24.186.49.177 (talk) 03:36, 23 January 2015 (UTC) - The Tawana Brawley incident is a classic example of Sharpton's tried-and-true technique. Rush in front of the cameras with allegations of racism, before the facts are even known. And when the story is later discovered to be an outright fabrication, then blame it on media bias. Slings and Arrows (talk) 23:23, 25 January 2015 (UTC) [edit] Baked Bads Just wanted to add clarification on the whole Azucar Bakery thing: According to Silva, the bakery never refused or denied the guy service outright. She did offer a workaround. Honestly, I think that's one of the best ways to handle it. Offer to make the blanks/partials (or the one with the two guys holding hands), and then give extra icing or sugar letters for the message/red X/whatever. Guy gets his cakes espousing his views, baker gets a customer without compromising her views, everybody's happy (Except for Bill Jack and whoever the hateful message is directed towards). Noir LeSable (talk) 19:26, 22 January 2015 (UTC) - "RARGH, you won't let us discriminate against people and NOW YOU WON'T LET US FORCE PEOPLE TO DISCRIMINATE! DOUBLE STANDARD RARGH." Fundies can just fuck right off. Ikanreed (talk) 19:51, 22 January 2015 (UTC) - Ironically, fancy cakes are kinda gay. X Stickman (talk) 20:35, 22 January 2015 (UTC) - *turns into a little black kid* That's gay-cist! And cake-ist! 
141.134.75.236 (talk) 11:36, 23 January 2015 (UTC) - You want doctors and pastors to be allowed to deny service based on their beliefs, but when a bakery doesn't want to put an anti-gay message on a cake due to their non-homosexuality-condemning beliefs, it's religious discrimination? Sheesh, religious right, make up your mind. 141.134.75.236 (talk) 11:56, 23 January 2015 (UTC) - This is specifically about trying to bait liberals into a double standard, thinking that forcing someone to say something specific is the same as offering service to people you are bigoted against. They don't understand how one is free speech, and the other is a gigantic social problem people already suffered through decades of tough struggle to reform. Ikanreed (talk) 13:54, 23 January 2015 (UTC) - Typical activist troll, in other words. - Smerdis of Tlön, A ⇒ ¬A. 15:56, 23 January 2015 (UTC) [edit] Um... Call me crazy, but why would a landlord asking for compensation for a damaged bathroom floor impose on a person's right to pee while standing? To me it seems like it only violates a person's 'right' to fail to aim properly while peeing and fail to clean up the mess they make. >.> 141.134.75.236 (talk) 11:18, 23 January 2015 (UTC) - Almost everywhere there's an expectation that landlords are responsible for maintenance of properties, especially with respect to superficial property damage from day-to-day use. Ikanreed (talk) 13:51, 23 January 2015 (UTC) - I think this is the point. By my understanding, the ruling does not make it an unalienable right for men to pee in their flat however they please without having to pay for damages. It simply states: It is common for men to pee standing, and because there were no provisions on this in the rental contract, the man could justly assume that he could do so without being charged for damages caused by it. Because in Germany, repairing damage caused by day-to-day use is mostly the landlord's obligation upon moving out.
--Sophophobe (talk) 22:56, 23 January 2015 (UTC) [edit] North Korean restaurants Is it possible the restaurants are just mafia dens? Cracked.com already posted that North Korean agents are known to swap counterfeit notes for real ones and then send them back to the state. They also cite articles relating to North Korea's restaurants already open in East Asia, which are believed to be big on money laundering (also for the purpose of financing the state).-- Forerunner (talk) 15:38, 26 January 2015 (UTC)
http://rationalwiki.org/wiki/RationalWiki_talk:What_is_going_on_in_the_world%3F
How to use python and popen4 to capture stdout and stderr from a command

You can use popen to capture stdout from a command:

    import os
    stdout = os.popen("dir asdkfhqweiory")
    print stdout.read()

And your output will be something like:

    >>> ================================ RESTART ================================
    >>>
     Volume in drive C has no label.
     Volume Serial Number is XXXXXXXX

     Directory of C:\Python25

    >>>

If you wanted the error message, popen won't give it to you. To capture both stdout and stderr, use popen4:

    import os
    (dummy, stdout_and_stderr) = os.popen4("dir asdkfhqweiory")
    print stdout_and_stderr.read()

This will give you the following output (which includes the error message):

    >>> ================================ RESTART ================================
    >>>
     Volume in drive C has no label.
     Volume Serial Number is XXXXXXXX

     Directory of C:\Python25

    File Not Found
    >>>

See for more information.

Related posts
- How to use the bash shell with Python's subprocess module instead of /bin/sh — posted 2011-04-13
- How to capture stdout in real-time with Python — posted 2009-10-12
- How to get stdout and stderr using Python's subprocess module — posted 2008-09-23
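os.popen4 exists only in Python 2 (it was removed in Python 3). For anyone on a modern Python, a rough equivalent of the above, added here as a sketch rather than part of the original post, is subprocess with stderr merged into stdout:

```python
import subprocess
import sys

# Merge stderr into stdout, like os.popen4 did. The child command here
# just prints one line to each stream using the current interpreter.
cmd = [sys.executable, "-c",
       "import sys; print('to stdout'); print('to stderr', file=sys.stderr)"]
result = subprocess.run(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True)
print(result.stdout)   # contains both the stdout line and the stderr line
```

The `stderr=subprocess.STDOUT` argument is what folds the error stream into the captured output, so `result.stdout` plays the role of `stdout_and_stderr` above.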
https://www.saltycrane.com/blog/2007/03/how-to-use-python-and-popen4-to-capture_12/
Hey, I'm completely new to the Raspberry Pi and am looking for help with this same error I've been trying to solve for the past few days. I completely reinstalled OpenCV twice now from these two methods on a Raspberry Pi 3 Model B:

... -opencv-3/
... pberry-pi/

Many of the packages were out of date, so that may be my problem... I looked through many for the up-to-date ones and downloaded them, but it only worked one day and now I'm back to where I started with the following error:

    OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /build/opencv-ISmtkH/opencv-2.4.9.1+dfsg/modules/imgproc/src/color.cpp, line 3737
    Traceback (most recent call last):
      File "faceRecognition.py", line 11, in <module>
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.error: /build/opencv-ISmtkH/opencv-2.4.9.1+dfsg/modules/imgproc/src/color.cpp, line 3737: error: (-215) scn == 3 || scn == 4 in function cvtColor

Then here's the file faceRecognition.py, done in Python 3 (3.4.2) [also tried in Python 2 (2.7.9)]:

    import numpy as np
    import cv2

    # multiple cascades: ... arcascades
    # ... efault.xml
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface]
    cv2.imshow('img',img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

    cap.release()
    cv2.destroyAllWindows()

I reviewed and checked to make sure the Haar cascade worked, and it was connected. I did this just yesterday and the error had gone away and the program had been working fine. I shut down the Pi and had it sit out overnight. I plugged it in the following morning and that error came up again. The software I'm using, by the way, is installed from NOOBS. I have tried changing the port for the Pi camera to 1, but that was not the solution. I've looked through basically every forum and it seems nothing has worked for me, but I have a suspicion that the picamera is simply not loading the video into the correct format for the Pi to correctly process the series of jpg images.
If you could offer any assistance that would be great, thank you and let me know if you need any other information.
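As a side note on the error itself: this particular assertion usually fires when cvtColor receives an empty frame, i.e. cap.read() returned ret=False and img=None because the camera was not ready. The guard below is a sketch of mine (it uses a fake capture object so it runs without OpenCV; the real cv2.VideoCapture.read returns the same (ret, frame) pair):

```python
# cv2.VideoCapture.read() returns (ret, frame); when the camera is not
# ready, ret is False and frame is None, and passing None to
# cv2.cvtColor raises exactly the "(-215) scn == 3 || scn == 4" error.
# FakeCapture is a hypothetical stand-in so the guard can be shown
# without an attached camera or the cv2 module.
class FakeCapture:
    def __init__(self, frames):
        self._frames = list(frames)

    def read(self):
        if self._frames:
            frame = self._frames.pop(0)
            return frame is not None, frame
        return False, None

def grab_valid_frame(cap, retries=5):
    """Return the first non-empty frame instead of handing None on."""
    for _ in range(retries):
        ret, frame = cap.read()
        if ret and frame is not None:
            return frame
    raise RuntimeError("camera produced no valid frame")

cap = FakeCapture([None, "frame-1"])
print(grab_valid_frame(cap))   # frame-1
```

In the real script this would mean checking the return value of cap.read() (and retrying or bailing out) before calling cv2.cvtColor on the frame.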
https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=186205&p=1184378
Cyberworld 0

Posted July 30, 2006

I found this code here a while ago and have tried to make it work better than my own, but I failed. It seems to work the first two times files are dropped, but then it starts to append the old data to the new one. What is wrong with it? Are there any solutions to this problem?

    #include <GUIConstants.au3>

    GUICreate("Drop Area", 400, 300, -1, -1, -1, $WS_EX_ACCEPTFILES)

    ; The control to receive information
    ; Y=-100 to hide text and HEIGHT+100 to cover whole the window
    $drop = GUICtrlCreateInput("", 0, -100, 400, 400, $WS_DISABLED + $ES_AUTOHSCROLL, 0)
    GUICtrlSetState(-1, $GUI_DROPACCEPTED)

    ;------------------------
    ; Create other controls here
    $main_win = GUICtrlCreateInput("Test", 14, 14, 300, 250, BitOr($WS_VSCROLL, $WS_HSCROLL, $ES_MULTILINE))
    ;------------------------

    GUISetState()
    $msg = 0
    while $msg <> $GUI_EVENT_CLOSE
        $msg = GUIGetMsg()
        if not $msg then
        elseif $msg = $GUI_EVENT_DROPPED then
            if @GUI_DRAGID = -1 then ; File(s) dropped
                $files = GUICtrlRead($drop) ; File list in the form: file1|file2|...
                StringReplace($files, "|", @CR)
                MsgBox(0, 'Dropped', StringReplace($files, "|", "@CR"))
                GUICtrlSetData($main_win, $files, default)
            endif
        endif
    wend
https://www.autoitscript.com/forum/topic/30064-why-does-this-not-work-with-dragn-drop/
Idea is to replace the interrupt handler for each setup screen, which seems to work fine. The initial interrupt handler:

Code:

    self.xpt.int_handler = self.first_press

goes to:

Code:

    def first_press(self, x, y):
        """ If touchscreen is pressed, change status """
        gc.collect()
        # self.display.clear()
        self.touchscreen_pressed = True

and then the asynchronous loop, which by default loops through the display shows (CO2, particles etc), practically checks once per millisecond whether the touchscreen has been pressed and, if the condition is true, activates the setup screen(s). This logic generally works, but I have a hard time getting the interrupt handler to work the first time, meaning that somehow the interrupt is not activated promptly. I may need to press the touchscreen for a few seconds before the interrupt handler activates.

Code:

    async def run_display_loop(self):
        # TODO: Initial welcome screen?
        # await self.show_welcome_screen()
        # NOTE: Loop is started in the main()
        while True:
            if self.touchscreen_pressed is True:
                if self.setup_screen_active is False:
                    # First setup screen
                    self.draw_setup_screen()
                else:
                    # Draw setup screen just once
                    # TODO: screen timeout
                    pass
            elif self.touchscreen_pressed is False:
                rows, rowcolours = await self.show_time_co2_temp_screen()
                await self.show_screen(rows, rowcolours)
                rows, rowcolours = await self.show_particle_screen()
                await self.show_screen(rows, rowcolours)
                rows, rowcolours = await self.show_status_monitor_screen()
                await self.show_screen(rows, rowcolours)
            await asyncio.sleep_ms(1)

I tried to change the screen update so that instead of using await asyncio.sleep I had a counter for each millisecond, which checked whether the touchscreen was pressed (= interrupt handler), but the behaviour was the same. Documentation ... rules.html explains ." and this is true, because if I long-press the touchscreen digitizer, the screen update slows down, but the interrupt handler function is not activated. Is there a better way to implement interrupt handlers, or is my logic wrong?
https://forum.micropython.org/viewtopic.php?f=2&t=9605&p=53828
Lukas Johmann
911 Points

I don't understand the challenge

Hello Treehousers, I'm very confused and don't understand functions and lists enough to complete this challenge. Can someone explain to me the steps to take to complete this challenge?

Challenge Task 1 of 1

OK, I need you to finish writing a function for me. The function disemvowel takes a single word as a parameter and then returns that word at the end. I need you to make it so, inside of the function, all of the vowels ("a", "e", "i", "o", and "u") are removed from the word. Solve this however you want, it's totally up to you! Oh, be sure to look for both uppercase and lowercase vowels!

    def disemvowel(word):
        vowel = "a", "e", "o", "u"
        for vowel in word:
            del(vowel)
        return word

2 Answers

Michal Kozák
1,139 Points

Hey Lukas,

Your code is kinda on the right path, but I'll give you just some hints, so you can try to figure it out for yourself. What you're doing in this exercise is effectively comparing values in two lists, and then not using the letters that are in both of those lists.

- Make your vowels a list using []; you also forgot "i" in there.
- Make a list out of the word argument using the list(word) function, like test_word = list(word)
- Create an empty list for storing the non-vowel letters, like new_word = []
- Run a for loop for each letter in test_word, and test in each loop using an if statement whether the letter is not in the vowels list. If it's not, append that letter to that empty new_word list
- Make a string out of the new_word list using "".join(new_word)

You also need to handle uppercases and lowercases. I did it maybe lazily, by going like this:

    vowels = ["a", "e", "i", "o", "u", "A", "E", "I", "O", "U"]

However, much better would be uppercasing or lowercasing each letter in the if statement, using the .upper() or .lower() function.

Good luck!

Lukas Johmann
911 Points

Hello Michal! Thanks for your answer, I tried going through the steps you recommended.
I am getting a syntax error on line 9. Can you explain what is wrong with my code?

    vowel = ["a", "e", "i", "o", "u"]
    test_word = list(word)
    new_word = []
    for vowel.lower() in test_word:
    if letter.lower() not in vowel:
    letter += new_word
    word.join(new_word)
    def disemvowel(word)
    return word

Michal Kozák
1,139 Points

Hey Lukas! Good going, you almost got it. This should be your code, right?

At the end, you should end up with a code like this:

Hope this works for you, it worked for me in my interpreter.
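The code blocks from Michal's reply ("This should be your code, right?" and "you should end up with a code like this") are missing here. Following the steps he lists above, the finished function would look something like this; this is a reconstruction of mine, not necessarily his exact code:

```python
def disemvowel(word):
    # Vowels in both cases, as in Michal's "lazy" version.
    vowels = ["a", "e", "i", "o", "u", "A", "E", "I", "O", "U"]
    new_word = []
    for letter in list(word):
        if letter not in vowels:     # keep only non-vowel letters
            new_word.append(letter)
    return "".join(new_word)         # turn the list back into a string

print(disemvowel("Treehouse"))   # Trhs
```

The tidier variant he hints at would drop the uppercase entries from the list and test `letter.lower() not in vowels` instead.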
https://teamtreehouse.com/community/i-dont-understand-the-challenge-4
Python Tutorial

Python represents an interpreted general-purpose programming language that draws from object-oriented, imperative, and functional programming. One aspect of Python that makes it attractive is its comprehensive libraries and packages. In this course, the NumPy and SciPy packages will prove most important.

The company Enthought offers a Python distribution that is free for students and educators. Once downloaded, it offers the command-line interpreter ipython that can be used to run the programs needed for this course. When working in ipython, one can call existing functions, define variables, and define new functions. One such function is the type function, which returns the type of any expression. For example,

    in: type(1+1)
    out: <type 'int'>

where the syntax following in: is typed on the command line, and, after pressing enter, the syntax following out: is what is returned by the interpreter. In this case, the output indicates that 1 + 1 is an integer expression.

To define a variable within ipython, simply provide its name, followed by =, followed by an expression whose value the variable will be set to. For example,

    in: s = "Hello " + "World!"
    in: s
    out: 'Hello World!'

assigns variable s to the string "Hello World!". Then s is provided as input, which yields its value as output. Notice here that strings in Python can be written with either single or double quotes.

Boolean Operators: True, False, not, and, or

Bitwise Operators:
    not: ~
    and: &
    or: |
    xor: ^

Some unusual relational operators:
    inequality: <>
    reference equality: is

For example,

    in: a = [1]  # the list containing element 1
    in: b = [1]
    in: a is b
    out: False

since a and b are referencing different lists.

Lists. A list is a comma-delimited list of objects that are enclosed within brackets []. Note that the objects can be of different types. For example,

    in: a = [1, 3.2, 'hello', [2,3,4]]

is a list, where the last element of the list is also a list. Brackets are also used to access particular elements of a list.

    in: a[0]
    out: 1
    in: a[3][2]
    out: 4

Python also makes use of the slice operator, which has three components: start:stop:step. In general, a[x:y:z] is the sublist of a that is equal to [a[x], a[x+z], a[x+2z], ..., a[x+mz]], where m ≥ 0 is the greatest integer for which x + mz < y. Note that if x is omitted, then one starts at the beginning of the list. Similarly, if y is omitted, then the sublist is taken to the end of the list. Finally, if z is omitted, then the step size is set to 1. Also, ::-1 has the effect of reversing the list.

Example 1. Given the list a = [1,2,...,100], use the slice operator to obtain the sublist of a consisting of i) the integers [40,...,50], ii) all even integers of a, and iii) all multiples of 3 that do not exceed 60.

List copying. Given list a, a shallow copy of a is obtained using a[:]. The copy is shallow, since the resulting list is a new list, yet still has the same references to the object elements of a. To obtain a deep copy, one must use the deepcopy() function from Python's copy package.

    in: import copy
    in: copy.deepcopy(a)

We note in passing that the access and slice operators also work for strings and tuples (see below), as do the following sequence operators. Let s and t be sequenced objects (e.g., lists, tuples, or strings), let x and i be objects, and let n be a nonnegative integer. Then

    x in s        true if x is a member of s
    s + t         concatenation of s with t
    s*n           s concatenated with itself n times
    len(s)        length of s
    min(s)        minimum element of s
    max(s)        maximal element of s
    s.index(i)    first index where i occurs in s
    s.count(i)    number of times i occurs in s

Tuples. A tuple is a list that is read-only, and uses parentheses as delimiters instead of brackets.

Dictionaries. A dictionary is a list of pairs of the form {x1: y1, ..., xn: yn}, where xi is a hashable object, called the key, and yi is called the value associated with the key. For example,

    in: d = {'a': 1, 'b': 2, 'c': 3}
    in: d['a']
    out: 1
    in: d.keys()
    out: ['a', 'b', 'c']
    in: d.values()
    out: [1, 2, 3]
    in: d.items()
    out: [('a', 1), ('b', 2), ('c', 3)]

File I/O.

    a = open('filename')        open the file with the given name
    a = open('filename','x')    open with permission x, where x in {a,w,r}
    a.close()                   close file a
    a.read()                    read all of file a
    a.readline()                read next line
    a.readlines()               read in all lines, returned in a list
    a.write(string)             write the given text string to the file

Modules. A module is a resource (usually a file with a .py extension) that contains resources, such as functions, classes, and commands.

    import m             import module m
    from m import r      import resource r from module m
    from m import *      import all resources from module m
    m.r                  apply resource r
    reload(m)            reload module m

To import a module, it must be located in a directory that is listed in path, which is a list resource in the sys module. For example, to import Python modules that are located in /home/tebert/551_book/source, the following steps are taken.

    in: import sys
    in: sys.path.append('/home/tebert/551_book/source')

Control Flow.

if blocks:

    if statement:
        commands
    elif:
        commands
    else:
        commands

for loops:

    for var in set:
        commands
    else:
        commands

while loops:

    while condition:
        commands
    else:
        commands

functions:

    def name(args):
        commands
        return value

Example 2. Implement a Python function that returns the minimum of three real inputs.

    def min(x, y, z):
        # Below is official python documentation
        """computes the minimum of three inputs"""
        # This is a python comment.
        if x <= y:
            if x <= z:
                m = x
            else:
                m = z
        elif y <= z:
            m = y
        else:
            m = z
        return m

Note: changing the function heading to def min(x=1, y=2, z=3): establishes default values that are used in case of missing arguments. The following examples illustrate this.

    in: min()
    out: 1
    in: min(x=7)
    out: 2
    in: min(x=5, y=8)
    out: 3

When using the for loop, the range() function proves useful. Its syntax, range(start, stop, step), is very similar to the slice operator. For example, range(5) yields the list [0,1,2,3,4], while range(10,-5,-2) produces [10,8,6,4,2,0,-2,-4].

The map function. The map() function takes two arguments: a function and a list of function inputs in the form of tuples. It then returns a list of the function outputs that correspond to the inputs. For example, using the min() function defined above yields the following.

    in: map(min, [(1,2,3), (5,3,4), (7,9,4)])
    out: [1, 3, 4]

Another useful function is filter(), whose first argument is a Boolean function, and whose second argument is a list of inputs to the function. It then returns a sublist of all inputs that caused the Boolean to evaluate to true. Consider the following example.

    in: filter(isalpha, ['1', 'a', 'B', '&', 'm'])
    out: ['a', 'B', 'm']

For both of the above functions, if one prefers to use an expression instead of a designated function, then the lambda command proves useful. The syntax for the lambda command is

    lambda x: expr

where expr is an expression (that normally contains x). Thus, lambda may be thought of as a function whose input is x, and whose corresponding output is expr.

Example 3. Use map, range, and lambda to create a list of the first one hundred positive perfect squares.

Defining Python Classes. The syntax for defining a Python class is

    class classname(superclass):
        def __init__(self, args):
            ...
        def functionname(self, args):
            ...

Example 4. Define the point class that represents a two-dimensional cartesian point. Add two methods: length(), for computing the length of the point when viewed as a vector, and distance(p), for computing the distance from another point.

    class point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def length(self):
            return math.sqrt(self.x**2 + self.y**2)

        def distance(self, p):
            return math.sqrt((self.x - p.x)**2 + (self.y - p.y)**2)

The following creates point objects and computes their distance from each other.

    in: p1 = point.point(2,4)
    in: p2 = point.point(2,5)
    in: p1.distance(p2)
    out: 1

Arrays. The array is the basic data structure that is used for numerical work. It can be of any dimension, and its elements must all be of the same type: either Boolean (dtype=bool), integer (dtype=int), real (dtype=float), or complex (dtype=complex). Arrays can be constructed using lists.

    in: a = array([3,-1,2])
    in: a2 = array([[1,0,0],[0,1,0],[0,0,1]])

Methods for Array Creation. The following functions can be used for creating arrays.

    arange(n)         creates the array [0,1,...,n-1]
    arange(i,j,k)     [i, i+k, i+2k, ..., i+mk], mk < j, (m+1)k >= j
    ones(n)           1-dimensional array of size n [1.,1.,...,1.]
    ones((m,n))       m by n matrix of ones
    zeros             similar to ones
    eye(n)            identity matrix of size n
    eye(m,n)          identity matrix with extra columns (rows) padded with zeros
    linspace(i,j,k)   array starting at i, ending at j, and having k equal-spaced elements
    r_[]              row concatenation
    c_[]              column concatenation

More Array Functions.

    ndim(a)               number of dimensions of a
    size(a)               number of elements of a
    shape(a)              returns a tuple that gives the dimensions of a
    reshape(a,t)          returns the reshaping of a according to tuple t;
                          using -1 in a component of t means "as many as needed"
    ravel(a)              returns flattened 1d version of a
    transpose(a)          matrix transpose
    a[::-1]               reverse the elements of each dimension
    a.min(), a.max()      minimum and maximum elements of a
    a.sum()               sum of all elements of a
    a.sum(axis=0)         returns an array of column sums
    a.sum(axis=1)         returns an array of row sums
    a+b, a-b              matrix addition/subtraction
    a*b, a/b              element-wise mult./division
    c*A, A/c              scalar multiplication/division
    dot(a,b)              matrix multiplication
    pow(a,n)              returns a with elements raised to nth power
    pow(n,a)              computes number raised to matrix elements
    where(condition(a))   returns an array of all indices for which condition(a) is true
    where(cond(a),i,j)    returns an array with same dimensions as a, with i in entries
                          where cond(a) is true, and j in other entries

Example 5. Given matrix a, write code that allows for matrix b to be defined as a random permutation of the rows (columns) of a.

Example 6. Define a 4×3 matrix a using random integers, and from that create a new boolean matrix b which has a 1 wherever a is positive, and a zero otherwise.

Example 7. Define a 12×3 matrix a, and define 4×3 matrices b, c, and d, where b consists of rows 0,3,6,9 of a, c consists of rows 1,4,7,10 of a, and d consists of rows 2,5,8,11 of a.

Example 8. Show that, when first reversing the rows of the identity matrix, followed by reversing the elements of each row, then the original matrix results.

Example 9. Given two n-dimensional arrays x and y, compute exp((x − y)^2/3).
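Possible worked answers for two of the exercises above, Example 1 (slices) and Example 6 (boolean matrix), as a sketch; these are my solutions, not the tutorial's:

```python
import numpy as np

# Example 1: a = [1, 2, ..., 100]
a = list(range(1, 101))
forties = a[39:50]        # i)   the integers [40, ..., 50]
evens = a[1::2]           # ii)  all even integers of a
threes = a[2:60:3]        # iii) multiples of 3 not exceeding 60

# Example 6: 4x3 random integer matrix -> boolean 0/1 matrix
m = np.random.randint(-5, 6, size=(4, 3))
b = np.where(m > 0, 1, 0)   # 1 where m is positive, 0 otherwise
```

Note that the slice indices are offset by one from the values, since a[0] holds the value 1.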
https://www.techylib.com/en/view/adventurescold/python_tutorial
Most Android devices don't have a physical keyboard. Instead, they rely on a virtual or soft keyboard to accept user input, establishing connections between the keyboard and input fields. In this tutorial, you will learn how to create a fully functional soft keyboard that can serve as your Android device's default keyboard.

Premium Option

If you're in a hurry, check out Android Keyboard Themes, a ready-to-use solution from Envato Market. The app gives you the flexibility to choose one of the 22 built-in keyboard themes or create your own custom theme.

Or you could hire a freelancer on Envato Studio. Just browse through our Mobile & Apps section and you're sure to find an expert who can help you.

If you prefer to build your own, read on to find out how.

1. Prerequisites

You will need the Eclipse ADT Bundle installed. You can download it from the Android Developer website.

2. Create a New Project

Fire up Eclipse and create a new Android application. Call this application SimpleKeyboard. Make sure you choose a unique package name. Set the minimum required SDK to Android 2.2 and set the target SDK to Android 4.4. This app will have no activities, so deselect Create Activity and click Finish.

3. Edit the Manifest

A soft keyboard is considered an Input Method Editor (IME) by the Android operating system. An IME is declared as a Service in AndroidManifest.xml that uses the BIND_INPUT_METHOD permission and responds to the action android.view.InputMethod. Add the following lines to the application tag of the manifest:

<service android:name=".SimpleIME"
    android:label="@string/simple_ime"
    android:permission="android.permission.BIND_INPUT_METHOD">
    <meta-data
        android:name="android.view.im"
        android:resource="@xml/method" />
    <intent-filter>
        <action android:name="android.view.InputMethod" />
    </intent-filter>
</service>

4. Create method.xml

The service tag in the manifest file contains a meta-data tag that references an XML file named method.xml. Without this file, the Android operating system won't recognize our Service as a valid IME service. The file contains details about the input method and its subtypes.
For our keyboard, we define a single subtype for the en_US locale. Create the directory res/xml if it doesn't exist, and add the file method.xml to it. The contents of the file should be:

<?xml version="1.0" encoding="utf-8"?>
<input-method xmlns:android="http://schemas.android.com/apk/res/android">
    <subtype
        android:label="@string/subtype_en_US"
        android:imeSubtypeLocale="en_US"
        android:imeSubtypeMode="keyboard" />
</input-method>

5. Edit strings.xml

The strings that this app uses are defined in the res/values/strings.xml file. We're going to need three strings:

- the name of the app
- the label of the IME
- the label of the IME's subtype

Update your strings.xml so that it has the following contents:

<resources>
    <string name="app_name">SimpleKeyboard</string>
    <string name="simple_ime">Simple IME</string>
    <string name="subtype_en_US">English (US)</string>
</resources>

6. Define the Keyboard Layout

The layout of our keyboard contains only a KeyboardView. The layout_alignParentBottom attribute is set to true so that the keyboard appears at the bottom of the screen. Create a file named res/layout/keyboard.xml and replace its contents with the following:

<?xml version="1.0" encoding="UTF-8"?>
<android.inputmethodservice.KeyboardView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/keyboard"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_alignParentBottom="true"
    android:keyPreviewLayout="@layout/preview" />

The keyPreviewLayout is the layout of the short-lived pop-up that shows up whenever a key on the keyboard is pressed. It contains a single TextView. Create a file named res/layout/preview.xml and add the following to it:

<?xml version="1.0" encoding="utf-8"?>
<TextView xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">
</TextView>

6. Define the Keyboard Keys

The details of the keyboard keys and their positions are specified in an XML file. Every key has the following attributes:

keyLabel: This attribute contains the text that is displayed on the key.
codes: This attribute contains the unicode values of the characters that the key represents.

For example, to define a key for the letter A, the codes attribute should have the value 97 and the keyLabel attribute should be set to A. If more than one code is associated with a key, then the character that the key represents will depend on the number of taps the key receives.
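A side note of mine, not part of the tutorial: the values given to the codes attribute are ordinary Unicode code points, so they can be looked up for any character, for instance in Python:

```python
# android:codes values are plain Unicode code points.
# ord() returns the code point for a character; chr() goes the other way.
for ch in ["a", "A", "?", "!", ":"]:
    print(ch, ord(ch))
# a 97
# A 65
# ? 63
# ! 33
# : 58
```

Running this shows where values like 97 come from, and lets you compute the codes entry for any key you add.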
For example, if a key has the codes 63, 33, and 58:

- a single tap on the key results in the character ?
- two taps in quick succession results in the character !
- three taps in quick succession results in the character :

A key can also have a few optional attributes:

- keyEdgeFlags: This attribute can take the value left or right. This attribute is usually added to the leftmost and rightmost keys of a row.
- keyWidth: This attribute defines the width of a key. It's usually defined as a percentage value.
- isRepeatable: If this attribute is set to true, long-pressing the key will repeat the action of the key multiple times. It is usually set to true for the delete and spacebar keys.

The keys of a keyboard are grouped as rows. It's good practice to limit the number of keys on a row to a maximum of ten, with each key having a width equal to 10% of the keyboard. The height of the keys is set to 60dp in this tutorial. This value can be adjusted, but values less than 48dp are not recommended. Our keyboard will have five rows of keys.

We can now go ahead and design the keyboard. Create a new file named res/xml/qwerty.xml and replace its contents with the following. The first row is shown here; the remaining four rows follow the same pattern:

```xml
<Keyboard xmlns:android="http://schemas.android.com/apk/res/android"
    android:keyWidth="10%p"
    android:horizontalGap="0px"
    android:verticalGap="0px"
    android:keyHeight="60dp">

    <Row>
        <Key android:codes="113" android:keyLabel="q" android:keyEdgeFlags="left" />
        <Key android:codes="119" android:keyLabel="w" />
        <Key android:codes="101" android:keyLabel="e" />
        <Key android:codes="114" android:keyLabel="r" />
        <Key android:codes="116" android:keyLabel="t" />
        <Key android:codes="121" android:keyLabel="y" />
        <Key android:codes="117" android:keyLabel="u" />
        <Key android:codes="105" android:keyLabel="i" />
        <Key android:codes="111" android:keyLabel="o" />
        <Key android:codes="112" android:keyLabel="p" android:keyEdgeFlags="right" />
    </Row>
    <!-- The remaining rows (a-l, shift/z-m/delete, and the
         symbol/space/done row) follow the same pattern, using
         negative codes for the special keys. -->
</Keyboard>
```

You may have noticed that some keys have negative values for the codes attribute. Negative values are equal to predefined constants in the Keyboard class. For example, the value -5 is equal to the value of Keyboard.KEYCODE_DELETE.

8. Create a Service Class

Create a new Java class and call it SimpleIME.java. The class should extend the InputMethodService class and implement the OnKeyboardActionListener interface. The OnKeyboardActionListener interface contains the methods that are called when keys of the soft keyboard are tapped or pressed.
The SimpleIME class should have three member variables:

- a KeyboardView referencing the view defined in the layout
- a Keyboard instance that is assigned to the KeyboardView
- a boolean telling us if the caps lock is enabled

After declaring these variables and adding the methods of the OnKeyboardActionListener interface, the SimpleIME class should look like this:

```java
public class SimpleIME extends InputMethodService
    implements OnKeyboardActionListener {

    private KeyboardView kv;
    private Keyboard keyboard;
    private boolean caps = false;

    @Override
    public void onKey(int primaryCode, int[] keyCodes) {
    }

    @Override
    public void onPress(int primaryCode) {
    }

    @Override
    public void onRelease(int primaryCode) {
    }

    @Override
    public void onText(CharSequence text) {
    }

    @Override
    public void swipeDown() {
    }

    @Override
    public void swipeLeft() {
    }

    @Override
    public void swipeRight() {
    }

    @Override
    public void swipeUp() {
    }
}
```

When the keyboard is created, the onCreateInputView method is called. All the member variables of the Service can be initialized here. Update the implementation of the onCreateInputView method as shown below:

```java
@Override
public View onCreateInputView() {
    kv = (KeyboardView) getLayoutInflater().inflate(R.layout.keyboard, null);
    keyboard = new Keyboard(this, R.xml.qwerty);
    kv.setKeyboard(keyboard);
    kv.setOnKeyboardActionListener(this);
    return kv;
}
```

Next, we create a method that plays a sound when a key is pressed. We use the AudioManager class to play the sounds. The Android SDK includes a few default sound effects for key presses, and those are used in the playClick method.
```java
private void playClick(int keyCode) {
    AudioManager am = (AudioManager) getSystemService(AUDIO_SERVICE);
    switch (keyCode) {
        case 32:
            am.playSoundEffect(AudioManager.FX_KEYPRESS_SPACEBAR);
            break;
        case Keyboard.KEYCODE_DONE:
        case 10:
            am.playSoundEffect(AudioManager.FX_KEYPRESS_RETURN);
            break;
        case Keyboard.KEYCODE_DELETE:
            am.playSoundEffect(AudioManager.FX_KEYPRESS_DELETE);
            break;
        default:
            am.playSoundEffect(AudioManager.FX_KEYPRESS_STANDARD);
    }
}
```

Finally, update the onKey method so that our keyboard app can communicate with the input fields (usually EditText views) of other applications.

The getCurrentInputConnection method is used to get a connection to the input field of another application. Once we have the connection, we can use the following methods:

- commitText to add one or more characters to the input field
- deleteSurroundingText to delete one or more characters of the input field
- sendKeyEvent to send events, like KEYCODE_ENTER, to the external application

Whenever a user presses a key on the soft keyboard, the onKey method is called with the unicode value of the key as one of its parameters. Based on this value, the keyboard performs one of the following actions:

- If the code is KEYCODE_DELETE, one character to the left of the cursor is deleted using the deleteSurroundingText method.
- If the code is KEYCODE_DONE, a KEYCODE_ENTER key event is fired.
- If the code is KEYCODE_SHIFT, the value of the caps variable is changed and the shift state of the keyboard is updated using the setShifted method. The keyboard needs to be redrawn when the state changes so that the labels of the keys are updated. The invalidateAllKeys method is used to redraw all keys.
- For all other codes, the code is simply converted into a character and sent to the input field. If the code represents a letter of the alphabet and the caps variable is set to true, then the character is converted to uppercase.
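The last bullet reduces to a small pure function. The following helper is hypothetical (it is not part of the tutorial's SimpleIME class; it is only extracted here for illustration), but it mirrors the default branch and makes the caps behaviour easy to check in isolation:

```java
public class KeyText {
    // Mirrors the default branch of onKey: convert the code to a char and
    // upper-case it only when it is a letter and caps lock is on.
    static String keyToText(int primaryCode, boolean caps) {
        char code = (char) primaryCode;
        if (Character.isLetter(code) && caps) {
            code = Character.toUpperCase(code);
        }
        return String.valueOf(code);
    }

    public static void main(String[] args) {
        System.out.println(keyToText(97, false)); // a
        System.out.println(keyToText(97, true));  // A
        System.out.println(keyToText(63, true));  // ? (caps has no effect on symbols)
    }
}
```

A key with code 97 commits "a" normally and "A" while caps is on; non-letter codes such as 63 (?) pass through unchanged.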
Update the onKey method so that it looks like this:

```java
@Override
public void onKey(int primaryCode, int[] keyCodes) {
    InputConnection ic = getCurrentInputConnection();
    playClick(primaryCode);
    switch (primaryCode) {
        case Keyboard.KEYCODE_DELETE:
            ic.deleteSurroundingText(1, 0);
            break;
        case Keyboard.KEYCODE_SHIFT:
            caps = !caps;
            keyboard.setShifted(caps);
            kv.invalidateAllKeys();
            break;
        case Keyboard.KEYCODE_DONE:
            ic.sendKeyEvent(new KeyEvent(KeyEvent.ACTION_DOWN, KeyEvent.KEYCODE_ENTER));
            break;
        default:
            char code = (char) primaryCode;
            if (Character.isLetter(code) && caps) {
                code = Character.toUpperCase(code);
            }
            ic.commitText(String.valueOf(code), 1);
    }
}
```

9. Testing the Keyboard

The soft keyboard is now ready to be tested. Compile and run it on an Android device. This app doesn't have an Activity, which means that it won't show up in the launcher. To use it, it should first be activated in the device's Settings.

After activating Simple IME, open any app that allows text input (for example, any messaging app) and click on one of its input fields. You should see a keyboard icon appear in the notifications area. Depending on your device, you can either click on that icon or drag the notification bar down and select Simple IME as the input method. You should now be able to type using your new keyboard.

Conclusion

In this tutorial, you have learned how to create a custom keyboard app from scratch. To change the look and feel of your keyboard, all you have to do is add extra styling to the res/layout/keyboard.xml and res/layout/preview.xml files. To change the positions of the keys, update the res/xml/qwerty.xml file. To add more features to your keyboard, refer to the developer documentation.
https://code.tutsplus.com/tutorials/create-a-custom-keyboard-on-android--cms-22615
Clojure is a functional language. Functions are first-class and can be passed to or returned from other functions. Most Clojure code consists primarily of pure functions (no side effects), so invoking one with the same inputs yields the same output.

defn defines a named function:

```clojure
;;    name   params         body
;;    -----  ------  -------------------
(defn greet  [name]  (str "Hello, " name) )
```

This function has a single parameter, name, however you may include any number of parameters in the params vector.

Invoke a function with the name of the function in "function position" (the first element of a list):

```clojure
user=> (greet "students")
"Hello, students"
```

Multi-arity functions

Functions can be defined to take different numbers of parameters (different "arity"). Different arities must all be defined in the same defn - using defn more than once will replace the previous function.

Each arity is a list of the form ([param*] body*). One arity can invoke another. The body can contain any number of expressions, and the return value is the result of the last expression.

```clojure
(defn messenger
  ([]    (messenger "Hello world!"))
  ([msg] (println msg)))
```

This function declares two arities (0 parameters and 1 parameter). The 0-parameter arity calls the 1-parameter arity with a default value to print. We invoke these functions by passing the appropriate number of arguments:

```clojure
user=> (messenger)
Hello world!
nil

user=> (messenger "Hello class!")
Hello class!
nil
```

Variadic functions

Functions may also define a variable number of parameters - this is known as a "variadic" function. The variable parameters must occur at the end of the parameter list. They will be collected in a sequence for use by the function. The beginning of the variable parameters is marked with &.

```clojure
(defn hello [greeting & who]
  (println greeting who))
```

This function takes a parameter greeting and a variable number of parameters (0 or more) that will be collected in a list named who.
We can see this by invoking it with 3 arguments:

```clojure
user=> (hello "Hello" "world" "class")
Hello (world class)
```

You can see that when println prints who, it is printed as a list of the two elements that were collected.

Anonymous functions

An anonymous function can be created with fn:

```clojure
;;    params         body
;;   ---------  -----------------
(fn  [message]  (println message) )
```

Because the anonymous function has no name, it cannot be referred to later. Rather, the anonymous function is typically created at the point it is passed to another function.

Or it's possible to immediately invoke it (this is not a common usage):

```clojure
;;     operation (function)             argument
;; --------------------------------  --------------
(  (fn [message] (println message))  "Hello world!" )

;; Hello world!
```

Here we defined the anonymous function in the function position of a larger expression that immediately invokes the expression with the argument.

Many languages have both statements, which imperatively do something and do not return a value, and expressions, which do. Clojure has only expressions that return a value. We'll see later that this includes even flow control expressions like if.

defn vs fn

It might be useful to think of defn as a contraction of def and fn. The fn defines the function and the def binds it to a name. These are equivalent:

```clojure
(defn greet [name] (str "Hello, " name))

(def greet (fn [name] (str "Hello, " name)))
```

Anonymous function syntax

There is a shorter form for the fn anonymous function syntax implemented in the Clojure reader: #(). This syntax omits the parameter list and names parameters based on their position.

- % is used for a single parameter
- %1, %2, %3, etc are used for multiple parameters
- %& is used for any remaining (variadic) parameters

Nested anonymous functions would create an ambiguity as the parameters are not named, so nesting is not allowed.
```clojure
;; Equivalent to: (fn [x] (+ 6 x))
#(+ 6 %)

;; Equivalent to: (fn [x y] (+ x y))
#(+ %1 %2)

;; Equivalent to: (fn [x y & zs] (println x y zs))
#(println %1 %2 %&)
```

One common need is an anonymous function that takes an element and wraps it in a vector. You might try writing that as:

```clojure
;; DO NOT DO THIS
#([%])
```

This anonymous function expands to the equivalent:

```clojure
(fn [x] ([x]))
```

This form will wrap in a vector and try to invoke the vector with no arguments (the extra pair of parentheses). Instead:

```clojure
;; Instead do this:
#(vector %)

;; or this:
(fn [x] [x])

;; or most simply just the vector function itself:
vector
```

apply

The apply function invokes a function with 0 or more fixed arguments, and draws the rest of the needed arguments from a final sequence. The final argument must be a sequence.

```clojure
(apply f '(1 2 3 4))    ;; same as  (f 1 2 3 4)
(apply f 1 '(2 3 4))    ;; same as  (f 1 2 3 4)
(apply f 1 2 '(3 4))    ;; same as  (f 1 2 3 4)
(apply f 1 2 3 '(4))    ;; same as  (f 1 2 3 4)
```

All 4 of these calls are equivalent to (f 1 2 3 4). apply is useful when arguments are handed to you as a sequence but you must invoke the function with the values in the sequence.

For example, you can use apply to avoid writing this:

```clojure
(defn plot [shape coords]   ;; coords is [x y]
  (plotxy shape (first coords) (second coords)))
```

Instead you can simply write:

```clojure
(defn plot [shape coords]
  (apply plotxy shape coords))
```

let

let binds symbols to values in a "lexical scope". A lexical scope creates a new context for names, nested inside the surrounding context. Names defined in a let take precedence over the names in the outer context.

```clojure
;;      bindings     name is defined here
;;    ------------  ----------------------
(let  [name value]  (code that uses name))
```

Each let can define 0 or more bindings and can have 0 or more expressions in the body.

```clojure
(let [x 1
      y 2]
  (+ x y))
```

This let expression creates two local bindings for x and y. The expression (+ x y) is in the lexical scope of the let and resolves x to 1 and y to 2.
Outside the let expression, x and y will have no continued meaning, unless they were already bound to a value.

```clojure
(defn messenger [msg]
  (let [a 7
        b 5
        c (clojure.string/capitalize msg)]
    (println a b c)
  ) ;; end of let scope
) ;; end of function
```

The messenger function takes a msg argument. Here the defn is also creating lexical scope for msg - it only has meaning within the messenger function.

Within that function scope, the let creates a new scope to define a, b, and c. If we tried to use a after the let expression, the compiler would report an error.

Closures

The fn special form creates a "closure". It "closes over" the surrounding lexical scope (like msg, a, b, or c above) and captures their values beyond the lexical scope.

```clojure
(defn messenger-builder [greeting]
  (fn [who] (println greeting who))) ; closes over greeting

;; greeting provided here, then goes out of scope
(def hello-er (messenger-builder "Hello"))

;; greeting value still available because hello-er is a closure
(hello-er "world!")
;; Hello world!
```

Exercises

1) Define a function greet that takes no arguments and prints "Hello". Replace the ___ with the implementation:

```clojure
(defn greet [] _)
```

2) Redefine greet using def, first with the fn special form and then with the #() reader macro.

```clojure
;; using fn
(def greet __)

;; using #()
(def greet __)
```

3) Define a function greeting which:

- Given no arguments, returns "Hello, World!"
- Given one argument x, returns "Hello, x!"
- Given two arguments x and y, returns "x, y!"

```clojure
;; Hint: use the str function to concatenate strings
(doc str)

(defn greeting ___)

;; For testing
(assert (= "Hello, World!" (greeting)))
(assert (= "Hello, Clojure!" (greeting "Clojure")))
(assert (= "Good morning, Clojure!" (greeting "Good morning" "Clojure")))
```

4) Define a function do-nothing which takes a single argument x and returns it, unchanged.

```clojure
(defn do-nothing [x] ___)
```

In Clojure, this is the identity function. By itself, identity is not very useful, but it is sometimes necessary when working with higher-order functions.
```clojure
(source identity)
```

5) Define a function always-thing which takes any number of arguments, ignores all of them, and returns the number 100.

```clojure
(defn always-thing [__] ___)
```

6) Define a function make-thingy which takes a single argument x. It should return another function, which takes any number of arguments and always returns x.

```clojure
(defn make-thingy [x] ___)

;; Tests
(let [n (rand-int Integer/MAX_VALUE)
      f (make-thingy n)]
  (assert (= n (f)))
  (assert (= n (f 123)))
  (assert (= n (apply f 123 (range)))))
```

In Clojure, this is the constantly function.

```clojure
(source constantly)
```

7) Define a function triplicate which takes another function and calls it three times, without any arguments.

```clojure
(defn triplicate [f] ___)
```

8) Define a function opposite which takes a single argument f. It should return another function which takes any number of arguments, applies f on them, and then calls not on the result. The not function in Clojure does logical negation.

```clojure
(defn opposite [f]
  (fn [& args] ___))
```

In Clojure, this is the complement function.

```clojure
(defn complement
  "Takes a fn f and returns a fn that takes the same arguments as f,
  has the same effects, if any, and returns the opposite truth value."
  [f]
  (fn
    ([] (not (f)))
    ([x] (not (f x)))
    ([x y] (not (f x y)))
    ([x y & zs] (not (apply f x y zs)))))
```

9) Define a function triplicate2 which takes another function and any number of arguments, then calls that function three times on those arguments. Re-use the function you defined in the earlier triplicate exercise.

```clojure
(defn triplicate2 [f & args]
  (triplicate ___))
```

10) Using the java.lang.Math class (Math/pow, Math/cos, Math/sin, Math/PI), demonstrate the following mathematical facts:

- The cosine of pi is -1
- For some x, sin(x)^2 + cos(x)^2 = 1

11) Define a function that takes an HTTP URL as a string, fetches that URL from the web, and returns the content as a string.

Hint: Use the java.net.URL class and its openStream method. Then use the Clojure slurp function to get the content as a string.
```clojure
(defn http-get [url]
  ___)

(assert (.contains (http-get "") "html"))
```

In fact, the Clojure slurp function interprets its argument as a URL first before trying it as a file name. Write a simplified http-get:

```clojure
(defn http-get [url]
  ___)
```

12) Define a function one-less-arg that takes two arguments:

- f, a function
- x, a value

and returns another function which calls f on x plus any additional arguments.

```clojure
(defn one-less-arg [f x]
  (fn [& args] ___))
```

In Clojure, the partial function is a more general version of this.

13) Define a function two-fns which takes two functions as arguments, f and g. It returns another function which takes one argument, calls g on it, then calls f on the result, and returns that. That is, your function returns the composition of f and g.

```clojure
(defn two-fns [f g]
  ___)
```
https://clojure.org/guides/learn/functions
Asked by: the connection with the debugger has been lost. Question - User772 posted Hi, I can't debug any of my apps anymore either with Xamarin Studio or Visual Studio. The app deploys and the debugger starts for a second and then fails with "the connection with the debugger has been lost" I've recently rebuild my PC and I've got a new phone S4, so I'm not sure if it's a PC or phone issue. Can I find out what is causing this?? Thanks JohnTuesday, May 14, 2013 9:16 AM All replies - User8267 posted I'm having the same problem with the Galaxy S4. When I try to deploy an app (even the default app -> create new android app), it deploys but crashes immediately without giving any information in the debug or xamarin log..Tuesday, May 14, 2013 9:26 AM - User772 posted I've attached a capture of my log. Can someone at support look at this issue please? Do we need a new S4 driver or something like that??Tuesday, May 14, 2013 9:30 AM - User48 posted @JohnWood: This message usually implies that your app doesn't have the INTERNETpermission: Error accepting stdout and stderr Are you trying to debug a Release build?Tuesday, May 14, 2013 3:25 PM - User48 posted @ChilaxX: Can you provide your full Android debug log output?Tuesday, May 14, 2013 3:26 PM - User772 posted Some more background. I'm using the standard template with a small change to make debugging easier (see screenshot) This works fine on a HTC One X on my Main PC and my laptop. Both in Xamarin Studio and Visual Studio. Both Win 8 Pro 64bit with latest stable Xamarin. When debugging on S4 (latest USB driver from Kies) it crashes. Just saw this in the output from debug, if it helps?? Resolved pending breakpoint at 'c:\Crapdump\AndroidApplication1\AndroidApplication1\Activity1.cs:34' to Void AndroidApplication1.Activity1:button_Click (Object, EventArgs) [0x00001]. The program 'Mono' has exited with code 0 (0x0). 
Mono.AndroidTools.AndroidLogger Error: 0 : [E:]: Error in device tracker System.AggregateException: One or more errors occurred. --->) --- End of inner exception stack trace --- ---> (Inner Exception #0))<---Tuesday, May 14, 2013 3:38 PM - User772 posted I've now moved on to the Beta version. As soon as it gets to the breakpointed code it crashes with this output. 05-14 17:24:53.788 I/Adreno200-EGL(23447): Reconstruct Branch: 05-14 17:24:53.828 D/OpenGLRenderer(23447): Enabling debug mode 0 05-14 17:24:54.769 D/GestureDetector(23447): [Surface Touch Event] mSweepDown False, mLRSDCnt : -1 mTouchCnt : 6 mFalseSizeCnt:0 05-14 17:24:55.459 D/GestureDetector(23447): [Surface Touch Event] mSweepDown False, mLRSDCnt : -1 mTouchCnt : 5 mFalseSizeCnt:0 05-14 17:24:55.619 D/GestureDetector(23447): [Surface Touch Event] mSweepDown False, mLRSDCnt : -1 mTouchCnt : 11 mFalseSizeCnt:0 05-14 17:24:55.709 D/GestureDetector(23447): [Surface Touch Event] mSweepDown False, mLRSDCnt : -1 mTouchCnt : 16 mFalseSizeCnt:0 05-14 17:24:55.820 D/GestureDetector(23447): [Surface Touch Event] mSweepDown False, mLRSDCnt : -1 mTouchCnt : 21 mFalseSizeCnt:0 05-14 17:24:55.940 D/GestureDetector(23447): [Surface Touch Event] mSweepDown False, mLRSDCnt : -1 mTouchCnt : 24 mFalseSizeCnt:0 05-14 17:24:59.703 D/GestureDetector(23447): [Surface Touch Event] mSweepDown False, mLRSDCnt : -1 mTouchCnt : 4 mFalseSizeCnt:0 05-14 17:24:59.703 E/mono-rt (23447): Stacktrace: 05-14 17:24:59.703 E/mono-rt (23447): 05-14 17:24:59.703 E/mono-rt (23447): 05-14 17:24:59.703 E/mono-rt (23447): ================================================================= 05-14 17:24:59.703 E/mono-rt (23447): Got a SIGSEGV while executing native code. This usually indicates 05-14 17:24:59.703 E/mono-rt (23447): a fatal error in the mono runtime or one of the native libraries 05-14 17:24:59.703 E/mono-rt (23447): used by your application. 
05-14 17:24:59.703 E/mono-rt (23447): ================================================================= 05-14 17:24:59.703 E/mono-rt (23447): The program 'Mono' has exited with code 0 (0x0).Tuesday, May 14, 2013 4:27 PM - User772 posted @jonp do you know of anyone else having an s4 debug correctly. This is critical I get this working by tomorrow. At the moment I'm using a virtual device which is a joke really considering I have one of the mainstream devices on my desk. My app involves taking pix and video so I'm very restricted at the moment. If it works for someone else then I'll be happy to factory reset the phone. Pretty sure its not my PC configurations. Laptop has worked on other devices and I rebuilt my main PC only a few days ago. Nothing special on it, win 8 visual studio 2012 and the default installation of xamarin. Please help......Tuesday, May 14, 2013 6:34 PM - User772 posted Last update. The debugger will connect on the simple project in stuff on the Oncreate. When it breakpoints on an Event Handler it will crash (as in the screenshot of the code I posted earlier).Tuesday, May 14, 2013 9:00 PM - User48 posted @JohnWood: Which Xamarin.Android version is this, stable 4.6.4 or beta 4.7.4? I've not heard of anyone having success debugging on a S4. I hadn't heard of any issues with the S4 until today, so what I've heard may not mean anything. Probably entirely unrelated, we have found an issue between the just released Xamarin Studio 4.0.4 and Xamarin.Android 4.6.4, which we'll be fixing in a quickly forthcoming 4.6.5.Wednesday, May 15, 2013 1:20 AM - User772 posted @jonp I moved up to the latest beta in the hope that would solve my issues. Will this fix you are doing on 4.6.5 make it into the next 4.7 beta or will I need to downgrade? I don't participially care which at present, I just need it to work.Wednesday, May 15, 2013 7:09 AM - User772 posted I downgraded to 4.6.05. 
Same issue, S4 app and debugger will bomb out on the first breakpoint :-( Please can you get someone to look at it asap. The S4 is going to be the top android device for most people and Xamarin needs to work for it.

Wednesday, May 15, 2013 2:39 PM

User2490 posted:

Same report from here. I've traced down the crash to usually happening when the code is about to access a static property or method in a new thread. Sometimes it goes on for a few more steps after that though. Nothing really from mono in the logcat (once or twice I got an empty stacktrace output though), and only this seems related to the crash:

```
I/ActivityManager(  765): Process com.XXXXXX.XXXXXXXXXX (pid 24001) (adj 0) has died.
W/ContextImpl(  765): Calling a method in the system process without a qualified user: android.app.ContextImpl.sendBroadcast:1323 com.android.server.am.ActivityManagerService.cleanUpApplicationRecordLocked:12344 com.android.server.am.ActivityManagerService.handleAppDiedLocked:3566 com.android.server.am.ActivityManagerService.appDiedLocked:3670 com.android.server.am.ActivityManagerService$AppDeathRecipient.binderDied:981
W/ActivityManager(  765): Force removing ActivityRecord{423caeb8 u0 com.XXXXXX.XXXXXXXXXXXXXX/XXXXXX.XXXXXXXXXXXXXX.ui.android.SplashView}: app died, no saved state
[...Launcher.HomeView stuff...]
D/Zygote  (  217): Process 24001 terminated by signal (11)
```

Don't know if that helps =]

Thursday, May 16, 2013 1:38 PM

User772 posted:

@jonp Any more news on this? Are you guys working on a fix? FYI, I've rooted my S4 and tried Wifi ADB. Same result, the debugger crashes when hitting an event.

Sunday, May 19, 2013 1:14 PM

User12 posted:

Hey all, I'm able to debug here on a Galaxy S4 running Android 4.2.2, using the current stable Xamarin.Android 4.6.6. Is anyone still able to reproduce on 4.6.6? Any further information you can provide would be very appreciated, especially for those of you who are saying it only occurs after certain steps.
Beyond that, full version information (including for the device itself), androidtools logs [0], logcat output [1], and any applicable screenshots or screencasts would be very helpful in getting to the bottom of this. Thanks! PJ

[0] androidtools logs can be found in the Xamarin Studio log directory. Easy access to the log directory is Xamarin Studio -> Help -> Open Log Directory

[1]

May 23, 2013 6:15 PM

User772 posted:

Here are my exact steps to reproduce. 4.6.06000 - New Android ICS Application. Change the code to do this:

```csharp
using System;
using Android.App;
using Android.Content;
using Android.Runtime;
using Android.Views;
using Android.Widget;
using Android.OS;

namespace AndroidApplication1
{
    [Activity(Label = "AndroidApplication1", MainLauncher = true, Icon = "@drawable/icon")]
    public class Activity1 : Activity
    {
        int count = 1;
        Button button;

        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);

            // Set our view from the "main" layout resource
            SetContentView(Resource.Layout.Main);

            // Get our button from the layout resource,
            // and attach an event to it
            button = FindViewById<Button>(Resource.Id.MyButton);
            button.Click += button_Click;
        }

        void button_Click(object sender, EventArgs e)
        {
            button.Text = "clicked";
        }
    }
}
```

- Put a breakpoint on the "button.Text = "clicked";" in the click event.
- Run in debug.
- Click the button.

== debugger disconnects and app closes.

Also please note that using a breakpoint on the base.OnCreate will work and step through ok. It's when the event is fired it crashes. This is the same when using timers and threads. Attached are log files etc.

Friday, May 24, 2013 7:42 AM

User772 posted:

Please please please can you fix this.
I'm now having to borrow friends' phones so I can debug NFC issues.

Tuesday, May 28, 2013 7:11 AM

User772 posted:

If one of the team want to remote into my PC, I'm happy to allow that if it gets to the bottom of the issue quicker.

Tuesday, May 28, 2013 7:12 AM

User13403 posted:

I'm seeing similar issues with a Samsung Galaxy S4 and Xamarin.Android 4.6.7 + Visual Studio 2010. The App runs fine when I'm not debugging, and the same App debugs fine on a Galaxy S3

Friday, May 31, 2013 7:33 AM

User210 posted:

I'm having exactly the same problem with a SG4 and v 4.6.7 (vs2012 and xamarin studio)

Friday, May 31, 2013 8:14 AM

User1833 posted:

Reporting in with the same issue on my S4 with both Xamarin.Android 4.6.7 and the latest beta 4.7. The app will run fine without the debugger, and it will actually open with the debugger attached, but on the first EventHandler, it disconnects. I can provide any more details if it will help.

Friday, May 31, 2013 3:28 PM

User13793 posted:

Same problem on the Xiaomi MI2S. Version: 4.6.7

Tuesday, June 4, 2013 2:09 PM

User210 posted:

bump

Wednesday, June 5, 2013 7:48 AM

User1568 posted:

Just me commenting randomly but: have you filed a bug report on this, with a small and extremely focused testcase? And has anyone attempted to replicate this from Eclipse? Just trying to see if we can pin it down. @Woody - does this also happen when you use SetOnClickListener manually, and set a breakpoint in the handler in the Listener class?

Wednesday, June 5, 2013 6:21 PM

User772 posted:

I'm not near a computer at the moment. I sent an email to support about a week ago and got a response saying they have reproduced the issue and will update me when fixed. So I guess they have enough info to fix it. Sooner the better for me. I'm just a bit worried about Xamarin's focus. Most tutorials and documentation seem to revolve around iOS. I'm never going to invest in money-grabbing Apple.

Wednesday, June 5, 2013 6:28 PM

User1568 posted:

Ok great.
About focus: I think their efforts are shared equally on all platforms, but Xamarin.iOS / Monotouch may still have the slight advantage of being a bit older - Monodroid/Xamarin.Android is a bit younger. And it's simply a more complicated platform, due to device and OS fragmentation. But it's getting a lot of love. And if we sling mud, let's be sure to target Google and Apple equally. Or neither, and just be happy developers working in whatever OS we like :)

Wednesday, June 5, 2013 6:55 PM

User210 posted:

bump, what's going on Xamarin?! Are you planning to fix this problem anytime soon?

Monday, June 10, 2013 7:48 AM

User1527 posted:

I had this same issue 2 months ago; I reinstalled Xamarin and it was solved!

Monday, June 10, 2013 9:03 PM

User772 posted:

On an S4? I have this problem on two different PCs, one of which was built fresh, so I'm pretty certain this isn't an installation thing.

Tuesday, June 11, 2013 7:36 AM

User13403 posted:

I'd imagine that was NOT on an S4. While vague, the phone wasn't available "2 months ago".

Tuesday, June 11, 2013 7:38 AM

User210 posted:

I have contacted the support and a bug has finally been filed:

Wednesday, June 12, 2013 10:46 AM

User48 posted:

While investigating this issue, we have discovered two workarounds:

- Use a $(TargetFrameworkVersion) of Android v3.1.
- Use soft breakpoints instead of the normal breakpoints: adb shell setprop debug.mono.env MONO_DEBUG=soft-breakpoints

Friday, June 14, 2013 6:58 PM

User9 posted:

As the bug says, this is something we're still actively looking into resolving; there's two temporary workarounds noted on the bug too. Thanks for your patience on this issue.

Monday, June 17, 2013 9:10 AM

User6937 posted:

This is happening for me too on the S4, guys. I have to use the S2 for on-device debugging.

Monday, June 24, 2013 8:49 AM

User1567 posted:

I have the exact same problem as you guys. Plus one of my apps is crashing pretty often (even when not debugging), haven't found the cause yet.
App works fine on S3, S2, HTC One X, Nexus 7, etc.

Thursday, June 27, 2013 8:44 PM

User210 posted:

Xamarin, it's about time you provide a fix for this issue! This wait time is unacceptable!

Tuesday, July 9, 2013 7:25 AM

User772 posted:

adb shell setprop debug.mono.env MONO_DEBUG=soft-breakpoints works for me, but I agree, this really should have been fixed properly by now.

Tuesday, July 9, 2013 7:43 AM

User21253 posted:

how can I reinstall Xamarin bcs the license of the first has finished and now I want to reinstall?? I uninstall and tried to reinstall but they ask for license... What can I do?? plz help thanks

Sunday, September 29, 2013 5:33 PM

User32917 posted:

For me, it happens every time the breakpoint is standing inside the event handler. How to get around this? Xamarin.Android 4.10.1

Monday, December 23, 2013 12:38 PM

User3525 posted:

I am trying to debug on an HTC One X now and get this error all the time... Any workaround?

Mono.Debugger.Soft.VMDisconnectedException: Exception of type 'Mono.Debugger.Soft.VMDisconnectedException' was thrown.
    at Mono.Debugger.Soft.Connection.SendReceive(CommandSet commandset, Int32 command, PacketWriter packet)
    at Mono.Debugger.Soft.Connection.MethodGetInfo(Int64 id)
    at Mono.Debugger.Soft.MethodMirror.GetInfo()
    at Mono.Debugger.Soft.MethodMirror.get_IsGenericMethod()
    at Mono.Debugging.Soft.SoftDebuggerBacktrace.CreateStackFrame(StackFrame frame, Int32 frameIndex)
    at Mono.Debugging.Soft.SoftDebuggerBacktrace.GetStackFrames(Int32 firstIndex, Int32 lastIndex)
    at Mono.Debugging.Client.Backtrace.GetFrame(Int32 n)
    at Mono.Debugging.Client.Backtrace..ctor(IBacktrace serverBacktrace)
    at Mono.Debugging.Soft.SoftDebuggerSession.GetThreadBacktrace(ThreadMirror thread)
    at Mono.Debugging.Soft.SoftDebuggerSession.HandleBreakEventSet(Event[] es, Boolean dequeuing)
    at Mono.Debugging.Soft.SoftDebuggerSession.HandleEventSet(EventSet es)
    at Mono.Debugging.Soft.SoftDebuggerSession.EventHandler()

Friday, January 3, 2014 11:50 AM - User97927 posted:
I have big problems with the HTC One M8. I get the "The connection with the debugger has been lost..." message a lot when I want to debug my app. Out of eight tries, it works correctly only once.

Monday, March 16, 2015 7:27 AM - User101467 posted:
I have this error a lot with the Moto X (Lollipop); none of my other Android devices exhibit the problem.

Friday, March 27, 2015 1:39 PM - User117347 posted:
I am also having this issue. I am using the Xamarin.Forms.Map service. Everything is working fine, but when I deploy it to a device, it shows a dialog box saying "The connection to the debugger has been lost." My manifest file is given below:

Wednesday, April 1, 2015 10:19 AM - User203024 posted:
I have the same problem with a Google Pixel tablet.

Wednesday, April 6, 2016 6:25 AM
https://social.msdn.microsoft.com/Forums/en-US/ddfff710-db4a-4779-badf-9862e88b4405/the-connection-with-the-debugger-has-been-lost?forum=xamarinandroid
This guide is part of The Complete Guide to ES6 with Babel 6 series. If you're having trouble upgrading to Babel 6, start with Six Things You Need To Know About Babel 6.

So you've written your app, your tests and your libraries in ES6. You've gotten so used to the new syntax that it feels unnatural even trying to write ES5. And this makes it all the more jarring when you add a Gulp file with an import statement, and suddenly this happens:

/unicorn-standard-boilerplate/gulpfile.js:1
(function (exports, require, module, __filename, __dirname) { import del from
                                                              ^^^^^^
SyntaxError: Unexpected reserved word

Oops, gulpfile.js only supports ES5. But lucky for you, teaching it ES6 is almost as simple as renaming a file. Almost…

Installing dependencies

Skip this section if you've already added Babel 6 and any required presets/plugins to your project.

Babel 6 doesn't play well with its younger self, so start by removing any older Babel packages from package.json — babel, babel-core, etc. Next, clean up what's left by deleting your node_modules directory and reinstalling your non-babel dependencies with npm install.

We'll need to install the babel-core package to get access to the Babel require hook (which Gulp uses), and the babel-preset-es2015 package to get access to Babel's collection of ES2015 transforms:

npm install babel-core babel-preset-es2015 --save-dev

If you want to use any of ES6's new built-ins in your Babel tasks — e.g. Set, Symbol or Promise — you'll also need to install the Babel polyfill:

npm install babel-polyfill --save-dev

--save-dev, not --save? Yep, and that's correct here. The difference is that for Gulp, babel-polyfill only needs to run while Gulp is running — not while your app is. If you're using babel-polyfill in your application too, keep it as a dependency, not a devDependency.

Configuring Babel

Here's a fun fact about Babel 6: it won't actually use the ES2015 package which you've installed until you tell it to do so. Unfortunately, there is no way to get Gulp to pass this configuration to Babel. So instead we'll need to create a .babelrc file in the project's root directory, which applies to the entire project:

{
  "presets": ["es2015"]
}

Unfortunately, .babelrc is shared between all tools which can't set their own configuration. For example, the Mocha test runner also reads .babelrc when used with the Babel register script. Plan accordingly, and try not to put anything in there which is going to surprise you later on.

Telling Gulp to Babel

Once you've got Babel installed and configured, teaching Gulp to pass the gulpfile through Babel is surprisingly simple. Just rename your gulpfile.js to gulpfile.babel.js! You can then use the gulp command as before, but with ES6 syntax too!

The only caveat is that if you also want to use ES6 built-ins like Set, Promise and Symbol, you'll need to require the babel-polyfill package that you installed earlier. To do so, just add an import statement to the very top of your gulpfile.babel.js:

import 'babel-polyfill';

My experience is that the new built-ins aren't always needed in gulpfiles, so don't add this import unless you have a reason to.

Examples

For an example of a package with a gulpfile.babel.js using Babel 6, see Unicorn Standard's starter-kit.

OK, it works! But for how long?

Here's a little story: while writing this article, I learned that the require hook in babel-core which Gulp uses is already set to be deprecated — despite the fact that Babel 6 introduced a new way of doing this only a few weeks ago. Unfortunately we can't use the new package just yet, as Gulp hasn't quite caught up yet.

Staying up to date in our JavaScript-driven world can be tough — and that's why I provide my Newsletter!

Can you link to where the require hook deprecation decision was discussed, and/or documentation or clues for what the new method is? Thanks!

The new method is to install the package `babel-register`, and use that instead of `babel-core/register`. I'm not sure where the discussion occurred, sorry – I found this by browsing through source code (it has a commented-out deprecation warning, so the old method isn't deprecated quite yet).

That's very cool. Thanks.

I tend to use npm scripts to run Gulp and I do it by running "babel-node ./node_modules/gulp/bin/gulp". This seems like a pretty decent way to call Gulp, but it kinda requires you to know where the main file for Gulp is found. Of course it "should" be done the built-in way, but if the built-in way is in constant flux, I'll take this instead. 🙂
http://jamesknelson.com/teaching-gulp-es6-with-babel-6/
Answered by: No UpdateSourceTrigger?

It looks like WinRT's Binding (in the Windows.UI.Xaml.Data namespace) doesn't have the UpdateSourceTrigger property. Is there any way to get a TextBox to push its updates into the target on each keypress rather than having to wait for a loss of focus? It doesn't even look like we can retrieve the binding expression and give it a poke by hand, because there's no GetBindingExpression helper to get hold of the binding, and even if there were, the binding doesn't have an UpdateSource method.

Try running the Data Binding sample from the Metro SDK samples, and for Scenario 1, go to the 2nd TextBox (the one with the TwoWay mode), edit the number, hit the Return key. I can cope with it not updating with every single keypress in this case (although there are scenarios where you really do want that, particularly around doing a good job with validation of input). But the fact that it doesn't even pick up the change when you hit Enter - doesn't it just feel broken?

Question Answers

- Ian, your feedback on UpdateSourceTrigger is acknowledged and has been passed onto the appropriate feature crew on our team. Much appreciated! Ashish Shetty, Program Manager, Microsoft

All replies

Sure. What product should I file that under? I just searched for "Metro", "WinRT", and "Windows 8" in the Product Directory on Connect, and got back nothing for all of them. I don't see anything relevant under or What's the proper place to file feedback for WinRT XAML development?

[update] I typed in the code I got for Connect on the piece of card in the build attendee pack. But that only appears to let me into WERP (Windows Ecosystem Readiness Program). Its "submit feedback" feature only accepts bug reports, not feature requests. (This seems like a feature request.) And it says: "You should only need this form for setup failures." And it wants me to attach a bunch of log files that I'm pretty sure are related to setup and installation. So that seems like very much the wrong place to send this feedback.

- Here's the exact form I am given to send out: this does not work, please email me msmall at microsoft and let me know - we'll get the right info to you. Matt Small - Microsoft Escalation Engineer - Forum Moderator

"the information on how to join the Connect program is included on the download page where you installed Windows Developer Preview." - hmm. The preview came pre-installed on the tablet I received as a //build/ conference attendee. So the preview was installed by Microsoft before they provided the tablet. I presume it was downloaded from Microsoft's internal networks... The downloads are available from and there's no information on the Connect program there that I can see. I have an MSDN sub. I see the "Windows Developer Preview" section in the Subscriber downloads section, and again, I see no information about Connect when I expand any of the "Details" sections for the various downloads there.

But in any case, I don't think any of this actually helps, because the issue I've raised here isn't a bug. It's a feature request. The steps described in that final link seem to be about reporting bugs (which I can already do). I think you've pointed me to instructions for doing the thing that I've already said is not what I need to do. I'm not attempting to report a bug. For Connect purposes, this would be a feature request. (Although it's really just a question - I was hoping for some clarification from someone on the relevant team about this.) The log files won't be of any use to you because the issue I'm raising here is an API design issue. In fact, the problem I've described can be seen without even needing to install the Windows Developer Preview.

All the information required to discover the problem can be found just by reading the documentation up at the Metro developer centre. As it happens, I was using my main Windows 7 desktop to browse that site when I discovered the problem, so the log files from my Win8 device really aren't going to help you. (I did then try this out on my Windows 8 system. The system behaved exactly as the documentation said it should, so I can't reasonably call it a bug. It's a design shortcoming.)

I'm afraid the canned response you've been given isn't relevant to the issue I'm describing. (Also, as described above, it appears to contain incorrect information for the scenario that it's actually intended for, so you might need to take that form back to whoever gave it to you. But even when fixed, it's not the right thing for this scenario.) But if the only path for getting this information where it needs to be is to submit it as a 'bug' even though it's not, I can do that. Is that what I should do?

- Ian, your feedback on UpdateSourceTrigger is acknowledged and has been passed onto the appropriate feature crew on our team. Much appreciated! Ashish Shetty, Program Manager, Microsoft

I also have the Consumer Preview and need this feature or something similar. Thanks.

- I'd file a Connect issue so we can at least vote on it, but I can't work out what the right place to submit this would be. Anyone got any suggestions? The Windows Ecosystem Readiness Program still only seems to be taking bug reports, and this isn't a bug as such.

You can make a wrapper yourself: here's another workaround on Stack Overflow...

Shocked to see yet another key feature missing. Seems like Windows Phone 7.0 all over again :-( WPF and Silverlight were so cool, why not just port 100% to ARM and keep everyone happy?!

Actually there are a load of useful WinRT, WPF and Silverlight controls by the same guy. Here's the direct link to his CodePlex "My Toolkit" homepage... Key Artefacts - Edited by Code Chief Tuesday, October 09, 2012 1:26 AM: Add direct link.

If MS intended to make WinRT apps more responsive by dropping this useful feature, they have presumably done the exact opposite, by forcing developers to come up with workarounds that tend to be less fine-tuned than MS could do.

Here's a behaviour I did that doesn't require subclassing the TextBox. You still have to supply the property name to the behaviour, as GetBindingExpression doesn't exist.

using System.Reflection;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

namespace Flexman
{
    public class TextBoxUpdateSourceBehaviour
    {
        private static PropertyInfo _boundProperty;

        public static readonly DependencyProperty BindingSourceProperty =
            DependencyProperty.RegisterAttached(
                "BindingSource",
                typeof(string),
                typeof(TextBoxUpdateSourceBehaviour),
                new PropertyMetadata(default(string), OnBindingChanged));

        public static void SetBindingSource(TextBox element, string value)
        {
            element.SetValue(BindingSourceProperty, value);
        }

        public static string GetBindingSource(TextBox element)
        {
            return (string)element.GetValue(BindingSourceProperty);
        }

        private static void OnBindingChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var txt = d as TextBox;
            if (txt == null) return;
            txt.Loaded += OnLoaded;
            txt.TextChanged += OnTextChanged;
        }

        static void OnLoaded(object sender, RoutedEventArgs e)
        {
            var txt = sender as TextBox;
            if (txt == null) return;

            // Reflect the datacontext of the textbox to find the field to bind to.
            var dataContextType = txt.DataContext.GetType();
            _boundProperty = dataContextType.GetRuntimeProperty(GetBindingSource(txt));

            // If you want the behaviour to handle your binding as well, uncomment the following.
            //var binding = new Binding();
            //binding.Mode = BindingMode.TwoWay;
            //binding.Path = new PropertyPath(GetBindingSource(txt));
            //binding.Source = txt.DataContext;
            //BindingOperations.SetBinding(txt, TextBox.TextProperty, binding);
        }

        static void OnTextChanged(object sender, TextChangedEventArgs e)
        {
            var txt = sender as TextBox;
            if (txt == null) return;
            if (_boundProperty.GetValue(txt.DataContext).Equals(txt.Text)) return;
            _boundProperty.SetValue(txt.DataContext, txt.Text);
        }
    }
}

Usage is:

<TextBox Text="{Binding Username}" Flexman:TextBoxUpdateSourceBehaviour.BindingSource="Username" />

You can also have the behaviour do the binding for you; however, I have commented that part out. That means when MS does fix this in the future it's easier to swap out the behaviour. Enjoy! Glen

I used the TextBoxUpdateSourceBehaviour in a short article I published on CodeProject. However, to use it on the same page with multiple TextBox controls there are some little modifications to apply. Software Architect, Consitel Pte Ltd, Atrium In-Building suite
http://social.msdn.microsoft.com/Forums/windowsapps/en-US/a04dc907-9ca8-4302-bbad-c00b01b8193f/no-updatesourcetrigger?forum=winappswithcsharp
table of contents - buster 4.16-2 - buster-backports 5.04-1~bpo10+1 - testing 5.10-1 - unstable 5.10 are required (see the description of the EPERM error, below). The size argument specifies the number of supplementary group IDs in the buffer pointed to by list. RETURN VALUE¶On success, getgroups() returns the number of supplementary group IDs. On error, -1 is returned, and errno is set appropriately. On success, setgroups() returns 0. On error, -1 is returned, and errno is set appropriately. ERRORS¶ -). - ENOMEM - Out of memory. - EPERM - The calling process has insufficient privilege (the caller does not have the CAP_SETGID capability in the user namespace in which it resides). - EPERM (since Linux 3.19) - The use of setgroups() is denied in this user namespace. See the description of /proc/[pid]/setgroups in user_namespaces(7). CONFORMING TO¶getgroups(): SVr4, 4.3BSD, POSIX.1-2001, POSIX.1-2008. setgroups(): SVr4, 4.3BSD. Since setgroups() requires privilege, it is not covered by POSIX.1. NOTES¶A process can have up to NGROUPS_MAX supplementary group IDs in addition to the effective group ID. The constant NGROUPS_MAX.
https://manpages.debian.org/buster/manpages-dev/getgroups.2.en.html
CC-MAIN-2021-39
refinedweb
186
63.25
The Practical Client Here’s how to add AngularJS to an ASP.NET MVC application in Visual Studio 2015. In previous columns I’ve looked at using TypeScript with popular JavaScript frameworks like Knockout and Backbone. It makes sense, therefore, to look at how to use TypeScript with one of the most popular JavaScript frameworks: AngularJS 2 (Angular 2). Using Angular 2 in an ASP.NET MVC can involve configuring your computer, Visual Studio, your ASP.NET MVC project, Angular itself and TypeScript. Not surprisingly, doing that and creating a simple "Hello World" application is going to take all of this column (in later columns, I look at using TypeScript to actually create Angular applications). I used Visual Studio 2015 Community Edition and the current versions of ASP.NET MVC, Angular 2 and TypeScript for this project. Configuring Your Computer and Visual Studio Angular 2 works best when it retrieves its components using the Node.js package manager (npm). Your first step, therefore, is making sure that you have Node.js version 4.6.x, or greater, installed on your computer (Node.js is installed with some versions of Visual Studio, but it can be a very old version). To verify that you’re running Node.js version 4.6.x, open a command window and type: node -v If you get a "not recognized" message then you’ll need to install Node.js. Even if you have an acceptable version of Node.js, you must also have npm version 3.x.x or greater. To find that out, type this into your command window: npm -v If you have an earlier version of either Node.js or npm then you should upgrade to the latest versions with this command in your command window: npm install [email protected] -g Finally, you should clear out the npm cache (I found that if I skipped this step I got errors when unpacking Node.js packages): npm cache clean Configuring Visual Studio and TypeScript You’ll also need to get the latest version of TypeScript. 
You can get it here or, because you have a command window open, you can use npm and type this: npm install -g typescript Even if you have Visual Studio 2015 Community Edition, you’ll need to have Update 3 installed. To check that, open Visual Studio and select Help | About Microsoft Visual Studio from the menu to determine if your version matches mine. If you don’t have Update 3, use Tools | Extensions from the menu, go to Updates and apply Update 3. You also need to configure the Visual Studio project options to ensure that Visual Studio will use the versions of Node.js and npm that you’ve installed. To do that, select Tools | Options from the menu and, in the tree on the left, select Projects and Solutions | External Web Tools. In the panel on the right, move the $(PATH) entry so that it’s above the $(DevEnvDir) entries. Close the dialog when you’re done and restart Visual Studio. Configuring Your Project Now, you’re ready to add your ASP.NET MVC project. I used ASP.NET MVC 5.2.3. To make sure that you’re using that version of ASP.NET MVC (or later) use NuGet Manager to install ASP.NET MVC into your project. To create my ASP.NET MVC project, I selected File | New | Project to open the New Projects dialog. Under Visual C# | Web, I selected ASP.NET Web Application (.NET Framework), gave my project a name, and clicked the OK button. In the resulting New ASP.NET Web Application, I selected MVC and checked off the MVC option. At this point, if you’re following along, you have a vanilla ASP.NET MVC 5.2.3 project. To add my first controller, I used Add | New item and selected MVC 5 Controller – Empty. I called my controller HomeController.cs and, from its Index method, added a View called Index.cshtml. I added my View without a layout page to simplify coding later in the project. Your next step is to let Node.js populate a node_modules folder in your project directory with the JavaScript files you need (this folder doesn’t need to be added to your project). 
This process is driven by the entries in a package.json file that must be part of your project. To add that file, right-click on your project and select Add | New Item to display the Add New Item dialog. In the dialog, select an npm Configuration File and click the Add button. Once the file is added, paste the text in Listing 1 into the file. { "version": "1.0.0", "name": "asp.net", "private": true, "scripts": { "build": "tsc -p src/", "build:watch": "tsc -p src/ -w", "build:e2e": "tsc -p e2e/", "serve": "lite-server -c=bs-config.json", "serve:e2e": "lite-server -c=bs-config.e2e.json", "prestart": "npm run build", "start": "concurrently \"npm run build:watch\" \"npm run serve\"", "pree2e": "npm run build:e2e", "e2e": "concurrently \"npm run serve:e2e\" \"npm run protractor\" --kill-others --success first", "preprotractor": "webdriver-manager update", "protractor": "protractor protractor.config.js", "pretest": "npm run build", "test": "concurrently \"npm run build:watch\" \"karma start karma.conf.js\"", "pretest:once": "npm run build", "test:once": "karma start karma.conf.js --single-run", "lint": "tslint ./src/**/*.ts -t verbose" }, .4", "systemjs": "0.19.40", "core-js": "^2.4.1", "reflect-metadata": "^0.1.10", "rxjs": "5.0.1", "zone.js": "^0.7.4" }, "devDependencies": { "concurrently": "^3.2.0", "lite-server": "^2.2.2", "typescript": "~2.0.10", "canonical-path": "0.0.2", "tslint": "^3.15.1", "lodash": "^4.16.4", "jasmine-core": "~2.4.1", "karma": "^1.3.0", "karma-chrome-launcher": "^2.0.0", "karma-cli": "^1.0.1", "karma-jasmine": "^1.0.2", "karma-jasmine-html-reporter": "^0.2.2", "protractor": "~4.0.14", "rimraf": "^2.5.4", "@types/node": "^6.0.46", "@types/jasmine": "2.5.36" } } I’m not suggesting that I know that you need all the packages in Listing 1. I’m saying that my project worked with these packages. To start the process of creating the node_modules folder and downloading all your packages, all you should have to do is save the package.json file. 
You can check out your progress in the View | Output window (provided you change the dropdown list at the top of the window to Bower/npm). Be patient: It can take some time for the process to start and even longer for it to complete. You can check if the process is complete by looking for a node_modules folder in your project directory (because you haven’t added the folder to your project, it won’t appear in Solution Explorer). Once you’ve ensured that the folder isn’t empty, Node.js is done. If simply saving the package.json file doesn’t work for you, right-click on package.json in Solution explorer and select Restore Packages. And, if that doesn’t work, open the CMD window, surf to your project folder and use this command: npm install From the Command Window, npm will automatically look for your package.json file in the current folder and use its contents to populate the node_modules folder. Adding the JavaScript Application Your first step in setting up the client-side part of your application is to add a JavaScript file called systemjs.config.js to your Scripts folder. Listing 2 shows what you need in that file to configure your Angular 2 application (again, I’m not saying everything in this file is necessary).: { defaultExtension: 'js' }, rxjs: { defaultExtension: 'js' } } }); })(this); I added a new folder called "app" to my project’s Scripts folder to hold my application. You don’t have to call the folder "app," but it’s an Angular 2 convention to do so and that’s the folder that’s referenced in my sample systemjs.config.js code. In your application folder, you should add three TypeScript files. Those files will hold: You can add these files in any order, but you might as well start from the top with your module file. To do that, add a TypeScript file named app.module.ts to your Scripts folder. 
Here’s a basic module definition for this file: import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; @NgModule({ imports: [BrowserModule], declarations: [AppComponent], bootstrap: [AppComponent] }) export class AppModule { } The first two lines import Angular 2 modules that you need as part of defining your module. The third line imports your component file. The @NgModule block specifies the components in the module. In this case, there are three components: The Angular 2 BrowserModule and a component that’s used both to provide data declarations and to be the start point of the application (called AppComponent in my example). The last line exposes this definition to other TypeScript modules. Again, you don’t have to call the component AppComponent, though this is a TypeScript convention. You’ll get some errors at this point. Don’t panic! The errors related to app.component and AppComponent will be fixed when you add your component file to the project; the message about "Experimental support for decorators…" associated with the last line will go away after you’ve run your application once. Your next step is to define the components in your application with a TypeScript file called app.component.ts to your Scripts folder (the file’s name is another Angular convention). Typical code for this file looks like this: import { Component } from '@angular/core'; @Component({ selector: 'HelloWorld', template: '<h1>Page Says: {{text}}</h1>' }) export class AppComponent { text = 'Hello, World'; } The first line imports an Angular module that you need in this file. The next lines define the parts of this component. In this example, I define a: One thing to be careful of: The delimiters used in the template property in the previous code are not quotes. They’re backticks. 
You can type them in using the key to the left of the digit 1 at the top of your keyboard (at least, that’s where they are on my keyboard). Inside a template, you use handlebars ( {{ }} ) to define an Angular 2 placeholder. At runtime, in this example {{text}} will be replaced by data associated with the text property in the component that uses this template. In the last line of the component file, I define my component with a property called text and set the text property to "Hello, World." Finally, you can create the TypeScript code that will load your application. Conventionally, that file is called main.ts. A minimal main.ts looks like this: import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app.module'; platformBrowserDynamic().bootstrapModule(AppModule); The first line in this file imports an Angular 2 component that you need in this file. The second line imports your module from your module file (that is, it must match the name in the export class line of your module file). The last line starts your application and uses the name from the previous import statement. Configuring TypeScript Now that you have your Angular 2 application set up, you need to configure TypeScript. To do that, add a tsconfig.json file to your project and put the contents of Listing 3 in it (in the Add New Item dialog, this file is listed as a "TypeScript JSON Configuration File"). { "compilerOptions": { "noEmitOnError": true, "removeComments": false, "sourceMap": true, "target": "es5", "module": "commonjs", "moduleResolution": "node", "emitDecoratorMetadata": true, "experimentalDecorators": true, "lib": [ "es2015", "dom" ], "noImplicitAny": true, "suppressImplicitAnyIndexErrors": true }, "exclude": [ "node_modules", "wwwroot" ] } Now would also be a good time to go to NuGet Manager and upgrade all the NuGet packages in your project. Creating Your View Finally, you’re ready to create a View that will work with Angular 2. 
Begin by adding these script elements to your View’s header element (if your View has a layout page, you should add these tags to the head element of your layout page): <script src="node_modules/zone.js/dist/zone.js"></script> <script src="node_modules/reflect-metadata/Reflect.js"></script> <script src="node_modules/systemjs/dist/system.src.js"></script> <script src="~/Scripts/systemjs.config.js"></script> Also in head element, you should add the JavaScript code that, first, tells Angular 2 to look for the JavaScript version of your TypeScript files and, second, runs the code in your bootstrap file. That code looks like this: <script> System.config({ defaultJSExtensions: true }); System.import('./scripts/app/main').catch(function (err) { console.error(err); }); </script> The last step is to use the custom tag you set up in your component file to invoke Angular 2 processing (in my case, that’s a tag called HelloWorld). At run time, Angular 2 will replace the tag with the content of your template and then replace the data associated with the text property. My tag is this simple: <HelloWorld/> After doing all that, if you press F5 you should (after a while) see your page display with your content. For now, that's a great start. Next month, I’ll start building out something more
https://visualstudiomagazine.com/articles/2017/04/01/set-up-aspnet-mvc.aspx
CC-MAIN-2022-40
refinedweb
2,195
67.04
How To Use Clojure For Scripting Getting to REPL Suppose you want to try out Clojure, but don’t want to spent a lot of time setting up a project. You just want to fire up the REPL and start playing with code. Also you want to include other Clojure libraries dynamically on a whim. You don’t want to declare all of them first, or restart your REPL every time you think of a new library to pull in. These steps will show you how to do this. Boot-clj First we are going to install boot-clj. Boot is similar to Lein with the difference that it lets you define your builds in Clojure dynamically. Install boot-clj. brew install boot-clj Start boot repl. boot repl Test the REPL. (println "Hello, world!!") At this point the REPL should work. You can play with all the functions that ship with Clojure. Adding Dependencies For most interesting applications you want to pull in dependencies from mvnrepository or from clojars. Clojure core just isn’t enough. Here is how to pull in 3rd party dependencies. We are going to define a function called deps that will give us the ability to dynamically pull in dependencies from mvnrepository or clojars. Paste this into your REPL. ; Define deps to pull in dependencies dynamically (defn deps [new-deps] (merge-env! :dependencies new-deps)) Testing With CPrint Lets test this using cprint, which is a color pretty printing function. To use cprint we are going to pull in lein-cprint version 1.2.0. You can get the dependency name and version from mvnrepository or clojars. ; Here is how to pull in the dependency (deps '[[lein-cprint "1.2.0"]]) To use a function in the dependency we can either use use or requires. This will pull it into our current namespace. 
; `use` imports cprint directly into our namespace (use 'leiningen.cprint) (cprint (range 10)) ; `require` imports cprint as cp/cprint in our namespace (require '[leiningen.cprint :as cp]) (cp/cprint (range 10)) REPL History Now after hacking for a bit you will want to page through your REPL history. To view the REPL history look at .nrepl-history in the directory where you started boot repl. cat .nrepl-history You can copy the history into a file, clean it up, and then refactor it into an elegant script defining functions. Next time you start a session you want to load the script file you created. Here is how to load a script file into the REPL. (load-file "myscript.clj") This should give you a good starting point in how to use the REPL to interactively develop Clojure programs without getting bogged down in setting up a project. REPL in Production Now at this point you might say, “This is a great way to goof around in Clojure, but to put a system into production we have to put on a serious face and give up the joys of the REPL.” Not necessarily. I run production servers with a Boot REPL. You can too. The REPL is a very powerful shell that lets you call and change parts of your Clojure program on the fly. It’s like doing engine repairs on a plane in flight. Using the REPL you inspect and debug errors as they happen. You can dynamically redefine functions. And most importantly you can retain the playful REPL mindset in production. Clojure Script Files Can I run my Clojure scripts like I do with my Python, Ruby, Perl, and Bash scripts? Yes you can. You can run your scripts directly using this shebang line #!/usr/bin/env boot on top of your script file. Save this into hello.clj and try it out. #!/usr/bin/env boot (println "Hello world!") This turns Clojure into a scripting language much like Python, Ruby, Perl, or Bash. Faster Startup Time If you want to speed up your startup time you can pass JVM flags into boot using BOOT_JVM_OPTIONS. Here is how I run boot. 
export BOOT_JVM_OPTIONS='
  -client
  -XX:+TieredCompilation
  -XX:TieredStopAtLevel=1
  -Xmx2g
  -XX:+UseConcMarkSweepGC
  -XX:+CMSClassUnloadingEnabled
  -Xverify:none'

This brings the startup time on my MacBook down from 3.2 seconds to 1.6 seconds. The last flag, -Xverify:none, comes with the following warning from dev.clojure.org: "Suppresses the bytecode verifier (which speeds classloading). Enabling this option introduces a security risk from malicious bytecode, so should be carefully considered."
What you need:

- Raspberry Pi with an internet connection
- DS18B20 temperature sensor (see the DS18B20 tutorial at Adafruit)
- Account at sen.se to log your values

-----

sen.se will have you feeling like an expert in no time. Basically, you want to create a "channel" for your Raspberry Pi by 'adding a device'. sen.se will give you a 5-digit channel number for your Raspberry Pi and a very long passphrase that will be your personal identifier. You will need both of these for the source code below.

-----

Next, let's connect the DS18B20 to the Raspberry Pi. The DS18B20 transmits its temperature reading via the 1-Wire bus. Just follow the tutorial at Adafruit; the connection is simple.

-----

Load the Python script below onto your Raspberry Pi and run it. Be certain you enter your personal passphrase identifier and the device channel code that you got earlier from sen.se. After you run the Python script, head back over to sen.se. You should see that sen.se has detected a 'heartbeat' from your Raspberry Pi. After that, it is just a matter of configuring one of the graphing apps on sen.se. You can make your sen.se data public or private, and there are many tools to manipulate and display your data.

-----

Good luck! The Python script for the Raspberry Pi follows:

# WhiskeyTangoHotel.Com
# June 2013
# Program reads DS18B20 temp sensor and plots value to sen.se
# DS18B20 connections via AdaFruit tutorial
# With thanks to @Rob_Bishop
# This program is feed customized for RasPI(2)

import httplib
import json as simplejson
from random import randint
import time
import os
import glob

# Pass os commands to set up the 1-Wire bus
os.system('modprobe w1-gpio')
os.system('modprobe w1-therm')

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28*')[0]
device_file = device_folder + '/w1_slave'

run_number = 0
SENSE_API_KEY = "long sen.se passphrase here. note that it is in quotes"
FEED_ID1 = 12345  # five digit sen.se channel code.
# note it is NOT in quotes

def read_temp_raw():  # read the DS18B20
    f = open(device_file, 'r')
    lines = f.readlines()
    f.close()
    return lines

def read_temp():  # process the raw temp file output and convert to F
    lines = read_temp_raw()
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(1)
        lines = read_temp_raw()
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_string = lines[1][equals_pos + 2:]
        ambC = float(temp_string) / 1000.0
        ambF = ambC * 9.0 / 5.0 + 32.0
        return ambF

def send_to_opensense(data):
    # NOTE: the body of this function was partly lost when this post was
    # extracted. It is reconstructed here on the assumption that it POSTs
    # the reading to the sen.se events API; check the sen.se docs for the
    # exact endpoint and payload.
    try:
        payload = simplejson.dumps([{"feed_id": FEED_ID1, "value": data['F']}])
        headers = {"sense_key": SENSE_API_KEY, "content-type": "application/json"}
        conn = httplib.HTTPConnection("api.sen.se")
        conn.request("POST", "/events/", payload, headers)
        response = conn.getresponse()
        # you may get interesting information here in case it fails
        # print response.status, response.reason
        # print response.read()
        conn.close()
    except:
        pass

while(True):
    try:
        run_number = run_number + 1
        ambF = read_temp()
        print "RasPI(2) Ambient Run:", run_number, " ambF:", ambF
        data = {'F': ambF}
        send_to_opensense(data)
        time.sleep(300)
    except:
        pass

-----
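The w1_slave parsing logic in read_temp can be exercised without the sensor attached; here is a minimal self-contained sketch (the sample lines below are fabricated for illustration, and the function name is mine):

```python
def parse_w1_slave(lines):
    """Parse the two-line DS18B20 w1_slave output into degrees Fahrenheit.

    Returns None if the CRC check line does not end in YES or no t= field
    is present.
    """
    if lines[0].strip()[-3:] != 'YES':
        return None
    pos = lines[1].find('t=')
    if pos == -1:
        return None
    temp_c = float(lines[1][pos + 2:]) / 1000.0  # sensor reports millidegrees C
    return temp_c * 9.0 / 5.0 + 32.0

# Fabricated sample of what /sys/bus/w1/devices/28-*/w1_slave looks like:
sample = [
    "72 01 4b 46 7f ff 0e 10 57 : crc=57 YES\n",
    "72 01 4b 46 7f ff 0e 10 57 t=23125\n",
]
print(parse_w1_slave(sample))  # 23.125 C -> 73.625 F
```

Feeding the function a line whose CRC check failed returns None, which is why the real script loops and retries the read.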
#include <string.h>
#include "avcodec.h"
#include "rangecoder.h"
#include "bytestream.h"

Based upon "Range encoding: an algorithm for removing redundancy from a digitised message", G. N. N. Martin, presented in March 1979 to the Video & Data Recording Conference, IBM UK Scientific Center, Southampton, July 24-27 1979.

Definition in file rangecoder.c.

- Definition at line 59 of file rangecoder.c. Referenced by decode_frame().
- Definition at line 52 of file rangecoder.c. Referenced by decode_frame().
- Definition at line 41 of file rangecoder.c. Referenced by ff_init_range_decoder().
- Definition at line 99 of file rangecoder.c.
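The Martin paper cited above describes range encoding: a message is mapped to a subinterval of [0, 1) whose width is the product of the symbol probabilities. rangecoder.c implements this with fixed-width integers and renormalization; the underlying interval-narrowing idea can be sketched exactly (no renormalization) with rational arithmetic, shown here in Python for brevity:

```python
from fractions import Fraction

def intervals(freqs):
    """Cumulative [low, high) interval for each symbol, as exact Fractions."""
    total = sum(freqs.values())
    out, c = {}, 0
    for s in sorted(freqs):
        out[s] = (Fraction(c, total), Fraction(c + freqs[s], total))
        c += freqs[s]
    return out

def encode(message, freqs):
    """Narrow [low, low+width) once per symbol; return a number inside it."""
    iv = intervals(freqs)
    low, width = Fraction(0), Fraction(1)
    for s in message:
        lo, hi = iv[s]
        low, width = low + width * lo, width * (hi - lo)
    return low  # any value in [low, low+width) identifies the message

def decode(code, length, freqs):
    """Reverse the narrowing: find the containing interval, rescale, repeat."""
    iv = intervals(freqs)
    out = []
    for _ in range(length):
        for s, (lo, hi) in iv.items():
            if lo <= code < hi:
                out.append(s)
                code = (code - lo) / (hi - lo)
                break
    return ''.join(out)

msg = 'abracadabra'
freqs = {'a': 5, 'b': 2, 'c': 1, 'd': 1, 'r': 2}
print(decode(encode(msg, freqs), len(msg), freqs) == msg)  # True
```

A production coder cannot afford arbitrary-precision fractions, which is why rangecoder.c instead keeps the interval in machine words and periodically shifts settled high-order bytes out to the byte stream.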
It occurs to me that there are likely to be quite a few people interested in component development in Flex 2 (bit of a no-brainer really). So I thought I'd do a set of posts focused on tips and tricks to make rapid development of Flex 2 MXML-based components more productive (ActionScript 3 ones to follow). This isn't a 'cook book' solution but it does fall in that arena. So to kick it off, I thought I'd continue on from my last post and show how to override the default values of a component both from a component that inherits from it and from within a composited component.

To do this I will concentrate on my two example ComboBox components, FGComboBox and FGExtendedComboBox. These aren't really that useful in a production environment but they help to illustrate code examples.

To start with, let's recap on the two components. First up we have FGComboBox:

Next up we have FGExtendedComboBox:

As you can see, FGExtendedComboBox inherits from FGComboBox but does nothing more. That's fine, but what if we want to override the dataProvider? Well, we can tackle this in two ways. We can either implement a dataProvider in the component body or we can place a piece of script in the code to set the dataProvider. There is a slight wrinkle though, due to the fact that once you have extended a component it takes all of its information from the parent class and namespace. So you will get an error if you try and use <mx:dataProvider></mx:dataProvider> inside FGExtendedComboBox.

To get around this you have a couple of options from within the component. The first is to just use ActionScript to set the dataProvider. As you can see from the code below, we call setData() once the component has initialized. This in turn sets the dataProvider inside our component. Now if you choose to set the value of a property within the component, bear in mind that it will likely resolve before the rest of the application has finished its own initialization / creation.
Secondly, there is setting the dataProvider within the component via MXML. However, as we cannot access <mx:dataProvider>, how do we do this? Well, if you think about it in a logical way it will become evident. Considering we use 'this' to access the component's dataProvider method from ActionScript, it makes sense that we do the same with the MXML. In this case, though, we can be implicit with the notion of 'this' and just provide a <dataProvider></dataProvider> tag block. Once we have that in place we can go back to using <mx:...> tags within it, as you can see below:

Obviously it would add clarity if there were a namespace associated with properties that you were going to access internally from the parent. You can do this, but it requires you to add in those additional namespaces. It doesn't appear to add anything beyond clarity in the code, so I would just choose whichever one suits your requirements. Below are a few other variants that you can employ, including the additional namespace, and accessing super.dataProvider (don't use this one unless you have a specific reason for altering the parent property). This one utilizes a call to the parent property / method via super(). Not really a solution you are likely to use when extending the MX components, but you may wish to do this in a bespoke creation of your own.

So that's about it. From my experience it is easier to start with MXML component development and move on to ActionScript 3.0 component development. Not because AS3.0 is harder, but because for the time being pretty much all of your AS3.0 development is going to revolve around Flex (until Flash 9 is released). So to get used to the MXML syntax it makes sense to start from that direction. I'm working on some AS3.0 examples, articles and musings, so stick with it; knowing Flex will pay dividends even if you are a 'die-hard' Flasher.

Catch my talk on Flex 2 and AS3 component development at Flash on the Beach, December 4th-6th 2006, Brighton, UK.
This article will create a simple WPF control that enables browsing of map data from OpenStreetMap, as well as searching for places and displaying the results on the map. The attached zip file has both the source code, with documentation, and a sample application.

I needed to allow the user to select various locations in my project but didn't want the user to have to install any other applications (such as Google Earth or Bing Maps 3D). One option was to have a web browser in my application pointing to the online versions, but this didn't feel right. Finally, I looked at OpenStreetMap and was impressed by the maps, but couldn't find any controls to put in my application.

OpenStreetMap

As their main wiki page puts it, OpenStreetMap is basically a map made by the community for everybody to use. Also, luckily for me, all the details of the tile file naming conventions are documented there for creating our own control. All we have to do is download the relevant 256 x 256 pixel square image tiles for the area we want to look at and stitch them together - simple!

The original version of the code used a method similar to the WinForms DoEvents method to allow queued-up messages in the UI to be processed while the images were updating. There is a reason WPF doesn't implement this method: it's not a good idea to do that! This time around, there's a separate class responsible for fetching the images (either from the cache on disk or from the server), and this is called on a background thread (using ThreadPool.QueueUserWorkItem to take care of thread creation), which then freezes the BitmapImage. Once frozen, this can be passed back to the UI thread, which updates one tile at a time as needed without blocking.

All the basic functionality for displaying a map and searching is in the MapControl project; however, you will need to create the controls for navigating and for getting the search query from the user.
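The tile naming convention mentioned above maps a latitude/longitude and zoom level to integer tile coordinates (each tile being a 256 x 256 image served under /zoom/x/y.png). A minimal sketch of that conversion, shown in Python for brevity (the function name is mine; the formula is the standard slippy-map one from the OSM wiki):

```python
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """Convert WGS84 degrees to OpenStreetMap slippy-map tile numbers."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom  # the world is an n-by-n grid of tiles at this zoom level
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile

print(deg2tile(0.0, 0.0, 1))     # (1, 1): the tile just south-east of the origin
print(deg2tile(51.5, -0.12, 0))  # (0, 0): at zoom 0 there is only one tile
```

The control's job is then just to work out which tile numbers cover the visible viewport, fetch each one, and lay them out edge to edge.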
I've made a separate project called SampleApp which does just this, but I must confess my design skills suck.

This class has helper methods to retrieve information from OpenStreetMap and has the following members:

public static class TileGenerator
{
    // The maximum allowed zoom level.
    public const int MaxZoom = 18;

    // Occurs when the number of downloads changes.
    public static event EventHandler DownloadCountChanged;

    // Occurs when there is an error downloading a Tile.
    public static event EventHandler DownloadError;

    // Gets or sets the folder used to store the downloaded tiles.
    // Note: This should be set before any call to GetTileImage.
    public static string CacheFolder { get; set; }

    // Gets the number of tile images waiting to be downloaded.
    // Note: this is not the same as the number of active downloads.
    public static int DownloadCount { get; }

    // Gets or sets the user agent used to make the tile request.
    // Note: This should be set before any call to GetTileImage.
    public static string UserAgent { get; set; }

    // Returns a valid value for the specified zoom,
    // in the range of 0 - MaxZoom inclusive.
    public static int GetValidZoom(int zoom);
}

Before we do anything with any of the map controls, even before trying to call their constructors, we need to set the directory for the tile image cache folder and the user agent we will use to identify ourselves. Here MainWindow is assumed to be the first window loaded, but you could instead put these lines inside the constructor of the default App class:

public MainWindow()
{
    // Very important we set the CacheFolder before doing anything so the
    // MapCanvas knows where to save the downloaded files to.
    TileGenerator.CacheFolder = @"ImageCache";
    TileGenerator.UserAgent = "MyDemoApp";

    this.InitializeComponent();
    // Because this will create the MapCanvas, it
    // has to go after the above.
}

The actual map is displayed inside the MapCanvas, which is inherited from the WPF Canvas control (hence the well-thought-out name ;) ).

public sealed class MapCanvas : Canvas
{
    // Identifies the Latitude attached property.
    public static readonly DependencyProperty LatitudeProperty;

    // Identifies the Longitude attached property.
    public static readonly DependencyProperty LongitudeProperty;

    // Identifies the Viewport dependency property. This will be read only.
    public static readonly DependencyProperty ViewportProperty;

    // Identifies the Zoom dependency property.
    public static readonly DependencyProperty ZoomProperty;

    // Gets the visible area of the map in latitude/longitude coordinates.
    public Rect Viewport { get; }

    // Gets or sets the zoom level of the map.
    public int Zoom { get; set; }

    // Gets the value of the Latitude attached property for a given dependency object.
    public static double GetLatitude(DependencyObject obj);

    // Gets the value of the Longitude attached property for a given dependency object.
    public static double GetLongitude(DependencyObject obj);

    // Sets the value of the Latitude attached property for a given dependency object.
    public static void SetLatitude(DependencyObject obj, double value);

    // Sets the value of the Longitude attached property for a given dependency object.
    public static void SetLongitude(DependencyObject obj, double value);

    // Centers the map on the specified coordinates, calculating the required zoom level.
    // The size parameter is the minimum size that must be visible,
    // centered on the coordinates.
    // i.e. the longitude range that must be visible will be:
    // longitude +- (size.Width / 2)
    public void Center(double latitude, double longitude, Size size);

    // Creates a static image of the current view.
    public ImageSource CreateImage();

    // Calculates the coordinates of the specified point.
    // The point should be in pixels, relative to the top left corner of the control.
    // The returned Point will be filled with the Latitude in the Y property and
    // the Longitude in the X property.
    public Point GetLocation(Point point);
}

The main points of interest are the two attached properties, MapCanvas.Latitude and MapCanvas.Longitude, that make it a bit easier to position child controls (though you can still use the regular Canvas ones such as Canvas.Left, etc.). Using them should be straightforward:
// The returned Point will be filled with the Latitude in the Y property and // the Longitude in the X property. public Point GetLocation(Point point); } The main points of interest are the two attached properties that make it a bit easier for positioning child controls (though you can still use the regular Canvas ones such as Canvas.Left, etc.): MapCanvas.Latitude and MapCanvas.Longitude. Using them should be straight forward: Canvas.Left MapCanvas.Latitude MapCanvas.Longitude <!-- Make sure you've included a reference to MapControl.dll in your project --> <!-- and put a reference like the following at the start of the XAML file: --> <!-- xmlns:map="clr-namespace:MapControl;assembly=MapControl" --> <map:MapCanvas> <!-- The Top Left corner of the control will be at the specified Latitude + Longitude, so set a negative Margin to centralise the control on the coordinates. --> <Rectangle Fill="Red" Height="50" Width="50" Margin="-25,-25,0,0" map:MapCanvas. </map:MapCanvas> The MapCanvas will handle dragging with the mouse and zooming using the scroll wheel, however, you will probably want to add a set of navigation controls as well. To enable this, the MapControl registers itself with the following (self explanatory) standard WPF commands: ComponentCommands.MoveDown ComponentCommands.MoveLeft ComponentCommands.MoveRight ComponentCommands.MoveUp NavigationCommands.DecreaseZoom NavigationCommands.IncreaseZoom This SearchProvider class first tries to parse the query for a decimal latitude and longitude (in that order, separated by a comma and/or space) but if that fails will pass the query on to Nominatim to search osm data by name and address. Just to reiterate, it will only try and parse decimal degrees, not degrees minutes seconds. SearchProvider public sealed class SearchProvider { // Occurs when the search has completed. public event EventHandler SearchCompleted; // Occurs if there were errors during the search. 
    public event EventHandler<SearchErrorEventArgs> SearchError;

    // Gets the results returned from the most recent search.
    public SearchResult[] Results { get; }

    // Searches for the specified query, localizing the results to the specified
    // area.
    // Note that it only returns true if a search was started. This does not
    // always mean that the method has failed - if a set of valid coordinates
    // were passed as the query then no search will be performed (returning
    // false) but the SearchCompleted event will be raised and the Results will
    // be updated.
    public bool Search(string query, Rect area);
}

This finally leaves the SearchResult class that, as you would expect, contains information for an individual search result returned from Nominatim.

public sealed class SearchResult
{
    // Gets the formatted name of the search result.
    public string DisplayName { get; }

    // Gets the index the result was returned from the search.
    public int Index { get; }

    // Gets the latitude coordinate of the center of the search result.
    public double Latitude { get; }

    // Gets the longitude coordinate of the center of the search result.
    public double Longitude { get; }

    // Gets the size of the search's bounding box.
    public System.Windows.Size Size { get; }
}

Before using the code or sample application, you should read and make sure you comply with the OpenStreetMap licensing and usage terms. The way I read it is: make sure you put a copyright notice on the map (like the one in the bottom right hand corner of the sample application) and make sure you don't abuse the servers by downloading too much (such as trying to download all the tiles in one go).

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
in reply to Re: Have you netted a Perl Monk or Perl Pretender in 5 minutes or less?
in thread Have you netted a Perl Monk or Perl Pretender in 5 minutes or less?

Do you remember which version of perl was the first one you ever used? What version are you using now?

/usr/bin/rsync -av --delete rsync:// +.x/ /opt/perl/snap/MIRROR/

#!/bin/sh
cd /opt/perl/snap || exit 1
rsync -av --delete MIRROR/ src/
## darnit I want one-level namespaces
echo ... PATCHING src/hints/darwin.sh ...
perl -pi-DIST -e 's/\[2-6/\[2-9/' src/hints/darwin.sh
cd src || exit 1
PATH=/usr/bin:/bin ./Configure -des -Duseshrplib -Dusedevel \
  -Uversiononly -Dprefix=/opt/perl/snap \
  -Dlocincpth=/sw/include -Dloclibpth=/sw/lib \
  -Dperladmin=merlyn@stonehenge.com

Then I run "cpan-r", described in a recent magazine article of mine, to run the CPAN shell using Expect, executing the "r" command and automatically updating all out-of-date modules.

Do I get the job? {grin}

-- Randal L. Schwartz, Perl hacker
Be sure to read my standard disclaimer if this is a reply.

> Do I get the job? {grin}

Hey, merlyn, you might get the job, but I don't know if it would meet your salary requirements ;-)

Paulster2
QML and js reading json file and change content "on the go"

Hi! How can I get JSON content (paths to text, image and video) to show in a ListView? Each media entry is an Item in the ListView. I'm studying and modifying the Qt5_CinematicExperience demo. This code has a "dummy" model. What I want to do is update the content every time I detect that the JSON file has changed. A lot of this has to be done through JavaScript, correct? What are the best practices?

Hey again, I am not certain I understand what you want to do. Do you have JSON content in an external file and want to read it so it will be available in QML (as JavaScript), or where is the JSON string coming from? In my projects I usually use C++ for that and emit the parsed JSON object or array to QML, so check out the Qt class QJsonDocument. You can also use pure JavaScript to parse the JSON string, but you can't access the file system from QML as far as I know, so you need C++ anyway. If you want, I can publish my JsonFile QML "plugin", which is a simple way to read and write JSON files from QML.

Hi again! ;) I have a JSON file with this sample content:

@
{"id":32, "type":3, "title":"another video",
 "media":{
   "video":{"url":"/media/videos/sample.mp4","filename":"sample.mp4"},
   "image":null,
   "text":null,
   "qrcode":{"url":"/media/qrcodes/ceaa2aa649.png","filename":"ceaa2aa649.png"}
 }
},
...
@

I need to read this file (I guess using JavaScript, or your plugin ;) and set up the whole structure in a ListView, all of this on the fly, because this JSON file can change at any moment. I do not know if this is the best solution. I've done this before using Python on the backend and HTML5+JavaScript on the frontend. But I want to change to Qt5 because this application will run on a Raspberry Pi, and Qt5 seems faster than using some web browser. Can I read a local file through XHR, right? For now I'm running away from C++ ;P I'm trying to do everything with QML and JS.
Yeah, you should be able to use XMLHttpRequest and JSON.parse if you don't want any C++ in your project :D I made my own C++ plugin because XMLHttpRequest has some limitations: you can't easily check if a file exists, and you can't write to files with it I think (so only read access), but in your case that should be fine, so just use it I guess.

Just if you want to know, here is how my JsonFile plugin looks in QML:

@
JsonFile {
    id: jsonFile
    name: "foo.json"
}
...
jsonFile.write([1,2,3]) // write any JavaScript array or object
var data = jsonFile.read() // read JSON file into JavaScript object or array
@

It is easy to use from QML. Not that the C++ part is complicated, but it has some helper functions and stuff like jsonFile.size (file size) or jsonFile.exists and some other things.

The problem is the update of the JSON file: read it again and then change the slide content. I know it's asking too much, but can someone post a simple code example?

I don't know what you mean by the update of the file; what changes the JSON file? If the file is dynamically changed outside of your application you have to use a file watcher or poll the contents every 'x' seconds; that is not possible with QML alone. Maybe I understand you wrong, it sounds a little weird with your JSON file!?

I would love to download your plugin. How can I add it to my project?

I haven't published it yet, but I can just post the code here; it isn't that much.
jsonfile.h

@
#ifndef JSONFILE_H
#define JSONFILE_H

#include <QObject>
#include <QFile>
#include <QVariant>

class JsonFile : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QString name READ name WRITE setName NOTIFY nameChanged)
    Q_PROPERTY(QString fileName READ fileName NOTIFY nameChanged)
    Q_PROPERTY(bool exists READ exists)
    Q_PROPERTY(bool writeable READ writeable)
    Q_PROPERTY(bool readable READ readable)
    Q_PROPERTY(qint64 size READ size)
    Q_PROPERTY(QString error READ error)

public:
    explicit JsonFile(QObject *parent = 0);
    JsonFile(const QString &name, QObject *parent = 0);

    inline QString name() const { return m_file.fileName(); }
    QString fileName() const;
    inline bool exists() const { return m_file.exists(); }
    inline bool writeable() const { return m_file.permissions().testFlag(QFileDevice::WriteUser); }
    inline bool readable() const { return m_file.permissions().testFlag(QFileDevice::ReadUser); }
    inline qint64 size() const { return m_file.size(); }
    inline QString error() const { return m_error; }

    Q_INVOKABLE QString relativeFilePath(const QString &dir = QString()) const;
    Q_INVOKABLE bool rename(const QString &newName);
    Q_INVOKABLE inline bool copy(const QString &newName) { return m_file.copy(newName); }
    Q_INVOKABLE inline bool remove() { return m_file.remove(); }
    Q_INVOKABLE bool write(const QVariant &data);
    Q_INVOKABLE QVariant read();

signals:
    void nameChanged(const QString &name);

public slots:
    void setName(const QString &name);

private:
    QFile m_file;
    QString m_error;
};

#endif // JSONFILE_H
@

jsonfile.cpp

@
#include "jsonfile.h"

#include <QUrl>
#include <QFileInfo>
#include <QDir>
#include <QJsonDocument>

JsonFile::JsonFile(QObject *parent) : QObject(parent)
{
}

JsonFile::JsonFile(const QString &name, QObject *parent)
    : QObject(parent), m_file(name)
{
}

void JsonFile::setName(const QString &name)
{
    // fix to convert URL's to local file names
    QUrl url(name);
    QString localName = url.isLocalFile() ?
url.toLocalFile() : name;
    if (m_file.fileName() != localName) {
        m_file.setFileName(localName);
        emit nameChanged(localName);
    }
}

QString JsonFile::fileName() const
{
    return QFileInfo(m_file).fileName();
}

QString JsonFile::relativeFilePath(const QString &dir) const
{
    return QDir(dir).relativeFilePath(m_file.fileName());
}

bool JsonFile::rename(const QString &newName)
{
    bool success = m_file.rename(newName);
    if (success) {
        emit nameChanged(newName);
    }
    return success;
}

bool JsonFile::write(const QVariant &data)
{
    if (m_file.fileName().isEmpty()) {
        m_error = tr("empty name");
        return false;
    }
    QJsonDocument doc = QJsonDocument::fromVariant(data);
    if (doc.isNull()) {
        m_error = tr("cannot convert '%1' to JSON document").arg(data.typeName());
        return false;
    }
    if (doc.isEmpty()) {
        m_error = tr("empty data");
        return false;
    }
    QByteArray json = doc.toJson();
    if (!m_file.open(QIODevice::WriteOnly | QIODevice::Truncate | QIODevice::Text)) {
        m_error = tr("cannot open file '%1' for writing: %2")
                      .arg(m_file.fileName()).arg(m_file.errorString());
        return false;
    }
    bool success = m_file.write(json) == json.size();
    m_file.close();
    return success;
}

QVariant JsonFile::read()
{
    if (m_file.fileName().isEmpty()) {
        m_error = tr("empty name");
        return QVariant();
    }
    if (!m_file.open(QIODevice::ReadOnly | QIODevice::Text)) {
        m_error = tr("cannot open file '%1' for reading: %2")
                      .arg(m_file.fileName()).arg(m_file.errorString());
        return QVariant();
    }
    QByteArray json = m_file.readAll();
    m_file.close();
    QJsonParseError error;
    QJsonDocument doc = QJsonDocument::fromJson(json, &error);
    if (error.error != QJsonParseError::NoError) {
        m_error = tr("invalid JSON file '%1' at offset %2")
                      .arg(error.errorString()).arg(error.offset);
        return QVariant();
    }
    return doc.toVariant();
}
@

Just register that class in the QML engine, like always in main.cpp or wherever you do that:
@
qmlRegisterType<JsonFile>("JsonFile", 1, 0, "JsonFile");
@

I think that class should be fairly easy to understand; don't get scared by the amount of properties and methods, most of them are just helper functions as you can see :) If any error happens (file not found while reading, invalid JSON content, etc.) the error message will be available through the "error" property. Hope that helps :)

That helps me a lot! Thanks! But how can I print the JSON content (for debugging)?

The raw JSON content is only available in the C++ file; look at the method JsonFile::read():

@
QByteArray json = m_file.readAll();
@

That json variable holds the content of the file. You can print it to the console with

@
qDebug() << json;
@

You might need to include QDebug for that:

@
#include <QDebug>
@

at the top of the file (I don't know how much C++ you know?). If you rather want to print the parsed content as a JavaScript object or array, you can just do that in QML.

I "read" the JSON file in QML, but the content is only available in the C++ file? My knowledge of C++ is almost nothing :(

Well, to explain that a little: everything that comes from outside of QML (a file on your disk, the network, etc.) goes through C++. You might not see it, but if you use an Image in QML the file gets loaded and decoded in C++ and then transferred to QML. Usually you don't need to see the raw JSON content, so the question here is: what do you want to do? You can just debug the decoded JSON object/array in QML; that should be fine unless there is an error and you need to know why. In that case you should learn how to use the debugger with QML and C++: you can just set a breakpoint in Qt Creator and see the value at that point without printing it to the console.
You might want to read this article about debugging in QML. For simple debugging of JavaScript objects you can also use

@
JSON.stringify(obj)
// use it with my JsonFile like this
console.log(JSON.stringify(jsonFile.read()))
@

That might look stupid, to parse the JSON string and then convert it back to a string, but it is the easiest way for debugging purposes, since the console cannot print objects (you will just see [object Object] or something similar).
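Stepping back, the contract the JsonFile plugin gives QML is: serialize on write, parse with an error report on read, never throw. That shape is language-neutral; sketched here in Python purely for illustration (the function names are mine):

```python
import json
import os
import tempfile

def write_json(path, data):
    """Mirror JsonFile::write -- return (ok, error) instead of throwing."""
    try:
        with open(path, "w") as f:
            json.dump(data, f)
        return True, None
    except (TypeError, OSError) as e:
        return False, str(e)

def read_json(path):
    """Mirror JsonFile::read -- return (value, error); value is None on failure."""
    try:
        with open(path) as f:
            return json.load(f), None
    except (OSError, ValueError) as e:
        return None, str(e)

path = os.path.join(tempfile.mkdtemp(), "foo.json")
write_json(path, [1, 2, 3])
value, err = read_json(path)
print(value, err)  # [1, 2, 3] None
```

Returning the error alongside the value, rather than raising, is what makes the API comfortable to call from the QML side, where unhandled C++ exceptions are not an option.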
I believe that most people who run PyTorch programs on a server have encountered this problem: "RuntimeError: CUDA out of memory". It simply means there is not enough GPU memory.

1. The error message states exactly how much memory a certain GPU has used, and the remaining memory is not enough

In this case, you only need to reduce batch_size.

2. No matter how much you reduce batch_size, the "out of memory" error is still reported

This can happen with newer PyTorch versions; in that case, wrap the forward pass in the following code:

with torch.no_grad():
    output = net(input, inputcoord)

3. There is no indication of how much memory has been used or how much is left

This may be because your PyTorch version does not match your CUDA version. You can check by entering the following in a Python session on your terminal:

import torch
print(torch.__version__)
print(torch.version.cuda)
print(torch.backends.cudnn.version())
print(torch.cuda.is_available())

The code above shows your current torch version, CUDA version, and cuDNN version, and whether torch can use the GPU under the current CUDA version. If the last line returns False, it is recommended to adjust the CUDA version or the PyTorch version.
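When memory pressure varies from run to run, one defensive pattern is to retry the failing step with a smaller batch. The sketch below is framework-agnostic and hypothetical: fake_step stands in for a real forward/backward pass, and plain MemoryError stands in for the CUDA out-of-memory error.

```python
def run_with_backoff(train_step, batch_size, min_batch=1):
    """Retry a step, halving the batch size each time it runs out of memory."""
    while batch_size >= min_batch:
        try:
            return train_step(batch_size)
        except MemoryError:
            batch_size //= 2  # halve and try again
    raise MemoryError("out of memory even at the minimum batch size")

# Simulated step: pretend any batch above 8 samples exhausts the GPU.
def fake_step(batch_size):
    if batch_size > 8:
        raise MemoryError
    return batch_size

print(run_with_backoff(fake_step, 64))  # 8
```

Note that a real PyTorch version of this would also need to release the partially allocated tensors before retrying; otherwise the smaller batch can still fail.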
I've included Table 10.1 to help you sort out the different components that are used in our side-scroller game, Tommy's Adventures.

Table 10.1 Components of a Side-Scroller Game

The source code for Tommy's Adventures is organized into seven source code files and two header files. If you have installed the software using the INSTALL program on the companion disk and the default directories, you will find these source code files in the \FG\TOMMY\ subdirectory. Each of the files is listed and discussed in the next few chapters. Here is a list of the source code files, in the order in which they are discussed:

Notice that the last file, CHAR.C, was introduced in Chapter 9 when we discussed the game editor. That leaves us six source files and one header file to discuss. We'll present these files as we introduce the key game-programming topics. For instance, when we discuss sprite animation in Chapters 12 and 13, we'll look at the functions in TIMER.ASM, ACTION.C, and MOTION.C.

For the most part, you shouldn't have too much trouble following the code because of the way it is organized. But keep in mind that many of the game components and operations, such as the tiles, sprites, scrolling system, and animation, are all tightly integrated. It is difficult to discuss one component of the game without mentioning the other parts, which means this is not a sequential discussion. Please bear with me; I will try to define the elements of the game as I introduce them, and by the end of Chapter 15 we will have covered everything.

All the C source code has been tested with the Borland C++, Turbo C/C++, Microsoft C/C++ and Microsoft Visual C++ compilers, and should work with other ANSI C/C++ compilers as well.

Project: Compiling the Tommy's Adventures Game

If you have not yet done so, now would be a good time to recompile Tommy's Adventures.
Be sure to keep a clean copy of the source code and the data files in a backup directory so that you can refer to them later if you need to. Use your favorite C compiler and compile the following source code files:

TOMMY.C
CHAR.C
EFFECTS.C
MAP.C
MOTION.C

This will generate five OBJ files. Link these OBJ files with TIMER.OBJ and the appropriate large model Fastgraph library (FGL.LIB for Fastgraph or FGLL.LIB for Fastgraph/Light). This will give you a new TOMMY.EXE.

A note on troubleshooting: Check for batch files in the \FG\TOMMY\ subdirectory with compile commands and switches for the most popular compilers. If you are using the Borland C++ or Turbo C/C++ compilers and you want to compile in the IDE, you will need to make a project file. In general, all you need to do is open a project file, add the five source code files plus TIMER.ASM, and the FGL.LIB or FGLL.LIB. If you have any difficulty with this (many people do!), consult your compiler manuals or call Borland.

Do not compile and link the ACTION.C source code file. It is included in the TOMMY.C source code file. I will explain why in Chapter 13. If you are using Fastgraph/Light, you will need to run the FGDRIVER.EXE program before running TOMMY.EXE. Do not try to use FGDRIVER.EXE in a Windows DOS box. Exit Windows before running programs linked with Fastgraph/Light.

The best place to begin with our game source code is the GAMEDEFS.H definition file that is used by all of the source code files. This file contains the definitions for all the constants, data structures, and function prototypes used in the game. In particular, you'll find the sections shown in Table 10.2 in this file.

Table 10.2 Sections in GAMEDEFS.H

Let's examine the header file and then we'll discuss some of the more important data structures.
Here is the complete GAMEDEFS.H file:

/******************************************************************\
*  GameDefs.h -- Main header file for Tommy's Adventures game      *
*  copyright 1994 Diana Gruber                                     *
*  compile using large model, link with Fastgraph (tm)             *
\******************************************************************/

/********************* standard include files *********************/

#include <fastgraf.h>             /* Fastgraph function declarations */
#include <conio.h>
#include <ctype.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <dos.h>
#include <io.h>

/* Borland C and Turbo C have different names for some of the
   standard include files */

#ifdef __TURBOC__
#include <alloc.h>
#include <mem.h>
#else
#include <malloc.h>
#include <memory.h>
#endif

#ifdef tommy_c
#define DECLARE                   /* declarations are not extern */
#else
#define DECLARE extern            /* declarations are extern */
#endif

/********************* file i/o variables *************************/

DECLARE int nlevels;                 /* total number of levels */
DECLARE int current_level;           /* current level number */
DECLARE char game_fname[13];         /* file name of game file */
DECLARE char level_fname[13];        /* file name of level data */
DECLARE char background_fname[13];   /* pcx file -- background tiles */
DECLARE char backattr_fname[13];     /* background tile attributes */
DECLARE char foreground_fname[13];   /* pcx file -- foreground tiles */
DECLARE char foreattr_fname[13];     /* foreground tile attributes */

#define MAXLEVELS 6                  /* max 6 levels per episode */

typedef struct levdef                /* level structure */
{
   char level_fname[13];
   char background_fname[13];
   char backattr_fname[13];
   char foreground_fname[13];
   char foreattr_fname[13];
   char sprite_fname[13];
} LEVDEF;

DECLARE LEVDEF far level[MAXLEVELS]; /* array of level structures */

DECLARE int nspritelists;            /* total number of sprite lists */
DECLARE char sprite_fname[13];       /* sprite file name */
DECLARE char list_fname[13];         /* sprite list file name */

#define MAXSPRITELISTS 8             /* max 8 sprite lists per level */

DECLARE char list_fnames[MAXSPRITELISTS][13]; /* array of sprite lists */

DECLARE FILE *stream;                /* general purpose file handle */
DECLARE FILE *dstream;               /* used for debugging */
DECLARE FILE *level_stream;          /* file handle: level data */
DECLARE FILE *sprite_stream;         /* file handle: sprite file */

/******************** map declarations *************************/

#define BACKGROUND 0                 /* tile type is background */
#define FOREGROUND 1                 /* tile type is foreground */

DECLARE int tile_type;               /* foreground or background */
DECLARE int tile_orgx;               /* tile space x origin */
DECLARE int tile_orgy;               /* tile space y origin */
DECLARE int screen_orgx;             /* screen space x origin */
DECLARE int screen_orgy;             /* screen space y origin */
DECLARE int screen_xmax;             /* max screen space x coordinate */
DECLARE int screen_ymax;             /* max screen space y coordinate */
DECLARE int world_x;                 /* world space x origin */
DECLARE int world_y;                 /* world space y origin */
DECLARE int world_maxx;              /* max world space x coordinate */
DECLARE int world_maxy;              /* max world space y coordinate */
DECLARE int vpo;                     /* visual page offset */
DECLARE int vpb;                     /* visual page bottom */
DECLARE int hpo;                     /* hidden page offset */
DECLARE int hpb;                     /* hidden page bottom */
DECLARE int tpo;

#define MAXROWS 200                  /* maximum rows of tiles */
#define MAXCOLS 240                  /* maximum columns of tiles */

DECLARE int nrows;                   /* number of rows */
DECLARE int ncols;                   /* number of columns */

/* tile arrays for levels */
DECLARE unsigned char far background_tile[MAXCOLS][MAXROWS];
DECLARE unsigned char far foreground_tile[MAXCOLS][MAXROWS];

/* tile attribute arrays */
DECLARE unsigned char background_attributes[240];
DECLARE unsigned char foreground_attributes[28];

DECLARE char layout[2][22][15];      /* layout array */

DECLARE int warped;                  /* flag: warped this frame? */
DECLARE int scrolled_left;           /* flag: scrolled left? */
DECLARE int scrolled_right;          /* flag: scrolled right? */
DECLARE int scrolled_up;             /* flag: scrolled up? */
DECLARE int scrolled_down;           /* flag: scrolled down? */

/******************** sprite declarations *************************/
;

#define MAXSPRITES 100               /* maximum number of sprites */

DECLARE SPRITE *sprite[MAXSPRITES];  /* sprite array */
DECLARE int nsprites;                /* number of sprites */

#define STANDFRAMES 3                /* number of frames in sprite list */
#define RUNFRAMES   6
#define JUMPFRAMES  4
#define KICKFRAMES  8
#define SHOOTFRAMES 7
#define SCOREFRAMES 3
#define ENEMYFRAMES 6

DECLARE SPRITE *tom_stand[STANDFRAMES];   /* sprite lists */
DECLARE SPRITE *tom_run  [RUNFRAMES];
DECLARE SPRITE *tom_jump [JUMPFRAMES];
DECLARE SPRITE *tom_kick [KICKFRAMES];
DECLARE SPRITE *tom_shoot[SHOOTFRAMES];
DECLARE SPRITE *tom_score[SCOREFRAMES];
DECLARE SPRITE *enemy_sprite[ENEMYFRAMES];

#define LEFT  0                      /* direction of sprite */
#define RIGHT 1

/************************** object declarations *******************/

DECLARE struct OBJstruct;            /* forward declarations */
typedef struct OBJstruct OBJ, far *OBJp;

typedef void near ACTION (OBJp objp); /* pointer to action function */
typedef ACTION near *ACTIONp;
*/ };

DECLARE OBJp player;                 /* main player object */
DECLARE OBJp top_node, bottom_node;  /* nodes in linked list */
DECLARE OBJp score;                  /* score object */

#define MAXENEMIES 5
DECLARE OBJp enemy[MAXENEMIES];      /* array of enemy objects */
DECLARE int nenemies;                /* how many enemies */

/********************* special effects **************************/

DECLARE char far *slide_array;
DECLARE int slide_arraysize;         /* size of slide array */
DECLARE int player_blink;            /* flag: is Tommy blinking? */
DECLARE int nblinks;                 /* how many times has he blinked? */
DECLARE unsigned long blink_time;    /* how long since the last blink? */
DECLARE char far blink_map[4000];    /* bitmap mask for the blink */

/********************* key declarations *************************/

#define BS    8                      /* bios key values */
#define ENTER 13
#define ESC   27
#define SPACE 32

#define KB_ALT   56                  /* low-level keyboard scan codes */
#define KB_CTRL  29
#define KB_ESC   1
#define KB_SPACE 57
#define KB_UP    72
#define KB_LEFT  75
#define KB_RIGHT 77
#define KB_DOWN  80
#define KB_F1    59
#define KB_F2    60
#define KB_W     17
#define KB_D     32

/************ miscellaneous defines and variables ***********/

#define MAX(x,y) ((x) > (y)) ? (x) : (y)
#define MIN(x,y) ((x) < (y)) ? (x) : (y)

#define OFF   0
#define ON    1
#define ERR  -1
#define OK    1
#define FALSE 0
#define TRUE  1

DECLARE int hidden;                  /* hidden page */
DECLARE int visual;                  /* visual page */
DECLARE int seed;                    /* random number generator seed */
DECLARE int white;                   /* colors for status screen */
DECLARE int black;
DECLARE int blue;

DECLARE unsigned long game_time;     /* total clock ticks */
DECLARE unsigned long last_time;     /* time last frame */
DECLARE unsigned long delta_time;    /* time elapsed between frames */
DECLARE unsigned long max_time;      /* how long Tommy stands still */

DECLARE int nbullets;                /* how many bullets */
DECLARE unsigned long shoot_time;    /* how long between shots */
DECLARE long player_score;           /* how many points */
DECLARE int show_score;              /* flag: scoreboard on? */
DECLARE int forward_thrust;          /* horizontal acceleration */
DECLARE int vertical_thrust;         /* vertical acceleration */
DECLARE int kicking;                 /* flag: kicking? */
DECLARE int kick_frame;              /* stage of kick animation */
DECLARE int kick_basey;              /* y coord at start of kick */
DECLARE int nkicks;                  /* how many kicks */
DECLARE int nshots;                  /* how many shots */
DECLARE int nhits;                   /* how many hits */
DECLARE int nlives;                  /* how many lives */
DECLARE int warp_to_next_level;      /* flag: warp? */
DECLARE char abort_string[50];       /* display string on exit */

/***************** function declarations *******************/

void set_rate(int rate);             /* external timer function */

typedef void far interrupt HANDLER (void);
typedef HANDLER far *HANDLERp;
DECLARE HANDLERp oldhandler;

/* action function declarations: action.c */
void near bullet_go(OBJp objp);
void near enemy_hopper_go(OBJp objp);
void near enemy_scorpion_go(OBJp objp);
void near floating_points_go(OBJp objp);
void near kill_bullet(OBJp objp);
void near kill_enemy(OBJp objp);
void near kill_object(OBJp objp);
void near launch_bullet(void);
void near launch_enemy(int x,int y,int type);
void near launch_floating_points(OBJp objp);
void near player_begin_fall(OBJp objp);
void near player_begin_jump(OBJp objp);
void near player_begin_kick(OBJp objp);
void near player_begin_shoot(OBJp objp);
void near player_fall(OBJp objp);
void near player_jump(OBJp objp);
void near player_kick(OBJp objp);
void near player_run(OBJp objp);
void near player_shoot(OBJp objp);
void near player_stand(OBJp objp);
void near put_score(OBJp objp);
void near update_score(OBJp objp);

/* function declarations: char.c */
void put_string(char *string,int ix,int iy);
void center_string(char *string,int x1,int x2,int y);

/* function declarations: effects.c */
void get_blinkmap(OBJp objp);
void load_status_screen(void);
void redraw_screen(void);
int  status_screen(void);
void status_shape(int shape,int x,int y);

/* function declarations: map.c */
void load_level(void);
void page_copy(int ymin);
void page_fix(void);
void put_foreground_tile(int xtile,int ytile);
void put_tile(int xtile,int ytile);
void rebuild_background(void);
void rebuild_foreground(void);
int  scroll_down(int npixels);
int  scroll_left(int npixels);
int  scroll_right(int npixels);
int  scroll_up(int npixels);
void swap(void);
void warp(int x,int y);

/* function declarations: motion.c */
int can_move_down(OBJp objp);
int can_move_up(OBJp objp);
int can_move_right(OBJp objp);
int can_move_left(OBJp objp);
int collision_detection(OBJp objp1,OBJp objp2);
int how_far_left(OBJp objp,int n);
int how_far_right(OBJp objp,int n);
int how_far_up(OBJp objp,int n);
int how_far_down(OBJp objp,int n);
int test_bit(char num,int bit);

/* function declarations: tommy.c */
void main(void);
void activate_level(void);
void apply_sprite(OBJp objp);
void array_to_level(int n);
void fix_palettes(void);
void flushkey(void);
void getseed(void);
void get_blinkmap(OBJp objp);
void interrupt increment_timer(void);
int  irandom(int min,int max);
void init_graphics(void);
void level_to_array(int n);
void load_sprite(void);
void load_status_screen(void);
void terminate_game(void);

The GAMEDEFS.H file is included in all the source code files. This could present a problem. Global variables should be declared one time in one source code file, and then seen elsewhere as "extern" variables. This ensures all the functions in all the source code files are looking at the same memory location for a variable. In order to solve this problem, we define a symbol DECLARE, as follows:

#ifdef tommy_c
#define DECLARE                   /* declarations are not extern */
#else
#define DECLARE extern            /* declarations are extern */
#endif

This means DECLARE will be defined to mean nothing in the TOMMY.C source code file, and elsewhere it will be defined to mean "extern." This solves the problem quite nicely. A global variable can now be declared like this:

DECLARE int current_level;

The declaration will be extern in all source code files except TOMMY.C. To facilitate the definition of DECLARE, we define tommy_c at the top of TOMMY.C like this:

#define tommy_c
#include "gamedefs.h"

Now the tommy_c symbol will only be defined in the TOMMY.C source code file, and elsewhere it will be invisible to the compiler. If you look closely at GAMEDEFS.H, you'll see that our game is designed using four data structures to support a layout array, levels, sprites, and objects.
You'll need a good understanding of how these structures work in order to follow the game code we'll present in Chapters 11 through 15. When we cover the source code, you'll see how these data structures are used.

The game code is designed to support six levels. We define a constant named MAXLEVELS in GAMEDEFS.H to specify the number of levels that can be used:

#define MAXLEVELS 6

If you want to add more levels to the game, you'll need to change this constant. If you recall from Chapter 9 when we completed the game editor, we explained that the level data is stored in an array named level, which is declared like this:

DECLARE LEVDEF far level[MAXLEVELS];

In the game, we use this same data structure. Recall that it is simply an array of LEVDEF structures. This structure simply holds the names of each of the six data files that are used to create the game:

typedef struct levdef                /* level structure */
{
   char level_fname[13];
   char background_fname[13];
   char backattr_fname[13];
   char foreground_fname[13];
   char foreattr_fname[13];
   char sprite_fname[13];
} LEVDEF;

Here we have compartments for the filenames of the level data, background tiles, background tile attributes, foreground tiles, foreground tile attributes, and the sprite list. The names of these files are read by the main() function in TOMMY.C and then they are assigned to the level array by calling the level_to_array() function, which is also located in TOMMY.C:

void level_to_array(int n)
{
   /* update all the levels */
   strcpy(level[n].level_fname,     level_fname);
   strcpy(level[n].background_fname,background_fname);
   strcpy(level[n].backattr_fname,  backattr_fname);
   strcpy(level[n].foreground_fname,foreground_fname);
   strcpy(level[n].foreattr_fname,  foreattr_fname);
   strcpy(level[n].sprite_fname,    sprite_fname);
}

Once this data has been read in, it can easily be accessed by the main game functions. When we get to Chapter 12, we'll be spending quite a bit of time discussing game animation.
In particular, we'll look at how our animated sprites interact with the tiles in our game levels. This type of animation can get a little tricky so we've devoted a few chapters to showing you the subtleties of fast sprite animation. For now, let's explore the data structures that are used. First, we'll need two arrays to hold pointers to our background and foreground tiles:

/* tile arrays for levels */
DECLARE unsigned char far background_tile[MAXCOLS][MAXROWS];
DECLARE unsigned char far foreground_tile[MAXCOLS][MAXROWS];

Because MAXCOLS is set to 240 and MAXROWS is set to 200, these arrays can reference as many as 48,000 tiles for our background and foreground art. Second, we need an important structure we call the layout array:

/* declare the layout array */
DECLARE char layout[2][22][15];

The layout array holds the information about the status of the tiles displayed on the current screen. We need it to help us keep track of when tiles need to be redrawn on the screen when sprites are being animated. Notice that the layout array has three dimensions. The first subscript, [2], refers to the two pages: hidden and visual. Tiles are tracked on both the hidden and visual pages. The second subscript, [22], refers to the number of columns. The third subscript, [15], is the number of rows. It's easy to visualize the layout array as an array of Boolean values superimposed on the tiles, as shown in Figure 10.1.

Figure 10.1 How the layout array is set up.

If the array element is assigned a value of 0, the corresponding tile has not changed in the current animation frame (see the next Tommy's Tip) and we'll call this a clean tile. If the array element is 1, the tile has been overwritten with something, probably a sprite, and it needs to be redrawn. When there are no sprites visible and all the tiles are clean, the layout array contains all 0s, as shown in Figure 10.1.
As sprites are added, they cover up tiles, and the corresponding elements of the layout array are set to 1, or TRUE. Animation in a side scroller consists of displaying many frames very quickly and very smoothly. For the purposes of our discussion, we'll define an animation frame to be the sequence of events ending in a page flip. We expect to animate our game at a frame rate of approximately two dozen frames per second. The sequence of events will happen roughly like this: We'll elaborate on this sequence as we go along in Chapters 11 through 14. In general, every frame of animation does all five of these steps to a greater or lesser degree. Some frames skip the user interaction part, and not all frames require scrolling, but every frame requires a page flip. So we will use the page flip to define the end of a frame. The process of rebuilding all the tiles can be the most time-consuming part of the frame. Since we want to maximize our frame rate, we can take a shortcut on this step. Instead of replacing all the tiles every frame, we can update the bare minimum number of tiles that must be redrawn to clear the screen. If we scrolled during the frame, we will need to update a row or column of tiles along the edges. The only other tiles we'll need to replace are those that were covered by sprites. In order to differentiate these tiles from the "clean" tiles, we need a mechanism to keep track of the tile status. And that's where the layout array comes in. Tile attributes are byte values assigned to individual tiles that contain information about how a sprite may interact with the tile. The most important tile attribute is solid on top (the sprite "walks" on the tiles that are solid on top). Paths and platforms are made up of tiles with the solid-on-top attribute. Not all tiles are solid on top, of course, or Tommy would walk on walls and in the sky. 
We want Tommy to keep his feet on the floor, and if he happens to venture out into empty space, we want the rules of physics to apply. Figure 10.2 shows how tile attributes are used to keep Tommy's feet on the ground. Figure 10.2 Tile attributes keep Tommy from falling through the floor. Similarly, we have attributes for solid on the bottom so that Tommy will bump his head on ceilings and ledges if he's not careful, and solid on the left or right, so he won't walk through walls. Each tile has eight attributes; besides the four attributes for solid on the top, bottom, left, and right, there are four more attributes that you can use for anything you want. You may want to use attributes to flag a tile as the end of a level, as a door, as a remappable tile (as in the case of a flickering torch, where tiles are replaced periodically), as a starting point for a sprite, or as a hazard, such as spikes. Additionally, passing over a tile may change the action of the sprite. If a tile is a patch of ice, the sprite will slide over it, or a tile may accelerate sprite movement, such as a fan or catapult. There are obviously many ways to use a tile attribute. Each tile is assigned a tile attribute byte. The eight bits in the byte indicate which attributes are set. Table 10.3 shows how I have assigned the bits. Table 10.3 Tile Attributes The tile attributes are assigned to the unique tiles in the tile library. That means that if a tile is a floor, it will be a floor throughout the level. If a tile is solid on the top in one position in the level, every occurrence of that tile in the level will also be solid on the top. Assigning tile attributes to tiles in the tile library is more efficient than assigning attributes to every tile in the level individually. That method would also work, but you would need an array as big as the level to hold the attributes and you would use up much more RAM. 
I don't do it that way, but I can see how it would be possible for some games to work better with attributes assigned to level positions rather than unique tiles. Feel free to experiment, but for our discussion, we'll assume the attributes are assigned to unique tiles in the tile library. The attribute bytes are stored in two arrays, which are declared like this in GAMEDEFS.H:

DECLARE unsigned char background_attributes[240];
DECLARE unsigned char foreground_attributes[28];

Each of the 240 unique background tiles has an attribute byte associated with it, as does each of the 28 unique foreground tiles. The attributes are usually set in the game editor. To check an attribute by testing a bit, either in the game or in the editor, the following function will do the job:

int test_bit(char num,int bit)
{
   /* test bit flags, used for tile attributes */
   return((num >> bit) & 1);
}

This function is found in the file MOTION.C.

Is Tommy a sprite or is Tommy an object? As discussed earlier, Tommy is both, but for the purposes of the code we are going to discuss, Tommy must be defined very precisely. Therefore, we will define a structure of type object in GAMEDEFS.H that will completely describe Tommy. This structure gives all the information about Tommy: his current position, what he is doing, how long he has been doing it, and what sprite he is currently displayed as. That's right, Tommy's object keeps track of Tommy's sprite. There are many sprites that could be the current representation of Tommy. He may be standing still, walking, or running. The chosen sprite is one of the images we created in the sprite editor, and it is stored in a sprite structure. So Tommy's object structure points to Tommy's sprite structure. While there is only one object structure for Tommy, there are 37 sprite structures, and Tommy's object can point to any one of them. We will examine this relationship in more detail in Chapter 12.
There will be times, as we build our game, that we'll have an item that we don't know what to do with. There are some items that can be represented as either a tile or an object, and it is not always obvious which is the best way to define it.

Take the case of a cheeseburger, for example. Suppose Tommy is running around the level and his energy level goes down. He is hungry. Let's give him a cheeseburger. How are we going to do it? The cheeseburger can be displayed as either a tile or a sprite. If it is stored as a tile, it should be as a foreground tile, so it can be put anywhere in the level, and the background will show through it. How will we know Tommy has grabbed the cheeseburger? We can use a tile attribute to mark the tile as food. Then every time Tommy passes a foreground tile, we can check it to see if the food attribute is set. If it is, we remove the foreground tile from the foreground_tile array and give Tommy the energy boost he has earned.

On the other hand, if the cheeseburger is stored as an object it would have no tile attribute. We would have to use collision detection techniques to determine how the two objects should interact with each other. If a collision is detected, we would remove the cheeseburger object from the linked list and give Tommy his snack.

Both methods would work, and the one you choose is related to space and speed considerations. Since we only have enough room in video memory for 28 foreground tiles, we may want to use them sparingly. On the other hand, sprites and objects take up room in RAM, and if we are facing a RAM crunch, tiles, which are stored in video memory, may be the better option. Video-to-video transparent blits are a little slower than RAM-to-video transparent blits, so a "cheeseburger sprite" would be a little faster than a "cheeseburger tile." Except that objects are drawn every frame, and foreground tiles are not. We only redraw tiles when an object passes over them (or behind them).
Also, if there are many cheeseburgers on a level, we will have to do a collision check on each one every frame. That would slow us down, but just a little bit. Are there other factors we have not considered? We only have four unassigned tile attributes; what if we want to use them for something else? What if we want the cheeseburger to use the background palettes rather than the sprite palettes? Do we want the cheeseburger to blink or display a floating score when it is grabbed? As you can see, the decision on how to store the cheeseburger is complex. The optimal solution is not always obvious during the early design phase of a game. Sometimes a little experimentation is needed. It is a good idea to keep an open mind about things like this. Different games will yield different results.

At the bottom of GAMEDEFS.H are the function declarations for all the source code files. Function declarations are important to get clean compiles without compiler warnings. The function declarations are organized in the same order as the source code files and are presented in roughly alphabetical order. Notice that ACTION.C is treated as a separate source code file even though it is not compiled alone. Rather, it is included in TOMMY.C using the #include preprocessor directive. The reason for including the file in this manner is that all the action functions in ACTION.C are declared to be near, and must reside in the same code segment as the functions that call them. We'll discuss this concept further in Chapter 13.

Some of the global variables in GAMEDEFS.H are visible in only one source code file; others are visible in several source code files. I have not differentiated between them -- all global variables are universally visible in the Tommy's Adventures source code files. I realize this runs contrary to the current programming style of data encapsulation. My only defense of this practice is to say: this is the way I like to do it; it works well for me.
Development is speeded up because I always know where my globals are. Having them in one file makes it easy for me to find them. I can modify them or add new globals quickly. I don't waste a lot of time worrying about which variables are visible to which functions, and I don't waste RAM on duplicate copies of variables. It seems like a perfectly efficient way to organize things to me. You may, of course, feel free to encapsulate your own data with my blessing.
https://book.huihoo.com/action-arcade-adventure-set/chapt10.html
#include <configfile.h>

List of all members.

This class is used to load settings from a configuration text file. The file is divided into sections, with each section having a set of key/value fields. An example of the file format is as follows:

# This is a comment
section_name
(
  key1 0
  key2 "foo"
  key3 ["foo" "bar"]
)

Standard constructor.
Standard destructor.
Load config from file.
Check for unused fields and print warnings.
Read a string value.
Read an integer value.
Read a floating point (double) value.
Read a length (includes unit conversion, if any).
Read an angle (includes unit conversion). In the configuration file, angles are specified in degrees; this method will convert them to radians.
Read a color (includes text to RGB conversion). In the configuration file colors may be specified with symbolic names, e.g., "blue" and "red". This function will convert them to an RGB value using the X11 rgb.txt file.
Read a filename. Always returns an absolute path. If the filename is entered as a relative path, we prepend the config file's path to it.
Get the number of values in a tuple.
Read a string from a tuple field.
Read an integer from a tuple field.
Read a float (double) from a tuple field.
Read a length from a tuple (includes units conversion).
Read an angle from a tuple (includes units conversion).
Read a device id.
Get the number of sections.
Get a section type name.
Lookup a section number by section type name.
Get a section's parent section.
Dump the token list (for debugging).
Dump the section list for debugging.
Dump the field list for debugging.
Name of the file we loaded.
http://playerstage.sourceforge.net/doc/Player-1.6.5/player-html/classConfigFile.php
Tax preparers are a hot commodity this time of year, but sometimes their clients drive them crazy with habits they say are over the line. Here are a few ways you can be less taxing to your tax pro.

Why it's annoying: When February's over, it's go time in the tax world. "There's something about March 1st … March just gets crazy," says Anil Melwani, a certified public accountant and president of 212 Tax & Accounting Services in New York City. And it's not just him. The National Association of Enrolled Agents recently surveyed more than 1,400 of the federally licensed tax pros it represents, and it found that more than half (51%) "strongly agree" you should book time with your tax preparer early in the filing season.

Do this instead: Get on your tax pro's calendar before the end of February. "The dream situation is for someone to come in between January 25th and February 25th with all their information and paperwork," Melwani says.

Why it's annoying: Tax preparers aren't magicians, time travelers or mind readers. "It doesn't really make sense for us to start a return if we don't have everything," Melwani says. "It would usually just lead to double or unnecessary additional work." That could cost you, too: 71% of tax preparers charge extra for disorganized or incomplete files, according to the National Society of Accountants, and the average cost is $117. Also, not having stuff can arouse suspicion, and good tax preparers want nothing to do with fraudulent returns. "If they aren't providing all the information that I'm asking for, that's the hardest to deal with because it's hard to know if they're not doing it because they're hiding something," says Sallie Mullins Thompson, a CPA in New York City.

Do this instead: Be organized.
Much of the information you need to complete your tax return (W-2s and 1099s, for example) is mailed to you by the end of January, and your return from last year can help fill in any gaps. Tax pros also especially want clients to do three things, according to the NAEA survey: Use separate bank accounts for personal and business funds, keep your receipts in case you're audited, and track those business miles.

Why it's annoying: Your tax preparer didn't write the tax code; that's the government's job. Also, everybody's tax situation is different, which is probably why a whopping 70% of the tax pros in the NAEA survey said they wish people would stop expecting to get the same refund that a neighbor, cousin or colleague received.

Do this instead: Ask your tax pro to explain your situation, and avoid trying to self-diagnose. "People can go on the internet and get answers to questions, which may not always be correct information," Thompson says. "It makes them think they have a certain amount of knowledge on the subject, and maybe they don't."

Here's some other stuff tax pros want people to know, according to the NAEA survey:

Tina Orem is a writer at NerdWallet. Email: torem@nerdwallet.com. The article "How to Keep Your Tax Preparer From Hating You" originally appeared on NerdWallet.
https://www.nasdaq.com/article/how-to-keep-your-tax-preparer-from-hating-you-cm918937
Redistributes gridpoints within the unit sphere.

#include <SpecialMobius.hpp>

A special case of the conformal Mobius transformation that maps the unit ball to itself. This map depends on a single parameter, mu \( = \mu\), which is the x-coordinate of the preimage of the origin under this map. This map has the fixed points \(x=1\) and \(x=-1\). The map is singular for \(\mu=1\), but we have found that this map is accurate up to 12 decimal places for values of \(\mu\) up to 0.96.

We define the auxiliary variables

\[ r := \sqrt{x^2 + y^2 + z^2}\]

and

\[ \lambda := \frac{1}{1 - 2 x \mu + \mu^2 r^2}\]

The map corresponding to this transformation in Cartesian coordinates is then given by:

\[\vec{x}'(x,y,z) = \lambda\begin{bmatrix} x(1+\mu^2) - \mu(1+r^2)\\ y(1-\mu^2)\\ z(1-\mu^2)\\ \end{bmatrix}\]

The inverse map is the same as the forward map with \(\mu\) replaced by \(-\mu\).

This map is intended to be used only inside the unit sphere: a point inside the unit sphere maps to another point inside the unit sphere. The map can have undesirable behavior at certain points outside the unit sphere: it is singular at \((x,y,z) = (1/\mu, 0, 0)\) (which is outside the unit sphere since \(|\mu| < 1\)). Moreover, a point on the \(x\)-axis arbitrarily close to the singularity maps to an arbitrarily large value on the \(\pm x\)-axis, where the sign depends on which side of the singularity the point is on.

A general Mobius transformation is a function on the complex plane of the form \( f(z) = \frac{az+b}{cz+d}\), where \(z, a, b, c, d \in \mathbb{C}\) and \(ad-bc\neq 0\). The special case used in this map is the function \( f(z) = \frac{z - \mu}{1 - z\mu}\), which has the desired properties: it maps \(z=\mu\) to the origin and leaves \(z=\pm 1\) fixed. The three-dimensional version of this map is obtained by rotating the disk in the plane about the x-axis. This map is useful for performing transformations along the x-axis that preserve the unit disk.
A concrete example of this is in the BBH domain, where two black holes with a center of mass at \(x = \mu\) can be shifted such that the new center of mass is located at \(x = 0\). Additionally, the spherical shape of the outer wave zone is preserved and, because this is a Mobius map, the spherical coordinate shapes of the black holes are also preserved.
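The formulas above translate directly into code. The following is a plain-Python sketch for illustration (SpecTRE itself implements this map in C++); it checks the three properties stated in the documentation: the preimage of the origin is \((\mu, 0, 0)\), \(x = 1\) is a fixed point, and the inverse is the same map with \(\mu \to -\mu\).

```python
def special_mobius(x, y, z, mu):
    """Forward SpecialMobius map.

    Computes lambda * (x(1+mu^2) - mu(1+r^2), y(1-mu^2), z(1-mu^2))
    with r^2 = x^2 + y^2 + z^2 and lambda = 1 / (1 - 2*x*mu + mu^2 * r^2).
    """
    r2 = x * x + y * y + z * z
    lam = 1.0 / (1.0 - 2.0 * x * mu + mu * mu * r2)
    return (lam * (x * (1.0 + mu * mu) - mu * (1.0 + r2)),
            lam * y * (1.0 - mu * mu),
            lam * z * (1.0 - mu * mu))


mu = 0.25
# The preimage of the origin is (mu, 0, 0) ...
print(special_mobius(mu, 0.0, 0.0, mu))   # ≈ (0, 0, 0)
# ... and x = +1 is a fixed point.
print(special_mobius(1.0, 0.0, 0.0, mu))  # ≈ (1, 0, 0)
# The inverse is the same map with mu replaced by -mu.
p = (0.2, 0.1, -0.3)
q = special_mobius(*p, mu)
print(special_mobius(*q, -mu))            # ≈ p
```

Note that, as the documentation warns, this sketch is only meaningful for points inside the unit sphere and \(|\mu| < 1\); near \((1/\mu, 0, 0)\) the denominator of \(\lambda\) vanishes.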
https://spectre-code.org/classdomain_1_1CoordinateMaps_1_1SpecialMobius.html
/*
 * idle.c -- pause code for fetchmail
 *
 * For license terms, see the file COPYING in this directory.
 */

#include "config.h"
#include <stdio.h>
#if defined(STDC_HEADERS)
#include <stdlib.h>
#endif
#if defined(HAVE_UNISTD_H)
#include <unistd.h>
#endif
#include <signal.h>
#include <errno.h>
#include <sys/time.h>

#include "fetchmail.h"
#include "i18n.h"

volatile int lastsig;		/* last signal received */

/*
 * The repeating interval timer re-delivers SIGALRM every 5
 * seconds, until it is certain, that the critical section (ie., the window)
 * is left.
 */
#if defined(STDC_HEADERS)
static sig_atomic_t alarm_latch = FALSE;
#else
/* assume int can be written in one atomic operation on non ANSI-C systems */
static int alarm_latch = FALSE;
#endif

RETSIGTYPE gotsigalrm(int sig)
{
    signal(sig, gotsigalrm);
    lastsig = sig;
    alarm_latch = TRUE;
}

RETSIGTYPE donothing(int sig) {signal(sig, donothing); lastsig = sig;}

int interruptible_idle(int seconds)
/* time for a pause in the action; return TRUE if awakened by signal */
{
    int awoken = FALSE;

    /*
     * With this simple hack, we make it possible for a foreground
     * fetchmail to wake up one in daemon mode.  What we want is the
     * side effect of interrupting any sleep that may be going on,
     * forcing fetchmail to re-poll its hosts.  The second line is
     * for people who think all system daemons wake up on SIGHUP.
     */
    signal(SIGUSR1, donothing);
    if (!getuid())
        signal(SIGHUP, donothing);

#ifndef EMX
#ifdef SLEEP_WITH_ALARM
    {
        struct itimerval ntimeout;

        ntimeout.it_interval.tv_sec = 5;	/* repeat the alarm every 5 secs */
        ntimeout.it_interval.tv_usec = 0;
        ntimeout.it_value.tv_sec  = seconds;
        ntimeout.it_value.tv_usec = 0;

        siginterrupt(SIGALRM, 1);
        alarm_latch = FALSE;
        signal(SIGALRM, gotsigalrm);	/* first trap signals */
        setitimer(ITIMER_REAL,&ntimeout,NULL);	/* then start timer */
        /* there is a very small window between the next two lines */
        /* which could result in a deadlock.  But this will now be */
        /* caught by periodical alarms (see it_interval above) */
        if (!alarm_latch)
            pause();
        /* now stop the timer */
        ntimeout.it_interval.tv_sec = ntimeout.it_interval.tv_usec = 0;
        ntimeout.it_value.tv_sec  = ntimeout.it_value.tv_usec = 0;
        setitimer(ITIMER_REAL,&ntimeout,NULL);
        signal(SIGALRM, SIG_IGN);
    }
#else
    {
        struct timeval timeout;

        timeout.tv_sec = run.poll_interval;
        timeout.tv_usec = 0;
        do {
            lastsig = 0;
            select(0,0,0,0, &timeout);
        } while (lastsig == SIGCHLD);
    }
#endif
#else /* EMX */
    alarm_latch = FALSE;
    signal(SIGALRM, gotsigalrm);
    _beginthread(itimerthread, NULL, 32768, NULL);
    /* see similar code above */
    if (!alarm_latch)
        pause();
    signal(SIGALRM, SIG_IGN);
#endif /* !EMX */

    if (lastsig == SIGUSR1
            || ((seconds && !getuid()) && lastsig == SIGHUP))
        awoken = TRUE;

    /* now lock out interrupts again */
    signal(SIGUSR1, SIG_IGN);
    if (!getuid())
        signal(SIGHUP, SIG_IGN);

    return(awoken ? lastsig : 0);
}

/* idle.c ends here */
http://opensource.apple.com//source/fetchmail/fetchmail-6/fetchmail/idle.c
Hi, I have the following code snippet from the MvcMusicStore tutorial:

public class StoreController : Controller
{
    //
    // GET: /Store/
    public string Index()
    {
        return "Hello from Store.Index()";
    }

    //
    // GET: /Store/Browse
    public string Browse()
    {
        return "Hello from Store.Browse()";
    }

    //
    // GET: /Store/Details
    public string Details()
    {
        return "Hello from Store.Details()";
    }

    public string Details(int id)
    {
        string message = "Store.Details, ID = " + id;
        return Server.HtmlEncode(message);
    }
}

When I called the action /store/details/2 I got:

The current request for action 'details' on controller type 'StoreController' is ambiguous between the following action methods:
System.String Details() on type MvcMusicStore.Controllers.StoreController
System.String Details(Int32) on type MvcMusicStore.Controllers.StoreController

According to the tutorial, this should work. I am using MVC2. TIA

Hi group, I have a generic question about MVVM in WPF. I downloaded the MVVM toolkit from CodePlex and the tutorial is great, but I'm wondering which is the best way of handling more realistic scenarios involving more editing. Say I have a totally fake scenario like this:

1) MODEL: Let's assume a Guy class like:

public class Guy
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    ...
}

Also, let us add a Team class containing a name, some guys, and a dictionary with key=room number and value=office name for the offices of that team:

public class Team
{
    public string Name { get; set; }
    public List<Guy> Guys { get; }
    public Dictionary<int,string> Rooms { get; }
    ...
}

2) VIEWMODEL: We wrap the Guy in a VM like:

public class GuyViewModel : ViewModelBase
{
    private readonly Guy _guy;

    public GuyViewModel(Guy guy) { _guy = guy; }

and we expose its properties using OnPropertyChanged notifications like:

    public string FirstName
    {
        get { return _guy.FirstName; }
        set { _guy.FirstName = value; OnPropertyChanged("FirstName"); }
    }
    ...

This way a WPF-based view can use databinding to link to Guy's data.

Now for the Team: our VM would require to expose a couple of editab
Say I have a totally fake scenario like this: 1) MODEL: Let's assume a Guy class like: public class Guy {ÃÂ public string FirstName {get;set;}ÃÂ public string LastName {get;set;}ÃÂ ...} also, let us add a Team class containing a name, some guys and also a dictionary with key=room number and value=office name for the offices of that team: public class Team {ÃÂ public string Name {get;set;}ÃÂ public List<Guy> Guys {get;}ÃÂ public Dictionary<int,string> Rooms {get;}ÃÂ ...} 2) VIEWMODEL: We wrap the Guy in a VM like: public class GuyViewModel : ViewModelBase {ÃÂ private readonly Guy _guy;ÃÂ ÃÂ public GuyViewModel(Guy guy) { _guy = guy; }ÃÂ and we expose its properties using OnPropertyChanged notifications like: ÃÂ public string FirstNameÃÂ {ÃÂ ÃÂ get { return _guy.FirstName; }ÃÂ ÃÂ set { _guy.FirstName = value; OnPropertyChanged("FirstName"); }ÃÂ }ÃÂ ...ÃÂ ÃÂ This way a WPF-based view can use databinding to link to Guy's data.ÃÂ ÃÂ Now for the Team: our VM would require to expose a couple of editab Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
http://www.dotnetspark.com/links/29767-newbie-question-on-mvc.aspx
sdcardio – Interface to an SD card via the SPI bus

- class sdcardio.SDCard(bus: busio.SPI, cs: microcontroller.Pin, baudrate: int = 8000000)

SD Card Block Interface

Controls an SD card over SPI. This built-in module has higher read performance than the library adafruit_sdcard, but it is only compatible with busio.SPI, not bitbangio.SPI. Usually an SDCard object is used with storage.VfsFat to allow file I/O to an SD card.

Construct an SPI SD Card object with the given properties.

Note that during detection and configuration, a hard-coded low baudrate is used. Data transfers use the specified baudrate (rounded down to one that is supported by the microcontroller).

Example usage:

import os

import board
import sdcardio
import storage

sd = sdcardio.SDCard(board.SPI(), board.SD_CS)
vfs = storage.VfsFat(sd)
storage.mount(vfs, '/sd')
os.listdir('/sd')

count(self)

Returns the total number of sectors. Due to technical limitations, this is a function and not a property.
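Because SD cards are exposed as 512-byte sectors, the value returned by count() translates directly into card capacity. A small host-side sketch of that arithmetic (the helper name and the sector counts are made-up illustrations, not read from real hardware):

```python
SECTOR_SIZE = 512  # SD cards are addressed in 512-byte sectors


def capacity_mib(sector_count):
    """Convert a sector count (as returned by SDCard.count()) into whole MiB."""
    return sector_count * SECTOR_SIZE // (1024 * 1024)


# 2048 sectors * 512 bytes = 1 MiB exactly
print(capacity_mib(2048))       # 1
# A hypothetical card reporting 2,000,000 sectors holds 976 MiB (rounded down)
print(capacity_mib(2_000_000))  # 976
```

On the device itself you would pass sd.count() instead of a literal, after constructing the SDCard object as in the example above.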
https://circuitpython.readthedocs.io/en/6.0.x/shared-bindings/sdcardio/index.html
WRITTEN ARGUMENT IN DEFENCE OF THE ACCUSED IN FIR No. 440/1996 PS ROOP NAGAR, DELHI NORTH In the matter of:- State V/s Ayodhya Prasad Tripathi DOH: Oct. 11, 2012 Hon’ble sir, I, Ayodhya Prasad Tripathi, the accused in the above case, state as here under, 1. That in my answer to the charge, dated January 27, 1998, on record, I had already accepted that I had edited, printed on my printer and distributed THE pamphlet, ‘KIRAYA KANOON’ among the public, exercising my right as provided in the Article 19 of the Indian Constitution and sections 102 and 105 of the Indian Penal Code. For the reason of brevity, I am not repeating them. The same may kindly be accepted as Ex.D1 of this affidavit. This Hon’ble Court has uselessly wasted her time since 1997 A.D. in proving the very fact. Judiciary is guided by the Indian Constitution. Its compiler wanted to burn it.. 2. That I have been accused for committing crime u/s 153 section 153.” May I know what is Azaan and Kutras preached from mosques? 3.. A copy of the judgment is being attached with this WRITTEN ARGUMENT and is marked as (Annexure-s1).”. 4. That the section 196 of the CrPC proves impotency of judges, who have no choice than submitting to the rulers (Now Sonia). Imams incite communal hatred, commit acts prejudicial to maintenance of harmony and abuse and insult the deities of judges by shouting Azaan. During discussions, persons with jurisprudence claim that Britons love justice. Yet Police could and cannot prosecute Imams and judges cannot try Imams for their (Imams) crime of inciting communal hatred u/s 153/295 of the Indian Penal Code. Indian Penal Code was enacted in 1860 A.D. Since then Muslim Imams/Maulavies preach, verses of Koran, which are prejudicial to maintenance of harmony and incite genocide of non-Muslims from Mosques on the ground of faith. Still the judges are helpless. I have reproduced the relevant sections above. 
According to sections quoted above, the Right of private defense of body and property of Vedic Panthies commenced since 1860 A.D. and continue until Christianity and Islam are eradicated! 5. That the sections 97, 99, 102 and 105 of the Indian Penal Code, quoted above, grant one right to private defence. Section 99 is not applicable in the cases of death and grievous hurt. Christianity and Islam have been detained in India for eradicating Vedic Sanaatan Dharm. Law enforcing agency, with which one may ask for recourse, is helpless as per section 196 of the Criminal Procedure Code. Christians devoured Red Indians of America and their Maya culture. Now, in collusion with Sonia, they are after black Indians and Vedic Sanaatan Dharm of India. We have potential threat to our life and property from Christianity and Islam. We are exercising our right to private defense. We are not committing any offence as per section 96 of the Indian Penal Code. Hamid Ansari, the Vice President of India, has been commanded by Allah to slay non-Muslims [(Koran, 2:191-194 and 8:39) read with Article 29(1) of the Indian Constitution] and Antonia Maino alias Sonia Gandhi has been commanded by Jesus to slay those, who do not accept Jesus their king. [(Bible, Luke 19:27) read with Article 29(1) of the Indian Constitution]. Hamid has taken oath to preserve, protect and defend the Indian Constitution as per Article 60 and Governor of States has taken oath to preserve, protect and defend the Indian Constitution as per Article 159 of the Indian Constitution. Judges have taken oath to uphold the Indian Constitution as per Schedule III forms IV and VIII. Both, Hamid and governors are nominees of Sonia. Sonia, along with legislators and ministers, has taken oath of faith and allegiance in the Indian Constitution as per Schedule III, in various forms. 
The Article 29(1) of the Indian Constitution has provided unfettered fundamental right to Christians and Muslims to conserve their culture of plunder, murder, rape of women and conversion. 6. That in this instant case prosecution has miserably failed to prove her case. The complainant home secretary was not examined. Even there is no Muslim or Christian witness, whose religious feelings were hurt or animousity, was aroused. No one proved sanction u/s 196 of the CrPC. In contrast Muslims and Christians are hurting our feelings and declaring openly animosity on the ground faith. However, we cannot register complaint against Christians and Muslims for want of sanction u/s 196 of the CrPC. The Indian Constitution has, with its compilation, snatched the chastity of women, (Bible, Isaiah 13:16) and (Koran 23:6), the right to worship [Article 29(1) of the Indian Constitution and right to property (Koran, 8:1, 69 and 41) Article 39(c) of the Indian Constitution] from citizens since November 26, 1949. President of hapless India and the State Governors are armed with Sections 196 and 197 of the Criminal Procedure Code and section 80 of the Civil Procedure Code to defend corrupt assassins and rapists to ensure genocide of Aryans. Judiciary has held Koran and Bible religious books. Armed with the section 196 of Criminal Procedure Code that restrains citizens, judges or police from taking any action against Azaan and Namaaz u/s 153 and 295 of the Indian Penal Code, President and Governors are helplessly protecting assassins, robbers and rapists to eradicate Aryans as such are defenders of criminals. No judge can take cognizance against Azaan, Koran and Bible under the very section 196 of the CrPC. Police THAT cannot arrest Muslim Imam, who shouts Azaan and thus insults Ishwar and Vedic culture, are deputed to protect Muslims, who abuse non-Muslim faiths and incite communal violence. No Police can be prosecuted under the section 197 of the Criminal Procedure Code. 
Citizens of India have legal right to take back fundamental right to property and save their lives. However, no individual can exercise the rights. Aryavrt Government is here to protect human race from Hamid and Sonia. 7. That the above comments may horrify this Hon’ble court, World Police Obama, Pranab Dada, Antonia, Hurriat, Hammas, and dreaded Democracy, they are left without any excuse. They must note that Marx, Allah and Jehovah are their own enemies. They have initiated war against humanity and even against Muslims and Christians since their inception. 8. That although I explained, in my reply to charge, dated January 27, 1998, which is Ex. 1D, on record, that I am exercising my right of private defense, the presiding officer of the judiciary ignored my submissions for the fear of losing her job. The P.O. has no choice. The Indian Constitution has been compiled to eradicate Vedic Sanaatan Dharm. What for is section 196 of the Criminal Procedure Code compiled? My direct questions: 9. That section 196 of the Criminal Procedure Code has been compiled to abridge the rights granted under sections 102 and 105 of the Indian Penal Code and insure eradication of Vedic Sanaatan Dharm. Governors and public servants have no choice. relinquish one’s dignity, right of life and property and freedom or one’s sustenance, power and pelf. I seek answer of my direct questions: Article 29(1) of the Indian Constitution grants unfettered fundamental right to Christians and Muslims to conserve their cultures of Jihad and Mission. Before occupying offices President (Article 60 of the Indian Constitution) and Governors (Article 159 of the Indian Constitution) take oath to preserve, protect and defend the Indian Constitution, which has granted unfettered fundamental right to Christians and Muslims to slay Hindus. Since January 26, 1950 till to date, who questioned the authority of President of India and Governors, who are preserving, protecting and defending our killers and robbers? 
Should they remain in office? And if they remain with such powers can Vedic Sanaatan Dharm survive? We had been promised Ram Rajya and freedom of faith by Pakpita Gandhi. Christianity and Islam force even their own followers to relinquish their freedom of faith. self rule, which Gandhi heralded through Ahimsa? There is covenant between Government and citizens that she (government) would protect citizens' properties. Where is the moral in Article 39(c) of the Indian Constitution that snatches properties and means of production of the citizens? Where is the moral in omitting Article 31 of the Indian Constitution? If spoils of war belong to Allah, (Koran 8:1), why Allah is not robber? If booty belongs to Jews, 20:14 why are they not robbers? Women would either be raped by Muslims (Koran 23:6) or by Christians (Bible, Isaiah 13:16). Where is the dignity of women promised by the preamble of the Indian Constitution? Why should rapists survive on the earth? People, who deprive the citizens from their freedom of religion, must be slain as per their own dogmas. Aryavrt Government wants to award capital punishment to evangelists. Sonia Government must be removed to save humanity. 10. That we, the activists of Aryavrt Government, are against Christianity and Islam. With the connivance of the Article 29(1) of the Indian Constitution this Hon’ble Court cannot try Imaams shouting Azaan, from their mosques preaching genocide, love Jihad, demolishing temples, refusing to recite Vande Matram, exercising Talak and demanding for Shariyat law and killing cows. What this Hon’ble court would do if a Qazi comes in this very court and declares that the P.O. must vacate chair for Qazi as per Shariyat. Where is the law in India to stop Muslims? However, in contrast, there is law that Muslims have unfettered fundamental right to conserve their culture (SHARIYA). 11. That, Mosques are training centers for hating, inciting communal hatred and abusing faiths and deities of non-Muslim faiths. 
Mosques have no right to survive. 12. That Police has no choice. She is law enforcing agency. The Indian Constitution is predator and pirate. Delhi Police is being blackmailed by LG under section 197 of the Criminal Procedure Code, who has taken oath under Article 159 of the Indian Constitution to preserve, protect and defend Christianity and Islam. Christianity and Islam have divine command to eradicate Vedic Sanaatan Dharm. 13. That? Azaan is blasphemy of our Ishwar. 14. That. 15. That Muslims and Christians are fools. They are committing crime with the human race in lust of booty and sex. They are accepting servilities of the prevailing rulers. They are being cheated by their clerics and rulers in lieu of booty and sex. 16. That Muslims are killers of non-Muslims. Muslims are robbers and rapists under divine commands. We, Citizens of India, have right of Private Defence under sections 102 and 105 of the Indian Penal Code. Citizens of India have been promised liberty. However, Islam is submission to Allah, hence there is no liberty in Islam. Qaba is booty. (Koran, Bani Israel, 17:81) Azaan is insult to gods of non-Muslims and incite communal hatred. Azaan attracts prosecution under Sections 153 and 295 of the Indian Penal Code. Koran is nothing but a political manual for enslaving humanity with terror. (Koran 8:39; 9:5; 33:61 etc). 17. That the dreaded democracy of Bharat could not produce a single President/PM for this country and nation is constrained to import cow and man-eater super PM Antonia Maino and her refugee PM puppet Manmohan! Both Hamid Ansari, the Vice President of India, and Antonia Maino have been commanded by Allah and Jesus to slay us. Why should Aryans tolerate such Democracy? 18. That has humanity no shame for being ruled by dreaded criminals supported by the Indian Constitution, Koran and Bible? 
Media has no right to conceal the true face of Antonia Maino and Hamid Ansari, the Vice President of India, and their dreaded guides Koran and Bible aided and abetted by the dreaded Indian Constitution and shielded by Section 196 of the Criminal Procedure Code. 19. That Islamist Khomeini stated that the "Koran says: kill, imprison! Why are you (Muslims) only clinging to the part those talks about mercy? Mercy is against God"; and, "We need a Khalifa (leader of Islamic state) who would chop hands, cut throat, stone people etc."{D48F41F9-D9B7-4BD7-A45F-7A21E6126A27} 20. That. 21. That Maududi insisted that non-Muslims, although free to practice their “false, man-made way,” have “absolutely neither.” 22. That support of this Hon’ble Court so that the immunity granted to Imaams under section 196 of the Criminal Procedure Code could be withdrawn. For further details browse, URL; 23. That Aryans are still the slaves of British Crown. Slaves have no civil rights. India's Dominion Status, [Article 6(b)(ii) of the Indian Constitution], Section 3(6) of the General Clauses Act and being member of Common Wealth are proofs. Thus, basic cause, i.e. protection of lands, lives, ladies, liberties, and labours of the subjects, for whom the governments are invented and implemented, is getting. dare to protest. We are consigned in the jaws of two criminal killing cultures named Christianity and Islam through the Article 29(1) of the very Indian Constitution. Humanity is victim of Allah, Jesus the only son of Jehovah and Democracy. Either one does not worship Allah alone (Koran 21:98) or does not accept Jesus one’s ruler (Bible, Luke 19:27), as such Christians and Muslims are religiously and constitutionally right in murdering their common enemy Aryans because Aryans do not want Jesus to be their king. They waged war since 1857 against British rule and they do not worship Allah alone. 
Christians and Muslims have been provided unfettered fundamental right to conserve their very culture of slaughter vide Article 29(1) of the Indian Constitution. 24. That there are three valid reasons with me to exercise my right of private defense. One. The Indian Constitution has been compiled by the Britons' Congress Party in retaliation and to settle vendetta for opposing British rule amongst other reasons. Britons have to herald ‘Armageddon’ through Sonia. As long as the Indian Constitution, Koran and Bible and their dogmas are honoured in Bharat, we, Aryans, cannot survive. Nay! Either one does not worship Allah alone (Koran 21:98) or does not accept Jesus one’s ruler (Bible, Luke 19:27), as such Christians and Muslims would kill each other and humanity would become extinct like dinosaurs. We, Aryans, are Kafirs and Satans. Now, we have secular Hamid Ansari, the Vice President of India, who has taken oath to help Muslims conserve their cultures commanded by Allah to slay us. Our crime? We are idolaters (Koran 9:5). We have another imported and planted secular cow and man eater thief of Aamer Fort treasury Antonia Maino as super PM, who has been commanded by Jesus to slay us. Our crime? We do not accept Jesus our king. (Bible, Luke 19:27). Sonia is here to establish Jesus’ Empire through Armageddon for his second coming to rule upon the world. If no Armageddon, then no second coming of Christ to rule upon the earth! Sonia is obsessed with this devilish thought of human carnage so staggering that the loss of human life of the 1st and the 2nd World Wars combined would seem like scar on human body. We have a notorious predator and pirate Indian Constitution, the Article 29(1) of which has already granted unfettered fundamental right to both killers named Hamid Ansari, the Vice President of India, and Antonia Maino, to help conserve the Christianity and Islam cultures of genocide, i.e. 
to slay us, plunder and rape of women to secure their seat in heavenly brothel.. Does the Hon’ble court not feel horrified? Plundering the possessions of the citizens is not considered crime but has become integral part of the duties of public servants. It isanab Dada and Antonia Maino and their tools and vassals. Who is Sonia? Bible says, . (Like Hiranyakashyap and his Daitya culture)". Where is freedom of faith guaranteed by the preamble of the Indian Constitution?). 25. That there is no Democracy in India. This is a Government of Sonia, by Sonia and for Sonia. Don't agree? Here are the evidences:- 26. That President the Governors rule. Yet they can topple even any non-congress Government elected by the people of the state.. 27. That one may remember! We are not buying that this is an evil world and that we all are sinners. We are also not buying that as we are not baptized, we will go into Hell and burn forever. Also, we are not buying that if we are baptized we are given a promise that we will go to Heaven of fools, wherefrom Adam was chased away (Bible, Genesis 2:17) and live with the bastard and ghost Lord forever, regardless of what we do. We are not buying that because we are baptized all our sins will be forgiven. We believe in logic and Christians believe in faith. 28.. 29. That) 30..” 31.?” 32.. This is the peace. 33. That, does this Hon’ble Court. 34. That there is nothing peaceful about Islam. Islam’s dogmas breed ruthless killers. At her direction, Muslims will continue to terrorize the world until non-Muslims treat either generosity or ransom (based upon what benefits Islam) until the war lays down its burdens. Thus are you commanded by Allah to continue carrying out Jihad against the unbelieving infidels until they submit to Islam.” Koran 47:4 35. That this Hon’ble court may forget me and think as to how would the P.O. protect her own dignity, life and property? Two:? 
(Bible, dies; support of this Hon’ble Court to slay these people for relinquishing their faiths as per their own dogmas? Three:. We seek support of this Hon’ble Court in killing these human killers. 36.. 37.. 38. That. 39. That the LG, Government of NCT at Delhi, fails to apprehend the danger of survival of Vedic Sanaatan Dharm and India posed by killer cultures of Christianity and Islam. However, she was quick to apprehend the incitement of communal hatred in my pamphlet ‘KIRAYA KANOON’. 40. That to enslave bull, peasants sterilize the bull and to enslave humanity prophets Moses and Muhammad got their followers circumcised. Prophets died and left behind their legacies for clerics and rulers. 41. That humanity is victim of the frauds and pettifoggery committed by those public servants, who are supposed to serve the people. I am quoting the ex President of USA, “"The nine most terrifying words in the English language are: "I'm from the government and I'm here to help." Ronald Reagan.” 42. That Qaba is booty and belongs to idolater Aryans. The same. 43. That for bomb explosions in mosques had exercised one's right of Private Defense provided by Section 102 of the Indian Penal Code. Anything done in the exercise of Private Defense is no crime as per Section 96 of the Indian Penal Code. 44. That worse persecution was perpetrated by ATS police against our Jagatguru Shri Amritanand Ji. Beef was pushed in his mouth. Spinal cord of Sadhvi Pragya has been broken. Aseemananda was tortured. The leg of Col. Purohit was broken. We are victims of Jesuit, cow and man eater Sonia for our ignorance and want of choice for our public servants. 45. That under 50th criminal case I am accused in Malegaon mosque blast conspiracy. We had demolished Babri structure. Nay! I had submitted affidavit in the hand of thief Manmohan Singh Librhan on Jan 15, 2001 that I alone must be tried for the demolition of the Babri structure. The affidavit was stolen by M.S. Librhan, in lieu of bribe of Rs. 
10 crores from Advani and Sonia both, and is not on record. 46. That we filed writ petition 15/1993 for ban of Koran in apex court. It was dismissed as withdrawn. I had published and distributed two handbills 'MUSALMANON BHARAT CHHODO' and 'ISHWAR ALLAH KAISE BNA'. 2 cases FIRs 78/1993 and 137/1993 (Ext 1, on record). These Hon’ble Court may peruse my following blogs, 47. That we seek Hon’ble Court’s support to retaliate. If Hon’ble Court fails to support us, humanity would finish like dinosaurs. Annexure-s10 Foundation of fraud 48. That Aryavrt Government is alone that is opposing both dreaded cultures and regimens of Christianity and Islam. This is a big task and we need huge funds for protection of human race. If one wishes to survive, may join Vedic Panth and support Aryavrt Government, else keep ready for doom. 49. That Muslims and Jews may note! Vedic culture is their Buffer. They are alive because Vedic culture could not be eradicated. No sooner Vedic culture would be eradicated their Allah and Jehovah would be eradicated by Christians within short time. Muslims and Christians may relinquish their criminal faiths if they wish to protect human race. a. That we, Aryans, are non-believing people. Rulers and judges are left without choice. They have to take oath to preserve, protect and defend, uphold and depose faith in the very Article 29(1) of the Indian Constitution and laws of land. Commander-in-Chief of army - President Pranab Dada of hapless India and his Governors has taken oath to preserve, protect and defend the predator and pirate Indian Constitution. [(Articles 60 and 159 of the Indian Constitution)]. All are puppets of Sonia. So long as the Indian Constitution, Koran and Bible survive, human race cannot survive.) ----------------------- Constituent Assembly Debates, Vol. VIII, pp. 
269-355 “On 28 August 1947 Sardar Patel again spoke to the Constituent Assembly replying to the amendment motions moved in the Constituent Assembly in favour of separate electorates and reservation on the basis of religions: … You have got a separate State and remember, you are the people who were responsible for it, and not those who remain in Pakistan. You led the agitation. You got it. What is it that you want now?.... …” 50. That on the one hand one has so called democracy of Briton's Congress and on the other hand Aryavrt Kingdom. In the so called democracy even Christian and Muslim, who are slaves, citizens cannot have their own regimen, property and capital [Indian Constitution, DIRECTIVE PRINCIPLES OF STATE POLICY, Article 39(c) and cannot worship a god of one’s choice (Azaan and Koran 3:19) read with Article 29(1) of the Indian Constitution. Aryavrt Government undertakes to provide even Christian and Muslim. 51. That with the compilation of the dreaded predator and pirate Indian Constitution, Christians and Muslims has been granted unfettered fundamental right to conserve their cultures. Citizens of India have lost their right of life and liberty of faith vides Article 29(1) and right upon their properties and means of production vides Article 39(c) since November 26, 1949. Election Commission extorts citizens’ consent of losing their right of life and property.. The chastity, honour and dignity of no woman are safe. (Bible, Isaiah 13:15 and 16) (Koran 4:24; 23:6; 33:50 and 70:30). 52. That the sorry state of affair is that one does not feel horrified that one has been reduced to sheep of such a blatant criminal cow and man eater Jesus and Zimmi of Allah. Sheep keep no family and wear no clothes. Accordingly girls are relinquishing their clothes. We want to abolish the immunity granted to Imams under section 196 of the Criminal Procedure Code. Mosques are broadcasting stations for inciting hatred. 
We are bombing mosques for our private defense under sections 102 and 105 of the Indian Penal Code. Aryavrt Government is trying her best to eradicate Christianity and Islam. Both have been detained in India to annihilate Vedic Sanaatan Dharm. In lieu of abusing one’s faith and Ishwar, through Azaan, the apex court has issued writ to pay salaries to Imaams amounting to Rs. 10,000 crores annually. (AIR 1993 SUPREME COURT 2086). A copy of the judgment is being attached with this WRITTEN ARGUMENT and is marked as (Annexure-s2). 53. That depose faith in Jehovah (Bible, Isaiah 13:15 and 16) ravish any alien women of your choice and depose faith in Allah (Koran, 23:6 and 70:30) rape any alien women of your choice and remain scot-free. No sooner one converts into either in Islam, or Judaism, or Christianity or Socialism, the criminal activities of murder, plunder and rape of women ceased to be considered crimes instead these crimes turn into the source of sustenance and heaven after death. 54. That. Which freedom of faith, the preamble of the Indian Constitution promises? 55. That in fact, the Koran and Bible were written to justify some of the most ungodly and immoral behaviors the world has ever known. (Allah) even says that peaceful Muslims are “the vilest of creatures” and that hell’s hottest fires await them. [Koran 16:70] If you are a peace-loving Muslim, may note, your Allah hates you, because you fail to murder, plunder and rape. You Muslims have no shame that you submit to dreaded incest monger, (Koran 33:37-38) assassin and robber Allah. (Koran, 8:1 and 17). Servility 56.. These notorious frauds named prophets have fabricated new ways of making the whole humanity their slaves irrespective of faith. These prophets have converted even their own followers, into their slaves for they have made their brainchildren gods unapproachable. The books named Bible and Koran of these gods have been compiled to justify the most ungodly behaviour. 57. 
That God of Prophets is criminal of a class. Although one has three forums against even dreaded criminal like Doud Ibrahim viz. Society, police and judiciary, one has no forum to complain against gods! 58. That provide right to property (Manusmriti 8:308) and liberty of faith (Gita 7:21) to human race and save honour and dignity of women. (Bible, Isaiah 13:15 and 16) (Koran 4:24; 23:6; 33:50 and 70:30). 59. That Does one know as to who were Vyas, Valmiki and Vishwamitra? Still Brahmins revere them. We Brahmins have accepted Gautam Buddha incarnation of Vishnu and recite it in our SANKALPA before performing SANDHYA thrice a day. We are blamed for untouchability for reasons of infections. A syringe of 1 MM causes infection and refusing eating in 300 MM plate attracts crime vides Article 17 of the Indian Constitution. I have a question. Can one eat with one’s own hand without washing his hand after toilet? Does one. Look! Observing cleanliness attracts hatred between lower class and upper class, however, commands of genocide by Jesus and Allah (Bible, Luke 19:27) and (Koran 8:17) do not attract hatred in the very lower class! I have a schedule caste friend named Pradeep Gautam. Gautam is his Gotra. My maternal family, I mean my Nanihal, belong to Gautam Mishra of Saryupari clan of Brahmins. How Brahmins, practicing hatred with scheduled castes, allowed Pradeep remain Gautam?? As per their own scriptures viz. Koran and Bible, they must be hanged till death. 60. That has we Aryans ever called for jihad against ardent Muslims? Have we said anywhere that Muslims should recant their faith, pay jizyah (protection money), else they should be killed? Where is imposition from us? Muslims fail to see that Muhammad imposed his diabolic faith on others through his numerous ghazwas (raids), that human beings were massacred, raped and forced into conversion, but Muslims see opposition against Azaan and mosque Muslims read about the recent violence against the Copts in Egypt? 
Have Muslims heard of innocent Muslims in Pakistan accused of blasphemy, who are lynched by Muslim goons as well as awarded death sentence for blasphemy? 61. That now one may compare Ishwar with these Gods! While Jehovah has two brokers named Moses and Jesus, Allah has one alone named Muhammad? Anyone? 62. That." 63. That) and (Koran 2:35)}. They have no shame that they are becoming human bombs and fighting to retain their servilities of Jehovah, Jesus, Allah, rulers, clerics, dead Socialists, Communists and Democrats. Jehovah, Jesus, Allah, rulers, clerics, dead Socialists, Communists and Democrats/ rulers are rewarding both Muslims and Christians for relinquishing their liberty. Tell me, why is judiciary not reviving liberty of citizens? 64. That Allahabad High Court could not get her own orders dated 28th July, 1989 and 9th Aug, 1989 passed in CMWP 9672/1988 executed and cannot punish revenue staff for interpolation of records. Judiciary cannot punish executives i.e. dictator proletariat for violation of their own orders. This situation is due to the Section 197 of CrPC quoted above. In fact every government servant has been posted to steal the possessions of the citizens of India as per the constitutional obligation imposed upon them by the State Governors. So long as these public servants, working on behalf of Governors, extort money under duress and pass the share of the booty to Sonia alias Antonia, the Governors cannot grant sanction for prosecution u/s 197. When public servants fail to extort & share the booty, their assets become disproportionate to their known source of income. Accordingly, Governors immediately grant sanction. A copy of the orders is being attached with this WRITTEN ARGUMENT and is marked as (Annexure-s3) 65. That humanity failed to understand that Judiciary has nothing to do with justice. 'Justice' is the first casualty in judiciary. By default Judiciary has to uphold the Indian Constitution and Law. 
The Indian Constitution has been compiled to eradicate Vedic Sanaatan Dharm. [THIRD SCHEDULE OF THE Indian Constitution, Form IV and VIII read with Article 29(1) of the Indian Constitution]. How can Judiciary deviate from her own oath? 66. That whoever takes oath of the dreaded predator and pirate Indian Constitution, literally accepts that Aryans, nick named as Hindus by Muslims, meaning ‘the resident of Hindustan, thief, robber, slave, black’ and by Christians as barbaric invaders, who came from middle Asia, cannot have any nation. Aryans' civilization is killer, thief, sexy, assassin and cheat. However, Jesus, who snatches manhood from his followers to reduce. 67. That while both Christians and Muslims have been commanded by their secular gods to slay those who proselyte, (Bible, Deut. 13:6-11) and (Koran 4:89), have right to convert Hindus. They call it liberty of faith. They reduced north eastern States into Hindu minority through conversion. They are running parallel Governments in Nagaland. They hounded Riyangs out of their homes in Mizoram and got peace prizes. 68. That Azaan that incites communal hatred on the ground of religion is secular worship. Veds, Upnishads, Ramayan, Mahaabhaarat are lores of shepherds. They burnt libraries of Taxshila and Nalnda. They yearn that these books be destroyed. Nay! Bible and Koran have been granted immunity by the Judiciary. (AIR 1985 CALCUTTA HIGH COURT, 104). 69. That Ram, Krishna etc. are imagined characters. They were never born on the earth. The followers of Ram, Krishna etc. are Barbaric and savage people. In the interest of religious harmony, they must be slain. Idolaters and those, who do not accept Jesus their king, must be slain, (Koran, 9:5) beheaded, (Koran, 9:111) tortured, (Koran 8:12) insulted, (Azaan and Koran 21:98) condemned, (Koran, 17:18) stolen from, (Koran, 8:1, 69 and 41) deceived, (Koran, 4:142) captured, (Koran 4:24) humiliated (Koran, 9:29) and on and on. 
The Hadith and Sira follow in the same vein. There is no word in the English language that has the negativity of the word kafir. (Bible, Luke 19:27). 70. That temples and idols are satanic symbols. This is because from temples, Hindus recite ‘DHRM KI JAI HO. ADHRM KA NASH HO. PRANIYON ME SADBHAVNA HO. VISHVA KA KALYAN HO’ as such temples must be destroyed. (Bible Deut. 12:1-3) and (Koran, Bani Israel, 17:81 and The Prophets, 21:58). Those, who worship idols and their temples, must be slain. (Bible, Exodus/ Chapter 20 / The Ten Commandments/ Verses 3 and 5 and Luke 19:27). However, mosques are secular worship places. This is because from here Imams broadcasts ‘ALLAH ALONE CAN BE WORSHIPPED. PERSECUTION (WORSHIP OF OTHER GODS SAVE ALLAH) IS WORSE THAN SLAUGHTER. (Koran 2:191). SLAY KAFIRS. (KORAN 8:17), Public servants have been deputed to insure protection of mosques and churches and demolition of temples. Election is fraud. Duty of judiciary, President of India and Governors 71. That as long as public servants, Muslims and Christians slay Aryans, nicknamed as Hindus, usurp Aryans properties, kidnap Aryans women and rape them, shout Azaan on loud speakers, demolish Aryans' worship places, assassinate Soldiers, demand Kashmir and Nagaland, Muslims and Christians get peace prizes. In lieu of abusing one, one’s faith, one’s Ishwar and one’s culture and inciting communal hatred and ill will, while judges cannot even take cognizance of the offence committed by Muslim Imaams, for want of sanction under section 196 CrPC, the Indian judiciary issued writ to give Rs 10,000 crores (Rs 10 billion) towards salaries to Imams of mosques and Rs 400 crores (Rs 0.4 billion) as Hajj subsidies, in violation of Article 27 of the Indian Constitution. Apex Court upheld it. (Annexure-s11). Section 196 of the CrPC is being used by Governors as arm to protect Muslims and Christians to liquidate Vedic culture and eliminate Aryans. 72..’ " 73..” 74. 
That Government protects Muslims' and Christians' crimes committed under sections 153 and 295 IPC. No court can take cognizance for the crimes committed by Muslims and Christians, no police can register FIR against Muslims and Christians and no one can sue without sanction u/s 196 of CrPC. No sooner, their Koran, Bible and the predator Indian Constitution is quoted, the Governments get horrified. Sanction u/s 196 is immediately granted. In the impugned case, it is not that the quotations are not in Bible and Koran, but my crime is exposing the danger these books have posed upon humanity and of the conspiracy of Governments to eradicate Vedic culture. What happened with judges, who opposed Islam? 75. That after passing order against Koran, MM ZS Lohat of Tishazari Court resigned. 76. That after refusal to withdraw the case against Ahmed Bukhari, who burnt the Indian Constitution, the current Imam of Zama Masjid, the Delhi High Court closed the trial. (Annexure-s5a). 77. That when MM MS Rohilla, issued NBW against Imam Abdullah Bukhari, he resigned from his services. Annexure-s4 78. That even Hon’ble ACMM may lose her job, if she fails to punish me. 79. That I am telling these things, because I know that my days are numbered. I am already of 80 years. It makes me happy if I die at once instead of facing death at every moment. I feel pity for judges as they are digging their own graves for the compulsion of their power, pelf and sustenance. What would the judges give in legacy to their descendents? 80. That the official English Koran bears the stamp of the Fahd Foundation. It writes, in foot-note,.” 81. That Good Muslims are those, who read Islam’s scriptures the guide of criminals named secular Koran and secular Hadith, planned, funded, staffed, executed, and celebrated the terrorist attacks of 11th September, 2001 upon World Trade Centre and July 7, 2005 serial bombing upon London public transport systems. 
They proudly told the world about their plan — terrorize the human beings into submission and compel them their slaughter for blasphemy. 82. That.” 83. That. 84. That the simple truth is: good Muslims and Christians are bad people. Islam and Christianity convert them in to criminals. While there are plenty of “bad” Muslims and Christians, who are good people, they are as impotent as bad Nazis of Germany during Hitler’s era or bad Communists during Stalin’s era. The Koran and Bible define good and bad Muslims and Christian for us. The reasons for granting incentives Think for human races. 85. That! 86. Human races have three prevailing preconditions for their survival: 87. That only those can survive, who worship Allah alone. (Azaan and Koran 21:98). One, who worships other god, must be slain, one's lands and properties must be looted and one's women must be raped. (Koran 8:69 and 23:6). These are divine commands. It killer Azaan, Koran and Bible. The conservation of Koran and Bible culture has to be defended by the President of India as well as all the State Governors by oath. (Articles 60 and 159 of the Indian Constitution) [Article 29(1) of the Indian Constitution]. For their (rulers) sustenance, power and pelf, judges, legislatures, Governors etc. have no choice than upholding their (Muslims' and Christians') fundamental rights mentioned above. (Article 60, 159 and Schedule III of the dreaded predator and pirate Indian Constitution.). Judiciary has condoned Muslims from action u/s 153 or 295 of the Indian Penal Code. No court can sit into judgment against Koran and Bible. (AIR 1985 CALCUTTA HIGH COURT, 104). Nay! Muslim Imaams and Maulavies are getting salaries amounting to more than Rs. 10000 crores in lieu of abusing Ishwar and Vedic culture in violation of Article 27 of the Indian Constitution. 
The Indian Constitution has compelled the Indian Judiciary to accept Jehovah and Allah Gods, Bible and Koran religious books, mosques and churches worship places and Azaan call for prayer. (AIR 1985 CALCUTTA HIGH COURT, 104). Citizens are celebrating 15th August 1947, viz. the day of rape of their women, hounding them out of their mother land, vivisection of their motherland and plunder of their. Citizens are celebrating the doomsday viz. Jan. 26 since 1950, the day on which they lost their right to property, [Article 39(c) of the Indian Constitution and omitted Article 31], right of life, faith, culture and nation. [Article 29(1) of the Indian Constitution]. Has any person courage to protest the presence of such violent and murderous rulers in Bharat? Condition No. 2: 88. That only those can survive, who accept Jesus their king. (Bible, Luke 19:27) read with Article 29(1) of the Indian Constitution. This, again, is divine command and secularism. One, who opposes secularism and Bible,. Its conservation has to be defended by the President of India as well as all the State Governors by oath. [Article 29(1) of the Indian Constitution read with Articles 60 and 159 of the Indian Constitution] For sustenance, power and pelf a legislature and high court and apex court judge has to depose faith and allegiance in the dreaded predator and pirate Indian Constitution. Judiciary has condoned Christians from action u/s 153 or 295 of the Indian Penal Code. No court can sit into judgment against Bible. (AIR 1985 CALCUTTA HIGH COURT, 104). Nay! Christian Government of Mizoram got peace prize in lieu of genocide of Riangs and hounding Riangs out of Mizoram in violation of Article 27 of the Indian Constitution. Condition No. 3: 89. That no citizen can have assets, capital, land, industries, gold and mines. It has been wasted into the State since November 26, 1949 vides Article 39(c) of the Indian Constitution. 
Although UOI has opted open economy since 1992, the Constitutional stipulation prevails. Moreover, the status does not discriminate between Aryans and non Aryans. 90. That literally the dead socialism is excuse fabricated by robbers and thugs to usurp the possessions of haves. Why Judaism, Christianity, Islam and Communism/Socialism are fabricated? 91. That one should never believe and never accept religious servility for sex, booty and slave. Come to my fold and I would provide you liberty to worship a God of your own choice, says Ishwar in Gita, See Chapter 7 Shloka 21. 92. That for protection. 93. That the notorious democracy calls this status of servility, liberty of faith and secularism! Therefore, the suggestion to these followers of criminal prophets is to relinquish their prophets and come into the fold of great and omnipresent Vedic culture. 94. That suppose; all the parked money in foreign banks and scam money returns to the exchequer, then, how the common man would benefit, when 95% of the money is being siphoned off by Sonia and her tools and vassals? 95. That the answer to the present situation is abolition of the Article 29(1) and Article 39(c), revival of Article 31 of the Indian Constitution and abolition of section 197 of the Criminal Procedure Code. 96. That Aryavrt Government has been founded to revive moral values of human race. Support Aryavrt Government and join Vedic Panth if one wishes to survive with honour and dignity. Live and let live others. You have no choice. 97. That we are innovative people of India. Aryavrt Government and Abhinav Bharat have the audacity to ask why Christianity and Islam should survive on the earth. This is the big adventure that we have formed Aryavrt. Scramble for Booty 98. That, preservation, protection and defense vides Article 29(1) of the Indian Constitution and section 196 of the Criminal Procedure Code. As per the Indian Constitution and laws. 
Thus, Liberty is snatched by Judaism, Christianity, Islam and Communism/Socialism. 99. That Imaams begin abusing judges after rising from bed until going to bed. Hatred is the foundation of Islam. Judges cannot oppose! A copy of Azaan propagated from mosques by Imaams and Qaba being booty is being attached with this reply and is marked as Annexure-s8. 100. That judges are slaves of Christianity and Islam by the virtue of the dreaded predator and pirate Indian Constitution. [Article 29(1) of the Indian Constitution]. Slaves have no civil rights. Humanity is fighting protracted war for liberty. USA, U.K. and India promises liberty. However, judges have relinquished their liberty in lieu of sustenance, power and pelf. Judges have no shame? 101. That, good democrats are seculars. They take oath, defend and depose faith and allegiance in the dreaded predator and pirate Indian Constitution. They abet aid, harboranab Dada and super PM Antonia. They insure eradication of Vedic culture and rape of their women before their own eyes. They vivisect Bharat on the secular basis and still claim it unity and integrity of the nation. 102. That the legion of commentators, who portray Islam as a religion of peace, hijacked by Muslim Mujahids, may have close study that shows the same a sheer nonsense. This nonsense, however, leads gullible men think that the Jihadi attacks upon people are merely an Islamic reaction to Governments policy in India and in the Middle East in general, and to its allegedly pro-Aryan stance vis-a-vis the Kashmeres in particular. This apologetic view of Islam is actually fatal for human race. 103. That sex and booty are incentives that attract conversion to reduce one slave of prophets. Remove both from Judaism, Christianity and Islam, these faiths would vanish. 104. That we seek your support to retaliate. If you fail to support humanity would finish like dinosaurs. 105. That% reaches the people. Look! Poor Jehovah has no share in booty. 
Poor Allah takes 20% only. (Koran 8:41). However, Pranab Dada and thief Antonia Maino and their tools and vassals gallop 95% of the exchequer as per the admission of the ex PM and his son Rahul of this dreaded Democracy! 106. That? Hindus would face extinction in the similar way, unless Christianity and Islam are eradicated. 107. That in India, one has right of private defence provided by Section 102 of the Indian Penal Code. Anything done in the exercise of Private Defense is no crime as per Section 96 of the Indian Penal Code. Therefore, every citizen has right to slay every Imaam. 108. That)}. If ‘Allah alone can be worshipped’, where is liberty promised in the Indian Constitution? Where is multiculturalism? The culture of tolerance collapses in the face of the sacred intolerance of dualistic ethics. Intellectuals respond by ignoring the failure. Muslims and Christians must relinquish. 109. That. 110. That until 1835 A.D., Macaulay could not find a single thief or beggar in whole Bharat. Bastard Jesus was ruling and is still ruling India, through Sonia, the thief, cow and man eater (Bible, John 6:53). One may note, I can write wagons of crimes committed by Jesus. Now, one cannot find any man of character and moral. 111. That. Why should Allah and Jehovah survive? 112. That Allah executed marriage of Prophet Muhammad’s daughter in law Zainab with Muhammad. (Koran 33:37-38). Nay! Allah permits loot and rape of any woman (Koran 23:6). Jesus supports marriage with one’s own daughter. (Bible, 1 Corinthians 7:36). 113. That while the Christianity and Islam have stipulated the condition for existence of humanity of being slave of either Jesus or Allah, Aryans' Vedic culture grants liberty of faith. Its scriptures do not allow war in night, rape of women, dashing infants to pieces,. 114. That). 115. 
That the Christianity through colonialists Sonia is colluding with Muslims, through the aid and abetment of the Indian Constitution, [Article 29(1) of the Indian Constitution], against India's spiritual tradition and moral foundation. It is paving the way for Muslims to retain their Islamic identity and they use Mosques as center for exhorting Islamic dogmas. The Azaan, which is cognizable and non-bail able crime u/s 153 and 295 of the Indian Penal Code, shouted from the mosques of Muslims, is insult of Ishwar and Vedic Sanaatan Dharm. Allah commands Muslims to slay non-Muslims and eradicate persons of non-Islamic faiths as well as Jesus commands Christians to slay non Christians and eradicate persons, who do not accept Jesus their king. {Azaan, (Koran, 2:191-194 and 8:39), (Bible, Exodus/ Chapter 20 / The Ten Commandments/ Verses 3 and 5 and Luke 19:27)}. 116. That everything anti-Vedic Sanaatan Dharm is promoted and Hindus are reduced to spineless jelly fish afraid to speak up. Hindus are falling prey to the corruption of the seculars and have become watered down version of their Hindu self. Hindus are more concerned with survival amidst discrimination, oppression and chaos. Hindus have become a laughingstock, and frequent target of Islamic fanatics and Missionaries. They are successful in imposing their outdated, closed and reductionist theology on helpless Hindus. In response to these multilevel attacks, Hindus have been paralyzed, hypnotized, and ostracized by corrupt politicians, colonial masters and Islamic fanatics. 117. That there is no strong Hindu organization other than few Bhakti movements. They are preaching Bhakti and Hindus are observing Ahimsa as well as surrender and political non-involvement. They are not teaching for reminding Hindus to become politically active. As a result many Hindus think coercive religious conversion and colonialism is predetermined and therefore beyond their control. It is a great disaster for Hindus. 
Muslims, who are worldly gained political strength, established strong separate identity and are faithful to Allah and Islamic value system. Muslims demand their own way, throwing muscle power and tantrums like violent criminals. However, Muslims fail to realize that they are being exploited by Christians for annihilation of Vedic Sanaatan Dharm. No sooner Vedic Sanaatan Dharm would be annihilated, their Allah, Islam, Dar Ul Islam and Sariyat would finish in the fashion of Afghanistan and Iraq through Armageddon. 118. That Aryavrt Government is here to fill the vacuum. Join us, if one wishes to survive. Fight against source and not against symptoms. 119. That through their Azaan, Imaams abuse deities and faiths of non-Muslims. Abusing notorious criminal Allah and the so-called predator and pirate prophet Muhammad is blasphemy that attracts death penalty. However, Muslim Imaams are being protected by Governments under section 196 of the Criminal Procedure Code. As such we have legal right to proscribe Koran, demolish mosques and Governments are duty bound to arrest and hang every Imam on the earth. 120. That Governors and Judges, who administer oath to each other, have taken oath to preserve, protect and defend and uphold the conservation of the language, script, or culture of the minorities. We, Vedic Panthies, are minority among the minorities the world. Yet no Government is protecting our Vedic Sanaatan Dharm. Liberty 121. That so long as Indian Constitution, manifesto of Carl Marx, Koran, Bible, Democracy, Socialism, Islam and Christianity survive in the globe, neither life nor property of a Muslim is safe nor of a Christian is safe. There can be no liberty. The honour and dignity of a woman is not safe. Socialism of Carl Marx has already gone to hell. Now is the turn of Christianity and Islam. Nay! No one is innocent in the eyes of Islam [(Azaan and (Koran, 2:191-194 and 8:39) and Christianity (Bible, Luke 19:27)]. 
If these religions survive, the survival of human race is impossible. These criminal religions have been detained in Bharat to eradicate Bharat and Vedic culture, but in the view of the black 11 September 2001, these sectarians would kill each other like Yadavas of Dwaper era. The reason is quite simple.. 122. That) 123. That it is not up to me, but to Muslims and Christians themselves to tear out the hateful verses from their Koran and Bible. 124. That Muslims and Christians want that you respect Christianity and Islam; but Christianity and Islam do not respect you. 125. That Government and the so called independent judiciary insist that you respect Christianity/ Islam, but Christianity/ Islam has to slay you. 126. That Christianity and Islam wants to rule the world, dominate and seek annihilation of Vedic Sanaatan Dharm. 127. That in 1945 Nazism was defeated. In 1989 Communism was defeated. Now is the turn of Christianity and Islam. Defend our Vedic Sanaatan Dharm. 128. That thus, Christianity and Islam corrupt their followers. Sir Edmund Burke put it this way: "All that is necessary for the forces of evil to win in this world is for a few good men to do nothing." 129. That. 130. That. 131. That the biggest lesson to be learned by humanity is that one cannot negotiate with Christianity and Islam. Humanity's only recourse is: Doctrine of Reciprocity. In contrast judiciary, in her observation wrote, India a Hindu State. It is their greatness that they resisted this pressure and kept a cool head and rightly declared India to be a secular state.” 132. That may I know as to where is the greatness in providing unfettered fundamental right to Christians and Muslims to conserve their culture of murder, plunder and rape of women vides Article 29(1) of the Indian Constitution. May I know as to how Azaan, which means ‘Allah alone can be worshipped’, is secular? 
Where is the greatness in snatching the property and means of production from citizens as per Article 39(c) of the Indian Constitution and imposing FDI? 133. I have mailed complaint against ‘Azaan’ and ‘Masjids’ to the President of India. A copy of the same is being attached with this WRITTEN ARGUMENT and is marked as Annexure-s11. 134. That no case has been proved against me. I have right of clear acquittal. Simultaneously, I demand arrest of Imams for shouting Azaan, proscription of Koran, which is Jihad manual and destruction of all mosques, wherefrom genocide of non-Muslims is preached. (Ayodhya Prasad Tripathi) Accused in Person. Dated: Delhi, Oct 11, 2012. Annexure-s1 In the matter of FIR 440/96 State V/s Ayodhya Prasad Tripathi Police Station Roop Nagar Delhi North Equivalent citations: AIR 2003 SC 976, 2003 (1) ALD Cri 367, 2003 CriLJ 1226 Author: A Pasayat Bench: S.V. Patil J, A Pasayat J JUDGMENT Arijit Pasayat, J. 1. Leave granted. 2. Appellants call in question legality of the impugned judgment rendered by the Madhya Pradesh High Court at Jabalpur, whereby it upheld the conviction and sentence awarded by the Additional Sessions Judge, Jashpurnagar. 3. Prosecution version which led to the trial of the appellants (hereinafter referred to as 'the accused' by their respective names) is as follows: 4. On (SIC) information was lodged by Jhanguram (PW-2) that six persons had assaulted him with intention to take his life, and had also caused injuries to his wife Pandri Bai (P.W.4) and his daughter-in-law Tilobai (P.W.5). On the basis of such information, the case was registered and investigation was undertaken. On completion of investigation charge was framed for commission of offences punishable under Sections 147, 148, 307 read with Section 34 and Section 323 of the Indian Penal Code, 1860 (in short 'IPC'). It was alleged that accused Khodhibai (since acquitted) and Pandri Bai (P.W.4) are sisters.
There was bad blood between them over certain properties and civil litigation was going on. The six accused persons were cutting the crops raised by Jhanguram (P.W.2) on the date of the occurrence. When he asked them not to do so, the accused persons did not pay any heed. Suddenly accused-appellant Rizan snatched the axe which Jhanguram (P.W.2) was holding and assaulted him with the said weapon and caused several injuries on different parts of his body, e.g. lips, hands and feet. More particularly, accused-appellant (SIC) hit Jhanguram and Pandri Bai with a stick. Other accused persons also hit him with their hands and feet. Some persons standing nearby came to their rescue. The injured P.Ws. 2, 4 and 5 were examined by the Doctor (PW-1). During investigation the weapon of assault, i.e. the axe, was seized from the accused-appellant Rizan and some other weapons from the other persons. Six witnesses were examined to further the prosecution version. Accused persons pleaded innocence and false implication. On consideration of the evidence on record, the Trial Court held that the prosecution has not been able to bring home the accusations against accused Paras, Vinod, Khodibai and (SIC) 5. Accused-appellant Rizan was found guilty of the offences punishable under Section 326 IPC for inflicting injuries on Jhanguram (P.W.2) and under Section 323 IPC for the injuries inflicted on Pandri Bai (P.W.4). Accused Duda was found guilty of the offence punishable under Section 323 IPC for inflicting injuries on the aforesaid two witnesses. However, both the accused-appellants Rizan and Duda were acquitted of the offences relatable to Sections 147 and 148 IPC. It was also held that the offence committed by the accused persons is not covered by Section 307 IPC. After hearing the accused persons on the question of sentence, accused-appellant Rizan was sentenced to undergo RI for two (SIC) for the offences punishable under Sections 326 and 323 IPC.
Both the sentences were directed to run concurrently. Accused Duda was sentenced to undergo RI for two months. In appeal, by the impugned judgment, the High Court dismissed the appeal maintaining the convictions and the sentences. 6. In support of the appeal, learned counsel for the accused-appellants submitted that this is a case where the conviction is not maintainable as the injuries were inflicted by the accused-appellants while exercising their right of private defence. Further, on the same set of evidence four persons have been acquitted and, therefore, so far as the appellants are concerned, conviction does not stand to reason. It is also submitted that the witnesses who claim to have seen the occurrence are witnesses who were on inimical terms with the accused-appellants. Residually, it is submitted that the sentences as imposed are high, and considering the fact that the occurrence took place five years back, the sentences should be reduced to what has already been undergone, which is stated to be about three months. It is pointed out that accused-appellant Duda has already suffered the sentence awarded. Learned counsel for the prosecution on the other hand submitted that the evidence clearly rules out application of the right of private defence. Merely because the evidence of some of the witnesses has not been accepted to be fully reliable, in view of the clear and categorical findings recorded that the evidence (SIC) so far as the appellants are concerned, the conviction does not suffer from any infirmity. 7. (SIC) analyses evidence to find out whether it is cogent and credible. 8. In Dalip Singh and Ors. v. The State of Punjab it has been laid down as under:- "A witness is normally to be considered independent unless he or she springs from sources which are likely to be tainted and that usually means unless the witness has cause, such as enmity against the accused, to wish to implicate him falsely.
Ordinarily a close relation (SIC) put forward in cases before us as a general rule of prudence. There is no such general rule. Each case must be limited to and be governed by its own facts." 9. The above decision has since been followed in Guli Chand and Ors. v. State of Rajasthan in which Vadivelu Thevan v. State of Madras (SIC) find, however, that it unfortunately still persists, if not in the judgments of the Courts, at any rate in the arguments of counsel." 11. Again in Masalti and Ors. v. State of U.P. this Court observed: (p. 209-210 para 14): "But it would, we think, be unreasonable to contend that evidence given by witnesses should be discarded only on the ground that it is evidence of partisan or interested witnesses.....The mechanical rejection of such evidence on the sole ground that it is partisan would invariably lead to failure of justice." 12. To the same effect is the decision in State of Punjab v. Jagir Singh and Lehna v. State of Haryana (2002 (3) SCC 76). 13. Stress was laid by the accused-appellants (SIC). The maxim "falsus in uno falsus in omnibus" has not received general acceptance nor has this maxim come to occupy the status of a rule of law. It is merely a rule of caution. All that it amounts to is that in such cases testimony may be disregarded, and not that it must be disregarded. The doctrine merely involves the question of weight of evidence which a court may apply in a given set of circumstances, but it is not what may be called a 'mandatory rule of evidence' (See Nisar Alli v. The State of Uttar Pradesh). (SIC) difference accused who had been acquitted from those who were convicted. (See Gurucharan Singh and Anr. v. State of Punjab (AIR (SIC) SC 460). The doctrine is a dangerous one specially in India, for if a whole body of the testimony were to be rejected because a witness was evidently speaking an untruth in (SIC), it is to be feared that administration of criminal justice would come to a dead-stop. Witnesses just cannot help in giving embroidery to a story; however, it has to be sifted with care.
The aforesaid dictum is not a sound rule for the reason that one hardly comes across a witness whose evidence does not contain a grain of untruth or at any rate exaggeration, embroideries or embellishment. (See Sohrab s/o Beli Navata and Anr. v. The State of Madhya Pradesh and Ugar Ahir and Ors. v. The State of Bihar). An attempt has to be made to, as noted above, separate the grain from the chaff, truth from falsehood. (See Ariel v. State of Madhya Pradesh (AIR (SIC) SC 15) and Balaka Singh and Ors. v. The State of Punjab). As observed by this Court in State of Rajasthan v. Smt. Kalki and Anr., the courts have to label the category to which a discrepancy may be categorized. While normal discrepancies do not corrode the credibility of a party's case, material discrepancies do so. These aspects were highlighted recently in Krishna Mochi and Ors. v. State of Bihar etc. and Gangadhar Behera and Ors. v. State of Orissa (2002 (7) Supreme 276). The accusations have been clearly established against the accused-appellants in the case at hand. The Courts below have categorically indicated the distinguishing features in evidence so far as the acquitted and convicted accused are concerned. 14. Then comes the plea relating to the right of private defence, claimed for forestalling further reasonable apprehension from the side of the accused. The burden of establishing the plea of self-defence is on the accused and the burden stands discharged by showing a preponderance of probabilities in favour of that plea on the basis of the material on record. (See Munshi Ram and Ors. v. Delhi Administration (AIR 1968 SC 702), State of Gujarat v. Bai Fatima, State of U.P. v. Mohd. Musheer Khan, and Mohinder Pal Jolly v. State of Punjab). Sections 100 to 101 define the extent of the right of private defence of body. If a person has a right of private defence of body, it extends, in the circumstances specified in Section 100, even to the causing of death. 15. The number of injuries is not, by itself, decisive on the right of private defence. The defence has to further establish that the injuries so caused on the accused probabilise the version of the right of private defence.
In this case, as the Courts below found, there was not even a single injury on the accused persons, while PW2 sustained a large number of injuries and was hospitalized for more than a month. A plea of right of private defence cannot be based on surmises and speculation. While considering whether the right of private defence is available to an accused, it is not relevant whether he may have had a chance to inflict severe and mortal injury on the aggressor. Under Section 100 the right extends to causing death where there is reasonable apprehension that death or grievous hurt would be caused to him. The burden is on the accused to show that he had a right of private defence which extended to causing of death. Sections 100 and 101, IPC define the limit and extent of the right of private defence. 16. In (SIC) v. State of Punjab, it was observed that as soon as the cause for reasonable apprehension disappears and the threat has either been destroyed or has been put to rout, there can be no occasion to exercise the right of private defence. 17. In order to find whether the right of private defence is available or not, the injuries received by the accused, the imminence of threat to his safety, the injuries caused by the accused and the circumstances whether the accused had time to have recourse to the public authorities are all relevant factors to be considered. Thus, running to the house, fetching a (SIC) and assaulting the deceased are by no means a matter of chance. These acts bear the stamp of a design to kill and take the case out of the purview of private defence. A similar view was expressed by this Court in Biran Singh v. State of Bihar and recently in Sekar @ Raja (SIC) v. State represented by Inspector of Police Tamil Nadu ((SIC) Supreme 124). 18. The sentences imposed do not in any way appear to be (SIC). Merely because the occurrence took place some time back, the same cannot be a factor to reduce the sentences. The appeal is without merit and is dismissed.
S V. Patil J, A Pasayat J 21 January, 2003

Annexure-s2 Supreme Court Judgements — Supreme Court Asks Government To Pay Imam Salaries By Sanjeev Nayyar, October 2005 [esamskriti@suryaconsulting.net] Chapter: A friend of mine told me of a 1993 Supreme Court judgment that asked the Central & State governments to come out with a scheme for payment of salaries to Imams. Having got hold of a copy of the judgment, I decided to reproduce it verbatim. Summary - Imams spend substantial time in mosques. Their most important duty is that of leading community prayer in a mosque, the very purpose for which a mosque is created. Imams, 'incharge of religious activities of the mosque', had approached the Supreme Court under Article 32 of the Constitution for enforcement of fundamental rights against their exploitation by Wakf Boards. They sought a direction to the Central and State Wakf Boards to treat the petitioners as employees of the Board and to pay them basic wages to enable them to survive. The Wakf Boards say the Imams are not their employees and that they do not have the resources to pay them. The right to life enshrined in Art. 21 means the right to live with human dignity. In the above circumstances the Supreme Court issued directions to the Union of India and the Central Wakf Board to prepare a Scheme within a period of six months in respect of different types of mosques. An Asian Age article dated October 5, 2005 reported: "Imams across the country are in for a surprise bonanza of up to Rs 3 lakhs each in the form of arrears. The imams leading the prayers in these mosques are eligible for arrears, with the Delhi High Court recently recognizing that arrears have accrued to the imams since the 1993 judgment of the Supreme Court. The Delhi High Court has directed the Delhi Waqf Board to give a schedule of payment of arrears by October 24. The 1993 SC judgment ruled that the imams would be paid salaries and directed the Centre and Central Waqf Council to prepare a scheme for payment of salaries within 6 months.
The salaries scheme was submitted on 5.1.1996, during the tenure of the P.V. Narasimha Rao government. The Supreme Court on February 3, 2003 directed that waqf tribunals should be set up in the states so that imams in each state could move the respective tribunals for settling disputes over salaries. However, salaries were not paid. It was against this background that the Delhi High Court recognized that arrears have accrued since 1993". Friends, some thoughts. If India is a secular state and being secular means separation of state from religion, why must the Courts get involved in what is purely a religious matter, a dispute between the Imams and the Waqf Board? Some of you might argue that temple priests in Tamil Nadu and perhaps states like Karnataka & Andhra Pradesh get salaries from the state government, so what is wrong in the State paying salaries to Imams? There is a key difference. Temple collections go to state government coffers unlike mosque collections, which go to the Wakf Board or are part of the mosque funds. In a recent article titled "Nationalization of Hindu Temples" Sandhya Jain wrote: "In 2002, Karnataka received Rs. 72 crores as revenue, returned Rs. 10 crores for temple maintenance, and granted Rs. 50 crores for madrasas and Rs. 10 crores for churches." (Daily Pioneer, October 7, 2003.)

AIR 1993 SUPREME COURT 2086 K. RAMASWAMY AND R. M. SAHAI, JJ. Writ Petn. (C) No. 715 of 1990, D/-13-5-1993. All India Imam Organization and others, Petitioners v. Union of India and others, Respondents. Constitution of India, Arts. 32, 21 – Right to live – Imams, "in charge of religious activities of mosque" – Are entitled to emoluments even in absence of statutory provision in Wakf Act – Supreme Court directed the Govt. and Central Wakf Board to prepare Scheme within period of six months. "Imam" – Is entitled to emoluments even in absence of provisions under Wakf Act. Wakf Act (1954), Pre.
Muslim Law – Imam – Entitled to emoluments even in absence of statutory provision. The question is how much and by whom? According to the Board they are appointed by the mutawallis and, therefore, any payment by the Board was out of question. Prima facie it is not correct as the letters of appointment issued in some States are from the Board. But assuming that they are appointed by the mutawallis, the Board cannot escape from its responsibility as the mutawallis too u/s 36 of the Act are under the supervision and control of the Board. The right to life enshrined in Art. 21 means the right to live with human dignity. In the Muslim countries mosques are subsidized and the Imams are paid their remuneration. Therefore, it cannot be said that in our set up, or in absence of any statutory provision in the Wakf Act, the Imams who look after the religious activities of mosques are not entitled to any remuneration. In the circumstances the Supreme Court issued the directions to the Union of India and the Central Wakf Board to prepare a Scheme within a period of six months in respect of different types of mosques. (para 5). R. M. SAHAI, J.:- Imams, 'incharge of religious activities of the mosque' (1) have approached this Court by way of this representative petition under Article 32 of the Constitution for enforcement of fundamental rights against their exploitation by Wakf Boards. The relief sought is a direction to Central and State Wakf Boards to treat the petitioners as employees of the Board and to pay them basic wages to enable them to survive. The basis of the claim is the glaring disparity between the nature of work and amount of remuneration. A higher pay scale is claimed for degree holders. 2. 'Imams perform the duty of offering prayer (Namaz) for congregation in mosques. Essentially the mosque is a center of community worship where Muslims perform ritual prayers and where historically they have also gathered for political, social and cultural functions'.
(2) 'The functions of the mosque is summarized by the 13th Century Jurist Ibn Taymiyah as a place of gathering where prayer was celebrated and where public affairs were conducted'. (3) 'All mosques are required to have an Imam well versed in the Shariat, the holy Quran, the Hadiths, ethics and philosophy, and social, economic and religious aspects.' 'The person who leads the prayer acts as its Imam. He is in charge of the religious activities of the mosque and it is his duty to conduct prayers five times a day in front of Mihrab'. (5) The claim is resisted on the ground that under Islamic religious practice they are not entitled to any emoluments as a matter of right, since leading prayers under the Shariat is stated to be performed voluntarily by any suitable Muslim without any monetary benefit. Some of the affidavits claim that they are appointed by people of the locality. The Union Government has specifically stated that Islam does not recognize the concept of priesthood as in other religions and the selection of Imams is the sole prerogative of the members of the local community or the managing committee, if any, of the mosque. According to the Karnataka Wakf Board, Imamat in the mosque is not considered to be employment. The petitioners allege that due to meagre remuneration they are unable to discharge the obligations required for offering prayers according to the principles laid down by the Kuran and Sunnah. The affidavit filed on behalf of the Wakf Board has pointed out that mosques can be categorized into different grades: Imams Nazara (Mubtali grade) are in the scale of Rs. 380-20-580-25-830-30-980, whereas Imams Hafiz (Wasti grade) are paid Rs. 445-20-645-25-895-30-1045, and Imams Alim (Muntali grade) are paid Rs. 520-20-720-25-970-30-…; they are appointed by the mutawallis of the concerned managing committees and not by the Wakf Board. 4. The mosque differs from a church or a temple in many respects. Ceremonies and services connected with marriages and birth are never performed in mosques. The rites that are important and integral functions of many churches such as confessions, penitences and confirmations do not exist in the mosques.
(6) Nor are any offerings made as is common in Hindu temples. 'In Muslim countries mosques are subsidized by the States, hence no collection of money from the community is permitted. The Ministry of Wakf (Endowments) appoints the servants, preachers and readers of the Koran. Mosques in non-Muslim countries are subsidized by individuals. They are administered by their founder or by their special fund. A caretaker is appointed to keep the place clean. The Muazzin calls for prayers.' Under the Act, '(the general superintendence of all wakfs in a State in relation to all matters except those which are expressly required by this Act to be dealt with by the Wakf Commissioner, shall vest) in the Board.' 5. The Board is vested not only with supervisory and administrative power over the wakfs but even the financial power vests in it. One of its primary duties is to ensure that the income from the wakf is spent on carrying out the purpose for which the wakf was created. Mosques are wakfs and are required to be registered under the Act, over which the Board exercises control. The purpose of their creation is community worship. Namaz or Salat is the mandatory practice observed in every mosque. 'Among the Five Pillars (arkan; sg. rukn) of Islam it holds the second most important position immediately after the declaration of faith (shahadah)' (8). The principal functionary to undertake it is the Imam. The question then is how much and by whom? According to the Board they are appointed by the mutawallis and therefore any payment by the Board was out of question. Prima facie it is not correct as the letters of appointment issued in some States are from the Board. But assuming that they are appointed by the mutawallis, the Board cannot escape from its responsibility as the mutawallis too u/s 36 of the Act are under the supervision and control of the Board. In a series of decisions rendered by this Court it has been held that the right to life enshrined in Article 21 means the right to live with human dignity. In the Muslim countries mosques are subsidized and the Imams are paid their remuneration; there has to be a correlation between the two. 6.
In the circumstances we allow this petition and issue the following directions: (i) The Union of India and the Central Wakf Board will prepare a scheme (SIC) for mosques which are not registered with the Board and have no source of income, and find out ways and means to raise their income. (ix) The exercise should be completed and the scheme be enforced within six months. (x) Our order for payment to Imams shall come into operation from 1st Dec., 1993. In case the scheme is not prepared within the time allowed then it shall operate retrospectively from 1st December, 1993. (xi) The scheme framed by the Central Wakf Board shall be implemented by every State Board. 7. The Writ Petition is decided accordingly. Parties shall bear their own costs. Petition allowed.

Long Live Sanatana Dharam

Request/Grievance Registration Number is: PRSEC/E/2011/05079 President Secretariat, New Delhi - 110004 Web site: This is a public document. Any one can view the status from the web site by typing the above Request/Grievance Registration Number. There is no pass-word. Dear Mr Anna Hazare, Your fast unto death is futile and you are fighting a lost war. I am reproducing two orders of Allahabad High Court, U.P. pertaining to my land being usurped by Sonia through her nominated Governor Banwari. Notwithstanding clear proof since 9th August, 1989, I could not get justice till date. There is no law and no forum in India to get back my land or a substitute, as suggested by Allahabad High Court. Judiciary is helpless in view of sec 197 of the Criminal Procedure Code. Can you get me back my lands? You are fighting with the symptom, not the source. The source of corruption is Article 39(c) of the Indian Constitution. I am reproducing the Article below, "39.
Certain principles of policy to be followed by the State – The State shall, in particular, direct its policy towards securing – (c) that the operation of the economic system does not result in the concentration of wealth and means of production to the common detriment;" Thus a Public Servant, who has immunity under section 197 of the Criminal Procedure Code, has no constraints on being corrupt. I am reproducing the sec.:- (to deprive citizens of their wealth and means of production is the official duty of a public servant): Do you not feel horrified that your status is that of a sheep of the Rom Rajya of Sonia? A sheep cannot have property and its flesh can be consumed by its owner, here, in your case, Sonia. A sheep has no forum to complain! There is no Democracy in India. This is a Government of Sonia, by Sonia and for Sonia. Don't agree? Here you are:- S. Swamy has dared Sonia to sue him. He claims that the maximum share of the 2G scam has gone to the account of the sisters of Sonia. Yet there is no law to arrest her. There is no law with me to get back my lands. Suppose all the parked money in foreign banks and scam money returns to the exchequer; then how would the common man benefit, when 95% of the money is being siphoned off by Sonia and her tools and vassals? The answer to the present situation is relegation of Article 39(c), revival of Article 31 and relegation of section 197 of the Criminal Procedure Code. We, the activists of Aryavrt Government and Abhinav Bharat, are fighting the real war. Help us free our 9 officers if you want to end corruption and salvage the human race. Apt.. ॐ IN THE HON'BLE HIGH COURT OF JUDICATURE AT ALLAHABAD ******* ANNEXURE NO.

Annexure-s4 Date: 9/27/2003 REAL TERRORISTS WHO PARTITIONED INDIA AND ARE STILL SOWING SEEDS OF NEXT CIVIL WAR ARE SCOT FREE IN HINDUSTAN. What is this going on?
Apropos of the front page news, "Keep off riot victim: SC to Gujarat cops", as an Indian Citizen, I seek an explanation from M/s A.R. Lakshman J. & S. Rajendra Babu J. as to what they are doing on my several applications, which have been published in my news magazine 'Mujahana' also, for arrest of the self proclaimed ISI agent & proclaimed offender Abdullah Bukhari, ex-Shahi Imam of Jama Masjid. I had been arrested by the terrorist dictator proletariat Delhi Police and a judicial terrorist Raj Rani Mitra MM is trying the case in Tis Hazari Courts vide FIRs 10 & 110/2001 PS NIA, who never arrest this criminal Jihadi Abdullah Bukhari. Nay! Even the Delhi High Court evaded action even after filing of a PIL! Nay! In his Wazookhana construction case, another terrorist LG Vijaya Kapoor submitted an application for stopping the trial. Nay! MM M.S. Rohilla, who dared to issue NBW against Abdullah Bukhari, had been pleased to resign from service! Circumstances suggest that the notorious terrorists of the judiciary are after Shri Narendra Modi, the Chief Minister of Gujarat, who is the one and only protector of hapless Hindus, whom Democracy is hell bent to annihilate with the help of the guide of criminals named the Indian Constitution. May one note that Bilkis Yakub Rasool is breeding enemies of the human race (even her own enemies, because there is yet another minority community named Christians who have been commanded by Bible's Jehovah vide Isaiah 13:16 to ravish Bilkis Yakub Rasool in front of her own near and dear) who would cry from the mast of the mosque that Allah alone can be worshipped. As per Koran, Chapter The Prophets (21), Verse 98, those who worship other gods save criminal Allah are fuel of hell and must be liquidated (Koran 2:191); because persecution is worse than slaughter! Why should such a woman and ISI agent Abdullah Bukhari get protection and Modi should not?
The answer can easily be traced in the guide of criminals named Article 29(1) of the Indian Constitution, which gives these criminals the unfettered fundamental right to conserve the very culture of murder, plunder and rape which revered Shri Narendra Modi is opposing. There are yet other cases of atrocities upon Hindus of Kashmir, Nagaland and Mizoram. While the properties of the victims are being usurped, even today, by Muslims and Christians with the full support of these States, they are getting peace bonuses from the Atal Government. Who would take these States to task? Why should Gujarat be singled out? Why should these judicial terrorist Governments not resign, and the Narendra Modi Government alone resign? Thanking you with best regards, Ayodhya Prasad Tripathi, (President) Manav Raksha Sangh, Web Site;

Annexure-s5 Priyadarshini Mattoo (Kashmiri: प्रियदर्शिनी मट्टू, پریدرشینی مٹو) (July 23, 1970 - January 23, 1996) was a law student whose rape and murder case ended in the conviction of the accused by the Delhi High Court in 2006; the judgment was seen as a landmark reversal and a measure of the force of media pressure in a democratic setup. This decision went in favour because the facts were not presented correctly in the lower court. The intense media spotlight also led to an accelerated trial, unprecedented in the tangled Indian court system. Significance of the case: The outcome stands in contrast to the Jessica Lal case, where a number of accused including politician's son Manu Sharma were released despite the murder taking place in a high-society bar in the presence of witnesses. Childhood: After Priyadarshini finished school from the Presentation Convent School in Srinagar, her family migrated to Jammu. There she completed her B Com from MAM College, before joining Delhi University for her LLB course. By all accounts, Priyadarshini was a smart and beautiful young woman. She came from a musically talented family and was herself a good singer and guitar player. A friend has called her "a tom boy, not at all submissive, and very compassionate towards animals. A bubbly girl loaded with confidence..."
The murder: The accused was the son of a police officer posted in Pondicherry - in the course of the trial he served as Joint Commissioner of Police in Delhi, where the crime had been committed. In view of these connections, the court handed over investigation of the case to the Central Bureau of Investigation (CBI). Trial Court Judgment: Delivering the judgment, the trial court acquitted the accused. High Court Appeal and Verdict: On October 17, 2006, Santosh Singh, who meanwhile had married and become a practising lawyer in Delhi itself, was found guilty under Indian Penal Code sections 302 (murder) and 376 (rape). The verdict blames J.P. Thareja's original judgment: "The trial judge acquitted the accused amazingly taking a perverse approach. It murdered justice and shocked judicial conscience." The conviction will most probably be challenged in the Supreme Court of India, but the verdict and the process are seen as a barometer of a changing India. In particular, it is hoped now that media pressure can be brought to bear in the cases of prominent accused such as Manu Sharma, Sanjeev Nanda or Vikas Yadav, and that eventually the ability of the powerful to remain above the law would be curtailed. Death penalty. Supreme Court appeal. Post-conviction.

Annexure-s5a In the matter of FIR 440/96 Police Station Roop Nagar Delhi North FEBRUARY 08, 2005 Imam and Shankaracharya: Not the Rule of Law. Differing measures in cases against Imam and Shankaracharya - D.P. Sinha I.A.S. Retd. In the context of the case filed by Tamil Nadu Govt. against Jayendra Saraswati, the Shankaracharya of Kanchi, the case filed by Govt. of NCT, Delhi against Ahmed Bukhari, Naib Imam of Jama Masjid of Delhi is of relevance. A case against Ahmed Bukhari was filed by Delhi Police in the court of Shri Vinod Kumar Sharma, Metropolitan Magistrate, Delhi without the arrest of the accused u/s 124-A, IPC in May 1993, on the basis of FIR No. 98/93 dated 14.05.93.
The allegations were that the accused on 22.01.93 had incited the Muslim congregation, that had collected at Jama Masjid for 'Namaz', against the Govt. to boycott the Republic Day celebrations. The court summoned the accused on 16.10.93 for appearance in the court on 06.01.94. On 06.01.94 the summons were received back unserved, so fresh summons were issued for 27.05.94. On 17.05.94 the counsel for the accused moved an application seeking exemption from attendance in the court, which was granted, and the case was adjourned to 31.01.95. Even after seven months the accused did not turn up, so a bailable warrant was issued on 31.01.95 and it was directed that it be served through the SHO for 07.02.95. The S.H.O. did not serve the warrant as directed. The accused also did not appear in the court on 07.02.95. The court therefore ordered that a non-bailable warrant be issued against the accused for 01.06.95. On 01.06.95 the non-bailable warrant was received back unserved. So the Magistrate ordered that a non-bailable warrant may again be issued for 19.08.95, to be served through DCP Shri P.N. Agarwal personally. But even the DCP failed to comply with the order of the Court. He again directed the DCP to get the NBW served on the accused for 01.11.95 but it was received back unserved with the police report that the accused was not available at his address. The Magistrate did not give up. He again ordered the D.C.P. Shri P.N. Agarwal to get the NBW served personally for 02.01.96. Still the court orders were not complied with and the NBW was returned to the court with the remarks that the accused was not available at the address. The Magistrate gave up on the DCP and reverted back to the SHO. He directed him to get the NBW served on the accused personally for 29.02.96. By now the police had successfully thwarted the orders of the Metropolitan Magistrate to ensure the presence of the accused in the court for more than two years.
It is to be noted that the Magistrate had been very accommodative to the prosecution by giving adjournments of three to seven months, and by not drawing contempt proceedings against the S.H.O. and DCP Shri P.N. Agarwal, who had disobeyed the court orders with impunity and not served the warrants on the accused personally in spite of the specific orders of the court in this regard. But the patience of the Magistrate gave way at long last. By his order of 29th Feb., 96 he observed, "From the entire conduct of police it makes me think that why not a separate executing agency be there under the direct control of the judiciary. The judicial commands are often flouted and thwarted by the prosecuting agency for the reasons best known to them. It frustrates the judicial orders and commands. The helplessness of the judiciary is visible from the conduct of the police which shows that the judicial officer is at the mercy of the prosecuting agency and it shows that the rule of jungle prevails in the police department and not the rule of law". The aforesaid order does reflect the exasperation of the Magistrate with the system. At last he gathered courage to issue a show cause notice to the SHO as to why he should not be proceeded against u/s 60/122 D.P. Act. He also directed the Police Commissioner Nikhil Kumar to execute the non-bailable warrant against the accused within seven days. The accused Ahmed Bukhari filed writ-petition No. 138/96 against the order of the M.M. dated 29.02.96, for quashing the FIR against him and the aforesaid order of the M.M. The Division Bench of the High Court comprising the Chief Justice and Dr. Justice M.K. Sharma by their order of 06.03.96 admitted the petition and passed an interim order staying the execution of the non-bailable warrant and further proceedings in the case against the accused Bukhari pending in the court of M.M. Shri Vinod Kumar Sharma.
While the above Writ Petition was pending in the High Court, the Govt of NCT decided to withdraw the case against the accused on the ground that it would be 'in the interest of justice and promote public peace and harmony amongst different sects of the society'. In effect the NCT Govt. upheld the view that if the supporters of a person accused of most heinous crimes are such that they may create large-scale disturbance and communal riots leading to arson, loot, murder and rape, the case against him should be withdrawn in the larger interest of public peace and communal harmony. Accordingly, the Asstt. Public Prosecutor filed an application u/s 321 Cr.P.C. in the court of M.M. Shri Vinod Kumar Sharma, seeking to withdraw the above case. At this stage one Shyam Lal through his Advocate N.K. Gupta filed an application in the court of M.M. Shri V.K. Sharma on 02.01.97 objecting to the withdrawal of the case by the Govt. of NCT. He submitted that he lived within the jurisdiction of Jama Masjid Police Station, where the inflammatory speech of the accused had created terror, and as such he is an aggrieved party. He along with other residents of the area stoutly opposed the withdrawal of the case. Shri Vinod Kumar Sharma, M.M., vide his detailed fourteen-page order dated 14.01.97, rejected the request of the prosecution (NCT Govt.) to withdraw the case. He upheld the objection of the residents of the area against the withdrawal of the case. He also cited profusely the case law and rulings of the Supreme Court in support of his order. The Magistrate observed that "As per the allegation of the prosecution the accused has made an attempt to overthrow the government established by law. No individual can be allowed to challenge the very existence of the state. No administration of justice would be served by withdrawing the case against him, without any material placed before the court and where the court exercises its judicial function". The M.M.
also cited Subhash Chander vs State (1980 SCR page - 44) in which Justice Krishna Iyer had held that "the even course of criminal justice cannot be thwarted by the Executive, however high the accused, however sure the government feels a case is false, however unpalatable the continuance of the prosecution to the powers that be, who wish to scuttle the course of justice because of hubris of action, or other noble or ignoble consideration. Once the prosecution is launched, its relentless course cannot be halted except on consideration germane to justice". The prosecution (NCT Govt.) had sought to withdraw the case against the accused Abdulla Bukhari on the ground that it would be 'in the interest of justice and promote public peace and harmony amongst the different sects of the society'. It was not considered an adequate reason to permit the withdrawal of the case and MM Shri Vinod Kumar Sharma rejected it vide his order of 14.01.97. He observed: "As per the allegation of the prosecution the accused has made an attempt to overthrow the govt. established by law. He has challenged the sovereignty of the State. No individual can be allowed to challenge the very existence of the State. No administration of justice would be served by withdrawing the case against him without any objective material placed before the court and where the court exercises its judicial discretion." He further observed that "In the absence of any description from the prosecution, I am at a loss to understand what are the considerations or compulsions before the state that the present case is being withdrawn from the prosecution". He dismissed the plea of the prosecution to withdraw the case. The NCT Govt. filed revision petition No. 170 of 1997 in Delhi High Court against the aforesaid order of the Metropolitan Magistrate. The major thrust of the petition was "it would be extremely deterrent to the social fabric of the society and the religious harmony prevailing in the capital.
In fact, the continuance of the prosecution may arouse extreme feelings of bitterness, violence and disturbance in the prevalent peaceful atmosphere in the country". A careful reading of the aforesaid petition would show that the prosecution (read the government of the day) apprehended that if the accused Syed Ahmed Bukhari was arrested and the case continued against him, his volatile supporters would plunge the country into violent conflagration, precipitating Hindu-Muslim riots resulting in loss of human life and property. The Govt was frightened and terrorized and therefore wanted to withdraw the case to prevent such a possibility. The NCT Govt. in its revision petition also sought to undermine the gravity of the offence u/s 124-A, I.P.C. and submitted that the speech could have been delivered as a natural reaction to the demolition of the Babri Mosque. It is interesting to note the volte-face of the NCT Govt. The same government which had slapped on the accused the case of a grave offence u/s 124-A, IPC was now dragging its feet, and wanted the case to be withdrawn. The Revision Petition came up for hearing before Justice J.K. Mehra, who in his judgment mentioned that "Since the prosecution appears to have emanated from certain allegedly inflammatory statements attributed to the accused, I considered it appropriate to call the respondent accused and to ascertain his stand in the presence of his counsel as well as State Counsel. The said respondent-accused stated before the court that 'he accepted the validity of the Constitution of India and the Rule of Law established in this country and categorically stated that he did not challenge the constitution or Rule of Law established in the country and that he is governed by the same. He accepted that India is his country and he is one of the citizens of India. He stated that he had intended to criticize certain policies being pursued by the then Govt. of India.
In the light of this discussion and further, subsequent to the alleged incident, there have been no complaints against the behavior of the accused, the trial court should have exercised its discretion in favour of allowing the application". In the concluding para of his judgment Justice J.K. Mehra observed: 'keeping the fact that the present application was filed without any malafide and with the bonafide intention of securing peace, harmony and public order in the society, I consider that the trial court has erred in declining the application for withdrawal of application.... the application of the state for dropping the prosecution is allowed and the respondent accused is discharged.' Section 124-A, IPC is a cognizable offence, under which the police usually arrest the accused person as soon as an FIR is lodged. But no such arrest was made by Jama Masjid Police Station of Delhi for fear of a law and order problem. The case was filed in the court of the Metropolitan Magistrate Shri Vinod Kumar Sharma in May 1993, but till 29th February 1996 he could not secure the presence of the accused in the court, in spite of innumerable summons/NBWs/warrants he issued. The police stubbornly refused to comply with the orders of the court. At long last, when he ordered the Police Commissioner Nikhil Kumar to execute the NBW on the accused, the accused filed the writ petition and secured a stay order from the High Court. During the pendency of the writ petition the NCT Govt. took the decision to withdraw the case on the ground that the continuance of the case would jeopardize the law and order situation. It is the same ground on which the police did not arrest Ahmed Bukhari although he was accused of committing an offence u/s 124-A I.P.C. When some residents of the area objected to the withdrawal of the case and the Magistrate upheld their objection, refusing to give permission to withdraw the case, NCT Govt.
filed a revision petition against the order of the Magistrate on the same ground of maintenance of public peace and order. In effect, the High Court allowed the appeal and permitted the case to be withdrawn by the NCT Govt. An extraordinary and unprecedented act of the Delhi High Court deserves mention. Justice J.K. Mehra summoned the accused to the High Court and examined him in respect of the allegations against him. The accused naturally denied the allegations. His denial and assurance of good conduct in future, and the fact that there had been no complaints against the behavior of the accused subsequent to the incident in question, convinced the honourable Judge to the extent that he allowed the NCT Govt. to withdraw the case. Thus the High Court also laid down a precedent. If a person is accused of a theft and it is found that he has not committed another theft subsequent to that incident, the case should be withdrawn against him. The High Court also did not take adverse note of the willful non-compliance by the police of the orders of a judicial authority for service of summons/warrants/NBWs against the accused for about two and a half years. In brief, even though the offence u/s 124-A IPC is a cognizable offence, the police did not arrest the accused and did not serve summons/warrants/NBWs on him, and the NCT Govt. withdrew the case against him for just one reason: that he has a following of a mob that may endanger public peace and tranquility. The High Court also found it to be a valid reason to withdraw the case. The NCT Govt. neither had the courage nor the will to arrest and proceed against Ahmed Bukhari. But the case of Shankarcharya is different. Shankarcharya does not have a following of mobs that may endanger public peace and religious amity. So why should the Tamil Nadu Govt. worry? It is the lamb that is sacrificed and not the wolf.
Posted by Naxal Watch at 7:22 AM
Annexure-s6
Sri Sankari Prasad Singh Deo vs Union Of India And State Of Bihar, on 5 October, 1951
Equivalent citations: 1951 AIR 458, 1952 SCR 89
PETITIONER: SRI SANKARI PRASAD SINGH DEO
Vs.
RESPONDENT: UNION OF INDIA AND STATE OF BIHAR (And Other Cases)
DATE OF JUDGMENT: 05/10/1951
BENCH: SASTRI, M. PATANJALI; KANIA, HIRALAL J. (CJ); MUKHERJEA, B.K.; DAS, SUDHI RANJAN; AIYAR, N. CHANDRASEKHARA
CITATION: 1951 AIR 458; 1952 SCR 89
CITATOR INFO: F 1952 SC 252 (1,30); RF 1954 SC 257 (4); R 1959 SC 395 (28); E&D 1959 SC 512 (4); F 1965 SC 845 (20,21,23,24,25,27,33,35,38,39); R 1965 SC1636 (25); O 1967 SC1643 (12,14,23,27,43,44,56,59,61,63); RF 1973 SC1461 (16,20,27,30,32,38,39,44,46,88); RF 1975 SC1193 (17); RF 1975 SC2299 (649); RF 1980 SC1789 (96); RF 1980 SC2056 (61); RF 1980 SC2097 (6); D 1981 SC 271 (19,33,42,43); RF 1986 SC1272 (78); RF 1986 SC1571 (34); RF 1987 SC1140 (3)
ACT: Constitution (First Amendment) Act, 1951, Arts. 31A, 31B--Validity--Constitution of India, 1950, Arts. 13(2), 368, 379, 392--Provisional Parliament--Power to amend Constitution--Constitution (Removal of Difficulties) Order No. 2 of 1950--Validity--Amendment of Constitution--Procedure--Bill amended by Legislature--Amendment curtailing fundamental rights--Amendment affecting land--Validity of Amending Act.
HEADNOTE: The Constitution (First Amendment) Act, 1951, which has inserted, inter alia, Arts. 31A and 31B in the Constitution of India, is not ultra vires or unconstitutional. The provisional Parliament is competent to exercise the power of amending the Constitution under Art. 368. The fact that the said article refers to the two Houses of the Parliament and the President separately and not to the Parliament, does not lead to the inference that the body which is invested with the power to amend is not the Parliament but a different body consisting of the two Houses. The words "all the powers conferred by the provisions of this Constitution on Parliament" in Art. 379 are not confined to such powers as could be exercised by the provisional Parliament consisting of a single chamber, but are wide enough to include the power to amend the Constitution conferred by Art. 368. The Constitution (Removal of Difficulties) Order No. 2 made by the President on the 26th January, 1950, which purports to adapt Art. 368 by omitting "either House of" and "in each House" and substituting "Parliament" for "that House", is not beyond the powers conferred on him by Art. 392 and is not ultra vires. There is nothing in Art. 392 to suggest that the President should wait, before adapting a particular article, till the occasion actually arose for the provisional Parliament to exercise the power conferred by the article. The view that Art. 368 is a complete code in itself, that it does not contemplate any amendment of a Bill after its introduction, and that an amending Act passed after the Bill has been amended during its passage is therefore not in conformity with Art. 368 and would be invalid, is erroneous. Although "law" must ordinarily include constitutional law, there is a clear demarcation between ordinary law, which is made in the exercise of legislative power, and constitutional law, which is made in the exercise of constituent power. In the context of Art. 13, "law" must be taken to mean rules or regulations made in exercise of ordinary legislative power and not amendments to the Constitution made in the exercise of constituent power, with the result that Art. 13(2) does not affect amendments made under Art. 368.
Articles 31A and 31B inserted in the Constitution by the Constitution (First Amendment) Act, 1951, do not curtail the powers of the High Court under Art. 226 to issue writs for enforcement of any of the rights conferred by Part III or of the Supreme Court under Arts. 132 and 136 to entertain appeals from orders issuing or refusing such writs; but they only exclude from the purview of Part III certain classes of cases. These articles therefore do not require ratification under cl. (b) of the proviso to Art. 368. Articles 31A and 31B are not invalid on the ground that they relate to land, which is a matter covered by the State List (item 18 of List II), as these articles are essentially amendments of the Constitution, and Parliament alone has the power to enact them. JUDGMENT: ORIGINAL JURISDICTION: Petitions under Art. 32 of the Constitution (Petitions Nos. 166, 287, 317 to 319, 371, 372, 374 to 389, 392 to 395, 418, 481 to 485 of 1951). The facts which led to these petitions are stated in the judgment. Arguments were heard on the 12th, 14th, 17th, 18th and 19th of September. P.R. Das (B. Sen, with him) for the petitioners in Petitions Nos. 371, 372, 382, 383, 388 and 392. Article 368 of the Constitution is a complete code in itself. It does not contemplate any amendments to the Bill after its introduction. The Bill must be passed and assented to by the President as it was introduced, without any amendment. As the Constitution Amendment Bill was amended in several respects during its passage through the Parliament, the Constitution (First Amendment) Act was not passed in conformity with the procedure laid down in article 368 and is therefore invalid. When the Parliament exercises its ordinary legislative powers it has power to amend Bills under articles 107, 108, 109(3) & (4). It has no such power when it seeks to amend the Constitution itself, as article 368 does not give any such power: cf. The Parliament Act of 1911 (of England).
Article 368 vests the power to amend the Constitution not in the Parliament but in a different body, viz., a two-thirds majority of the two Houses of the Parliament. In article 368, the word Parliament which occurs in other articles is purposely avoided. There is a distinction between ordinary legislative power and power to amend the Constitution. This distinction is observed in America and the power to amend the Constitution is vested there also in a different body. Vide Willis, page 875; Cooley, Vol. 1, page 4; Orfield, page 146. Article 379 speaks of the power of the provisional Parliament as a legislative body. The powers under article 368 cannot be and were not intended to be exercised by the provisional Parliament under article 379. As it consists only of a single chamber, the adaptations made in article 368 by the Constitution (Removal of Difficulties) Order No. 2 are ultra vires. Article 392 gives power to the President to remove only such difficulties as arise in the working of the Constitution. It cannot be used to remove difficulties in the way of amending the Constitution that have been deliberately introduced by the Constitution. No difficulty could have been possibly experienced in the working of the Constitution on the very day the Constitution came into force. The Constitution could legally be amended only by the Parliament consisting of two Houses constituted under Chapter 2 of Part V. In any event, the impugned Act is void under article 13 (2) as contravening the provisions relating to fundamental rights guaranteed by Part III. 'Law' in article 13 (2) evidently includes all laws passed by the Parliament and must include laws passed under article 368 amending the Constitution: Constituent Assembly Debates, Vol. IX, No. 37, pp. 1644, 1645, 1661, 1665. S.M. Bose (M.L. Chaturvedi, with him) for the petitioner in Petition No. 375.
The word "only" in article 368 refers to all that follows and article 368 does not contemplate amendment of a Bill after it has been introduced. The President's Order is ultra vires his powers under article 392. There is no difficulty in working article 368 and there could be no occasion for the President to adapt article 368 in the exercise of his powers under article 392. S. Chaudhuri (M.L. Chaturvedi, with him) for the petitioner in Petition No. 368 adopted the arguments of P.R. Das and S.M. Bose. S.K. Dhar (Nanakchand and M.L. Chaturvedi, with him) for the petitioner in Petition No. 387. Article 379, on which the provisional Parliament's jurisdiction to amend the Constitution is based, not only empowers the said Parliament to exercise the powers of the Parliament but also imposes upon it the obligation to perform all the duties enjoined upon the Parliament by the Constitution. Hence Parliament cannot seek to abridge the rights of property of the citizens guaranteed by Part III. As the present Act contravenes the provisions of Part III, it is void under article 13 (2). In any event, the new articles 31A and 31B curtail the powers of the Supreme Court under articles 32, 132 and 136 and those of the High Court under article 226, and as such, they required ratification under clause (b) of the proviso to article 368 and, not having been ratified, they are void and unconstitutional. They are also ultra vires as they relate to land, a subject matter covered by List II (see item 18) over which the State Legislatures have exclusive power. Parliament cannot make a law validating a law which it had no power to enact. N.P. Asthana (K.B. Asthana, with him) for the petitioners in Petitions Nos. 481 to 484. Article 368 does not confer power on any body to amend the Constitution. It simply lays down the procedure to be followed for amending the Constitution. In this view article 379 does not come into operation at all.
Under article 392 the President himself can alter the Constitution but he cannot authorise the provisional Parliament to do so. S.P. Sinha (Nanak Chand, with him) for the petitioner in Petition No. 485. Article 13(2) is very wide in its scope and it invalidates all laws, past, present and future, which seek to curtail the rights conferred by Part III. It does not exempt laws passed under article 368 from its operation. N.C. Chatterjee (with V.N. Swami for the petitioner in Petition No. 287 and with Abdul Razzak Khan for the petitioner in Petition No. 318). Article 368 must be read subject to article 13(2). Articles 31A and 31B are legislative in character and were enacted in the exercise of the law-making power of the Parliament and not in the exercise of any power to amend the Constitution, and Parliament has no power to validate the laws as it had no power to enact them. N.R. Raghavachari (V.N. Swami, with him) for the petitioner in Petition No. 166. The fundamental rights are supreme and article 13 (2) is a complete bar to any amendment of the rights conferred by Part III. N.S. Bindra (Kahan Chand Chopra, with him) for the petitioner in Petition No. 319. M.L. Chaturvedi for the petitioners in Petitions Nos. 374, 376, 377, 379, 380, 381, 384, 385, 386, 389, 393, 394 and 395. Bishan Singh for the petitioner in Petition No. 418. Abdul Razzak Khan and P.S. Safeer for the petitioner in Petition No. 317. M.C. Setalvad, Attorney-General for India (with G.N. Joshi) for the Union of India, and (with Lal Narain Singh, G.N. Joshi, A. Kuppuswami and G. Durgabai) for the State of Bihar. The donee of the power under article 368 is Parliament. The process of the passage of the Bill indicated in the said article is the same as that of ordinary legislative Bills. The article does not mean that the powers under article 368 are to be exercised by a fluctuating body of varying majority and not by Parliament.
If the constituent authority and the legislative authority are two different entities, the saving clauses in articles 2, 3, 4 and 240 will be meaningless. Under article 379 provisional Parliament can exercise all the powers of Parliament; hence provisional Parliament can act under article 368. "All the powers" in article 379 include power to amend the Constitution and there is no reason to restrict the import of these words by excluding amendment of the Constitution from their ambit. The words "perform all the duties" in that article do not in any manner cut down the power of Parliament under article 379 because article 13 (2) does not impose any duty. There is no conflict between exercising all the powers under article 379 and the prohibition in article 13 (2). No technical meaning should be given to the word "difficulty" in article 392 (1). The adaptation of article 368 is really an adaptation for the removal of difficulties. The adaptation is not of a permanent character. This shows that the adaptation is not an amendment and even if it is an amendment, it is so by way of adaptation. Article 13 (2) prohibits "laws" inconsistent with fundamental rights. It cannot affect article 368 since the word "law" in article 13 (2) refers to ordinary legislative enactments and not constitution making. The argument that the Bill to amend the Constitution should be passed as introduced, without amendments, is fallacious. It cannot be said that the Bill referred to in article 368 has to be dealt with under a procedure different from that laid down for ordinary Bills in articles 107 and 108. Articles 31-A and 31B are not legislative in character. The said articles do not affect the scope of articles 226 and 32, for the power of the Court under the said two articles remains unaltered. What has been done is to alter the content of fundamental rights. P.L. Banerjee, Advocate-General of Uttar Pradesh (U. K.
Misra and Gopalji Mehrotra, with him) for the State of Uttar Pradesh adopted the arguments of the Attorney-General and added that articles 31-A and 31-B do not necessarily stand or fall together; even if 31-B goes, 31-A will remain. T.L. Shevde, Advocate-General of Madhya Pradesh (T.P. Naik, with him) for the State of Madhya Pradesh adopted the arguments of the Attorney-General. The Provisional Parliament is competent to do all that the future Parliament can do. The adaptation under article 392 does not seek to amend article 368. P.R. Das, S.M. Bose, S. Chaudhuri, N.C. Chatterjee, S.K. Dhar and S.P. Sinha replied. 1951. October 5. The Judgment of the Court was delivered by PATANJALI SASTRI J.— These petitions, which have been heard together, raise the common question whether the Constitution (First Amendment) Act, 1951, which was recently passed by the present provisional Parliament and purports to insert, inter alia, articles 31A and 31B in the Constitution of India, is ultra vires and unconstitutional. The petitions arose out of Zemindary Abolition Acts; certain zemindars, aggrieved by those Acts, had challenged them in the High Courts. The main arguments advanced in support of the petitions may be summarised as follows: First, the power of amending the Constitution provided for under article 368 was conferred not on Parliament but on the two Houses of Parliament as a designated body and, therefore, the provisional Parliament was not competent to exercise that power under article 379. … Thirdly, the Constitution (Removal of Difficulties) Order No. 2 made by the President on 26th January 1950, in so far as it purports to adapt article 368 by omitting "either House of" and "in each House" and substituting "Parliament" for "that House", is ultra vires. … Fifthly, the Amendment Act, in so far as it purports to take away or abridge the rights conferred by Part III of the Constitution, falls within the prohibition of article 13 (2). And lastly, the newly inserted articles 31A and 31B, affecting as they do the powers of the courts and relating to land, required ratification under the proviso to article 368 and were beyond the competence of Parliament.
Before dealing with these points it will be convenient to set out here the material portions of articles 368, 379 and 392, on the true construction of which these arguments have largely turned. … (a) articles 54, 55, 73, 162 … specified in Parts A and B of the First Schedule, by resolutions to that effect passed by those Legislatures before the Bill making provision for such amendment is presented to the President for assent. … 392. … * * * * On the first point, it was submitted that whenever the Constitution sought to confer a power upon Parliament, it specifically mentioned "Parliament" as the donee of the power … and thirdly, those that require, in addition to the special majority above-mentioned, ratification by resolutions passed by not less than one-half of the States. In the first place, it is provided that the amendment must be initiated by the introduction of a "bill in either House of Parliament", a familiar feature of parliamentary procedure (cf. …). Apart from the intrinsic indications in article 368 referred to above, a convincing argument is to be found in articles … is not legislation even where it is carried out by the ordinary legislature by passing a bill introduced for the purpose, and that … "is necessary if it is to be capable of doing its work efficiently."(1) These observations have application here. Having provided for the constitution of a Parliament and prescribed a certain procedure for the conduct of its ordinary legislative business, to be supplemented by rules made by each House (article 118), the makers of the Constitution must be taken to have intended Parliament to follow that procedure, so far as it may be applicable, consistently with the express provisions of article 368, when they entrusted to it the power of amending the Constitution. (1) [1915] A.C. 120.
It was to meet these and other difficulties of a like nature in working the Constitution during the transitional period that the framers of the Constitution made the further provision in article 392 conferring a general power on the President to adapt the provisions of the Constitution by suitably modifying their terms. This brings us to the construction of article 392. … and is valid and constitutional. A more plausible argument was advanced in support of the contention that the Amendment Act, in so far as it purports to take away or abridge the rights conferred by Part III, falls within the prohibition of article 13 (2). It was urged that the makers of the Constitution realized the sanctity of the fundamental rights conferred by Part III, and intended to make them immune from interference not only by ordinary laws passed by the legislatures in the country but also from constitutional amendments. It is not uncommon to find in … the framers of the Indian Constitution had the American and the Japanese models before them, and they must be taken to have prohibited even constitutional amendments in derogation of fundamental rights by using aptly wide language in article 13 (2). The argument is attractive, but there are other important considerations which point to the opposite conclusion. Although "law" must ordinarily include constitutional law, there is a clear demarcation between ordinary law, made in the exercise of legislative power, and constitutional law, made in the exercise of constituent power … the State, the executive, the legislature and the judiciary, the distribution of governmental power among them and the definition of their mutual relation. No doubt our constitution-makers, following the American model, have incorporated fundamental rights in Part III … abridgement or nullification of such rights by alterations of the Constitution itself in exercise of sovereign constituent power. That power, though it has been entrusted to Parliament, has been so hedged about with restrictions that its exercise must be difficult and rare. On the other hand, the terms of article 368 are perfectly general and empower Parliament to amend the Constitution without any exception whatever.
Had it been intended to save the fundamental rights from the operation of that provision, it would have been perfectly easy to make that intention clear by adding a proviso to that effect. It only remains to deal with the objections particularly directed against the newly inserted articles 31A and 31B. It was said that they sought to make changes in Ch. 4 of Part V and Ch. 5 of Part VI. It was therefore submitted that the newly inserted articles required ratification under the proviso to article 368. The argument proceeds on a misconception. These articles, so far as they are material here, run thus: … What is excluded is the operation of article 13 read with other relevant articles in Part III; neither the powers of the High Court under article 226 nor those of the Supreme Court under articles 132 and 136 are in any way affected. They remain just the same as they were before: only a certain class of case has been excluded from the purview of Part III and the courts could no longer interfere, not because their powers were curtailed in any manner or to any extent, but because there would be no occasion hereafter for the exercise of their power in such cases. The other objection, that the articles were beyond the competence of Parliament as they relate to land, also fails; as already stated, articles 31A and 31B are essentially amendments of the Constitution, which Parliament alone has the power to enact. The petitions fail and are dismissed with costs. Petitions dismissed. Agent for the Petitioners in Petitions Nos. 371, 372, 382, 383, 388 and 392: I.N. Shroff. Agent for the Petitioners in Petitions Nos. 287, 374 to 381, 393, 394, 395: Rajinder Narain. Agent for the Petitioners in Petitions Nos. 387, 418, 481 to 485, 384, 385, 386 and 389: S.S. Sukla. Agent for the Petitioners in Petition No. 166: M.S.K. Sastri. Agent for the Petitioners in Petitions Nos. 317 and 319: R.S. Narula. Agent for the Petitioner in Petition No. 318: Ganpat Rai. Agent for the respondents: P.A. Mehta.
Annexure-s7
THE JUDGEMENT
BIMAL CHANDRA BASAK
May 17, 1985
Chandmal Chopra & Anr. versus State of West Bengal
The Court: I have heard and disposed of this application on the 13th of May 1985 when I indicated that I shall give my reasons later. Facts: 2.
This is an application under Article 226 of the Constitution of India praying for a Writ of Mandamus directing the State of West Bengal to declare each copy of the Koran, whether in the original Arabic or in its translation in any of the languages, as forfeited to the Government. 3. This application was first moved before Khastgir J. The learned Judge entertained the application and gave directions for notice and for affidavits. Thereafter, for some reason or other, the learned Judge chose not to proceed in this matter any further and released this matter from her list. Such reason cannot be found out from the records of this case, though the learned Judge had chosen to take an unprecedented step by giving an interview to the Press regarding the same, of which I cannot and do not take any notice. The Chief Justice thereafter assigned this matter to me. As the learned Judge after giving directions has chosen not to hear this matter any further and as this matter has been assigned to me, I have recalled all the earlier orders and/or directions passed and heard the matter afresh as a Court Application on the question of the issue of the Rule nisi, if any. Accordingly the petitioner no. 1, who is appearing in person, made submissions and prayed for the issue of a Rule. 4. The learned Advocate General has appeared for the State and, with the leave of this Court, the learned Attorney General has made submissions on behalf of the Union of India. 5. The petitioners have, in this petition, quoted some passages from the English translation of the Koran and thereafter made the following averments: … (paragraph 8). "In this way, the publication of the Koran in the original Arabic as well as in its translations in various languages including Urdu, Hindi, Bengali, English, etc., amounts to commission of offences punishable u/s 153A and 295A of the Indian Penal Code and accordingly each copy of the book must be declared as forfeited by the respondent u/s 95 of the Code of Criminal Procedure, 1973." (paragraph 9).
Submissions - Petitioner 6. The petitioner in his submission has repeated what has been stated in the petition. He has submitted that the provisions of Sections 153A and 295A of the Indian Penal Code are attracted and accordingly the respondent should be directed to take action under Section 95 of the Criminal Procedure Code. He has submitted that the Koran seeks to destroy idols. It encourages crime and invites violence. It is also against morality. It outrages the religious feelings of non-Muslims. It insults all religions excepting Islam. It encourages hatred, disharmony, feelings of enmity between different religious communities in India. 7. The relevant provisions of Section 95 of the Criminal Procedure Code (hereinafter referred to as Cr. P.C.) and Sections 153A, 295 and 295A of the Indian Penal Code (hereinafter referred to as I.P.C.) are set out hereinbelow: Cr. P.C. - Sec. 95: “(1) Where (a) any newspaper, or book, or (b) any document, wherever printed, appears to the State Government to contain any matter the publication of which is punishable under section 124A or section 153A or section 153B or section 292 or section 293 or section 295A of the Indian Penal Code, the State Government may, by notification, stating the grounds of its opinion, declare every copy of the issue of the newspaper containing such matter, and every copy of such book or other document, to be forfeited to Government … (2) In this section and in section 96, (a) “newspaper” and “book” have the same meaning as in the Press and Registration of Books Act, 1867 (25 of 1867); (b) “document” includes any painting, drawing or photograph or other visible representation. (3) No order passed or action taken under this section shall be called in question in any court otherwise than in accordance with the provisions of section 96.” 295A: “…” 8. The petitioner no. 1 has addressed the Court in person and placed the petition and drawn my attention to the relevant provisions of the Act referred to above, and has submitted that it is a fit and proper case where such an order is to be passed against the Government directing them to take action under Section 95 of the Code of Criminal Procedure. Submission - State 9. The learned Advocate General appearing on behalf of the State has placed before me Section 295 of the Indian Penal Code which I have set out above. 10.
The learned Advocate General has submitted that the Koran is a sacred book of the Muslim community and that making an order of the nature prayed for would amount to abolition of this religion. Such a prayer offends the provisions of Section 295 of the I.P.C. and, therefore, the question of invoking the jurisdiction of this Court in respect of Section 295A of the I.P.C. cannot and does not arise. In this connection he has relied on a decision of the Supreme Court in the case of Veerabadram Chettiar - vs - V. Ramaswami Naicker & Ors. reported in A.I.R. 1958 S.C. 1032 at page 1035, paragraph 7. The relevant passage is set out hereinbelow: “The learned Judge in the court below has given much too restricted a meaning to the words ‘any object held sacred by any class of persons’, by holding that only idols in temples or idols carried in processions on festival occasions are meant to be included within those words. There are no such express words of limitation in S. 295 of the Indian Penal Code and in our opinion the learned Judge has clearly misdirected himself in importing those words of limitation. Idols are only illustrative of those words. A sacred book, like the Bible, or the Koran, or the Granth Saheb, is clearly within the ambit of those general words. If the courts below were right in their interpretation of the crucial words in S. 295, the burning or otherwise destroying or defiling such sacred books will not come within the purview of the penal statute. In our opinion, placing such a restricted interpretation on the words of such general import is against all established canons of construction. Any object, however trivial or destitute of real value in itself, if regarded as sacred by any class of persons would come within the meaning of the penal section. Nor is it absolutely necessary that the object, in order to be held sacred, should have been actually worshipped. An object may be held sacred by a class of persons without being worshipped by them.
It is clear, therefore, that the courts below were rather cynical in so lightly brushing aside the religious susceptibilities of that class of persons to which the complainant claims to belong. The section has been intended to respect the religious susceptibilities of persons of different religious persuasions or creeds. Courts have got to be very circumspect in such matters, and to pay due regard to the feelings and religious emotions of different classes of persons with different beliefs, irrespective of the consideration whether or not they share those beliefs, or whether they are rational or otherwise, in the opinion of the court.” Mr. Advocate General has submitted that the Koran has been in existence for a long time. No grievance of the kind the petitioner now seeks to raise has been made at any point of time by anyone. He has submitted that this Court is not entitled to go into this matter as this relates to a question of religion itself. He has further submitted that this is a motivated application with the intention of destroying communal harmony. He has relied on a decision in the case of Public Prosecutor - vs - P. Ramaswami reported in 1966 (1) C.L.J. 672. Submission - Union of India. 12. Mr. Attorney General appearing on behalf of the Union of India, assisted by M.K. Banerjee, Additional Solicitor General, has adopted the submission of Mr. Advocate General and further added as follows: … 13. He has also relied on a decision in the case of Krishna Singh - vs - Mathura & Ors. reported in A.I.R. 1980 S.C. 707 at page 712, paragraph 17. 14. He has also relied on several passages from Fyzee and Mulla’s 18th Edition on Mohammedan Law. 15. He has also relied on a decision in the case of Ramjilal Modi - vs - State of U.P. reported in A.I.R. 1957 S.C. 620, paragraph 9. 16. Next, Mr.
Attorney General has drawn my attention to the Preamble of the Constitution of India and Article 25 thereof, which are set out hereinbelow:

Preamble: WE, THE PEOPLE OF INDIA, having solemnly resolved to constitute India into a SOVEREIGN SOCIALIST SECULAR DEMOCRATIC REPUBLIC and to secure to all its citizens Liberty of thought, expression, belief, faith and worship.

Art. 25: (1) Subject to public order, morality and health and to the other provisions of this Part, all persons are equally entitled to freedom of conscience and the right freely to profess, practise and propagate religion.

17. He has submitted that in view of such provisions of the Constitution the Court has no such power to give any such direction.

18. He has further relied on a passage from Halsbury’s Laws of England (4th Edition, Vol. 18, paragraphs 1692 and 1693).

19. He has further submitted that this is supposed to be a public interest litigation and this Court should be very cautious about the same. In this connection he has drawn my attention to the decision of the Supreme Court in Bandhu Mukti Morcha - vs - Union of India and others reported in 1984(3) S.C. 161 at 231, paragraphs 59 to 67.

Reply

20. The petitioner in his reply repeated his submissions.

Decision

21. Before examining the scope of the contention of the petitioners, it is necessary to ascertain the scope and importance of the Koran as such. It is the basic text of the Muslim religion. Like all other religions it proceeds on the basis that it is the only true religion and that those who do not follow that religion are not the true devotees of God.

22. As observed by the Supreme Court in the case of S. Veerabadram Chettiar - vs - V. Ramaswami Naicker (supra), as followed in the Madras decision of Public Prosecutor vs. Ramaswami (supra), the Koran, like the Bible and the Granth Saheb, is a sacred book. It is an object held sacred by Muslims. Allah is considered as the God.

23.
As pointed out in the Encyclopaedia Britannica, the Koran is the sacred scripture of the religion of Islam. It is a book in the Arabic language containing about 80,000 words. It is composed of 114 suras, or chapters, of varying size. The first sura, entitled “The Opening”, is in the form of a short devotional prayer; it is constantly so used, ceremonially and otherwise, and by comparativists has been called “the Lord’s Prayer of the Muslims”. It is addressed to God. The remainder of the Koran is in the form of an address from God, he either speaking himself, sometimes in the first person, or else through the imperative qul, “say”, which introduces many verses and passages and some suras, ordering that the words that follow be proclaimed. The subject matter is varied; passages of one or several verses, or of an entire sura, deal in diverse ways with many topics. It speaks about the oneness and omniscience and supreme majesty of God. The style, at times fiery, is powerful, the general tone deeply moralist and theocentric; the whole reverberates with a passionate demand for obedience to the will of a transcendent but near and mightily active God.

24. In the faith of Muslims, and according to the theory propounded in the book itself, the Koran is the revealed word of God. This postulates God, and indeed the kind of God who has something to say to us and who takes the initiative in saying it. Religion in this view is not a human searching after God; it is God who acts, and is known because and insofar as, and only as, he chooses to disclose himself.

25. In the Muslim view, God created the universe, ordaining its processes and controlling them. He prescribed a pattern or order, which nature must obey. For man also he ordained a pattern of behaviour, but unlike the rest of the natural world, man was made conscious and free, to choose whether or not he will conform to God’s decrees. There is for mankind a right way to live; it is the Koran that seeks to make this known.

26.
For Muslims the Koran is the ipsissima verba of God himself. It is God speaking to man, not merely in 7th century Arabia to Mohammed, but from all eternity to every man throughout the world, including the individual Muslim as he reads it or devoutly holds it. It is the eternal breaking through into time, the unknowable disclosed, the transcendent entering history and remaining here, available to mortals to handle and to appropriate, the divine become apparent. To memorise it, as many Muslims have ceremonially done, and perhaps even to quote from it, as every Muslim does daily in his formal prayers and otherwise, is to enter into some sort of communion with ultimate reality.

27. There is another aspect of this matter. There are various interpretations of different verses of the Koran. As pointed out by S.D. Collet in The Life and Letters of Raja Rammohan Roy, two verses of the Koran quoted by Raja Rammohan Roy are interpreted differently by some modern scholars. So far as the verse of the Koran under IX.5 is concerned, according to a scholar, it does not refer to a general massacre of all polytheists and idolaters, that is all non-Muslims, but it speaks only of those non-Muslims who were waging war at the time with the Muslims treacherously by breaking a previous agreement.

28. According to Mulla on Mohammedan Law there are four sources of Islamic Law, one of which is the ‘Koran’. The word ‘Islam’ means peace and submission. In its religious sense it denotes submission to the will of God and in its secular sense the establishment of peace. The word ‘Muslim’ in Arabic is the active participle of ‘As-salam’, which is acceptance of the faith and of which the noun of action is ‘Islam’. In English the word ‘Muslim’ is used both as a Noun and an Adjective and denotes both the persons professing faith and something peculiar to Muslims, such as law, culture, etc.
The Muslims believe in the divine origin of their holy book, which according to their belief was revealed to the prophet by the angel Gabriel. The ‘Koran’ is Al-furcan, i.e., one showing truth from falsehood and right from wrong. The Koran contains about 6000 verses but not more than 200 verses deal with legal principles. The portion which was revealed to the prophet at Mecca is singularly free of legal matter and contains the philosophy of life and religion and particularly ‘Islam’. As the Koran is of divine origin, so are the religion and its tenets and the philosophy and the legal principles which the Koran inculcates. The Koran has no earthly source. It was compiled from memory after the prophet’s death from the version of Osman, the third Caliph.

29. It is in the light of the above that one should approach and examine the said book. Some passages containing interpretation of some chapters of the Koran, quoted out of context, cannot be allowed to dominate or influence the main aim and object of this book. It is dangerous for any court to pass its judgement on such a book by merely looking at certain passages out of context.

30. In my opinion the Koran, being a sacred Book and “an object held sacred by a class of persons” within the meaning of Section 295 of the Indian Penal Code, is a book against which no action can be taken under Section 295A. Section 295A is not attracted in such a case. Section 295A has no application in respect of a sacred book which is protected under Section 295 of the I.P.C. Any other interpretation would lead to absurdity. If any offence within the meaning of Section 295 is committed in respect of the Koran, then it is punishable. Such a Book gets protection in view of Section 295A. At the same time, if it is open to take any such action under Section 295A against such a Book, then the protection given under Section 295 will become nugatory and meaningless.

31. Further, as pointed out by the Supreme Court in the case of Ramji Lal Modi - vs - State of U. P.
(supra), Section 295A does not penalise any and every act of insult or attempt to insult the religion or the religious beliefs of a class of citizens, which are not perpetrated with the deliberate and malicious intention of outraging the religious feelings of that class. Insults to religion offered unwittingly or carelessly or without any deliberate or malicious intention of outraging the religious feelings of that class do not come within the scope of the section. It only punishes the aggravated form of insult to religion when it is perpetrated with the deliberate and malicious intention of outraging the religious feelings of that class. I have set out the aim and object of the Koran. In my opinion it cannot be said that the Koran offers any insult to any other religion. It does not reflect any deliberate or malicious intention of outraging the religious feelings of non-Muslims. Isolated passages picked out from here and there and read out of context cannot change the position.

32. The Attorney General is right in his contention that such a construction as suggested cannot be given as this would amount to violation of the Constitution. I have already set out the Preamble to and Article 25 of the Constitution.

33. The Preamble to the Constitution is a part of the Constitution: Keshavananda - vs - Kerala, A.I.R. 1973 S.C. 1461. Accordingly, it is open to the Court to keep the same in mind while considering any provision of the Constitution of India.

34. In my opinion the passing of such order as prayed for would go against the Preamble of the Constitution and would violate the provisions of Article 25 thereof. The Preamble proclaims India to be a secular State. It means that each and every religion is to be treated equally. No preference is to be given to any particular religion. No religion is to be belittled. Liberty of thought, expression, belief, faith and worship are assured.
The Koran, which is the basic text book of Mohammedans, occupies a unique position to the believers of that faith, as the Bible is to the Christians and the Gita, Ramayana and Mahabharata to the Hindus. In my opinion, if such an order is passed, it would take away the secularity of India and it would deprive a section of people of their right of thought, expression, belief, faith and worship. This would also amount to infringement of Article 25, which provides that all persons shall be equally entitled to freedom of conscience and the right freely to profess, practise and propagate religion. Banning or forfeiture of the Koran would infringe that right. Such action would amount to abolition of the Muslim religion itself. The Muslim religion cannot exist without the Koran. The proposed action would take away the freedom of conscience of the people of that faith and their right to profess, practise and propagate the said religion. Such action is unthinkable. The Court cannot sit in judgment on a holy book like the Koran, Bible, Geeta and Granth Saheb.

35. As pointed out in Halsbury, 4th Ed., Vol. 18, on “Foreign Relations Law”, the right of freedom of thought, conscience and religion includes freedom, either alone or in community with others, and in public or private, to manifest religion or belief in worship, teaching, practice and observance. In my opinion, the action proposed will deprive a class of persons of their human rights.

36. There is another aspect of the matter. This sacred book has been in existence for a number of years with its different interpretations and translations. Up to now no one has chosen to challenge the same.

37. For similar reasons I also hold that Section 153A of the Indian Penal Code has no application in the facts of the present case. Apart from anything else, there is no question of forfeiture or banning of the said book on the grounds of disharmony or feelings of enmity or hatred or ill-will between different religions or communities.
This book is not prejudicial to the maintenance of harmony between different religions. Because of the Koran no public tranquility has been disturbed up to now and there is no reason to apprehend any likelihood of such disturbance in future. On the other hand, the action of the petitioners may be said to have attempted to promote, on the grounds of religion, disharmony or feelings of enmity, hatred or ill-will between different religions, i.e., between Muslims on the one hand and non-Muslims on the other, within the meaning of Section 153A. Similarly, in my opinion, it may be said that by this petition the petitioners insult or attempt to insult the Muslim religion and the religious belief of the Muslims within the meaning of Section 295A of the Indian Penal Code. It is an affront to Islam’s Supreme Scriptural Authority. For this reason the contention of the respondents that this application is motivated cannot be completely ruled out.

38. The learned Attorney General was right in making comments regarding the caution to be exercised in entertaining public interest litigation. A Writ petition is a very important proceeding. It is known as a High Prerogative Writ. Article 226 of the Constitution confers a wide power on the High Courts. It is wider than Article 32 itself. The High Court enjoys a jurisdiction which is not enjoyed by an ordinary civil court. In many cases where no remedy is available in an ordinary civil court, the Writ Court is the only forum. It is much more expeditious than an ordinary civil proceeding. However, in my opinion it is the duty of the Court, while entertaining or admitting such an application, particularly a public interest litigation, to be very cautious about the same, particularly where it is a matter of great public interest. In this context reference may be made to the judgment of Pathak J. in the case of Bandhu Mukti Morcha - vs - Union of India (supra). The present case involves the sentiment and religious feelings of a minority community.
The matter involves religious feelings of millions of people not only in India but also outside India. It involves a highly delicate and sensitive issue. The application was entertained and admitted without going into the question of prima facie case and the jurisdiction and power of the Court to entertain this petition. In spite of the same, directions were given for filing of affidavits. This by itself amounts to holding that there is a prima facie case, though this question was not gone into. The Court should be circumspect in such kind of matters and be very cautious about the same. Otherwise, though it might attract cheap publicity, it may cause untold misery and disruption of religious harmony. The High Court should have been spared the embarrassment caused. The petition should have been rejected forthwith and in limine as unworthy of its consideration as soon as it was moved.

39. For the aforesaid reasons I am of the opinion that the Writ Court’s jurisdiction has been wrongly sought to be invoked in this case. No prima facie case has been made out. It is clear that this Court has no power or jurisdiction to pass any such order as prayed for in this application.

40. For the aforesaid reasons this application stands dismissed. No order as to costs.

41. In this connection I record my appreciation of the very frank, fair and sober manner in which this case has been argued by the Attorney General appearing for the Union of India and the Advocate General appearing for the State.

Sd/- (B.C. Basak)

Annexure-s8

Why allow Azaan?

Christianity and Islam have been fabricated to enslave else kill a human being. shout from their mosques, Allahu Akbar (Allah is the greatest)

I would like to ask National Commission for Minorities’ Chairman Wajahat Habibullah to send me a copy of the notice of minutes so that I can also proceed with the inflammatory statement recited every day by Muslims. Azaan incites communal hatred among human races.
If the judiciary allows Muslims to sue Mr. Subramanium, I will lose my faith in the same judiciary which had dismissed my petition no. 15/1993 on the Koran and Azaan. (for details plz click the links below…….)

Now the time has come when you yourself have to decide whether to believe this judiciary, media, and government and commit suicide, or raise your voice against Islam?????

Ayodhya Prasad Tripathi (Press Secretary)
Aryavrt Government, 77 Khera Khurd, Delhi -82
Dated; Friday, July 29, 2011
Ph: (+91)9152579041

TUESDAY, AUGUST 14, 2007

Repatriation of Qaba

IN THE INTERNATIONAL COURT OF JUSTICE, HAGUE

CRIMINAL PROCEEDINGS AGAINST SAUDI ARABIA GOVERNMENT FOR ILLEGAL OCCUPATION UPON ARYANS' TEMPLE, 'KAMESHWAR MAHADEV JYOTIRLING'

COMPLAINT NO. OF 2007

IN THE MATTER OF,
Wrongful dispossession of Aryans from their Jyotirling,

GOVERNMENT OF ARYAVRT THROUGH ITS PRESS SECRETARY, AYODHYA PRASAD TRIPATHI, S/O LATE SHRI BENI MADHAV TRIPATHI, AGED 73 YEARS, R/O 77, KHERA KHURD, DELHI - 110082 (INDIA THAT IS BHARAT) ... COMPLAINANT.

VERSUS

KING Abdullah bin Abdul Aziz Al Saud, S/O King Fahd, Kingdom of Saudi Arabia, Riyadh, ... RESPONDENT

To,
HON'BLE THE PRESIDENT & OTHER JUSTICES OF THE INTERNATIONAL COURT OF JUSTICE,

The complainant, aforesaid, submits as here under,

1. That His Majesty KING Abdullah bin Abdul Aziz Al Saud is custodian of the Qaba Mosque situated in Mecca of his country Saudi Arabia. The Holy Koran published under his guidance writes, in reference to commands of Allah compiled in the Koran, "Truth has (now) arrived, and falsehood perished: For Falsehood is (by its nature) bound to perish." The Koran, 17th Chapter Bani Israil 17:81, Surah 17 Ayet 81; also Koran Surah Al Anbiyaa 21 Ayet 51-70.

2. That by his own admission Aryans' worship temple has forcibly been occupied illegally by Muhammed and is in constant illegal possession of KING Abdullah bin Abdul Aziz Al Saud. 359 idols in its vicinity were demolished by Ali.

3.
That the worship place of Aryans is in protracted abuse in violation of the PREAMBLE of the UNO that has been established, " ..."

4. That it would be in the interest of peace, justice, equal rights of nations and faiths that KING Abdullah bin Abdul Aziz Al Saud may be directed to return Aryans' worship place after re-installing those 359 idols, which were demolished by Ali about 1400 years back.

5. That there is yet another evidence of Qaba being Aryans' place of worship, as under,

Arabic Vedic roots

6. A recent archeological find in Kuwait unearthed a gold-plated statue of the Hindu deity Ganesh. A Muslim resident of Kuwait requested historical research material that can help explain the connection between Hindu civilisation and Arabia. ['Saya-ul-okul' signifies memorable words]

A careful analysis of the above inscription enables us to draw the following conclusions:

1. centers. The belief, therefore, that visiting Arabs conveyed that knowledge to their own lands through their own indefatigable efforts and scholarship is unfounded.

P R A Y E R

It is, therefore, most humbly & respectfully prayed that our Mecca as well as the Jyotirling renamed as Qaba may kindly be directed to be returned to the Aryavrt Government.

(Ayodhya Prasad Tripathi)
Government of Aryavrt, 77 Khera Khurd, Delhi-110082; Phone: +91-9868324025
Dated: Monday, July 23, 2007

DISTRICT ANANTNAG

Date of occurrence  Temple  FIR No.
Polling Station
8-12-1992  One of Gouree Shankar Temples  81/92  Pahalgam
10-12-1992  Two Shiv Jee Temples  278/92  Anantnag
8/9-12-1992  Shiv Jee Temple  45/92  Damhal, Honjipora
9-12-1992  do  46/92
7/8-12-1992  Two Shiva Temples  165/92  Kulgam
Sheevala Temple  166/92
Temples  168/92
Shiva Temples  169/92
Two Temples  172/92
Shivalik Temple  173/92  Kulaam
Shiv Temple  42/92  Achabal
43/92  Do
7-12-1992  80/92  Dooru
84/92
Ganesh Temple  91/92
86/92
93/92
13-12-1992
16-12-1992
171/92
178/92
40/92
167/92
47/92  Hanji pora
163/92

DISTRICT BARAMULLA
185/92  Pattan
186/92
78/92  Panzulla Sumbal
188/92  Oattar
15-12-1992  71/92

DISTRICT SRINAGAR
18-12-1992  Temple, Narayan Bagh  145/92  Ganderbal
ShamShan Bhoomi, Karan Nagar  Karan Nagar
Shiv Vashno Mandir, Bana Mohalla  265/92  S.R. Gunj

DISTRICT BUDGAM
38/92  Chandpora

DISTRICT KUPWARA
-  Wati pora

Atrocities in Kashmir: Destruction of Temples

Annexure-s10

Prafull Goradia vs Union Of India on 28 January, 2011
Bench: Markandey Katju, Gyan Sudha Misra.
[Markandey Katju] J
[Gyan Sudha Misra] J
New Delhi; January 28, 2011

Annexure-s11

PRSEC/E/2012/10265

Dear valued reader,

I have received the message below from URL May peruse and verify yourself.

“Please do not send any kind of religious or Hurtful SMS. Strict Action will be taken by the GOVT if any such SMS being Sent by anyone.”

Azaan broadcast from the pulpit of mosques is religious as well as hurtful to the faiths and deities of non-Muslims; as such I demand strict action by the Government of Sonia. I am requesting you to browse the URLs below. As per the status checked on Wednesday, August 22, 2012, the report from the President Secretariat is as hereunder:

Request/Grievance Status
Registration Number :
Name  M/s Ayodhya Prasad Tripathi
Date of Receipt  27 Jul 2012
Current Status  The petition is transferred
Date of Transfer  31 Jul 2012
Ministry/Department  Ministry of Home Affairs
Officer's Name  Shri. Satpal Chouhan
Designation  Joint Secretary (Coord. & PG)
Address  Room No.
9, North Block, New Delhi
Telephone No.  23092392
jscpg-mha@nic.in

Note: You are requested to further liaise in the matter directly with Joint Secretary (Coord. & PG), Ministry of Home Affairs, Room No. 9, North Block, New Delhi for further information.

Live Status of Registration Number 'PRSEC/E/2012/10265'
M/s Ayodhya Prasad Tripathi
TAKEN UP WITH SUBORDINATE ORGANISATION
Date of Action  09 Aug 2012
HR Division  Smt Rashmi Goel  Joint Secretary  Lok Nayak Bhavan  24633828

Sir,

The telephone no 24633828 of JS Smt Rashmi Goel is not responding to pursue my complaint. May you kindly suggest how I should pursue my case?

Apt

Below is the copy of the complaint.

Your Request/Grievance Registration Number is : PRSEC/E/2012/10265
Dated; Friday, July 27, 2012

THE BEGINNING

INDEX

WRITTEN ARGUMENT IN DEFENSE OF ACCUSED
FIR No. 440/1996, PS ROOP NAGAR, DELHI NORTH
In the matter of: State V/s Ayodhya Prasad Tripathi

Sl No  Details  Annexure No  Pages
1. Argument  1-60
2. Judgment on Private Defense  Annexure-s1  61-68
3. Salaries to Imams  Annexure-s2  69-74
4. Section 197 of Criminal Procedure Code  Annexure-s3  75-79
5. Consequence of NBW against Bukhari  Annexure-s4  80-81
6. Priyadarshini Mattoo case  Annexure-s5  82-85
7. Imams above law  Annexure-s5a  86-92
8. Article 31 omitted  Annexure-s6  93-107
9. Bible and Koran cannot be questioned  Annexure-s7  108-113
10. Abolition of Azaan + Repatriation of Qaba  Annexure-s8  114-132
11. Temple demolition in Kashmir  Annexure-s9  133-135
12. Judgment on Hajj subsidy  Annexure-s10  136-139
13. Censor of Social Websites  Annexure-s11  140-143

(Ayodhya Prasad Tripathi)
Accused in Person
Dated: October 11, 2012
http://www.aryavrt.com/argument-153a-ipc
Graphs

A famous example of a very useful graph is one in which nodes represent cities and the edges represent the distance between two nodes (or cities, for that matter). Such an example can be seen below:

Judging by the image above, it is very easy to understand what it represents and is very easy to read. The distance between Chicago and New York is 791.5 miles, and the distance between New York and Washington DC is 227.1 miles. This is just one simple example of how a graph can be useful, but there are many more. Other examples of graphs being useful include representing a family tree, facebook contacts, even travel routes.

Undirected Graph

When a graph is undirected, the edges can be traversed in both directions.

Directed Graph

When a graph is directed, the edges can be traversed only in the direction they are "pointing to".

Graph implementation in Java

Node.java

import java.util.*;

public class Node {
    private int id;
    private List<Edge> neighbours = new ArrayList<Edge>();

    public int getNodeId() {
        return this.id;
    }

    public void addNeighbour(Edge e) {
        if (this.neighbours.contains(e)) {
            System.out.println("This edge has already been used for this node.");
        } else {
            System.out.println("Successfully added " + e);
            this.neighbours.add(e);
        }
    }

    public void getNeighbours() {
        System.out.println("List of all edges that node " + this.id + " has: ");
        System.out.println("=================================");
        for (int i = 0; i < this.neighbours.size(); i++) {
            System.out.println("ID of Edge: " + neighbours.get(i).getId()
                + "\nID of the first node: " + neighbours.get(i).getIdOfStartNode()
                + "\nID of the second node: " + neighbours.get(i).getIdOfEndNode());
            System.out.println();
        }
        System.out.println(neighbours);
    }

    public Node(int id) {
        this.id = id;
    }
}

Node.java has 3 methods and 1 constructor.

getNodeId() simply returns each node's id.

addNeighbour(Edge e) creates a connection to another node via the edge which is passed as a parameter.
It is done by adding the specified edge to the List of edges in the Node class. Note that there is an if condition that checks whether the specified edge already exists in the current edges of this node.

getNeighbours() is used just for displaying purposes. View the output to see how exactly this method displays the information.

The constructor takes id as a parameter.

Edge.java

public class Edge {
    private Node start;
    private Node end;
    private double weight;
    private int id;

    public int getId() {
        return this.id;
    }

    public Node getStart() {
        return this.start;
    }

    public int getIdOfStartNode() {
        return this.start.getNodeId();
    }

    public Node getEnd() {
        return this.end;
    }

    public int getIdOfEndNode() {
        return this.end.getNodeId();
    }

    public double getWeight() {
        return this.weight;
    }

    public Edge(Node s, Node e, double w, int id) {
        this.start = s;
        this.end = e;
        this.weight = w;
        this.id = id;
    }
}

Edge.java has 6 methods and 1 constructor.

getId() simply returns the id of the current edge.

getStart() returns the Node object from which the edge starts.

getIdOfStartNode() returns the id of the Node object from which the edge starts.

getEnd() returns the Node object that the edge "stops" at.

getIdOfEndNode() returns the id of the Node object that the edge "stops" at.

getWeight() returns the weight of the current Edge object.

The Edge constructor takes 4 parameters and initializes the fields using them.

Graph.java

import java.util.*;

public class Graph {
    private List<Node> nodes = new ArrayList<Node>();
    private int numberOfNodes = 0;

    public boolean checkForAvailability() { // will be used in Main.java
        return this.numberOfNodes > 1;
    }

    public void createNode(Node node) {
        this.nodes.add(node);
        this.numberOfNodes++; // a node has been added
    }

    public int getNumberOfNodes() {
        return this.numberOfNodes;
    }
}

Graph.java has only 3 methods and no constructor.

checkForAvailability() checks if there is more than one node.
If there aren’t any more than 1 node, then a connection cannot be made as a node cannot have an edge towards itself. It has to have a connection with another node. createNode(Node node) takes an argument of type Node and adds that node to the nodes List. After the node has been added, the current graph increments the number of nodes by 1. That way, we can evaluate the checkForAvailability() method to true at some point. getNumberOfNodes() returns the number of nodes. Main.java public class Main { public static void main(String args[]) { Graph graph = new Graph(); Node node1 = new Node(1); // create a new node that contains id of 1 Node node2 = new Node(2); // create a new node that contains id of 2 Node node3 = new Node(3); // create a new node that contains id of 3 graph.createNode(node1); // numberOfNodes should increment by 1 graph.createNode(node2); // numberOfNodes should increment by 1 graph.createNode(node3); // numberOfNodes should increment by 1 Edge e12 = new Edge(node1, node2, 5, 1); // create an edge that connects node1 to node2 and contains weight of 5 Edge e13 = new Edge(node1, node3, 10, 2); // create an edge that connects node1 to node3 and contains weight of 10 if (graph.checkForAvailability()) { // two nodes can be connected via edge node1.addNeighbour(e12); // connect 1 and 2 (nodes) node1.addNeighbour(e13); node1.getNeighbours(); } else { System.out.println("There are less than 2 nodes. Add more to connect."); } } } Main.java has only a main method. A graph is created within the main method. After which, 3 instances of Node are created. Then, these Node instances are added to the graph using the createNode(Node node) method. After that, 2 instances of Edge are created. The first, connects node1 to node 2. The second, connects node1 to node3. After that, there is an if condition that checks if the number of nodes is more than 1 and if it is, add the “Neighbour” to node1. (e12 is the edge that connects node1 and node2.) 
(e13 is the edge that connects node1 and node3.)

Output

Successfully added Edge@15db9742
Successfully added Edge@6d06d69c
List of all edges that node 1 has: 
=================================
ID of Edge: 1
ID of the first node: 1
ID of the second node: 2

ID of Edge: 2
ID of the first node: 1
ID of the second node: 3

[Edge@15db9742, Edge@6d06d69c]

Visualization of the above output:

Question: is the above program producing an undirected or directed graph? If it produces an undirected graph, can you modify the API to produce a directed one? And if it produces a directed graph, can you modify the API to produce an undirected one?

Answer: the Graph above produces a directed graph because, as the name suggests, the arcs are "pointing" to a location. To make it undirected, you would simply need to remove the "arrow" of the arcs and make them simple lines, just like the image below that represents the undirected graph.
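In code terms, dropping the direction comes down to registering the edge on both of its endpoints instead of only one. The snippet below is a minimal, self-contained sketch, not the tutorial's original API: Node and Edge here are simplified stand-ins, and the connectUndirected and otherEnd helpers are hypothetical additions used only to illustrate the idea.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the tutorial's Node and Edge classes.
class Node {
    private final int id;
    private final List<Edge> neighbours = new ArrayList<>();

    Node(int id) { this.id = id; }

    int getNodeId() { return id; }

    void addNeighbour(Edge e) {
        // List.contains uses Object.equals, which is reference equality
        // here, so the same Edge object is never stored twice.
        if (!neighbours.contains(e)) {
            neighbours.add(e);
        }
    }

    List<Edge> getNeighbourEdges() { return neighbours; }
}

class Edge {
    private final Node start;
    private final Node end;

    Edge(Node start, Node end) {
        this.start = start;
        this.end = end;
    }

    // Registering the same Edge object on BOTH endpoints is what makes
    // the connection undirected: either node can reach the other.
    static Edge connectUndirected(Node a, Node b) {
        Edge e = new Edge(a, b);
        a.addNeighbour(e);
        b.addNeighbour(e);
        return e;
    }

    // Given one endpoint, return the node on the other side of the edge.
    Node otherEnd(Node from) {
        return from == start ? end : start;
    }
}

public class UndirectedDemo {
    public static void main(String[] args) {
        Node n1 = new Node(1);
        Node n2 = new Node(2);
        Edge e = Edge.connectUndirected(n1, n2);

        // The edge is reachable from both sides, so it can be traversed
        // in either direction.
        System.out.println(n1.getNeighbourEdges().contains(e)); // true
        System.out.println(n2.getNeighbourEdges().contains(e)); // true
        System.out.println(e.otherEnd(n1).getNodeId());         // 2
    }
}
```

In the directed version above, only node1.addNeighbour(e12) is called, so the edge is reachable from node1 alone; calling addNeighbour on both endpoints (or using a helper like connectUndirected) is all it takes to drop the direction.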
https://javatutorial.net/graphs-java-example
Troubleshooting AllJoyn with Windows 10 Insider Preview Builds

This blog post is a fulfillment of the promises made in the AllJoyn session presented at //build/ 2015: AllJoyn: Building Universal Windows Apps that Discover, Connect, and Interact with Other Devices and Cloud Services Using AllJoyn.

Three major components form AllJoyn UWP apps:

The following diagram shows the architecture of a typical AllJoyn UWP project:

AllJoyn-enabled UWP apps can be either producers (implement and expose interfaces, typically a device), consumers (use interfaces, typically apps), or both. Consumers and producers share the same starting steps, only diverging in the implementation details.

Follow these steps to develop AllJoyn-enabled UWP apps for Windows 10: (explained in detail later in this document)

The Windows 10:

This blog post covers the first two ways - AllJoyn® Studio natively supports querying the network for AllJoyn producers and extracting their XML, as well as uploading Introspection XML files. Learn how to create your own here.

At //build/ 2015, an AllJoyn-enabled toaster device was shown which will serve as the example for this post. This toaster exposes controls for starting and stopping the toasting sequence, setting the "darkness", and notifications when the toast is burnt.
The AllJoyn toaster hardware sample in action

The toaster exposes the following XML:

<node name="/toaster">
  <interface name="org.alljoyn.example.Toaster">
    <annotation name="org.alljoyn.Bus.Secure" value="true"/>
    <description language="en">Example interface for controlling a toaster appliance</description>
    <description language="fr">Interface Exemple de commande d'un appareil de grille-pain</description>
    <property name="Version" type="q" access="read">
      <description>Interface version</description>
      <annotation name="org.freedesktop.DBus.Property.EmitsChangedSignal" value="const"/>
    </property>
    <signal name="ToastBurnt" sessioncast="true">
      <description language="en">Toast is burnt</description>
      <description language="fr">Toast est brûlé</description>
    </signal>
    <method name="StartToasting">
      <description language="en">Start toasting</description>
      <description language="fr">Lancer grillage</description>
    </method>
    <method name="StopToasting">
      <description language="en">Stop toasting</description>
      <description language="fr">Arrêtez de grillage</description>
    </method>
    <property name="DarknessLevel" type="y" access="readwrite">
      <annotation name="org.freedesktop.DBus.Property.EmitsChangedSignal" value="true"/>
      <description language="en">Toasting darkness level</description>
      <description language="fr">Grillage niveau de l'obscurité</description>
    </property>
  </interface>
</node>

Create a Project the way you normally would: click File->New->New Project to begin. Instead of navigating to a Windows Universal Template, select the "AllJoyn App" Template for your target language, which was installed with the Extension. Name your project and choose a file location to begin developing.

Immediately after selecting an AllJoyn App Template, Visual Studio will ask you to select the AllJoyn Interfaces you would like to include in your Project. If you cannot find your AllJoyn device or interface on the network, follow this guide to troubleshoot.
To query your network for exposed interfaces, select "Producers on the network" in the left-hand panel. This will find any AllJoyn producer on the network and list the interfaces they support. Select the interfaces you would like to include in your project. In this case, we are only using the toaster interface, so we select just the "org.alljoyn.example.Toaster" interface. The "Actions" column describes the pending changes to the project, displaying "Add" for newly selected interfaces. Here we have given the interface a friendly name of "ToasterLibrary", which triggers the "Rename" action to be applied.

Alternatively, choose any number of Introspection XML files via the "Browse" button to see their contained interface(s). Navigate to and select the appropriate XML (here, we are using toaster.xml). Once AllJoyn® Studio loads the XML, it will parse the various interfaces and the descriptions contained within, allowing you to select which interfaces you would like to support. Again, the "Actions" column describes the pending changes to the project: here we have chosen to include the "org.alljoyn.example.Toaster" interface and have given it a friendly name of "ToasterLibrary", triggering the "Rename" action to be applied.

After completing these steps, the generated files are automatically added to a C++ Windows Runtime Component with the friendly name from above and added as a reference to the application project. However, the root namespace is still the same "org.alljoyn.example.Toaster" as defined by the interface; any classes that access these components need to have a "using" clause for the component's root namespace.

TIP: Build the generated components now in order to benefit from IntelliSense.

After you have created your AllJoyn App solution, you can always go back and modify the interfaces you are using. From the main menu bar, click "AllJoyn->Add/Remove Interfaces..." to launch the interface manager.
From here, you may click "Browse..." to add more XML files, or de-select existing interface(s). De-selecting an interface updates its action to "Remove", and clicking the OK button removes the associated Windows Runtime Component from your solution. Looking at the Solution Explorer, the Windows Runtime Component has been removed.

When implementing AllJoyn functionality, always be sure to include the "Windows.Devices.AllJoyn" namespace as well as the namespace from the interface you want to use. The code generated for the interfaces contains a watcher and a consumer class, used to find and then control a producer.

Implement the watcher by creating a new AllJoynBusAttachment, initializing the watcher with that AllJoynBusAttachment, registering for the watcher finding a producer (the "Added" event), and then starting the watcher. The ToasterWatcher_Added event triggers whenever the watcher finds a producer; use this event to register a consumer. Note that Visual Studio will generate the necessary shell method through the Quick Actions.

Filling in the shell code with the correct logic enables registering the consumer. Note that this event triggers every time the watcher discovers a producer: if you expect to find multiple producers, keep a data structure of consumers, as there will be one consumer for each producer.

Register events for the various signals the producer will emit – property-changed signals are direct members of the consumer class, but other signals are members of the Signals class (just as before, use the Quick Actions to generate the shell code for these events). Use the consumer object to read and write properties as well as call methods.

Producers implement a service that exposes their interfaces. To implement a producer, first create a class for its service.
Add the namespace for the interface to the "using" statements, then inherit the shell interface generated for the service and use the Quick Action to implement the class. From here, the Quick Action will fill in the necessary components. All the necessary parts of the code are ready to function, but you still need to implement the actual logic for each method and property call. Taking SetDarknessLevelAsync as an example, we use a task to create a success result; in case of failure, return ToasterSetDarknessLevelResult.CreateFailureResult(). Implement the rest of the method and property calls to finish the service.

Creating the producer is straightforward: create a new AllJoynBusAttachment, initialize a producer with that AllJoynBusAttachment, initialize the newly created service for the producer, then start the producer. Since the service contains the majority of the logic, the main functionality left to implement is sending the signals for property changes and discrete signals for non-state events. Implement these as necessary through method calls.

If you've completed all of the instructions in this document correctly, you are ready to start writing AllJoyn code in your app. For reference, please look to the AllJoyn Universal Windows Apps samples on the Microsoft sample GitHub for AllJoyn producers and AllJoyn consumers. For a detailed walkthrough of how to create an AllJoyn app, please watch AllJoyn session 623 from //build 2015: "AllJoyn: Building Windows apps that discover, connect and interact with other devices and cloud services using AllJoyn".

Note that AllJoyn communication between two UWP apps on the same machine is disabled by Windows policy unless they have enabled a loopback exception, such as when being directly deployed from Visual Studio. For detailed instructions on enabling loopback exemption, see here.
In addition, here are some resources that will help you get up to speed with AllJoyn and AllJoyn support in Windows 10:

Happy Coding,
Brian

Hi Brian, nice article ... let me add two other useful links for XML introspection:

Thanks, Paolo.

Hi Brian, great article – it is the extension I've been waiting for, so I don't have to create the AllJoyn project on my own. Have I overlooked something? In the case the introspection XML is changed, the sources for creating the interface DLLs are not rebuilt automatically, and I haven't seen a way to re-run alljoyncodegenerator.exe to get new interface DLLs. In case you want to create several consumers/producers, you have to extend/modify the .xml file several times to meet the requirements, but currently it seems that the project has to be rebuilt whenever the XML receives a modification. Even using the menu item AllJoyn -> Add/Remove Interfaces and re-selecting the same interface will not recreate the interface files. If you have more than a single <interface name="a.b.c"> in the .xml, you may switch to another and switch back to the original one to get the new interface DLLs. Any idea to make this simpler?

regards
andy

Thanks for the questions, Andy. If you have modified your XML and would like to re-generate the code, currently you will need to use the Add/Remove interfaces menu to take the "Remove" action (as shown in the blog post). This will remove the generated code from the project; re-launch the Add/Remove interfaces menu to browse for your modified XML file, which will generate code with your new changes.

Keep in mind that you may use the Browse menu to add several XML files. Say you have three interfaces that you would like to use: com.contoso.Foo, com.contoso.Bar, and com.contoso.Baz. These may all be in the same XML file "interfaces.xml" or separate files (e.g., foo.xml, bar.xml, and baz.xml).
Later, you could add a fourth, com.contoso.Qux, by creating a qux.xml file and adding it through the Add/Remove interfaces menu, or by adding it to the interfaces.xml file and using the Browse button to re-select interfaces.xml. This is true even for "child" interfaces in a hierarchy, e.g. com.contoso.Foo.Norf: the definition for com.contoso.Foo.Norf can be in its own norf.xml file or combined with the foo.xml file. Regardless, AllJoyn Studio will generate a separate C++ project for each interface that you add to the project, allowing you to modify, add, and remove interfaces in isolation. As a final note, remember to build the C++ projects after code generation to enable IntelliSense to function correctly.

Hi Brian, thanks for answering on how to update the code if the introspection .xml changes. Do you know whether there will be project settings in the future for creating command-line-based applications which can be run in the background (e.g. controlled by a service instance), or as a service itself? I would be interested in C++ and C# command-line-based applications for controlling a lot of test machines from a central point.

thanks in advance
andy

Hi Brian, I felt your article is pretty helpful and impressive, but I faced an error "The path is not a legal form" while adding interfaces with the toaster.xml suggested in this article. How can I figure out this problem to move on to the next step?

Hello Brian, I have a problem with signals and multiple producers, and while searching for a solution I found this article and hope you can help me. I have two producers (each with one signal exposed) and my "watcher_added" event is fired twice. I have wrapped the consumer object in a "device" object, and in this object I register to the signal event. The problem is that the same signal event is triggered for both producers. I can't figure out what the problem is and can't find an example on the internet with multiple producers and signals.
Do you have any thoughts, or do you know if there is an example somewhere?

Regards,
Michel

@AndreasDietrich: we built AllJoyn Studio for the creation of Universal Windows Apps. Depending on the need, in the future we may expand our code generation solution to include Classic Windows Apps. We appreciate the feedback about this use-case!

@oniono: we have been unable to reproduce this locally. Could you describe your setup? (Windows version, VS version, XML description, area path, etc.)

@Michel: This is likely due to our use of multipoint sessions. However, each object on the AllJoyn bus presents a unique identifier, so you can always determine exactly which producer emitted a signal. For multiple producers, using a Dictionary for your consumers might be beneficial:

```csharp
Dictionary<string, ToasterConsumer> toasterDictionary;
```

Here the key will be equal to the unique identifier for the producer. Then in the "Added" event:

```csharp
ToasterConsumer toasterConsumer = toasterJoinSessionResult.Consumer;
toasterConsumer.DarknessLevelChanged += ToasterConsumer_DarknessLevelChanged;
toasterConsumer.Signals.ToastBurntReceived += Signals_ToastBurntReceived;
toasterDictionary.Add(args.UniqueName, toasterConsumer);
```

Finally, in the signal callback use the following to filter to the consumer you want to update/use:

```csharp
toasterDictionary[args.MessageInfo.SenderUniqueName]
```

@oniono: I had the same behavior even when I tried to load a well-formatted XML file. The problem was with Visual Studio not being able to access the folder or create files. I launched Visual Studio as administrator and could create my Windows Runtime Component correctly (and then came back to the safe mode). Does it fix your problem?

Thanks a lot Brian for this extension and tutorial. It's really helpful.

@BrianDRockwell I read your response to @Michel but it's still not clear to me how to do the following: I have multiple devices that each have multiple producers.
For instance, multiple thermostats that each have multiple producers such as:

1. Battery level
2. Setpoint
3. Current temperature
...

Each thermostat also has a Name, a Manufacturer name, a Node Id, etc. How do I recreate this data structure in an app? AllJoyn Explorer seems to do this just fine, but I'm struggling to replicate it. In fact, I can't even find the name and manufacturer for each node. Any hint will be valued. Thanks.

Hi Brian, thanks for your article. I also attended your session during the Allseen summit. I bought two Lifx light bulbs for testing purposes and used your article to create a simple Universal App to control both lights. Everything went fine controlling the first light, but if I modify the code to manage several lights, I get a 0x9040 error in return from the JoinSessionAsync method, meaning ER_BUS_OBJ_ALREADY_EXISTS, when I try to establish the session to the second light. Is there something I missed?

Regards,
Mathieu

Thanks for the article. How do you deal with strings that can contain a null value? In our scenario a null value is important; however, I can't "emit a property changed" or "get" a value that is null.

Are there any code generation templates or utilities available to generate the introspection XML from a class? This introspection XML feels redundant. Ideally, we could mark up our classes with attributes or generate the XML from classes.

For the particular product I'm working on (and I suspect any product I may ever want to create using this technology), I need the flexibility to distribute a group of services across one or more PCs and devices. Ideally I would architect one UI that pulls together all of these services, and a common code base that ran on all the devices. This works nicely except in the case of the "one" in "one or more devices". As people just get started with our product line, I need a service running on the local PC and the UI to manage it.
From what I read, I'm not permitted to develop such a solution with the exception of testing in which case I can enable a loopback exception. Why is Windows enforcing this policy of disallowing multiple portions of a distributed app to exist on the same PC? Is there some method of architecting such a solution that I've overlooked?
https://channel9.msdn.com/Blogs/Internet-of-Things-Blog/Using-the-AllJoyn--Studio-Extension
In this article I want to talk about a recent password strength checker that I built for my open source application SafePad. First of all we have a public enumeration that contains the password score results.

I am writing this short article as every time I need to serialize a C# object into XML I keep on forgetting how to do it (probably as I am getting old), so I thought I would post a snippet here, more for my own benefit as I am forgetful.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;

namespace HauntedHouseSoftware.SecureNotePad.DomainObjects
{
    public sealed class ApplicationSettings
    {
        public int WindowPositionX { get; set; }
        public int WindowPositionY { get; set; }
        public int WindowWidth { get; set; }
        public int WindowHeight { get; set; }
        public FormWindowState FormWindowState { get; set; }
    }
}
```

First, let's start with a simple class that we want to serialize into XML. The class above, ApplicationSettings, was taken from my open source application SafePad. This simple class contains some window position settings that need to be serialized to XML.

The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. For example, the Levenshtein distance between "kitten" and "sitting" is 3, since three edits change one into the other, and there is no way to do it with fewer than three edits.

A phonetic algorithm is an algorithm for indexing words by their pronunciation. Most phonetic algorithms were developed for use with the English language; consequently, applying the rules to words in other languages might not give a meaningful result.
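The "kitten"/"sitting" example above can be checked with the classic Wagner-Fischer dynamic-programming algorithm. A short Python sketch for illustration (the surrounding posts use C#; this code is not from the blog):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions or
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (free if equal)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

The row-by-row formulation keeps memory at O(len(b)) instead of building the full matrix.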
https://stephenhaunts.com/2014/01/
I would like to use different Vue layout files depending on the platform used (desktop / mobile). I thought it might be possible by modifying the routes:

```js
component: () => {
  this.$q.platform.is.mobile
    ? import('layouts/MobileLayout.vue')
    : import('layouts/DesktopLayout.vue')
}
```

but it seems that $q is not accessible from `this` in the routes. I was thinking that it'd be as simple as this, but it seems it isn't. In the end, do I just have to use a single layout and use v-if all over to choose what to display?

I found a post on this forum asking the same thing, and the answer just says that "you can use multiple layouts, you are not limited to using just one", which is true if the path determines which layout to use. But how do I use the same path but choose which layout to use depending on platform? Thanks in advance.

- rstoenescu (Admin): You're using an ES6 arrow function, so it's natural that "this" does not refer to the same thing as in a normal function. So use a normal function.

@rstoenescu Unfortunately, it seems that `this` is undefined even when using normal functions:

```js
component: function () {
  return this.$q.platform.is.mobile
    ? import('layouts/MobileLayout.vue')
    : import('layouts/DesktopLayout.vue')
},
```

- rstoenescu (Admin): Are you on SSR, or what build mode are you using? Need more details in order to help you.

Maybe like this:

```js
import { Platform } from 'quasar'

component: () => {
  return Platform.is.mobile
    ? import('layouts/MobileLayout.vue')
    : import('layouts/DesktopLayout.vue')
}
```

Make the layout a Vue dynamic component, then switch it based on the platform.

@rstoenescu This is just the default SPA. `this` in the routes.js file does not seem to point to Vue's `this`, so I can't access $q. I have found a way, by using one layout but with a couple of v-ifs, choosing which to display based on the platform, but maybe there's a better way to do this… I just thought.
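The approach from the thread can be reduced to a small, testable helper that maps a platform object to a layout path. This is a plain Node sketch, not Quasar code: the object shape only mirrors Quasar's `Platform.is`, and the route wiring is shown in a comment.

```javascript
// Decide which layout file a route should load, based on the platform flags.
function pickLayout(platform) {
  return platform.is.mobile
    ? 'layouts/MobileLayout.vue'
    : 'layouts/DesktopLayout.vue';
}

// In routes.js (with Platform imported from 'quasar') this could be used as:
//   { path: '/', component: () => import('src/' + pickLayout(Platform)) }
// Note that bundlers need at least a static prefix in dynamic import paths.

console.log(pickLayout({ is: { mobile: true } }));   // layouts/MobileLayout.vue
console.log(pickLayout({ is: { mobile: false } }));  // layouts/DesktopLayout.vue
```

Keeping the decision in one function makes it easy to unit-test without booting a Quasar app.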
https://forum.quasar-framework.org/topic/2649/choosing-different-layout-vue-files-depending-on-platform
CEWE Stiftung & Co KGaA
European Mid-Cap Internet
Q1 2015: mixed bag, though core is strong
20 May 2015 | BUY | Price target: EUR 68.00

Summary: Following Q1 2015 results we update our model and reduce our EPS estimates by 1-4% to reflect lower growth in the Commercial online printing division, although we still expect 17% sales CAGR over the estimate years. On adjusted estimates, our new price target of EUR68, the average of DCF and CFRoEV 2016E, offers 17% upside potential and supports our Buy rating. However, less than 25% upside potential means that we remove CEWE from our Alpha list.

Q1 2015 results - mixed bag: Positives were a) better-than-expected growth in the Photofinishing division (+7.6% yoy) and a reduction in its losses by half to EUR1.1m, and b) stable underlying profits in the Retail division despite ongoing optimisation of its Polish business, and more clarity on restructuring costs, with EUR0.6m booked in Q1 and no extra costs expected. However, the slowdown in Commercial online printing is negative news. While growth is still solid at 10%, the continuous slowdown indicates the shift from offline to online is taking longer than anticipated and is one of the main reasons for the reduction in our estimates.

Outlook: Management reiterated its 2015 guidance of EUR32m-38m of EBIT versus EUR33m in 2014, or a EUR2m increase on the 2014 guidance. We view this as rather conservative, given that no Photokina (imaging fair) this year already implies EUR1m lower opex, and losses in Q1 were reduced by almost EUR1m yoy. Thus we are at the upper end of the guidance and 4% ahead of consensus.

Changes in estimates: We reduce our estimates to reflect slower growth in Commercial online printing, though at EUR81m we are slightly above the guided EUR80m of sales in this division. We trim our earnings expectations for this division, leading to a 1-2% reduction in group EBIT. We reduce EPS by 1-4% across the estimate years.
Valuation: With reduced estimates our new EUR68 price target (from EUR71) offers 17% upside potential. We note the >3% dividend yield and that we are 5%/11% ahead of consensus EPS 2015/16E. As Q2 is generally loss-making, with sales shifting from Q2 towards Christmas, we do not view Q2 results as a potential catalyst for the stock. However, the CMD at the Dresden online printing production site at the end of September is likely to be a catalyst, with more focus on efforts since the Saxoprint acquisition and details of measures to improve efficiency and cost optimisation at this division.

20 May 2015 | BUY
Current price / price target: EUR n/a / EUR 68.00 (XETRA close)
Market cap (EUR m): 417
Reuters: CWCG.DE; Bloomberg: CWC GY

Changes made in this note
Rating: Buy (no change)
Price target: EUR68.00 (EUR71.00)
Estimates changes 2015E/2016E/2017E (old / % change): Sales, EBIT, EPS
Source: Berenberg estimates

Share data
Shares outstanding (m): 7
Enterprise value (EUR m): 426
Daily trading volume: 19,927

Key data
Price/book value: 2.2
Net gearing: -4.6%
CAGR sales: n/a
CAGR EPS: n/a

Y/E, EUR m (2011 2012 2013 2014 2015E 2016E 2017E): Sales, EBITDA, EBIT, Net profit, Y/E net debt (net cash), EPS (reported), EPS (recurring), CPS, DPS
Gross margin: 63.0% 63.4% 64.6% 68.9% 69.4% 69.5% 69.4%
EBITDA margin: 13.6% 13.0% 12.5% 12.6% 13.4% 14.2% 15.0%
EBIT margin: 6.4% 5.7% 5.4% 6.2% 6.8% 7.4% 8.4%
Dividend yield: 4.5% 4.7% 4.2% 3.1% 3.0% 3.3% 3.9%
ROCE: 18.1% 14.9% 12.5% 14.3% 15.5% 16.1% 19.1%
EV/sales, EV/EBITDA, EV/EBIT, P/E
Cash flow RoEV: 12.3% 10.9% 9.8% 7.2% 6.0% 8.3% 10.3%
Source: Company data, Berenberg

Interactive model: click here to explore (* there may be a delay for the new estimates to be updated on the interactive model)

Anna Patrice, CFA - Analyst

BUY - Investment thesis

CEWE is a high-quality company with a strong management track record.
This is evident in the shift from analogue to digital and in the leading market position gained in the digital photofinishing and photobook market in western Europe. The Photofinishing division is set to benefit from an increasing share of more-profitable photogift products, supporting margin expansion. The Commercial online printing division is a key growth driver, but is currently loss-making, given its expansion and marketing costs. It is, however, set to break even in 2015/16E and contribute 10% to group EBIT by 2017E. Our valuation is an average of DCF and CFRoEV using 2016E.

Business description: photo services provider.
Current price / price target: EUR n/a / EUR n/a (XETRA close); market cap (EUR m): n/a; EV (EUR m): 426
Non-institutional shareholders: 27.4%; Neumüller heirs: 2.5%; free float: 70.1%
Trading volume: 19,927
Share performance: high 52 weeks EUR n/a; low 52 weeks EUR n/a
Performance relative to SXXP / SDAX: 1mth 3.6% / 1.5%; 3mth -0.3% / -5.1%; 12mth 0.8% / -6.0%

Profit and loss summary (EUR m, 2013-2017E): Revenues, EBITDA, EBITA, EBIT, Associates contribution, Net interest, Tax, Minorities, Net income adj., EPS reported, EPS adjusted, Year-end shares, Average shares, DPS

Growth and margins (2013-2017E):
Revenue growth: 5.7% -2.3% 2.9% 3.7% 3.9%
EBITDA growth: 1.1% -0.9% 9.3% 9.5% 9.7%
EBIT growth: -0.4% 12.8% 12.6% 11.7% 18.8%
EPS adj. growth: 31.0% -20.0% 18.5% 14.0% 20.0%
FCF growth: -39.2% 204.9% -98.0% n/a 25.8%
EBITDA margin: 12.5% 12.6% 13.4% 14.2% 15.0%
EBIT margin: 5.4% 6.2% 6.8% 7.4% 8.4%
Net income margin: 4.0% 4.1% 4.7% 5.2% 6.0%
FCF margin: 1.9% 6.5% 0.1% 4.9% 6.0%

Valuation metrics (2013-2017E): P/adjusted EPS, P/book value; FCF yield: 4.4% 9.4% 0.2% 6.6% 8.3%; dividend yield: 4.2% 3.1% 3.0% 3.3% 3.9%; EV/sales, EV/EBITDA, EV/EBIT, EV/FCF, EV/cap. employed

Cash flow summary (EUR m, 2013-2017E): Net income, Depreciation, Working capital changes, Other non-cash items, Operating cash flow, Capex, FCFE, Acquisitions/disposals, Other investment CF, Dividends paid, Buybacks/issuance, Change in net debt, Net debt, FCF per share

Key ratios (2013-2017E):
Net debt / equity: 16.2% -11.4% -4.6% -11.7% -20.0%
Net debt / EBITDA: n/a
Avg cost of debt: 4.8% 6.3% 4.0% 4.0% 4.0%
Tax rate: 22.6% 32.1% 31.0% 30.0% 30.0%
Interest cover: n/a
Payout ratio: 50.2% 53.0% 48.9% 47.9% 47.0%
ROCE: 12.5% 14.3% 15.5% 16.1% 19.1%
Capex / sales: 6.9% 7.4% 8.3% 6.8% 6.3%
Capex / depreciation: 98.1% 115.9% 125.0% 99.5% 95.7%

Key risks to our investment thesis:
- Q4 accounts for most annual earnings, thus presenting execution risk and making CEWE dependent on a single quarter.
- Additional investments in the Commercial online printing division to support market share gains in a structurally changing printing market could push the break-even point beyond 2015E and dilute margins in 2016E.
- A weaker macro environment and consumer spending in eastern Europe and Scandinavia, where CEWE is present with retail operations, and pricing pressure on photo hardware.
- The underperforming Polish business has led to EUR3m losses in 2014E, and restructuring costs are not excluded.

Anna Patrice, CFA - Analyst

Q1 2015 - mixed bag

Positives

Photofinishing sales growth of 7.6% to EUR75.5m was better than expected, though management cautioned it was partially due to easy comps (Q1 2014: sales declined 1% yoy), and its EBIT losses have halved to EUR1.1m. This is positive as it demonstrates steady growth, eases investors' concern about the sustainability of CEWE's core business, and demonstrates an improving mix (ASP was up 8.6% yoy), supportive to the divisional earnings in absolute terms and as a percentage of sales.

The Retail division still suffered from business realignment and thus an 18.5% sales decline to EUR13.4m, and posted the same losses as last year of EUR0.6m, adjusted for EUR0.6m of one-offs.
The management indicates that no more one-offs are expected for the restructuring in Poland, and thus we expect losses to diminish in this division from -EUR2.9m last year to -EUR2.2m in the current financial year. We view Q1 2015 results as a positive, as uncertainty regarding the Retail restructuring is easing and underlying earnings are relatively stable.

Negatives

Commercial online printing reported a 9.6% increase in sales to EUR17.9m, with the EBIT loss flat yoy at -EUR1.3m. Management expects a further reduction in losses in 2015 and a contribution to earnings from 2016E onwards. While the expected reduction in losses is positive, we had been expecting higher organic growth and better earnings development in 2015/16E. Now it seems that the transition from offline to online, although it is taking place, is a rather slow process. According to management, CEWE is able to gain market share in Germany and has higher growth rates outside Germany, albeit from a small base. We still view commercial online printing as a massive opportunity for CEWE in the long term and expect CEWE to at least double its earnings with market share gains and a market shift to online. However, we view this slowdown in growth as negative and one of the main reasons for today's earnings downgrades. We also see risk to the company guidance of EUR100m of sales from commercial online printing by 2016E, though we are not sure if this guidance is reflected in consensus and thus do not see risk of consensus downgrades.
Q1 reported versus estimates (CEWE Stiftung, in EUR m):

                              Q1 2014   Q1 2015 reported   Q1 2015 Berenberg E
Sales yoy                      -4.8%         3.7%               2.3%
Photofinishing yoy             -0.8%         7.6%               5.0%
Online printing yoy              n/a         9.6%              10.0%
Retail yoy                    -33.2%       -18.5%             -17.0%
EBIT margin                    -3.9%        -3.1%              -3.4%
Photofinishing EBIT margin     -3.3%        -1.5%              -2.1%
Net profit margin (reported)   -4.1%        -3.4%              -3.5%

Source: Company reports, Berenberg estimates

Change in estimates, valuation

Following Q1 results we adjust our estimates to reflect:
- lower growth in the Commercial online printing division, with EUR95m of sales by 2016E, down from our old estimate of EUR101.5m and below the company guidance of EUR100m;
- slightly lower EBIT in the Photofinishing segment in 2016E due to the Photokina imaging fair that takes place every other year;
- increased capex to EUR44.5m in 2015E, up from our old estimate of EUR40.5m, due to guided investments in HQ of up to EUR10m in 2015E; and
- all in all, we reduce our EBIT estimates by 0.4%/2.4%/1.3% for 2015/16/17E and reduce EPS by 1%/4%/2% over the same period.

We still believe that the company guidance is relatively conservative, and thus we are at the upper end of the guidance and 4% ahead of consensus EBIT for 2015E. We are 5% and 11% ahead of consensus EPS estimates for 2015/16E.

Berenberg versus consensus estimates (last fiscal year / current year / next fiscal year / next fiscal year +1):

Berenberg: Sales yoy n/a / 2.9% / 3.7% / 3.9%; EBITDA yoy n/a / 9.3% / 9.5% / 9.7% (as % of sales: 12.6% / 13.4% / 14.2% / 15.0%); EBIT yoy n/a / 12.6% / 11.7% / 18.8% (as % of sales: 6.2% / 6.8% / 7.4% / 8.4%); Net income yoy n/a / 18.5% / 14.0% / 20.0%

Consensus: Sales yoy n/a / 1.9% / 2.5% / 2.8%; EBITDA yoy n/a / 7.7% / 5.8% / 5.4% (as % of sales: 12.6% / 13.4% / 13.8% / 14.1%); EBIT yoy n/a / 8.6% / 8.6% / 8.6% (as % of sales: 6.2% / 6.6% / 7.0% / 7.4%); Net income yoy n/a / 11.7% / 7.3% / 11.0%

Difference versus consensus: Sales 0.0% / 1.0% / 2.1% / 3.2%; EBITDA 0.0% / 1.5% / 5.0% / 9.3%; EBIT 0.0% / 3.7% / 6.7% / 16.7%; Net income 0.0% / 6.0% / 12.6% / 21.8%; EPS 0.0% / 4.9% / 11.2% / 20.3%

Source: Berenberg estimates, Bloomberg

Division sales and EBIT breakdown (EUR m; columns 2009 2010 2011 2012 2013 2014 2015E 2016E 2017E):

Divisional sales (EUR m): Photofinishing, Retail, Online, Total
Divisional sales shares: Photofinishing 77.2% 75.2% 75.8% 70.8% 70.0% 73.7% 73.0% 71.8% 70.4%; Retail 22.8% 24.8% 23.9% 20.7% 18.8% 12.8% 11.9% 11.1% 10.4%; Online 0.0% 0.0% 0.3% 8.5% 11.1% 13.5% 15.1% 17.1% 19.3%
Divisional sales growth: Photofinishing -2.5% 6.2% 5.8% 1.1% 4.5% 2.8% 2.0% 1.9% 1.8%; Retail -2.2% 18.5% 1.3% -6.4% -3.7% -33.4% -5.0% -3.0% -3.0%; Online na na na n/a 39.1% 17.9% 15.4% 17.3% 17.3%; Total -2.4% 9.0% 5.0% 8.1% 5.7% -2.3% 2.9% 3.7% 3.9%
Divisional EBIT (EUR m): Photofinishing, Retail, Online, Total
Divisional EBIT share: Photofinishing 93.9% 93.9% 101.4% 110.7% 111.9% 117.2% 112.6% 101.1% 89.4%; Retail 6.1% 6.1% 8.0% 5.8% 0.2% -8.7% -6.0% -1.9% 0.2%; Online 0.0% 0.0% -9.4% -16.5% -12.1% -8.6% -6.6% 0.8% 10.4%
Divisional EBIT margin: Photofinishing 8.4% 8.5% 8.6% 8.9% 10.5% 10.2% 10.5% 10.4% 10.7%; Retail 1.8% 1.7% 2.2% 1.6% 0.1% -4.3% -3.5% -1.3% 0.2%; Online 0.0% 0.0% n/a -11.1% -7.2% -4.1% -3.0% 0.3% 4.5%; Total 6.9% 6.8% 6.4% 5.7% 6.6% 6.4% 6.8% 7.4% 8.4%

Source: Company reports, Berenberg estimates

Our new price target of EUR68 is an average of DCF (EUR74.4) and CFRoEV 2016E (EUR62.2) and indicates 17% upside potential.
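The note's two valuation legs and their average can be reproduced with a short script. This is an illustrative Python sketch: only the 7.0% hurdle rate and the two per-share results (EUR74.4 DCF, EUR62.2 CFRoEV) come from the report; the EUR35m cash-flow input below is a made-up placeholder, not Berenberg's figure.

```python
def cfroev_fair_ev(adj_cash_flow_after_tax: float, hurdle_rate: float) -> float:
    """CFRoEV capitalises the adjusted after-tax cash flow as a perpetuity:
    fair EV = adjusted cash flow after tax / hurdle rate."""
    return adj_cash_flow_after_tax / hurdle_rate

# Placeholder input: EUR35m of adjusted cash flow at the report's 7.0% hurdle rate
fair_ev = cfroev_fair_ev(35.0, 0.07)  # EUR500m illustrative fair enterprise value

# Per-share results taken from the report, averaged into the price target
dcf_per_share = 74.4
cfroev_per_share = 62.2
price_target = (dcf_per_share + cfroev_per_share) / 2
print(round(price_target, 1))  # 68.3, rounded to the EUR68 target
```

The fair EV would then be bridged to equity value by deducting net debt and pension provisions, as the CFRoEV table below lays out.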
DCF valuation model

DCF model (EUR m), rows: Operating profit (NOPAT), Change in working capital, Depreciation, Investments, Net cash flow, Present value, Terminal value
WACC: 8.5% (all periods)
Long-term growth rate: 1.5%
Total present value: 508 (thereof terminal value: 59%)
Net debt at year start: -20
Investments, minorities & others: -2
Equity value: 526
No. of outstanding shares: 7.1
Discounted cash flow per share (EUR): 74.4

WACC derived from: interest costs, pre-tax 4.0%; tax rate 31.0%; interest costs, after taxes 2.8%; required ROE 8.5%; risk premium 6.0%; risk-free rate (10y bond) 2.5%; beta.

Sensitivity analysis (DCF): fair value per share (EUR) for long-term growth rates of 0.5% / 1.0% / 1.5% / 2.0% / 2.5% against a range of WACC assumptions.
Source: Berenberg estimates

CFRoEV valuation method - CEWE Stiftung & Co KGaA

Fair value = (cash flow return / hurdle rate) = (adj. EBIT after taxes / hurdle rate)

Derivation (by business year end):
EBIT
Depreciation of fixed assets
Amortisation of intangible assets
Maintenance capex
= Adjusted EBIT
Taxes (normalised tax rate)
Minorities
= Adjusted cash flow after tax
Hurdle rate: 7.0% (all years)
= Fair EV
Net debt (cash)
Pension provisions
Accumulated dividends outstanding
= Fair market capitalization
Number of shares (million); number of options / dilutive shares; fully diluted no. of shares
= Fair value per share (EUR)
Current value per share premium (-) / discount (+) in %: -11% / 6% / 33%
Source: Berenberg estimates

Financials

Profit and loss account (year-end December, EUR m; 2011-2017E), rows: Sales, Own work capitalised, Total sales, Other operating income, Material expenses, Personnel expenses, Other operating expenses, Unusual or infrequent items, EBITDA, Depreciation, EBITA, Amortisation of goodwill, Amortisation of intangible assets, Impairment charges, EBIT, Interest income, Interest expenses, Other financial result, Financial result, Income on ordinary activities before taxes, Extraordinary income/loss, EBT, Taxes, Net income from continuing operations, Income from discontinued operations (net of tax), Net income, Minority interest, Net income (net of minority interest)
EBITDA margin: 13.6% 13.0% 12.5% 12.6% 13.4% 14.2% 15.0%
EBIT margin: 6.4% 5.7% 5.4% 6.2% 6.8% 7.4% 8.4%
Tax rate: 36% 29% 23% 32% 31% 30% 30%
Source: Company data, Berenberg estimates

Balance sheet (year-end December, EUR m; 2011-2017E), rows: Intangible assets, Property, plant and equipment, Financial assets, Fixed assets, Inventories, Accounts receivable, Other current assets, Liquid assets, Deferred taxes, Deferred charges and prepaid expenses, Current assets, TOTAL; Shareholders' equity, Minority interest, Long-term debt, Pension provisions, Other provisions, Non-current liabilities, Short-term debt, Accounts payable, Advance payments, Other liabilities, Deferred taxes, Other accruals, Current liabilities, TOTAL
Source: Company data, Berenberg estimates

Cash flow statement (EUR m; 2011-2017E), rows: Net profit/loss, Depreciation of fixed assets (incl.
leases) Amortisation of goodwill Amortisation of intangible assets Other Cash flow from operations before changes in w/c Change in inventory Change in accounts receivable Change in accounts payable Change in other working capital Change in working capital Cash flow from operating activities Maintenance capex Cash flow from operating activities after maintenance Capex, excluding maintenance Payments for acquisitions Financial investments Income from asset disposals Cash flow from investing activities Cash flow before financing Increase/decrease in debt position Purchase of own shares Capital measures Dividends paid Others Effects of exchange rate changes on cash Cash flow from financing activities Increase/decrease in liquid assets Liquid assets at end of period Source: Company data, Berenberg estimates 9 10 Growth rates yoy (%) E 2016E 2017E Net sales 5.0 % 8.1 % 5.7 % -2.3 % 2.9 % 3.7 % 3.9 % EBITDA -3.5 % 3.9 % 1.1 % -0.9 % 9.3 % 9.5 % 9.7 % EBIT 5.1 % 0.5 % 6.2 % 0.4 % 10.6 % 9.8 % 15.3 % Net income 35.4 % 2.1 % 11.7 % 0.8 % 18.5 % 14.0 % 20.0 % EPS reported 40.6 % 1.4 % 11.9 % -6.4 % 18.5 % 14.0 % 20.0 % EPS recurring 30.1 % 1.4 % 31.0 % % 18.5 % 14.0 % 20.0 % Source: Company data, Berenberg estimates 10 11 Ratios Ratios E 2016E 2017E Asset utilisation efficiency Capital employed turnover Operating assets turnover Plant turnover Inventory turnover (sales/inventory) Operational efficiency Operating return 55.3% 49.4% 46.0% 47.4% 45.0% 48.5% 53.1% Total operating costs / sales 86.6% 87.2% 87.8% 87.6% 86.7% 86.0% 85.2% Sales per employee EBITDA per employee EBIT margin 6.4% 5.7% 5.4% 6.2% 6.8% 7.4% 8.4% Return on capital EBIT/ Y/E capital employed 26.2% 18.3% 17.4% 21.2% 20.7% 23.0% 27.4% EBIT / avg. capital employed 25.9% 21.2% 17.8% 20.4% 22.2% 23.0% 27.4% EBITDA/ Y/E capital employed 55.4% 41.6% 40.3% 43.1% 40.7% 44.3% 48.8% EBITDA / avg. 
capital employed 54.6% 48.3% 41.1% 41.4% 43.6% 44.4% 48.7% Return on equity Net profit / Y/E equity 15.3% 14.4% 14.8% 12.3% 13.6% 14.2% 15.5% Recurring net profit / Y/E equity 15.3% 14.4% 14.8% 12.3% 13.6% 14.2% 15.5% Net profit / avg. equity 15.3% 15.0% 15.4% 13.5% 14.0% 14.8% 16.3% Recurring net profit / avg. equity 15.3% 15.0% 15.4% 13.5% 14.0% 14.8% 16.3% Security Net debt (if net cash=0) Debt / equity -5.5% 20.3% 16.2% -11.4% -4.6% -11.7% -20.0% Net gearing -5.5% 20.3% 16.2% -11.4% -4.6% -11.7% -20.0% Interest cover EBITDA / interest paid Altman's z-score Dividend payout ratio 46% 51% 50% 53% 49% 48% 47% Liquidity Current ratio Acid test ratio Free cash flow Funds management Avg. working capital / sales 8.0% 6.7% 7.4% 8.0% 7.9% 8.8% 8.8% Cash flow / sales 5.4% 5.4% 5.0% 5.0% 4.8% 6.1% 7.0% Free cash flow/sales 6.4% 3.4% 1.9% 6.5% 0.1% 4.9% 6.0% Inventory processing period (days) Receivables collection period (days) Payables payment period (days) Cash conversion cycle (days) Trade creditors / trade debtors 115.8% 141.6% 113.9% 114.0% 102.2% 102.2% 102.2% Other Interest received / avg. cash 1.4% 1.5% 1.0% 1.2% 1.0% 1.5% 2.0% Interest paid / avg. debt 5.2% 7.6% 4.8% 6.3% 4.0% 4.0% 4.0% Capex / dep'n 90.5% 62.6% 98.1% 115.9% 125.0% 99.5% 95.7% Cost per employee Capex / sales 6.5% 4.6% 6.9% 7.4% 8.3% 6.8% 6.3% Maint. capex / sales 5.8% 5.3% 5.3% 5.4% 6.6% 5.4% 5.0% Cash flow Cash ROCE 21.8% 20.0% 16.6% 16.5% 15.5% 19.2% 22.7% Free cash flow 14.8% 8.3% 4.4% 9.4% 0.2% 6.6% 8.3% Source: Company data, Berenberg estimates 11 12 Please note that the use of this research report is subject to the conditions and restrictions set forth in the General investment stment-related disclosures and the Legal disclaimer at the end of this document. For analyst certification and remarks regarding g foreign investors and country-specific disclosures, please refer to the respective paragraph at the end of this document. 
Disclosures in respect of section 34b of the German Securities Trading Act (Wertpapierhandelsgesetz, WpHG)

Company: CEWE Stiftung & Co KGaA. Disclosures: no disclosures.

(1) Joh. Berenberg, Gossler & Co. KG (hereinafter referred to as the "Bank") and/or its affiliate(s) was Lead Manager or Co-Lead Manager over the previous 12 months of a public offering of this company.
(2) The Bank acts as Designated Sponsor for this company.
(3) Over the previous 12 months, the Bank and/or its affiliate(s) has effected an agreement with this company for investment banking services or received compensation or a promise to pay from this company for investment banking services.
(4) The Bank and/or its affiliate(s) holds 5% or more of the share capital of this company.
(5) The Bank holds a trading position in shares of this company.

Historical price target and rating changes for CEWE Stiftung & Co KGaA in the last 12 months (full coverage): initiation of coverage on 25 February with a Buy rating; Buy reiterated on 28 June and in May.

Berenberg Equity Research ratings distribution, and in proportion to investment banking services, as of 1 April 2015: Buy, Sell and Hold percentages by rating category.

Valuation basis/rating key
The recommendations for companies analysed by Berenberg's Equity Research department are made on an absolute basis, for which the following three-step rating key is applicable:
Buy: Sustainable upside potential of more than 15% to the current share price within 12 months;
Sell: Sustainable downside potential of more than 15% to the current share price within 12 months;
Hold: Upside/downside potential regarding the current share price limited; no immediate catalyst visible.
NB: During periods of high market, sector, or stock volatility, or in special situations, the recommendation system criteria may be breached temporarily.
Competent supervisory authority
Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin, Federal Financial Supervisory Authority), Graurheindorfer Straße 108, Bonn, and Marie-Curie-Str., Frankfurt am Main, Germany.

General investment-related disclosures
Joh. Berenberg, Gossler & Co. KG (hereinafter referred to as the "Bank") has made every effort to carefully research all information contained in this financial analysis. The information on which the financial analysis is based has been obtained from sources which we believe to be reliable such as, for example, Thom. The companies analysed by the Bank are divided into two groups: those under full coverage (regular updates provided); and those under screening coverage (updates provided as and when required at irregular intervals).

The functional job title of the person/s responsible for the recommendations contained in this report is Equity Research Analyst unless otherwise stated on the cover. The following internet link provides further remarks on our financial analyses.

Legal disclaimer
This document has been prepared by Joh. Berenberg, Gossler & Co. KG (hereinafter referred to as the "Bank"). This document does not claim completeness regarding all the information on the stocks, stock markets or developments referred to in it. On no account should the document be regarded as a substitute for the recipient procuring information for himself/herself or exercising his/her own judgement. The Bank and/or its employees accept no liability whatsoever for any direct or consequential loss or damages of any kind arising out of the use of this document or any part of its content. The Bank and/or its employees may hold, buy or sell positions in any securities mentioned in this document, derivatives thereon or related financial products.
The Bank and/or its employees may underwrite issues for any securities mentioned in this document, derivatives thereon or related financial products, or seek to perform capital market or underwriting services.

Analyst certification
I, Anna Patrice, hereby certify that all of the views expressed in this report accurately reflect my personal views about any and all of the subject securities or issuers discussed herein. In addition, I hereby certify that no part of my compensation was, is, or will be, directly or indirectly related to the specific recommendations or views expressed in this research report, nor is it tied to any specific investment banking transaction performed by the Bank or its affiliates.

Third-party research disclosures
Company: CEWE Stiftung & Co KGaA. Disclosures: no disclosures.
(1) Berenberg Capital Markets LLC owned 1% or more of the outstanding shares of any class of the subject company by the end of the prior month.*
(2) Over the previous 12 months, Berenberg Capital Markets LLC has managed or co-managed any public offering for the subject company.*
(3) Berenberg Capital Markets LLC is making a market in the subject securities at the time of the report.
(4) Berenberg Capital Markets LLC received compensation for investment banking services in the past 12 months, or expects to receive such compensation in the next 3 months.*
(5) There is another potential conflict of interest of the analyst or Berenberg Capital Markets LLC, of which the analyst knows or has reason to know at the time of publication of this research report.
* For disclosures regarding affiliates of Berenberg Capital Markets LLC please refer to the "Disclosures in respect of section 34b of the German Securities Trading Act (Wertpapierhandelsgesetz, WpHG)" section above.

Copyright
The Bank reserves all the rights in this document. No part of the document or its content may be rewritten, copied, photocopied or duplicated in any form by any means or redistributed without the Bank's prior written consent.

May 2013, Joh. Berenberg, Gossler & Co. KG
Suppose you want to test whether something you're doing is having any effect. You take a few measurements and you compute the average. The average is different from what it would be if what you're doing had no effect, but is the difference significant? That is, how likely is it that you might see the same change in the average, or an even greater change, if what you're doing actually had no effect and the difference is due to random effects?

The most common way to address this question is the one-sample t test. "One sample" doesn't mean that you're only taking one measurement. It means that you're taking a set of measurements, a sample, from one thing. You're not comparing measurements from two different things.

The t test assumes that the data are coming from some source with a normal (Gaussian) distribution. The Gaussian distribution has thin tails, i.e. the probability of seeing a value far from the mean drops precipitously as you move further out. What if the data are actually coming from a distribution with heavier tails, i.e. a distribution where the probability of being far from the mean drops slowly?

With fat-tailed data, the t test loses power. That is, it is less likely to reject the null hypothesis, the hypothesis that the mean hasn't changed, when it should. First we will demonstrate by simulation that this is the case, then we'll explain why this is to be expected from theory.

Simulation

We will repeatedly draw a sample of 20 values from a distribution with mean 0.8 and test whether the mean of that distribution is not zero by seeing whether the t test produces a p-value less than the conventional cutoff of 0.05. We will increase the thickness of the distribution tails and see what that does to our power, i.e. the probability of correctly rejecting the hypothesis that the mean is zero.
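A single run of that recipe can be sketched as follows (a minimal sketch, not code from the original post; the seed is arbitrary and only makes the draw reproducible):

```python
from scipy.stats import norm, ttest_1samp

# One trial: draw 20 normal values centered at 0.8, then test H0: mean = 0.
y = norm.rvs(loc=0.8, size=20, random_state=42)
stat, p = ttest_1samp(y, 0)

# With this seed the p-value lands well under 0.05, so the trial counts
# as a correct rejection of the null hypothesis.
print(p < 0.05)
```

The simulation below simply repeats this trial a thousand times for each tail weight and records the fraction of rejections.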
We will fatten the tails of our distribution by generating samples from a Student t distribution and decreasing the number of degrees of freedom: as degrees of freedom go down, the weight of the tail goes up. With a large number of degrees of freedom, the t distribution is approximately normal. As the number of degrees of freedom decreases, the tails get fatter. With one degree of freedom, the t distribution is a Cauchy distribution.

Here's our Python code:

```python
from scipy.stats import t, ttest_1samp

n = 20
N = 1000

for df in [100, 30, 10, 5, 4, 3, 2, 1]:
    rejections = 0
    for _ in range(N):
        y = 0.8 + t.rvs(df, size=n)
        stat, p = ttest_1samp(y, 0)
        if p < 0.05:
            rejections += 1
    print(df, rejections/N)
```

And here's the output:

```
100 0.917
30 0.921
10 0.873
5 0.757
4 0.700
3 0.628
2 0.449
1 0.137
```

When the degrees of freedom are high, we reject the null about 90% of the time, even for degrees of freedom as small as 10. But with one degree of freedom, i.e. when we're sampling from a Cauchy distribution, we only reject the null around 14% of the time.

Theory

Why do fatter tails lower the power of the t test? The t statistic divides the sample mean by s/√n, where s is the sample standard deviation. Samples from a heavy-tailed distribution routinely contain extreme values, and those outliers inflate s far more than they shift the sample mean, so the t statistic shrinks and falls short of the rejection threshold more often. In the extreme case of one degree of freedom, the Cauchy distribution, the population variance is not even finite, so the sample standard deviation can be arbitrarily large.
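The denominator effect can be seen directly, without running any t tests. The following sketch (not from the original post) compares the typical sample standard deviation of near-normal samples (t with 100 degrees of freedom) against Cauchy samples (t with 1 degree of freedom):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 10_000

# Sample standard deviation of many size-20 samples from each distribution.
s_normalish = rng.standard_t(df=100, size=(reps, n)).std(axis=1, ddof=1)
s_cauchy = rng.standard_t(df=1, size=(reps, n)).std(axis=1, ddof=1)

# The heavy-tailed samples show a much larger typical spread, which
# inflates the denominator of the t statistic.
print(np.median(s_normalish))  # close to 1
print(np.median(s_cauchy))     # typically several times larger
```

Dividing by a much bigger s drags the t statistic, and with it the rejection rate, down.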
One of the major gaps for me was the <use> element, as most SVG icon systems are built with <use>. I asked Michael if he thought better support might be coming for some of these features, but he showed me a much better way of working with it, circumventing this method entirely. We'll go over this technique so that you can get started writing scalable SVG icon systems in React, as well as some tricks I'd propose could work nicely, too.

Note: It's worth saying that <use> support was recently improved, but I've noticed it's spotty at best and there are other routing and XML issues. We'll show you another, cleaner way here.

What is <use>?

For those not familiar with how SVG icon systems are typically built, it works a little like this. The <use> element clones a copy of any other SVG shape element with the ID you reference in the xlink:href attribute, and lets you manipulate it without reiterating all of the path data. You may wonder why one wouldn't just use an SVG as an <img> tag. You could, but then every icon would be an individual request and you wouldn't have access to change parts of the SVG, such as the fill color. Using <use> allows us to keep the path data and basic appearance of our icons defined in one place so that they can be updated once and change everywhere, while still giving us the benefit of updating them on the fly.

Joni Trythall has a great article about <use> and SVG icons, and Chris Coyier wrote another awesome article here on CSS-Tricks as well. Here's a small example if you'd like to see what the markup looks like:

See the Pen bc5441283414ae5085f3c19e2fd3f7f2 by Sarah Drasner (@sdras) on CodePen.

Why bother with SVG icons?

Some of you at this point might be wondering why we would use an SVG icon system rather than an icon font to begin with. We have our own comparison on that subject. Plus there are a ton of people writing and speaking about this right now. Here are some of the more compelling reasons, in my mind:

- …
If you're like me and updating an enormous codebase, where in order to move over from an icon font to SVG you'd have to update literally hundreds of instances of markup, I get it. I do. It might not be worth the time in that instance. But if you're rewriting your views and updating them with React, it's worth revisiting the opportunity here.

Tl;dr: You don't need <use> in React

After Michael patiently listened to me explain how we use <use> and had me show him an example icon system, his solution was simple: it's not really necessary. Consider this: the only reason we were defining icons to then reuse them (usually as <symbol>s in <defs>) was so that we didn't have to repeat ourselves and could just update the SVG paths in one spot. But React already allows for that. We simply create the component:

```jsx
// Icon
const IconUmbrella = React.createClass({
  render() {
    return (
      <svg className="umbrella" xmlns="http://www.w3.org/2000/svg" width="32" height="32"
           viewBox="0 0 32 32" aria-labelledby="title">
        <title id="title">Umbrella Icon</title>
        <path ... />
      </svg>
    )
  }
});

// which makes this reusable component for other views
<IconUmbrella />
```

See the Pen SVG Icon in React by Sarah Drasner (@sdras) on CodePen.

And we can use it again and again, but unlike the older <use> way, we don't have an additional HTTP request.

Two SVG-ish things you might notice from the above example. One, I don't have this kind of output:

```
<?xml version="1.0" encoding="utf-8"?>
<!-- Generated by IcoMoon.io -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
```

Or even this on the SVG tag itself:

```
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" …
```

That's because I've made certain to optimize my SVGs with SVGOMG or SVGO before adding the markup everywhere. I strongly suggest you do as well, as you can reduce the size of your SVG by a respectable amount. I usually see percentages around 30%, but they can go as high as 60% or more. Another thing you may notice is I'm adding a title and an ARIA tag.
This is going to help screen readers speak the icon for people who are using assistive technologies.

```jsx
<svg className="umbrella" xmlns="http://www.w3.org/2000/svg" width="32" height="32"
     viewBox="0 0 32 32" aria-labelledby="title">
  <title id="title">Umbrella Icon</title>
```

Since this id has to be unique, we can pass props to our instances of the icon and it will propagate to both the title and the aria attribute, like so:

```jsx
// App
const App = React.createClass({
  render() {
    return (
      <div>
        <div className="switcher">
          <IconOffice iconTitle="animatedOffice" />
        </div>
        <IconOffice iconTitle="orangeBook" bookfill="orange" bookside="#39B39B" bookfront="#76CEBD" />
        <IconOffice iconTitle="biggerOffice" width="200" height="200" />
      </div>
    )
  }
});

// Icon
const IconOffice = React.createClass({
  ...
  render() {
    return (
      <svg className="office" xmlns="http://www.w3.org/2000/svg" width={this.props.width}
           height={this.props.height} viewBox="0 0 188.5 188.5" aria-labelledby={this.props.iconTitle}>
        <title id={this.props.iconTitle}>Office With a Lamp</title>
        ...
      </svg>
    )
  }
});

ReactDOM.render(<App />, document.querySelector("#main"));
```

The best part, perhaps

Here's a really cool part of this whole thing: aside from not needing additional HTTP requests, I can also completely update the shape of the SVG in the future without any need for markup changes, since the component is self-contained. Even better than that, I don't need to load the entire icon font (or SVG sprite) on every page. With all of the icons componentized, I can use something like webpack to "opt in" to whatever icons I need for a given view. With the weight of fonts, and particularly heavy icon font glyphs, that's a huge possibility for a performance boon.

All of that, plus: we can mutate parts of the icon on the fly with color or animation in a very simple way with SVG and props.

Mutating it on the fly

One thing here you might have noticed is we're not yet adjusting it on the fly, which is part of the reason we're using SVG in the first place, right?
We can declare some default props on the icon and then change them, like so:

```jsx
// App
const App = React.createClass({
  render() {
    return (
      <div>
        <IconOffice />
        <IconOffice width="200" height="200"/>
      </div>
    )
  }
});

// Icon
const IconOffice = React.createClass({
  getDefaultProps() {
    return {
      width: '100',
      height: '200'
    };
  },
  render() {
    return (
      <svg className="office" width={this.props.width} height={this.props.height}>
        <title id="title">Office Icon</title>
        ...
      </svg>
    )
  }
});

ReactDOM.render(<App />, document.querySelector("#main"));
```

See the Pen SVG Icon in React with default props by Sarah Drasner (@sdras) on CodePen.

Let's take it a step further, and change out some of the appearance based on the instance. We can use props for this, and declare some default props. I love SVG because we now have a navigable DOM, so below let's change the color of multiple shapes on the fly with fill. Keep in mind that if you're used to dealing with icon fonts, you're no longer changing the color with color, but rather with fill instead. You can check the second example below to see this in action: the books have changed their color.

I also love the ability to animate these pieces on the fly. Below we've wrapped the icon in a div to animate it very easily with CSS (you may need to hit rerun to see the animation play):

See the Pen SVG Icon in React with default props and animation by Sarah Drasner (@sdras) on CodePen.
```jsx
// App
const App = React.createClass({
  render() {
    return (
      <div>
        <div className="switcher">
          <IconOffice />
        </div>
        <IconOffice bookfill="orange" bookside="#39B39B" bookfront="#76CEBD" />
        <IconOffice width="200" height="200" />
      </div>
    )
  }
});

// Icon
const IconOffice = React.createClass({
  getDefaultProps() {
    return {
      width: '100',
      height: '200',
      bookfill: '#f77b55',
      bookside: '#353f49',
      bookfront: '#474f59'
    };
  },
  render() {
    return (
      <svg className="office" xmlns="http://www.w3.org/2000/svg" width={this.props.width} height={this.props.height} viewBox="0 0 188.5 188.5">
        <path fill={this.props.bookfill} d="…" />
        <path fill={this.props.bookside} d="…" />
        <path fill={this.props.bookfront} d="…" />
        <path className="cls-7" d="M60.7 69.8h38.9v7.66H60.7z"/>
        <path className="cls-5" d="M60.7 134.7h38.9v7.66H60.7z"/>
        ...
      </svg>
    )
  }
});

ReactDOM.render(<App />, document.querySelector("#main"));
```

```scss
.switcher .office {
  #bulb {
    animation: switch 3s 4 ease both;
  }
  #background {
    animation: fillChange 3s 4 ease both;
  }
}

@keyframes switch {
  50% { opacity: 1; }
}

@keyframes fillChange {
  50% { fill: #FFDB79; }
}
```

One of my awesome coworkers at Trulia, Mattia Toso, also recommended a really nice, much cleaner way of declaring all of these props. We can reduce the repetition of this.props here by declaring a const for each prop we use, and then simply apply the variable instead:

```jsx
render() {
  const { height, width, bookfill, bookside, bookfront } = this.props;
  return (
    <svg className="office" xmlns="http://www.w3.org/2000/svg" width={width} height={height}>
      ...
      <path fill={bookside} d="…" />
      <path fill={bookfront} d="…" />
```

We can also make this even more awesome by declaring propTypes for the props we are using. PropTypes are super helpful because they are like living docs for the props we are reusing.

```jsx
propTypes: {
  width: string,
  height: string,
  bookfill: string,
  bookside: string,
  bookfront: string
},
```

That way if we use them improperly, like in the example below, we will get a console error that won't stop our code from running, but alerts other people we might be collaborating with (or ourselves) that we're using props incorrectly. Here, I'm using a number instead of a string for my props.
```jsx
<IconOffice bookfill={200} />
```

And I get the following error:

See the Pen SVG Icon in React with spread with error by Sarah Drasner (@sdras) on CodePen.

Even more slender with React 0.14+

In newer versions of React, we can reduce some of this cruft and simplify our code even more, but only if it's a very "dumb" component, e.g. it doesn't take lifecycle methods. Icons are a pretty good use case for this, since we're mostly just rendering, so let's try it out. We can be rid of React.createClass and write our components as simple functions. This is pretty sweet if you've been using JavaScript for a long time but are less familiar with React itself: it reads like the functions we're all used to. Let's clean up our props even further and reuse the umbrella icon just as we would on a website.

```jsx
// App
function App() {
  return (
    <div>
      <Header />
      <IconUmbrella />
      <IconUmbrella umbrellafill="#333" />
      <IconUmbrella umbrellafill="#ccc" />
    </div>
  )
}

// Header
function Header() {
  return (
    <h3>Hello, world!</h3>
  )
}

// Icon
function IconUmbrella(props) {
  const umbrellafill = props.umbrellafill || 'orangered'
  return (
    <svg className="umbrella" xmlns="http://www.w3.org/2000/svg" width="32" height="32" viewBox="0 0 32 32" aria-labelledby="title">
      <title id="title">Umbrella</title>
      <path fill={umbrellafill} d="…" />
    </svg>
  )
}

ReactDOM.render(<App />, document.querySelector("#main"));
```

See the Pen SVG Icon in React by Sarah Drasner (@sdras) on CodePen.

SVG icon systems are beautifully simple and easily extendable in React, have fewer HTTP requests, and are easy to maintain in the future, since we can completely update the output without any repetitive markup changes.

Uh, so you're recommending using JavaScript to insert an entire SVG literally every single time that an icon is used? So, if you have a table with 100 records, and each record has a pencil icon, a trash icon, and a status icon, you're going to have JavaScript literally insert 300 SVGs, each complete with all of the path information for that icon?
And how, exactly, is this better than <use> (let the browser do the duplicating internally instead of emulating it with JS) or even an old-school background-image SVG spritesheet (forget about duplicating anything, just render an image that's already in memory, simply at a different offset)? I mean, I get it, it makes sprites easy to customize, but I simply don't see this ever scaling to any kind of real use. I hope I never have to maintain a codebase that decided to wrap all of its individual sprites in JavaScript components.

No, this is not what I'm recommending. I'm showing you how to use SVG icon systems if you're using React already, not to convert a view that's not in React to React in order to use SVG icons. I also explain why we're not using <use> in this instance in the article. And also why, if you're using React the way that I'm describing, you don't have to load every instance of every SVG icon into the page.

After re-reading the criticism brought up by Agop about 15 times now, I'm not sure Sarah actually addressed it. As I am reading, Agop seems to be asking how valid this sort of mechanism would be in a real-world website. The example provided, which I'm sure we've all designed before, is a tabular set of records with icons for indicating actions (edit, delete, etc.). This is a usage example where sprites or even font icons shine. How well would this article's solution function here? It seems like this method would require the SVG to be rendered uniquely for every iteration, so there would be a scaling issue if your website / app / whatever was heavily reliant on SVG-generated icons as a UI/UX element. Or is the argument that one would simply not use React for a situation such as the one presented by Agop?

Just as food for thought here, GitHub recently switched to SVG icons. And as an aside, I'm a little concerned about the tone here. There are ways to bring up concerns without a rude tone.
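The markup-weight question being debated in this thread can be sketched with plain strings. This is a toy back-of-the-envelope model, not a benchmark: the path data, icon counts, and the one-definition approximation for the sprite are all made up for illustration.

```javascript
// Toy comparison: how much markup a page carries when an icon with a long
// path is inlined N times, versus referenced N times with <use>.
const pathData = 'M10 10 L22 10 L22 22 L10 22 Z '.repeat(20); // stand-in for real path data

const inlineIcon = `<svg viewBox="0 0 32 32"><path d="${pathData}"/></svg>`;
const useIcon = '<svg viewBox="0 0 32 32"><use xlink:href="#trash"/></svg>';

const N = 100;
const inlineTotal = inlineIcon.length * N;
// Approximate the sprite as one full definition plus N small references.
const useTotal = inlineIcon.length + useIcon.length * N;

console.log({ inlineTotal, useTotal });
// <use> grows by a small constant per instance; inlining repeats the full
// path every time (though gzip narrows the gap considerably in practice).
```

This only measures raw markup size, not parse or render cost, which is what the jsperf discussion further down is about.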
This is really about <svg><use> vs <svg><path>. It would be interesting to create some TEST PAGES! 300 icons sounds like a reasonable number to test. Maybe 10 unique icons, used 30 times each.

Page 1: SVG sprite with <symbol>s for each unique icon, and <use> used all 300 times to draw them
Page 2: All 300 icons individually drawn with <svg><path> ...

But then what are the questions? Maybe… "Uh, why would anyone recommend doing that?"

Ok, let's say you have different icons for edit, post, delete. If you're using <use> in an SVG icon, you're still rendering the path data in the shadow DOM, even with <use>. If you're using an icon font, you're rendering all of the glyphs for the whole set onto that view whether you need them or not. This is typically many, many more icons than you need to render that table.

Each technique comes with its correct uses and overhead, and it is the developer's job to pick the tool accordingly. It depends on how many icons your site has and how big that icon font would be. It also depends on how many times you're using the icons on a given view. I would say the table is actually a smaller use-case across the web than using icons in a menu or a view, and each of these would warrant different techniques. In the case of the table, it may be that an SVG sprite is better, if you don't need to change the color. If you need to change the color or animate it, that wouldn't be a great use case anymore. We, as web developers, should be using the right tool for the job, but we should understand the parameters around each before deciding.

Agop mentions: "And how, exactly, is this better than use (let the browser do the duplicating internally instead of emulating it with JS)" – which is addressed first and foremost in the article, which is why I didn't go into more detail here: it's already covered.
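Sarah's point about only shipping the icons a view actually needs can be modeled in a few lines of plain JavaScript. The icon names and markup below are invented for illustration; in a real build, per-icon modules and webpack imports would do this selection automatically.

```javascript
// Toy model of opting in to a subset of icons for a view, instead of
// shipping the whole set (which is what an icon font forces you to do).
const allIcons = {
  edit:   '<svg viewBox="0 0 32 32"><path d="M4 28 L28 4"/></svg>',
  trash:  '<svg viewBox="0 0 32 32"><path d="M8 8 h16 v20 h-16 Z"/></svg>',
  status: '<svg viewBox="0 0 32 32"><circle cx="16" cy="16" r="12"/></svg>',
  share:  '<svg viewBox="0 0 32 32"><path d="M8 16 L24 8 M8 16 L24 24"/></svg>',
};

function iconsForView(names) {
  const picked = {};
  for (const name of names) {
    if (!(name in allIcons)) throw new Error('unknown icon: ' + name);
    picked[name] = allIcons[name];
  }
  return picked;
}

// The records table only needs three icons; "share" never ships with it.
const tableIcons = iconsForView(['edit', 'trash', 'status']);
console.log(Object.keys(tableIcons));
```

The same idea is what makes componentized icons attractive: unused icons simply never enter the bundle for that view.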
I also explain in the article that with something like webpack you wouldn't need to load all of the SVG and JS data into every view. One thing to understand here is that the article is not prescribing that we throw out icon fonts or SVG <use> in order to create new SVGs in React; it's proposing a solution for the lack of support of <use> in React and how to overcome that in order to create an icon system.

This isn't valid anymore. I think as of 0.14 or something like that. See <use> usage within React in my jsperf tests below.

Agop, I understand your concerns about using JavaScript to create views that could and should have already been created in HTML. One thing that is great about JS (and React) is that all this can be rendered with JS on the server. This is a win for reusable code and fast server rendering. Using the technique explained in this article doesn't prevent you from server rendering.

I have recently done something very similar, but instead of wrapping the SVG in JS, I got React to import the SVG file. This requires more webpack setup (svg-inline-loader) but means the SVG stays within an .svg file. This combination means that we get the benefits of inline SVG (style and transition changes), server-side rendering and client-side caching, which I don't think has been so eloquently possible before.

If you are using <svg><use> with svg4everybody to ajax your SVG sprites from a CDN and get around CORS issues, you end up writing a bunch of SVGs to the DOM anyway. I like the suggestion of building it into the JS payload and removing svg4everybody from the equation. If you happen to be using browserify…

I somewhat agree with Agop that this seems like a better solution for more complex, individual illustrations rather than icons used all over a page.
In a React project I'm working on, we're using a more generic icon component that takes an icon name and size as props (like so: <Icon name="umbrella" size="big" />), and internally uses the <use> tag, which is no problem in React 0.14 (and before that with dangerouslySetInnerHTML, ahem). For cases that require more control than that, your solution looks like a great approach.

Also, a little heads-up: the default values for width & height in your getDefaultProps() function include the "px" CSS unit, which isn't valid if you're going to use it in an HTML attribute.

Hi diondiondion, great point about the px! Thanks, updated.

So, aside from any <use> issues, the point of interest in <use> vs <path> is whether <use> is really necessary. It seems to me that if you're still rendering the shadow DOM with <use>, it's worth discerning whether or not that's more work for the same thing. Consider this: if you load an SVG spritesheet and then also describe the SVG icon, you could be loading more than what is necessary. If you're loading just the SVG icon component, you can theoretically opt in to the icons you need for a given view.

In case you're curious whether other people are using SVGs inline in this manner, GitHub came out with an article recently detailing how they did just that. I certainly think there are instances where this approach isn't ideal, but I'm not sure that the reasons stated here are taking into consideration the common advantages and disadvantages, and I do still think this could offer a performance and workflow boon in the case of React and your typical website. But, above all, test your use case! Test all the things!

Hi Sarah, it's true that in some cases you might end up loading resources more efficiently by including the path directly. In practice I doubt that either of the two approaches is much better or worse in terms of performance than the other. A potentially big advantage of <use> is that modern browsers can cache external SVG sprites for future page loads.
In browsers that don't support that (most notably IE & early Edge), they can be inserted into the page using a polyfill like svgxuse. But that actually reminds me of what might just be the biggest advantage of using path: its simplicity. It'll "just work" in all reasonably capable browsers, with no need for polyfills or other workarounds, which is pretty sweet.

There's another advantage of using <use> with external sprites which is quite specific to the product I'm working on, but I'll mention it anyway: being able to simply point to a different SVG file to change all icons without having to recompile the app. Our app has a different theme, logo & sometimes icon set for each customer, so keeping these things separate & "abstracted away" makes sense for us, but obviously it's not a requirement shared by many products. It really seems to be mostly a matter of personal preference about workflow and how tightly you want to couple your icons & React components.

Hey diondiondion, for sure, caching is probably the strongest argument for <use> here. That's definitely a good thing to bring up. Is it better than only having to load a couple of icons inline per view at typically under 1kb apiece? I'm not sure. That, it seems, would totally depend on the system at hand.

Your use case sounds pretty interesting! With all things being equal, a workflow and abstraction boon while loading a whole different set per customer seems totally appropriate. I can imagine a scenario where you'd be able to bundle components for each customer as well, but that might take more configuration than would make sense. In your case, the sprite sheet might be less maintenance overall, so agreed, it would be a better choice.

I adore this post, Sarah! One of the reasons I've been so bullish about SVG icons is their versatility.
I think it's awesome (and super healthy) for developers to have a wealth of options: client-side via <use>, server-side (a la GitHub's latest iteration of octicons), or integrated into a project's JavaScript framework (as you've detailed so helpfully). Heck, projects with modest icon needs might be just fine using img! It's easy to poke holes in any of these solutions for one reason or another. Posts like this one help teams like mine determine what's best for any given project. SVG is just too complex, powerful and fun to settle for any "one size fits all" solution!

Thanks Tyler! I couldn't agree more.

I think this is what you were going for. The id attribute was missing; pretty sure it's required.

Oh, interesting. Thank you, I didn't realize that. Updated.

Great article, but the accessibility part is a bit fucked up right now. You're using aria-labelledby but it's not necessary, and you're using it in a broken way: you're using one id value ("title") and React will not change it for each use of the component, so you end up with duplicate ids all over your page (if you use that icon component more than once, and/or if you use the same id in other icon components). My advice: remove the id attribute and the aria-labelledby attribute altogether, because current screen readers do not need it to read the <title> element for inline SVG.

Also, adding a <title> element to your SVG icon does not by itself cater for accessibility. Your icon component should be able to do two things:

Hide the icon completely, because the meaning is already spelled out in the neighboring text. That would be: <svg aria-hidden="true"><path /></svg>

Provide accessible text for the icon, with the ability to change that text for each use, and allowing for internationalization: <svg><title>[This text must not be always the same]</title></svg>.
This can be done by outputting the aria-hidden and the <title> elements conditionally, depending on whether the code using the component provides accessible text or not (no accessible text provided: aria-hidden="true").

As a general rule, accessible text is not a description of the graphics, so "Umbrella icon" does not work well as relevant accessible text. It should instead sum up the meaning you're trying to get across. For instance, if you have <button><IconUmbrella/></button>, what is the role of that icon-only button? Does it show a weather forecast? Then you should have: <button aria-label="Show weather forecast"><IconUmbrella/></button> or <button><IconUmbrella customthingforalttext="Show weather forecast" /></button>. You want a screen reader saying "Show weather forecast, button", not "Umbrella icon, button". Also, since you removed the alt text from the graphical element, you can translate it if needed, or change it if you're using the same graphical element in different places with different meanings. You don't want a screen reader to read "Umbrella Icon" when you actually need to convey "Afficher les prévisions météo".

Most of what @fvsch says is true. Unfortunately, this is not (yet) reliably the case, which is why most accessibility-oriented SVG examples use a redundant aria-labelledby attribute. The value of this attribute must be one or more valid ID references to other elements, and of course each element's ID must be unique to the page. For an icon with no meaningful child content, you usually also want to give it an explicit role="img" so that the browser treats it as a single block, hiding the child markup.

Further information on the other points: SVG 2 allows you to incorporate internationalization in your SVG code with multiple <title> elements distinguished by their lang attribute. However, as far as I know that isn't implemented yet in any browsers, so authors still need to select the correct language variant themselves.
Even if multi-lingual SVG alternative text was supported, it wouldn't address the other issue fvsch brings up: the correct accessible text for an icon depends on how that icon is being used in the page. If the icon is inside a button/link, it is more important to describe that element's function than the icon itself.

Thank you both! I'm learning a ton here. Ok, so here's what I'm going to do: I've updated the codepen with unique props for the title and aria-labelledby and will update the article as well. I'll probably close the comment thread to this post soon and then write a new article pointing back to this one with everything I've learned here through the comments in this post. I appreciate the feedback very much.

Amelia, do you know which screen readers need the aria-labelledby attribute to read the <title> element? In my tests with fall 2015 JAWS, NVDA and VoiceOver, it wasn't needed. Window-Eyes or ZoomText maybe, or older versions of JAWS, NVDA or VoiceOver? Since <title> support was good in my tests for in-page SVG elements (separate SVG documents are a different story), I tend to avoid adding an extra aria-labelledby, which gets wrong or outdated values really quickly in practice. If one wants to add the extra aria-labelledby and id, it's probably better to generate a random UUID in componentWillMount and use that, so each instance has a working aria-labelledby.

Keep the React posts coming Sarah! Your article 'I Learned How to be Productive in React in a Week and You Can, Too' jump-started me into learning the library.

Thanks Marc! Happy that it's been useful :)

Guys, let's take a step back here and really look at what we're doing. If you render the SVGs directly on the page, say, server-side (like GitHub, yay!), then yes – there might not be much of a difference. The browser might be more efficient doing its own thing with the shadow DOM and <use>, or it might be quicker with the full SVGs already in place.
I'm placing my bets on it being more efficient with <use>, because it would know that the structure of the SVG is the same each time, allowing for reuse of existing resources related to parsing and rendering. But let's not diverge…

This article is about using SVGs with React, which is why my other comment opened like this: Using JavaScript is the problem here. React just happens to be one way of using JavaScript to do this. And, in fact, it is a particularly terrible way. Why is it a terrible way? It's a terrible way because using React like some kind of SVG template system is awfully slow, especially for complex SVGs!

Here, I cobbled together a quick little test on jsperf. Notice how the <use> method is way, way faster (like, 500% faster). This should not be any sort of surprise. The <use> method creates one element. The full SVG method creates, well, a lot.

Again, this isn't necessarily about using a full SVG vs. using <use> in regards to writing HTML – it's simply about using React to construct an SVG. It makes sense to do that when you need a heavily customized SVG here and there (just treat it like a component, duh!), but not when it comes to using SVG icons in general, regardless of whether or not you also happen to be using React. This is primarily a response to the part of the article which seems to imply "hey, if you're using React, no need for <use>, just use React components!"

Apologies if the tone of my comments comes off as rude. It's not written to come off as rude – simply as a strong reminder that: just because you can, doesn't mean you should. :)

Great tests, thanks for your hard work there, those are interesting indeed! Ok, so that makes a lot more sense than what I thought you were saying previously. And it brings up a decent and thoughtful point. That is certainly a lot faster. But here's where I have a concern, and I'll just talk about the use case I have so you can see where I'm coming from. Let's say you manage a huge site with 50 or so icons.
Let's say you have a view that only needs 3 of said icons. I can see an instance here where not loading all of the 50, either in an SVG spritesheet or in an icon font, would come in handy, and the savings there might make it worthwhile. I can also see an instance where, like above, I have a more complex SVG that I want to change, either with animation or just to update pieces of it. Let's say all of my views are in React. I think, in that instance, this approach might still work nicely. Your tests do provide a really great demonstration of why we shouldn't do it this way, though. I think you make a great and solid point here. I can see it both ways, and would still say: choose the right tool for the job.

Sarah, I think we're pretty much on the same page :) Like you said, if you have a special SVG – not a fire-and-forget icon, which could be on the page hundreds of times – by all means, make it a component. I'm all for that.

As for the other use case: yes, this could certainly make sense. Just keep in mind that an SVG spritesheet with 50 or so icons, after gzip compression, could be mere kilobytes. And, with fingerprinting, you could serve this spritesheet just once with a far-future expiration header. The browser would cache it basically forever, or until you change it (and thus, the filename changes). Then the question becomes: what's faster, server rendering full inline SVGs and the browser parsing each and every one, or using a single SVG which the browser already has in its cache? Again, like you said: use the right tool for the job :)
If you have a site that goes between a search view and details view without typically needing all of the other icons, I’m still not sure :) At this point we might also be debating a very small difference in task on performance. @Sarah Here’s my lazy web question. Are there any tools you can recommend for automatically generating the individual icon components ( preferably with webpack )? I have been avoiding trying to figure out in React for a while so this post is perfect! Thanks Recently, just find a way to do this. Demo: svgo-loaderor svg-simplifycould cleanup the svg before using the transformer. For single color icon it will be easy to use. but for colorful icon like this post shown, may have to compose shapes by yourself. THIS IS GREAT…. :) Great info! This is the best React-centered article I’ve read yet! I say this because I’m a Frontender who is interested in React but not yet tried messing around with it much. While I may not setup my icon system like this I feel this blog post ranks very high in terms of communicating some basic React concepts that actually gets me excited about React. The writing has opened my mind up to the library… Thanks Sarah ;) Hey, please check our React SVG icon live generator that created colleague of mine. Any feedback welcome The aforementioned Github article sparked a discussion for our team to look into using SVG as well. From my initial tests direct SVG rendering is so much better than icon fonts. What I am doing in our case is just including the full SVG on the page without any Javascript. We use an “include” in the templating language we use (Jade), so syntactically we don’t see the full SVG code on the page. It looks a bit like this: I am happy with this because if you export a 16×16 SVG from Sketch and you make sure it renders as a 16×16 block with CSS you are 100% sure that it is sharp. Whereas if you export a 16×16 SVG and then mangle it through an icon font generator you are never really sure that it’s sharp. 
I think me and my team have spent hours wrestling with icon fonts. This ranges from tasks like debugging icon font output, to telling people how to set up their system to include fontforge/fontcustom to tweaking SVG code to render well as an icon font . I hope these woes will be over now with this new solution. Some tips: it’s best that you export your SVG icons at a certain size and define it explicitly: You can then “color” SVGs with CSS as follows: Great write up Sarah! I’m using a similar approach in some Rails projects with I did an SVG icon system recently, that had to be compatible with React and plain JavaScript (React is used for multiple SPAs embedded in pages, there’s also legacy JS code). I ended up using webpack-svgstore-plugin for generating the SVG bundle and then consuming it from React like: <SvgSymbol name={‘icon-name-is-the-src-file-name’} /> or from server side templates (Jinja2 based) like: {% svg_symbol(‘icon-name’) %} Helper tags simply emit <svg><use…> stuff and reference icon like “/path/to/bundle.svg#icon-name”. This can be easily extended to support multiple bundles via optional param bundle={‘bundle-name’} A problem with directly importing SVGs in React and relying on Webpack or something to bundle them into JS is the resulting icon system is usable only from React land. I see vendor lock-in for such trivial thing as a bigger problem than the performance issues mentioned above. I think all my Pure components had they’re feelings hurt when you called then “dumb” ;( Hey Sarah, thanks for the writeup and the discussions with Agop. One quick point about #a11y here. In the described scenario you’re bringing in several SVG elements which all include a title element with the id “title”. I just tried it with VoiceOver. Assuming you’ve got several different icons included like this in a page and all of them include VoiceOver will read the title of the first appearing icon for all of them, as ID’s should be included only once inside of a document. 
Setting unique ID’s is needed for this in combination with aria-labelledbyto work properly. Thanks. :) Hi Stephan, Yeah, if you look up in the comments and the post, this was all addressed and the post was updated. I can update all of the other codepens, though, to avoid future confusion. I only updated the one that was the reference for the accessibility. Thanks for looking out! Hi Sarah Thanks for the share
https://css-tricks.com/creating-svg-icon-system-react/
Force.com Sites lets you create public web sites and applications that run natively on Force.com, extending the reach of your applications to new users on intranets, external Web sites, and online communities. You may sometimes want to authenticate users on these web sites – for example if you're building a shopping portal or registering users for a service.

This tutorial shows you how to authenticate users on Force.com Sites. It provides a description of Customer Portal, which is needed for the authentication, and shows you how to set up such a site and process to allow site visitors to become authenticated users.

When someone visits your Force.com Site, they may view all the pages that you provide on that site. What's important, however, is the type of activities that they are allowed to perform besides just viewing pages and clicking on navigation links. The type of actions that they can perform are completely controlled by their profile. Since a new user arriving has no special identity or authority, they are automatically assigned the Guest User profile.

The best way to see exactly what a guest can do when they arrive at your site is to review that profile. Log into your environment, and navigate to Setup | Develop | Sites, and then select your Force.com Site detail page. Here you'll find the button called Public Access Settings, as shown in the following figure. Clicking on Public Access Settings will give you the details for the guest profile. This determines the security profile of each and every site visitor before they authenticate.

Here's a screenshot from the guest user profile. The thing that I want to point out here is that all of the standard database objects can be read by the guest user, and many of these objects can be created by the guest user as well. You will notice that none of these standard objects may be edited or deleted by the guest user (nor can you edit the profile to allow that). What does this mean for your site?
Well, for one thing it means that you can easily build a Visualforce page that will show records from these objects, including the Account, Contact and other standard objects, to any guest user on your site. Guest users can also read and update custom objects, if you set the permissions to allow it, which you can configure on this same page. For example, if you create a bug report object, you typically only want authenticated users to be able to edit bug reports. You can ensure that only authenticated users have access by only granting the edit permission on the object to a different profile - that assigned to authenticated users. The platform will then automatically ensure that unauthenticated users can never make edits. This is one of the many security features built into Force.com Sites.

In a typical scenario, you would grant read permissions on objects to the guest profile, but edit permissions to a profile assigned only to authenticated users. In other words, the visitor on the site must upgrade their profile to one of the two profiles which allow edits or updates to standard (or custom) objects. This user then becomes authenticated and falls under the permissions that are exposed by your Customer Portal user license. The process that allows this profile switch is the login process. First let's cover the basics of the Customer Portal, then we can describe the login process.

The Salesforce Customer Portal is a product that provides a user license for each of your customers. In terms of Force.com Sites development, think of a customer as any person who is not an employee of your organization, and with whom you would like to share secure data or processes. If you are designing a web application, you can also think of a customer as an authenticated user.

To understand this feature at the highest level, you can think of all of your customers as users in your database.
These users have a subset of the normal suite of database objects available to them and a special profile that governs their access to your data. For example, these customer portal users could edit contact records or view accounts and update those bug report records in your database. You specify everything about these users' experience by defining the objects that they can see, their data permissions and even the user interface that they use to view the data. Providing this highly customized user interface is where Visualforce and Force.com Sites comes in. To bring these two technologies (Customer Portal and Sites) together we will need a bit of code and a few Visualforce pages, but fortunately all of the required code is provided for you so that all you need is a bit of cut and paste to get started. You can then modify these pages to suit your web site and you will end up with a seamless system that allows your visitors to become customer portal users. So what is a Customer Portal user? If you have never used the customer portal, you can easily get started using a Developer Edition. The latest Developer Edition environments come with Customer Portal licenses. A good way to start is to set up customer portal for your developer edition environment, which I'll cover shortly. The portal provides functionality to log someone in, and assign them a portal profile. Once someone is logged in, the data that they may have access to is the actual data from your environment. Authenticated users are represented inside your environment as both a Contact record and a User record for each user of the portal. In other words, if someone registers for a login account, you can navigate to Setup | Manage Users | Users and manage them there. You'll notice that you can navigate from each User record to the associated Contact record using a lookup field. The Contact records are also associated with an Account record. 
Because they are your "customers", this also allows you to effectively limit the data that they can see using account sharing rules. While this sounds a little complicated, it's relatively easy to set up - but you will notice these items pop up in the configuration details below. Now let's talk about setting up the portal. All of the steps that I'd like to cover here are also documented in the customer portal set up documentation. In fact, you should refer to the online documentation before beginning this process, and if you have any problems following this tutorial. I've assumed that you've already created your Force.com Site on a Developer Edition environment. If not, check out An Introduction to Force.com Sites. To get customer portal turned on for your environment, you must visit Setup | Customize | Customer Portal | Settings. Click the box to enable portal, and hit save. You can skip the wizard. I have done that in my org and now the page looks like this: You will notice that I have Login Enabled checked on this customer portal. Be sure to enable this check box as we will need it to be on later when we configure the login from Force.com Sites. Now that you have the portal enabled, you need to do a little setup. Click through to the portal details page. Here you'll see the Login Enabled is already set. However, we also want to enable Self-Registration, which allow new users to be created. In the final section on this page, "Self-Registration Settings", click the checkbox for "Self-Registration Enabled", change the "Default New User License" to "Customer Portal Manager", set the "Default New User" to "User" and the "Default New User Profile" to "Customer Portal Manager". Here's my final configuration: That's all I had to do to get started with portal. What this does is establish the user license associated with new users registered with the portal, and determines their profile. 
Once the basic portal is set up and the login enabled, you are ready to explore the profile that is created by the portal setup:

- Click on the name of the customer portal just created
- Locate the Default New User Profile link at the bottom of the page
- Click this link to review the default profile (it's probably called "Customer Portal Manager")

Let's take a look at this profile as it is on my org. This is the permission set that will be assigned by customer portal to authenticated users. Now in this profile you can see that the customer portal user is able to read, create and edit cases and contacts as well as any custom objects that I may have granted them access to in my organization. The key difference here is that guest users may not update contact records, but portal users, who are authenticated, can update contact records in the system. Please keep these basic object permissions in mind while you are building your application or your Force.com Site. Now that we have looked at what a guest can do and what a portal user can do, let's compare the two. If you're a developer you can easily understand the difference between these two profiles by reviewing their profiles in the application, so we'll just look at this in a summary matrix, a simple table that highlights the one important difference between them: the Guest user's access to Standard Objects.

| User                 | Read Standard Objects | Edit Standard Objects | Read & Edit Custom Objects |
| Sites Guest User     | Yes                   | No                    | Yes                        |
| Customer Portal User | Yes                   | Yes                   | Yes                        |

Based on this restriction, if you want to edit a standard object, you must first authenticate that user. The platform will not allow you to grant permissions for Edit on Standard objects including Contact records. Fortunately for us developers, there is a straightforward way to promote the Guest User to become a Portal User; this process is called Authentication, or Login.
To achieve the highest security, Force.com offers one single process of authentication, so please don't be tempted to build your own authentication process. Walk through the following section all the way until the end to get basic authentication working. Now, let's talk a little bit about the authentication process that allows guests visiting your site to become authenticated and therefore have their profile changed. This really involves two different steps, since some of your users may not yet have their own login on your site. These processes are: self registration and login. To begin you'll need a Force.com site. I assume that you will be able to set this up given the documentation that can be found within the application and on the Sites page in the Technical Library. Next, I may be able to save you the trouble that I ran into while building this sample: I forgot to enable login for my site! Here is a screen shot to show you just what I mean. In this screen the login feature is enabled for customer portal on my Force.com Site. Be sure that your site has this login enabled or you will see messages in your site indicating that the user is not authenticated, and it may not be clear that your site is not allowing this, even if you have an existing customer portal already configured. Do this by navigating to Setup | Develop | Sites, selecting your site, and then hitting the Login Settings button. Notice that the login field indicates that logins are enabled for Customer Portal. Once I turn this on, a number of Visualforce pages are created for you. The most important are:

- SiteLogin, which allows registered users to log in to your web application
- SiteRegister, which allows new users to be created for your web application

You can use these right out of the box.
The Force.com Sites setup even takes care of adding this new Visualforce page to your list of pages that are enabled for the site For example, if you registered the domain acme, then you could go to these pages right now: You can of course modify these pages as you see fit. For customer portal the process of authentication begins in one of two ways; in the first case a valid portal user record already exists in the system (in other words, the user has already registered) and we take this user into the system through a Normal Login form, say from a login link on your Site. In the second case the user is coming to your site for the first time, and must complete the self registration process. This process will create a portal user and then proceed with the normal login method. Let's look at each of these in turn. Let's take a look at the Visualforce page that allows a user to login - the standard SiteLogin page that's automatically generated. You can create your own, or use a template from your site. Here is the Visualforce markup for this page, in this case the source we are reviewing it is a component that is placed on the login page that you create within your site. <apex:component <apex:form <apex:outputPanel <apex:pageMessages <apex:panelGrid <apex:outputLabel <apex:inputText <apex:outputLabel <apex:inputSecret <br /> <apex:commandButton <br /> <apex:panelGroup <apex:outputLink {!$Label.site.forgot_your_password_q} </apex:outputLink> <apex:outputText <apex:outputLink {!$Label.site.new_user_q} </apex:outputLink> </apex:panelGroup> </apex:panelGrid> </apex:outputPanel> </apex:form> </apex:component> The key aspects to this page are: SiteLoginController, which does the actual work of logging in $Page.RegistrationEnabledto determine if registration has been enabled for your site. 
Let's now look at the Apex Code for the controller, SiteLoginController: public class SiteLoginController { public String username {get; set;} public String password {get; set;} public PageReference login() { String startUrl = System.currentPageReference().getParameters().get('startURL'); return Site.login(username, password, startUrl); } public SiteLoginController () {} public); } } It's pretty basic. It simply takes the username and password, and invokes the system method Site.login(). (See all the methods for this class in the docs.) The return value from this call is a PageReference. If the login is successful, the page that will be returned is the one passed in the startUrl. As you can see, you can pretty easily build your own authentication pages around this code. Now let's turn to self registration. When setting up the Customer Portal we enabled self-registration. As mentioned above, you can simply make use of the SiteRegister Visualforce page (and accompanying SiteRegisterController Apex Code) to accomplish the task, or write your own code. The default page looks something like this: As you can see, the critical pieces of data include: The basic process of registration boils down to the following code snippet: User u = new User(); u.Username = username; u.Email = email; u.CommunityNickname = communityNickname; String accountId = PORTAL_ACCOUNT_ID; // lastName is a required field on user, but if it isn't specified, // the code uses the username String userId = Site.createPortalUser(u, accountId, password); As you can see, it's a matter of creating a new User, and populating the Username, Email and CommunityNickname fields, and then passing this into the Site.createPortalUser() method. The only tricky bit is that you need to pass in an accountId as well. In fact, the automatically generated SiteRegister needs this ID set as well before it will work. 
Recall that the Contact records (which represent portal users, together with User records) are associated with an Account record. Well this association is established in the createPortalUser() call. So for this code to work, you need to find a valid account ID, and ensure that the account owner is in the role hierarchy. Here's an easy way to do this for our demo purposes on a Developer Edition environment: SiteRegisterControllercode) as the PORTAL_ACCOUNT_ID That's it. By doing this you found the ID of an account that will be associated with all new portal users Contact records, and you ensured that the owner of the account is in a role hierarchy. This role hierarchy gives you more control over the security aspects of the Contacts, which we won't cover in this tutorial. You can find the complete code for the standard Visualforce page and controller here. Once the customer portal user is created in this manner, the method returns the id of the new user. If the user ID is not null then we know we have created a new user and we can now proceed with the login process. The login process looks something like this: if (userId != null) { if (password != null && password.length() > 1) { return Site.login(username, password, null); } else { PageReference page = System.Page.SiteRegisterConfirm; page.setRedirect(true); return page; } } That's it! That's the end of the process. Now your guest user is authenticated as a Portal user and you can begin to take them into your site and into your Visualforce application just as you would with a normal user. As an authenticated user they now enjoy the full security privileges that you assigned to that portal user profile; you can share data and business processes with these users by building out your Visualforce application and navigating the authenticated users through those pages. 
The portal user session stays alive for the same duration as your normal session does in Salesforce and is controlled by the security settings you have for normal user logins. All of this session management, security, cookies, etc. is the same as a normal platform user, which means you don't have to worry about security, or write any code. See An Overview of Force.com Security for more information. In a normal Force.com site you would typically have a public set of pages and a login link on the main page, the login link would go the form we discussed above, then into some page of the Force.com application that you have written which is for registered users only. If a guest user attempts to navigate to these pages they would see the Authorization Required page specified in your Sites setup. Here are a few best practices: Finally, if your need is to track an individual user for a browser session you may be able to construct a tracking identifier and direct the user's browser to hold this parameter in a query string. This technique would allow you to match a user session to a custom object, say holding shopping cart contents or bread crumbs for the user.
In this post I’ll show how to build a dockerized OpenStack and OpenContrail lab, integrate it with Juniper MX80 DC-GW and demonstrate one of Contrail’s most interesting and unique features called BGP-as-a-Service. Continuing. Now let me briefly go over the lab that I’ve built to showcase the BGPaaS and DC-GW integration features..: Once installation is complete and all docker containers are up and running, we can setup the OpenStack side of our test environment. The script below will do the following:. DC-GW integration procedure is very simple and requires only a few simple steps:. Before we can begin working with OpenContrail’s API, we need to authenticate with the controller and get a REST API connection handler.. The next thing I need to do is explicitly set the import/export route-target properties for the irb-net object. This will require a new RouteTargetList object which then gets referenced by a route_target_list property of the irb-net object.. For the sake of brevity, I will not cover MX80’s configuration in details and simply include it here for reference with some minor explanatory comments. The easiest way to verify that BGP peering has been established is to query OpenContrail’s introspection API: Datapath verification can be done from either side, in this case I’m showing a ping from MX80’s global VRF towards one of the OpenStack VMs:. With VM interface ID saved in a VMI_ID variable I can create a BGPaaS service and link it to that particular VM interface. The final step is setting up a BGP peering on the CumulusVX side. CumulusVX configuration is very simple and self-explanatory. The BGP neighbor IP is the IP of virtual network’s default gateway located on local vRouter.: It’s hidden since I haven’t configured MPLSoUDP dynamic tunnels on MX80. However this proves that the prefix does get injected into customer VPNs and become available on all devices with the matching import route-target communities.. 
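The route-target manipulation described above can be done through Contrail's Python bindings. Here is a minimal sketch of attaching an explicit route-target list to the irb-net virtual network via vnc_api; the fq_name, credentials and ASN/target values are my own assumptions for illustration, not taken from the lab:

```python
# Hypothetical sketch: setting an explicit route-target list on a Contrail
# virtual network via the vnc_api bindings. fq_name, credentials and the
# ASN/target numbers below are illustrative assumptions.

def make_route_target(asn, target_id):
    """Build a route-target string in the 'target:<asn>:<id>' form used by
    Contrail's RouteTargetList."""
    return "target:{}:{}".format(asn, target_id)

def set_irb_route_targets(api_host, rt_strings):
    # Lazy import: vnc_api ships with OpenContrail and needs a live API server.
    from vnc_api.vnc_api import VncApi
    from vnc_api.gen.resource_xsd import RouteTargetList

    client = VncApi(username="admin", password="admin",
                    tenant_name="admin", api_server_host=api_host)
    vn = client.virtual_network_read(
        fq_name=["default-domain", "admin", "irb-net"])
    # Both import and export targets are set through the same property here.
    vn.set_route_target_list(RouteTargetList(route_target=rt_strings))
    client.virtual_network_update(vn)
```

Calling `set_irb_route_targets("10.0.0.1", [make_route_target(64512, 10000)])` against a live controller would then mirror the API workflow from the post.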
Historically, OpenDaylight has had multiple projects implementing custom OpenStack networking drivers:. In order to provide BGP VPN functionality, NetVirt employs the use of three service components:: BGP session is established between QBGP and external DC-GW, however next-hop values installed by NetVirt and advertised by QBGP have IPs of the respective compute nodes, so that traffic is sent directly via the most optimal path.. The next few steps are similar to what I’ve described in my Kolla lab post, will create a pair of VMs, build all Kolla containers, push them to a local Docker repo and finally deploy OpenStack using Kolla-ansible playbooks: The final 4-deploy.sh script will also create a simple init.sh script inside the controller VM that can be used to setup a test topology with a single VM connected to a 10.0.0.0/24 subnet: Of course, another option to build a lab is to follow the official Kolla documentation to create your own custom test environment.. admintenant demo-router ODL cannot automatically extract VTEP IP from updates received from DC-GW, so we need to explicitly configure it: That is all what needs to be configured on ODL. Although I would consider this to be outside of the scope of the current post, for the sake of completeness I’m including the relevant configuration from the DC-GW: For detailed explanation of how EVPN RT5 is configured on Cisco CSR refer to the following guide.: We can also check the contents of EVPN RIB compiled by QBGP Finally, we can verify that the prefix 8.8.8.0/24 advertised from DC-GW is being passed by QBGP and accepted by NetVirt’s FIB Manager: The last output confirms that the prefix is being received and accepted by ODL. To do a similar check on CSR side we can run the following command:.]]> In this post I’ll have a brief look at the NFV MANO framework developed by ETSI and create a simple vIDS network service using Tacker.. This architecture consists of the following building blocks:. 
Before I get to the implementation, let me give a quick overview of how a Network Service is build from its constituent parts, in the context of our vIDS use case.. For this demo environment I’ll keep using my OpenStack Kolla lab environment described in my previous post.: The successful result can be checked with tacker vim-list which should report that registered VIM is now reachable.. As I’ve mentioned before, VNFFG is not integrated with NSD yet, so we’ll add it later. For now, we have provided enough information to instantiate our NSD. This last command creates a cirros-based VM with two interfaces and connects them to demo-net virtual network. All ICMP traffic from VM1 still goes directly to its default gateway so the last thing we need to do is create: All these parameters can be obtained using the CLI commands as shown below: The following command creates a VNFFG and an equivalent SFC to steer all ICMP traffic from VM1 through vIDS VNF. The result can be verified using Skydive following the procedure described in my previous post. This post only scratches the surface of what’s available in Tacker with a lot of other salient features left out of scope, including:.]]> In this post I’ll show how to configure Neutron’s service function chaining, troubleshoot it with Skydive and how SFC is implemented in OVS forwarding pipeline. S..: Having done that, we should see some packets coming out of egress port P2. However form the VM1’s perspective the ping is still failing. Next step would be to see if the packets are hitting the integration bridge that port P2 is attached to:: After this step the counters should start incrementing and the ping from VM1 to its default gateway is resumed.: After we’ve configured SFC, the forwarding pipeline is changed and now looks like this:. SFC is a pretty vast topic and is still under active development. 
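The NSD/VNFD relationships described above are ultimately just structured data. Below is a rough, hypothetical TOSCA-style VNFD for the vIDS, expressed as a Python dict; the node names and types loosely mirror the TOSCA NFV profile used by Tacker, but treat them as illustrative rather than authoritative:

```python
# Hypothetical sketch of a vIDS VNFD structure in TOSCA-like form. Field names
# loosely follow the TOSCA NFV profile used by Tacker; they are illustrative,
# not copied from a working descriptor.

VIDS_VNFD = {
    "tosca_definitions_version": "tosca_simple_profile_for_nfv_1_0_0",
    "description": "vIDS with two connection points on demo-net",
    "topology_template": {
        "node_templates": {
            "VDU1": {
                "type": "tosca.nodes.nfv.VDU.Tacker",
                "properties": {"image": "cirros", "flavor": "m1.tiny"},
            },
            "CP1": {"type": "tosca.nodes.nfv.CP.Tacker",
                    "requirements": ["VL1", "VDU1"]},
            "CP2": {"type": "tosca.nodes.nfv.CP.Tacker",
                    "requirements": ["VL1", "VDU1"]},
            "VL1": {"type": "tosca.nodes.nfv.VL",
                    "properties": {"network_name": "demo-net"}},
        }
    },
}

def connection_points(vnfd):
    """Return the names of all CP nodes in a VNFD dict."""
    nodes = vnfd["topology_template"]["node_templates"]
    return sorted(n for n, t in nodes.items()
                  if t["type"].startswith("tosca.nodes.nfv.CP"))
```

This mirrors the two-interface cirros VM from the demo: one VDU, two connection points, one virtual link into demo-net.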
Some of the upcoming enhancement to the current implementation of SFC will include:.]]> I’m returning to my OpenStack SDN series to explore some of the new platform features like service function chaining, network service orchestration, intent-based networking and dynamic WAN routing. To kick things off I’m going to demonstrate my new fully-containerized OpenStack Lab that I’ve built using an OpenStack project called Kolla. For..: This step can take quite a long time (anything from 1 to 4 hours depending on the network and disk I/O speed), however, once it’s been done these container images can be used to deploy as many OpenStack instances as necessary..:. Now that the lab is up we can start exploring the new OpenStack SDN features. In the next post I’ll have a close look at Neutron’s SFC feature, how to configure it and how it’s been implemented in OVS forwarding pipeline.]]> A short post about how I do SSH session management for network devices in Linux A: expecthacks.. I’ve written a little tool that uses Netmiko to install (and remove) public SSH keys onto network devices. Assuming python-pip is already installed here’s what’s required to download and install ssh-copy-net: Its functionality mimics the one of ssh-copy-id, so the next step is always to upload the public key to the device:: Now I am able to login the device by simply typing its name:: Here’s an example of my tmux configuration file: Now having all the above defined and with the help of zsh command autocompletion, I can login the device with just a few keypresses (shown in square brackets below): Press Ctrl+B v to split the terminal window vertically: An so on and so forth… ]]> In this post I will show how to use IETF, OpenConfig and vendor-specific YANG models in Ansible to configure BGP peering and verify state of physical interfaces between IOS-XE and JUNOS devices. The idea of using Ansible for configuration changes and state verification is not new. 
However the approach I’m going to demonstrate in this post, using YANG and NETCONF, will have a few notable differences: I hope this promise is exciting enough so without further ado, let’s get cracking.. Each device contains some basic initial configuration to allow it be reachable from the Ansible server. vMX configuration is quite similar. Static MAC address is required in order for ge interfaces to work.: configor verifyand defines how the enclosed data is supposed to be used Here’s how IOS-XE will be configured, using IETF interfaca YANG models (to unshut the interface) and Cisco’s native YANG model for interface IP and BGP settings: For JUNOS configuration, instead of the default humongous native model, I’ll use a set of much more light-weight OpenConfig YANG models to configure interfaces, BGP and redistribution policies: Both devices now can be configured with just a single command:): Once again, all expected state can be verified with a single command:. One thing that puts a lot of network engineers off NETCONF and YANG is the complexity of the device configuration process. Even the simplest change involves multiple tools and requires some knowledge of XML. In this post I will show how to use simple, human-readable YAML configuration files to instantiate YANG models and push them down to network devices using a single command. XML,: Recursive function to parse this data structure written in a pseudo-language will look something like this: The beauty of recursive functions is that they are capable parsing data structures of arbitrary complexity. That means if we had 1000 randomly nested child elements in the parent data structure, they all could have been parsed by the same 6-line function.: The same code could be re-written using the getattr and setattr method calls::. 
final step is to write a wrapper class that would consume the YDK model binding along with the YAML data, and both instantiate and push the configuration down to the network device.: To push this BGP configuration to the device all what I need to do is run the following command: The resulting configuration on IOS XE device would look like this: To see more example, follow this link to my Github repo.]]> Now it’s time to turn our gaze to the godfather of YANG models and one of the most famous open-source SDN controllers, OpenDaylight. In this post I’ll show how to connect Cisco IOS XE device to ODL and use Yang Development Kit to push a simple BGP configuration through ODL’s RESTCONF interface.. We’ll use NETCONF to connect to Cisco IOS XE device and RESTCONF to interact with ODL from a Linux shell. It might be useful to turn on logging in karaf console to catch any errors we might encounter later:: Assuming this XML is saved in a file called new_device.xml.1, we can use curl to send it to ODL’s netconf-connector plugin: When the controller gets this information it will try to connect to the device via NETCONF and do the following three things: ./cache/schemadirectory After ODL downloads all of the 260 available models (can take up to 20 minutes) we will see the following errors in the karaf console:: Assuming the updated XML is saved in new_device.xml.2 file, the following command will update the current configuration of CSR1K device: We can then verify that the device has been successfully mounted to the controller: The output should look similar to the following with the connection-status set to connected and no detected unavailable-capabilities: At this point we should be able to interact with IOS XE’s native YANG model through ODL’s RESTCONF interface using the following URL The only thing that’s missing is the actual configuration data. To generate it, I’ll use a new open-source tool called YDK. 
Yang Development Kit is a suite of tools to work with NETCONF/RESTCONF interfaces of a network device. The way I see it, YDK accomplishes two things:. One of the first things we need to do is install YDK-GEN, the tools responsible for API bindings generation, and it’s core Python packages on the local machine. The following few commands are my version of the official installation procedure:. Assuming that the IOS XE bundle profile is saved in a file called cisco-ios-xe_0_1_0.json, we can use YDK to generate and install the Python binding package: Now we can start configuring BGP using our newly generated Python package. First, we need to create an instance of BGP configuration data:. At this point of time all data is stored inside the instance of a Bgp class. In order to get an XML representation of it, we need to use YDK’s XML provider and encoding service: All what we’ve got left now is to send the data to ODL: The controller should have returned the status code 204 No Content, meaning that configuration has been changed successfully. Back at the IOS XE CLI we can see the new BGP configuration that has been pushed down from the controller. You can find a shorter version of the above procedure in my ODL 101 repo.]]> The sheer size of some of the YANG models can scare away even the bravest of network engineers. However, as it is with any programming language, the complexity is built out of a finite set of simple concepts. In this post we’ll learn some of these concepts by building our own YANG model to program static IP routes on Cisco IOS XE.. We’ll pick up from where we left our environment in the previous post right after we’ve configured a network interface. The following IOS CLI command enables RESTCONF’s root URL at You can start exploring the structure of RESTCONF interface starting at the root URL by specifying resource names separated by “/”. For example, the following command will return all configuration from Cisco’s native datastore. 
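If you don't have YDK handy, the XML payload it encodes can also be hand-rolled with the standard library. The sketch below builds an openconfig-bgp-shaped snippet; the namespace and nesting follow my reading of the openconfig-bgp model, so verify them against the model before pushing to a real controller:

```python
# Hand-rolled sketch of an openconfig-bgp-style XML payload, as an alternative
# to having YDK encode it. Namespace and nesting follow my reading of the
# openconfig-bgp model (bgp/global/config/as) - verify before use.
import xml.etree.ElementTree as ET

OC_BGP_NS = "http://openconfig.net/yang/bgp"

def bgp_payload(local_as, router_id):
    ET.register_namespace("", OC_BGP_NS)
    bgp = ET.Element("{%s}bgp" % OC_BGP_NS)
    cfg = ET.SubElement(ET.SubElement(bgp, "{%s}global" % OC_BGP_NS),
                        "{%s}config" % OC_BGP_NS)
    ET.SubElement(cfg, "{%s}as" % OC_BGP_NS).text = str(local_as)
    ET.SubElement(cfg, "{%s}router-id" % OC_BGP_NS).text = router_id
    return ET.tostring(bgp, encoding="unicode")

payload = bgp_payload(65000, "1.1.1.1")
```

The resulting string can then be sent to the controller's RESTCONF mount point with curl or requests, just as the post does with the YDK-generated XML.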
In order to get JSON instead of the default XML output the client should specify JSON media type application/vnd.yang.datastore+json and pass it in the Accept header.. Let’s start by adding the following static route to the IOS XE device: Now we can view the configured static route via RESTCONF: The returned output should look something like this:. YANG RFC defines a number of data structures to model an XML tree. Let’s first concentrate on the three most fundamental data structures that constitute the biggest part of any YANG model: 'name': {...} 'name': 'value' key. In JSON lists are encoded as name/arrays pairs containing JSON objects 'name': [{...}, {...}] Now let’s see how we can describe the received data in terms of the above data structures: routeelement is a JSON object, therefore it can only be mapped to a YANG container. ip-route-interface-forwarding-listis an array of JSON objects, therefore it must be a list. prefixand maskkey/value pairs. Since they don’t contain any child elements and their values are strings they can only be mapped to YANG leafs. fwd-list, is another YANG list and so far contains a single next-hop value inside a YANG leaf called fwd.: YANG’s syntax is pretty light-weight and looks very similar to JSON. The topmost module defines the model’s name and encloses all other elements. The first two statements are used to define XML namespace and prefix that I’ve described in my previous post.: So far we’ve been concentrating on the simplest form of a static route, which doesn’t include any of the optional arguments. Let’s add the leaf nodes for name, AD, tag, track and permanent options of the static route: Since track and permanent options are mutually exclusive they should not appear in the configuration at the same time. To model that we can use the choice YANG statement. 
Let’s remove the track and permanent leafs from the model and replace them with this:: Each element in a YANG model is optional by default, which means that the route container can include any number of VRF and non-VRF routes. The full YANG model can be found here. Now let me demonstrate how to use our newly built YANG model to change the next-hop of an existing static route. Using pyang we need to generate a Python module based on the YANG model. From a Python shell, download the current static IP route configuration: Import the downloaded JSON into a YANG model instance: Delete the old next-hop and replace it with 12.12.12.2: Save the updated model in a JSON file with the help of a write_file function: If we tried sending the new_conf.json file now, the device would have responded with an error: Instead of patching the original file, I’ve applied the above changes to a local copy of the file. Once patched, the following commands should produce the needed XML. The final step would be to send the generated XML to the IOS XE device. Since we are replacing the old static IP route configuration we’re gonna have to use HTTP PUT to overwrite the old data. Back at the IOS XE CLI we can see the new static IP route installed..]]> Everyone who has any interest in network automation inevitably comes across NETCONF and YANG. These technologies have mostly been implemented for and adopted by big telcos and service providers, while support in the enterprise/DC gear has been virtually non-existent. Things are starting to change now as NETCONF/YANG support has been introduced in the latest IOS XE software train. 
That’s why I think it’s high time I started another series of posts dedicated to YANG, NETCONF, RESTCONF and various open-source tools to interact with those interfaces.: All of these standard NETCONF operations are implemented in ncclient Python library which is what we’re going to use to talk to CSR1k.: Be sure to check of these and many other YANG models on YangModels Github repo.. When the session is established, server capabilities advertised in the hello message get saved in the server_capabilities variable. Last command should print a long list of all capabilities and supported. Now we can use pyangbind, an extension to pyang, to build a Python module based on the downloaded YANG models and start building interface configuration. Make sure your $PYBINDPLUGIN variable is set like its described here.. To setup an IP address, we first need to create a model of an interface we’re planning to manipulate. We can then use .get() on the model’s instance to see the list of all configurable parameters and their defaults. The simples thing we can do is modify the interface description. New objects are added by calling .add() on the parent object and passing unique key as an argument. At the time of writing pyangbind only supported serialisation into JSON format which means we have to do a couple of extra steps to get the required XML. For now let’s dump the contents of our interface model instance into a file. Even though pyanbind does not support XML, it is possible to use other pyang plugins to generate XML from. If the change was applied successfully reply.ok should return True and we can close the session to the device. Going back to the CSR1k’s CLI we should see our changes reflected in the running configuration:.]]>. In the previous post we have installed OpenStack and created a simple virtual topology as shown below. 
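To make the edit-config step above concrete, here is a hand-built sketch of the payload and the ncclient call. The ietf-interfaces namespace is standard; the `apply_config` function needs a reachable NETCONF device, so it is shown only to illustrate the flow:

```python
# Sketch of the NETCONF edit-config flow from the post, built by hand.
# The ietf-interfaces namespace is standard; apply_config needs a live
# device and is shown only for completeness.

IETF_IF_NS = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

def description_config(if_name, description):
    """Return a <config> snippet setting an interface description via the
    ietf-interfaces model."""
    return (
        '<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">'
        '<interfaces xmlns="{ns}"><interface>'
        '<name>{name}</name><description>{desc}</description>'
        '</interface></interfaces></config>'
    ).format(ns=IETF_IF_NS, name=if_name, desc=description)

def apply_config(host, username, password, xml_cfg):
    from ncclient import manager
    with manager.connect(host=host, port=830, username=username,
                         password=password, hostkey_verify=False) as m:
        reply = m.edit_config(target="running", config=xml_cfg)
        return reply.ok
```

This is essentially what the pyangbind-plus-pyang pipeline in the post produces, minus the model validation you get from generating the XML out of a bound YANG instance.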
In OpenStack’s data model this topology consists of the following elements: So far nothing unusual, this is a simple Neutron data model, all that information is stored in Neutron’s database and can be queried with neutron CLI commands.: This is a visual representation of our network topology inside OVN’s Northbound DB, built based on the output of ovn-nbctl show command:: reg0[1] = 1. The next table catches these marked packets and commits them to the connection tracker. Special ct_label=0/1action ensures return traffic is allowed which is a standard behaviour of all stateful firewalls.. Here is some useful information about router interfaces and ports that will be used in the examples below. outportis decided, IP TTL is decremented and the new next-hop IP is set in register0. MAC_Bindingtable of Southbound DB.. Skipping to the L2 MAC address lookup stage, the output port (0x1) is decided based on the destination MAC address and saved in register 15. Finally, the packet reaches the last table where it is sent out the physical patch port interface towards R1. 65 converts the logical output port 3 to physical port 6, which is yet another patch port connected to a transit switch. The packet once again re-enters OpenFlow pipeline from table 0, this time from port 5. Table 0 maps incoming port 5 to the logical datapath of a transit switch with Tunnel key 7.). When packet reaches the destination node, it once again enters the OpenFlow table 0, but this time all information is extracted from the tunnel keys. At the end of the transit switch datapath the packet gets sent out port 12, whose peer is patch port 16. The packet re-enters OpenFlow table 0 from port 16, where it gets mapped to the logical datapath of a gateway router. Similar to a distributed router R1, table 21 determines the next-hop MAC address for a packet and saves the output port in register 15. The first table of an egress pipeline source-NATs packets to external IP address of the GW router. 
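The table-by-table processing described above can be illustrated with a toy match-action pipeline. This is a conceptual model only, not OVS code; the table numbers, MAC address and tunnel key are invented:

```python
# Toy model of an OpenFlow-style pipeline: each table holds
# (match-function, action-function) pairs; an action mutates the packet
# metadata and returns the next table number, or None to stop.

def run_pipeline(tables, pkt):
    table = 0
    while table is not None:
        next_table = None
        for match, action in tables[table]:
            if match(pkt):
                next_table = action(pkt)
                break
        table = next_table
    return pkt

def map_datapath(p):
    # table 0: map the incoming tunnel key to a logical datapath
    p["datapath"] = p["tun_key"]
    return 1

def l2_lookup(p):
    # later table: L2 lookup decides the logical output port (a register)
    p["reg15"] = 0x1
    return None

tables = {
    0: [(lambda p: "tun_key" in p, map_datapath)],
    1: [(lambda p: p["dst_mac"] == "fa:16:3e:00:00:01", l2_lookup)],
}

pkt = {"tun_key": 7, "dst_mac": "fa:16:3e:00:00:01"}
run_pipeline(tables, pkt)
print(pkt["datapath"], pkt["reg15"])  # 7 1
```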
The modified packet is sent out the physical port 14 towards the external switch. External switch determines the output port connected to the br-ex on a local hypervisor and send the packet out..]]> This is a first of a two-post series dedicated to project OVN. In this post I’ll show how to build, install and configure OVN to work with a 3-node RDO OpenStack lab. Vanilla is a distributed SDN controller implementing virtual networks with the help OVS. Even though it is positioned as a CMS-independent controller, the main use case is still OpenStack. OVN was designed to address the following limitations of vanilla OpenStack networking:: If you want to learn more about OVN architecture and use cases, OpenStack OVN page has an excellent collection of resources for further reading.. On the controller node, generate a sample answer file and modify settings to match the IPs of individual nodes. Optionally, you can disable some of the unused components like Nagios and Ceilometer similar to how I did it in my earlier post. After the last step we should have a working 3-node OpenStack lab, similar to the one depicted below. If you want to learn about how to automate this process, refer to my older posts about OpenStack and underlay Leaf-Spine fabric build using Chef.. The official OVS installation procedure for CentOS7 is pretty accurate and requires only a few modifications to account for the packages missing in the minimal CentOS image I’ve used as a base OS. At the end of the process we should have a set of rpms inside the ovs/rpm/rpmbuild/RPMS/ directory.. First, we need to make sure all Compute nodes have a bridge that would provide access to external provider networks. In my case, I’ll move the eth1 interface under the OVS br-ex on all Compute nodes. IP address needs to be moved to br-ex interface. Below example is for Compute node #2: At the same time OVS configuration on Network/Controller node will need to be completely wiped out. 
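The br-ex migration described above can be expressed as a short command plan. The sketch below only builds the ovs-vsctl and ip commands as argument lists and deliberately does not execute them; the interface names and address are example values to adjust per node:

```python
def br_ex_migration_cmds(bridge="br-ex", nic="eth1", cidr="10.0.0.2/24"):
    """Return the shell commands (as argument lists) that move a NIC
    under an OVS bridge and re-home its IP address on the bridge."""
    return [
        ["ovs-vsctl", "--may-exist", "add-br", bridge],
        ["ovs-vsctl", "--may-exist", "add-port", bridge, nic],
        ["ip", "addr", "flush", "dev", nic],
        ["ip", "addr", "add", cidr, "dev", bridge],
        ["ip", "link", "set", bridge, "up"],
    ]

for cmd in br_ex_migration_cmds():
    print(" ".join(cmd))
```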
Once that’s done, we can remove the Neutron OVS package from all nodes. Now everything is ready for OVN installation. First step is to install the kernel module and upgrade the existing OVS package. Reboot may be needed in order for the correct kernel module to be loaded. Now we can install OVN. Controllers will be running the ovn-northd process which can be installed as follows: The following packages install the ovn-controller on all Compute nodes: The last thing is to install the OVN ML2 plugin, a python library that allows Neutron to talk to OVN Northbound database.: This means that all instances of a distributed OVN controller located on each compute node have successfully registered with Southbound OVSDB and provided information about their physical overlay addresses and supported encapsulation types.: Now we should be able to create a test topology with two tenant subnets and an external network interconnected by a virtual router. When we attach a few test VMs to each subnet we should be able to successfully ping between the VMs, assuming the security groups are setup to allow ICMP/ND..]]> I was fortunate enough to be given a chance to test the new virtual QFX 10k image from Juniper. In this post I will show how to import this image into UnetLab and demonstrate the basic L2 and L3 EVPN services.: To be able to use these images in UnetLab, we first need to convert them to qcow2 format and copy them to the directory where UNL stores all its qemu images: Next, we need to create new node definitions for RE and PFE VMs. The easiest way would be to clone the linux node type: Now let’s add the QFX to the list of nodes by modifying the following file:. Once all the nodes have been configured, we can have a closer look at the traffic flows, specifically at how packets are being forwarded and where the L2 and L3 lookups take place. Traffic from H1 to H2 will never leave its own broadcast domain. 
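The qcow2 conversion step described above can be wrapped in a small helper. The source filename and UNetLab directory below are hypothetical; only the qemu-img convert flags are standard:

```python
import os

def qcow2_convert_cmd(src, dst_dir, fmt="vmdk"):
    """Build a qemu-img command converting a disk image to qcow2,
    named hda.qcow2 inside dst_dir (the layout UNetLab typically expects)."""
    dst = os.path.join(dst_dir, "hda.qcow2")
    return ["qemu-img", "convert", "-f", fmt, "-O", "qcow2", src, dst]

cmd = qcow2_convert_cmd("vqfx10k-re.vmdk",
                        "/opt/unetlab/addons/qemu/vqfxre-15.4")
print(" ".join(cmd))
```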
As soon as the packet hits the incoming interface of SW1, MAC address lookup occurs pointing to the remote VTEP interface of SW2. Once SW2 decapsulates the packet, the lookup in the MAC address table returns the locally connected interface, where it gets forwarded next. The route to 8.8.8.0/24 is advertised by SW3 in type-5 NLRI: The above two pieces of information are fed into our EVPN-VRF routing table to produce the entry with the following parameters:. This is the example of “asymmetric” routing, similar to the one exhibited by Neutron DVR. You would see similar behaviour if you examined the flow between H3 and H2..]]> In this post we’ll explore how DVR is implemented in OpenStack Neutron and what are some of its benefits and shortcomings.. Let’s see what we’re going to be dealing with in this post. This is a simple virtual topology with two VMs sitting in two different subnets. VM1 has a floating IP assigned that is used for external access.. Using the technique described in my earlier post I’ve collected the dynamically allocated port numbers and created a physical representation of our virtual network.: Once all changes has been made, we need to either create a new router or update the existing one to enable the DVR functionality: Now let’s see how the traffic flows have changed with the introduction of DVR. We’re going to be examining the following traffic flow: R1 now has an instance on all compute nodes that have VMs in the BLUE or RED networks. That means that VM1 will send a packet directly to the R1’s BLUE interface via the integration bridge.: . Integration bridge of compute node #3 will lookup the destination MAC address and send the packet out port 2. The reverse packet flow is similar - the packet will get routed on the compute node #3 and sent in a BLUE network to the compute node #2. External connectivity will be very different for VMs with and without a floating IP. 
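The L2 lookup described above, where a MAC entry points either at a local port or at a remote VTEP, can be modelled with a small dictionary. The MACs, VTEP address and VNI are invented for the example:

```python
# Conceptual model of a VXLAN forwarding table: a MAC either maps to a
# local port or to (remote VTEP IP, VNI) for encapsulation.
FDB = {
    "00:00:5e:00:01:01": {"local_port": "xe-0/0/0"},
    "00:00:5e:00:01:02": {"vtep": "10.0.0.2", "vni": 5010},
}

def forward(dst_mac):
    entry = FDB.get(dst_mac)
    if entry is None:
        return ("flood", None)  # unknown unicast: flood
    if "local_port" in entry:
        return ("local", entry["local_port"])
    return ("vxlan", (entry["vtep"], entry["vni"]))

print(forward("00:00:5e:00:01:02"))  # ('vxlan', ('10.0.0.2', 5010))
```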
We will examine each case individually.: In our case table 167772161 matches all packets sourced from the BLUE subnet and if we examine the corresponding routing table we’ll find the missing default route there. The next hop of this default route points to the SNAT’s interface in the BLUE network. MAC address is statically programmed by the local L3-agent. Integration bridge sends the packet out port 1 to the tunnel bridge. Tunnel bridge finds the corresponding match and sends the VXLAN-encapsulated packet to the Network node. Tunnel bridge of the Network node forwards the frame up to the integration bridge. Integration bridge sends the frame to port 10, which is where SNAT namespace is attached SNAT is a namespace with an interface in each of the subnets - BLUE, RED and External subnet SNAT has a single default route pointing to the External network’s gateway. Before sending the packet out, iptables will NAT the packet to hide it behind SNAT’s qg external interface IP. The first step in this scenario is the same - VM1 sends a packet to the MAC address of its default gateway. As before, the default route is missing in the main routing table. Looking at the ip rule configuration we can find that table 16 matches all packets from that particular VM (10.0.0.12). Routing table 16 sends the packet via a point-to-point veth pair link to the FIP namespace. Before sending the packet out, DVR translates the source IP of the packet to the FIP assigned to that VM. A FIP namespace is a simple router designed to connect multiple DVRs to external network. This way all routers can share the same “uplink” namespace and don’t have to consume valuable addresses from external subnet. Default route inside the FIP namespace points to the External subnet’s gateway IP. The MAC address of the gateway is statically configured by the L3 agent. The packet is sent to the br-int with the destination MAC address of the default gateway, which is learned on port 3. 
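The source-based lookup described above (ip rules selecting per-source routing tables) can be modelled as follows. The table IDs mirror the ones quoted in the post; everything else, including the next-hop values, is simplified for illustration:

```python
# Simplified model of Linux policy routing: rules pick a table by
# source prefix, the table then supplies the next hop.
import ipaddress

RULES = [
    # (priority, source prefix, table id)
    (100, ipaddress.ip_network("10.0.0.12/32"), 16),         # per-VM rule (FIP path)
    (200, ipaddress.ip_network("10.0.0.0/24"), 167772161),   # per-subnet rule (SNAT path)
]
TABLES = {
    16:        {"default": "rfp-link"},    # veth towards the FIP namespace
    167772161: {"default": "10.0.0.100"},  # SNAT interface in the subnet
}

def route(src_ip):
    src = ipaddress.ip_address(src_ip)
    for _prio, prefix, table in sorted(RULES):
        if src in prefix:
            return TABLES[table]["default"]
    return None

print(route("10.0.0.12"))  # rfp-link
print(route("10.0.0.33"))  # 10.0.0.100
```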
External bridge strips the VLAN ID of the packet coming from the br-int and does the lookup in the dynamic MAC address table. The frame is forwarded out the physical interface.: Now that the FIP namespace knows the route to the floating IP, it can respond to ARPs on behalf of DVR as long as proxy-ARP is enabled on the external fg interface: Finally, the DVR NATs the packet back to its internal IP in the BLUE subnet and forwards it straight to VM1. Without a doubt DVR has introduced a number of much needed improvements to OpenStack networking: However, there’s a number of issues that either remain unaddressed or result directly from the current DVR architecture: Some of the above issues are not critical and can be fixed with a little effort:.]]> In this post we’ll use Chef, unnumbered BGP and Cumulus VX to build a massively scalable “Lapukhov” Leaf-Spine data centre.. In order to help us build this in an automated and scalable way, we’re going to use a relatively new feature called unnumbered BGP.. In order to implement BGP unnumbered on Cumulus Linux all you need to is: Example Quagga configuration snippet will look like this: As you can see, Cumulus simplifies it even more by allowing you to only specify the BGP peering type (external/internal) and learning the value of peer BGP AS dynamically from a neighbor. With all the above in mind, this is the list of decisions I’ve made while building the fabric configuration: Picking up where we left off after the OpenStack node provisioning described in the previous post Get the latest OpenStack lab cookbooks Download and import Cumulus VX image similar to how it’s described here.. In this post we will explore what’s required to perform a zero-touch deployment of an OpenStack cloud. We’ll get a 3-node lab up and running inside UNetLab with just a few commands. Now. Install git and clone chef cookbooks.. Next step is to configure the server networking and kickoff the OpenStack installer. 
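The BGP unnumbered stanza discussed earlier in this section can be rendered per switch with a few lines. The ASN and interface names are examples, and the syntax follows typical Cumulus Quagga usage rather than the post's own (elided) snippet:

```python
def bgp_unnumbered_config(asn, uplinks):
    """Render a Quagga-style BGP unnumbered stanza: each interface
    neighbor uses 'remote-as external' so the peer ASN is learned
    dynamically, as the post describes."""
    lines = ["router bgp %d" % asn]
    for port in uplinks:
        lines.append(" neighbor %s interface remote-as external" % port)
    lines.append(" address-family ipv4 unicast")
    lines.append("  redistribute connected")
    return "\n".join(lines)

print(bgp_unnumbered_config(65001, ["swp1", "swp2"]))
```

A configuration-management tool like Chef can then template this per device, with the uplink list derived from the fabric's cabling plan.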
These steps will also be done with a single command: At the end of these steps you should have a fully functional 3-node OpenStack environment. This is a part of a 2-post series. In the next post we’ll look into how to use the same tools to perform the baremetal provisioning of our physical underlay network.]]>. Since. To simulate the baremetal server (10.0.0.100) I’ve VRF’d an interface on Arista “L4” switch and connected it directly to a “swp3” interface of the Cumulus VX. This is not shown on the diagram. L2 Gateway is a relatively new service plugin for OpenStack Neutron. It provides the ability to interconnect a given tenant network with a VLAN on a physical switch. There are three main components that compose this solution: Note that in our case both network and control nodes are running on the same VM.. Next, let’s enable OSPF Once OSPFd is running, we can use sudo vtysh to connect to local quagga shell and finalise the configuration.. The last command runs a bootstrap script that does the following things: At this stage everything should be ready for testing. We’ll start by examining the following traffic flow:. The packet travels through compute host 2, populating the flow entries of all OVS bridges along the way. These entries are then used by subsequent unicast packets travelling from VM-2.. We also have a default head-end replication table 22 which floods all BUM traffic received from the integration bridge to all VTEPs:. We can once again use the trace command to see the ARP request flow inside the tunnel bridge. Now we should be able to clear the ARP cache on baremetal device and successfully ping both VM-2, VM-1 and the virtual router.. In the next post I was planning to examine another “must have” feature of any SDN solution - Distributed Virtual Routing. However due to my current circumstances I may need to take a few weeks' break before going on. Be back soon! 
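Head-end replication as described above, flooding BUM traffic to every known VTEP, reduces to a one-line copy loop; the addresses here are invented:

```python
# Conceptual head-end replication: a BUM frame is copied once per
# remote VTEP, each copy tagged with the segment's VNI.
def head_end_replicate(frame, local_vtep, vteps, vni):
    return [(frame, vtep, vni) for vtep in vteps if vtep != local_vtep]

copies = head_end_replicate("bum-frame", "10.0.0.1",
                            ["10.0.0.1", "10.0.0.2", "10.0.0.3"], 5000)
print(copies)  # [('bum-frame', '10.0.0.2', 5000), ('bum-frame', '10.0.0.3', 5000)]
```

The cost that motivates the L2-population feature is visible here: every new BUM frame produces one unicast copy per remote VTEP.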
]]> In the this post we’ll tackle yet another Neutron scalability problem identified in my earlier post - a requirement to have a direct L2 adjacency between the external provider network and the network node. Before we start, let’s recap the difference between the two major Neutron network types:. The direct adjacency requirement presents a big problem for deployments where layer-3 routed underlay is used for the tenant networks. There is a limited number of ways to satisfy this requirements, for example:)... In order to make changes persistent and prevent the static interface configuration from interfering with OVS, remove all OVS-related configuration and shutdown interface eth1.300.. In the next post we’ll examine the L2 gateway feature that allows tenant networks to communicate with physical servers through yet another VXLAN-VLAN hardware gateway.]]> In the previous post we’ve had a look at how native OpenStack SDN works and what are some of its limitations. In this post we’ll tackle the first one of them - overhead created by multicast source replication.: Despite all the tradeoffs, OVS with unicast source replication has become a de-facto standard in most recent OpenStack implementations. The biggest advantage of such approach is the lack of requirement for multicast in the underlay network.. Configuration of these two features is fairly straight-forward. First, we need to add L2 population to the list of supported mechanism drivers on our control node and restart the neutron server. Next we need to enable L2 population and ARP responder features on all 3 compute nodes. Since L2 population is triggered by the port_up messages, we might need to restart both our VMs for the change to take effect.. Inside table 21 are the entries created by the ARP responder feature. The following is an example entry that matches all ARP requests where target IP address field equals the IP of VM-2(10.0.0.9). 
The resulting action builds an ARP response by modifying the fields and headers of the original ARP request message. The two features described in this post only affect the ARP traffic to VMs known to the Neutron server. All the other BUM traffic will still be flooded as described in the previous post.
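The ARP-responder rewrite described above can be mimicked in plain Python. This is a conceptual model of the flow action, not actual OVS flow syntax; the MACs are invented, and the 10.0.0.9 target mirrors the VM-2 example in the post:

```python
# Model of the OVS ARP-responder rewrite: swap the roles in the
# request and fill in the known MAC of the target VM.
def arp_respond(request, known_hosts):
    mac = known_hosts.get(request["target_ip"])
    if mac is None:
        return None  # unknown target: fall back to flooding
    return {
        "op": "reply",
        "src_mac": mac,                  # answer on behalf of the VM
        "dst_mac": request["src_mac"],
        "sender_ip": request["target_ip"],
        "target_ip": request["sender_ip"],
    }

known = {"10.0.0.9": "fa:16:3e:aa:bb:cc"}  # VM-2, as in the post
req = {"op": "request", "src_mac": "fa:16:3e:11:22:33",
       "sender_ip": "10.0.0.8", "target_ip": "10.0.0.9"}
reply = arp_respond(req, known)
print(reply["src_mac"], reply["target_ip"])  # fa:16:3e:aa:bb:cc 10.0.0.8
```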
https://networkop.co.uk/atom.xml
Trust-region subproblem solvers for nonlinear/nonconvex optimization

Project description

This package provides Python routines for solving the trust-region subproblem from nonlinear, nonconvex optimization. For more details on trust-region methods, see the book: A. R. Conn, N. I. M. Gould and Ph. L. Toint (2000), Trust-Region Methods, MPS-SIAM Series on Optimization.

The trust-region subproblem we solve is

min_{s in R^n} g^T s + 0.5 s^T H s, subject to ||s||_2 <= delta (and sl <= s <= su)

Quick install

$ sudo apt-get install gfortran
$ pip install --user numpy
$ pip install --user trustregion

For more details, see below. Note that NumPy must be installed first, as it is used to compile the Fortran-linked modules.

Interface

The Python package trustregion provides one routine, solve, with interface:

import trustregion
s = trustregion.solve(g, H, delta, sl=None, su=None, verbose_output=False)
s, gnew, crvmin = trustregion.solve(g, H, delta, sl=None, su=None, verbose_output=True)

where the inputs are
- g, the gradient of the objective (as a 1D NumPy array)
- H, the symmetric Hessian matrix of the objective (as a 2D square NumPy array) - this can be None if the model is linear
- delta, the trust-region radius (non-negative float)
- sl, the lower bounds on the step (as a 1D NumPy array) - this can be None if not present, but sl and su must either be both None or both set
- su, the upper bounds on the step (as a 1D NumPy array) - this can be None if not present, but sl and su must either be both None or both set
- verbose_output, a flag indicating which outputs to return.

The outputs are:
- s, an approximate minimizer of the subproblem (as a 1D NumPy array)
- gnew, the gradient of the objective at the solution s (i.e. gnew = g + H.dot(s))
- crvmin, a float giving information about the curvature of the problem. If s is on the trust-region boundary (given by delta), then crvmin=0. If s is constrained in all directions by the box constraints, then crvmin=-1.
Otherwise, crvmin>0 is the smallest curvature seen in the Hessian.

Example Usage

Examples for the use of trustregion.solve can be found in the examples directory on Github.

Algorithms

trustregion implements three different methods for solving the subproblem, based on the problem class (in Fortran 90, wrapped to Python):
- trslin.f90 solves the linear objective case (where H=None or H=0), using Algorithm B.1 from: L. Roberts (2019), Derivative-Free Algorithms for Nonlinear Optimisation Problems, PhD Thesis, University of Oxford.
- trsapp.f90 solves the quadratic case without box constraints. It is a minor modification of the routine of the same name in NEWUOA [M. J. D. Powell (2004), The NEWUOA software for unconstrained optimization without derivatives, technical report DAMTP 2004/NA05, University of Cambridge].
- trsbox.f90 solves the quadratic case with box constraints. It is a minor modification of the routine of the same name in BOBYQA [M. J. D. Powell (2009), The BOBYQA algorithm for bound constrained optimization without derivatives, technical report DAMTP 2009/NA06, University of Cambridge].

In the linear case, an active-set method is used to solve the resulting convex problem. In the quadratic cases, a modification of the Steihaug-Toint/conjugate gradient method is used. For more details, see the relevant references above.

Requirements

trustregion requires the following software to be installed:
- Fortran compiler (e.g. gfortran)
- Python 2.7 or Python 3

Additionally, the following python packages should be installed (these will be installed automatically if using pip, see Installation using pip):
- NumPy 1.11 or higher

Installation using pip

For easy installation, use pip as root:

$ [sudo] pip install numpy
$ [sudo] pip install trustregion

Note that NumPy should be installed before trustregion, as it is used to compile the Fortran modules.
If you do not have root privileges or you want to install trustregion for your private use, you can use:

$ pip install --user numpy
$ pip install --user trustregion

which will install trustregion in your home directory. Note that if an older install of trustregion is present on your system you can use:

$ [sudo] pip install --upgrade trustregion

to upgrade trustregion to the latest version.

Manual installation

Alternatively, you can download the source code from Github and unpack as follows:

$ git clone
$ cd trust-region

To upgrade trustregion to the latest version, navigate to the top-level directory (i.e. the one containing setup.py) and rerun the installation using pip, as above:

$ git pull
$ [sudo] pip install . # with admin privileges

Testing

If you installed trustregion manually, you can test your installation by running:

$ python setup.py test

Alternatively, the documentation provides some simple examples of how to run trustregion.

Uninstallation

If trustregion was installed using pip you can uninstall as follows:

$ [sudo] pip uninstall trustregion

If trustregion was installed manually you have to remove the installed files by hand (located in your python site-packages directory).

Bugs

Please report any bugs using GitHub’s issue tracker.

License

This algorithm is released under the GNU GPL license.
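As a rough illustration of the subproblem trustregion solves, here is a pure-Python sketch that computes only the Cauchy point, a much cruder approximation than the package's Fortran Steihaug-Toint solvers, shown for intuition only:

```python
import math

def cauchy_point(g, H, delta):
    """Cauchy point for min g^T s + 0.5 s^T H s, ||s||_2 <= delta.
    A cheap, crude approximate solution of the subproblem."""
    n = len(g)
    gnorm = math.sqrt(sum(x * x for x in g))
    if gnorm == 0.0:
        return [0.0] * n
    Hg = [sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
    gHg = sum(g[i] * Hg[i] for i in range(n))
    if gHg <= 0.0:
        tau = 1.0  # non-positive curvature: step to the boundary
    else:
        tau = min(1.0, gnorm ** 3 / (delta * gHg))
    return [-tau * delta / gnorm * x for x in g]

s = cauchy_point([1.0, 0.0], [[2.0, 0.0], [0.0, 2.0]], 2.0)
print(s)  # [-0.5, 0.0]
```

For this example the minimizer along the steepest-descent direction is (-0.5, 0), which the Cauchy point recovers exactly because the Hessian is a multiple of the identity.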
https://pypi.org/project/trustregion/
Hello, What's the best way to save session state in a plugin? I'm currently using a class/static variable. Is that a good idea or am I asking for trouble? Thanks, Xavi

I think it's OK. In addition, you can use view.settings() or window.settings() (in ST3 only) to store something persistently (more or less). Settings are stored in the session file of your project, so as long as you don't close the view (for view.settings()), the settings are available even if you restart ST. I suppose it's the same for window.settings(), except it's global for the project and not only for a specific view.

Class or module-level variables are fine for saving state that needs to be persistent across the entire ST session (both in ST2 and ST3). As bizoo pointed out, you can use view.settings() in both ST2 and ST3 to save view-specific state, and ST3 has window-specific state using window.settings(). Unfortunately this doesn't exist in ST2. To get around that, and get window-specific state in ST2, you have two options:

1. Use a class or module-level dictionary indexed by the window id. This goes something like this:
[code]
class SomeCommand(WindowCommand):
    state = {}

    def run(self):
        # set state
        SomeCommand.state.setdefault(self.window.id(), {})['mystate'] = 'myvalue'
        # get state
        SomeCommand.state.get(self.window.id(), {}).get('mystate')
[/code]

2. If you don't want the class or module-level stuff (i.e. maybe it doesn't fit with your project structure), you can use the sublime settings, like so:
[code]
class SomeOtherCommands(WindowCommand):
    def run(self):
        # long and unique name, unlikely to exist
        state = sublime.load_settings('SomeOtherCommandWindowState')
        # set state
        state.setdefault(self.window.id(), {})['mystate'] = 'myvalue'
        # get state
        state.get(self.window.id(), {}).get('mystate')
[/code]

Any way you slice it, per-window state gets a little dirty in ST2.
https://forum.sublimetext.com/t/saving-session-state-in-a-plugin/10414/2
I know how to change a char in a char array but I wonder if we can change one element of a C-string pointed by a char pointer?
Code:
int main()
{
    char * p = "Hello World";
    *(p+1) = 'x';
    cout << p << endl;
    return 0;
}

Well, I feel a bit strange to see a guy with 643 posts not using class string in C++. You handle strings the C way.

And of course, no. You can use some flag in the compilation procedure to allow this to happen. Why? Because this is a string literal! String literals can not modify their data, but they can modify their pointer. On the other hand, if you had an array, then the opposite stands true. You can not modify the pointer, but you can modify the data :) In short, change your definition of `p' to:
Code:
char p[] = "Hello World";

Example with string literal.
Code:
#include <stdio.h>

int main(void)
{
    char* strLiteral = "I am a string literal!";
    char* origin = strLiteral;
    printf("I am about to chrash...!\n");
    *(strLiteral+1) = 'W';
    printf("Did I?\n");
    printf("%s\n", origin);
    return 0;
}
Output:
Code:
linux05:/home/users/std10093>px
I am about to chrash...!
Segmentation fault

While with the array:
Code:
#include <stdio.h>

int main(void)
{
    char str[15] = "a string";
    printf("I can modify the data...\n");
    *(str+1) = 'W';
    printf("But not the pointer...!\n");
    str++;
    printf("%s\n", str);
    return 0;
}
On linux I got output:
Code:
linux05:/home/users/std10093>gcc -Wall px.c -o px
px.c: In function 'main':
px.c:11: error: lvalue required as increment operand
and on windows:
Code:
main.c: In function `main':
main.c:11: error: wrong type argument to increment
make[2]: *** [build/Debug/Cygwin-Windows/main.o] Error 1
make[1]: *** [.build-conf] Error 2
make: *** [.build-impl] Error 2
Hope that helps!

Thank you, that helped.

So why are you using C strings, now?

Maybe he uses the string class, but wanted to remember how it feels handling strings in C way :P

Let's not assume things... though, when these things show up, I get kind of worried that Ducky is trying to climb up the wrong path again...

char *p = "This";

p can be reassigned, because it is a pointer. And that very syntax is only allowed as a convenience. A convenience which C++ copied from the beginning, I think, but nonetheless. Before C was standardized, you needed to do magic like this:

char * p;
char cs[] = "This";
p = cs;

I will even link a FAQ answer later that shows the difference between a pointer pointing to an array object and an array itself with pictures. Arrays cannot be reassigned, because that's illegal. If arrays were pointers, then it would be legal.

str++;

is:

str = str + 1;

which means that this requires array assignment. With a pointer, this operation is fine, though it's not guaranteed to point to a dereferenceable location. In other words, you can do this:
Code:
#include <stdio.h>
#include <ctype.h>

int main(void)
{
    char foo[] = "My sample string.";
    for (char *bar = foo; *bar != '\0'; bar++)
    {
        *bar = toupper(*bar);
    }
    printf("%s\n", foo);
    return 0;
}
But you cannot increment foo here, because foo is an array type. Try, and you will get that error: foo is the wrong type to increment.

Try reading this FAQ answer.

lol @Elysia

I was looking at somebody's code and I was wondering why was that working when 'pa' is declared as a pointer so the value it was pointing couldn't be changed. Is it because of VirtualProtect()?
Code:
DWORD_PTR pa = 0x12340011;
BYTE *p = (BYTE *)pa;
VirtualProtect((void *)pa, 5, PAGE_EXECUTE_READWRITE, &OldProtect);
p[0] = (BYTE)0xE9;
for (int i = 0; i < 4; i++)
    p[i + 1] = p2[i];

It's because it is of type void, so you can't have an array of type void, but a pointer to void! :)

Elysia, I told that in terms of fun mostly, don't get so upset :) However, I like your example, so I am going to augment my upload on my page with your example and the faq link. Of course you will get the credits ;)
EDIT: Here it.

Please. I give 0 ........s about credit here.

Quote: I believe I said that too. That is what I am trying to say... but thanks for the FAQ, since I didn't know about this page.

Exactly! The code I posted was for demonstration of what you can do and what you can't! I thought this was clear by the printfs ;)
https://cboard.cprogramming.com/cplusplus-programming/154313-change-one-element-string-pointed-char-pointer-printable-thread.html
CC-MAIN-2017-04
refinedweb
940
75.61
This is the third article in our OpenStack API tutorial series. To learn how you can deploy and administer OpenStack in an enterprise environment, check out OpenStack training from The Linux Foundation. Today let's use our knowledge of the OpenStack API (from Intro to the OpenStack API and Spinning up a Server with the OpenStack API) to put to use an OpenStack SDK. The API is the basic interface through which you communicate with your OpenStack infrastructure. Using the API you make RESTful calls. While you're free to use a library such as libcurl within your programs, making straight HTTP calls is a bit cumbersome. So instead, there are various libraries in different programming languages that simplify the process of calling into the OpenStack API. These libraries are called SDKs. Let's look at one of them. For these examples, I decided to use the Go programming language. The reason I chose that is that it's a fun, cool language to use, and I wanted to steer towards something different from the usual Python used in OpenStack. (The only SDK officially create by the OpenStack group is for Python. All of the other SDKs, including the one for Go, are unofficial.) Note that as I mentioned previously, I'm using Rackspace for these examples. (See the first article for an explanation why.) Go Language There are a few different SDKs for the Go language; I chose one created by Rackspace called Gophercloud. I'm going to assume you already have Go installed and your GOPATH set up appropriately. Now let's install Gophercloud and try it out. To install it type: go get github.com/rackspace/gophercloud Easy enough. Now create a new text file using your favorite editor. I'm going to call my file test1.go. Because I want you to be able to try this out, instead of providing snippets, I'm going to provide you with an entire program you can type (or paste) in. 
package main import ( "fmt" "github.com/rackspace/gophercloud" "github.com/rackspace/gophercloud/openstack" "github.com/rackspace/gophercloud/pagination" "github.com/rackspace/gophercloud/openstack/compute/v2/servers" ) func main() { opts := gophercloud.AuthOptions{ IdentityEndpoint: "", Username: "RackspaceUsername", Password: "RackspacePassword", } provider, err := openstack.AuthenticatedClient(opts) if err != nil { fmt.Println("AuthenticatedClient") fmt.Println(err) return } client, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{ Region: "IAD", }) if err != nil { fmt.Println("NewComputeV2 Error:") fmt.Println(err) return } opts2 := servers.ListOpts{} pager := servers.List(client, opts2) pager.EachPage(func(page pagination.Page) (bool, error) { serverList, err := servers.ExtractServers(page) if err != nil { return false, err } for _, s := range serverList { fmt.Println(s.ID, s.Name, s.Status) // servers.Delete(client, s.ID); } return true, nil }) } In this code, replace RackspaceUsername with your username and RackspacePassword with your password. Next, log into your Rackspace console and create a server. (We could use code to create a server, but I just want to list the running servers in this short example.) Go ahead and create a couple servers. Create them both in a single region. Then in the code above, replace IAD with the abbreviation for the region if you're using one other than Northern Virginia. Finally run your code by typing go run test1.go replacing test1.go with whatever filename you chose. Here's what I see when I run it: jeff@JeffLinuxMint16 ~/dev/tests/openstack/go $ go run test1.go dce631ce-71ff-4b9a-89f0-0ea7bf4a6770 Cloud-Server-02 BUILD 5500e1d6-2782-4971-9045-72c9b744d482 Cloud-Server-01 ACTIVE I ran the test1.go program after one of the servers was finished being built and was active, and the second was still in the process of building. You can see in this list that indeed one is listed as BUILD, and the other as ACTIVE. 
Digging into the code

Now let's look at the code in detail. The imports section lists the external packages you need in your code. I'm using fmt so I can print out information, as well as several of the gophercloud packages. This part is kind of tricky; you need to look at the documentation and list exactly the gophercloud packages you need, as there are different packages for different parts of the API.

The first part of the code inside the main function creates an authentication options object, filling it in with your username, password, and the URL used for authenticating. Remember, to authenticate through the API, you send a request to the URL, passing your username and password; you get back an authentication token. Behind the scenes, that's exactly what is going to happen here, but with less coding. Here, we're filling that information into a structure; then in the line that follows we call openstack.AuthenticatedClient, passing in that structure. We'll either get back an error or token information, and we save the token information in a variable called provider. (I just used the variable name "provider" because that's what the samples in the Gophercloud documentation use.) Next, we check if there's an error, and if so, print it out and exit. Otherwise we continue.

The next step is to use the Compute service. To do so, we call openstack.NewComputeV2, passing in the provider variable from the previous call and the endpoints for this call. The second parameter to this call is an EndpointOpts structure, which specifies the endpoints for the call. In our case, that's just a region, so we set it to IAD, or whatever region you're using. The NewComputeV2 function will either return an error or an object representing the Compute service. We can use that object to make further API calls.

For the further API calls, we're going to call servers.List. This part is a bit odd in terms of object-oriented programming.
We don't call a method on the Compute service object to list the servers; instead, we call the List function found in the servers package (which we imported at the top of the file), passing in the Compute service object.

We then page through the list of servers. Remember, we're limited in how much information we get back in each API call. If there's more information, we need to make further requests to get the additional data, continuing until there's no more data. That's why we use a pager object and call its EachPage function. That function takes a callback function as a parameter; the callback in turn has a parameter representing the current page. Using that page, we can extract the servers' information from the page of data.

Finally, we loop through the list of servers extracted from the page. For each server we print out the ID, Name, and Status.

Now you might be wondering how I knew I could print out ID, Name, and Status. What other parameters are there? For that I looked at the documentation. The documentation has URLs that match the URLs in the import statements, thanks to a cool website called godoc.org. The import statement in question is this:

github.com/rackspace/gophercloud/openstack/compute/v2/servers

Thus, tack godoc.org/ on before that and open the resulting address in your browser:

https://godoc.org/github.com/rackspace/gophercloud/openstack/compute/v2/servers

This is the full documentation for the servers package. Here's a screenshot:

About halfway down this screenshot, for example, you can see the List function and the parameters it takes: a ServiceClient and a structure containing options. The function returns a Pager object. Farther down this page is a type called Server. This is where I found the members for the Server object, including Name, and so on. You can see other important parameters are AccessIPv4 and HostID.

Testing for errors

You can see the different kinds of errors you get by changing some of the parameters.
For example, if you change the Region parameter in the NewComputeV2 call to a nonexistent region, like so:

Region: "ABCDEF"

then you'll see the following output:

NewComputeV2 Error:
No suitable endpoint could be found in the service catalog.

Shutting Down the Servers

Don't forget to shut down your servers! Otherwise you'll have to pay more than you intended to at the end of the month. Let's modify the above code to shut down the servers. (Put the region back to your region if you changed it to test out the error.)

Now if you look in the servers page of the documentation, you'll find a Delete function. That function takes two parameters: a ServiceClient and an id. Let's add a line of code to our program that deletes the servers right after printing out the information.

Warning! Warning! This will delete all the servers in your account for that region. If that's not what you want, don't run this. Instead, delete them manually from the Rackspace web console!

Here's how you can delete all the servers. You can see in the code above that there's a commented-out line. Uncomment it so it looks like this:

servers.Delete(client, s.ID);

Then, if you want to delete all your servers, run the program again. This time it will delete each server as it prints it out. Wait a few seconds to give Rackspace time to delete the servers. Then run the program again and you won't see any servers listed. Done!

Finding all the docs

You can explore the documentation in full on the godoc server. Start at this page and from there you can drill down into the various services. There's also some basic getting-started information. Scroll to the very bottom of the page and you'll see a section called Directories where you can find the available services and objects contained within.

Conclusion

The OpenStack SDKs make it easier to call into the OpenStack APIs. There are different SDKs for different languages. Some languages are a bit incomplete (for example, there isn't much available for C++).
Other languages have several choices. Choose the language you're comfortable in, and make sure you understand the API before moving on to the SDK. Then the SDK will make a lot more sense.

This concludes our brief introduction to the OpenStack API and SDKs. Is there more you'd like to see? If so, let us know in the comments.

See the previous two articles in this three-part series:

Intro to the OpenStack API
Spinning up a Server with the OpenStack API
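The pager/EachPage callback contract used in the listing program is worth internalizing. Here is a minimal, self-contained Python model of the same idea (illustrative only; Gophercloud's real pager fetches each page over HTTP, and all names here are mine, not part of any SDK):

```python
# Model of the pager/EachPage contract: iterate over results one "page"
# at a time, invoking a callback per page. The callback returns True to
# continue paging and False to stop early.

def each_page(results, page_size, callback):
    for start in range(0, len(results), page_size):
        page = results[start:start + page_size]
        if not callback(page):
            break

# Fake server records standing in for what ExtractServers would return.
servers = [("id-%d" % i, "Cloud-Server-%02d" % i, "ACTIVE") for i in range(1, 6)]

seen = []

def collect_page(page):
    for server_id, name, status in page:
        seen.append((server_id, name, status))
    return True  # keep fetching pages

each_page(servers, 2, collect_page)
```

Running this walks three pages (two of size 2 and one of size 1), so all five records end up in seen, mirroring how the Go callback accumulates every server across pages.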
https://www.linux.com/learn/how-use-openstack-sdk-go
#include <Stream_Modules.h>

Inheritance diagram for ACE_Thru_Task<>:

Construction.

Destruction.

[virtual] Hook called from ACE_Thread_Exit during thread exit and from the default implementation of module_closed. In general, this method shouldn't be called directly by an application, particularly if the task is running as an Active Object. Instead, a special message should be passed into the task via the put method defined below, and the svc method should interpret this as a flag to shut down the task. Reimplemented from ACE_Task_Base.

Dump the state of an object. Reimplemented from ACE_Task< ACE_SYNCH_USE >.

Terminates object when dynamic unlinking occurs. Reimplemented from ACE_Shared_Object.

Returns information on a service object.

Initializes object when dynamic linking occurs.

Hook called to initialize a task and prepare it for execution. args can be used to pass arbitrary information into open.

A hook method that can be used to pass a message to a task, where it can be processed immediately or queued for subsequent processing in the svc hook method.

Run by a daemon thread to handle deferred processing.

Declare the dynamic allocation hooks.
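As the name suggests, a "thru" task simply forwards whatever is put to it, unchanged, to the next stage. This toy Python model captures that idea only; it is not how ACE implements ACE_Thru_Task, which plugs into ACE's C++ Stream/Module plumbing:

```python
# Toy model of a pass-through task: put() forwards messages unchanged to
# the next task in the chain; a task with no successor just collects them.

class ThruTask:
    def __init__(self, next_task=None):
        self.next_task = next_task
        self.received = []

    def put(self, message):
        if self.next_task is not None:
            # Pass the message straight through, unmodified.
            self.next_task.put(message)
        else:
            self.received.append(message)

sink = ThruTask()
thru = ThruTask(next_task=sink)
thru.put("hello")
thru.put("world")
```

Everything put into the thru task arrives at the sink in order, with the thru stage itself holding nothing.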
https://www.dre.vanderbilt.edu/Doxygen/5.4.4/html/ace/classACE__Thru__Task.html
This file provides firmware functions to manage the following functionalities of the Reset and clock control (RCC) peripheral.

#include "stm32f37x_rcc.h"

##### RCC specific features #####

After reset the device is running from HSI (8 MHz) with Flash 0 WS; all peripherals are off except internal SRAM, Flash and SWD.

(#) There is no prescaler on the High speed (AHB) and Low speed (APB) busses; all peripherals mapped on these busses are running at HSI speed.
(#) The clock for all peripherals is switched off, except the SRAM and FLASH.
(#) All GPIOs are in input floating state, except the SWD pins which are assigned to be used for debug purposes.

Once the device has started from reset, the user application has to:

(#) Configure the clock source to be used to drive the System clock (if the application needs higher frequency/performance)
(#) Configure the System clock frequency and Flash settings
(#) Configure the AHB and APB bus prescalers
(#) Enable the clock for the peripheral(s) to be used
(#) Configure the clock source(s) for peripherals whose clocks are not derived from the System clock (SDADC, CEC, I2C, USART, RTC and IWDG).
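The reset state described above (HSI at 8 MHz, no AHB/APB prescaling) and the prescaler-configuration step come down to simple frequency arithmetic. This Python sketch models only those relationships, not the actual RCC registers; the 72 MHz and divide-by-2 values below are illustrative examples, not taken from this file:

```python
HSI_HZ = 8_000_000  # after reset the device runs from the 8 MHz HSI

def bus_clocks(sysclk_hz, ahb_prescaler=1, apb_prescaler=1):
    """Derive the AHB (HCLK) and APB (PCLK) frequencies from SYSCLK."""
    hclk = sysclk_hz // ahb_prescaler
    pclk = hclk // apb_prescaler
    return hclk, pclk

# Out of reset there is no prescaling, so the busses run at HSI speed:
hclk, pclk = bus_clocks(HSI_HZ)

# After the application reconfigures the system clock and prescalers
# (example values only):
hclk2, pclk2 = bus_clocks(72_000_000, ahb_prescaler=1, apb_prescaler=2)
```

This mirrors step-by-step what the configuration list above asks the application to do: pick a system clock, then set the AHB and APB prescalers that derive the bus clocks from it.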
https://ese.han.nl/STM32/stm32f37stdlibrary/html/stm32f37x__rcc_8c.html
A Flutter plugin for making payments via the Paystack Payment Gateway. Fully supports Android and iOS.

There are two ways of making payment with the plugin: you can choose to initialize the payment locally or via your backend.

PaystackPlugin.checkout() returns the state and details of the payment. It is recommended that when PaystackPlugin.checkout() returns, the payment should be verified on your backend.

Card validation methods:

- Performs a check that the card number is valid.
- Checks that the card security code is valid.
- Checks that the expiry date (combination of year and month) is valid.
- Checks that the card is valid. Always do this check before charging the card.
- Returns an estimate of the string representation of the card type (issuer).

Verifying transactions is quite easy. Just send a HTTP GET request to [TRANSACTION_REFERENCE]. Please check the official documentation on verifying transactions.

Paystack provides tons of payment cards for testing.

For help getting started with Flutter, view the online documentation.

An example project has been provided in this plugin. Clone this repo and navigate to the example folder. Open it with a supported IDE, or execute flutter run from that folder in a terminal.

The project is open to public contribution. Please feel very free to contribute.

Experienced an issue or want to report a bug? Please report it here. Remember to be as descriptive as possible.

Thanks to the authors of the Paystack iOS and Android SDKs. I leveraged their work to bring this plugin to fruition.

To add your app here, just shoot me an email at faradaywilly(at)gmail.com.

example/README.md

Demonstrates how to use the flutter_paystack plugin. For help getting started with Flutter, view our online documentation.
Add this to your package's pubspec.yaml file:

dependencies:
  flutter_paystack: ^1.0.1

You can install packages from the command line with Flutter:

$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Now in your Dart code, you can use:

import 'package:flutter_paystack/flutter_paystack.dart';
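One of the validation helpers listed earlier checks the card expiry date, which comes down to simple calendar logic: the month must be 1-12 and the year/month combination must not be in the past. Here is a hypothetical Python model of that check (the plugin's actual Dart implementation may differ, and valid_expiry is my own name):

```python
# Hypothetical model of the "is the expiry date valid" check.
# The reference month/year are passed in explicitly to keep it testable.

def valid_expiry(month, year, now_month, now_year):
    if not 1 <= month <= 12:
        return False           # not a real calendar month
    if year < now_year:
        return False           # expired in an earlier year
    if year == now_year and month < now_month:
        return False           # expired earlier this year
    return True                # expires this month or later
```

A card expiring in the current month is still treated as valid, which matches the usual convention that cards work through the end of their expiry month.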
https://pub.dev/packages/flutter_paystack
Checking for "observe_field" gave me the answer. If you use the following test code in the controller:

def live_search
  render_text "<p> " + params.inspect + ".</p>"
end

you'll see that the function 'observe_field' will not return the text unless told to do so (you were probably expected to retrieve the field directly through the DOM). Change the rhtml to:

<label for="searchtext">Live Search:</label>
<%= text_field_tag :searchtext %>
<%= observe_field(:searchtext,
      :frequency => 0.25,
      :update => :search_hits,
      :with => "searchtext",
      :url => { :action => :live_search }) %>

<p>Search Results:</p>
<div id="search_hits"></div>

That will do it (until I learn to fetch the fields directly).

Best Regards
G.
http://archive.oreilly.com/cs/user/view/cs_msg/81873
HttpHandler is the low level Request and Response API to service incoming HTTP requests. All handlers implement the IHttpHandler interface. There is no need to use any extra namespace to use it, as it is contained in the System.Web namespace. Handlers are somewhat analogous to Internet Server Application Programming Interface (ISAPI) extensions.

In this article, I am going to explain how to use HttpHandler to create a SEO friendly as well as user friendly URL. During this article, I will create two .aspx files and one HandlerUrl.cs class file. I am assuming here that I have to show a different article based on the id I get from the request. But I will not get the id as a querystring but as part of the name of the page, like article1.aspx or article2.aspx. Here, 1 and 2 are my article ids. I will extract the id and send it into my page (showarticle.aspx) using the Server.Transfer method, so that the URL in the browser will look like article1.aspx but internally it will be served as showarticle.aspx?id=1.

I am going to show how to do this in a few steps. (To get hundreds of real time .NET How to solutions, click here.)

Right click your App_Code folder, add a .cs file named HandlerUrl.cs, and write the following code:

namespace UrlHandlerNameSpace
{
    /// <summary>
    /// Demo url HttpHandler
    /// </summary>
    public class DemoUrlHandler : IHttpHandler
    {
        /// <summary>
        /// Process my request
        /// </summary>
        /// <param name="context"></param>
        public void ProcessRequest(HttpContext context)
        {
            // get the requested page name like article1.aspx
            string strUrl = System.IO.Path.GetFileName(context.Request.RawUrl.ToString());

            // get the page name without extension eg. "article1"
            int len = strUrl.IndexOf(".");

            // as the length of "article" is 7
            int sep = 7;

            // subtract the length of the "article" word from the complete length
            // of the page name to get the length of the id, as it may be
            // 1, 100 or 50000 (1 character long, 3 chars long or 5 chars long)
            len -= sep;

            // now get the exact id
            string id = strUrl.Substring(sep, len);

            // Now transfer the request to the actual page that will show
            // the article based on the id
            HttpContext.Current.Server.Transfer("~/urlhandler/showArticle.aspx?id=" + id);
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }
}

As you can see, I have inherited the IHttpHandler interface into the DemoUrlHandler class, so I have to implement two members of it - ProcessRequest and IsReusable. ProcessRequest is the method that handles my request and sends it, using Server.Transfer, to showArticle.aspx, specifying the id as the querystring. I have done some calculations to extract the id from the name of the requested page.

Go to your web.config file and add the handler for your request as below, inside the system.web tag:

<httpHandlers>
  <add verb="*" path="article/article*.aspx" type="UrlHandlerNameSpace.DemoUrlHandler"/>
</httpHandlers>

In the above code, verb="*" registers the handler for all HTTP verbs (GET, POST and so on), path is the URL pattern to intercept, and type is the handler class, DemoUrlHandler in the UrlHandlerNameSpace namespace.

For testing purposes, I have created a default.aspx page that will list different URLs like article1.aspx, article2.aspx, article3.aspx, etc., as displayed in the picture below:

This page will show my article based on the querystring passed to it from my handler.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        if (Request["id"] != null)
        {
            // Call your function to get data from the database based on the id.
            // This is just a demo application, so I am writing the title based on the id.
            string title = "Title for Article " + Request["id"].ToString();
            litPageTitle.Text = title;
            this.Title = title;
        }
    }
}

Now make the default.aspx page the startup page and run your demo project. You should see a page like default.aspx above; try clicking one of the URLs and you will see the corresponding page (below image). Notice the address bar in the picture below. The URL is "/article/article8.aspx" but internally this page is being served by showarticle.aspx with querystring 8 (/urlhandler/showarticle.aspx?id=8). In this way, you are not showing the querystring in the browser, and your URL is neat and clean. This page will be understood by the search engines as a complete stand-alone page rather than a single page with different querystring values, and will be given much more weight.

To form a SEO friendly and user friendly URL, we don't need to use any third party component. Using an ASP.NET HttpHandler, we can do it easily by writing a small amount of code.

Thanks and happy coding!!!

Read my many articles.
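The string arithmetic inside ProcessRequest is easy to sanity-check outside ASP.NET. Here is the same slicing logic modeled in Python (extract_id is an illustrative helper of mine, not part of the module):

```python
# Mirror of the C# id extraction: the id sits between the fixed prefix
# "article" (7 characters) and the "." that starts the extension.

def extract_id(page_name):
    sep = 7                              # length of the word "article"
    length = page_name.index(".") - sep  # number of id characters
    return page_name[sep:sep + length]
```

Whatever the length of the numeric part, the subtraction yields exactly the id's character count, so one-digit and five-digit ids both come out intact.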
http://www.codeproject.com/Articles/27448/Writing-SEO-friendly-url-using-HttpHandlers-in-ASP?fid=1459817&df=90&mpp=10&sort=Position&spc=None&tid=4288627
Opened 8 years ago
Closed 7 years ago
Last modified 7 years ago

#4626 closed (wontfix)

pass the context to extra_context functions

Description (last modified by mtredinnick)

I found it helpful for extra_context functions to have access to the context. The change is in line 80:

< if callable(value):
<     c[key] = value()
> if callable(value):
>     c[key] = value(c)

I'm not sure who decides on whether it should go into the version - it does change the API.

Change History (7)

comment:1 Changed 8 years ago by mtredinnick

comment:2 Changed 8 years ago by bennydaon@…

This change is to list_detail.py and I use it for the function display_pages:

def display_pages(c):
    '''
    return an HTML string that renders: "1 2 3 4 ..." and has a link
    for all pages except the current.
    '''
    return ' '.join([p==c['page'] and '%s ' % p or '<a href="/foo/page/%s">%s </a>' % p
                     for p in range(1, c['pages']+1)])

comment:3 Changed 8 years ago by SmileyChris

- Resolution set to wontfix
- Status changed from new to closed

Sounds like you should be doing this as a template tag, not an extra_context function. Also, taking into consideration the fact that this would break everyone's current functionality, I'll call it and say wontfix.

comment:4 Changed 7 years ago by sciyoshi@…

- Cc sciyoshi@… added
- Resolution wontfix deleted
- Status changed from closed to reopened

I'd like to ask for this again. I have the following situation: two models, Category and Post, where each Post has a Category. I want object_detail for Categories to display a list of the category's posts by {% extend %}ing 'post_list.html'. Unfortunately, doing this any other way (i.e., a context processor or a list_detail view on the Post model) is either ugly or requires multiple lookups for the category.
With this, everything is much easier:

(r'/categories/(?P<slug>.+)/$', object_detail, dict(
    queryset=Category.objects.all,
    template_object_name='category',
    slug_field='slug',
    extra_context=dict(
        post_list=lambda context: context['category'].get_posts(),
    ),
))

Note that you can add this functionality without breaking anything:

if callable(value):
    try:
        c[key] = value()
    except TypeError:
        c[key] = value(c)

comment:5 Changed 7 years ago by mtredinnick

- Resolution set to wontfix
- Status changed from reopened to closed

Please do not reopen tickets that have been marked as wontfix. We ask this for a reason: so that we don't continually have requests reopened just because people don't agree with them. If you are unhappy with the resolution, the contributing.txt document (also available in our documentation section online) explains what the next step is. Read the section in there about ticket statuses. I am not commenting on your request, since following the normal procedure is important here.

comment:6 Changed 7 years ago by daonb <bennydaon@…>

I've opened this ticket, but fully agree with the decision to close it as 'wontfix'. If you need to access any of the context parameters, you're better off using a custom templatetag. I find that templatetags make reuse easier and the code more readable. Templatetags are quite easy to write and can accept any number of parameters. If I understand your issue, you need a template tag that will return a post list based on the 'category' parameter:

{% get_posts for category as post_list %}

You can find an explanation on how to write your own templatetags here. Hope it helps.

comment:7 Changed 7 years ago by Samuel Cormier-Iijima <sciyoshi@…>

The problem I was having was that context variables set in the child template (through a template tag) didn't seem to be accessible from the parent. The problem was that I wasn't setting the variable inside a block tag, so it wasn't getting pushed onto the parent's context.
Anyways, although I think it would be nice for small things like this to not have to write a whole template tag, I respect the developers' decision to wontfix this.

Thanks, Samuel

(Fixed description formatting.)

What file are you trying to change here? What is an example use-case?
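The backwards-compatible dispatch suggested in comment 4 (call the value with no arguments, and fall back to passing the context on TypeError) can be exercised standalone. The function name and sample data below are mine, not Django's:

```python
# Model of the proposed extra_context handling: callables are invoked,
# optionally receiving the context dict. Caveat: a zero-argument callable
# that itself raises TypeError internally would be mis-dispatched.

def resolve_extra_context(extra_context, c):
    for key, value in extra_context.items():
        if callable(value):
            try:
                c[key] = value()
            except TypeError:
                c[key] = value(c)
        else:
            c[key] = value
    return c

context = resolve_extra_context(
    {
        "plain": 1,
        "no_args": lambda: 2,
        "with_context": lambda c: c["category"].upper(),
    },
    {"category": "silk"},
)
```

Existing zero-argument callables keep working unchanged, while a one-argument callable receives the context, which is exactly the compatibility argument made in the ticket.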
https://code.djangoproject.com/ticket/4626
First suicide by a Karnataka silk farmer couple was reported this week in Mandya

When a silk reeler informed Swami that his cocoons would fetch him only Rs 120 a kg — and not the promised Rs. 320 — the 35-year-old sericulture farmer was devastated. At midnight on March 5, relatives heard him quarrel with his wife. Nothing, however, could have prepared them for what they saw the next morning. Swami was found hanging from a tree on his field, while his wife Vasantha had hung herself from the ceiling. The couple, residents of Valagere Doddi, around 60 km from Bangalore in Mandya district, leave behind three children: Sharath (3), Keertana (5) and Chandrika (6).

Poignant story

This is the first suicide by a sericulture farmer reported in Karnataka. This poignant story, however, is much more than just a case of one family's inability to cope with adversity. The suicides appear to have been triggered by the plunge in the price of silk cocoons, following the announcement in the Union budget that the import duty on raw silk would be cut from 30 to a mere five per cent. Anticipating this, reelers pulled out of the market by late February, and cocoon prices plummeted from Rs. 380 a kg in January to Rs 120.

Swami's grieving father, Boregowda, 65, himself a sericulture farmer, says his son owed Rs.1.2 lakh to debtors. Two of his six crops in 2010 had failed, and the land lease (at Rs.10 per plant a year) was outstanding. “Now how will a man of my age fend for three children?” he asks. While he hopes the government will offer compensation and make provisions for the children's future, he points out that no district official has visited them.

It is not just the crash in cocoon prices that is driving farmers to distress. Government support for sericulture has weakened.
For example, the Government Model Grainage, a few yards from Swami's home, barely functions, forcing them to buy from private agents at Rs.1,200-Rs.1,800 a kg of eggs (compared with the government rate of Rs. 300). The inputs cost ranges from Rs.80 to Rs. 120 a kg of cocoon. “Many of us buy water for this labour-intensive crop. If we aren't assured at least Rs. 250 per crop, how will we survive?” asks Jayaramegowda. In the absence of any institutional credit, silk farmers often borrow from private lenders at high rates of interest.

Risk

Karnataka produces more than 60 per cent of the country's silk. In Mandya district, an estimated 92 per cent are small and marginal farmers. Here, the area under mulberry came down from 16,416 hectares in March 2010 to 12,398 hectares in January 2011. In that period, cocoon prices increased from an average of Rs.192 to Rs. 229 a kg. When it touched Rs. 380 in December, farmers were tempted to take investment risks, says Krishne Gowda, sericulturist and district convenor of the Karnataka Prantha Raithara Samiti (KPRS). The Samiti demands that the government fix a minimum price of Rs. 400 a kg for cocoons. More importantly, it wants the import duty reverted to 30 per cent. “In January, global tenders for importing 2,500 tonnes of duty-free silk were finalised. Chinese silk is already cheaper than Indian silk. These duty cuts will drive us out of business,” he says. Though the trigger is a flawed policy of the Central government, the State, too, has failed in responding to the crisis, he says.

Keywords: Farmers' issue, sericulture, import duty
A shame it is this UPA govenment. UPA 1 was okay but two has even stopped pretending to do anything for the poor. This week's Frontline had a cover story exposing every lie that the media, politicians and Pranab told us about the budget. swami and vasantha hung themselves... why? they could not repay the loan of 80000, which had accumulated to 120000...why? they were sericulture farmers and cocoon prices went down to 120/kg from 380... why? cheaper chineese silk flooded our market... why? import duty was reduced from 30% to 5% in the last central govt budget... why? pranab mukherjee prepared it and he follows neo-"liberal' policy .... liberal - liberates you from life ! It is shocking that these things happen and it does not figure on the government's agenda. And this is not just in India but all developing countries where there are interests in only GDP growth and a fiscal agenda. there is not even a pretense of social good. It must be noted that they are not landed. land redistribution has failed and so people will continue to suffer. this is bad news for Inida Forest product,all endavor to increase foretation will alos hamper as Tassar is one of the silk which helps in increasing forestation reduce,we were expacting Inida will increse duty to help it is forestation . Farmer suicides are common around the world has to do with the capitalist agenda of corporotisation of agriculture and withdrawal of support systems for farmers. What i appreciate is the simple systematic way in which this piece brings out the problems and the factors that led to the situation. kudos! Even in the US this is a common story.
The difference here is in a country like India where there are so many mouths to feed and so much poverty and a need to keep food cheap and affordable, such corporatisation can spell doom. This has to stop and the government should see reason. Let all the political parties forget all the differences and save the basic fabric of our country. Will any MNC's and other corporates come forward to save these poor farmers. NGO's it is wake-up call. We feel that we have a Chinese Govt and not Indian Govt. Our Govt has scant regard for our farmars and is only interested to help the Chinese. This must be stopped with immediate effect. We fail to understand the sudden change in the attitude of the Finance Minister by reducing the import duty on Raw Silk from 30% to 5% . a reduction of 25%! They have ignored the following completely. a). Loss of revenue to The Central Govt, without considering the benifits to the villagres, farmers, reelers,working in sericulture for generations. b).Making import yarns cheaper, those are of better quality. Is a definate move to kill the sericulture. c). One side Govt had imposed Anti Dumping Duty on the Raw Silk Yarn to protect the home industry , now where that attitude has gone? This needs to be brought to the immediate attention. A minimum support prize must be fixed. All media celebrated the Karnataka's CM's pro-farmer budget. How can it be pro-farmer if they completely let the market run haywire and do nothing to control the fluctuations. The farmer is last on their mind. This is a horrifying story. Why is the government not bothered about these farmers? We are so proud of our silk, but we do not bother to make sure that these people who manufacture it live a decent life? What kind of government do we have? Are we not ashamd to be in this country? Please Email the Editor
http://www.thehindu.com/news/national/article1529739.ece
CC-MAIN-2014-42
refinedweb
1,315
75
Google Analytics is great. A free, fully featured website tracking and reporting system. It's even better if you use Google Adwords and want to test out how different ads work with getting visitors. DotNetNuke is great. A free, fully featured web platform in which the sky is the limit for developing websites quickly and easily.

What I found, though, was that putting the two together, while not difficult, is quite clunky. So, I set out to create a much better way.

Google Analytics works by you (the webmaster or developer) inserting a piece of JavaScript into each page you want tracked by Google. As visitors view your website pages, Google records all of the page views. You then go to the Analytics website and view your reports. Entering JavaScript into pages is tough with DotNetNuke, as you don't want to modify the base code. You can put the script into a skin, but if you've downloaded a skin for your site, you mightn't want to fiddle with it. There's no easy way to control where to put some JavaScript code on a DotNetNuke page. Enter the DotNetNuke Google Analytics module - a free and easy way to integrate the two together.

The Google Analytics module is a 'normal' DotNetNuke module, so if you have admin permissions on your DotNetNuke website to install modules, then you can install this one. It doesn't use any database tables or web.config modifications, so should be OK for anyone on shared hosting to use. It is a lightweight module, and is designed not to place extra load onto the website.

Step 0: Create and/or locate your login details for your Google Analytics account. If you haven't already got an account, follow the instructions on the Analytics site to create an account and website profile.

Step 1: Download the Module install Zip file from this page.

Step 2: Install the module by using the 'Module Install' functionality in your DotNetNuke portal.
To check that the module installed correctly, it should appear in the 'Module' drop down list in your Control Panel, under the name "iFinity Google Analytics".

Step 3: Go to the home page of your DotNetNuke portal, and use the 'Add New Module' functionality to add an instance of the iFinity Google Analytics module to the page. I normally use the bottom pane to keep it out of the way of other modules. Don't be worried that you can see the module in 'Edit' mode. The module will disappear when you log out as Administrator.

Step 4: Click on the 'Settings' control for the Google Analytics module you just created. Then, scroll the Settings page until you get to the 'Advanced Settings' section, and check 'Display Module on All Pages'. This ensures that all pages on your website are tracked by Google Analytics. See the red highlighted area:

If for any reason you didn't want all pages on your site to be tracked, you can just copy the module from the home page onto the pages you want tracked.

Scroll further down the page, expand the 'Page Settings' section until you get to the 'Basic Settings' section, select Visibility: None, and make Display Container unchecked. This hides the Google Analytics module from visitors.

Basic Settings section of the DotNetNuke module

Step 5: Scroll down the page further until you reach the 'Analytics Script Generator Settings' section. Under here are the specific settings for your site. Leave this page for a minute and open up a new browser window to go to Google Analytics.

Step 6: Go to Google Analytics (make sure you sign in with your account), and click on 'Edit' next to the website profile you have set up. This will bring up the details of the website profile. Then, find the 'Check Status' link on the page and click that. You should see a box containing some JavaScript. Instead of copying out all the JavaScript as directed, you only need the account number, or the 'UA' number.
Copy the UA number from the Google page into your DotNetNuke site, in the 'Google Analytics Tracking ID' field. You do not need the inverted commas around the number. See the following example:

Analytics Settings section of the DotNetNuke module

Step 7 (optional): If you'd like to restrict tracking so that certain visitors don't show up on the reports, you can select a DotNetNuke security group in the next drop-down list. For example, you don't want administrative editing to show up as page hits on Google Analytics; in that case, you would select 'Administrators' as the group to exclude from tracking.

Step 8: Click on Update. The page should refresh and return to the home page of your site. If you do a "View Source" in your browser and scroll to the bottom of the HTML, just before the closing body tag, you should see the generated script. (Note: if you selected the Administrators group, you'll have to sign out first.)

Step 9: Go back to your Google Analytics account. It should still be on the page showing the tracking code to be added. Click on the 'Finish' button at the bottom. When it returns to the website profile page, you should see a green tick and the words 'Receiving Data'. If you do see this, your site is providing statistics to Analytics and you are all set.

The security group option is there so that any editing and other administrative activity doesn't show up on the Analytics reports. However, if you wanted to separate out usage for, say, registered vs. non-registered users, you can set up two profiles in Analytics and create two Analytics modules in your DotNetNuke site. You can then set one to track unregistered users by excluding registered users, and vice versa.

The other box on the settings page is 'Do not generate tracking script for calls to this host'. This option allows you to suppress calls to Google when the site is accessed using a specific host name.
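To make the "generated script" from Step 8 concrete, here is a sketch of the kind of classic urchin.js tracking snippet involved, built as a string the way a script generator might build it. The function name and exact markup are illustrative assumptions, not the module's actual output:

```javascript
// Illustrative only: builds the kind of urchin.js tracking snippet that
// gets injected before the closing body tag. buildTrackingScript is a
// hypothetical name; the real module's output may differ in detail.
function buildTrackingScript(trackingId, suppressCall) {
  // When the host is excluded, the call is commented out rather than
  // removed, so you can still see that the script generator ran.
  var trackerCall = suppressCall
    ? '// urchinTracker();  // suppressed for this host'
    : 'urchinTracker();';
  return [
    '<script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>',
    '<script type="text/javascript">',
    '_uacct = "' + trackingId + '";',
    trackerCall,
    '</script>'
  ].join('\n');
}
```

For example, buildTrackingScript('UA-123456-1', false) produces markup containing your UA number and a live urchinTracker() call, which is what you should see in "View Source".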
I normally use this so that when running tests on localhost, I'm not bothering the Google server. However, instead of totally hiding the code, all that happens is that the urchinTracker() JavaScript call is commented out, so you can still check that it is working.

If you want to see the code, it is included with the module install. For a high-level intro: all the module does is collect the module settings (UA ID, security groups, and host name), determine from those variables whether tracking should be done for the incoming request, and, if the answer is yes, insert some JavaScript using the RegisterClientScript ASP.NET call. And that's it.

About the only tricky bit is the use of a created BasePage property, a trick I learnt from Scott McCulloch. By creating this property inside a class derived from PortalModuleBase, you can get access to all sorts of goodies like the meta tags, headers, and the base ClientScript object for the page:

    // snip
    // now register the script
    this.BasePage.ClientScript.RegisterStartupScript(GetType(), "analytics", script);
    // end snip

    // expose the base page
    public DotNetNuke.Framework.CDefault BasePage
    {
        get { return (DotNetNuke.Framework.CDefault)this.Page; }
    }
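The decision logic described above (exclude a security group; suppress the tracker call for a named host) can be sketched in plain JavaScript. This is an assumed restatement of the described behaviour, not the module's actual C# code, and the property names are illustrative:

```javascript
// Assumed restatement of the described checks; settings/request shapes
// are illustrative, not the module's real objects.
function shouldEmitScript(settings, request) {
  if (!settings.trackingId) return false;            // no UA number configured
  if (settings.excludedRole &&
      request.roles.indexOf(settings.excludedRole) !== -1) {
    return false;                                    // e.g. 'Administrators'
  }
  return true;
}

function shouldCommentOutCall(settings, request) {
  // Host matched: the script is still emitted, but urchinTracker()
  // gets commented out so you can verify generation on localhost.
  return settings.excludedHost === request.host;
}
```

This mirrors why an administrator sees no tracking after Step 7, while a localhost request still shows the (commented-out) script.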
The Logstash Lines: 2.2 Release, Dynamic Config Reloading

Welcome back to The Logstash Lines! In these weekly posts, we'll share the latest happenings in the world of Logstash and its ecosystem.

This week, we are super excited to report that the dynamic config reloading feature has been merged to master. With this feature, any config changes made to the file will be picked up dynamically, and the internal pipeline restarted to apply them. This means Logstash as a process does not need to be restarted to update configuration. To enable this, run LS with:

bin/logstash -f config_file --auto-reload --reload-interval 2

--reload-interval is how often LS watches the config file for changes, defaulting to 3 seconds. This is also getting backported to the 2.3 release, so look out for that!

2.2.0 and 2.1.2 Release

As part of the release bonanza, LS 2.2.0 and 2.1.2 were released, with the new pipeline architecture being the highlight among other features and bug fixes. See the blog for details. A user reported a bug with the new pipeline which affects a subset of LS filters (metrics, multiline, etc.). These filters use flushing logic to periodically emit events which are in flight. This has been fixed and we're targeting a 2.2.1 release this week.

LS Metrics

The aim of this project is to expose internal metrics of Logstash using an API. Follow this issue for more details.

- Metrics Store: the internal implementation has been changed to use namespaced keys, which allows for better filtering from the API directly (#4492).
- The Metrics Store returns a structured hash which can be directly used by consumers like the API (#4529).

Others:

- Java 8 is the minimum required for Logstash 3.0.0 (#3877).
- JDBC plugin: investigating the use of stored procedures for MySQL databases.
- JSON Lines codec: improved performance by using a buffered tokenizer to split on lines instead of using the line codec (#17).

Many exciting things are in store for Logstash and we'll talk about them in detail at our user conference - Elastic{ON} '16.
Our entire engineering team will be at this conference, so please come by and say hi! We'd love to talk about roadmap, issues and upcoming features. See you in San Francisco!
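The "namespaced keys" change to the Metrics Store mentioned earlier is easiest to see with a tiny sketch. This is an assumed illustration of the idea (namespacing makes prefix filtering cheap for API consumers), not Logstash's actual Ruby implementation:

```javascript
// Illustrative namespaced metrics store: keys like "pipeline/events"
// let an API consumer filter whole subtrees by prefix.
function MetricsStore() {
  this.data = {};
}
MetricsStore.prototype.gauge = function (namespacePath, key, value) {
  var ns = namespacePath.join('/');
  if (!this.data[ns]) this.data[ns] = {};
  this.data[ns][key] = value;
};
MetricsStore.prototype.filter = function (prefixPath) {
  var prefix = prefixPath.join('/');
  var out = {};
  for (var ns in this.data) {
    if (ns === prefix || ns.indexOf(prefix + '/') === 0) out[ns] = this.data[ns];
  }
  return out;
};
```

With this shape, an API endpoint asking for everything under ['pipeline'] gets back a structured hash directly, which is the benefit described in #4492/#4529.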
⚠️ Officially no longer supported. Use at your own risk.

Disclaimer: Google is trying to get rid of Chrome apps on Windows, OS X and Linux in favor of web apps and Chrome extensions. The best use of this editor extension is for targeting ChromeOS, which "will remain supported and maintained for the foreseeable future." Read more about it here:

Since Google is killing Chrome apps for Chrome on Windows, OS X and Linux, and now supports Android and Linux apps on ChromeOS, I don't see the value in maintaining this repo.

Chrome App Builder

Chrome App Builder is an editor extension and API for Unity3D to export games as Google Chrome apps. It is designed to look just like the build settings of other platforms.

Known issues

Bad builds with the pre-built engine option.

How does it work?

Mostly, it is based on the WebGL player of Unity, which means the WebGL module for Unity needs to be installed. Combined with a template and an extension to fix the Chrome-related stuff, it's all good. In addition, it offers an API to access Chrome functions from within Unity.

Installation

Just add the content of the Assets folder into the Assets folder of your project. Choose "Window -> Chrome App Builder" to access the different settings and the build button!

Is it on the Asset Store?

No, but it should be. I guess because of the nature of the constant change of the way the WebGL player works, it is hard to maintain this project for many Unity versions.

Requirements

You need Unity3D installed (5.3 and above recommended; you probably need the latest version, since Chrome App Builder uses private Unity APIs that keep changing all the time, and we try to keep up and update it - so the best way is to test) with the Unity WebGL module. And of course, you need Google Chrome on your computer.
Last commit worked with: Unity 2017.3.0f2

API

Calling Chrome APIs from Unity can be done from the following classes:

- Chrome.App.Browser - to open a web page in a new tab (ref)
- Chrome.App.Identity - to get information about the currently connected user on Chrome (ref)
- Chrome.App.Notification - to display notifications (ref)
- Chrome.App.Power - for power management (ref)
- Chrome.App.Window - for window-related operations (maximize, minimize, fullscreen, etc.) (ref)

The Chrome.Social namespace can be used to use OAuth on various social media (Facebook, LinkedIn, Instagram).

Todos

- Fix the template manager so it works EXACTLY like the internal Unity template manager
- Add a demo scene (one that doesn't suck)
- Implement getAuthToken
- Add more Chrome APIs (native sockets, in-app purchase and so on)
- Add a setting for the Facebook and Instagram client IDs (currently need to be set in the code) and maybe add more social media and such
- Create good documentation
- Create a permission and minimum-Chrome-version detector
- Other fixes are to be made

License

MIT

Development

Pull requests are welcome.
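Since each Chrome.App wrapper ultimately forwards to a chrome.* platform API on the JavaScript side, the bridge can be pictured as a small dispatch table. This is a hypothetical sketch of that design (the handler names and the injected chromeApi object are assumptions, not the extension's actual code):

```javascript
// Hypothetical bridge: maps wrapper calls like Chrome.App.Window.Maximize()
// to the underlying chrome.* platform APIs. chromeApi is injected so the
// table can be exercised outside a real Chrome app.
function makeDispatcher(chromeApi) {
  var handlers = {
    'Window.Maximize':   function () { return chromeApi.app.window.current().maximize(); },
    'Window.Fullscreen': function () { return chromeApi.app.window.current().fullscreen(); },
    'Power.KeepAwake':   function () { return chromeApi.power.requestKeepAwake('display'); }
  };
  return function dispatch(name) {
    if (!handlers[name]) throw new Error('Unknown Chrome API call: ' + name);
    return handlers[name]();
  };
}
```

Injecting chromeApi (rather than reading the global chrome object) also makes this kind of routing table easy to unit-test with a stub.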