Download: AccelStepper.zip (version 1.30)

A stepper motor controlled by a dedicated driver board. A bipolar stepper motor controlled by an H-Bridge circuit. A unipolar stepper motor, controlled by 4 transistors.

setMaxSpeed() — Sets the maximum speed. The default is very slow, so this must be configured. When controlled by setting position, the stepper will accelerate to move at this maximum speed, and decelerate as it reaches the destination.

setAcceleration() — Sets the acceleration to be used, in steps per second per second.

moveTo() — Move the motor to a new absolute position. This returns immediately; actual movement is caused by the run() function.

move() — Move the motor (either positive or negative) relative to its current position. This returns immediately; actual movement is caused by the run() function.

currentPosition() — Read the motor's current absolute position.

distanceToGo() — Read the distance the motor is from its destination position. This can be used to check whether the motor has reached its final position.

run() — Update the motor. This must be called repeatedly to make the motor move.

runToPosition() — Update the motor, and wait for it to reach its destination. This function does not return until the motor is stopped, so it is only useful if no other motors are moving.

setSpeed() — Set the speed, in steps per second. This function returns immediately; actual motion is caused by calling runSpeed().

#include <AccelStepper.h>

//AccelStepper Xaxis(1, 2, 5); // pin 2 = step, pin 5 = direction
//AccelStepper Yaxis(1, 3, 6); // pin 3 = step, pin 6 = direction
//AccelStepper Zaxis(1, 4, 7); // pin 4 = step, pin 7 = direction

AccelStepper Xaxis(1, 3, 6); // pin 3 = step, pin 6 = direction
AccelStepper Yaxis(1, 4, 7); // pin 4 = step, pin 7 = direction
AccelStepper Zaxis(1, 5, 9); // pin 5 = step, pin 9 = direction

void setup() {
  Xaxis.setMaxSpeed(400);
  Yaxis.setMaxSpeed(400);
  Zaxis.setMaxSpeed(400);
  Xaxis.setSpeed(45);
  Yaxis.setSpeed(25);
  Zaxis.setSpeed(80);
}

void loop() {
  Xaxis.runSpeed();
  Yaxis.runSpeed();
  Zaxis.runSpeed();
}
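The accelerate/cruise/decelerate behaviour that the position-mode functions produce can be sketched in a few lines of Python. This is only an illustrative simulation of the idea — limit the speed both by the configured maximum and by the speed from which you can still stop within the remaining distance — and is not the library's actual step-timing algorithm; the function name and parameters are my own.

```python
def simulate_move(target, max_speed, accel, dt=0.01):
    """Simulate a position move with a trapezoidal speed profile.

    target: destination in steps; max_speed: steps/s; accel: steps/s^2.
    Returns the final simulated position.
    """
    pos, vel, t = 0.0, 0.0, 0.0
    while abs(target - pos) > 0.5 and t < 60.0:
        remaining = target - pos
        # Highest speed from which we can still stop within `remaining`:
        # v^2 = 2 * a * d  ->  v = sqrt(2 * a * d)
        stop_speed = (2.0 * accel * abs(remaining)) ** 0.5
        desired = min(max_speed, stop_speed)
        if remaining < 0:
            desired = -desired
        # Change the velocity toward `desired`, limited by the acceleration
        dv = max(-accel * dt, min(accel * dt, desired - vel))
        vel += dv
        pos += vel * dt
        t += dt
    return pos

# The simulated motor ramps up, cruises at max_speed, then ramps down
# onto the target position
print(round(simulate_move(200, max_speed=100, accel=100)))
```

The key design point mirrors the library's description above: velocity is capped by the stopping-distance constraint, so deceleration begins automatically at the right distance from the destination.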
https://www.pjrc.com/teensy/td_libs_AccelStepper.html
Simple Pi Robot aims to put robot control in simple form.

The parts list:
(1) Raspberry Pi (any model; with the recent launch of the Pi Zero and Pi 2, either is a good option — my current build uses a B+).
(2) 40-pin GPIO cable (if using the Pi B+ or Pi 2).
(3) Breadboard (for quickly assembling the various sensors).
(4) A 2WD chassis.
(5) Distance sensor (ultrasonic HC-SR04).
(6) Power bank (for powering the Pi).
(7) Rechargeable AA batteries (preferably 2100 mAh).
(8) Jumper wires, both male and female types, and resistors.
(9) WiFi adapter (EDUP/EDIMAX, for communicating wirelessly with the Pi).
(10) Memory card (4 GB or larger, for running the OS on the Pi).
(11) Motor driver (L298).
(12) Servo motor.
(13) Miscellaneous: cable ties (for tying the jumper wires) and foam tape (for holding the servo or any other sensor where a screw assembly cannot be used).

Step 1: Choosing the motor drive shield

Currently there are very few motor drive shields available for the Raspberry Pi. To name some:
(1) RTK Motor Controller shield
(2) Pololu DRV8835 Dual Motor Driver Kit for Raspberry Pi
(3) Adafruit DC and Stepper Motor HAT for Raspberry Pi
(4) Raspirobot board made by Adafruit, and recently-in-development boards like the ZeroBorg

One of the most common problems in building any robot is minimizing the wiring, and that can be achieved by using shields/HATs. I first tried building my robot with a shield equivalent to the Raspirobot board from the manufacturer ALSROBOT. The kit ships from China, but the problem was that I was unable to increase the motor input voltage: the maximum was less than 5 volts, with a slight imbalance between the voltages. Still, you can check the following link: ALSROBOT – Pi motor driver shield. In this tutorial I have instead used the cheap and versatile L298 motor driver. Apart from being cheap, the advantage of this board is that a regulated 5 V output is available.
I have used the L298 board to drive two DC motors and one servo motor.

Step 2: The L298 motor driver

I have mounted the L298 motor driver at the bottom of my chassis. To connect the DC motors and the servo motor:
(i) DC motor A to output A (+ and -).
(ii) DC motor B to output B (+ and -).
(iii) Servo motor +ve to the +5 V regulated supply of the L298 board, and servo motor -ve to GND of the L298 board.

Keep the enable-pin jumper in place if you don't want speed control; otherwise remove the jumper. The jumper in place ensures a +5 V supply to the enable pin, which in turn drives the motors at rated speed. Now connect four jumper wires to the control inputs, and connect their other ends to the GPIO pins as designated in the next step. For the servo motor, connect one jumper wire for control to the GPIO pin given in the next step.

Step 3: The servo motor

The servo has a 3-wire connection: power, ground, and control. The power source must be constant. I have used a Futaba S3003 servo. The connection is very simple: the "+" and "-" go to the L298 board as outlined earlier. It's important to check the operating voltage of the servo (in my case 4.8–6 V; see the image above). The signal wire, normally white or orange, is connected to a GPIO output. Controlling servo motors on the Raspberry Pi can be tricky, but there is a very powerful library hosted at RPIO.PWM. To install it on the Pi, use the following:

sudo apt-get install python-setuptools
sudo easy_install -U RPIO

To learn more about RPIO.PWM and the DMA technique it uses, please refer to the link.

Step 4: The robot chassis

I have used the Ellipzo robot chassis with 2WD; 2WD chassis are simple and easy to control. The kit includes the DC motors, a pan kit for servo motors, and all the hardware necessary to assemble it. The detailed link is available at: Ellipzo robot chassis kit. Please refer to the video for assembly of the Raspberry Pi along with the L298 motor driver, the breakout board, the camera module, and the power bank.
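RPIO's servo interface (used later in this tutorial's code, e.g. servo.set_servo(27, 1500)) takes a pulse width in microseconds. Mapping a servo angle to that pulse width is simple arithmetic; the helper below is my own illustration, and the 1000–2000 µs range and 0–180° span are assumptions — check your servo's datasheet before using these numbers.

```python
def angle_to_pulse_us(angle, min_us=1000, max_us=2000, span_deg=180.0):
    """Map a servo angle in degrees to a pulse width in microseconds.

    Assumes a linear response from min_us at 0 degrees to max_us at
    span_deg degrees (assumed range); out-of-range angles are clamped.
    """
    angle = max(0.0, min(span_deg, angle))
    return int(min_us + (max_us - min_us) * angle / span_deg)

# Usage with RPIO on the Pi (hardware required, shown as comments):
#   from RPIO import PWM
#   servo = PWM.Servo()
#   servo.set_servo(27, angle_to_pulse_us(90))  # centre position
```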
Step 5: The distance sensor

Integrating the distance sensor is easy; we need just a 1k resistor along with jumper wires. Connect VCC and GND to the Pi's +5 V and GND respectively. The other two pins, TRIG and ECHO, are connected to GPIO pins as in the preceding steps. Remember to connect the resistor as shown in the image. The Python code to measure distance is included in the last step.

Step 6: Raspberry Pi camera – video streaming using VLC

Here I have used the Pi camera module. The setup is quite easy; you can refer to the link: Raspberry Pi camera setup. For video streaming with VLC, begin by installing VLC on the Raspberry Pi:

sudo apt-get install vlc

To start streaming the camera video using RTSP, enter the following:

raspivid -o - -t 0 -n | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264

or, with an explicit width, height, and frame rate:

raspivid -o - -t 0 -n -w 600 -h 400 -fps 12 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264

Now, to view the stream in VLC player, open VLC on your remote system, then open a network stream using rtsp://###.###.###.###:8554/, where ###.###.###.### is the address of your Pi as given by your network router. Now, as the Pi moves around your home, watch the video stream on your remote system.
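The HC-SR04 reports distance by holding its ECHO pin high for as long as the ultrasonic ping took to travel out and back, so converting the pulse duration to centimetres is just the speed of sound (~343 m/s at room temperature) applied to half the round trip. Here is a small sketch of that conversion; the GPIO measurement itself needs the Pi hardware, so it is shown only as comments, and the function name is my own.

```python
SPEED_OF_SOUND_CM_S = 34300  # approximate, at roughly 20 degrees C

def pulse_to_cm(pulse_duration_s):
    """Convert an HC-SR04 ECHO pulse duration (seconds) to distance in cm.

    The pulse covers the round trip, so halve it before multiplying by
    the speed of sound.
    """
    return round(pulse_duration_s * SPEED_OF_SOUND_CM_S / 2.0, 2)

# On the Pi, the duration is measured roughly like this (hardware sketch):
#   GPIO.output(TRIG, True); time.sleep(0.00001); GPIO.output(TRIG, False)
#   while GPIO.input(ECHO) == 0: pulse_start = time.time()
#   while GPIO.input(ECHO) == 1: pulse_end = time.time()
#   distance_cm = pulse_to_cm(pulse_end - pulse_start)

print(pulse_to_cm(0.01))  # a 10 ms echo -> 171.5 cm
```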
Step 7: Raspberry Pi pin-out and Python code

Step 8: Some assembly images

Code:

from RPIO import PWM
import RPi.GPIO as GPIO
import time
from time import sleep
from subprocess import call

GPIO.setmode(GPIO.BCM)
GPIO.setup(19, GPIO.OUT)
GPIO.setup(26, GPIO.OUT)
GPIO.setup(16, GPIO.OUT)
GPIO.setup(20, GPIO.OUT)
GPIO.setup(21, GPIO.IN)
GPIO.setup(7, GPIO.OUT)
GPIO.setup(8, GPIO.OUT)
GPIO.setup(27, GPIO.OUT)
GPIO.setup(9, GPIO.OUT)

TRIG = 18
ECHO = 17

print "controls"
print "1: move forward"
print "2: move reverse"
print "3: stop robot"
print "4: take picture with user defined name"
print "5: move forward with speed control"
print "6: rotate the robot"
print "7: turn the robot"
print "8: servo control"
print "11: welcome to autonomous control"
print "press enter to send command"

def takestillpic(inp):
    print "please enter a name for the photo"
    inp = raw_input()
    call(["raspistill -vf -hf -o " + str(inp) + ".jpg"], shell=True)

def fwd():
    GPIO.output(19, True)
    GPIO.output(26, False)
    GPIO.output(16, True)
    GPIO.output(20, False)

def rev():
    GPIO.output(19, False)
    GPIO.output(26, True)
    GPIO.output(16, False)
    GPIO.output(20, True)

def stop():
    GPIO.output(19, False)
    GPIO.output(26, False)
    GPIO.output(16, False)
    GPIO.output(20, False)

def distmeas():
    print "Distance measurement in progress"
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)
    GPIO.output(TRIG, False)
    print "waiting for sensor to settle"
    time.sleep(0.5)
    # The trigger pulse and echo timing were missing from the original
    # listing; this is the standard HC-SR04 measurement sequence.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    while GPIO.input(ECHO) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:
        pulse_end = time.time()
    distance = (pulse_end - pulse_start) * 17150
    distance = round(distance, 2)
    print "Distance", distance, "cm"
    if distance < 50:
        stop()
        time.sleep(1)
        print "robot stopped as distance is less"
        print "Now robot going backward"
        rev()
        time.sleep(1)
        stop()
        TLr()
        time.sleep(4)
        fwd()
        distmeas()
    else:
        distmeas()

def TL():
    GPIO.output(19, True)
    GPIO.output(26, False)
    GPIO.output(16, False)
    GPIO.output(20, False)

def TLr():
    GPIO.output(19, True)
    GPIO.output(26, False)
    time.sleep(0.75)
    GPIO.output(19, False)
    GPIO.output(26, False)

while True:
    inp = raw_input()
    if inp == "1":
        fwd()
        print "robot moving in fwd direction"
    elif inp == "2":
        rev()
        print "robot moving in rev direction"
    elif inp == "3":
        stop()
        print "robot stopped"
    elif inp == "4":
        takestillpic(inp)
        print "photo taken"
    elif inp == "5":
        GPIO.output(7, False)
        GPIO.output(8, False)
    elif inp == "6":
        TL()
    elif inp == "7":
        TLr()
    elif inp == "8":
        servo = PWM.Servo()
        servo.set_servo(27, 1000)
        time.sleep(2)
        servo.stop_servo(27)
    elif inp == "9":
        servo = PWM.Servo()
        servo.set_servo(27, 1500)
        time.sleep(2)
        servo.stop_servo(27)
    elif inp == "10":
        servo = PWM.Servo()
        servo.set_servo(27, 2000)
        time.sleep(2)
        servo.stop_servo(27)
    elif inp == "11":
        fwd()
        distmeas()

GPIO.cleanup()

Source: Simple Pi Robot
https://projects-raspberry.com/simple-pi-robot/
Jonathan E, Mar 24 (Pre-Release Testers, Xojo Pro; Las Vegas, NV)

I have been having a problem with XQL returning nothing when that should not be the case. I thought that it might be related to the fact that the nodes it should return have a custom XML namespace (xmlns="...") defined. I missed the documentation's instructions on using namespaces at first and spent an enormous amount of time on this issue. Here is an example of how to structure an XQL query in Xojo with a namespace definition:

// Given kXML is a valid XML document with a node named "foo" with xmlns=""
Dim theDocument As XmlDocument
theDocument = New XmlDocument( kXML )

Dim name As String = "foo"

Dim Map() As String
Map.Append( "ns1" )
Map.Append( "" )

Dim nodes As XmlNodeList = theDocument.Xql( "//ns1:" + name, Map )

- The "//" at the beginning of the query means "at any level in the document."
- To include the namespace in your query, use the key (in this case "ns1", but set it to whatever you want) followed by a colon before the node name.
- Don't forget to pass the array of namespaces containing any that you reference in your query when you call XmlDocument.Xql, as the value for the optional Map() parameter.
https://forum.xojo.com/59108-xql-and-namespaces/0
I'm using Jekyll for my blog, and I'd like the ability to use unique CSS styling in particular posts. Right now, I'm specifying a CSS file in the YAML front matter like so:

style: artdirection.css

{% if page.style %}
  <link rel="stylesheet" href="{{ page.style }}">
{% endif %}

I'm pretty sure this would work:

---
title: xxx
style: |
  /* You can put any CSS here */
  /* Just indent it with two spaces */
  /* And don't forget the | after "style:" */
  h2 {
    color: red;
  }
---

Your markdown/textile goes here. h2s will be red.

And then in your layout:

<style type="text/css">
  {{ page.style }}
</style>

And that should be it.
https://codedump.io/share/QdZfjIdXAbTl/1/can-you-use-jekyll-page-variables-in-a-layout
How to set mouse position?
Scripting in Blender with Python, and working on the API

I use Blender 2.64a and I need to set the mouse position, but I don't know how. Can you help me, please? (Sorry for my English. I'm Slovak.)

Re: How to set mouse position?

Not sure if Blender has a built-in way to do this, but this example works on Windows only:

Code: Select all

import bpy
from ctypes import *

# point structure definition
class POINT(Structure):
    _fields_ = [("x", c_ulong), ("y", c_ulong)]

# Gets cursor position (Windows only)
def getCursorPos():
    pt = POINT()
    windll.user32.GetCursorPos(byref(pt))
    return pt.x, pt.y

# Sets cursor position (Windows only)
def setCursorPos(x, y):
    windll.user32.SetCursorPos(x, y)

# example
x, y = getCursorPos()
setCursorPos(x + 100, y)
https://www.blender.org/forum/viewtopic.php?t=25529&view=next
Understanding Java RMI Internals

Source code of CalcClient.java:

import java.rmi.Naming;

public class CalcClient {
    public static void main(String[] args) throws Exception {
        Calc c = (Calc) Naming.lookup("rmi://remotehost:1099/calc");
        System.out.println(c.add(3, 4));
    }
}

Note: You have to replace remotehost with the host name or IP address of the server machine.

These are the classes. Compile all the java files to get the class files. Now, use the RMI compiler to generate the stub. Use the -keep option if you need the source file of the stub.

C:\test>rmic -keep CalcImpl

This will create two files: CalcImpl_Stub.class and CalcImpl_Stub.java.

Source code for CalcImpl_Stub.java:

public final class CalcImpl_Stub
    extends java.rmi.server.RemoteStub
    implements Calc, java.rmi.Remote
{
    private static final long serialVersionUID = 2;

    private static java.lang.reflect.Method $method_add_0;

    static {
        try {
            $method_add_0 = Calc.class.getMethod("add",
                new java.lang.Class[] {int.class, int.class});
        } catch (java.lang.NoSuchMethodException e) {
            throw new java.lang.NoSuchMethodError(
                "stub class initialization failed");
        }
    }

    // constructors
    public CalcImpl_Stub(java.rmi.server.RemoteRef ref) {
        super(ref);
    }

    // methods from remote interfaces

    // implementation of add(int, int)
    public int add(int $param_int_1, int $param_int_2)
        throws java.rmi.RemoteException
    {
        try {
            Object $result = ref.invoke(this, $method_add_0,
                new java.lang.Object[] {new java.lang.Integer($param_int_1),
                                        new java.lang.Integer($param_int_2)},
                -7734458262622125146L);
            return ((java.lang.Integer) $result).intValue();
        } catch (java.lang.RuntimeException e) {
            throw e;
        } catch (java.rmi.RemoteException e) {
            throw e;
        } catch (java.lang.Exception e) {
            throw new java.rmi.UnexpectedException("undeclared checked exception", e);
        }
    }
}

Now, to run this code on the server machine, we need two consoles.
On console 1, run:

C:\test>rmiregistry

On console 2, run:

C:\test>java CalcImpl

On the client machine, run the following in a console:

C:\test>java CalcClient

You will get an output of 7 on your screen.

What Is Happening Here

I'll show you what is really happening behind the scenes.

- First, RMIRegistry is run on the server machine. RMIRegistry is itself a remote object. The point to note here: all remote objects (in other words, all objects that extend UnicastRemoteObject) export themselves to an arbitrary port on the server machine. Because RMIRegistry is also a remote object, it exports itself to a port. The difference is that this port is a well-known port — by default, 1099. The term well-known means that the port is known to all the clients.
- Now, the server is run on the server machine. In UnicastRemoteObject's constructor, it exports itself to an anonymous port on the server machine. This port is unknown to the clients; only the server knows about this port.
- When you call Naming.rebind(), you pass a reference to CalcImpl (here, c) as the second parameter to the Naming class. The Naming class constructs a stub object and passes the stub object (not the actual object) to the Registry remote object for binding. How is the stub object created? Here it is:
  - The Naming class uses the getClass() method to get the name of the class (here, CalcImpl).
  - Then, it appends _Stub to the class name to get the stub's name (here, CalcImpl_Stub).
  - It loads the CalcImpl_Stub class into the JVM.
  - It gets a RemoteRef object from c.
  - It uses this RemoteRef object to construct the stub.
- It passes this stub object to RMIRegistry for binding, along with the public name, as (calc, stub).
- RMIRegistry stores this public name and stub object in a hashmap internally.
- When the client executes Naming.lookup(), it passes the public name as the parameter. RMIRegistry (which is itself a remote object) returns the stored stub object to the client.
Now, the client has a stub object that knows the server's host name and the port on which the server listens. The client can invoke the stub's methods to call the remote object's methods.

RemoteRef ref = c.getRef();

It is this ref that encapsulates all the details about the server, such as the server's host name, listening port, address, and so on.

CalcImpl_Stub stub = new CalcImpl_Stub(ref);

If you look at CalcImpl_Stub.java, you will find that the stub's constructor takes a RemoteRef reference as its parameter.
https://www.developer.com/java/other/article.php/10936_3455311_2/Understanding-Java-RMI-Internals.htm
Part Two: Practical Use Cases of the Tutum Stream API

Part two of this two-part tutorial will show you how to use the Tutum Stream API for some interesting use cases, such as integrations with Slack, Pagerduty, and others. It shows how you can retrieve additional information from the Tutum API to provide richer detail to your monitoring tools. Finally, we'll see how to deploy a service on Tutum that stays connected to the Tutum Stream API to keep you apprised of changes to your deployments.

To follow along with this tutorial, clone this GitHub repo. It has a few sample client files as well as Python modules with code to interact with Pagerduty, Slack, and Tutum. It also has a Dockerfile that you can use to build a Docker image to deploy as a service on Tutum.

The Four WebSocket Events

As noted in the previous tutorial, there are four event handlers you need to set up with your WebSocket connection:

- on_open: for when the connection to the WebSocket is opened;
- on_close: for when the connection to the WebSocket is closed;
- on_message: for when your WebSocket client receives a message;
- on_error: for when you receive an error.

These four handlers will be the key to interacting with the Tutum Stream API.

First Steps with the Tutum Stream Client

To get started with the Tutum Stream API, let's just run the basic example client that Tutum showed in its initial blog post on the Tutum Stream API. If you cloned my GitHub repo, it is listed as tutum-sample.py. You'll need to set two environment variables in your terminal before running it:

export TUTUM_TOKEN=*your_Tutum_token*
export TUTUM_USERNAME=*your_Tutum_username*

These will be used to authenticate your WebSocket client with the Tutum Stream server.
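Before wiring anything to a live socket, it helps to see the shape of the dispatch logic that an on_message handler uses. The sketch below is my own illustration, with no network dependency: it parses a Stream message frame and decides what kind of event arrived. The function names are mine, and the sample frame is a simplified stand-in for what the Stream API actually sends.

```python
import json

def classify_message(raw_message):
    """Return the Stream event type ('auth', 'container', ...) or None."""
    msg = json.loads(raw_message)
    return msg.get("type")

def on_message(ws, message):
    # Dispatch on the event type, in the style of Tutum's sample client
    kind = classify_message(message)
    if kind == "auth":
        print("Auth completed")
    elif kind:
        print("Received a %s event" % kind)

# A simplified 'auth' frame, like the one sent after connecting
on_message(None, '{"type": "auth"}')  # prints "Auth completed"
```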
Once you've set these, run:

python tutum-sample.py

You should see the following output in your terminal:

Connected
Auth completed

If you look at the source code for tutum-sample.py, you will see that "Connected" is printed when the on_open event handler is triggered. The "Auth completed" statement is printed when the on_message event handler is triggered and the message type is "auth".

If you want to play around further, keep the connection open in your terminal and use your web browser to navigate to Tutum. Redeploy one of your current services or launch a new service (the tutum/hello-world service is an easy sample). Your terminal should print a number of messages relating to the creation of a new service and other updates.

Monitoring your Tutum Stream Client with Pagerduty

By the end of this tutorial, we'll have a service deployed on Tutum that is connected to the Tutum Stream API and handling events as we desire. However, we'll want to make sure our client is properly connected to the Stream API at all times. After all, you can't send messages regarding your events if your client is down.

To monitor our Stream client, we'll use Pagerduty. Pagerduty allows you to get notifications if portions of your infrastructure go down; you can receive emails, text messages, or notifications through mobile apps. You can trigger notifications using the Pagerduty Integration API. In the repo for this project, I've written a function to post an event to Pagerduty. Look at the pagerduty_event() function in integrations/pagerduty.py.
The important part of the function is as follows:

def pagerduty_event(event_type="trigger", incident_key=None, description=None,
                    client=None, client_url=None, service_key=PAGERDUTY_KEY):
    if not service_key:
        raise Exception("Please provide a Pagerduty Service Key")
    if not description:
        description = "Tutum Stream Issue"
    data = {
        "service_key": service_key,
        "incident_key": incident_key,
        "event_type": event_type,
        "description": description,
        "client": client,
        "client_url": client_url
    }
    headers = {'Content-type': 'application/json'}
    r = requests.post(PAGERDUTY_URL, headers=headers, data=json.dumps(data))

The function assembles some JSON and posts it to the Pagerduty endpoint. By default, it sends a 'trigger' event to Pagerduty, which sends an alert to the proper person. Note that you can also send a 'resolve' event to Pagerduty, which indicates that a previously triggered issue has been resolved.

Let's see how this works in action. In the git repo, check out pagerduty-client.py. It is the same as the tutum-sample.py script, with two changes. First, the on_close handler has been changed:

def on_close(ws):
    pagerduty_event(event_type='trigger',
                    incident_key='tutum-stream',
                    description='Tutum Stream connection closed.')
    print "### closed ###"

Once the WebSocket closes its connection, it sends a trigger event to Pagerduty. This way, you'll be alerted when you are no longer connected to the Tutum Stream API. Similarly, look at the on_open handler:

def on_open(ws):
    pagerduty_event(event_type='resolve',
                    incident_key='tutum-stream',
                    description='Tutum Stream connection open.')
    print "Connected"

This handler mirrors the on_close handler, as it sends a resolve event to Pagerduty. If you previously sent a trigger event signaling that your Stream API client is down, this will send a message that the problem has been fixed. You could use this Pagerduty integration for all kinds of alerts.
Another common use case would be to call the pagerduty_event function from the on_message handler when certain Services, Stacks, or Nodes are deleted. We'll investigate the on_message handler more in the next section.

As a final note, if you want to use the pagerduty_event() function, you'll need to set your PAGERDUTY_KEY as an environment variable. In your terminal, run:

export PAGERDUTY_KEY=*your_Pagerduty_key*

Posting Update Messages to Slack

Monitoring errors is responsible and all, but sometimes it's more fun to see the good things your team is doing. This section will show you how to post messages to Slack whenever you create a new Container on Tutum.

Again, let's look at the source code. Check out integrations/slack.py. There are two functions:

def post_slack(text=None, slack_url=SLACK_URL):
    if not SLACK_URL:
        raise Exception('Please provide a Slack URL')
    if not text:
        text = "You received a message from Tutum Stream!"
    data = {"text": text}
    r = requests.post(slack_url, data=json.dumps(data))
    return r

def generic_slack(message):
    msg_as_JSON = json.loads(message)
    text = ("Your {} was {}d on Tutum!\n"
            "Check {} to see more "
            "details.".format(msg_as_JSON.get('type'),
                              msg_as_JSON.get('action'),
                              msg_as_JSON.get('resource_uri')))
    post_slack(text=text)

The first function, post_slack(), is an implementation of the method to post messages to Slack using the Incoming Webhooks integration. To use this function, you'll need to register for a Webhook URL to send POST requests to. The second function, generic_slack(), is a simple wrapper around post_slack(): it takes a message from the Tutum Stream API and posts information about that update to your Slack channel.

Let's put this into action. Check out client.py. This will be our actual client script that we end up deploying on Tutum.
In particular, check out the last two lines of the following:

def on_message(ws, message):
    msg_as_JSON = json.loads(message)
    type = msg_as_JSON.get("type")
    if type:
        if type == "auth":
            print("Auth completed")
        elif type == "container":
            generic_slack(message)

This checks whether the type of event from the Tutum Stream is a "container" event. If so, it posts a message to Slack using the generic_slack function.

Test it out yourself. In your terminal, set your SLACK_URL environment variable, then run the client:

export SLACK_URL=*your_Slack_URL*
python client.py

Navigate to the Tutum Dashboard with your browser. Deploy a new service or redeploy an existing service. In your Slack channel, you should see a message like the following:

Your container was created on Tutum! Check /api/v1/container/df3eb045-8bb9-46c1-ac59-a67ce0f4d875/ to see more details.

Congratulations! You've sent your first Tutum Stream event to Slack!

Advanced Message Handling with Tutum Stream

The example above is nice, but it'd be better to provide a little more information about Tutum events. As discussed in part one of this tutorial, a message from the Tutum Stream API includes a resource_uri, plus the resource_uri of any parents of the event object. We can use these resource_uri's to add additional information to our notifications.

The source code in integrations/utilities.py includes a helper function called get_resource(). This function takes a resource_uri and returns the text response from the Tutum API for that resource. Look at the on_message handler in the client.py script.
It has the following code that is executed when the event type is "service":

elif type == "service":
    parents = msg_as_JSON.get("parents")
    if parents:
        stack = get_resource(parents[0])
        stack_as_JSON = json.loads(stack)
        text = ("A Service on Tutum was {}d.\n"
                "It belonged to the {} Stack.\n"
                "The Stack state is:"
                " {}".format(msg_as_JSON.get('action'),
                             stack_as_JSON.get('name'),
                             stack_as_JSON.get('state')))
        post_slack(text=text)

This handler grabs the information about the Service's parent Stack. It then posts a message to Slack about the status of the Stack with a recently deployed Service. This is a simple example, but you can use the resource_uri of both the object and its parents to do some powerful stuff. For the ambitious, you could parse the resource_uri of each and every event that is triggered and post the information to ElasticSearch for later analysis.

Deploying your Tutum Stream Client as a Tutum Service

So far, we've been running the Tutum Stream WebSockets client in our local terminal. However, we'll want something that's always running to make sure we're always tracking our Tutum events. To do this, we'll deploy the WebSockets client as a service on Tutum in a Docker container.

In the git repo, I've provided a Dockerfile to get you started. It is based on the Alpine Linux image from the Glider Labs team. Alpine Linux is a minimalist Linux distribution with a pretty good package repository. The Dockerfile installs Python and Pip, pip installs the required Python packages, and adds the code from your repo.

To get the image on Tutum, run:

docker build -t tutum.co//stream-client .
docker push tutum.co//stream-client

In your Tutum browser, set up a Stack file with Stream:

stream:
  image: 'tutum.co//tutum-stream:latest'
  environment:
    - PAGERDUTY_KEY=*your_Pagerduty_key*
    - SLACK_URL=*your_Slack_URL*
  roles:
    - global

We're giving the "global" API role to the service so that it has full access to the Tutum API. You can read about API roles here.
Hit "Create and Deploy" and you're done! You'll be receiving Slack messages and Pagerduty alerts with updates to your Tutum infrastructure!

Conclusion

This tutorial has walked through integrating the Tutum Stream API into your monitoring and communications systems. We learned how to send Pagerduty triggers and Slack messages on specified Tutum events, as well as how to deploy a minimal Docker image that monitors your Tutum Stream. Please let me know of any questions or comments on this tutorial in the comment box below. Interested in adding additional integrations? Feel free to fork the repo for this tutorial and create a pull request with your new tool!
https://blog.tutum.co/2015/05/12/using-the-new-tutum-stream-api-part-2-pagerduty-and-slack-notifications/
Wait, it looks like we've just drawn a moving picture I don't know if you found it. It seems like we've just drawn a moving picture, just a picture!!!That's all the code. If there are 100 10000 + pieces?So we're so tall? Don't worry, pygame offers us a solution--the elves and the elves Spirit?Elf group?Blue elves?Pikachu? Spirit The object that displays images in game development is the genie - Don't worry. Let's take a look. It's really a class. Here's its class diagram - Effect: - pygame.sprite.Sprite - A ** object that stores image data images and location rect s - pygame.sprite.Group, which stores objects previously created by pygame.sprite.Sprite and draws them uniformly in the main window program - Analyse the composition of this class - The elves need two important attributes - Image to be displayed, rect image to be displayed on the screen - The default update() method does nothing, and subclasses can override it to update the elf location each time the screen is refreshed - Be careful of the pits! - pygame.sprite.Sprite does not provide image and rect attributes - Programmers are required to derive subclasses from pygame.sprite.Sprite - And set the image and rect properties in the initialization method of the subclass sprite groups - A group of elves can contain multiple elf objects - Call the update() method of the elf group object - You can automatically call the update() method for each elf in the group - Call the draw (screen object) method of the elf group object - You can draw the image of each elf in the group at the rect position - Generally speaking, unified command and unified action, but each item has its own independent movement method and attributes Group(*sprites) -> Group Instance code: # 1. New `plane_sprites.py'file # 2. Definition `GameSprite` inherits from `pygame.sprite.Sprite` import pygame # 3. 
Written in parentheses means inheriting the parent class class GameSprite(pygame.sprite.Sprite): """The game genie in the airplane battle, according to design UML Write code""" #Note that to override the init method here, we pass in the init initialization method (row function) def __init__(self, image_name, speed=1): # Call the initialization method of the parent class, and be sure to call the super() object to invoke the initialization inint method of the parent class when our parent is not an object base class super().__init__() # Define the properties of the object, which record the elf's position speed and movement # Load Image self.image = pygame.image.load(image_name) # Set Size self.rect = self.image.get_rect() # Recording speed self.speed = speed def update(self): # Move vertically on the screen self.rect.y += self.speed How do we use them in our previous host program? What?You don't know what our main program is?Okay, I forgot to tell you that our main program is the one we wrote to draw windows before. It's actually very simple, you just need to import the packages in the main game program How do I finish adding enemy sprites, make them move, and put them in the game loop? - Be careful: Let's first clarify the division of work between the elves and the elves 1. Elves *Encapsulation**image**,**Location rect** and**speed** *Provide `update()` method to **update location rect** according to game requirements 2. Elf Group *Contains ** Multiple ** ** Elf Objects ** * `update` method to have all the elves in the elf group call the `update` method to update location * `draw(screen)` method, draws all the elves in the elf group on `screen` - Complete code samples import pygame # You need to import forms and import to use inprom imports. To use forms, use the tools provided by the module directly. 
# Both import forms make the module's tools available
from plane_sprites import *

# Initialize the game
pygame.init()

# Create the game window, 480 x 700
screen = pygame.display.set_mode((480, 700))

# Draw the background image
bg = pygame.image.load("./images/background.png")
screen.blit(bg, (0, 0))
# pygame.display.update()

# Draw the hero's plane
hero = pygame.image.load("./images/me1.png")
screen.blit(hero, (150, 300))

# The update method can be called once after all drawing is done
pygame.display.update()

# Create the clock object
clock = pygame.time.Clock()

# 1. Define a rect to record the initial position of the plane
hero_rect = pygame.Rect(150, 300, 102, 126)

# Start our business logic
# Create enemy sprites
enemy = GameSprite("./images/enemy1.png")
enemy1 = GameSprite("./images/enemy1.png", 2)

# Create the enemy sprite group. We can pass multiple sprites at once, and with
# this sprite group we can call the group methods to draw all the images in one call.
enemy_group = pygame.sprite.Group(enemy, enemy1)

# # The two sprite-group methods:
# # update - has all sprites in the group update their positions
# enemy_group.update()
# # draw - draws all sprites in the group onto screen
# enemy_group.draw(screen)
# Calling them just once would only update and draw once, so instead we call
# them inside the game loop below.

# Game loop -> the game officially starts!
while True:
    # Limit how often the code inside the loop executes (60 frames per second)
    clock.tick(60)

    # Listen for events
    for event in pygame.event.get():
        # Determine whether the event type is a quit event
        if event.type == pygame.QUIT:
            print("Game Exit...")
            # quit() unloads all pygame modules
            pygame.quit()
            # exit() directly terminates the currently executing program
            exit()

    # 2. Modify the position of the plane
    hero_rect.y -= 1
    # Wrap the plane around when it leaves the top of the screen
    if hero_rect.y <= 0:
        hero_rect.y = 700

    # 3. Call blit to draw the images
    screen.blit(bg, (0, 0))
    screen.blit(hero, hero_rect)

    # Call the two sprite-group methods:
    enemy_group.update()      # update every sprite's position
    enemy_group.draw(screen)  # draw every sprite onto screen

    # 4. Call the update method to refresh the display
    pygame.display.update()

pygame.quit()
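The sprite/group division of labor described above does not depend on pygame's internals. Here is a standard-library-only sketch of the same contract (hypothetical classes, for illustration only): the group issues unified update/draw commands, while each sprite keeps its own independent state.

```python
# A minimal, pygame-free sketch of the sprite/group pattern.
class Sprite:
    """Base class: subclasses set their own state and may override update()."""
    def update(self):
        pass  # the default does nothing, just like pygame.sprite.Sprite


class FallingSprite(Sprite):
    def __init__(self, y=0, speed=1):
        self.y = y
        self.speed = speed

    def update(self):
        # Move "down the screen" by this sprite's own speed
        self.y += self.speed


class Group:
    def __init__(self, *sprites):
        self.sprites = list(sprites)

    def update(self):
        # Unified command: every sprite updates itself
        for sprite in self.sprites:
            sprite.update()

    def draw(self):
        # Stand-in for draw(screen): report each sprite's position
        return [sprite.y for sprite in self.sprites]


enemy = FallingSprite(speed=1)
enemy1 = FallingSprite(speed=2)
group = Group(enemy, enemy1)

for _ in range(3):  # three frames of the "game loop"
    group.update()

print(group.draw())  # -> [3, 6]
```

One call to `group.update()` moved both sprites by their own speeds; after three frames the slow sprite is at 3 and the fast one at 6, which is exactly what `enemy_group.update()` does for the pygame sprites above.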
https://programmer.group/python-project-game-3-wizard-blue-elves.html
guides-good_tests

This document is an introduction to writing good autopilot tests. This should be treated as additional material on top of all the things you'd normally do to write good code. Put another way: test code follows all the same rules as production code. It must follow the coding standards and be of a professional quality. Several points in this document are written with respect to the unity autopilot test suite. This is incidental, and doesn't mean that these points do not apply to other test suites!

Write Expressive Tests

Unit tests are often used as a reference for how your public API should be used. Functional (autopilot) tests are no different: they can be used to figure out how your application should work from a functional standpoint. However, this only works if your tests are written in a clear, concise, and, most importantly, expressive style. There are many things you can do to make your tests easier to read:

Pick Good Test Case Class Names

Pick a name that encapsulates all the tests in the class, but is as specific as possible. If necessary, break your tests into several classes so your class names can be more specific. This is important because when a test fails, the test id is the primary means of identifying the failure. The more descriptive the test id is, the easier it is to find the fault and fix the test.

Pick Good Test Case Method Names

As with class names, picking good method names makes your test id more descriptive. We recommend writing long, descriptive test method names, and following PEP 257 when writing all docstrings.

Test One Thing Only

Tests should test one thing, and one thing only. Since we're not writing unit tests, it's fine to have more than one assert statement in a test, but the test should test one feature only. How do you tell if you're testing more than one thing? There are two primary ways: Can you describe the test in a single sentence without using words like 'and', 'also', etc.?
If not, you should consider splitting your test into multiple smaller tests.

Tests usually follow a simple pattern:

a. Set up the test environment.
b. Perform some action.
c. Test things with assert statements.

If you feel you're repeating steps 'b' and 'c', you're likely testing more than one thing, and should consider splitting your tests up.

Good Example: (code omitted). It's fine not to re-verify behaviors here, as long as they're covered by an autopilot test somewhere else; that's why we don't need to verify that the dash really did open when we called self.dash.ensure_visible().

Fail Well

Make sure your tests test what they're supposed to. It's very easy to write a test that passes. It's much more difficult to write a test that only passes when the feature it's testing is working correctly, and fails otherwise. There are two main ways to achieve this:

- Write the test first. This is easy to do if you're trying to fix a bug in Unity. In fact, having an autopilot test that reproduces the bug will help you fix the bug as well. Once you think you have fixed the bug, make sure the autopilot test you wrote now passes. The general workflow will be: branch unity trunk; write an autopilot test that reproduces the bug; commit; write code that fixes the bug; verify that the test now passes; commit; push; merge; celebrate!
- If you're writing tests for a bug-fix that's already been written but is waiting on tests before it can be merged, the workflow is similar but slightly different: branch unity trunk; write an autopilot test that reproduces the bug; commit; merge the code that supposedly fixes the bug; verify that the test now passes; commit; push; supersede the original merge proposal with your branch; celebrate!

Think about design

Much in the same way you might choose a functional or object-oriented paradigm for a piece of code, a test suite can benefit from choosing a good design pattern. One such design pattern is the page object model.
The page object model can reduce test case complexity and allows the test suite to grow and easily adapt to changes within the underlying application. Check out the Page Object Pattern.

Test Length

Tests should be short: as short as possible while maintaining readability. Longer tests are harder to read, harder to understand, and harder to debug. Long tests are often symptomatic of several possible problems:

- Your test requires complicated setup that should be encapsulated in a method or function.
- Your test is actually several tests all jammed into one large test.

Bad Example: (code omitted). The improved version removed assertions that were duplicated from other tests. For example, there's already an autopilot test that ensures that new applications have their title displayed on the panel. With a bit of refactoring, this test could be even smaller (the launcher proxy classes could have a method to click an icon given a desktop id), but it is now perfectly readable and understandable within a few seconds of reading.

Good docstrings

Test docstrings are used to communicate to other developers what the test is supposed to be testing. Test docstrings must conform to PEP 257. (Docstring examples omitted.) The difference between these two is subtle, but important.

Test Readability

The most important attribute of a test is that it is correct: it must test what it's supposed to test. The second most important attribute is that it is readable. It should be possible for someone other than the test author to examine a test on its own without any undue hardship. There are several things you can do to improve test readability:

- Don't abuse the setUp() method. It's tempting to put code that's common to every test in a class into the setUp method, but it leads to tests that are not readable by themselves.
For example, this test uses the setUp method to start the launcher switcher, and tearDown to cancel it:

Bad Example:

def test_launcher_switcher_next(self):
    """Moving to the next launcher item while switcher is activated must work."""
    self.launcher_instance.switcher_next()
    self.assertThat(self.launcher.key_nav_selection, Eventually(GreaterThan(0)))

This leads to a shorter test (which we've already said is a good thing), but the test itself is incomplete. Without scrolling up to the setUp and tearDown methods, it's hard to tell how the launcher switcher is started. The situation gets even worse when test classes derive from each other, since the code that starts the launcher switcher may not even be in the same class! A much better solution in this example is to initiate the switcher explicitly, and use addCleanup() to cancel it when the test ends, like this:

Good Example: (code omitted)

The code is longer, but it's still very readable. It also follows the setup/action/test convention discussed above. Appropriate uses of the setUp() method include:

- Initialising test class member variables.
- Setting unity options that are required for the test. For example, many of the switcher autopilot tests set a unity option to prevent the switcher from going into details mode after a timeout. This isn't part of the test, but makes the test easier to write.
- Setting unity log levels. The unity log is captured after each test. Some tests may adjust the verbosity of different parts of the Unity logging tree.

- Put common setup code into well-named methods. If the "setup" phase of a test is more than a few lines long, it makes sense to put that code into its own method. Pay particular attention to the name of the method you use. You need to make sure that the method name is explicit enough to keep the test readable. Here's an example of a test that doesn't do this: Bad Example: (code omitted). After refactoring (Good Example omitted), the test is shorter, and the launch_test_apps method can be re-used elsewhere.
Importantly - even though I’ve hidden the implementation of the launch_test_apps method, the test still makes sense. - Hide complicated assertions behind custom assertXXX methods or custom matchers. If you find that you frequently need to use a complicated assertion pattern, it may make sense to either: Write a custom matcher. As long as you follow the protocol laid down by the testtools.matchers.Matcher class, you can use a hand-written Matcher just like you would use an ordinary one. Matchers should be written in the autopilot.matchers module if they’re likely to be reusable outside of a single test, or as local classes if they’re specific to one test. Write custom assertion methods. For example:def test_multi_key_copyright(self): """Pressing the sequences 'Multi_key' + 'c' + 'o' must produce '?'.""" self.dash.reveal_application_lens() self.keyboard.press_and_release('Multi_key') self.keyboard.type("oc") self.assertSearchText("?") This test uses a custom method named assertSearchText that hides the complexity involved in getting the dash search text and comparing it to the given parameter. Prefer wait_for and Eventually to sleep Early autopilot tests relied on extensive use of the python sleep call to halt tests long enough for unity to change its state before the test continued. Previously, an autopilot test might have looked like this: Bad Example: def test_alt_f4_close_dash(self): """Dash must close on alt+F4.""" self.dash.ensure_visible() sleep(2) self.keyboard.press_and_release("Alt+F4") sleep(2) self.assertThat(self.dash.visible, Equals(False)) This test uses two sleep calls. The first makes sure the dash has had time to open before the test continues, and the second makes sure that the dash has had time to respond to our key presses before we start testing things. - There are several issues with this approach: - On slow machines (like a jenkins instance running on a virtual machine), we may not be sleeping long enough. 
This can lead to tests failing on jenkins that pass on developers' machines.

- On fast machines, we may be sleeping too long. This won't cause the test to fail, but it does make running the test suite take longer than it has to.

There are two solutions to this problem:

In Tests

Tests should use the Eventually matcher. This can be imported as follows:

from autopilot.matchers import Eventually

The Eventually matcher works on all attributes in a proxy class that derives from UnityIntrospectableObject (at the time of writing that is almost all the autopilot unity proxy classes). The Eventually matcher takes a single argument, which is another testtools matcher instance. For example, the bad assertion from the example above could be rewritten like so: (code omitted)

In Proxy Classes

Proxy classes are not test cases, and do not have access to the self.assertThat method. However, we want proxy class methods to block until unity has had time to process the commands given. For example, the ensure_visible method on the Dash controller should block until the dash really is visible. To achieve this goal, all attributes on unity proxy classes have been patched with a wait_for method that takes a testtools matcher (just like Eventually; in fact, the Eventually matcher just calls wait_for under the hood). For example, previously the ensure_visible method on the Dash controller might have looked like this: Bad Example: (code omitted)

Scenarios

Autopilot uses the python-testscenarios package to run a test multiple times in different scenarios. A good example of scenarios in use is the launcher keyboard navigation tests: each test is run once with the launcher hide mode set to 'always show launcher', and again with it set to 'autohide launcher'. This allows test authors to write their test once and have it execute in multiple environments. In order to use test scenarios, the test author must create a list of scenarios and assign them to the test case's scenarios class attribute.
The autopilot ibus test case classes use scenarios in a very simple fashion: Good Example: (code omitted)

This is a simplified version of the IBus tests. In this case, the test_simple_input_dash test will be called 5 times. Each time, the self.input and self.result attributes will be set to the values in the scenario list. The first part of the scenario tuple is the scenario name; this is appended to the test id, and can be whatever you want.

Important: it is important to notice that the test does not change its behavior depending on the scenario it is run under. Exactly the same steps are taken; the only difference in this case is what gets typed on the keyboard, and what result is expected. Scenarios are applied before the test's setUp or tearDown methods are called, so it's safe (and indeed encouraged) to set up the test environment based on these attributes. For example, you may wish to set certain unity options for the duration of the test based on a scenario parameter.

Multiplying Scenarios

Scenarios are very helpful, but only represent a single dimension of parameters. For example, consider the launcher keyboard navigation tests. We may want several different scenarios to come into play:

- A scenario that controls whether the launcher is set to 'autohide' or 'always visible'.
- A scenario that controls which monitor the test is run on (in case we have multiple monitors configured).

We can generate two separate scenario lists to represent these two scenario axes, and then produce the dot-product of the two lists like this: (code omitted)

Now we have a problem: some of the generated scenarios won't make any sense. For example, one such scenario will be (autohide, monitor_1, launcher on primary monitor only). If monitor 0 is the primary monitor, this will leave us running launcher tests on a monitor that doesn't contain a launcher! There are two ways to get around this problem, and they both lead to terrible tests:

Detect these situations and skip the test.
This is bad for several reasons: first, skipped tests should be viewed with the same level of suspicion as commented-out code. Test skips should only be used in exceptional circumstances. A test skip in the test results is just as serious as a test failure.

Detect the situation in the test, and run different code using an if statement. For example, we might decide to do this:

def test_something(self):
    # ... setup code here ...
    if self.monitor_mode == 1 and self.launcher_monitor == 1:
        # test something else
    else:
        # test the original thing

As a general rule, tests shouldn't have assert statements inside an if statement unless there's a very good reason for doing so. Scenarios can be useful, but we must be careful not to abuse them. It is far better to spend more time typing and end up with clear, readable tests than it is to end up with fewer, less readable tests. Like all code, tests are read far more often than they're written.

Do Not Depend on Object Ordering

Calls such as select_many return several objects at once. These objects are explicitly unordered, and test authors must take care not to make assumptions about their order. Bad Example: (code omitted)
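The Eventually/wait_for idea described above is, at its core, timed polling. A standard-library-only sketch (a hypothetical helper, not autopilot's actual implementation) shows why it beats a fixed sleep: it returns as soon as the condition holds, so fast machines don't wait, and it keeps retrying until a deadline, so slow machines don't fail early.

```python
import time

def wait_for(get_value, matcher, timeout=10.0, interval=0.1):
    """Poll get_value() until matcher(value) is true, or raise after timeout."""
    deadline = time.monotonic() + timeout
    while True:
        value = get_value()
        if matcher(value):
            return value  # success: return immediately, no fixed sleep
        if time.monotonic() >= deadline:
            raise AssertionError(
                f"Timed out after {timeout}s; last value: {value!r}")
        time.sleep(interval)

# Simulated "dash" that becomes visible a short while after being asked to open
class FakeDash:
    def __init__(self):
        self._visible_at = None

    def ensure_visible(self):
        # In a real UI this kicks off an animation; here, visible in 0.3s
        self._visible_at = time.monotonic() + 0.3

    @property
    def visible(self):
        return self._visible_at is not None and time.monotonic() >= self._visible_at

dash = FakeDash()
dash.ensure_visible()
# Blocks roughly 0.3s instead of a pessimistic sleep(2)
wait_for(lambda: dash.visible, lambda v: v is True, timeout=2.0)
print("dash became visible")
```

On failure the helper raises an AssertionError that includes the last observed value, which is far more diagnostic than an assertion that fires after an arbitrary sleep.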
https://docs.ubuntu.com/phone/en/apps/api-autopilot-development/guides-good_tests.html
Azure automatically applies security patches to the Linux nodes in your cluster on a nightly schedule. However, you are responsible for ensuring that those Linux nodes are rebooted as required. You have several options for rebooting nodes:

- Kured, an open-source reboot daemon for Kubernetes. Kured runs as a DaemonSet and monitors each node for the presence of a file that indicates that a reboot is required. Across the cluster, OS reboots are managed by the same cordon and drain process as a cluster upgrade. For more information about using kured, see Apply security and kernel updates to nodes in AKS.

Why are two resource groups created with AKS?

AKS builds upon a number of Azure infrastructure resources, including virtual machine scale sets, virtual networks, and managed disks, and this enables you to leverage those resources directly. The second, infrastructure resource group is deleted whenever the cluster is deleted, so it should only be used for resources that share the cluster's lifecycle.

Can I modify the list of admission controllers in AKS?

Currently, you can't modify the list of admission controllers in AKS.

Can I use admission controller webhooks on AKS?

Yes, you may use admission controller webhooks on AKS. It is recommended you exclude internal AKS namespaces, which are marked with the control-plane label. For example, by adding the below to the webhook configuration:

namespaceSelector:
  matchExpressions:
  - key: control-plane
    operator: DoesNotExist

If you have something deployed in kube-system (not recommended) which you require to be covered by your custom admission webhook, you may add the below label or annotation so that Admissions Enforcer ignores it.

Label: "admissions.enforcer/disabled": "true" or Annotation: "admissions.enforcer/disabled": true

Is Azure Key Vault integrated with AKS?

AKS isn't currently natively integrated with Azure Key Vault. However, the Azure Key Vault provider for CSI Secrets Store enables direct integration from Kubernetes pods to Key Vault secrets.

Does AKS offer a service-level agreement?

AKS provides SLA guarantees as an optional add-on feature with Uptime SLA.

Can I apply Azure reservation discounts to my AKS agent nodes?
AKS agent nodes are billed as standard Azure virtual machines, so if you've purchased Azure reservations for the VM size that you're using in AKS, those discounts are automatically applied.

Can I move my AKS cluster or AKS infrastructure resources to other resource groups or rename them?

Moving or renaming your AKS cluster and its associated resources is not supported.

Why is my cluster delete taking so long?

Most clusters are deleted upon user request. In some cases, especially where customers are bringing their own resource group or doing cross-RG tasks, deletion can take additional time or fail. If you have an issue with deletes, double-check that you do not have locks on the RG, and that any resources outside of the RG are disassociated from the RG.

If I have pods / deployments in state 'NodeLost' or 'Unknown', can I still upgrade my cluster?

You can, but AKS does not recommend this. Upgrades should ideally be performed when the state of the cluster is known and healthy.

If I have a cluster with one or more nodes in an unhealthy state or shut down, can I perform an upgrade?

No. Please remove or repair those nodes first, then attempt the upgrade again.

I ran an upgrade, but now my pods are in crash loops, and readiness probes fail?

Please confirm your service principal has not expired. Please see: AKS service principal and AKS update credentials.

My cluster was working, but suddenly cannot provision LoadBalancers, mount PVCs, etc.?

Please confirm your service principal has not expired. Please see: AKS service principal and AKS update credentials.

Can I use the virtual machine scale set APIs to scale manually?

No, scale operations by using the virtual machine scale set APIs aren't supported. Use the AKS APIs (az aks scale).

Can I use virtual machine scale sets to manually scale to 0 nodes?

No, scale operations by using the virtual machine scale set APIs aren't supported.

Can I stop or de-allocate all my VMs?
While AKS has resilience mechanisms to withstand such a configuration and recover from it, this is not a recommended configuration.

Can I use custom VM extensions?

No. AKS is a managed service, and manipulation of the IaaS resources is not supported. To install custom components, please leverage the Kubernetes APIs and mechanisms. For example, leverage DaemonSets to install required components.

Does AKS store any customer data outside of the cluster's region?

No. All data created in an AKS cluster is maintained within the cluster's region.
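The namespaceSelector shown in the webhook answer above simply filters namespaces by label. As a toy illustration (the real evaluation is done by the Kubernetes API server, not by code like this), the DoesNotExist operator means "invoke the webhook only when the label key is absent":

```python
# Toy evaluation of a webhook namespaceSelector using the DoesNotExist operator:
# a namespace matches (and the webhook is invoked for it) only when the given
# label key is absent. Mirrors the YAML from the FAQ above, for illustration only.
def matches_does_not_exist(namespace_labels, key):
    return key not in namespace_labels

webhook_selector_key = "control-plane"  # the key from the FAQ's example

namespaces = {
    "kube-system": {"control-plane": "true"},  # AKS-internal: excluded
    "my-app": {"team": "payments"},            # user namespace: included
}

covered = [
    name for name, labels in namespaces.items()
    if matches_does_not_exist(labels, webhook_selector_key)
]
print(covered)  # -> ['my-app']
```

This is why labeling internal namespaces with control-plane is enough to keep a custom webhook from intercepting AKS's own workloads.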
https://docs.microsoft.com/sl-si/azure/aks/faq
There has to be a better way! Browsing the itertools recipes for inspiration, I found the pairwise function:

from itertools import tee

def pairwise(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ..."
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

Perfect! Now I just have to adjust it to work with n iterators, where the first iterator starts at i, the second at i+1, etc.

def nwise(iterable, n=1):
    iterators = tee(iterable, n)
    for i in range(n):
        for _ in range(i):
            next(iterators[i], None)
    return zip(*iterators)

There we go. Even though it yields windows spanning multiple elements of the iterable, it's still memory-efficient, because generators are awesome. This will provide you with an output like this:

In [4]: l = [1,2,3,4,5,6]

In [5]: list(nwise(l, n=3))
Out[5]: [(1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 6)]

Note that the list call was just used to exhaust the generator for printing. Here's a quick one-liner that counts the number of times a fixed-length (42) window in a list sums to a certain value (1337):

sum([1 for seq in nwise(l, n=42) if sum(seq) == 1337])
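Putting it together, here is a self-contained check of nwise and the counting one-liner, using a smaller window and target than the 42/1337 example purely for readability:

```python
from itertools import tee

def nwise(iterable, n=1):
    # Create n independent iterators and advance the i-th one by i steps,
    # so zipping them yields overlapping windows of length n.
    iterators = tee(iterable, n)
    for i in range(n):
        for _ in range(i):
            next(iterators[i], None)
    return zip(*iterators)

l = [1, 2, 3, 4, 5, 6]
windows = list(nwise(l, n=3))
print(windows)  # -> [(1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 6)]

# Same shape as the 42/1337 one-liner: count length-3 windows summing to 9
hits = sum([1 for seq in nwise(l, n=3) if sum(seq) == 9])
print(hits)  # -> 1 (only (2, 3, 4) sums to 9)
```

The window sums here are 6, 9, 12, and 15, so exactly one window matches the target.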
https://dmuhs.blog/2018/09/14/n-wise-iteration-in-python/
😲Gatsby now integrates multiple data sources, splits code, supports offline, lots of new plugins... amazing!! #reactjs #staticsitegenerator - Christian Mund (@krist) July 6, 2017

I made this website with @gatsbyjs and I'm totally in love with it. React SSGs are the future - Charlotte Dann (@charlotte_dann) July 5, 2017

Super excited about the next version of @gatsbyjs. The first GraphQL-powered static site generator as far as I know! - Sacha Greif (@SachaGreif) March 12, 2017

Gatsby is growing like crazy

In the last year, the Gatsby community and usage have exploded. Milestones reached:

- 196 code contributors on GitHub (with many more helping in our chat room on Discord)
- 10,000 stars on GitHub
- 1,000 followers on Twitter
- 500,000 npm downloads (100,000 in the last month!!)

Cool sites built with Gatsby

- Segment relaunched their blog on Gatsby
- The life insurance startup Fabric built their marketing site and web app using Gatsby
- JavaScript consultancy Formidable built their website on Gatsby

And the site you're reading right now is, of course, a Gatsby website 😛

The three questions that guide Gatsby's design

Gatsby started, like all the best projects do, as a spark of curiosity: "I wonder if I could create a tool for building static websites with React?" I'd been using React to build web apps for 1.5 years at that point, loved how easy React's component model made it to build complex apps, and wanted that same model for building websites. In a week of intense coding, I prototyped the first version of Gatsby (see my talk at React Conf to hear more of the story) and open sourced it 2 years ago. Thousands of sites and 10,000 stars later, it seems clear that tools for building static React sites are useful. But in many conversations among community members building Gatsby sites, two more questions kept coming up.
- How could we query data from anywhere and have that data show up in our pages without any custom scripting?
- How should a website framework work for an internet dominated by smartphones on unreliable networks: an internet vastly different and larger than the one frameworks were designed for a decade ago?

Plugin system

Gatsby v1 heads out to sea delivering components to ports far and wide

The first building block for answering these questions was a plugin system. Wordpress and Jekyll are both great examples of open source communities with robust plugin ecosystems. Plugins help accelerate developing websites, as you can build on what others have done and collaborate with others on basic building blocks.

Gatsby's plugin system lets you hook into Gatsby's lifecycle APIs, from events during the bootstrap and build processes to events in the browser. There are already many official Gatsby plugins, all distributed as individual npm packages. It is easy to create your own plugins for internal projects and for contributing back to Gatsby. Plugins can:

- add support for webpack loaders such as Sass and Less
- add drop-in support for the lightweight React-compatible frameworks Preact and Inferno
- add a sitemap or RSS feed
- add Google Analytics
- ...and much more!

GraphQL-based data processing layer

Plugins also drive the new GraphQL data processing layer. This new system enables rich integrations with CMSs like Contentful, Wordpress, and Drupal, along with other remote and local sources. In Gatsby v0 (like pretty much every static site generator), data was processed and then pushed into templates to be rendered into HTML. This is a straightforward pattern and works great for many use cases. But when you start working on more complex sites, you really start to miss the flexibility of a database-driven site. With a database, all your data is available to query against in any fashion you'd like.
Whatever bits of data you need to assemble a page, you can pull in. You want to create author pages showing their bio and last 5 posts? It's just a query away.

We wanted this same flexibility for Gatsby. So for 1.0, the Gatsby data team has built a new data processing layer which converts your data (whether from local files or remote sources) into a GraphQL schema which you can query against like a database.

Every Gatsby page can have a GraphQL query which tells Gatsby what data is required for that page. The data layer runs the GraphQL queries during development and at build time and writes out a JSON file with the result of the query. This JSON file is then loaded alongside the React code and injected into the React component as props. Because we know at build time what data is needed for every page, we can easily pre-fetch page data, meaning even very complex, data-heavy pages load almost instantly.

This pattern of colocating your queries next to your views is copied from the Relay data framework from Facebook. Colocation makes it easy to fully understand your views, as everything necessary for that view is fully defined there.

An example of how this works in practice. Say we had a markdown file that looked like:

---
title: A sweet post
date: "2017-02-23"
---

This is my sweet blog post. **Cool!**

In our site, we would write a React component which acts as a template for all the blog posts. Included with the component is an exported pageQuery.

// A basic React component for rendering a blog page.
import React from "react";

class BlogPostTemplate extends React.Component {
  render() {
    return (
      <div>
        <h1>{this.props.data.markdownRemark.frontmatter.title}</h1>
        <small>{this.props.data.markdownRemark.frontmatter.date}</small>
        <div
          dangerouslySetInnerHTML={{
            __html: this.props.data.markdownRemark.html,
          }}
        />
      </div>
    );
  }
}

export default BlogPostTemplate;

export const pageQuery = graphql`
  query BlogPost($slug: String!) {
    markdownRemark(slug: { eq: $slug }) {
      # Get the markdown body compiled to HTML.
      html
      frontmatter {
        title
        # Transform the date at build time!
        date(formatString: "MMM D, YYYY")
      }
    }
  }
`;

All data sourcing and transforming is plugin-driven. So in time, any imaginable data source and potential way of transforming its data will be an npm install away. For the markdown ecosystem there's already a robust set of plugins, including adding syntax highlighting with PrismJS and resizing images referenced in markdown files so they're mobile-ready. There are also source plugins written for Contentful, Wordpress, Drupal, Hacker News (really 😛), and more, as well as transformer plugins for markdown, JSON, YAML, JSDoc, React prop-types, and images. We're collecting a list of additional source/transformer plugins that'd be useful to have. These plugins are easy to write (somewhat similar to webpack loaders), so we expect the list of plugins to grow rapidly.

Building for the next billion internet users

Gatsby builds on modern web performance techniques (e.g. the PRPL pattern) developed by the Google Chrome Developer Relations team and others to help websites work well on modern browsers with unreliable networks. Sites built with Gatsby run as much as possible in the client, so regardless of the network conditions (good, bad, or nonexistent) things will keep working. When a page loads, Gatsby immediately starts prefetching resources for pages nearby, so that when a user clicks on a link, the new page loads instantly. Many of the top e-commerce websites in areas where people are coming online for the first time are developing their websites using these techniques. Read Google's case studies on these sites.

Service worker and offline support

Service workers are perhaps the most exciting technology that's come to the web in the past several years. They make possible (finally!) sophisticated client caching plus true offline support.
We’ve added excellent support to Gatsby for Service Workers and a great offline experience. If you’re using Chrome or Firefox, this site loads and works offline! Service workers make your site much more resilient against bad networks. If someone loads your site on a train and goes through a tunnel, you won’t lose them as they’ll still be able to keep clicking around. Route-based code splittingRoute-based code splitting Many sites generate one JavaScript bundle for the entire site. Which means someone loading your frontpage loads far more code than is necessary which is bad then users get frustrated when site isn’t responsive to their clicks and touches while the code loads. Gatsby 1.0 initially only loads the code necessary for the page you’re on. As you navigate around, Gatsby loads in the code needed for each route. This means that one page with heavy imports: import d3 from "d3"; import threejs from "react-threejs"; …won’t affect the performance of the rest of the site. This is particularly helpful for teams of people collaborating on a site with pages with very different technical and business requirements. Different parts of the site can evolve independently of each other. One client I’m working with on Gatsby 1.0 (a stealth startup in San Francisco) is using Gatsby to build both their marketing site and SaaS app within the same Gatsby codebase. The marketing pages of their site are built using markdown and React components along with a modern css-in-js library Glamor for styling. The SaaS portion uses Redux to communicate with their Django API. The marketing portion of the site loads quickly with minimal JavaScript. When a potential customer goes to sign-up for the app, there’s no awkward jump from the marketing website to the web app—just a simple page change which seamlessly loads in the needed JavaScript. The team is sharing components and styles across the site without stepping on each others shoes as they rapidly iterate on features. 
Ending note

Gatsby is just getting started. We're really looking forward to working with you! See you on GitHub! 👋
https://www.gatsbyjs.org/blog/gatsby-v1/?utm_campaign=React%2BNewsletter&utm_medium=web&utm_source=React_Newsletter_77
Hi there,

I'm trying to get the text from two RichEdit controls which are located in an external application, but I can't get the text with GetWindowText. What is my mistake? Any help would be much appreciated. Thanks in advance. Here's the code:

#include <iostream>
#include <fstream>
#include <Windows.h>
#include <Winuser.h>
using namespace std;

HWND hwndApp;
HWND hwndRichEditControl1;
HWND hwndRichEditControl2;

void GetWindowAndControls() {
    // get the handle for the application, works fine
    hwndApp = FindWindow(NULL, L"MyExternalApplication");
    // get the handles for the two RichEdit controls, works ok (tested with winID).
    hwndRichEditControl1 = FindWindowEx(hwndApp, NULL, L"WindowsForms10.RichEdit20W.app.0.33c0d9d", NULL);
    hwndRichEditControl2 = FindWindowEx(hwndApp, hwndRichEditControl1, L"WindowsForms10.RichEdit20W.app.0.33c0d9d", NULL);
}

void main() {
    // declare the strings
    TCHAR txtApp[10], txt1[10], txt2[10];
    // get the application handle and the rich edit handles
    GetWindowAndControls();
    // this works fine, and gets the text from the main window
    ::GetWindowText(hwndApp, txtApp, 10);
    // this doesn't work - doesn't get text from the RichEdit controls inside the main window
    // Why? How can I make it work?
    ::GetWindowTextW(hwndRichEditControl1, txt1, 10);
    ::GetWindowTextW(hwndRichEditControl2, txt2, 10);
}

Read MSDN on GetWindowText() ...

I know GetWindowText doesn't work with controls. That's why I'm asking what function I CAN use for this purpose.

Originally Posted by fred100: Read MSDN on GetWindowText() ...

I tried using SendMessage:

char txt[512] = "";
::SendMessage(hwndRichEditControl1, WM_GETTEXT, 512, (LPARAM)txt);

But even this doesn't work... any ideas? I'm using winID to watch this control and I can see the text under the "Title" property. Its class is WindowsForms10.RichEdit20W.app.0.33c0d9d. Why can't I retrieve the text with SendMessage(..., WM_GETTEXT, ...)?
I solved the problem with:

TCHAR txt[32] = L"";
::SendMessage(hwndRichEdit, WM_GETTEXT, 32, (LPARAM)txt);

It seems that the buffer type wasn't correct: it's supposed to be TCHAR, not char, and it must be cast to LPARAM. Ok, the thread can be closed now.

You may want to change the method that you are using to declare your txt variable. You are mixing the TCHAR support (a Microsoft invention which allows building applications which support multiple character set widths) with a direct literal wide string. You used the following to declare txt:

TCHAR txt[32] = L"";

TCHAR is substituted with either wchar_t or char (depending on what the character set option is for the project). The right side (L"") requests an empty wide string (each character is a wchar_t). Since this compiles, your character set must be set to Unicode. If you set your character set to Multi-Byte, this would produce a compile error. If you need to support building the source using both settings, then the following could be used:

#include <tchar.h>
TCHAR txt[32] = _T("");

If you are only going to build the source with one setting for the character set, then the following would work for Unicode:

wchar_t txt[32] = L"";
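The fix above hinges on the difference between a wide (UTF-16 `TCHAR`/`wchar_t`) buffer and a narrow `char` buffer. As an illustrative aside (not from this thread), Python's `ctypes` draws exactly the same distinction, which makes the size difference easy to see; on Windows the wide buffer is what you would pass as the `LPARAM` of `SendMessageW(hwnd, WM_GETTEXT, ...)`.

```python
import ctypes

# Analogous to TCHAR txt[32] = L"" under a Unicode build:
wide = ctypes.create_unicode_buffer(32)    # 32 wchar_t elements

# Analogous to char txt[512] = "" (here sized 32 for comparison):
narrow = ctypes.create_string_buffer(32)   # 32 char elements

# The narrow buffer is 32 bytes; the wide buffer is 32 * sizeof(wchar_t)
# bytes (2 on Windows, typically 4 on Linux). Passing the narrow one where
# a wide-character API expects wchar_t storage is the original bug.
assert ctypes.sizeof(narrow) == 32
assert ctypes.sizeof(wide) == 32 * ctypes.sizeof(ctypes.c_wchar)
```

This is only a sketch of the buffer-width issue; the actual `FindWindowEx`/`SendMessageW` calls require a Windows process to talk to.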
http://forums.codeguru.com/showthread.php?466787-using-handles-to-get-RichEdit-text
Bug #1372: TheIDE fails to compile with NOGTK flag

Description

It seems to be some issue with the Upp namespace. See full buildlog here:.

History

#1 Updated by Jan Dolinár over 5 years ago

Ok, so it has nothing to do with namespaces... The problem is 'None' in PythonSyntax::Identation::Type. There is a macro called None in X/X11.h, which messes things up. The solution is to rename the enum value to something else. BTW: The "Identation" struct is a typo; it should be renamed to "Indentation".

#2 Updated by Zbigniew Rebacz over 5 years ago

Thanks for the information. It is my fault. We can change None. I will make a complex patch that allows indentation insert in html/xml-like style with minor refactoring of indentation. None is probably now Unknown. Shouldn't syntax be in the upp namespace?

#3 Updated by Jan Dolinár over 5 years ago

- Assignee changed from Miroslav Fidler to Zbigniew Rebacz

Can you please simply rename the None ASAP? It breaks the nightly builds of the Debian and Arch packages. Complex changes can wait, but this should be fixed as soon as possible. Thanks.

#4 Updated by Zbigniew Rebacz over 5 years ago

- File CodeEditorNOGTKCompilationFix.diff added

#5 Updated by Miroslav Fidler over 5 years ago

- Status changed from New to Ready for QA

Applied.

#6 Updated by Zbigniew Rebacz over 5 years ago

- Assignee changed from Zbigniew Rebacz to Jan Dolinár

#7 Updated by Jan Dolinár over 5 years ago

- Status changed from Ready for QA to Approved
https://www.ultimatepp.org/redmine/issues/1372
@robert-hh - Thanks, really good point. I hadn't looked that deeply. As you say, the specs for EU433 are very similar to EU868, though the default channel frequencies are different. Still, that should be easy to fix up by removing the defaults and adding in new defaults.

So we know that we CAN'T use something like this:

lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.EU433, ....)

But it looks like we can use:

LoRa(mode=LoRa.LORAWAN, region=LoRa.EU868)

followed by removing the 3 default channels (0, 1, 2) and adding them back in manually. Except in the documentation, it says:

On the 868MHz band the channels 0 to 2 cannot be removed, they can only be replaced by other channels using the lora.add_channel method.

So, it looks like the pseudo-code to use EU433 ON A LOPY-4 is:

# Using EU433, but have to select region EU868 on the LOPY-4 to access EU433
# Put any other arguments into this call that you need
LoRa(mode=LoRa.LORAWAN, region=LoRa.EU868)

# Replace the three default channels:
lora.add_channel(index=0, frequency=433175000, dr_min=0, dr_max=5)
lora.add_channel(index=1, frequency=433375000, dr_min=0, dr_max=5)
lora.add_channel(index=2, frequency=433575000, dr_min=0, dr_max=5)

It looks like this is a work-around until the actual region parameter handling is hooked up.

@Mladen I would love to know if this actually works in the field.

@mladen The code seems inconsistent. The settings and QSTR for EU433 are not included, but using EU868, you can set the frequency as low as 410 MHz. (esp32/mods/modlora.c, line 1143 ff)

case LORAMAC_REGION_EU868:
#if defined(LOPY4)
    if (frequency < 410000000 || frequency > 870000000) {
#else
    if (frequency < 863000000 || frequency > 870000000) {
#endif
        goto freq_error;
    }
    break;

Besides the frequencies, the specs for EU433 and EU868 seem very similar.

@mladen Have you tried just adding the EU433 setting when you set up LoRa in your code? The device-level setting (in the firmware update tool) is just a default anyway...
The actual LoPy4 specification on the web site lists EU433 as supported. According to this, EU433 must be working!

From what I could tell from looking at the code, the front ends to the update tool and the Python interfaces to the LoRaWAN stack only support a subset of regions (from pycom-micropython-sigfox/esp32/mods/modlora.c):

static void lora_validate_region (LoRaMacRegion_t region) {
    if (region != LORAMAC_REGION_AS923 && region != LORAMAC_REGION_AU915
        && region != LORAMAC_REGION_EU868 && region != LORAMAC_REGION_US915) {
        nlr_raise(mp_obj_new_exception_msg_varg(&mp_type_ValueError, "invalid region %d", region));
    }
}

The regions are supported in the main LoRaWAN stack, but it doesn't look like the support has trickled through to the pycom code yet. I wonder if anyone from pycom could tell us where EU433 is on the priority list?
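The frequency check quoted from modlora.c can be transliterated into plain Python to sanity-check the workaround: the LoPy4 build accepts 410-870 MHz under the EU868 setting, while other boards only accept 863-870 MHz. The function and parameter names below are my own, not part of Pycom's API.

```python
def eu868_frequency_ok(frequency_hz, is_lopy4):
    """Mirror of the C range check in modlora.c for LORAMAC_REGION_EU868."""
    if is_lopy4:
        # #if defined(LOPY4): reject < 410 MHz or > 870 MHz
        return 410_000_000 <= frequency_hz <= 870_000_000
    # other boards: reject < 863 MHz or > 870 MHz
    return 863_000_000 <= frequency_hz <= 870_000_000


# The three EU433 channels from the workaround above pass only on a LoPy4:
for f in (433_175_000, 433_375_000, 433_575_000):
    assert eu868_frequency_ok(f, is_lopy4=True)
    assert not eu868_frequency_ok(f, is_lopy4=False)
```

This is consistent with the workaround: the EU433 channel frequencies are only accepted by the firmware's EU868 path on a LoPy4.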
https://forum.pycom.io/topic/3345/lopy4-433mhz/16
Since version 1.0 of a code library that I'm sure you're tired of me talking about came out, I have been making steady updates, some of which break legacy code. I was also having trouble keeping track of which version of the library a particular demo was written for. In order to make sure that the new code library doesn't cause unpredictable results for the old implementations, I added a version check to the main class.

This version check is very simple. It checks a version number in the client code against a version number of the library (or external classes) when the main class of the library is initialized. It is also completely optional (so average users don't need to mark up their code with version numbers). Here's an example. Say you normally initialize your library using:

MyLibrary.initialize();

Which calls the initialize method:

public static function initialize():void {
    // initialize library here
}

If the external code changes unexpectedly (say through an SVN update) this could cause problems that are difficult to trace. The solution is to modify the initializer to match the client code version with the initializer version.

public static const VERSION:String = "1.6";

public static function initialize(versionCheck:String = VERSION):void {
    if (versionCheck != VERSION) {
        throw new Error("The version check failed! This library is version " + VERSION + ". Update your code or GTFO!");
    }
    // initialize library here
}

And when you run the initializer, use the version number you're expecting.

MyLibrary.initialize("1.6");

Some things to notice with this approach:

- I used a string instead of a number for the version number so that it would allow for sub-sub version numbers and other markers, e.g. "1.6.24 beta r545"
- Because there is a default value for versionCheck in the initializer, providing a version number is optional. Using MyLibrary.initialize() will still work and throw no errors.
- Unfortunately, there is not much you can do if the version numbers don't match except to warn the client that the external code has changed.

Still, I've found this to be very useful. The code for my initializer in its entirety after the jump…

From KitchenSync.as:

/**
 * The current version of the library. Use this to verify that the library is the
 * version that your software expects.
 */
public static const VERSION:String = "1.6";

/**
 * Flag noting whether the engine has been initialized.
 */
private static var _initialized:Boolean = false;

/**
 * Initializes the timing core for KitchenSync. Must be called before using any actions.
 *
 * @param frameRateSeed must be a DisplayObject that is added to the display list.
 * @param versionCheck a string for the version you think you're using, e.g. 1.2. This is recommended
 *                     but not required. It will throw an error if you're using the wrong version of KS.
 */
public static function initialize(frameRateSeed:DisplayObject, versionCheck:String = VERSION):void {
    if (_initialized) {
        throw new IllegalOperationError("KitchenSync has already been initialized.");
    }
    if (versionCheck != VERSION) {
        throw new Error("Version check failed. Please update to the correct version or to continue using this version (at your own risk) put the initialize() method inside a try{} block.");
    }
    // Initialization code omitted
    _initialized = true;
}

Tags: actionscript, KitchenSync, library, Tutorial

[...] Library Code Version Checking — Cool technique for checking the version of your code when initializing. Helps to keep track of what [...]

Hi, nice solution… Here is my own, basically the same approach as you, but with the help of an Eclipse Ant task to automate the job of manually updating the version number. Simply add this number to the right-button menu… and you are right!!.. Good work. Pedro

[...] 6. Library Code Version Checking [...]

Interesting, not that hard to figure out, but now that I see it already made… looks cool.
Another solution might involve namespaces maybe? nvm, thinking out loud… xD Nice blog!
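The pattern described in the post translates directly to other languages. Here is my own Python sketch of the same idea (not part of the KitchenSync library): an optional version argument that defaults to the library's own version, so callers who don't pass anything are unaffected, while callers who pin a version get a loud failure on mismatch.

```python
LIBRARY_VERSION = "1.6"


def initialize(version_check=LIBRARY_VERSION):
    """Initialize the library, optionally verifying the expected version.

    Because version_check defaults to LIBRARY_VERSION, calling
    initialize() with no argument always passes the check.
    """
    if version_check != LIBRARY_VERSION:
        raise RuntimeError(
            f"Version check failed: library is {LIBRARY_VERSION}, "
            f"caller expected {version_check}"
        )
    # ... real initialization would go here ...


initialize()        # fine: no version given, the default matches
initialize("1.6")   # fine: explicit match
try:
    initialize("1.5")  # mismatch: the caller is warned loudly
except RuntimeError as err:
    print(err)
```

As the post notes, the check can only warn the caller that the external code has changed; it cannot repair the mismatch.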
http://dispatchevent.org/mims/version-checking/
Foreach loop

- A foreach loop runs on a single thread and processing takes place sequentially, one item at a time.
- The foreach loop is a basic feature of C# and has been available since C# 1.0.
- Its execution is slower than Parallel.ForEach in most cases.

Parallel.ForEach loop

- A Parallel.ForEach loop runs on multiple threads and processing takes place in parallel.
- Parallel.ForEach is not a basic feature of C#; it is available from C# 4.0 (.NET 4.0) and above. Before that we cannot use it.
- Its execution is faster than foreach in most cases.
- To use the Parallel.ForEach loop we need to import the System.Threading.Tasks namespace in a using directive.

But you know your application well, and you can decide which one you want to use. I am giving two examples: in the first example the traditional foreach loop is faster than Parallel.ForEach, whereas in the second example the traditional foreach loop is very slow compared to Parallel.ForEach.

Example 1: Parallel.ForEach loop is slower than the traditional foreach loop.
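The same sequential-vs-parallel trade-off exists outside C#. As a rough Python analogue (my own sketch, not from the article), a plain for-loop plays the role of foreach and a thread pool plays the role of Parallel.ForEach; for tiny per-item work the sequential loop often wins because parallelism carries scheduling overhead, which is exactly the article's first case.

```python
from concurrent.futures import ThreadPoolExecutor

items = list(range(100))


def work(n):
    # stand-in for the loop body; trivially cheap, so parallel
    # overhead can easily exceed the work itself
    return n * n


# Sequential processing, like a traditional foreach loop:
sequential = [work(n) for n in items]

# Parallel processing, like Parallel.ForEach:
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(work, items))

# Both strategies produce the same results; only the scheduling differs.
assert sequential == parallel
```

Profiling with a realistic, expensive `work` function is the honest way to decide which form wins for a given workload.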
http://www.c-sharpcorner.com/UploadFile/efa3cf/parallel-foreach-vs-foreach-loop-in-C-Sharp/
#include <gdk-pixbuf/gdk-pixbuf.h>

void gdk_pixbuf_xlib_init            (Display *display,
                                      int screen_num);
void gdk_pixbuf_xlib_init_with_depth (Display *display,
                                      int screen_num,
                                      int prefDepth);

In addition to the normal Gdk-specific functions, the gdk-pixbuf package provides a small library that lets Xlib-only applications use GdkPixbuf structures and render them to X drawables. The functions in this section are used to initialize the gdk-pixbuf Xlib library. This library must be initialized near the beginning of the program, or before calling any of the other gdk-pixbuf Xlib functions.

gdk_pixbuf_xlib_init()

Initializes the gdk-pixbuf Xlib machinery by calling xlib_rgb_init(). This function should be called near the beginning of your program, or before using any of the gdk-pixbuf-xlib functions.

gdk_pixbuf_xlib_init_with_depth()

Similar to gdk_pixbuf_xlib_init(), but also lets you specify the preferred depth for XlibRGB if you do not want it to use the default depth it picks.
http://maemo.org/api_refs/4.1/gtk+2.0-2.10.12/gdk-pixbuf/gdk-pixbuf-gdk-pixbuf-xlib-init.html
Python Client for QuadrigaCX

Project description

Introduction

Quadriga is a Python client for the Canadian cryptocurrency exchange platform QuadrigaCX. It wraps the exchange's REST API v2 using the requests library.

Requirements

- Python 2.7, 3.4, 3.5 or 3.6.
- QuadrigaCX API secret, API key and client ID (the number used for your login).

Installation

To install a stable version from PyPI:

~$ pip install quadriga

To install the latest version directly from GitHub:

~$ pip install -e git+git@github.com:joowani/quadriga.git@master#egg=quadriga

You may need to use sudo depending on your environment.

Getting Started

Here are some usage examples:

from quadriga import QuadrigaClient

client = QuadrigaClient(
    api_key='api_key',
    api_secret='api_secret',
    client_id='client_id',
)

client.get_balance()               # Get the user's account balance
client.lookup_order(['order_id'])  # Look up one or more orders by ID
client.cancel_order('order_id')    # Cancel an order by ID

client.get_deposit_address('bch')  # Get the funding address for BCH
client.get_deposit_address('btc')  # Get the funding address for BTC
client.get_deposit_address('btg')  # Get the funding address for BTG
client.get_deposit_address('eth')  # Get the funding address for ETH
client.get_deposit_address('ltc')  # Get the funding address for LTC

client.withdraw('bch', 1, 'bch_wallet_address')  # Withdraw 1 BCH to wallet
client.withdraw('btc', 1, 'btc_wallet_address')  # Withdraw 1 BTC to wallet
client.withdraw('btg', 1, 'btg_wallet_address')  # Withdraw 1 BTG to wallet
client.withdraw('eth', 1, 'eth_wallet_address')  # Withdraw 1 ETH to wallet
client.withdraw('ltc', 1, 'ltc_wallet_address')  # Withdraw 1 LTC to wallet

book = client.book('btc_cad')
book.get_ticker()             # Get the latest ticker information
book.get_user_orders()        # Get user's open orders
book.get_user_trades()        # Get user's trade history
book.get_public_orders()      # Get public open orders
book.get_public_trades()      # Get recent public trade history
book.buy_market_order(10)     # Buy 10 BTC at market price
book.buy_limit_order(5, 10)   # Buy 5 BTC at limit price of $10 CAD
book.sell_market_order(10)    # Sell 10 BTC at market price
book.sell_limit_order(5, 10)  # Sell 5 BTC at limit price of $10 CAD

Donation

If you found this library useful, feel free to donate.

- BTC: 3QG2wSQnXNbGv1y88oHgLXtTabJwxfF8mU
- ETH: 0x1f90a2a456420B38Bdb39086C17e61BF5C377dab

Disclaimer

The author(s) of this project is in no way affiliated with QuadrigaCX, and shall not accept any liability, obligation or responsibility whatsoever for any cost, loss or damage arising from the use of this client. Please use at your own risk.
https://pypi.org/project/quadriga/
- Creating Business Components
- Creating Multitiered Web Applications
- Using Code Behind
- Summary

Web developers are not necessarily good designers. Most companies divide the task of building Web sites between two teams. Normally, one team is responsible for the design content of a page, and the other team is responsible for the application logic. Maintaining this separation of tasks is difficult when both the design content and application logic are jumbled together on a single page. A carefully engineered ASP.NET page can be easily garbled after being loaded into a design program. Likewise, a beautifully designed page can quickly become mangled in the hands of an engineer.

In this article, you learn two methods of dividing your application code from your presentation content. In other words, you learn how to keep both your design and engineering teams happy. First, you learn how to package application code into custom business components. Using a business component, you can place all your application logic into a separate Visual Basic class file. You also learn how to use business components to build multitiered Web applications. Next, you learn how to take advantage of a feature of ASP.NET pages named code-behind. Using code-behind, you can place all your application logic into a file and create one or more ASP.NET pages that inherit from this file. Code-behind is the technology used by Visual Studio .NET to divide presentation content from application logic.

NOTE: You can view "live" versions of many of the code samples in this article by visiting the Superexpert Web site at

Creating Business Components

In this section, you learn how to create business components using Visual Basic and use the components in an ASP.NET page. Business components have a number of benefits:

- Business components enable you to divide presentation content from application logic. You can design an attractive ASP.NET page and package all the page's application logic into a business component.
- Business components promote code reuse. You can write a library of useful subroutines and functions, package them into a business component, and reuse the same code on multiple ASP.NET pages.
- Business components are compiled. You therefore can distribute a component without worrying about the source code being easily revealed or modified.
- Business components can be written in multiple languages. Some developers like working with Visual Basic, some prefer C# or C++, and some even like COBOL and Perl. You can write components in different languages and combine the components into a single ASP.NET page. You can even call methods from a component written in one language from a component written in another language.
- Business components enable you to build multitiered Web applications. For example, you can use components to create a data layer that abstracts away the design specifics of a particular database. Or you can write a set of components that encapsulate your business logic.

Creating a Simple Business Component

A business component is a Visual Basic class file. Whenever you create a component, you need to complete each of the following three steps:

1. Create a file that contains the definitions for one or more Visual Basic classes and save the file with the extension .vb.
2. Compile the class file.
3. Copy the compiled class file into your Web application's /BIN directory.

Start by creating a simple component that randomly displays different quotations. You can call this component the quote component. First, you need to create the Visual Basic class file for the component. The quote component is contained in Listing 1.

You do not need to import the System, System.Web.UI, System.Web.UI.HTMLControls, or System.Web.UI.WebControls namespaces in an ASP.NET page, because these namespaces are automatically imported by default. However, a Visual Basic class file does not have default namespaces.
Next, you need to create your own namespace for the class file. Listing 1 contains the declaration for your Visual Basic class. The class has a single function, named ShowQuote(), that randomly returns one of three quotations. This function is exposed as a method of the Quote class.

After you write your component, you need to save it in a file that ends with the extension .vb. Save the file in Listing 1 with the name Quote.vb. The next step is to compile your Quote.vb file by using the vbc command-line compiler included with the .NET framework. Open a DOS prompt, navigate to the directory that contains the Quote.vb file, and execute the following statement (see Figure 1):

vbc /t:library quote.vb

The /t option tells the compiler to create a DLL file rather than an EXE file.

Figure 1: Compiling a component.

NOTE: If you use either Web or HTML controls in your component, you need to add a reference to the system.web.dll assembly when you compile the component, like this:

vbc /t:library /r:system.web.dll quote.vb

All the classes in the .NET framework are contained in assemblies. If you use a specialized class, you need to reference the proper assembly when you compile the component. If you look up a particular class in the .NET Framework SDK Documentation, it will list the assembly associated with the class.

If no errors are encountered during compilation, a new file named Quote.dll should appear in the same directory as the Quote.vb file. You now have a compiled business component. The final step is to move the component to a directory where the Web server can find it. To use the component in your ASP.NET pages, you need to move it to a special directory named /BIN. If this directory does not already exist, you can create it.

ASP Classic Note: You do not need to register the component in the server's Registry by using a tool such as regsvr32.exe. Information about ASP.NET components is not stored in the Registry.
This means that you can copy a Web application to a new server, and all the components immediately work on that new server. The /BIN directory must be an immediate subdirectory of your application's root directory. By default, the /BIN directory should be located under the wwwroot directory. However, if your application is contained in a Virtual Directory, you must create the /BIN directory in the root directory of the Virtual Directory.

Immediately after you copy the component to the /BIN directory, you can start using it in your ASP.NET pages. For example, the page in Listing 2 uses the quote component to assign a random quote to a Label control.

ASP Classic Note: Great news! You no longer need to stop and restart your Web server to start using a component whenever you modify it. As soon as you move a component into the /BIN directory, the new component will be used for all new page requests. ASP.NET components are not locked on disk because the Web server maintains shadow copies of all the components in a separate directory. When you replace a component in the /BIN directory, the Web server completes all the current page requests using the old version of the component in the shadow directory. As soon as all the current requests are completed, the shadow copy of the old component is automatically replaced with the new component.

The page in Listing 2 imports the myComponents namespace. After the namespace is imported, you can use your component just like any .NET class. In the Page_Load subroutine, you create an instance of your component. Next, you call the ShowQuote() method of the component to assign a random quotation to the Label control (see Figure 2).

You also can expose properties in a component by using property accessor syntax. Using this syntax, you can define a Set function that executes every time you assign a value to a property and a Get function that executes every time you read a value from a property.
You can then place validation logic into the property's Get and Set functions to prevent certain values from being assigned or read. The modified version of the adder component in Listing 5, for example, uses property accessor syntax.

Listing 5 — AdderProperties.vb

Imports System

Namespace myComponents

  Public Class AdderProperties
    Private _firstValue As Integer
    Private _secondValue As Integer

    Public Property FirstValue As Integer
      Get
        Return _firstValue
      End Get
      Set
        _firstValue = Value
      End Set
    End Property

    Public Property SecondValue As Integer
      Get
        Return _secondValue
      End Get
      Set
        _secondValue = Value
      End Set
    End Property

    Function AddValues() As Integer
      Return _firstValue + _secondValue
    End Function

  End Class

End Namespace

The modified version of the adder component in Listing 5 works in exactly the same way as the original adder component. When you assign a new value to the FirstValue property, the Set function executes and assigns the value to a private variable named _firstValue. When you read the FirstValue property, the Get function executes and returns the value of the private _firstValue variable.

The advantage of using accessor functions with properties is that you can add validation logic into the functions. For example, if you never want someone to assign a value less than 0 to the FirstValue property, you can declare the FirstValue property like this:

Public Property FirstValue As Integer
  Get
    Return _firstValue
  End Get
  Set
    If Value < 0 Then
      _firstValue = 0
    Else
      _firstValue = Value
    End If
  End Set
End Property

This Set function checks whether the value passed to it is less than 0. If the value is, in fact, less than 0, the value 0 is assigned to the private _firstValue variable.

Using a Component to Handle Events

You can use components to move some, but not all, of the application logic away from an ASP.NET page to a separate compiled file. Imagine, for example, that you want to create a simple user registration form with an ASP.NET page.
When someone completes the user registration form, you want the information to be saved in a file. You also want to use a component to encapsulate the logic for saving the form data to a file. Listing 6 contains the registration page. In the subroutine that handles the form submission, you must pass the values entered into each of the TextBox controls to the doRegister() method. You are forced to do so because you cannot refer directly to the controls in UserRegistration.aspx from the register component. Later in this article, in the discussion of code-behind, you learn how to move all your application logic into a separate compiled file.
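The Set-accessor validation shown for the adder component has a direct analogue in most languages with properties. As an illustrative aside (my own sketch, not part of the article's VB code), here is the same clamp-below-zero rule expressed as a Python property:

```python
class Adder:
    """Python analogue of the VB AdderProperties class: writes below
    zero are clamped to zero, mirroring the FirstValue Set accessor."""

    def __init__(self):
        self._first_value = 0

    @property
    def first_value(self):
        # Plays the role of the Get accessor.
        return self._first_value

    @first_value.setter
    def first_value(self, value):
        # Plays the role of the Set accessor, with the validation logic.
        self._first_value = 0 if value < 0 else value


a = Adder()
a.first_value = -5
print(a.first_value)  # clamped to 0
a.first_value = 7
print(a.first_value)  # stored as-is: 7
```

As in the VB version, callers use plain attribute syntax while the accessor quietly enforces the invariant.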
http://www.informit.com/articles/article.aspx?p=25468&amp;seqNum=3
×16 pixels to work with and you're futzing around with MSPaint. I'm used to having gobs of memory and near-unlimited CPU at my disposal, so working with such a small canvas was quite … constraining.

This looks like a pretty cool feature! Especially the "Simplify type names" would be something that I would be using a lot. I hope it makes it for Whidbey or as a power tool…

The only issue I have with lots of warnings (using VS 2003) is that actual errors can be hard to find among the 20 to 30 Obsolete warnings in the project.

I like Simplify and Sort a lot. I actually spend time sorting and grouping the usings. For Remove, you might want to consider just putting them in comments so we don't have to re-type them if we need them again. For #if DEBUG | RELEASE: before showing a dialog, maybe make a second pass with them swapped, only showing the dialog with non-standard defines.

When I'm pasting code in, it's often from MSDN samples. It is devoid of using statements and does not have full names like System.Collections.Generic.IList. How about a "Suggest usings" that does a quick search for matching class names in the project?

Frederik: Could you give me examples of how you would be using it and why you would be using it a lot? It will help when explaining to others if I can cite specific examples of how customers like you would find it valuable. Thanks!

Andrew: In 2k5 we made it pretty simple to add a new using to a file. Just type something that needs the using and we'll offer to add it automatically. I.e. if you type 'List<int>' we'll offer to add "using System.Collections.Generic". So I don't think that commenting out "usings" would be that helpful. The DEBUG | RELEASE idea is possible and I'll look into it (although it feels icky to special-case like that).

BTW, about the warnings, that's why we wanted a new "Information" classification. That way you could hide all warnings in the task list and just see the information tidbits.
I’d love to get my hands on these in Whidbey or as soon as possible after that. I think the best would be "Remove unused usings". I’ve actually been wanting this for a while – especially useful in simplifying code that’s gone through heavy refactoring. I’ll vote for it! I’m glad to hear about the suggest usings that Andrew had asked about. I think this is a generally useful addition. My current editor supports organizing (expanding, collapsing, sorting, removing unused) type imports and I get a lot of use out of it. Since it’s so easy to just do the right thing in most cases, I agree it is best to just offer to fix it rather than providing code critics telling you about the problem. Something missing from your description that I currently use is the ability to do organize all files with one click. In VS this would probably mean all files in the project; in my editor it means all open files. Wow, these are great! Please try to get these into Whidbey, or as Frederik said, a power tool. Ah, MS Paint, so thats how you do it 🙂 I remember once asking you how you knocked up these samples. I’d assumed you had some special build of Whidbey that allowed you to actually create these menu items and smart tags in code easily 🙂 The ReSharper from JetBrains gives you the delete/simplify option today, in VS2003. Another code cleanup idea is to wrap methods with the same prefix in regions. Ariel Resharper is slow and ugly. Wow, neat feature.? While, this is a nice feature, I would much prefer to have incremental compilation for C#. I think Eclipse does a fantastic job here, and VS is clearly lacking. I do realize that there are perf considerations once the code base becomes huge, but these can be addressed through some advanced options. It would have been awesome if incremental compilation made it to Whidbey :(; I just hope that it appears in Orcas.. Call me paranoid, but I tend to avoid "execute this on the entire file" commands. 
There’s too many chances for side-effects, and I don’t know what exactly was done for me. Senkwe: MSPaint was only to generate the icon for the menu option. These are actual screenshots and the features work as advertised (without testing and verification that they work in all cases of course). Omer: )." Ctrl-Z will always undo the last action you performed, and it does behave as expected in this case. We could utilize the "refactoring preview changes" dialog in this case, but i was shooting for a very lightweight feature here. ?" You won’t be burdening yourself. We don’t automatically add "usings" for you, you must explicitly do it yourself. So if there is no "using System.Collections.Generic" then you’ll be fine. Ed: ." That seems quite reasonable. But much more complex to implement infortunately. "Call me paranoid, but I tend to avoid "execute this on the entire file" commands. There’s too many chances for side-effects, and I don’t know what exactly was done for me." You’re paranoid 🙂 What chance is there for side-effects? If a using is unused then removing it has no consequences. If a simply type name binds to the same thing that the fully qualified name binds to then there is no consequence in using the simple name. Harry: " While, this is a nice feature, I would much prefer to have incremental compilation for C#." Me2. But unfortunately incremental compile is something that would take us weeks (if not months) to do, whereas this feature took me 2 hours 🙂 "I think Eclipse does a fantastic job here, and VS is clearly lacking. I do realize that there are perf considerations once the code base becomes huge, but these can be addressed through some advanced options." Agreed. We are well aware of the benefits of incremental compile and are seriously considering it against all the other work we’d like to do. "It would have been awesome if incremental compilation made it to Whidbey :(; I just hope that it appears in Orcas." 
I hope it does too 🙂

Nicholas: "Something missing from your description that I currently use is the ability to organize all files with one click. In VS this would probably mean all files in the project; in my editor it means all open files." Yes, this is true. However, these will just be standard commands (like "Edit.RemoveUnusedUsings") so it would be trivial to write a macro that did something like:

    foreach (Project p in dte.Solution) {
        foreach (File f in p.Files) {
            f.ExecuteCommand("Edit.RemoveUnusedUsings");
        }
    }

(I’m not sure of the syntax, but it would be something like that.)

Ariel: "Another code cleanup idea is to wrap methods with the same prefix in regions." I don’t really like the sound of that. It seems kinda limiting to just do it based on method prefix. I think I’d prefer an extensible model here where you have more flexibility to specify what code goes in what regions.

Cyrus, cool ideas. I do have one suggestion though… perhaps you could make the ‘Sort’ feature a little more general. Perhaps it could sort the currently highlighted block of code instead of just being used for Using statements. I often spend (waste?) time copying and pasting my WinForms event procedures to get them listed in alphabetical order. Being able to simply highlight the code (while collapsed to definitions) and clicking ‘Sort’ from a context menu to do the same job would be way cool. Typically, my classes look something like this:

    public class Whatever
    {
        /* Private field variables go here */
        /* Constructor(s) go here */
        /* Private properties go here */
        /* Public properties go here */
        /* Private methods go here */
        /* Public methods go here */
        /* Event procedures go here */
        /* Display and/or Finalize go here */
    }

Being able to sort each ‘section’ of code using this new Sort feature would save me a lot of time. That’s my $0.02 worth.

At that point all you’ll need is to implement an IComparable<Node> object that will do the custom sorting that you want.
Of course we would want to provide some intuitive UI so you could do a lot of the setup from the IDE. But in the end, if you wanted maximum flexibility then that would be available to you. Would this be something helpful to you? (Yes, I could definitely see using these features.) Do the features sound useful in their current incarnation, or would you like to see them behave differently? (Sounds good at this point. Most likely can’t give useful feedback until I get to use the features in a beta.) Thanks!

Yes yes! After having these refactoring abilities in the java world I feel so crippled now using VS.Net. As already mentioned, Resharper does this sort of stuff already. And as already mentioned, it gets SLOW. However this tends to be only on startup and when reloading a project. For the rest of the time, it’s fairly snappy. And the productivity gains I get from it far outweigh the 20-30 seconds it may hold me up in the morning. But until Cyrus releases some dope little refactoring PowerToy, that’s what I’m stuck with… So yes please – Gimme gimme!

Yes on this feature! And Len’s idea is pretty sweet too… given that I can order the priorities in a dialog box somewhere.

Out of interest, what would the behaviour be when simplifying typenames when naming conflicts occur, such as:

    System.Web.UI.WebControls.Calendar c1;
    System.Globalization.Calendar c2;

Would one be simplified, would both be left unchanged?

Alex: "Simplify Type Names" will only do so when the new name means exactly the same thing as the old name. So in your case it depends on the "usings" you have. I’ll walk through the cases:

a) You have no usings. Then no change will be made.
b) You have a "using System.Globalization". Then the first type name will stay the same, the second will be simplified to "Calendar".
c) You have both "using System.Globalization" and "using System.Web.UI.WebControls". Then no change will be made because it would change the meaning of your code since "Calendar" would be ambiguous.
Does that make sense?

I guess I’m wondering about the overall usefulness of this feature. What is the penalty for having unused usings? If this is such an issue that we need a special tool to remove them, then why do the default project wizards in VS.NET include references to certain assemblies like System.Xml or System.Data that I may never use? My understanding is that assembly references that were in the references list, but never actually used, weren’t included in the manifest, so no harm no foul. Doesn’t this fall in the same category? I guess I would think there are other issues that are much more worth your time than this.

Cyrus, I really like this post and these ideas; I really hope you can get it in. I use Resharper specifically for this. Yeah, shortcuts and refactoring are also a big bonus, but cleaning up the usings is outstanding.

Very cool ideas. +1 for releasing it as a power tool if it can’t make it into the release.

Nick: There is no penalty except for added conceptual confusion. Some people ask themselves "why is this using here, do I need it?". Others prefer code to be as tidy as possible and want to remove as much extraneous information as possible. The default projects include certain things like System.Collections.Generic because we find that people are always using collections and thus it helps them since the types from those usings will now be in the completion lists.

"I guess I would think there are other issues that are much more worth your time than this." Of course. But I did this on my own time just for the fun of it.

" I don’t see how incremental compilation hurts you here. Could you expand on this? What’s wrong with the compiler saying "you’re calling a non-existent method, would you like me to generate it for you?"

Cyrus: I prefer keeping type names short. I prefer to see "SqlConnection" in my code over "System.Data.SqlClient.SqlConnection", for example.
However, it happens to me that I do use the fully qualified name of a type, only to find out later that I use that specific type or more types from the same namespace quite often in the same class. That’s where the Simplify Type Names comes in handy. Another use scenario is when you copy and paste code – example code, for example, might be using full type names a lot. I did spend time simplifying type names before. Does that help you?

Frederik: Thanks! That helps a lot.

— ." We addressed that problem in Whidbey. There is a SimpleTypeName function that you can use with snippets. It’s how we make sure that we’ll dump the best name for global::System.NotImplementedException into your code. Check out any of the refactoring snippets and you’ll see how to do this.

I would love to see this in Whidbey or as part of the TweakC# tool that you mentioned earlier. How would we use it? Today I’m often tidying up using statements where the code has been updated, so the namespace is no longer in use – but the using statements remain. In addition, none of my other co-workers are pedantic enough to care about the order of using statements.

u should give us more info

In VS 2005 Release version, I have found "Sort usings" in "Tools->Customize" in the "Edit" category. But when I add the icon to the standard toolbar, it is always disabled! Have you finally activated these tools? It’s definitely a shame to see that the feature was only half added to the IDE 🙁 A small feature but a very welcome one it would have been!
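Len's member-sorting suggestion and the IComparable&lt;Node&gt; reply map naturally onto a custom sort key. A toy sketch of that idea, in Python rather than C# for brevity; every name here (SECTION_ORDER, sort_members) is invented for illustration, not an actual IDE API:

```python
# Sort class members first by which "section" they belong to (in the
# order Len listed), then alphabetically within a section. This is the
# essence of plugging a custom comparer into the proposed Sort feature.
SECTION_ORDER = [
    "field", "constructor", "private property", "public property",
    "private method", "public method", "event procedure", "finalizer",
]

def sort_members(members):
    # members: list of (section, name) pairs
    return sorted(members, key=lambda m: (SECTION_ORDER.index(m[0]), m[1]))
```

For example, `sort_members([("public method", "Run"), ("field", "x"), ("constructor", "Whatever")])` puts the field first, then the constructor, then the method.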
https://blogs.msdn.microsoft.com/cyrusn/2004/12/04/code-cleanup/
CC-MAIN-2016-50
refinedweb
2,406
73.68
NAME
    dump - dump memory to device during system failure

SYNOPSIS
    #include <sys/types.h>
    #include <sys/ddi.h>
    #include <sys/sunddi.h>

    int dump(dev_t dev, caddr_t addr, daddr_t blkno, int nblk);

INTERFACE LEVEL
    Solaris specific (Solaris DDI). This entry point is required. For drivers that do not implement dump() routines, nodev(9F) should be used.

PARAMETERS
    dev      Device number.
    addr     Address for the beginning of the area to be dumped.
    blkno    Block offset to dump memory.
    nblk     Number of blocks to dump.

DESCRIPTION
    dump() is used to dump a portion of virtual address space directly to a device in the case of system failure. It can also be used for checking the state of the kernel during a checkpoint operation. The memory area to be dumped is specified by addr (base address) and nblk (length). It is dumped to the device specified by dev starting at offset blkno. Upon completion dump() returns the status of the transfer.

    When the system is panicking, the calls of functions scheduled by timeout(9F) and ddi_trigger_softintr(9F) will never occur. Neither can delay(9F) be relied upon, since it is implemented via timeout(). See ddi_in_panic(9F).

CONTEXT
    dump() is called at interrupt priority.

RETURN VALUES
    dump() returns 0 on success, or the appropriate error number.
https://docs.oracle.com/cd/E23823_01/html/816-5179/dump-9e.html
CC-MAIN-2021-43
refinedweb
198
61.02
Weblogs Forum

Please Teach me Web Frameworks for Python!

104 replies on 7 pages. Most recent reply: Sep 29, 2008 1:31 AM by brianna americana

Doug Winter | Posts: 3 | Nickname: winjer | Registered: Oct, 2003
Re: Please Teach me Web Frameworks for Python! Posted: Jan 28, 2006 4:11 AM

I've had a look at a number of these frameworks in detail, and here is my take.

First, if I may say, your comment about XML is frankly weird. Since you are producing XML output, you are almost certainly going to have to write some XML somewhere. Bizarrely, the only system you have looked at in which it is sensible to generate HTML without any XML input at all is Nevow (which provides Stan), and this is the very system you criticised.

1. Django

Too much magic. They are set on magic removal, but I still think their API generation stuff is a nightmare-in-waiting. Using Regexs to specify the UI is evil and unpredictable.

2. Nevow

The more complex your application, the better suited Nevow is to it. It's fast and it's definitely coder-driven. It's more lispish than pythonic I guess, and deferreds will make you much keener on PEP343 :) Nevow probably isn't ready for widespread adoption yet, as I think glyph would agree.

3. Zope 2

Nightmarish, avoid if possible. That said, it's the only one of these web frameworks to gain real traction, but it's really showing its age now.

4. Zope 3

Looks very good so far, but is tarnished by the Zope name. Definitely worth considering.

5. TurboGears

Probably the most pythonic of the options. It looks like a rough assemblage of random components, but the fact they play so nicely together is a testament to their quality. You should look at this one too.
Stefane Fermigier | Posts: 2 | Nickname: fermigier | Registered: Oct, 2003
Re: Please Teach me Web Frameworks for Python! Posted: Jan 28, 2006 7:07 AM

Check my blog entry on the Python web (mega)frameworks and their common components ( ). Hope that helps clarify some points.

Anthony Tarlano | Posts: 9 | Nickname: tarlano | Registered: Jun, 2003
Re: Please Teach me Web Frameworks for Python! Posted: Jan 28, 2006 7:12 AM

Guido,

IMHO no one can tell you which Web Application Technology will be best suited to your requirements and constraints, since only you know your personal taste when you open up your editor and peer through someone else's codified constraints and requirements. I have used many of the candidates mentioned so far, but not all, so I will let other responders answer your question on "where to start?". I would like to just echo something that you said that I do agree with: "I like [Quixote's] approach to templating: instead of inventing a brand new templating language, it makes one tiny modification to Python". I do ask one thing from you: please, after you have finished your "starter project", post details both on where you started as well as where you ended.

Jim Carroll | Posts: 7 | Nickname: mrmaple | Registered: Jan, 2006
Turbogears gets my vote. Posted: Jan 28, 2006 11:51 AM

Hi Guido,

I have done php, java (mostly .jsp pages), and zope development. Today I would use Turbogears, because it improves all aspects of my current favorite Zope2 technique. In Zope, I use a page to show the contents of the database, links to pages that allow editing that pull from python scripts based on some database ID using ZSQL against MySQL; then the editing page has another script as an Action that validates, and changes the dates to MySQL format, then invokes more ZSQL & redirects to the next page. The problem is that any one edit & save operation is spread out against two ZPT templates, two Scripts, and two ZSQL objects...
if something goes wrong the error messages don't point to the source of the problem; it takes quite a bit of thinking to figure out where the problem really is. I end up having to help my coworkers with this more often than I should.

Using just the CherryPy part of Turbogears and the Kid templates is attractive because all the things that (in Zope) are spread out across multiple python scripts and ZSQL objects are in one .py file accessing MySQL directly, and the error messages really do point to the problem. The Kid templates polish some of the rough edges of the ZPT templates... I find I don't need as many tal:define statements to set up things to work nicely.

The only reason that I'm not using Turbogears at the moment is that I had trouble deploying it on my Mac Mini. I couldn't get mod_python to compile just right. I want to use mod_python because apache will re-launch my application on the first request after apache starts, without having to have a different mechanism to start my Turbogears app.

One reason I don't think Turbogears is the ultimate web app framework is that every request causes Python functions to get interpreted. In the wxPython + C++ work I do, my goal is that anything time critical is handled 100% by C++, and the Python is used to wire together the C++ objects at startup time. Someday the high performance possible with python/C++ hybrid programming will create the ultimate web server. The trick is getting the database access and string manipulation to be specified in Python at start-up, but then only touch C++ code during a request.

I'd like to hear what GUI tools you use for Python development at Google. The idea of Google putting resources into wxPython & wxWidgets in an open way (maybe even creating development tools to blow C# out of the water) is something I'd love to encourage.

Thanks for Python,
-Jim

Garito . | Posts: 1 | Nickname: garito | Registered: Jan, 2006
Re: Please Teach me Web Frameworks for Python!
Posted: Jan 28, 2006 12:06 PM

Hi, I'm working on Yanged, a Zope product that acts like a framework to create reusable web software. On Yanged every piece of code is a "Funcionality" and is fully reusable. I used MVC some years ago and I think there are better approaches; it is a little old pattern (in my opinion). On Yanged we have run path's (run path's, functionality and formulators) to do the work. The language editor is Freemind. You could edit your application behavior with freemind.

For example, this is the code for Validar (validate) and is used to validate *ANY* formulator:

    ## Comando Yanged "Validar"
    ##bind container=container
    ##bind context=context
    ##bind namespace=
    ##bind script=script
    ##bind subpath=traverse_subpath
    ##parameters=args
    ##title=
    ##
    from Products.Formulator.Form import FormValidationError
    if 'Formulario' in args:
        Formulario = args['Formulario']
    elif 'Elemento' in args:
        Formulario = context.Dame({'tipo': 'Formulario Yanged', 'nombre': args['Elemento'].tipo(args)})
    else:
        Formulario = context.Dame({'tipo': 'Formulario Yanged', 'nombre': args['Path'][0]})
    errores = dict()
    try:
        args['ValoresFormulario'] = Formulario.validate_all_to_request(context.REQUEST)
    except FormValidationError, e:
        for error in e.errors:
            errores[error.field.get_value('title')] = error.error_text
    if len(errores):
        args['Errores'] = errores
    if not len(context.REQUEST.form):
        args['Errores'] = {}
    return args

As you can see (if you know any Zope) it is a Python Script, but a functionality could be anything I could call. Another example, this time with a Page Template:

    <tal:b tal: <tal:b tal: <tal:b tal: </tal:b> <div tal: <tal:b tal: </div> <tal:b tal:

This piece of code is called "Etiquetable" and is used to put the label of a formulator field, but *ANY* field. All with 1000 lines of unoptimized (refined) code + Functionalities. At the moment I'm working on the functionalities (basic with AJAX, actions -grouped functionalities that run something-...).

What do you think about it?
Thanks!

Bill de hÓra | Posts: 1127 | Nickname: dehora | Registered: May, 2003
Re: Please Teach me Web Frameworks for Python! Posted: Jan 28, 2006 2:19 PM

Django is the best thing out there given the requirements you mentioned. It's stupifyingly productive, the parts are coherent, it has a proper end-to-end story from setup to deployment. I didn't buy your point about magic; in any case, 1.0 will be getting rid of most of what is essentially a non-problem.

JOhn Mudd | Posts: 5 | Nickname: johnmudd | Registered: Dec, 2005
Re: Please Teach me Web Frameworks for Python! Posted: Jan 28, 2006 5:19 PM

Almost a year ago I became fascinated with Twisted + Nevow. There's a demo that shows server side Python driving Ajax apps. Looks simple, cool, and it's driven by Python, on the *server* side. I waited, hoping for it to solidify. Anyway, my wish is that you give this combo a close look. Having you in the user community might push it along.

Kieran Holland | Posts: 1 | Nickname: kmh | Registered: Jan, 2006
Django's templating language is a feature. Posted: Jan 28, 2006 6:06 PM

Good to see your interest in a problem that RoR has demonstrated is now crucial to language adoption. I have been using Django for some time, working together with a designer, and overall I am very satisfied. The core developers are 100% committed, the documentation is outstanding, and the community helpful.

Here are a few reasons why Django's templating is a major feature:

- Simple implementation (parsed with a couple of REs)
- Superfast rendering
- Template inheritance (very DRY)
- Well documented
- Easily accessible to non-programmers
- Extendible by programmers where necessary

In any web project that is not maintained exclusively by programmers, I am convinced that anything "more Pythonic/powerful" in the templates is asking for trouble. Django templating is certainly not "rich and powerful" like PHP - it does just enough.
If the designer wants complicated logic then Django makes it easy for a programmer to write a custom template tag in regular Python, keeping program logic where it is meant to be. There has been a bit of API flux since Django went public, with the aim to get the core features just right before a 1.0 release, but backwards incompatible changes have been completely documented. The Django team have guaranteed a stable API after 1.0. Importantly, the core Django developers have a lot of work invested in existing production sites: they know what works and do not make backwards incompatible changes lightly.

LC Rees | Posts: 2 | Nickname: lcrees | Registered: Jan, 2006
Re: Please Teach me Web Frameworks for Python! Posted: Jan 28, 2006 6:39 PM

I'm more of a casual programmer than many of the luminaries posting to this thread. Being a Python fan, however, I follow community developments and I've followed Python web programming discussions in particular for the past few years. Comments from a casual Python programmer with no allegiance to any particular framework couldn't hurt.

I've wanted to pick up web programming but I want to do it in Python. Python "fits in my head" and I don't think about the language while using it. I've fiddled with several Python web frameworks over the years but none of them click the way Python itself does. I want the Python of Python web frameworks but I haven't found it. I want to stop thinking about the web framework and start thinking about web applications. I've run through the Django and TurboGears tutorials and they have some interesting features but they don't feel like they're "it" yet (though that could change). I've looked at some of the projects recommended on this thread but not in any detail. web.py seems more Pythonic in its resemblance to the better PSL libraries but I haven't experimented with it yet.
Though I'm sure Guido's not proposing it here and I doubt it will happen anytime soon, I would be in favor of "one -- and preferably only one -- obvious way to do it" for each of the features Guido lists. There must be best practices that can be codified for each of those features, though there may not be a best implementation for all of them in Python yet.

- There should be one obvious way to make Python web applications run on a webserver that's easy for hosts to implement and casual programmers to write to.
- There should be one obvious way to mix Python with markup. No templating system should be mandated, but if you wanted to mix *just* Python and (X)HTML there should be one way to put Python and (X)HTML in a web page, FTP it to a web server, and have it live. It's ugly and most people here wouldn't program that way. I personally prefer the in-page templating system be as free of programming logic as possible. But if people are going to do it, they might as well do it in Python instead of an ugly hack like PHP.
- One reason Django offers for having yet another templating language is the security advantage of keeping "arbitrary Python code" out of a template. This exposes a missing feature in Python's current library: the lack of a bulletproof "secure execution mechanism". Inclusion of a rexec/Bastion replacement in the PSL would solve at least two issues I'm familiar with: the Python procedural language PL/Python being marked as "untrusted" under PostgreSQL, and using Python as an in-page templating language.

Some variant of all of these common features should ship with the standard library IMHO. But even if that never happens, I welcome Guido's involvement in this area. I understand the advantages of the "let a thousand flowers bloom, a hundred schools of thought contend" approach to encouraging innovation in the Python community.
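The first wish in that list was, at the time of this thread, being standardized as WSGI (PEP 333): a single calling convention between Python web applications and servers. A minimal sketch of what "writing to" that one obvious interface looks like (this example is illustrative and mine, not quoted from any poster):

```python
# A minimal WSGI application: a callable taking the request environment
# (a plain dict of CGI-style variables) and a start_response callback.
def application(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    body = ("Hello from %s" % path).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    # A WSGI application returns an iterable of byte strings.
    return [body]

# Any WSGI-capable server can host this unchanged; for a quick local test:
# from wsgiref.simple_server import make_server
# make_server("", 8000, application).serve_forever()
```

The frameworks argued over in this thread (Django, CherryPy/TurboGears, and later Zope) all ended up supporting this interface, which is roughly the outcome LC Rees is asking for.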
But we've been chasing our tail as a community around the bush on this issue for years when it seems the best practices in the areas Guido outlines are common knowledge now. Other languages provide at least a bad web programming solution (higher than CGI) out of the box while we get nothing despite having a superior language. Guido's experiences in this area could help lead to a better solution for those who just want something out of the box and are too lazy or ignorant to search for package X and package Y from CheeseShop. There may or may not be much point in serving that demographic, but it exists and ought to be using Python instead of something worse. I trust Guido's instincts because, in general, he's shown one quality most language designers (and, by implication, most web framework authors) manifestly lack: good taste.

Foo Bar | Posts: 2 | Nickname: throwaway | Registered: Jul, 2003
Re: Please Teach me Web Frameworks for Python! Posted: Jan 28, 2006 8:37 PM

I've been interested in web.py, too, but it appears to have a relatively restrictive licence, as web frameworks go. :-( Seems as though you have to release your application as open source if you put it on the web. That makes it a nonstarter for me and many others, but I bet Guido could get an exemption from the author... I wish he'd been more explicit about that in his introduction of web.py, though. Seems a little sneaky. (I learned this from the reddit.com comments, somewhat ironically: . See for the actual licence, especially the summary in the left-hand column.)

Jeff Lewis | Posts: 2 | Nickname: jlburly | Registered: Jan, 2006
Re: Please Teach me Web Frameworks for Python! Posted: Jan 28, 2006 10:49 PM

Sorry if off-topic, but I'm not sure that is what the AGPL license is saying. Reading section 2.d.
as a whole, I'm pretty sure it just means that if you use web.py, whether modified, extended/aggregated or in original form, in your app, then if a user of your app asks for the src of web.py, you need to provide them with a way to download the src of web.py that your app is using (including the mods and/or derivatives/aggregations if such is the case), not the src for the rest of your app. I could be wrong of course. You could always ask Aaron S to clarify.

Jeff

> I've been interested in web.py, too, but it appears to
> have a relatively restrictive licence, as web frameworks
> go. :-( Seems as though you have to release your
> application as open source if you put it on the web.
> ...
> See
> for the actual licence,
> especially the summary in the left-hand column.)

mike bayer | Posts: 22 | Nickname: zzzeek | Registered: Jan, 2005
Re: Django's templating language is a feature. Posted: Jan 28, 2006 11:02 PM

> Here are a few reasons why Django's templating is a
> major feature :
>
> - Simple implementation (parsed with a couple of REs)
> - Superfast rendering
> - Template inheritance (very DRY)
> - Well documented
> - Easily accessible to non-programmers
> - Extendible by programmers where necessary

I've been trying to stay out of the Django love-fest going on here, but really, these above features are not very interesting at all. All the major template languages for python now have these features, and in the case of Myghty they are based on a model that is far more widely used than Django's (i.e. that of HTML::Mason). As Guido so aptly mentioned, Django's templating, with its enormous list of functions (all of which are implemented custom, as opposed to drawing upon the Python language itself as Myghty does), is designed for the PHP non-programmer "giant bag of functions" crowd, which is about as un-pythonic as something can get.

Foo Bar | Posts: 2 | Nickname: throwaway | Registered: Jul, 2003
Re: Please Teach me Web Frameworks for Python!
Posted: Jan 28, 2006 11:27 PM

I'm not certain on that point, either, but this section seems … The way I read that, unless your program can run without a copy of web.py, the terms of the licence would apply to its source code as a whole. Adding a file which imports web.py seems like creating a "modified work" to me. But I don't know whether a lawyer would see it that way.

Eugene Lazutkin | Posts: 15 | Nickname: elazutkin | Registered: Jan, 2006
Re: Please Teach me Web Frameworks for Python! Posted: Jan 29, 2006 12:38 AM

> 1. Django
>
> Too much magic. They are set on magic removal, but I
> still think their API generation stuff is a
> nightmare-in-waiting Using Regexs to specify the UI is
> evil and unpredictable.

Is it possible to expand on "nightmare-in-waiting" in light of "set on magic removal"? Who is "Using Regexs to specify the UI"??? I have no idea what you are talking about. So far your take sounds like pure propaganda without any facts.

Thanks,
Eugene

Paul Boddie | Posts: 26 | Nickname: pboddie | Registered: Jan, 2006
Re: Please Teach me Web Frameworks for Python! Posted: Jan 29, 2006 10:57 AM

[Django]

> Is it possible to expand on "nightmare-in-waiting" in
> light of "set on magic removal"? Who is "Using Regexs to
> specify the UI"??? I have no idea what you are talking
> about.

From what I've seen of Django and regular expressions, noting that I don't have any interest in the templating system (since I'm an adherent of XML-based templating), the principal application of regexps is in interpreting the path or URL when dispatching to a particular part of an application.
I think this is an elegant twist on the way Zope and its predecessors did dispatching: the Zope object publishing mechanism (at least once upon a time) traversed path fragments and called functions and methods providing request parameters as arguments; Django takes the groups from a matching regular expression (applied to the path) and then presents those groups as arguments to the function associated with that particular regexp, meaning that each argument represents some "interesting" part of the URL.

In other words, Django chooses an arguably more interesting use for the arguments/parameters of "published" functions, especially since at least in early Zope-related technologies the conversion of request parameters wasn't powerful enough, and Django arguably gives the application more control over the organisation of the "URL space" whilst leaving the retrieval of request parameters to the application, which more demanding Zope applications had to do manually anyway.

What I don't particularly like about Django and some other frameworks is the mandatory relational database system aspect. I've been involved in writing introductory material for frameworks with similar prerequisites before, and having people install, configure, tune and troubleshoot RDBMSs is a real turn-off for many newcomers (and an irritation even for many seasoned developers).
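The dispatch style Paul Boddie describes, where the groups captured by a URL-pattern regex become the arguments of the view function, can be illustrated with nothing but the `re` module. All the names below (urlpatterns, dispatch, show_article) are invented for this sketch; Django's real URL resolver is considerably more elaborate:

```python
import re

def show_article(year, slug):
    # In Django the view would render a template; here we just echo.
    return "article %s from %s" % (slug, year)

# Each entry pairs a compiled pattern with the function to call on a match.
urlpatterns = [
    (re.compile(r"^/articles/(\d{4})/([\w-]+)/$"), show_article),
]

def dispatch(path):
    for pattern, view in urlpatterns:
        match = pattern.match(path)
        if match:
            # Each "interesting" part of the URL arrives as an argument.
            return view(*match.groups())
    return "404"
```

So `dispatch("/articles/2006/django-dispatch/")` calls `show_article("2006", "django-dispatch")`, which is the twist on Zope-style traversal described above.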
http://www.artima.com/forums/flat.jsp?forum=106&thread=146149&start=60&msRange=15
CC-MAIN-2014-10
refinedweb
3,585
59.33
2D Game Grid Model from C++ to QML

DenverCoder21:

Hello guys, I'm fairly new to Qt/QML and currently try to create a little Snake game. For this, I implemented a C++ model of a game board which has a two-dimensional array containing the states of the respective cell (empty, snake, fruit). I'd like to practice exposing custom models from C++ to QML, please bear that in mind in your answers (as in please no QML-only answers), thank you. ;-)

The best approach to me seemed to have my game board class subclass QAbstractTableModel, which I did as best as I could. My problem now is the next step when it comes to using it in QML. So far I only found examples using role names - but the cells of a row don't have role names, just a state. Also I've seen examples that use a QML TableView with handwritten TableColumns added - but I want QML to display however many columns my model has.

Here is my code so far:

board.h

    #pragma once

    #include <vector>
    #include <QAbstractTableModel>

    class board : public QAbstractTableModel
    {
        Q_OBJECT

    public:
        enum class state { empty, snake, fruit };

        board(int width, int height);

        state get_state(int x, int y) const;
        void set_state(int x, int y, state state);

        int rowCount(const QModelIndex& parent = QModelIndex()) const;
        int columnCount(const QModelIndex& parent = QModelIndex()) const;
        QVariant data(const QModelIndex& index, int role = Qt::DisplayRole) const;

    private:
        std::vector<std::vector<state>> m_board;
    };

    Q_DECLARE_METATYPE(board::state)

board.cpp

    #include "board.h"

    board::board(int width, int height)
        : m_board(width, std::vector<state>(height, state::empty))
    {
    }

    board::state board::get_state(int x, int y) const
    {
        return m_board.at(x).at(y);
    }

    void board::set_state(int x, int y, board::state state)
    {
        m_board.at(x).at(y) = state;
    }

    int board::rowCount(const QModelIndex&) const
    {
        return m_board.size();
    }

    int board::columnCount(const QModelIndex&) const
    {
        return m_board.at(0).size();
    }

    QVariant board::data(const QModelIndex& index, int role)
    const
    {
        if (!index.isValid())
            return QVariant();

        if (role != Qt::DisplayRole)
            return QVariant();

        return qVariantFromValue(get_state(index.row(), index.column()));
    }

I haven't written anything in QML, yet. Am I on the right path? What would I need to do in QML? Thanks a lot!

p3c0 (Moderators):

"So far I only found examples using role names - but the cells of a row don't have role names, just a state." If they don't then there's no way to get data from the model to the view. The view will look completely blank. Well, honestly I can't tell if this is the best idea to use a model. Maybe you have already thought of updating the model, for e.g. how to update each cell when the snake moves. Btw. did you look at this Snake game? Sorry, it's QML-only ;). But you can try to fit the logic into the model.

- DenverCoder21
https://forum.qt.io/topic/71629/2d-game-grid-model-from-c-to-qml
CC-MAIN-2017-47
refinedweb
480
65.52
Deviating from hint-like beginnings. Solution in Clear category for Pawn Brotherhood by tigercat2000

```python
def safe_pawns(pawns):
    # Initialize an empty set that will contain coordinate pairs of our pawns.
    pawn_indexes = set()
    # Iterate through the pawn coordinate strings...
    for p in pawns:
        # ...and decode them into x,y coordinates, instead of letter,number.
        # The rows are already a usable integer in a string, but they need to be made 0-indexed.
        row = int(p[1]) - 1
        # The columns are referred to by a letter coordinate. Convert the letter into
        # ASCII and subtract 97 to get the number it's referring to.
        col = ord(p[0]) - 97
        # Add the determined row and column to our pawn index set.
        pawn_indexes.add((row, col))

    # Initialize the count of safe pawns at 0. Nobody is safe until we check them.
    count = 0
    # Iterate through our pawns, getting both a row and column in the loop.
    for row, col in pawn_indexes:
        # Use a tuple for backup positions.
        safe_places = ((row - 1, col - 1), (row - 1, col + 1))
        # Initialize a variable called is_safe as False for this loop iteration.
        is_safe = False
        # Loop through coordinate tuples inside of our backup position tuple.
        for coords in safe_places:
            # Check if the coordinate is one of our established pawn locations.
            if coords in pawn_indexes:
                # If it is, the pawn is 'safe', so mark the is_safe variable as True.
                is_safe = True
                # And exit the for..in loop early with a break; we don't need to make
                # sure every position is covered, just one.
                break
        # Check if they were determined as safe from our backup position tuple.
        if is_safe:
            # If they were, increase the count of safe pawns.
            count += 1
    return count
```

Nov. 5, 2016
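For comparison, the same decode-then-check idea can be compressed into a set comprehension plus a generator expression. This is a hypothetical alternative for illustration, not part of the original solution, and `safe_pawns_compact` is a made-up name:

```python
def safe_pawns_compact(pawns):
    # Decode "b4"-style squares into 0-indexed (row, col) pairs, once.
    board = {(int(p[1]) - 1, ord(p[0]) - 97) for p in pawns}
    # A pawn is safe if another pawn sits one rank behind on either diagonal.
    return sum(
        1
        for row, col in board
        if (row - 1, col - 1) in board or (row - 1, col + 1) in board
    )

print(safe_pawns_compact({"b4", "d4", "f4", "c3", "e3", "g5", "d2"}))  # -> 6
```

The set membership tests keep it O(n) in the number of pawns, the same complexity as the commented version above.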
https://py.checkio.org/mission/pawn-brotherhood/publications/tigercat2000/python-3/deviating-from-hint-like-beginnings/share/cac8272249ddcd2d37336bb1c1993a99/
CC-MAIN-2021-21
refinedweb
294
67.55
I want to execute some code at pre-install time when installing a gem from rubygems.org with a command like

```
gem install some-gem
```

```ruby
# File lib/rubygems.rb, line 724
def self.pre_install(&hook)
  @pre_install_hooks << hook
end
```

RubyGems defaults are stored in rubygems/defaults.rb. If you're packaging RubyGems or implementing Ruby you can change RubyGems' defaults.

For RubyGems packagers, provide lib/rubygems/defaults/operating_system.rb and override any defaults from lib/rubygems/defaults.rb.

For Ruby implementers, provide lib/rubygems/defaults/#{RUBY_ENGINE}.rb and override any defaults from lib/rubygems/defaults.rb.

If you need RubyGems to perform extra work on install or uninstall, your defaults override file can set pre and post install and uninstall hooks. See ::pre_install, ::pre_uninstall, ::post_install, ::post_uninstall.

```ruby
Gem.pre_install { puts 'pre install hook called!' }
```

```ruby
s.require_paths = ["lib", "test", "rubygems"]
```

The answer is presently (2015-11-11) NO, you cannot execute arbitrary code at install time for a specific gem. The hooks mentioned in your question are for use by the RubyGems installer itself and are not gem-specific. See: "How can I make a Ruby gem package copy files to arbitrary locations?" for additional details.

These files:

- lib/rubygems/defaults/defaults.rb
- lib/rubygems/defaults/operating_system.rb
- rubygems/defaults.rb

are not called from your gem directory. They are found in the RubyGems system location.

If you wish to execute the same code for every gem before any are installed, then you can use the pre_install hooks by placing the code in /usr/lib64/ruby/2.2.0/rubygems/defaults.rb or wherever your version of Ruby is installed on your system. The operating_system.rb file will get loaded from the same location as well.
https://codedump.io/share/hUraLbzlx4RA/1/how-to-add-a-prepostinstallhook-to-ruby-gems
CC-MAIN-2017-22
refinedweb
285
60.41
Len and buflen with resample

- gonche1124 (last edited by gonche1124)

I'm building my first strategy that uses the resample functions of cerebro. I'm writing out the length and the buflen of each data feed in the next function. When I don't do any resampling, the result is as expected: len increments on every next step while buflen remains constant. However, when I resample my data feed from 1 minute to 60 minutes, then the output for len and buflen is the same (both incrementing with each next call). I'm trying to figure out what I'm doing wrong... here is my code:

```python
def next(self):
    finalS = " "
    for i, d in enumerate(self.datas):
        dt, dn, dv, dl, db = self.datetime.datetime(), d._name, d.open[0], len(d), d.buflen()
        finalS = finalS + "|" + " {} {} {} {} {}".format(dt, dn, dv, dl, db)
    print(finalS)
```

And a snapshot of the output with one data feed. And with resampling. Thanks for the help!

- backtrader administrators

Resampled data cannot be preloaded, hence the difference.

- gonche1124

Got it, thanks!
https://community.backtrader.com/topic/1890/len-and-buflen-with-resample
CC-MAIN-2022-33
refinedweb
200
64.51
28 December 2007 16:27 [Source: ICIS news]

By Charlie Shaw

LONDON (ICIS news)--Opinions were mixed regarding the short-term prospects for ethyl acetate. Some felt it would see modest growth, while others said fundamentals in the second half of 2008 would be less favourable than in 2007.

One producer said the slowing auto industry in …

Another view was that prices will be driven higher by elevated oil and gas numbers. One distributor said fully integrated producers would have a clear advantage over those having to buy their raw materials.

On the other hand, more acetic acid capacity could come on stream later in the year, which could start to ease ethyl acetate prices. Asian imports could start to arrive in larger quantities should the euro exchange rate remain favourable to Asian exporters. However, healthy demand in that part of the world could see sellers there solely interested in the domestic market.

Butyl acetate is likely to remain tight in 2008, with the availability of feedstock butanol constrained by two major maintenance shutdowns. This could result in an overall reduction of 10% of average annual output, according to some.

One large buyer thought otherwise, predicting that prices would be lower by the end of the year and forecasting a reduction in the cost of methanol - which is used for acetic acid production. The buyer thought that new capacity for production of butanol in the Asia-Pacific region would help to ease butyl acetate prices downwards.

Another factor cited as likely to sustain high prices was a protracted absence of imports from …

On the other hand, one major producer said a weaker downstream economic environment could give some relief to the balance of supply and demand.

Increasing raw material costs and strong market competition continued to be the main focus as European producers of iso-propanol (IPA), methyl ethyl ketone (MEK) and methyl iso-butyl ketone (MIBK) looked ahead to 2008.

Although tight supply pushed spot prices for IPA and MEK up during times of severe production outages in 2007, prices bounced back below manufacturer targets as soon as market balance was restored, sellers and buyers noted.

Sustained strong naphtha pricing was an ongoing challenge to downstream MEK producers, and the €57/tonne ($83/tonne) first-quarter propylene increase will apply further upward pricing pressure on IPA and MIBK, producers said. Buyers, however, said the market was well supplied at present and manufacturers would struggle to raise the level.

For MIBK, a structural oversupply situation in the European market meant prices were some €200/tonne below what producers described as reasonable for profit margins. Domestic manufacturers said imported material was the main driver for the cost pressure and no easing of competition was to be expected in 2008, according to market participants.

Propylene oxide-based glycol ether producers will be looking to implement hikes in the region of €100/tonne for methoxy propanol (PM) and €120/tonne for methoxy propanol acetate (PMA) from next week, based chiefly on first-quarter propylene and methanol increases of €57/tonne and €110/tonne respectively.

Sellers said they were never able to pass through the added raw material costs they incurred moving into the fourth quarter this year, which was why they would be looking to make up some of this extra ground early in 2008. One producer said it would aim to secure a sizable increase next week, followed by a series of step increases during the first quarter.

The market for ethylene glycol ethers saw sustained tightness in 2007 on account of a series of plant outages in …

This tightness was always forecast to remain in the first quarter of 2008, with European sellers still saying they would be unable to meet demand for some time.

Prices are set to rise further with immediate effect on account of greater-than-expected first-quarter ethylene and propylene hikes, which will add a substantial cost to producers' raw material expenditures.

A maintenance outage announced by the market's largest producer in February has given distributors and buyers added reason to suppose that producers will try to push prices through the €1,500/tonne FD NWE mark later in the quarter.

($1 = €0.69)

Peter Gerrard and Sofia L
http://www.icis.com/Articles/2007/12/28/9088989/outlook-08-feedstock-driving-europe-solvents.html
CC-MAIN-2014-42
refinedweb
709
53.24
Introduction to Matplotlib & pylab
Aman W
Department of Applied Physics, University of Gondar

Contents
• Introduction to graphs & visualization of data
• Matplotlib
• Pylab
  – plot()
  – show()
• Graphs/plots
• Line plots
• Scatter plots, histograms, ..., etc.
• Pylab & NumPy & SciPy
• 3D graphics

Introduction to graphs & visualization of data
• So far we have created programs that print out words and numbers.
• The purpose of scientific computation is not getting numbers, but analyzing what those numbers show.
• To understand the meaning of the (many) numbers we compute, we often need post-processing, statistical analysis and graphical visualization of our data.
• We will also want our programs to produce graphics, meaning pictures of some sort.
• Visualization is a method of computing. It transforms the symbolic into the geometric, enabling researchers to observe their simulations and computations.

Matplotlib & Pylab
• A picture says more than a thousand words.
• The following sections describe Matplotlib/Pylab, which allows us to generate high-quality graphs of the type y = f(x) (and a bit more).
• The Python library Matplotlib is a 2D/3D plotting library which produces quality figures in a variety of hardcopy formats and interactive environments.
• Pylab is slightly more convenient to use for easy plots, and Matplotlib gives far more detailed control over how plots are created.
• You can generate different plots - line graphs, histograms, power spectra, bar charts, errorcharts, scatterplots, etc. - with just a few lines of code.
• Matplotlib home page
• Matplotlib tutorial
• Visual Python, which is a very handy tool to quickly generate animations of time-dependent processes taking place in 3D space.

Graphs/Plots
• A number of Python packages include features for making graphs.
• In this section we will use the powerful, easy-to-use, and popular package pylab.
• The package contains features for generating graphs of many different types.
• We will concentrate on three types that are especially useful in physics: ordinary line graphs, scatter plots, and density (or heat) plots.
• We start by looking at line graphs.

Import Pylab
Before using Pylab, we should import the package. Three formats of the command:

```python
import pylab
from pylab import *
from pylab import name
```

• The difference? What gets imported from the file, and what name refers to it after importing.

import pylab
• Everything in pylab gets imported.
• To refer to something in the package, append the text pylab to the front of its name: pylab.name

from pylab import *
• Everything in pylab gets imported.
• To refer to anything in the module, just use its name. Everything in the module is now in the current namespace.
• Take care! Using this import command can easily overwrite the definition of an existing function or variable!
• from pylab import * is not generally recommended, but it is okay for interactive testing.
• While using from pylab import * is acceptable at the command prompt to interactively create plots, the pylab top level provides over 800 different objects which are all imported into the global namespace when running from pylab import *.
• This is not good practice, and could conflict with other objects that exist already or are created later.
• As a rule of thumb: never use from somewhere import * in programs we save.

from pylab import name
• Only the item name in pylab gets imported.
• After importing name, you can just use it without a module prefix. It's brought into the current namespace.

• The pylab module has a number of functions:
  – plot()
  – show()
  – xlabel(), ylabel()
  – legend()
  – title(), ..., etc.
• To create an ordinary graph in Python, we use the functions plot and show from the pylab package.

Line plots
• Here is a simple example:

```python
import pylab  # We import pylab
y = [1.0, 2.4, 1.7, 0.3, 0.6, 1.8]
pylab.plot(y)
```

• In the simplest case, the function plot takes one argument, which is a list or array of the values we want to plot, y.
• The function creates a graph of the given values in the memory of the computer, but it doesn't actually display it on the screen of the computer.

pylab.show()
• To display the graph we use a second function from pylab, the show function, which takes the graph in memory and draws it on the screen.

Pylab: plot, show
• When all of it is written together, it looks simply as follows:

```python
import pylab  # import pylab
y = [1.0, 2.4, 1.7, 0.3, 0.6, 1.8]
pylab.plot(y)
pylab.show()
```

• When we run the program above, it produces a new window on the screen with a graph in it.
• The computer has plotted the values in the list y at unit intervals along the x-axis (starting from zero in the standard Python style) and joined them up with straight lines.

Pylab: plot, show
• Normally we want to specify both the x- and y-coordinates for the points in the graph.
• We can do this using a plot statement with two list arguments, thus:

```python
>>> import pylab
>>> x = [0.5, 1.0, 2.0, 4.0, 7.0, 10.0]
>>> y = [1.0, 2.4, 1.7, 0.3, 0.6, 1.8]
>>> pylab.plot(x, y)
>>> pylab.show()
```

• Once you have displayed a graph on the screen you can do other things with it.
• You will notice a number of buttons along the bottom of the window in which the graph appears (not shown in the figures here, but you will see them if you run the programs on your own computer).
• Among other things, these buttons allow you to zoom in on portions of the graph, move your view around the graph, or save the graph as an image file on your computer.
• You can also save the graph in "PostScript" format, which you can then print out on a printer or insert as a figure in a word processor document.
• The other important things about this graph:
  – What do the x and y axes indicate?
  – What is the title/description of this plot?
• y = sin(x) --- this plot

```python
import pylab
x = [0.5, 1.0, 2.0, 4.0, 7.0, 10.0]
y = [1.0, 2.4, 1.7, 0.3, 0.6, 1.8]
pylab.plot(x, y)
pylab.xlabel('x-axis')
pylab.ylabel('y-axis')
pylab.title('x-y plot')
pylab.show()
```

• xlabel('...') and ylabel('...') allow labelling the axes.
• title('...') sets the title, or use plot(x, y, label='...'); the label keyword defines the name of this line.
• The line label will be shown in the legend if the legend() command is used.
• legend() will display a legend with the labels as defined in the plot command.
• grid() will display a grid on the backdrop.
• The full list of options can be found by typing help('pylab.plot') at the Python prompt.

```python
import pylab
x = [0.5, 1.0, 2.0, 4.0, 7.0, 10.0]
y = [1.0, 2.4, 1.7, 0.3, 0.6, 1.8]
pylab.plot(x, y, '--k')
pylab.xlabel('x-axis')
pylab.ylabel('y-axis')
pylab.title('x-y plot')
pylab.show()
```

• Note further that you can choose different line styles, line thicknesses, symbols and colours for the data to be plotted. (The syntax is very similar to MATLAB.)
• For example:
  – plot(x, y, 'og') will plot circles (o) in green (g)
  – plot(x, y, '-r') will plot a line (-) in red (r)
  – plot(x, y, '-b', linewidth=2) will plot a blue line (b) with two-pixel thickness (linewidth=2), which is twice as wide as the default.

Pylab: Plot Style
• Line style
  – "-": solid line
  – "--": dashed line
  – ".": mark points with a point
  – "o": mark points with a circle
  – "s": mark points with a square
• Color
  – r: red
  – g: green
  – b: blue
  – c: cyan
  – m: magenta
  – y: yellow
  – k: black
  – w: white
• For instance, 'r-^' gives a red solid line with triangle markers.

Pylab: xlim, ylim, xlabel, ylabel
• xlim - get or set the *x* limits of the current axes.
• ylim - get or set the *y* limits of the current axes.
• xlabel - set the *x* axis label of the current axes.
• ylabel - set the *y* axis label of the current axes.

• Once you have created the figure (using the plot command) and added any labels, legends etc., you have two options to save the plot:
  – You can display the figure (using show) and interactively save it by clicking on the disk icon.
  – You can (without displaying the figure) save it directly from your Python code.
• The command to use is savefig.
• The format is determined by the extension of the file name you provide.
• Here is an example which saves the plot:

```python
import pylab
x = [0.5, 1.0, 2.0, 4.0, 7.0, 10.0]
y = [1.0, 2.4, 1.7, 0.3, 0.6, 1.8]
pylab.plot(x, y, label='x-y plot')
pylab.xlabel('x-axis')
pylab.ylabel('y-axis')
pylab.savefig('myplot.png')  # saves png file
pylab.savefig('myplot.eps')  # saves eps file
pylab.savefig('myplot.pdf')  # saves pdf file
```
https://www.scribd.com/presentation/350945323/chapter-0061-pptx
CC-MAIN-2018-30
refinedweb
1,504
61.36
James Morris wrote:
> On Fri, 9 Jun 2006, dlezcano@fr.ibm.com wrote:
>
>> When an outgoing packet has the loopback destination address, the
>> skbuff is filled with the network namespace. So the loopback packets
>> never go outside the namespace. This approach facilitates the migration
>> of loopback because identification is done by network namespace and
>> not by address. The loopback has been benchmarked by tbench and the
>> overhead is roughly 1.5 %
>
> I think you'll need to make it so this code has zero impact when not
> configured.

Indeed, and over stuff other than loopback too. I'll not so humbly suggest :) netperf TCP_STREAM and TCP_RR figures _with_ CPU utilization/service demand measures.

rick jones
http://lkml.org/lkml/2006/6/9/439
CC-MAIN-2015-40
refinedweb
143
58.89
Understanding Memory Aliasing for Speed and Correctness

The aggressive reuse of memory is one of the ways through which Theano makes code fast, and it is important for the correctness and speed of your program that you understand how Theano might alias buffers. This section describes the principles based on which Theano handles memory, and explains when you might want to alter the default behaviour of some functions and methods for faster performance.

The Memory Model: Two Spaces

There are some simple principles that guide Theano's handling of memory. The main idea is that there is a pool of memory managed by Theano, and Theano tracks changes to values in that pool.

- Theano manages its own memory space, which typically does not overlap with the memory of normal Python variables that non-Theano code creates.
- Theano functions only modify buffers that are in Theano's memory space.
- Theano's memory space includes the buffers allocated to store shared variables and the temporaries used to evaluate functions.
- Physically, Theano's memory space may be spread across the host, a GPU device(s), and in the future may even include objects on a remote machine.
- The memory allocated for a shared variable buffer is unique: it is never aliased to another shared variable.
- Theano's managed memory is constant while Theano functions are not running and Theano's library code is not running.
- The default behaviour of a function is to return user-space values for outputs, and to expect user-space values for inputs.

The distinction between Theano-managed memory and user-managed memory can be broken down by some Theano functions (e.g. shared, get_value and the constructors for In and Out) by using a borrow=True flag. This can make those methods faster (by avoiding copy operations) at the expense of risking subtle bugs in the overall program (by aliasing memory).
The rest of this section is aimed at helping you to understand when it is safe to use the borrow=True argument and reap the benefits of faster code.

Borrowing when Constructing Function Objects

A borrow argument can also be provided to the In and Out objects that control how theano.function handles its arguments and return values.

```python
import theano, theano.tensor

x = theano.tensor.matrix()
y = 2 * x
f = theano.function([theano.In(x, borrow=True)], theano.Out(y, borrow=True))
```

Borrowing an input means that Theano will treat the argument you provide as if it were part of Theano's pool of temporaries. Consequently, your input may be reused as a buffer (and overwritten!) during the computation of other variables in the course of evaluating that function (e.g. f).

Borrowing an output means that Theano will not insist on allocating a fresh output buffer every time you call the function. It will possibly reuse the same one as on a previous call, and overwrite the old content. Consequently, it may overwrite old return values through side-effect. Those return values may also be overwritten in the course of evaluating another compiled function (for example, the output may be aliased to a shared variable). So be careful to use a borrowed return value right away before calling any more Theano functions. The default is of course to not borrow internal results.

It is also possible to pass a return_internal_type=True flag to the Out variable, which has the same interpretation as the return_internal_type flag to the shared variable's get_value function. Unlike get_value(), the combination of return_internal_type=True and borrow=True arguments to Out() is not guaranteed to avoid copying an output value. They are just hints that give more flexibility to the compilation and optimization of the graph.
Take home message: When an input x to a function is not needed after the function returns and you would like to make it available to Theano as additional workspace, then consider marking it with In(x, borrow=True). It may make the function faster and reduce its memory requirement. When a return value y is large (in terms of memory footprint), and you only need to read from it once, right away when it’s returned, then consider marking it with an Out(y, borrow=True).
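The aliasing hazard behind borrow=True can be illustrated with plain Python lists. This is only an analogy, not Theano code: a made-up in-place function, scale_in_place, stands in for a compiled function whose input buffer is borrowed.

```python
def scale_in_place(buf, factor):
    """Overwrite buf's contents, as a borrowed input buffer might be."""
    for i in range(len(buf)):
        buf[i] *= factor
    return buf

x = [1.0, 2.0, 3.0]
y = scale_in_place(x, 2)   # y aliases x: no copy was made
print(x is y)              # True
print(x)                   # [2.0, 4.0, 6.0] -- the "input" was overwritten too

# A non-borrowing call pays for a copy but keeps x intact:
z = scale_in_place(list(x), 10)
print(x is z)              # False
print(x)                   # still [2.0, 4.0, 6.0]
```

The trade-off mirrors the one described above: skipping the copy is faster, but any later reader of x silently sees the overwritten values.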
http://deeplearning.net/software/theano/tutorial/aliasing.html
CC-MAIN-2019-09
refinedweb
700
61.36
I have seen this question in a company interview test, but I am not clear about the question first. Could you people clarify my doubt?

Question: Write a program to sort an integer array which contains only 0's, 1's and 2's. Counting of elements is not allowed; you are expected to do it in O(n) time complexity.

Ex Array: {2, 0, 1, 2, 1, 2, 1, 0, 2, 0}.

The whole sorting algorithm requires only three lines of code:

```java
public static void main(String[] args) {
    int[] array = { 2, 0, 1, 2, 1, 2, 1, 0, 2, 0 };

    // Line 1: Define some space to hold the totals
    int[] counts = new int[3]; // To store the (3) different totals

    // Line 2: Get the total of each type
    for (int i : array)
        counts[i]++;

    // Line 3: Write the appropriate number of each type consecutively back into the array:
    for (int i = 0, start = 0; i < counts.length; start += counts[i++])
        Arrays.fill(array, start, start + counts[i], i);

    System.out.println(Arrays.toString(array));
}
```

Output:

```
[0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
```

At no time did we refer to array.length, nor care how long the array was. It iterated through the array touching each element just once, making this algorithm O(n) as required.

Output to a linked list. Run through the whole array. HTH Raku

Instead of blasting you with yet another unintelligible pseudo-code, I'll give you the name of the problem: this problem is known as the Dutch national flag problem (first proposed by Edsger Dijkstra) and can be solved by a three-way merge (see the PHP code in the first answer which solves this, albeit very inefficiently). A more efficient in-place solution of the three-way merge is described in Bentley's and McIlroy's seminal paper Engineering a Sort Function. It uses four indices to delimit the ranges of the intermediate array, which has the unsorted values in the middle, the 1s at both edges, and the 0s and 2s in-between. After having established this invariant, the = parts (i.e. the 1s) are swapped back into the middle.
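The three-way partition just described can be sketched in a few lines of Python. This is the classic one-pass low/mid/high formulation of the Dutch national flag idea, not Bentley and McIlroy's exact four-index layout:

```python
def dutch_flag_sort(a):
    """One-pass, in-place sort of a list containing only 0s, 1s and 2s.

    low  : next slot for a 0
    high : next slot for a 2 (scanning from the right)
    mid  : current element under inspection
    """
    low, mid, high = 0, 0, len(a) - 1
    while mid <= high:
        if a[mid] == 0:
            a[low], a[mid] = a[mid], a[low]
            low += 1
            mid += 1
        elif a[mid] == 2:
            a[mid], a[high] = a[high], a[mid]
            high -= 1   # do not advance mid: the swapped-in value is unexamined
        else:           # a[mid] == 1
            mid += 1
    return a

print(dutch_flag_sort([2, 0, 1, 2, 1, 2, 1, 0, 2, 0]))
# [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
```

Every element is touched at most once by mid, so the whole thing is O(n) with O(1) extra memory, and no counting is involved.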
It depends what you mean by "no counting allowed". One simple way to do this would be to have a new empty array, then look for 0's, appending them to the new array. Repeat for 1's then 2's and it's sorted in O(n) time. But this is more-or-less a radix sort. It's like we're counting the 0's then 1's then 2's, so I'm not sure if this fits your criteria.

Edit: we could do this with only O(1) extra memory by keeping a pointer for our insertion point (starting at the start of the array), and scanning through the array for 0's, swapping each 0 with the element where the pointer is, and incrementing the pointer. Then repeat for 1's, 2's and it's still O(n).

Java implementation:

```java
import java.util.Arrays;

public class Sort {
    public static void main(String[] args) {
        int[] array = {2, 0, 1, 2, 1, 2, 1, 0, 2, 0};
        sort(array);
        System.out.println(Arrays.toString(array));
    }

    public static void sort(int[] array) {
        int pointer = 0;
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < array.length; j++) {
                if (array[j] == i) {
                    int temp = array[pointer];
                    array[pointer] = array[j];
                    array[j] = temp;
                    pointer++;
                }
            }
        }
    }
}
```

Gives output:

```
[0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
```

Sorry, it's PHP, but it seems O(n) and could be easily written in Java :)

```php
$arr = array(2, 0, 1, 2, 1, 2, 1, 0, 2, 0);
$tmp = array(array(), array(), array());
foreach ($arr as $i) {
    $tmp[$i][] = $i;
}
print_r(array_merge($tmp[0], $tmp[1], $tmp[2]));
```

In O(n), pseudo-code:

```
def sort (src):
    # Create an empty array, and set pointer to its start.
    def dest as array[sizeof src]
    pto = 0
    # For every possible value.
    for val in 0, 1, 2:
        # Check every position in the source.
        for pfrom ranges from 0 to sizeof(src):
            # And transfer if matching (includes update of dest pointer).
            if src[pfrom] is val:
                dest[pto] = val
                pto = pto + 1
    # Return the new array (or transfer it back to the source if desired).
    return dest
```

This is basically iterating over the source list three times, adding the elements if they match the value desired on this pass. But it's still O(n).
The equivalent Java code would be:

```java
class Test {
    public static int[] mySort(int[] src) {
        int[] dest = new int[src.length];
        int pto = 0;
        for (int val = 0; val < 3; val++)
            for (int pfrom = 0; pfrom < src.length; pfrom++)
                if (src[pfrom] == val)
                    dest[pto++] = val;
        return dest;
    }

    public static void main(String args[]) {
        int[] arr1 = {2, 0, 1, 2, 1, 2, 1, 0, 2, 0};
        int[] arr2 = mySort(arr1);
        for (int i = 0; i < arr2.length; i++)
            System.out.println("Array[" + i + "] = " + arr2[i]);
    }
}
```

which outputs:

```
Array[0] = 0
Array[1] = 0
Array[2] = 0
Array[3] = 1
Array[4] = 1
Array[5] = 1
Array[6] = 2
Array[7] = 2
Array[8] = 2
Array[9] = 2
```

But seriously, if a potential employer gave me this question, I'd state straight out that I could answer the question if they wish, but that the correct answer is to just use Arrays.sort. Then if, and only if, there is a performance problem with that method and the specific data sets, you could investigate a faster way. And that faster way would almost certainly involve counting, despite what the requirements were. You don't hamstring your developers with arbitrary limitations. Requirements should specify what is required, not how. If you answered this question to me in this way, I'd hire you on the spot.

This answer doesn't count the elements:

```java
public static void main(String[] args) throws Exception {
    Integer[] array = { 2, 0, 1, 2, 1, 2, 1, 0, 2, 0 };
    List<Integer>[] elements = new ArrayList[3]; // To store the different element types

    // Initialize the array with new lists
    for (int i = 0; i < elements.length; i++)
        elements[i] = new ArrayList<Integer>();

    // Populate the lists
    for (int i : array)
        elements[i].add(i);

    for (int i = 0, start = 0; i < elements.length; start += elements[i++].size())
        System.arraycopy(elements[i].toArray(), 0, array, start, elements[i].size());

    System.out.println(Arrays.toString(array));
}
```

Output:

```
[0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
```

Push and Pull have constant complexity!
Push each element into a priority queue. Pull each element to indices 0...n (:

You can do it in one pass, placing each encountered element at its final position:

```cpp
void sort012(int* array, int len)
{
    int* p0 = array;            // next slot for a 0
    int* p2 = array + len - 1;  // next slot for a 2 (last element)
    for (int* p = array; p <= p2; ) {
        if (*p == 0) {
            std::swap(*p, *p0);
            p0++;
            p++;
        } else if (*p == 2) {
            std::swap(*p, *p2);
            p2--;
        } else {
            p++;
        }
    }
}
```
http://www.dlxedu.com/askdetail/3/d88be49fc0d9221dbb63391368f1434f.html
CC-MAIN-2018-39
refinedweb
1,162
68.91
According to the Javadocs: %r - Used to output the number of milliseconds elapsed since the start of the application until the creation of the logging event. In practice the initial time is actually initialised from when the PatternLayout class is loaded and the static initialisers are fired. Consider the following rather contrived and paraphrased example: public class MyClass { static final Logger logger = Logger.getLogger(MyClass.class); public static void main(String[] args) throws Exception { BasicConfigurator.configure(); Thread.sleep(10 * 1000); logger.info("Hello, World!"); } } The time reported at Hello, World is actually very close to zero. AFAIK it seems to be practically impossible to know the app/JVM launch-time, nonetheless it could be better. Solutions I guess would be for the configuration call or, probably better, one of the Logger.getLogger() calls initialised the time or forces load of the PatternLayout class-load? A workaround is to log something early and of course many apps do this so it isn't a noted problem. In practice this is probably not unique to any platform or possibly even version of Log4J. Do any of the other implementations suffer from the same problem. log4cxx captures the time of the APR initializer which should occur during static initialization so it should be very close to application start time. At this point, the safest thing to do with log4j 1.2 would be to change the documentation to match the established behavior which I have done for both log4j 1.2 and 1.3. However, if someone wants to submit a patch for log4j 1.3 that implements the previously documented behavior, I'd be open to committing it. Commited in rev 427875 and 427876 (1.2 and trunk, respectively).
https://bz.apache.org/bugzilla/show_bug.cgi?id=40145
CC-MAIN-2017-47
refinedweb
287
57.87
#include <googledrivestorage.h>

List of all members.

Definition at line 32 of file googledrivestorage.h.

[private] This private constructor is called from loadFromConfig(). Definition at line 46 of file googledrivestorage.cpp.

This constructor uses OAuth code flow to get tokens. Definition at line 49 of file googledrivestorage.cpp.

[virtual] Definition at line 53 of file googledrivestorage.cpp.

[inline] Definition at line 116 of file googledrivestorage.h.

[protected, virtual] Return whether to expect new refresh_token on refresh. Implements Cloud::BaseStorage. Definition at line 61 of file googledrivestorage.cpp.

Definition at line 55 of file googledrivestorage.cpp.

Returns bool based on JSON response from cloud. Definition at line 131 of file googledrivestorage.cpp.

Calls the callback when finished. Implements Cloud::Id::IdStorage. Definition at line 184 of file googledrivestorage.cpp.

Definition at line 241 of file googledrivestorage.cpp.

Returns the StorageInfo struct. Implements Cloud::Storage. Definition at line 208 of file googledrivestorage.cpp.

Constructs StorageInfo based on JSON response from cloud. Definition at line 73 of file googledrivestorage.cpp.

Public Cloud API comes down there. Returns Array<StorageFile> - the list of files. Definition at line 152 of file googledrivestorage.cpp.

[static] Load token and user id from configs and return GoogleDriveStorage for those. Definition at line 219 of file googledrivestorage.cpp.

Return unique storage name. Definition at line 69 of file googledrivestorage.cpp.

Return whether storage needs refresh_token to work. Definition at line 59 of file googledrivestorage.cpp.

Definition at line 177 of file googledrivestorage.cpp.

Remove all GoogleDriveStorage-related data from config. Definition at line 235 of file googledrivestorage.cpp.

Storage methods, which are used by CloudManager to save storage in configuration file. Save storage data using ConfMan. Definition at line 63 of file googledrivestorage.cpp.

Returns storage's saves directory path with the trailing slash. Definition at line 217 of file googledrivestorage.cpp.

Definition at line 57 of file googledrivestorage.cpp.

Returns pointer to Networking::NetworkReadStream. Definition at line 164 of file googledrivestorage.cpp.

Returns UploadStatus struct with info about uploaded file. Definition at line 160 of file googledrivestorage.cpp.
https://doxygen.residualvm.org/db/d01/classCloud_1_1GoogleDrive_1_1GoogleDriveStorage.html
NAME

Bot::BasicBot::Pluggable::Store - base class for the back-end pluggable store

VERSION

version 0.96

SYNOPSIS

  my $store = Bot::BasicBot::Pluggable::Store->new( option => "value" );

  my $namespace = "MyModule";

  for ( $store->keys($namespace) ) {
    my $value = $store->get($namespace, $_);
    $store->set( $namespace, $_, "$value and your momma." );
  }

Store classes should subclass this and provide some persistent way of storing things.

METHODS

- new()

  Standard new method.

- new_from_hashref( $hashref )

- init()

  Called as part of new class construction, before load().

- load()

  Called as part of new class construction, after init().

- save()

  Subclass me. But, only if you want to. See ...Store::Storable.pm as an example.

- keys($namespace, [$regex])

  Returns a list of all store keys for the passed $namespace. If you pass $regex, then it will only return the keys matching $regex.

- get($namespace, $variable)

  Returns the stored value of the $variable from $namespace.

- set($namespace, $variable, $value)

  Sets the stored value for $variable to $value in $namespace. Returns the store object.

- unset($namespace, $variable)

  Removes the $variable from the store. Returns the store object.

- namespaces()

  Returns a list of all namespaces in the store.

- dump()

  dump() is written generally so you don't have to re-implement it in subclasses.

- restore($data)

  Restores the store from a dump().

AUTHOR

Mario Domgoergen <mdom@cpan.org>

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

SEE ALSO

Bot::BasicBot::Pluggable::Module
https://metacpan.org/pod/release/DIZ/Bot-BasicBot-Pluggable-0.96/lib/Bot/BasicBot/Pluggable/Store.pm
CMSIS-DAP debugger for Python

Project description

pyOCD is an Open Source Python 2.7 based library for programming and debugging ARM Cortex-M microcontrollers using CMSIS-DAP. Linux, OSX and Windows are supported.

You can use the following interfaces:

- From a Python interpreter:
  - halt, step, resume execution
  - read/write memory
  - read/write block memory
  - read/write core register
  - set/remove hardware breakpoints
  - flash new binary
  - reset
- From a GDB client, you have all the features provided by gdb:
  - load a .elf file
  - read/write memory
  - read/write core register
  - set/remove hardware breakpoints
  - high level stepping
  - …

Installation

The latest stable version of pyOCD may be installed via pip as follows:

$ pip install --pre -U pyocd

To install the latest development version (master branch), you can do the following:

$ pip install --pre -U

Note that you may run into permissions issues running these commands. You have a few options here:

- Run with sudo -H to install pyOCD and dependencies globally
- Specify the --user option to install local to your user
- Run the command in a virtualenv local to a specific project working set.

You can also install from source by cloning the git repository and running

python setup.py install

Standalone GDB Server

When you install pyOCD via pip, you should be able to execute the following in order to start a GDB server powered by pyOCD:

pyocd-gdbserver

You can get additional help by running pyocd-gdbserver --help.

Recommended GDB and IDE setup

The GDB server works well with Eclipse and the GNU ARM Eclipse OpenOCD plug-in. To view registers, the Embedded System Register Viewer plugin can be used. These can be installed from inside Eclipse using the following links:

GNU ARM Eclipse:

Embedded System Register Viewer:

The pyOCD gdb server executable will run as a drop-in replacement for OpenOCD.
If a supported mbed development board is being debugged, the target does not need to be specified, as pyOCD will automatically determine this. If an external processor is being debugged, then -t [processor] must be added to the command line. For more information on setup, see this post for OpenOCD.

Development Setup

pyOCD developers are recommended to set up a working environment using virtualenv. After cloning the code, you can set up a virtualenv and install the pyOCD dependencies for the current platform by doing the following:

$ virtualenv env
$ source env/bin/activate
$ pip install -r dev-requirements.txt

On Windows, the virtualenv would be activated by executing env\Scripts\activate.

To run the unit tests, you can execute the following. Because of how nose searches for tests, specifying the directory is important, as it will otherwise attempt to run non-unit tests as well (which will hang).

$ nosetests pyOCD/tests

To get code coverage results, do the following:

$ nosetests --with-coverage --cover-html --cover-package=pyOCD pyOCD/tests
$ firefox cover/index.html

Examples

Tests

A series of tests are provided in the test directory:

- basic_test.py: a simple test that checks:
  - read/write core registers
  - read/write memory
  - stop/resume/step the execution
  - reset the target
  - erase pages
  - flash a binary
- gdb_test.py: launch a gdbserver
- gdb_server.py: an enhanced version of gdbserver which provides the following options:
  - "-p", "--port", help = "Write the port number that GDB server will open."
  - "-b", "--board", help = "Connect to board by board id."
  - "-l", "--list", help = "List all connected boards."
  - "-d", "--debug", help = "Set the level of system logging output."
  - "-t", "--target", help = "Override target to debug."
  - "-n", "--nobreak", help = "Disable halt at hardfault handler."
  - "-r", "--reset-break", help = "Halt the target when reset."
  - "-s", "--step-int", help = "Allow single stepping to step into interrupts."
  - "-f", "--frequency", help = "Set the SWD clock frequency in Hz."
  - "-o", "--persist", help = "Keep GDB server running even after remote has detached."
  - "-bh", "--soft-bkpt-as-hard", help = "Replace software breakpoints with hardware breakpoints."
  - "-ce", "--chip_erase", help = "Use chip erase when programming."
  - "-se", "--sector_erase", help = "Use sector erase when programming."
  - "-hp", "--hide_progress", help = "Don't display programming progress."
  - "-fp", "--fast_program", help = "Use only the CRC of each page to determine if it already has the same data."

Hello World example code

from pyOCD.board import MbedBoard

import logging
logging.basicConfig(level=logging.INFO)

board = MbedBoard.chooseBoard()

target = board.target
flash = board.flash

target.resume()
target.halt()

print "pc: 0x%X" % target.readCoreRegister("pc")
    pc: 0xA64

target.step()
print "pc: 0x%X" % target.readCoreRegister("pc")
    pc: 0xA30

target.step()
print "pc: 0x%X" % target.readCoreRegister("pc")
    pc: 0xA32

flash.flashBinary("binaries/l1_lpc1768.bin")
print "pc: 0x%X" % target.readCoreRegister("pc")
    pc: 0x10000000

target.reset()
target.halt()
print "pc: 0x%X" % target.readCoreRegister("pc")
    pc: 0xAAC

board.uninit()

GDB server example

Python:

from pyOCD.gdbserver import GDBServer
from pyOCD.board import MbedBoard

import logging
logging.basicConfig(level=logging.INFO)

board = MbedBoard.chooseBoard()

# start gdbserver
gdb = GDBServer(board, 3333)

gdb server:

arm-none-eabi-gdb basic.elf

<gdb> target remote localhost:3333
<gdb> load
<gdb> continue

Architecture

Interface

An interface does the link between the target and the computer. This module contains basic functionality to write and read data to and from an interface. You can inherit from Interface and overwrite read(), write(), etc.

Then declare your interface in INTERFACE (in pyOCD.interface.__init__.py)

Target

A target defines basic functionalities such as step, resume, halt, readMemory, etc. You can inherit from Target to implement your own methods.
Then declare your target in TARGET (in pyOCD.target.__init__.py)

Transport

Defines the transport used to communicate. In particular, you can find CMSIS-DAP. Implements methods such as memWriteAP, memReadAP, writeDP, readDP, …

You can inherit from Transport and implement your own methods. Then declare your transport in TRANSPORT (in pyOCD.transport.__init__.py)

Flash

Contains the flash algorithm in order to flash a new binary into the target.

gdbserver

Starts a GDB server. The server listens on a specific port. You can then connect a GDB client to it and debug/program the target. Then you can debug a board, which is composed of an interface, a target, a transport and a flash.
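The "inherit from Interface and overwrite read(), write()" pattern described above can be sketched as follows. The base class here is a stand-in so the snippet stays self-contained; the real pyOCD Interface class may differ, so treat the names and signatures as assumptions.

```python
# Hypothetical sketch of subclassing an interface; not pyOCD's actual code.
class Interface(object):
    """Stand-in for the interface base class described above."""

    def read(self):
        raise NotImplementedError

    def write(self, data):
        raise NotImplementedError


class LoopbackInterface(Interface):
    """A toy interface that reads back whatever was written to it."""

    def __init__(self):
        self._queue = []

    def write(self, data):
        self._queue.append(data)

    def read(self):
        # Return None when there is nothing queued.
        return self._queue.pop(0) if self._queue else None
```

A real subclass would then be declared in INTERFACE (in pyOCD.interface.__init__.py) as the text describes.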
https://pypi.org/project/pyOCD/0.5.1/
Here is a solution for developers looking for a skin-based slider control. It differs from the article Transparent Slider Control by Nic Wilson in that it allows you to skin the background and tick of the slider control, and also allows you to have a customized cursor over the slider control.

The main class for the slider control is CZipSliderCtl, which uses another bitmap class, CZipBitmap, for drawing normal and transparent images on the control. It is very easy to use and looks good (if you have good-looking images), so go for it. Follow the instructions below to use it in your application.

It's fairly simple to use the CZipSliderCtl class. Just add the files ZipSliderCtl.h, ZipSliderCtl.cpp, ZipBitmap.h, ZipBitmap.cpp into your project, add the slider control to your dialog box and change the member variable of the control.

Modify the following code

CSliderCtrl m_sliderCtl;

to look like this:

CZipSliderCtl m_sliderCtl;

You will need to add the following code at the top of your application's dialog header file.

#include "ZipSliderCtl.h"

Congratulations, you have successfully created the object of the slider control, and now it is time to skin the control. Add the following code at the bottom of the OnInitDialog function:

m_sliderCtl.SetSkin(IDB_SEEKBAR_BACK, IDB_SEEKBAR_TICK, IDC_CURSOR_SEEK);
m_sliderCtl.SetRange(0, 15000);

So you have skinned your control and it is ready to use. Compile and run to see how it looks. All the best... enjoy!!!

The CZipSliderCtl class is based on the fairly simple concept of subclassing.
I have derived this class from CSliderCtrl and have overridden the following functions:

//{{AFX_MSG(CZipSliderCtl)
afx_msg void OnMouseMove(UINT nFlags, CPoint point);
afx_msg void OnPaint();
afx_msg void OnLButtonUp(UINT nFlags, CPoint point);
afx_msg void OnLButtonDown(UINT nFlags, CPoint point);
afx_msg void OnKeyUp(UINT nChar, UINT nRepCnt, UINT nFlags);
afx_msg void OnKeyDown(UINT nChar, UINT nRepCnt, UINT nFlags);
afx_msg BOOL OnSetCursor(CWnd* pWnd, UINT nHitTest, UINT message);
//}}AFX_MSG

I have used the class CZipBitmap to draw the normal and transparent images on the dialog box. When a transparent image is drawn using this class, it makes all portions matching the left-top pixel color transparent.

The magic of skinning the control is always contained in the OnPaint function. So look at the following magical lines of code:

{
    CPaintDC dc(this); // device context for painting
    int iMax, iMin, iTickWidth = 10, iMarginWidth = 10;
    GetRange(iMin, iMax);

    RECT rcBack, rcTick;
    GetClientRect(&rcBack);
    rcTick = rcBack;

    TRACE("%d\n", GetPos());
    rcTick.left = ((rcBack.right - iMarginWidth) * (GetPos())) / ((iMax - iMin) + iMarginWidth / 2);
    rcTick.right = rcTick.left + iTickWidth;

    m_bmpBack->Draw(dc, 0, 0);
    m_bmTrans->DrawTrans(dc, rcTick.left, -2);
}

So it's all done. I hope my efforts will be appreciated.

17 Jun 2002 - Initial Revision
17 Jun 2002 - Reformatted some.
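The tick-position arithmetic in OnPaint is easy to check in isolation. This standalone sketch reproduces the same integer expression with the MFC calls replaced by plain parameters (the function name and the 210-pixel client width used below are illustrative assumptions, not part of the original class):

```cpp
#include <cassert>

// Standalone copy of the thumb-position formula from OnPaint: maps a
// slider position in [iMin, iMax] to a pixel offset inside the client
// rectangle, leaving room for the margin.
int TickLeft(int clientRight, int pos, int iMin, int iMax,
             int iMarginWidth = 10)
{
    return ((clientRight - iMarginWidth) * pos)
           / ((iMax - iMin) + iMarginWidth / 2);
}
```

With the SetRange(0, 15000) call from the article and a 210-pixel-wide control, position 0 maps to pixel 0 and position 15000 maps to pixel 199, just inside the right margin.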
http://www.codeproject.com/Articles/2453/Skin-based-slider-control?fid=4112
Due by 11:59pm on Thursday, 2/12

Download extra01.zip. Inside the archive, you will find a file called extra:

This homework is required for the "extra lectures" track of CS 98: Additional Topics on the Structure and Interpretation of Computer Programs.

Implement intersect, which takes two functions f and g and their derivatives df and dg. It returns an intersection point x, at which f(x) is equal to g(x).

def intersect(f, df, g, dg):
    """Return where f with derivative df intersects g with derivative dg.

    >>> parabola, line = lambda x: x*x - 2, lambda x: x + 10
    >>> dp, dl = lambda x: 2*x, lambda x: 1
    >>> intersect(parabola, dp, line, dl)
    4.0
    """
    "*** YOUR CODE HERE ***"

Differentiation of polynomials can be performed automatically by applying the product rule and the fact that the derivative of a sum is the sum of the derivatives of the terms. In the following example, polynomials are expressed as two-argument Python functions. The first argument is the input x. The second argument, called derive, is True or False. When derive is True, the derivative is returned. When derive is False, the function value is returned.

For example, the quadratic function below returns a quadratic polynomial. The linear term X and constant function K are defined using conditional expressions.

X = lambda x, derive: 1 if derive else x
K = lambda k: lambda x, derive: 0 if derive else k

def quadratic(a, b, c):
    """Return a quadratic polynomial a*x*x + b*x + c.

    >>> q_and_dq = quadratic(1, 6, 8)  # x*x + 6*x + 8
    >>> q_and_dq(1.0, False)   # value at 1
    15.0
    >>> q_and_dq(1.0, True)    # derivative at 1
    8.0
    >>> q_and_dq(-1.0, False)  # value at -1
    3.0
    >>> q_and_dq(-1.0, True)   # derivative at -1
    4.0
    """
    A, B, C = K(a), K(b), K(c)
    AXX = mul_fns(A, mul_fns(X, X))
    BX = mul_fns(B, X)
    return add_fns(AXX, add_fns(BX, C))

To complete this implementation and apply Newton's method to polynomials, fill in the bodies of add_fns, mul_fns, and poly_zero below.
def add_fns(f_and_df, g_and_dg):
    """Return the sum of two polynomials."""
    "*** YOUR CODE HERE ***"

def mul_fns(f_and_df, g_and_dg):
    """Return the product of two polynomials."""
    "*** YOUR CODE HERE ***"

def poly_zero(f_and_df):
    """Return a zero of polynomial f_and_df, which returns:
    f(x)  for f_and_df(x, False)
    df(x) for f_and_df(x, True)

    >>> q = quadratic(1, 6, 8)
    >>> round(poly_zero(q), 5)  # Round to 5 decimal places
    -2.0
    >>> round(poly_zero(quadratic(-1, -6, -9)), 5)
    -3.0
    """
    "*** YOUR CODE HERE ***"
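One possible way to approach both problems (a sketch under stated assumptions, not the official solution) is Newton's method: an intersection of f and g is a zero of h(x) = f(x) - g(x), whose derivative is df(x) - dg(x), and poly_zero can iterate using the derive flag to get both the value and the derivative from a single function. The starting guesses and tolerances below are assumptions; which root Newton's method finds depends on the guess.

```python
# Sketches only -- not the official solutions.

def intersect(f, df, g, dg, guess=1.0, tolerance=1e-12):
    """Newton's method on h(x) = f(x) - g(x); h'(x) = df(x) - dg(x)."""
    x = guess
    while abs(f(x) - g(x)) > tolerance:
        x = x - (f(x) - g(x)) / (df(x) - dg(x))
    return x

# The polynomial representation from the problem statement.
X = lambda x, derive: 1 if derive else x
K = lambda k: lambda x, derive: 0 if derive else k

def add_fns(f_and_df, g_and_dg):
    # The derivative of a sum is the sum of the derivatives, so the
    # derive flag is simply passed through to both operands.
    return lambda x, derive: f_and_df(x, derive) + g_and_dg(x, derive)

def mul_fns(f_and_df, g_and_dg):
    # Product rule: (f*g)' = f'*g + f*g'
    def product(x, derive):
        if derive:
            return (f_and_df(x, True) * g_and_dg(x, False) +
                    f_and_df(x, False) * g_and_dg(x, True))
        return f_and_df(x, False) * g_and_dg(x, False)
    return product

def quadratic(a, b, c):
    # Copied from the problem statement above.
    A, B, C = K(a), K(b), K(c)
    AXX = mul_fns(A, mul_fns(X, X))
    BX = mul_fns(B, X)
    return add_fns(AXX, add_fns(BX, C))

def poly_zero(f_and_df, guess=0.0, tolerance=1e-13):
    # Newton's method again: one function supplies both f(x) and df(x).
    x = guess
    while abs(f_and_df(x, False)) > tolerance:
        x = x - f_and_df(x, False) / f_and_df(x, True)
    return x
```

The extra guess and tolerance parameters have defaults, so the doctest signatures intersect(f, df, g, dg) and poly_zero(f_and_df) still work unchanged.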
http://gaotx.com/cs61a/hw/extra01/
module Penny.Brenner.Clear (mode) where

import qualified Control.Monad.Exception.Synchronous as Ex
import Control.Applicative (pure)
import Control.Monad (guard, mzero, when)
import Data.Maybe (mapMaybe, fromMaybe)
import Data.Monoid (mconcat, First(..))
import qualified Data.Set as Set
import qualified Data.Map as M
import qualified Data.Text as X
import qualified Data.Text.IO as TIO
import qualified System.Console.MultiArg as MA
import qualified Penny.Lincoln as L
import qualified Control.Monad.Trans.State as St
import qualified Control.Monad.Trans.Maybe as MT
import Control.Monad.Trans.Class (lift)
import qualified Penny.Copper.Types as Y
import qualified Penny.Copper as C
import qualified Penny.Copper.Render as R
import Text.Show.Pretty (ppShow)
import qualified Penny.Brenner.Types as Y
import qualified Penny.Brenner.Util as U

help :: String -> String
help pn = unlines
  [ "usage: " ++ pn ++ " clear [options] FIT_FILE LEDGER_FILE..."
  , "Parses all postings that are in FIT_FILE. Then marks all"
  , "postings that are in the FILEs given that correspond to one"
  , "of the postings in the FIT_FILE as being cleared."
  , "Quits if one of the postings found in FIT_FILE is not found"
  , "in the database, if one of the postings in the database"
  , "is not found in one of the FILEs, or if any of the postings found"
  , "in one of the FILEs already has a flag."
  , ""
  , "Results are printed to standard output. If no FILE, or FILE is \"-\","
  , "read standard input."
  , ""
  , "Options:"
  , "  -h, --help - show help and exit"
  ]

data Arg = APosArg String
  deriving (Eq, Show)

toPosArg :: Arg -> Maybe String
toPosArg a = case a of { APosArg s -> Just s }

data Opts = Opts
  { csvLocation :: Y.FitFileLocation
  , ledgerLocations :: [String]
  } deriving Show

mode :: Maybe Y.FitAcct -> MA.Mode (IO ())
mode c = MA.Mode
  { MA.mName = "clear"
  , MA.mIntersperse = MA.Intersperse
  , MA.mOpts = []
  , MA.mPosArgs = APosArg
  , MA.mProcess = process c
  , MA.mHelp = help
  }

process :: Maybe Y.FitAcct -> [Arg] -> IO ()
process mayC as = do
  c <- case mayC of
    Just cd -> return cd
    Nothing -> fail $ "no financial institution account given"
                      ++ " on command line, and no default financial"
                      ++ " institution configured."
  (csv, ls) <- case mapMaybe toPosArg as of
    [] -> fail "clear: you must provide a postings file."
    x:xs -> return (Y.FitFileLocation x, xs)
  let os = Opts csv ls
  runClear c os

runClear :: Y.FitAcct -> Opts -> IO ()
runClear c os = do
  dbList <- U.loadDb (Y.AllowNew False) (Y.dbLocation c)
  let db = M.fromList dbList
      (_, prsr) = Y.parser c
  txns <- fmap (Ex.switch fail return) $ prsr (csvLocation os)
  leds <- C.open (ledgerLocations os)
  toClear <- case mapM (findUNumber db) (concat txns) of
    Nothing -> fail $ "at least one posting was not found in the"
                      ++ " database. Ensure all postings have "
                      ++ "been imported and merged."
    Just ls -> return $ Set.fromList ls
  let (led', left) = changeLedger (Y.pennyAcct c) toClear leds
  when (not (Set.null left))
    (fail $ "some postings were not cleared. "
            ++ "Those not cleared:\n" ++ ppShow left)
  case R.ledger (Y.groupSpecs c) led' of
    Nothing -> fail "could not render resulting ledger."
    Just txt -> TIO.putStr txt

-- | Examines a financial institution transaction and the DbMap to
-- find a matching UNumber. Fails if the financial institution
-- transaction is not in the Db.
findUNumber :: Y.DbMap -> Y.Posting -> Maybe Y.UNumber
findUNumber m pstg =
  let atn = Y.fitId pstg
      p ap = Y.fitId ap == atn
      filteredMap = M.filter p m
      ls = M.toList filteredMap
  in case ls of
       (n, _):[] -> Just n
       _ -> Nothing

clearedFlag :: L.Flag
clearedFlag = L.Flag . X.singleton $ 'C'

-- | Changes a ledger to clear postings. Returns postings still not
-- cleared.
changeLedger
  :: Y.PennyAcct
  -> Set.Set Y.UNumber
  -> Y.Ledger
  -> (Y.Ledger, Set.Set Y.UNumber)
changeLedger ax s l = St.runState k s
  where
    k = Y.mapLedgerA f l
    f = Y.mapItemA pure pure (changeTxn ax)

changeTxn
  :: Y.PennyAcct
  -> L.Transaction
  -> St.State (Set.Set Y.UNumber) L.Transaction
changeTxn ax t = do
  let fam = L.unTransaction t
      fam' = L.mapParent (const L.emptyTopLineChangeData) fam
  fam'' <- L.mapChildrenA (changePstg ax) fam'
  return $ L.changeTransaction fam'' t

-- | Sees if this posting is a posting in the right account and has a
-- UNumber that needs to be cleared. If so, clears it. If this posting
-- already has a flag, skips it.
changePstg
  :: Y.PennyAcct
  -> L.Posting
  -> St.State (Set.Set Y.UNumber) L.PostingChangeData
changePstg ax p = fmap (fromMaybe L.emptyPostingChangeData) . MT.runMaybeT $ do
  guard (L.pAccount p == (Y.unPennyAcct ax))
  let tags = L.pTags p
  un <- maybe mzero return $ parseUNumberFromTags tags
  guard (L.pFlag p == Nothing)
  set <- lift St.get
  guard (Set.member un set)
  lift $ St.put (Set.delete un set)
  return $ L.emptyPostingChangeData { L.pcFlag = Just (Just clearedFlag) }

parseUNumberFromTags :: L.Tags -> Maybe Y.UNumber
parseUNumberFromTags =
  getFirst . mconcat . map First . map parseUNumberFromTag . L.unTags

parseUNumberFromTag :: L.Tag -> Maybe Y.UNumber
parseUNumberFromTag (L.Tag x) = do
  (f, xs) <- X.uncons x
  guard (f == 'U')
  case reads . X.unpack $ xs of
    (u, ""):[] -> Just (Y.UNumber u)
    _ -> Nothing
http://hackage.haskell.org/package/penny-lib-0.12.0.0/docs/src/Penny-Brenner-Clear.html
At 11:00 PM 4/7/2010 +0530, Amit Sethi wrote:

> I am trying to create a module for the docutils . I wish to install
> it docutils.writer.something

Is that a documented, official way to extend docutils? If not, and docutils.writer is not a namespace package, then you should not be doing that.

> I have docutils installed at
> /usr/local/lib/python2.6/dist-packages/docutils-0.6-py2.6.egg
> on my ubuntu machine but when I tried to install it create a seperate
> package called docutils and saves
> the package in that ? How do I tell a package that this particular
> package is a subpackage of a library
> the setup.py of the package looks like :
>     license='Public Domain',
>     packages=['docutils.writers'],
>     package_dir={'docutils.writers':'docbook'},
>     scripts=['rst2docbook.py']

The correct way to do this is to NOT list packages or package_dir; just use py_modules = 'docutils.writer.docbook', and have a docutils/writer/docbook.py file. This will install it in a distutils-compatible way, but not a distribute-compatible or setuptools-compatible way.

Really, this is not a well-supported scenario in any case -- it is to be avoided if at all possible. Installing your code into other people's packages is a bad way to extend them. (Setuptools and Distribute offer better ways of providing extensibility, such as namespace packages and entry points.)
https://mail.python.org/pipermail/distutils-sig/2010-April/015967.html
Top 50 Quotes On How To Create A Website

Hi guys, I'm Rakib Khan from Themeprobably.com, and in this post I'm going to show you how you can quickly make a website (in just 10 minutes). After reading this post, you will be able to make any kind of website just like this one, by using drag & drop. So don't miss out, and read it till the end to learn how to do it.

Okay! So, I'm Rakib Khan from Themeprobably.com. Let's start making this website!

Okay! So before we start, you need to first click the link below this post: godaddy.com. It will take you to this page. Now we're going to do this in just 5 steps!

Ok! So the 1st step is to pick a name for your website. Now, I have already picked a name, which is "quicktechy.com". So I'm going to search for it... and then click "check availability". Okay, so you can see that the name is available.

Once you get the name, you can go to the next step, which is to get hosting & a domain. Hosting & domain are the two things we need for launching our website. Hosting is the place where your website's files will be stored. The domain is the name of your website.

So to get hosting & a domain, let's scroll down and click get. You can see here that we're getting the domain for 959 rupees. Now, if we change the duration from 2 years to 1 year, you can see that we're getting the domain for free. So now let's proceed to checkout.

Now GoDaddy will ask you to log in. So let's click create account and fill in these details. Now enter any 4-digit number for the PIN and click "Create Account", and then continue filling in these details. Now choose your payment option and click continue.

Okay! So guys, this is going to cost us around 99 rupees per month, and the plan will be valid for 1 year. So let's place the order and make the payment. I'm going to quickly complete the payment!

Okay! So now we've completed the payment & we have got our domain & hosting.
Now let's go to STEP 3, which is to install WordPress. We're going to use WordPress because it makes it very easy to build a website without knowing any programming or coding.

So let's install WordPress. We're going to scroll down & then click managed WordPress. Okay! So we'll click here, and then click get started. Now just select your domain and click next, then again click next.

Now you need to enter a username & password for WordPress. You will need these to log in to WordPress. So I'm going to enter my name and password and click install. Okay! So WordPress is installed!

Now let's click "get started", and then click no thanks... and okay. Okay, so this is our WordPress dashboard! Let's go to our website address and press enter. Now, as you can see, our website is LIVE! So this is how the default site looks.

Next, in order to easily edit our website, we're going to install a new theme. The new theme is called "Astra". To install the theme, let's go here and click themes. Now click "add new" and search for... Astra. So we're going to install this theme. Just click install, and then click activate. Okay! So the theme is activated.

Next, we're going to install a plugin which comes with this theme. By installing the plugin, we'll be able to easily customise our theme. To install that plugin, let's go to plugins, and then click "add new". Now search for a plugin called "Astra", and then install this plugin. So click install, and then click activate. So the Astra Sites plugin is now installed!

Now, this plugin has a set of designs for your website, which you can choose from and then apply to your site. To see those designs, let's click see library. So these are the designs. Now, before you choose a design, just click elementor; this will make it easier for you to edit the design. So click elementor, and now you can choose any design you like. I'm going to choose this design... and you can see how the site looks!
If you want to apply this design to your site, just click install plugins, and then click import this site. Now the design and the demo content will be imported into your site. Once it's done, we can see the site, so let's click "view site". Okay! So as you can see, the demo has been imported into our website, and this is how it looks. You can also see these other pages, which also have the demo content...

So, once you've got the design into your site, you can go to the final step, which is to edit the content.

To edit any page of your site, you just have to go into the page and click Edit with Elementor. So let's say you want to edit the homepage: you simply click home, and then click "edit with elementor", and now you will go into this editing section.

Let's say you want to change the text here. You just select the text and then start typing anything you want. So I'm going to type "Hi! Welcome to my website". Now, if you want to change the text on this button, you just click here and change the text on the button. In the same way, you can edit any text you want on this page: just select the text and then start typing. This works throughout the website.

Now, if you want to change this image, you just click it, select the image here, and drag & drop your own image.

Once you're done with the changes, you can simply save the page by clicking "save", and all your changes will be saved. You can then view the page by clicking here and then clicking view page. So, you can see that all our changes are here...

Okay, so this area, which is the header, can be edited by going into the customize option. So let's go to customize. Now you can see that there are some "blue icons". If you want to change this logo, just click this blue icon and you can change the logo here. In the same way, you can change the menu section by clicking these icons. So everything can be edited by using these blue icons, and this will be the same in the footer area also.
So let's say you want to change the text: you just click this blue icon and start typing anything you want. Once you are done with the changes, just click "publish" and they will be published on the site. Now let's close this and go back to our site.

Okay! So these elements are over here, and you can drag and drop them into this area. For example, if you want to add a heading, you can drag and drop this element here and then enter your text. To add an image, you can drag & drop this element. So drag & drop here and start creating your page!

Now, the other way to create a page is by using templates. Templates are ready-made pages which you can import into your site. So let's click "add template" and you will find a lot of designs here. If you want to use any design, simply click it and see how it looks.

Clicking here will take you to the page which we saw in the first step, which was choosing your domain. So just pick a domain and build your website. I will see you in the next post. Bye! :)

If you also want to create a business e-mail for your website, you can check out this post, where we show you how to do it for free!
https://themeprobably.com/how-to-create-a-website/
#ifndef _SIZE_T_DEFINED
#ifndef _BSD_SIZE_T_DEFINED_
#define _SIZE_T_DEFINED
#define _BSD_SIZE_T_DEFINED_ /* for Darwin */
#endif /* ___int_size_t_h */
#endif /* _BSD_SIZE_T_DEFINED_ */

#undef toupper

/* APPLE LOCAL begin supply missing ctype.h decls 2001-07-11 sts */
/* These are supposed to be in ctype.h like the standard says! We need this
   until Darwin ctype.h gets fixed and/or GCC has a fixincludes to supply
   these if they're missing. */
extern "C" {
extern int isalnum(int c);
extern int isalpha(int c);
extern int iscntrl(int c);
extern int isdigit(int c);
extern int isgraph(int c);
extern int islower(int c);
extern int isprint(int c);
extern int ispunct(int c);
extern int isspace(int c);
extern int isupper(int c);
extern int isxdigit(int c);
}
/* APPLE LOCAL end supply missing ctype.h decls 2001-07-11 sts */

% limit stacksize 3072
% mkdir ../gcc_build; cd ../gcc_build
% ../gcc-3.1/configure --enable-languages='c,c++,objc'
% make bootstrap-lean
% sudo make install

I thought the gcc Apple was shipping had been hacked up in ways incompatible with the standard version in order to make it work on OS X. I forget the details -- sorry -- but I thought they had to come up with custom hacks to deal with Darwin, HFS+, NetInfo, etc., and that the standard version of GCC would not have such customizations available. Have Apple's changes been folded into the main trunk version, or are those hacks in some other way approximated by this hack? If not, this sounds very risky to me... I would rather go with Apple's gcc3 as it is posted in the April DevTools.

I think the other one will work, since Apple submits changes back, and those for mere BSD compatibility will probably be folded in (why shouldn't they). However, Apple's version includes some auto-vectorization features to push sequences of floating-point ops into AltiVec ops to achieve notable speedups - these optimizations are not likely to be folded in soon.
I think this is well documented with the Dev Tools, and probably these optimizations are the reason for a part of the 10.2 speedup (even if you don't have a QE-compatible Mac).

Regards, iSee

I don't really see the point of going through all that to get gcc3, since it is already included in the April 2002 (beta) version of the Developer Tools. By default, the compiler used with the April Dev Tools is gcc2, but gcc3 is there, and the release notes document that gets installed on your hard drive tells you how to switch to use gcc3 instead of gcc2. I have switched to use gcc3 (from the April Dev Tools) for all my command-line compiling. I did this via the command

sudo /usr/sbin/gcc_select 3

as given in the release notes. I have seen no problem at all. I have not yet tried using gcc3 in Project Builder. The release notes tell you how to do this - it is a separate step, and it is fine to continue to use gcc2 within PB even though your default compiler (/usr/bin/cc) is gcc3. By the way, as far as I know, the gcc3 that is supplied with the Dev Tools is identical to the one you can download from GNU, so there should be no problem with compatibility.

Yup, using Apple's gcc under the dev tools is far better, but this was written for people on dialups who would rather download 20MB compared to 200MB. Apple's gcc is quite a bit different from the FSF version; however, the strange thing is that there have always been fixes and hacks to support Darwin, but they have never been committed or placed in the mainline. Strange.

setenv CFLAGS "-O1 -mdynamic-no-pic -no-cpp-precomp"

-faltivec -mcpu=7450 -mtune=7450

Oops, I didn't mean to put the -O1 in the required category, just the dynamic and precomp flags. Also, the optional flags are things to look into.

cvs -d :pserver:YOURUSERNAME@anoncvs.opensource.apple.com:/cvs/Darwin login
cvs -d :pserver:YOURUSERNAME@anoncvs.opensource.apple.com:/cvs/Darwin -z9 co gcc3

I tried to get gcc3 from CVS. Got it, but it needs gnumake.
Got gnumake but it needs cc....and so on and so on... Any place I can get the Apple binaries? Visit other IDG sites:
http://hints.macworld.com/article.php?story=20020723090544155
CC-MAIN-2018-26
refinedweb
718
62.48
Over on Matt Turner's blog, he uses MarkLogic to get a list of medieval weapons from the wiki page as the first step in the enrichment of the texts of Shakespeare's plays. Here's another attempt at this task, using only standard XQuery functions. Again, we are fortunate that wiki pages are well-formed XML.

declare namespace h = "";

let $url := ""
let $wikipage := doc($url)
return string-join($wikipage//h:div[@id="bodyContent"]//h:li[h:a/@title][empty(h:ul)]/h:a, ',')

The complex path here is to ensure that only the relevant li tags are included and that only terminals in a hierarchy of terms are included, hence the check that the li has no ul child.
http://en.m.wikibooks.org/wiki/XQuery/Wiki_weapons_page
CC-MAIN-2015-18
refinedweb
120
54.76
Nicko van Someren wrote: > On 2 Dec 2007, at 03:09, Neil Toronto wrote: > >> Are there any use-cases for allowing namespace dicts (such as globals, >> builtins and classes) to have non-string keys? I'm asking because I'm >> planning on accelerating method lookups next, and the possibility of a >> key compare changing the underlying dict could be a major pain. (It was >> a minor pain for globals.) > > The only plausible use case I can think of might be wanting to use ints > or longs as keys, though I've never seen it done. Of course this would > be trivial to code around and it seems very much a fringe case, so I'd > be in favour of deprecating non-string namespace keys if it's going to > make look-ups go faster. If you insert non-string keys into a namespace dict it'll slow down lookups already. :) The dict will switch to the more general lookdict from lookdict_string. Looks like it's just a bad idea all around.... This problem already exists for general dicts with non-string keys (it can blow the C stack) and attribute caching makes it a bit more likely (the compare only has to insert or delete an item rather than cause a resize), so it'd be nice if it didn't apply to identifiers. As far as I know, though, the only way to get non-string keys into a class dict is by using a metaclass. Anyway, report: I've got an initial working attribute cache, using the conveniently-named-and-left-NULL tp_cache. It's a nice speedup - except on everything the standard benchmarks test, because their class hierarchies are very shallow. :p If an MRO has more than two classes in it, every kind of lookup (class method, object method, object attribute) is faster. Having more than four or five makes things like self.variable take less than half the time. It'd be nice to have a benchmark with a deep class hierarchy. Does anybody know of one? I'm working on making it as fast as the original when the MRO is short. Question for Guido: should I roll this into the fastglobals patch? Neil
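As an illustration of that last point about metaclasses, here is a small sketch (hypothetical, and written in Python 3 syntax, which postdates this 2007 thread) of a metaclass slipping a non-string key into a class dict:

```python
# A metaclass can insert arbitrary keys into the namespace dict
# before type.__new__ builds the class.
class SneakyMeta(type):
    def __new__(mcls, name, bases, ns):
        ns[42] = "non-string key"   # ints work; attribute lookup ignores them
        return super().__new__(mcls, name, bases, ns)

class C(metaclass=SneakyMeta):
    pass

print(42 in C.__dict__)   # the class dict now holds a non-string key
```

In CPython, a dict like this falls back from the string-specialized lookup to the general one, which is the slowdown Neil describes.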
https://mail.python.org/pipermail/python-dev/2007-December/075519.html
CC-MAIN-2016-40
refinedweb
370
78.48
Spectral Units in ADQL In case you find the piece of Python given below too hard to read: It's just this table of conversion expressions between the different SI units we are dealing with here. Astronomers these days work all along the electromagnetic spectrum (and beyond, of course). Depending on where they observe, they will have very different instrumentation, and hence some see their messengers very naturally as waves, others quite as naturally as particles, others just as electrons flowing out of a CCD that is sitting behind a filter. In consequence, when people say where in the spectrum they are, they use very different notions. A radio astronomer will say “I'm observing at 21 cm” or “at 50 GHz“. There's an entire field named after a wavelength, “submillimeter“, and blueward of that people give their bands in micrometers. Optical astronomers can't be cured of their Ångström habit. Going still more high-energy, after an island of nanometers in the UV you end up in the realm of keV in X-ray, and then MeV, GeV, TeV and even EeV. However, there is just one VO (or at least that's where we want to go). Historically, the VO has had a slant towards optical astronomy, which gives us the legacy of having wavelengths in far too many places, including Obscore. Retrospectively, this was an unfortunate choice not only because it makes us look optical bigots, but in particular because in contrast to energy and, by ν = E/h, frequency, messenger wavelength depends on the medium you work in, and I shudder to think how many wavelengths in my data center actually are air wavelengths rather than vacuum wavelengths. Also, as you go beyond photons, energy really is the only thing that reasonably characterises all messengers alike (well, even that still isn't quite settled for gravitational waves as long as we're not done with a quantum theory of gravitation). Well – the wavelength milk is spilled. 
Still, the VO has been boldly expanding its reach beyond the optical and infrared windows (recently, with neutrinos and gravitational waves, not to mention EPN-TAP's in-situ measurements in the solar system, even beyond the electromagnetic spectrum). Which means we will have to accommodate the various customs regarding spectral units described above. Where there are “thick” user interfaces, these can care about that. For instance, my datalink XSLT and javascript lets people constrain spectral cutouts (along BAND) in a variety of units (Example). But what if the UI is as shallow as it is in ADQL, where you deal with whatever is in the underlying database tables? This has come up again at last week's EuroVO Technology Forum in virtual Strasbourg in the context of making Obscore more attractive to radio astronomers. And thus I've sat down and taught DaCHS a new user defined function to address just that. Up front: When you read this in 2022 or beyond and everything has panned out, the function might be called ivo_specconv already, and perhaps the arguments have changed slightly. I hope I'll remember to update this post accordingly. If not, please poke me to do so. The function I'm proposing is, mainly, gavo_specconv(expr, target_unit). All it does is convert the SQL expression expr to the (spectral) target_unit if it knows how to do that (i.e., if the expression's unit and the target unit are spectral units properly written in VOUnit) and raise an error otherwise. So, you can now post:

SELECT TOP 5 gavo_specconv(em_min, 'GHz') AS nu
FROM ivoa.obscore
WHERE gavo_specconv((em_min+em_max)/2, 'GHz') BETWEEN 1 AND 2
  AND obs_collection='VLBA LH sources'

to the TAP service at. You will get your result in GHz, and you write your constraint in GHz, too. Oh, and see below on the ugly constraint on obs_collection.
Similarly, an X-ray astronomer would say, perhaps:

SELECT TOP 5 access_url, gavo_specconv(em_min, 'keV') AS energy
FROM ivoa.obscore
WHERE gavo_specconv((em_min+em_max)/2, 'keV') BETWEEN 0.5 AND 2
  AND obs_collection='RASS'

This works because the ADQL translator can figure out the unit of its first argument. But, perhaps regrettably, ADQL has no notion of literals with units, and so there is no way to meaningfully say the equivalent of gavo_specconv(656, 'Hz') to get Hα in Hz, and you will receive a (hopefully helpful) error message if you try that. However, this functionality is highly desirable not the least because the queries above are fairly inefficient. That's why I added the funny constraints on the collection: without them, the queries will take perhaps half a minute and thus require async operation on my box. The (fundamental) reason for that is that postgres is not smart enough to work out it could be using an index on em_min and em_max if it sees something like nu between 3e8/em_min and 3e7/em_max by re-writing the constraint into 3e8/nu between em_min and em_max (and think really hard about whether this is equivalent in the presence of NULLs). To be sure, I will not teach that to my translation layer either. Not using indexes, however, is a recipe for slow queries when the obscore table you query has about 85 million rows (hi there in 2050: yes, that was a sizable table in our day). To let users fix what's too hard for postgres (or, for that matter, the translation engine when it cannot figure out units), there is a second form of gavo_specconv that takes a third argument: gavo_specconv(expr, unit_of_expr, target_unit). With that, you can write queries like:

SELECT TOP 5 gavo_specconv(em_min, 'Angstrom') AS nu
FROM ivoa.obscore
WHERE gavo_specconv(5000, 'Angstrom', 'm') BETWEEN em_min AND em_max

and hope the planner will use indexes.
Full disclosure: Right now, I don't have indexes on the spectral limits of all tables contributing to my obscore table, so this particular query only looks fast because it's easy to find five datasets covering 500 nm – but that's an oversight I'll fix soon. Of course, to make this functionality useful in practice, it needs to be available on all obscore services (say) – only then can people run all-VO obscore searches without the optical bias. The next step (before Bambi-eyeing the TAP implementors) therefore would be to get it into the catalogue of ADQL user defined functions. For this, one would need to specify a bit more carefully what units must minimally be supported. In DaCHS, I have built this on a full implementation of VOUnits, which means you can query using attoparsecs of wavelength and get your result in dekaerg (which is a microjoule: 1 daerg = 1 uJ in VOUnits – don't you just love this?):

SELECT gavo_specconv(
    (spectral_start+spectral_end)/2, 'daerg') AS energy
FROM rr.stc_spectral
WHERE gavo_specconv(0.0002, 'apc', 'J')
  BETWEEN spectral_start AND spectral_end

(stop computing: an attoparsec is about 3 cm). This, incidentally, queries the draft RegTAP extension for the VODataService 1.2 coverage in space, time, and spectrum, which is another reason I'm proposing this function: I'm not quite sure how well my rationale that using Joules of energy is equally inconvenient for all communities will be generally received. The real rationale – that Joule is the SI unit for energy – I don't dare bring forward in the first place. Playing with wavelengths in AU (you can do that, too; note, though, that VOUnit forbids prefixes on AU, so don't even try mAU) is perhaps entertaining in a slightly twisted way, but admittedly poses a bit of a challenge in implementation when one does not have full VOUnits available. I'm currently thinking that m, nm, Angstrom, MHz, GHz, keV and MeV (ach! No Joule! But no erg, either!)
plus whatever spectral units are in use in the local tables would about cover our use cases. But I'd be curious what other people think. Since I found the implementation of this a bit more challenging than I had at first expected, let me say a few words on how the underlying code works; I guess you can stop reading here unless you are planning to implement something like this. The fundamental trouble is that spectral conversions are non-linear. That means that what I do for ADQL's IN_UNIT – just compute a conversion factor and then multiply that to whatever expression is in its first argument – will not work. Instead, one has to write a new expression. And building these expressions becomes involved because there are thousands of possible combinations of input and output units. What I ended up doing is adopting standard (i.e., SI) units for energy (J), wavelength (m), and frequency (Hz) as common bases, and then first convert the source and target units to the applicable standard unit. This entails trying to convert each input unit to each standard unit until a conversion actually works, which in DaCHS' Python looks like this:

def toStdUnit(fromUnit):
    for stdUnit in ["J", "Hz", "m"]:
        try:
            factor = base.computeConversionFactor(
                fromUnit, stdUnit)
        except base.IncompatibleUnits:
            continue
        return stdUnit, factor
    raise common.UfuncError(
        f"specconv: {fromUnit} is not a spectral unit understood here")

The VOUnits code is hidden away in base.computeConversionFactor, which raises an IncompatibleUnits when a conversion is impossible; hence, in the end, as a by-product this function also determines what kind of spectral value (energy, frequency, or wavelength) I am dealing with. That accomplished, all I need to do is look up the conversions between the basic units, which can be done in a single dictionary mapping pairs of standard units to the conversion expression templates.
I have not tried to make these templates particularly pretty, but if you squint, you can still, I hope, figure out this is actually what the opening image shows:

SPEC_CONVERSION = {
    ("J", "m"): "h*c/(({expr})*{f})",
    ("J", "Hz"): "({expr})*{f}/h",
    ("J", "J"): "({expr})*{f}",
    ("Hz", "m"): "c/({expr})/{f}",
    ("Hz", "Hz"): "{f}*({expr})",
    ("Hz", "J"): "h*{f}*({expr})",
    ("m", "m"): "{f}*({expr})",
    ("m", "Hz"): "c/({expr})/{f}",
    ("m", "J"): "h*c/({expr})/{f}",}

expr is (conceptually) replaced by the first argument of the UDF, and f is the conversion factor between the input unit and the unit expr is in. Note that thankfully, no additive operators are involved and thus all this is numerically well-conditioned. Hence, I can afford not attempting to simplify any of the expressions involved. The rest is essentially book-keeping, where I'm using the ADQL parser to turn the expression into a tree fragment and then fiddling in the tree fragment for expr into that. The result then replaces the UDF function call in the syntax tree. You can review all this in context in DaCHS' ufunctions.py, starting at the definition of toStdUnit. Sure: this is no Turing award material. But perhaps these notes are useful when people want to put this kind of thing into their ADQL engines. Which I'd consider a Really Good Thing™.
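For readers who want to play with the unit arithmetic outside DaCHS, here is a small standalone sketch of the same two-step scheme described above (normalise to J, Hz or m, then convert between the standard bases). The constants and function name are mine, not DaCHS's:

```python
# Standalone sketch: reduce any spectral quantity to a wavelength in metres.
H = 6.62607015e-34   # Planck constant [J s]
C = 299792458.0      # speed of light [m/s]

def to_wavelength_m(value, unit):
    """Return the wavelength in metres for an input in J, Hz or m."""
    if unit == "m":
        return value
    if unit == "Hz":
        return C / value          # lambda = c / nu
    if unit == "J":
        return H * C / value      # lambda = h c / E
    raise ValueError("not a standard spectral unit: %s" % unit)

# 2 keV in metres (1 eV = 1.602176634e-19 J):
lam = to_wavelength_m(2e3 * 1.602176634e-19, "J")
print("2 keV ~ %.4g m" % lam)    # about 6.2e-10 m, i.e. roughly 6.2 Angstrom
```

Because none of the formulas involve addition, the computation is numerically benign, which is the same observation that lets the UDF skip expression simplification.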
https://blog.g-vo.org/tag/dachs2.html
CC-MAIN-2022-33
refinedweb
1,835
58.11
The QToolTip class provides tool tips (balloon help) for any widget or rectangular part of a widget. More...

#include <qtooltip.h>

Inherits Qt. List of all member functions.

The tip is a short, single line of text reminding the user of the widget's or rectangle's function. It is drawn immediately below the region in a distinctive black-on-yellow combination. The tip can be any Rich-Text formatted string. A tip appears when the user hovers the mouse on a tip-equipped region for a second or so, and remains active until the user either clicks a mouse button, presses a key, lets the mouse hover for five seconds or moves the mouse outside all tip-equipped regions. See also maybeTip().

This is the most common entry point to the QToolTip class; it is suitable for adding tool tips to buttons, checkboxes, comboboxes and so on. Examples: helpsystem/mainwindow.cpp, qdir/qdir.cpp, scribble/scribble.cpp, and tooltip/tooltip.c.

This function is obsolete. It is provided to keep old source working. We strongly advise against using it in new code. See also setFont().

Returns the tool tip group this QToolTip is a member of, or 0 if it isn't a member of any group. Examples: helpsystem/tooltip.cpp and tooltip/tooltip.cpp. See also setPalette().

Returns the widget this QToolTip applies to. The tool tip is destroyed automatically when the parent widget is destroyed. See also group().

If there is more than one tool tip on widget, only the one covering the entire widget is removed.

Removes any tool tip for rect from widget. If there is more than one tool tip on widget, only the one covering rectangle rect is removed.

This function is obsolete. It is provided to keep old source working. We strongly advise against using it in new code. See also font().

By default, tool tips are enabled. Note that this function affects all tool tips in the entire application. See also QToolTipGroup::enabled. See also palette().
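As a quick illustration of that entry point, here is a hypothetical Qt 3-era fragment (not from these docs; it assumes an existing parent widget and a running QApplication, and will not build against modern Qt):

```cpp
#include <qtooltip.h>
#include <qpushbutton.h>

// Attach a static tip covering the whole button, then remove it again.
QPushButton *openButton = new QPushButton("Open", parent);
QToolTip::add(openButton, "Opens a file");
// ...
QToolTip::remove(openButton);
```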
Immediately pops up a tip within the rectangle geometry, saying text, and removes the tip once the cursor moves out of rectangle rect. groupText is the text emitted from the group.

This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved.
http://doc.trolltech.com/3.3/qtooltip.html
crawl-001
refinedweb
374
69.79
Other languages

As well as its own scripting language, obmm also lets you script using IronPython, C# and Visual Basic. In this case, access to obmm's scripting functions is provided via an interface. C# and VB scripts must define a class named 'Script' which inherits from 'IScript' in the root namespace. It must contain a method called 'Execute' which takes an IScriptFunctions interface as a parameter. Both of these interfaces exist in the OblivionModManager.Scripting namespace. For example, a basic C# script would look like:

using OblivionModManager.Scripting;

class Script : IScript {
    public void Execute(IScriptFunctions sf) {
        sf.Message("This is a C# script");
    }
}

Unlike C# and VB, Python scripts don't require any special setup. obmm attempts to enforce security restrictions on these scripts, but as I'm not certain of their safety, Python, C# and VB scripts are disabled by default. If you want to enable them, you can do so from the options menu.
http://timeslip.chorrol.com/obmmm/otherlanguages.htm
CC-MAIN-2017-13
refinedweb
159
64.61
Hi Paul,

I've got two problems working with this patchset:

1. A task can't join a cpuset unless 'cpus' and 'mems' are set. These don't seem to automatically inherit the parent's values. So when I do

   mount -t container -o ns,cpuset nsproxy /containers

(unshare a namespace) the unshare fails because container_clone() created a new cpuset container but the task couldn't automatically enter that new cpuset.

2. I can't delete containers because of the files they contain, and am not allowed to delete those files by hand.

thanks,
-serge
http://lkml.org/lkml/2007/6/4/314
CC-MAIN-2014-41
refinedweb
120
56.66
11-06-2012 07:54 AM

Hi, I already have a thread but the post was not in the right category. I was trying to play audio/video. Presently I followed this ( Also the audio/video streaming is not working. I followed this (

11-06-2012 10:56 AM

Hi, I have the very same problem here. On the Dev Alpha (beta3) I succeeded in streaming an mp4 file from an http source, but can't see any video output at all. Looking forward to any advice. Thanks, Antonio

11-06-2012 07:43 PM

You might want to post some code to help identify the issue? Probably an issue with how you configured your ForeignWindowControl...

11-07-2012 01:51 AM - edited 11-07-2012 02:01 AM

I did some research and found that there were some issues in the videoOutput property of the MediaPlayer class. I tried the code below and was able to play a local audio/video file.

attachedObjects: [
    MediaPlayer {
        id: myPlayer
        // sourceUrl: ""
        videoOutput: VideoOutput.PrimaryDisplay
        windowId: videoSurface.windowId // name of the window to create
    }
]
ForeignWindowControl {
    id: videoSurface
    windowId: "myVideoSurface"
    updatedProperties: WindowProperty.Size | WindowProperty.Position | WindowProperty.Visible
    visible: boundToWindow
    // preferredWidth: 1280
    // preferredHeight: 768
    preferredWidth: 640
    preferredHeight: 480
}

Call the url from a Button's clicked handler:

Button {
    id: Button4
    text: qsTr("Play Video Local WMV")
    layoutProperties: StackLayoutProperties { spaceQuota: 1.0 }
    onClicked: {
        videoSurface.visible = true;
        myPlayer.setSourceUrl("asset:///sounds/BB10DevAlpha.wmv")
        myPlayer.play()
    }
}

The above code helps me to play local audio/video, not remote (streaming). Now my questions are:

1. How can I play streaming (RTSP/HTTP) audio/video using the same MediaPlayer class?
2. How can I play streaming (RTSP/HTTP) audio/video using the Media Previewer via the Invocation Framework, so that I can launch the device's default Media Player and start playing content? I mean, how to invoke the Media Previewer to play a file. I followed this

Button {
    id: Button5
    text: qsTr("Launch Default Player ")
    layoutProperties: StackLayoutProperties { spaceQuota: 1.0 }
    attachedObjects: [
        Invocation {
            id: invoke
            query: InvokeQuery {
                mimeType: "video/audio"
                uri: ""
            }
        }
    ]
    onClicked: {
        invoke.trigger("bb.action.OPEN")
    }
}

I saw this tutorial

11-07-2012 05:25 AM

Hi Guys, now I am able to play local audio/video in the default player also. But I get Error 13 when I put in a remote url. Here is the code.

void MultimediaTest1::handleInvokeButtonClick()
{
    //exit(1);
    InvokeRequest cardrequest;
    cardrequest.setMimeType("audio/video");
    cardrequest.setTarget("sys.mediaplayer.previewer");
    cardrequest.setUri(" usic/sample.mp3");
    InvokeManager invokemanager;
    invokemanager.invoke(cardrequest);
}

I call this Qt C++ function from my QML file, and it plays local videos, but if I set a URL in the setUri() method, I get the error.

11-07-2012 09:31 AM - edited 11-07-2012 10:08 AM

Hey Guys, one more update: I can play remote audio/video if the url doesn't have a space or %20 in it. How can I play urls which have %20 in them? I tried QUrl::fromEncoded("http://... %20...") but it is not working. Could anyone please help me with how to play an encoded url?

11-07-2012 09:37 AM - edited 11-07-2012 09:38 AM

Did you manage to play remote mp4 videos or just audio? I still have sound but no video output at all on a remote mp4 file (HTTP protocol).

11-07-2012 10:11 AM

I am able to play audio/video from local and remote locations in both a custom player and the device's default player. The only issue I have is that I can't play an encoded remote url, a url which has a space or %20 in it. It would be really nice if anyone could help me play an encoded url. I think it's QUrl which is causing the issue.

11-07-2012 10:34 AM

Still no luck here... I'm using this piece of QML, based on the official docs, and I still get a white screen (but I can hear the sound of the stream correctly).

import bb.cascades 1.0
import bb.multimedia 1.0

Page {
    attachedObjects: [
        MediaPlayer {
            id: myPlayer
            sourceUrl: ""
            videoOutput: VideoOutput.PrimaryDisplay
            windowId: videoSurface.windowId
        }
    ]
    ForeignWindowControl {
        id: videoSurface
        windowId: "myVideoSurface"
        updatedProperties: WindowProperty.Size | WindowProperty.Position | WindowProperty.Visible
        visible: boundToWindow
        preferredWidth: 640
        preferredHeight: 480
    }
    onCreationCompleted: {
        myPlayer.play();
    }
}

From the C++ side I have this very small snippet:

// Test video
QmlDocument *qml = QmlDocument::create("asset:///testvideo.qml");
Page* root = qml->createRootObject<Page>();
Application::instance()->setScene(root);

11-07-2012 11:14 AM
https://supportforums.blackberry.com/t5/Native-Development/Playing-Streaming-Video/m-p/1977647
CC-MAIN-2017-04
refinedweb
755
50.33
MongoDB Interview Questions

Based on the document-oriented NoSQL database MongoDB, we have an impressive collection of MongoDB Interview Questions and Answers that is a must-read for all developers!

Development History of MongoDB

The software company 10gen started developing MongoDB in 2007 as one of the components of a platform-as-a-service product. However, in 2009, the company moved to an open-source development model, and changed its name to MongoDB Inc in 2013. The company made its first release public in February 2009. Whether you are a fresher or an experienced MongoDB developer, these MongoDB admin interview questions and answers are all you need to succeed in your next interview.

Latest Version: The most recent version is 4.0.5, which was released in December 2018.

Advantages

- Easy to install and set up.
- Schema-less database.
- Capable of deriving a document-oriented data model.
- Secured and scalable.
- Full technical support.

Are you looking to impress your boss and grab the upcoming promotion at work? Here are the most common MongoDB interview questions to help you do that.

Related Interview Questions and Answers

Most Frequently Asked MongoDB Interview Questions And Answers With Examples:

- How to list all indexes in MongoDB?
- What is MongoDB and how does it work? Explain
- What are the uses of MongoDB? Explain
- Is MongoDB better than MySQL? Explain
- When was MongoDB founded and why is it called MongoDB?
- What is the difference between MySQL and MongoDB? Explain
- Is MongoDB a relational database? Explain
- What are the differences between SQL and MongoDB? Explain
- Explain "Namespace" in MongoDB.
- What is an index and how is it used in MongoDB?
- Explain the Storage Engine in MongoDB
- In MongoDB, what is CRUD?
- What is sharding in MongoDB? Explain
- How do I create a collection in MongoDB? Write its syntax
- How do I drop a collection in MongoDB? Write its syntax
- How can we create an index in MongoDB?
- What is the command used to drop a database in MongoDB?
- What is the use of the limit() function in MongoDB?
- Can we store images in MongoDB?
- What are the alternatives to MongoDB?
- What is a replica set in MongoDB? Explain
- How do you query objects between two dates in MongoDB?
- Does MongoDB support ACID transactions?
- How to install MongoDB on our machine?

To list all indexes you can use db.items.getIndexes().

MongoDB is used for high-volume data storage. MongoDB is one of many non-relational database technologies that came up in the mid-2000s for use in big data applications and other processing jobs. MongoDB is faster than others because it allows users to query in a different manner. In MongoDB, a record is a document, which is a data structure composed of field and value pairs. It is similar to JavaScript Object Notation (JSON) objects.

Most developers prefer MongoDB over MySQL because MongoDB allows them to build applications quicker, handle diverse data types, and efficiently manage applications. The flexible data model in MongoDB ensures the database schema evolves with business needs.

No. MongoDB is a non-relational database. Instead, it is document-oriented. This means, instead of storing data in tables, as a relational database does, it stores data in individual documents.

In MongoDB, Binary JSON (BSON) objects are stored in a collection. The combination of collection and database names is called a namespace. All documents in MongoDB belong to a namespace.

A storage engine in MongoDB is the part of the database responsible for managing and storing data on disk. The two storage engines in MongoDB are WiredTiger and MMAPv1.

CRUD in MongoDB refers to the fundamental operations - Create, Read, Update, and Delete.

MongoDB uses the method of sharding for enabling deployments of large data sets and operations that demand high throughput. This method allows data to be stored across different machines.

In MongoDB, developers do not need to create a collection.
It will get created automatically when a document is inserted. The syntax for creating a collection in MongoDB is:

db.createCollection(name, options)

To drop a collection in MongoDB, connect to the database where you want to delete the collection and type the following command:

db.collection_name.drop()

You can use the db.collection.createIndex() method for creating indexes in MongoDB.

The command db.dropDatabase() is used to drop databases in MongoDB.

The limit() method in MongoDB is used to limit the number of documents returned by a query.

Yes. You can use the GridFS feature in MongoDB for storing as well as retrieving large files such as images, audio files, and video files.

You can consider CouchDB, Cassandra, Riak, Redis, and HBase as some of the decent alternatives to MongoDB.

A replica set in MongoDB is a group of instances that maintain similar data sets. These types of sets are essential for production deployments as they offer high availability as well as good redundancy.

Yes. MongoDB version 4.0 provides complete multi-document ACID transaction support.
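To make the create/drop/limit/index answers above concrete, here is a hedged mongo-shell sketch tying them together (database and collection names are made up; this is a shell transcript to run inside the mongo shell against a live server, not Node.js):

```javascript
// Illustrative mongo-shell session
use interviewdb                                  // switch to (and implicitly create) a database
db.createCollection("items")                     // explicit creation (usually optional)
db.items.insertOne({ name: "pen", qty: 5 })      // collections also auto-create on first insert
db.items.find().limit(2)                         // limit(): cap the number of returned documents
db.items.createIndex({ name: 1 })                // ascending single-field index
db.items.getIndexes()                            // list all indexes on the collection
db.items.drop()                                  // drop the collection
db.dropDatabase()                                // drop the current database
```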
https://www.bestinterviewquestion.com/mongodb-interview-questions
CC-MAIN-2019-47
refinedweb
821
59.8
How do I use a progress bar when my script is doing some task that is likely to take time? For example, a function which takes some time to complete and returns True when done. How can I display a progress bar during the time the function is being executed?

Note that I need this to be in real time, so I can't figure out what to do about it. Do I need a thread for this? I have no idea.

Right now I am not printing anything while the function is being executed, however a progress bar would be nice. Also I am more interested in how this can be done from a code point of view.

There are specific libraries (like this one here) but maybe something very simple would do:

import time
import sys

toolbar_width = 40

# setup toolbar
sys.stdout.write("[%s]" % (" " * toolbar_width))
sys.stdout.flush()
sys.stdout.write("\b" * (toolbar_width+1)) # return to start of line, after '['

for i in xrange(toolbar_width):
    time.sleep(0.1) # do real work here
    # update the bar
    sys.stdout.write("-")
    sys.stdout.flush()

sys.stdout.write("]\n") # this ends the progress bar

Note: progressbar2 is a fork of progressbar which hasn't been maintained in years.

With tqdm you can add a progress meter to your loops in a second:

In [1]: import time
In [2]: from tqdm import tqdm
In [3]: for i in tqdm(range(10)):
   ....:     time.sleep(3)
60%|██████ | 6/10 [00:18<00:12, 0.33 it/s]

Also, there is a graphical version of tqdm since v2.0.0 (d977a0c):

In [1]: import time
In [2]: from tqdm import tqdm_gui
In [3]: for i in tqdm_gui(range(100)):
   ....:     time.sleep(3)

But be careful, since tqdm_gui can raise a TqdmExperimentalWarning: GUI is experimental/alpha. You can ignore it by using warnings.simplefilter("ignore"), but it will ignore all warnings in your code after that.
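On the "do I need a thread?" part of the question: one simple pattern (a hedged sketch of my own, not from the answers above) is to run the indicator in a background thread while the slow function runs in the main thread, and signal the indicator when the work is done:

```python
import sys
import threading
import time

def long_task():
    """Stand-in for the real work; returns True when done."""
    time.sleep(1.0)
    return True

def indicator(stop_event):
    # Print one dot per tick until the main thread signals completion.
    while not stop_event.is_set():
        sys.stdout.write(".")
        sys.stdout.flush()
        stop_event.wait(0.2)   # sleeps, but wakes immediately on set()

stop = threading.Event()
worker = threading.Thread(target=indicator, args=(stop,))
worker.start()

result = long_task()   # the slow function runs in the main thread meanwhile
stop.set()             # tell the indicator to finish
worker.join()
print("\ndone:", result)
```

This shows elapsed activity rather than true progress; for a real percentage bar the worker has to report how far along it is, which is what the tqdm examples do.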
https://pythonpedia.com/en/knowledge-base/3160699/python-progress-bar
CC-MAIN-2020-16
refinedweb
320
73.47
rodd has asked for the wisdom of the Perl Monks concerning the following question:

I'm looking for a way to pause a perl 5.8.8 interpreter thread and later restore it at the same (or next) line in the thread script. And with the same var namespace. By "later" I mean that the perl interpreter may have to be stopped, then restarted the next day. So the thread state should be stored in the filesystem. The reason for this is that I'm working on a perl server daemon that launches threads that do things for users. Now I want to give my users the possibility to "pause" their corresponding thread, go on vacation, and let the thread continue when they are back, without having to worry about server crashes while they're gone. I've spent the day reading into Data::Dumper and its serializers, PadWalker, __LINE__, eval and goto's, and I think a combination of all these may give me a "homegrown" solution. But it seems quite painful to turn all local and global vars into evalable code, then restore it later and goto to where it stopped. (I have no eval security concerns, as this is all private code). The same ol' perl developer dilemma... Am I reinventing the wheel here? Any better packages around CPAN to do the job better? In a perfect world:

use threads;
...
sub mythread {
    ...
    threads->self->unload("/threadfile.id") if($goaway);
    ## I'm back. Keep going...
}
sub laterdaemon {
    threads->reload("/threadfile.id");
}

The blood red ran from the Grey Monk's side,
His hands and feet were wounded wide,
His body bent, his arms and knees
Like to the roots of ancient trees.
--William Blake

I'm quite prepared to be proved wrong on this, but from my perspective this simply cannot be done. As for why: there is no way to preserve/restore the state of the cpu registers, and if you can't do that, the rest is irrelevant.
But there are other problems like pipes, sockets, open file handles (for example under Linux you can have an open file handle to an unlinked file - you can't really restore that) and network connections, and you have to think about what you do with $^T, for example. All in all I think it's easier to make that particular application serialize its data and dump it to disk, and then resume later on. But there are other problems like pipes, sockets, open file handles... As they are all process global entities, so long as the process continued running, they would persist, and once the thread was restored, it could continue to use them. All the caveats about changes to them and other process global state like $^T are essentially the same as if the thread had slept, or been suspended. Equally, these problems exist with existing mechanisms for suspending (hibernating) processes to disk (think laptops). If anything modifies the state of the filesystem between suspend and resume, things get screwy. If you close your laptop while programs are connected to the net, they probably won't like it much when you try to restore them. Since some protocols like http are connectionless, a browser can redisplay the cached state of pages and interaction can continue, assuming a new connection has been established. But you'll have less luck with most other protocols.
The problem then comes at restoration. You spawn a new thread to restore the serialised one from disk, but it will be a clone of its parent--there's currently no way to avoid that (more's the pity)--so the deserialise thread proc would have to clean itself of inherited state, prior to reconstructing the saved state. And that's currently not doable by any stretch of my imagination.

As an afterthought, you might want to look at the shell mechanisms for suspending processes in the background; google for "bash suspend" for the idea.

Cheers Chris
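The "serialize its data and dump it to disk, then resume later" approach recommended above can be sketched in a few lines. This is an illustrative sketch in Python (not Perl, and not any existing CPAN module): the idea is to make the task's progress explicit state that is checkpointed after every step, rather than trying to freeze the interpreter itself. The checkpoint file name is a made-up example.

```python
import os
import pickle

CHECKPOINT = "checkpoint.pkl"  # hypothetical path

def run_task(total=10):
    # Restore explicit state from disk if a checkpoint exists,
    # instead of trying to capture the interpreter's own state.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"i": 0, "results": []}

    while state["i"] < total:
        state["results"].append(state["i"] * state["i"])  # the "work"
        state["i"] += 1
        # Persist after every step, so a pause (or a crash) loses nothing:
        # the next invocation simply picks up where this one stopped.
        with open(CHECKPOINT, "wb") as f:
            pickle.dump(state, f)
    return state["results"]
```

Pausing is then just stopping the process; resuming is running it again, which replays nothing and continues from the last saved step.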
I am writing some scripts to process some text files in python. Locally the script reads from a single txt file, thus I use:

index_file = open('index.txt', 'r')
for line in index_file:
    ....

How can I do the same across all the text files in a folder?

import os
from glob import glob

def readindex(path):
    pattern = '*.txt'
    full_path = os.path.join(path, pattern)
    for fname in sorted(glob(full_path)):
        for line in open(fname, 'r'):
            yield line

# read lines to memory list for using multiple times
linelist = list(readindex("directory"))
for line in linelist:
    print line,

This script defines a generator (see this question for details about generators) to iterate through all the files in directory "directory" that have extension "txt", in sorted order. It yields all the lines as one stream that, after calling the function, can be iterated through as if the lines were coming from one open file, as that seems to be what the question author wanted. The comma at the end of print line, makes sure that the newline is not printed twice, although the content of the for loop would be replaced by the question author anyway. In that case one can use line.rstrip() to get rid of the newline. The glob module finds all the pathnames matching a specified pattern according to the rules used by the Unix shell, although results are returned in arbitrary order.
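As an aside, the standard library's fileinput module provides much the same "one stream of lines from many files" behaviour as the generator above; a minimal sketch, assuming the same directory-of-*.txt layout:

```python
import fileinput
import os
from glob import glob

def readindex(path):
    # fileinput chains the files into a single line stream,
    # much like the hand-written generator above.
    files = sorted(glob(os.path.join(path, '*.txt')))
    return list(fileinput.input(files))
```

The sorted() call matters here for the same reason as before: glob returns paths in arbitrary order.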
I have the following xml file, and I want to read the contents in <seg>:

<?xml version="1.0"?>
<mteval>
<tstset setid="default" srclang="any" trglang="TRGLANG" sysid="SYSID">
<doc docid="ntpmt-dev-2000/even1k.cn.seg.txt">
<seg id="1">therefore , can be obtained having excellent properties ( good stability and solubility of the balance of the crystal as a pharmaceutical compound is not possible to predict .</seg>
<seg id="3">compound ( I ) are preferably crystalline , in particular , has good stability and solubility equilibrium and suitable for industrial prepared type A crystal is preferred .</seg>
<seg id="4">method B included in the catalyst such as DMF , and the like in the presence of a compound of formula ( II ) with thionyl chloride or oxalyl chloride to give an acyl chloride , in the presence of a base of the acid chloride with alcohol ( IV ) ( O ) by reaction of esterification .</seg>
</doc>
</tstset>
</mteval>

from xml.dom.minidom import parse
import xml.dom.minidom

dom = xml.dom.minidom.parse(r"path_to_xml file")
file = dom.documentElement
seg = dom.getElementsByTagName("seg")
for item in seg:
    sent = item.firstChild.data
    print(sent, sep='')
    file = open(r'file.txt', 'w')
    file.write(sent)
    file.close()

You're reopening the file with 'w' on every iteration, which truncates it, so only the last write survives. Open the file once before the loop, and move the write inside it:

file = open(r'file.txt', 'w')
for item in seg:
    sent = item.firstChild.data
    print(sent, sep='')
    file.write(sent)  # <---- this line
file.close()
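As an alternative sketch, the same extraction can be written with xml.etree.ElementTree, which is often more convenient than minidom. The file names here are illustrative, not from the question:

```python
import xml.etree.ElementTree as ET

def extract_segs(xml_path, out_path):
    tree = ET.parse(xml_path)
    # iter() finds every <seg> element regardless of nesting depth
    segs = [seg.text for seg in tree.getroot().iter('seg')]
    with open(out_path, 'w') as out:
        for sent in segs:
            out.write(sent + '\n')
    return segs
```

The with-block also guarantees the file is closed exactly once, which sidesteps the open-inside-the-loop bug entirely.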
- Improved visibility on transparent backgrounds
- Themed arrow buttons in applets that were missing them
- Layout and antialiasing fixes in various applets.

KDE4 alone with X.org server perfectly fits into 512MB or even less RAM, but then try to run a handful of applications and services, and you'll experience what I'm talking about. God!! Until you do that there's nothing to argue about or discuss.

Sure... As long as you start a bunch of qt3 and gtk apps, the memory use is going to get higher. IMHO people tend to consider that kde4 uses more memory because they run a lot of kde3 apps. And loading qt4 + qt3 (worse, kdelibs3 and kdelibs4) is, and there's nothing that can be done about that, more memory-consuming than just loading qt3/kde3. Just give it the time to port apps like amarok/k3b to kde4, and memory use of "kde4" will decrease greatly :).

Like it has already been said: LiveCDs suck for measuring performance. I have tested KDE4 through a LiveCD on my MacBook Pro. It has a 2.4GHz dual-core CPU and 2 GB of RAM. And guess what? It ran slow. For starters, it consumes a lot of RAM to load the contents of the CD to RAM. Secondly, since not everything is loaded to RAM, it occasionally fetches stuff from the CD; and that is REALLY slow. I tried the LiveCDs to see the functionality and features. Never in a million years did I imagine that they would give me an accurate idea of what the performance is like.

A LiveCD image stored on your HDD gives you roughly the same performance as a usual system installed on your HDD. LiveCDs may suck for measuring performance but I can clearly see how much RAM they need to run.

"LiveCD image stored on your HDD gives you roughly the same performance as a usual system installed on your HDD."

Um, then you are not talking about a LiveCD anymore. If the system is installed on the HD, then it's not running from the CD, and therefore it's not a LiveCD anymore.
"LiveCDs may suck in measuring performance but I can clearly see how much RAM they need to run."

LiveCDs do not give you accurate information regarding memory consumption or speed.

Well, I'm not in the mood to argue with you :-) It seems like you disregard my arguments without proving anything - I have facts - you have superficial opinions. I've been a techie for the last fifteen years and I can tell you for sure that running a LiveCD Linux image gives you a very good impression of its environment's speed and characteristics - you CAN compare any KDE3.x-based LiveCD distro with any KDE4.x-based LiveCD distro. The former will run just fine if you allocate just 256MB of RAM for it; the latter will not be running fine until you give it 768MB of RAM. Once again, I'm talking about running ISO images stored on your HDD. If you DO that, then come back and give me more reasoning :-)

I think you are absolutely wrong here. You state that you have facts, and dissenting people have opinions. That alone counts against you right there. First off, we are talking about something objective, and easily verifiable. Do LiveCDs represent real-world memory usage? No, they do not. Live CDs have to run off of memory. Different live CDs do this through a variety of different means. Sabayon live CDs load through QEMU for some crazy reason. Some will attempt to detect swap and use it. Some load a huge image into memory to have more apps, while others load less into memory to have more room for running those apps. Some Live CDs use different file systems, different forms of compression, etc. People compile binaries differently as well.
Regardless, in the simplest terms, if you want to compare KDE 3 versus KDE 4 for performance, you'd have to attempt to compile them both along very similar lines, even though QT and the underlying libraries are quite different. You'd need to compare a pure KDE 3/QT 3 environment with the same apps/services running against a pure KDE 4/QT 4 environment with the same apps/services running. And you'd want them to run natively, installed properly on the drive, not off some live CD. Running the live CD itself affects memory usage and distorts your results.

"Do LiveCDs represent real-world memory usage? No, they do not."

Live CDs are useful on their own. So I can't see why you have decided they are an inferior way to compare KDE 3 and KDE 4.

Live CDs compress data, and uncompressing is memory overhead. Basic know-how.

Live CDs are useful for a great many things. Don't get me wrong. However, saying that KDE 4 can't be run on a system with less than 768 megs of memory because a Live CD was slow on a 512 box isn't a fair statement.

"It seems like you disregard my arguments without proving anything - I have facts - you have superficial opinions."

What are my "opinions" then? Just about the only thing I have said is that "you can't use a LiveCD to measure performance". I have made no claims regarding the performance of KDE4. So you have facts? Where are they? All I have seen is your OPINION that KDE4 has poor performance. I have seen no facts anywhere.

"I've been a techie for the last fifteen years"

I have been a techie since I was about 6 years old. That was 24 years ago. Do I win a prize?

"The former will run just fine if you allocate just 256MB of RAM for it, the latter will not be running fine until you give it 768MB of RAM."

But the thing is that I have seen people run KDE4 on Nokia's tablets. I have seen KDE run on OpenMoko. I have seen KDE4 run on computers with 512MB of RAM.
And while the former two might be a bit on the slow side (what do you expect, really?), the latter machine seemed to work just fine.

"Once again, I'm talking about running ISO images stored on your HDD."

But that's not a LiveCD. A LiveCD runs on the CD; it does not touch your HD at all.

Yes, you can see how much RAM a liveCD needs to run, but it's not comparable to a real installation or, in many cases, even to other liveCDs. Running the liveCD image from a cd, harddrive or punchcards does not really make any difference; by nature a liveCD needs way more RAM than a proper installation. As they are designed to run from read-only media, to achieve proper functionality they allocate a part of the physical memory to use as a ramdisk to hold the parts of the filesystem where read/write access is required. Since the size of the ramdisk is dictated by the strategy chosen by the distribution and the whim of the developers, it will vary between different distributions.

"Smoothly" isn't the correct word for KDE4 on my eeePC. It runs much, MUCH, MUCH slower/fatter than KDE3. I know some people who did the comparison also, and they all have the same results, using ubuntuEEE. Maybe your distro or KDE is very well optimized? I'm running openSUSE here.

It runs slower and fatter on my 1.5G RAM, dual-core Thinkpad T60 as well. It's very frustrating to have a three-year-old laptop treated as "legacy" by KDE.

David, who, outside the dot.kde.org peanut gallery, told you your T60 was considered legacy by the KDE developers? I have 4.1 on my Eee 900 and I don't see that much of a performance hit compared to KDE 3.

chances are it has nothing to do with memory consumption or cpu cycles otherwise used and everything to do with things like graphics card drivers. when you and others say, "kde4 runs slow for me" i don't doubt you. there are known hardware configurations out there that cause problems due to driver related issues, for instance.
there have also been issues related to performance in the newer Qt4 technologies which have been worked on (and continue to be improved still). but, as if by magic, when these problem items are upgraded underneath KDE4, things start to work and flow a lot nicer. that there are others, in this very thread even, whose experience is quite good out of the box says a lot ... so .. are there issues on some configurations? yes. is there something KDE can do about it? yes: track down the issues, report them upstream, let people know about them so they know what to expect. for the large part, however, even with the newness of some parts of KDE 4's codebase .. much of the codebase hasn't changed much, improvements in performance have been made (KConfig in 4.1, for instance) and our biggest challenges currently lie with upstream projects.

Hello Aaron, during the time I used KDE 4.1, I felt that KDE 4.x was indeed much less snappy than KDE 3.5, and here is what I blame for it:

a) Animations. I don't have these under 3.5 for menus, menu bar, etc, but it was not as easy to disable them for 4.1. Am I right about that? Anyway, I think the animations somehow made it feel slower.

b) The double buffering is what I blame most for the perceived loss of performance. When I open kcontrol in 3.5 right now and switch panes, it flickers. Flicker may be bad and stuff, but it's an _immediate_ response, lasting only a very short time. So much so that I stopped perceiving it as such long ago. With the double buffering, it appears that only when the rendering is done will I see the result, and I lack the immediate and direct feedback. So I click and get feedback like 2ms later or more; that's noticeable, isn't it? I am not saying that I won't get used to that in a few years. I am just not used to it now.

Otherwise, start times, Dolphin vs. Konqueror file views, previews, etc. did all feel faster, that's right. I think give back the flicker and stop the animations, and you will get people to say that it's fast.
The last one is probably something that _should_ be an option. Yours, Kay

easy. Loading all the KDE3 libs, because not all apps have been ported yet.

KDE 4 live cd's work fine on my laptop with 256MB of RAM.

I tried the Fedora 9 LiveCD and Mandriva One 2009 beta2 LiveCDs - it looks like you tried something else, because with my distros KDE4 is hardly usable even with 512MB of RAM.

Sounds to me like it is time to switch distros then. If others report KDE 4 runs fine on their low-powered system, and yours doesn't, I would seriously doubt it to be a problem with KDE 4 itself. Note that it still *could* be, but extraordinary claims require extraordinary evidence...

Well, I have given the instructions for checking my statements about memory consumption, and you still insist that I'm just "claiming" something and the world may be different for you. Why am I still arguing?

Livecd's take a lot of ram by themselves already, even without KDE 4.x.

I have an Acer Aspire one with 512 mb ram; KDE 4.1 (Kubuntu) runs pretty well.

Just the desktop, or desktop + apps?

desktop+kmail+konqueror+openoffice on 1.5 singlecore+512m ram+no desktop-effects+suse.

"KDE 4.1.0 is all shiny and sexy but I cannot imagine running it smoothly on a PC with less than 768MB of RAM."

4.1 absolutely flies on my Asus eee 701 with 512Mb and underclocked 600MHz cpu, and that's _with_ compositing switched on. The "wows" I get out of people when they see Cover Flow task switching with live preview...

My eeePC must have some defect... What distro are you using?

LFS :-)

Scream if you've already been asked this, but what type of graphics card do you use? It's well known that nVidia cards' 2D performance sucks, particularly since KDE4 uses 2D hardware acceleration.

"KDE 4.1.0 is all shiny and sexy but I cannot imagine running it smoothly on a PC with less than 768MB of RAM."

I saw it running smoothly on a 6-year-old notebook with 256 MB RAM. The only feature which cannot be used is compositing.

Out of my cold dead hand.
You know, I really wanted to like KDE4.1, and I briefly convinced myself that I did, but I lost interest after two or three weeks of use. It's just not as flexible and powerful, though it may be one day soon. I usually run KDE applications from fluxbox, but I'm running KDE3 right now, and after running KDE4.1 for a few weeks, KDE3 never felt so straightforward and stable. I know KDE4 has potential, but "potential" is not a good reason to surrender the most advanced and functional expression of the desktop as we've known it for the past 25 years or so. KDE4 is important work, and the job of developers is development. But users have to use, and KDE3 is more useful. "Maintenance mode" is just fine; security updates are all that is really called for, but KDE4.1 is not as useful to me. Thanks to KDE for taking care of my needs. I think I owe you a modest contribution.

Since you admit you are not really a user of the desktop in KDE anyway, your insistence on KDE3 is rather pointless. As you say you are usually running KDE applications from fluxbox, upgrading to KDE4 makes more sense for you than for most. Just continue as you were before; then you don't get affected by any changes to the desktop made by the switch to Plasma, as some users have problems with. But you get access to across-the-board improvements made to as good as every application ported to KDE4.

Yay! I reported a bug and it's fixed in this update :) My first ever bug report to be fixed and released! I gotta drink on this one! Now if only 3.5.x's choppy word-wrapped scrolling would be fixed... but we can't have everything! ;)

Sorry if it's a dumb question, but I can't figure out how to install it in Ubuntu. Kubuntu's instructions () say:

1. Launch Adept
2. In Software Repositories enable Unsupported updates in Updates.

... But there's no "Software Repositories" in Adept. There's "Adept"->"Manage Repositories", but it just does "apt-get update". I'm sure I've seen the repository editor before, though.
What am I missing? I'd rather just edit /etc/apt/sources.list myself, though. Does anyone know what I should add?

Adept -> Manage Repositories -> Update tab -> Check the Unsupported Updates box, fetch updates, voila.

Well, as I said, when I click "Manage Repositories", it just updates everything - same as if I click "Fetch Updates". There's no "Update tab" anywhere. Is my Adept broken?

When I hit Adept -> Manage Repositories I get the dialog in the screenshot I'm attaching. I've highlighted the update tab and the Unsupported Updates checkbox. I haven't done anything funny to Adept (that I can remember) so theoretically we should have the same dialog (you're using kubuntu 8.04?). About Adept says I'm using "Adept Manager 2.1 Cruiser."

Here's the screenshot I forgot to attach. Sorry :-(

I don't get the dialog box at all... But if I run adept_manager from the terminal, I see this:

/usr/lib/python2.5/site-packages/apt/__init__.py:18: FutureWarning: apt API not stable yet
  warnings.warn("apt API not stable yet", FutureWarning)
Traceback (most recent call last):
  File "/usr/bin/software-properties-kde", line 34, in <module>
    from softwareproperties.kde.SoftwarePropertiesKDE import SoftwarePropertiesKDE
  File "/usr/lib/python2.5/site-packages/softwareproperties/kde/SoftwarePropertiesKDE.py", line 36, in <module>
    from PyQt4.QtCore import *
RuntimeError: the sip module supports API v3.0 to v3.6 but the PyQt4.QtCore module requires API v3.7

Not sure what Qt4 has to do with adept.

software-properties-kde is written in PyQt4.

At the linking stage of kmdr-editor I get this error: --enable-final is NOT enabled. Fedora 9/gcc 4.2.4

Sad ... --enable-visibility causes this error.

I did not realize this release was on the way, and there are some improvements in Kommander for this branch. They were mostly personal, but had I realized, I might have added more. Anyway, Kommander 3.5.10 is worth looking at.
As for me and a lot of colleagues, KDE4 doesn't fit our needs - no, I'm not going to list the features I'm missing or the things that went in a wrong direction (don't want to get my post removed) - thanks a lot for maintaining the KDE3 branch. I really enjoy working with it. Geared to professional work, but highly customizable, it's an ideal platform to get things done.

This attitude will ensure that KDE 4 never meets your needs. Drop the negativity, use bugzilla to make sure there are reports for the missing features, be patient, and you'll make KDE 4 better for you and for others.

No, his point is "Don't fix it if it works".

No, his point is that he prefers KDE 3.5 to KDE 4.1.

It is never the point of negativity. Imagine this: you're working on some, let's say, code file, editing it for example using kate. After you extracted the point of error, and successfully eliminated impossibilities, you save your code, pressing CTRL+S, but somehow at the same time KDE Nepomuk, or however it's spelled, emits some garbage over D-Bus, your plasma crashes, at the same time the child process crashes too, kate dies, and all your work is annihilated. Will you write a bugzilla report? Say yes, and you qualify in the same category as the guy who actually invented that D-Bus garbage; the category is amateurism.
python - if __name__ == "__main__": invalid syntax - What does if __name__ == "__main__": do?

When your script is run by passing it as a command to the Python interpreter,

python myscript.py

all of the code that is at indentation level 0 gets executed. Functions and classes that are defined are, well, defined, but none of their code gets run. Unlike other languages, there's no main() function that gets run automatically - the main() function is implicitly all the code at the top level.

In this case, the top-level code is an if block:

if __name__ == "__main__":
    ...

If your script is being imported into another module, its various function and class definitions will be imported and its top-level code will be executed, but the code in the then-body of the if clause above won't get run as the condition is not met.

As a basic example, consider the following two scripts:

# file one.py
def func():
    print("func() in one.py")

print("top-level in one.py")

if __name__ == "__main__":
    print("one.py is being run directly")
else:
    print("one.py is being imported into another module")

# file two.py
import one

print("top-level in two.py")
one.func()

if __name__ == "__main__":
    print("two.py is being run directly")
else:
    print("two.py is being imported into another module")

Now, if you invoke the interpreter as

python one.py

the output will be

top-level in one.py
one.py is being run directly

If you run two.py instead:

python two.py

you get

top-level in one.py
one.py is being imported into another module
top-level in two.py
func() in one.py
two.py is being run directly

Thus, when module one gets loaded, its __name__ equals "one" instead of "__main__".

What does the if __name__ == "__main__": do?
# Threading example
import time, thread

def myfunction(string, sleeptime, lock, *args):
    while True:
        lock.acquire()
        time.sleep(sleeptime)
        lock.release()
        time.sleep(sleeptime)

if __name__ == "__main__":
    lock = thread.allocate_lock()
    thread.start_new_thread(myfunction, ("Thread #: 1", 2, lock))
    thread.start_new_thread(myfunction, ("Thread #: 2", 2, lock))

What does if __name__ == "__main__": do?

__name__ is a global variable (in Python, global actually means on the module level) that exists in all namespaces. It is typically the module's name (as a str type). As the only special case, however, in whatever Python process you run, as in mycode.py:

python mycode.py

the otherwise anonymous global namespace has the value '__main__' assigned to its __name__. Thus, including the final lines

if __name__ == '__main__':
    main()

at the end of your mycode.py script - when it is the primary, entry-point module that is run by a Python process - will cause your script's uniquely defined main function to run.

Another benefit of using this construct: you can also import your code as a module in another script and then run the main function if and when your program decides:

import mycode
# ... any amount of other code
mycode.main()

When there are certain statements in our module (M.py) that we want executed when it is running as main (not imported), we can place those statements (test cases, print statements) under this if block. By default (when the module is running as main, not imported) the __name__ variable is set to "__main__", and when it is imported the __name__ variable gets a different value, most probably the name of the module ('M'). This is helpful for running different variants of a module together, and for separating their specific input & output statements, and also any test cases.

In short, use this 'if __name__ == "__main__"' block to prevent (certain) code from being run when the module is imported.
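As an aside, the thread module used in the example above is Python 2 only (it was renamed _thread in Python 3, which favours the higher-level threading module). A rough Python 3 equivalent of the same pattern might look like this - note the loop is bounded here so the sketch terminates, unlike the original while True:

```python
import threading
import time

def myfunction(name, sleeptime, lock, iterations=3):
    # Bounded loop instead of `while True` so the sketch terminates.
    for _ in range(iterations):
        with lock:                # acquire/release via context manager
            time.sleep(sleeptime)
        time.sleep(sleeptime)

if __name__ == "__main__":
    lock = threading.Lock()
    threads = [
        threading.Thread(target=myfunction, args=("Thread #: 1", 0.01, lock)),
        threading.Thread(target=myfunction, args=("Thread #: 2", 0.01, lock)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                  # unlike start_new_thread, we can wait for completion
```

The if __name__ == "__main__": guard plays exactly the role discussed above: importing this module defines myfunction but starts no threads.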
Put simply, __name__ is a variable defined for each script that tells you whether the script is being run as the main module or as an imported module. So if we have two scripts:

#script1.py
print "Script 1's name: {}".format(__name__)

and

#script2.py
import script1
print "Script 2's name: {}".format(__name__)

The output from executing script1 is

Script 1's name: __main__

And the output from executing script2 is:

Script 1's name: script1
Script 2's name: __main__

As you can see, __name__ tells us which code is the 'main' module. This is great, because you can just write code and not have to worry about structural issues like in C/C++, where, if a file does not implement a 'main' function then it cannot be compiled as an executable, and if it does, it cannot then be used as a library.

Say you write a Python script that does something great and you implement a boatload of functions that are useful for other purposes. If I want to use them, I can just import your script and use them without executing your program (given that your code only executes within the if __name__ == "__main__": context). Whereas in C/C++ you would have to portion out those pieces into a separate module that then includes the file. Picture the situation below; the arrows are import links. For three modules, each trying to include the previous module's code, there are six files (nine, counting the implementation files) and five links. This makes it difficult to include other code into a C project unless it is compiled specifically as a library. Now picture it for Python: you write a module, and if someone wants to use your code they just import it, and the __name__ variable can help to separate the executable portion of the program from the library part.

Consider:

if __name__ == "__main__":
    main()

It checks if the __name__ attribute of the Python script is "__main__".
In other words, if the program itself is executed, the attribute will be __main__, so the program will be executed (in this case the main() function). However, if your Python script is used as a module, any code outside of the if statement will be executed, so if __name__ == "__main__" is used just to check whether the program is being used as a module or not, and therefore decides whether to run the code.

There are a number of variables that the system (Python interpreter) provides for source files (modules). You can get their values anytime you want, so let us focus on the __name__ variable/attribute:

When Python loads a source code file, it executes all of the code found in it. (Note that it doesn't call all of the methods and functions defined in the file, but it does define them.) Before the interpreter executes the source code file, though, it defines a few special variables for that file; __name__ is one of those special variables that Python automatically defines for each source code file.

If Python is loading this source code file as the main program (i.e. the file you run), then it sets the special __name__ variable for this file to have the value "__main__". If this is being imported from another module, __name__ will be set to that module's name.

So, in your example in part:

if __name__ == "__main__":
    lock = thread.allocate_lock()
    thread.start_new_thread(myfunction, ("Thread #: 1", 2, lock))
    thread.start_new_thread(myfunction, ("Thread #: 2", 2, lock))

means that the code block:

lock = thread.allocate_lock()
thread.start_new_thread(myfunction, ("Thread #: 1", 2, lock))
thread.start_new_thread(myfunction, ("Thread #: 2", 2, lock))

will be executed only when you run the module directly; the code block will not execute if another module is calling/importing it, because the value of __name__ will not equal "__main__" in that particular instance. Hope this helps out.

It is a special check for when a Python file is called from the command line.
This is typically used to call a "main()" function or execute other appropriate startup code, like command-line argument handling for instance.

It could be written in several ways. Another is:

def some_function_for_instance_main():
    dosomething()

__name__ == '__main__' and some_function_for_instance_main()

I am not saying you should use this in production code, but it serves to illustrate that there is nothing "magical" about if __name__ == '__main__'. It is a good convention for invoking a main function in Python files.

The reason for

if __name__ == "__main__":
    main()

is primarily to avoid the import lock problems that would arise from having code directly imported. You want main() to run if your file was directly invoked (that's the __name__ == "__main__" case), but if your code was imported then the importer has to enter your code from the true main module to avoid import lock problems.

A side-effect is that you automatically sign on to a methodology that supports multiple entry points. You can run your program using main() as the entry point, but you don't have to. While setup.py expects main(), other tools use alternate entry points. For example, to run your file as a gunicorn process, you define an app() function instead of a main(). Just as with setup.py, gunicorn imports your code, so you don't want it to do anything while it's being imported (because of the import lock issue).

Consider:

print __name__

The output for the above is __main__.

if __name__ == "__main__":
    print "direct method"

The above condition is true and prints "direct method". If you import this module into another module, it doesn't print "direct method" because, while importing, __name__ is set to the first module's name.
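The multiple-entry-points idea above is usually paired with a conventional main() that returns an exit status; a minimal sketch of that idiom (the names are illustrative, not from any particular tool):

```python
import sys

def main(argv=None):
    # Accept argv as a parameter so tests and importers can call main()
    # without touching sys.argv.
    argv = sys.argv[1:] if argv is None else argv
    print("args:", argv)  # real work would go here
    return 0              # exit status for the shell

if __name__ == "__main__":
    exit_status = main()
    # a real script would typically finish with: sys.exit(exit_status)
```

Keeping all side effects inside main() is what makes the module safe to import, which is exactly the property setup.py- and gunicorn-style tools rely on.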
It is usually written for testing.

All the answers have pretty much explained the functionality. But I will provide one example of its usage, which might help clear up the concept further. Assume that you have two Python files, a.py and b.py. Now, a.py imports b.py. We run the a.py file, where the "import b.py" code is executed first. Before the rest of the a.py code runs, the code in the file b.py must run completely. In the b.py code there is some code that is exclusive to that file b.py, and we don't want any other file (other than the b.py file) that has imported b.py to run it. So that is what this line of code checks. If it is the main file (i.e., b.py) running the code, which in this case it is not (a.py is the main file running), only then does the code get executed.

if __name__ == '__main__':

We see if __name__ == '__main__': quite often. It checks if a module is being imported or not. In other words, the code within the if block will be executed only when the code runs directly. Here "directly" means "not imported". Let's see what it does using a simple code that prints the name of the module:

# test.py
def test():
    print('test module name=%s' % (__name__))

if __name__ == '__main__':
    print('call test()')
    test()

If we run the code directly via python test.py, the module name is __main__:

call test()
test module name=__main__
A Programming Language with Extended Static Checking

In this article, I'll look at a common problem one encounters when verifying programs: namely, writing loop invariants. In short, a loop invariant is a property of the loop which holds on entry, and is preserved by every iteration of the loop body (and hence still holds when the loop exits). Loop invariants can be tricky to get right but, without them, the verification will probably fail. Let's consider a very simple example:

define nat as int where $ >= 0

nat counter(int count):
    i = 0
    while i < count:
        i = i + 1
    return i

This program does not verify. In order to get it to verify, we need to add a loop invariant. The need for loop invariants arises from Hoare's rule for while-loops. The key issue is that the verifier does not know anything about any variable modified within a loop, other than what the loop condition and/or invariant states.

In our example above, the loop condition only tells us that i < count during the loop, and that i >= count after the loop (in fact, we can be more precise here but the verifier cannot). Knowing that i >= count is not enough to prove the function's post-condition (i.e. that i >= 0). This is because count is an arbitrary int which, for example, may be negative.

Therefore, to get our example to verify, we need a loop invariant that explicitly states i cannot be negative:

nat counter(int count):
    i = 0
    while i < count where i >= 0:
        i = i + 1
    return i

The loop invariant is specified on the while loop with the where keyword. In this case, it simply states that i is always >= 0. Whilst this might seem obvious to us, it is unfortunately not so obvious to the verifier! In principle, we could employ a simple form of static analysis to infer this loop invariant (although, currently, Whiley does not do this). Unfortunately, in general, we will need to write loop invariants ourselves.
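For comparison with mainstream languages: the same invariant can be checked dynamically with assertions, rather than proved statically as Whiley does. A Python sketch of the counter example, where the assert lines play the role of the where clause but only catch violations at run time:

```python
def counter(count):
    i = 0
    while i < count:
        assert i >= 0  # the loop invariant, checked on every iteration
        i = i + 1
    # invariant plus the negated loop condition give the postcondition:
    # the result is a natural number (i >= 0), even for negative count
    assert i >= 0
    return i
```

The difference is that Whiley's verifier establishes the property for all inputs before the program runs, whereas the asserts here only fail on the particular inputs that violate them.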
To explore a slightly more complex example, I’ve put together a short video which illustrates using Whiley to verify a program which sums a list of natural numbers. Finally, if you’re interested in trying this out for yourself, the easiest way to install Whiley is through the Eclipse Plugin. Have fun!

Hello! I think Whiley is a really interesting language. I love such concepts as verifying programs and deriving algorithms from specifications. Is it possible to define a type with Whiley in an algebraic specification style? Example:

functions
    new : stack;
    push : element * stack -> stack;
    empty : stack -> boolean;
    pop : stack -> stack;
    top : stack -> element;
axioms
    empty( new ) = true
    empty( push(x,s) ) = false
    top( push(x,s) ) = x
    pop( push(x,s) ) = s
preconditions
    pre: pop( s : stack ) = not empty(s)
    pre: top( s : stack ) = not empty(s)
end

Hi Jose, At the moment, specifying global axioms as you have is not supported. I do plan to add that in the future. In general, you can define a data type like so:

define Stack as [int] // stack is a list of ints
define EmptyStack as Stack where |$| == 0
define NonEmptyStack as Stack where |$| > 0

EmptyStack Empty():
    return []

NonEmptyStack push(Stack stack, int element):
    return stack + element

Stack pop(NonEmptyStack stack) ensures |$|==|stack|-1:
    last = |stack| - 1
    return stack[0..last]

int top(NonEmptyStack stack):
    last = |stack| - 1
    return stack[last]

That covers some of the axioms you specified, but not all. For example, showing that pop(push(s,x)) == s is not possible yet.

Hello Dave, It was very kind of you to reply so fast! I’m sure it will be a very useful feature. Thanks for the help. The axiom pop(push(s,x)) == s is actually the difficult one. Meyer, in his OOSC book, does not implement it with assertions within the STACK[G] class.
I’ve installed ‘Eclipse IDE for Java Developers 32 bits, Version: Juno Service Release 1, Build id: 20120920-0800’ and, when I tried to install the Whiley plugin, I found the following error message:

‘An error occurred while collecting items to be installed session context was:(profile=epp.package.java, phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=, action=). Artifact not found:. Artifact not found:.’

Can you help me? Thanks in advance. Kind regards, Jose

Interesting, well I haven’t actually tested the plugin on Juno yet. So, that might be the problem … I’ll investigate. Also, you can download the WDK and run the compiler from the command-line. You need to provide a command-line switch to get verification to work, e.g.:

bin/wyjc -X verification:enable=true file.whiley

Ok, I tried to install it on Juno at work and yes, I see the same problem. So, I’ll try to figure it out now … D

Ok, looks like it was a permissions problem. It has now installed successfully on my Juno Eclipse release at work (so you should try the install again). To get started once installed, you need to create a “Whiley Project”. Go to “File->New Project”, then scroll down to the bottom of the available project wizards and you should see the Whiley folder, and the new project wizard in there. Once you’ve created a Whiley Project, then add a Whiley Module to the src folder. At this point, you should be good to go. Note that the compiler only runs when you save the file, and it is a little slow… Let me know how you go!

Hi Dave! Thank you again! I’ve been able to install the plugin on Juno. I have my Whiley project and module ‘HelloWorld.whiley’ (inside a folder named ‘foo’) with the following source code:

package foo;
import * from whiley.lang.*

void ::main(System.Console sys):
    sys.out.println("Hello World")

The problem is trying to launch the program. I right-click on the Whiley module and select “Run Configurations”.
From there, I create a “Whiley Application” configuration and associate it with my project. But I don’t know which is the ‘Main class’ or Main Type. I’ve tried with ‘System’, ‘HelloWorld’ and ‘foo’. Also with ‘main’ and ‘Main’ and it doesn’t work. It should be something easy, I suppose.

Hey Jose, Ah, well … that’s not a bug. It’s a feature. The eclipse plugin is sketchy, but is just enough to show verification working. In fact, I can’t get the “hello world” example to compile under Eclipse [because the compiler cannot find the Whiley standard library]. If you want to run code, you’ll need to use the WDK for now (sorry).

Hi Dave, I’ve been trying to compile and run the ‘HelloWorld’ example with the WDK (release 0.3.18) on Ubuntu. When I try to compile, I get the following error message:

wyjc HelloWorld.whiley
/home/josemarivg/Whiley/wdk-v0.3.18/bin/wyjc: line 32: /home/josemarivg/Whiley/wdk-v0.3.18/bin/../bin/wy_common.bash: No such file or directory
Error: Could not find or load main class wyjc.WyjcMain

I opened ‘/wdk-v0.3.18/bin/wyjc’ and saw a reference to a ‘wy_common.bash’ file but, when I unpacked the .tgz, I didn’t find one. Thanks! Regards, Jose

Ah, a catalogue of teething problems … but thanks for being so patient!! In fact, someone else also reported this issue already, and I have corrected it in the download for v0.3.18. I’m thinking maybe you originally downloaded the wdk-v0.3.18.tgz a while ago when you first saw the blog post? In which case, downloading it again should resolve the problem (which basically was me forgetting for various reasons to include that file in the release). Just in case that doesn’t help, you can get the file from here and then just place it into the bin/ directory of the wdk.

My pleasure, Dave. I’ve downloaded the WDK again. I’m trying to use it now on a Windows machine with Cygwin installed (I’m not at home). I’ve set the PATH environment variable to point to the bin/ directory where the scripts are.
And when I execute the compiler (the ‘wyjc’ command), I get the error:

Library ‘wyjc’ not found

But, well, in the lib directory I can see all the required jars.

You know what, scrap all that I said below. Looking again at what you said, I think you’ve configured everything correctly. This is a problem with the wy_common.bash file. I didn’t actually write that file; an RA did, and I know he didn’t test on Cygwin. That must be the problem …

Ok, sounds like something is wrong with the configuration of PATH. Depending on how comfortable with the UNIX command-line you are, you can try a few things from a Cygwin X-Term:

1) Change to the wdk directory, e.g. with something like cd wdk-v0.3.18

2) Then try executing the command bin/wyjc -help. If this prints usage information, then you can run wyjc from there using bin/wyjc (basically this avoids the PATH issue by giving an explicit location for wyjc).

3) If that isn’t helping, or you want to figure out what’s up with the PATH variable, then try executing this from the command-line: echo $PATH. This will tell you what the current setting for PATH is. Check that it includes the location you specified, and:

* If PATH doesn’t contain the location you added, then let’s try making sure you’re in a proper shell. Run the command bash, which should appear to do little except change the nature of the command-line. Then, run echo $PATH again and see if what you want is there. If it is, try running wyjc and see if it works.

* If PATH did contain the location you were expecting, then it’s probably misconfigured somehow. You probably need to show me the contents of that variable before I can diagnose.

Anyway, it’s difficult to give you more explicit instructions because it depends on a lot of UNIX stuff, and how exactly your environment is configured. If you’re not too comfortable with the UNIX command-line, then perhaps you know someone who is?
Either way, let me know how you get on … I’m installing Cygwin now, and I’ll have to debug the problem … will get back to you on that. In the meantime, you can run directly using this command (from within the wdk directory):

java -cp "lib\wyc-v0.3.17.jar;lib\wyjc-v0.3.18.jar;lib\wyil-v0.3.18.jar;lib\wyone-v0.3.18.jar" wyjc.WyjcMain -wp lib\wyrt-v0.3.18.jar hello-world.whiley

(Replace hello-world.whiley with whatever whiley file you want to compile.) Additionally, to enable verification you need to supply the -X verification:enable=true switch before -wp ...

Ok, I have a temporary fix for you. If you change to the wdk directory, then execute bin/wyjc, it seems to work on my Cygwin install. I.e. something like this:

~$ cd wdk-v0.3.18
~/wdk-v0.3.18$ bin/wyjc somefile.whiley

Not quite sure why it doesn’t work in conjunction with the PATH variable, as that does work OK under UNIX. Hmmm, anyway … thanks for pointing out the problems!! I definitely need to test more on Cygwin … which I used to do a lot, but since I got a Mac Book it’s fallen by the wayside … Dave

Ok, fixed, but I’m not going to update the wdk-v0.3.18 download, and instead I’ll put the fix in the next release (assuming I don’t find any further problems with it). You can manually edit the file in question (bin/wy_common.bash). Early on it reads cygpath -pw $LIBDIR, which should become cygpath -m $LIBDIR. You can also download the raw file from here:

Please let me know if that works!!

[…] idea of a loop invariant. For some general background on writing loop invariants in Whiley, see my previous post. To recap the main points, here’s a simple function which requires a loop invariant to […]

[…] entry to the loop. This is quite a departure from the way we think about verifying While loops (see here and here for more on that). In fact, we could still require that the loop invariant holds on.
http://whiley.org/2013/01/29/understanding-loop-invariants-in-whiley/
This topic applies only to version 1 of Managed Extensions for C++. This syntax should only be used to maintain version 1 code. See Implicit Boxing for information on using the equivalent functionality in the new syntax.

Creates a managed copy of a __value class object.

__box(value-class identifier)

The newly created boxed object is a copy of the __value class object. Therefore, modifications to the value of the boxed object do not affect the contents of the __value class object. Here's an example that does boxing and unboxing:

// keyword__box.cpp
// compile with: /clr:oldSyntax
#using <mscorlib.dll>
using namespace System;

int main() {
   Int32 i = 1;
   System::Object* obj = __box(i);
   Int32 j = *dynamic_cast<__box Int32*>(obj);
}
http://msdn.microsoft.com/en-us/library/d6d8ft9s.aspx
A workflow (which is supposed to be launched on record change only) is being launched whenever a retrieve and update is made.

- Friday, 24 February 2012 5:04 PM

Hi all, I'm doing this:

public class CalculDelaisDemande : IPlugin
{
    // ... (plugin context and service setup elided in the original post)
    if (context.Depth != 1) return;
    try
    {
        using (var orgContext = new Microsoft.Xrm.Sdk.Client.OrganizationServiceContext(service))
        {
            if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
            {
                var entityDemande = (Entity) context.InputParameters["Target"];
                var demande = (from c in orgContext.CreateQuery<new_demande>()
                               where c.new_demandeId == entityDemande.Id
                               select c).FirstOrDefault();
                orgContext.UpdateObject(demande);
                orgContext.SaveChanges();
                .....

A workflow is associated with the entity and should be launched on "record fields change". But it is always launched, even though I do not change the value of the field. Why? Does anybody know about this? I could not find anything on this after googling. Thank you, Bernard Lessard

All Replies

- Saturday, 25 February 2012 12:48 AM

I think the OrganizationServiceContext sends all the fields back to the server on update, not just the changed ones. Since you called UpdateObject on the record, it will have sent all values back to the database, and this caused the workflow to fire. HTH, Sam

Dynamics CRM MVP | Inogic | news at inogic dot com
If this post answers your question, please click "Mark As Answer" on the post and "Mark as Helpful"

- Saturday, 25 February 2012 12:46 PM

Can you allow the workflow to fire, but in the first step check a condition and stop as cancelled if the fields you are interested in have not changed? (This would probably require storing the old value in another field to compare against, and updating the old one to match if the new one genuinely has changed, just like old-school CRM 4.0 workarounds to provide very limited "auditing" of changes.)

Hope this helps. Adam Vero, MCT
http://social.microsoft.com/Forums/en-AU/crm/thread/2089acb5-19cf-4791-a3be-29093ab3317d
What if I told you that you could use the same very performant code in Android, iOS or even in Flutter? In this article, we’ll see how to achieve this with Rust.

But why would we want something like this? Imagine that you have a mobile app that needs to process some audio in order to get some insights about the user, but you don’t want the audio to be sent to the server to be processed. You want to preserve the privacy of the user. In this kind of scenario it would make sense to avoid having to write one library for Android and another for iOS. That would save us from having to maintain two different codebases and would reduce the chance of introducing bugs.

That’s nice, but how could we do something like this? Enter Rust. With Rust, not only would you be able to share the same code among multiple platforms, but you could also take advantage of the boost of performance that you will get.

What are we going to do

We are going to write a simple shared Rust library and compile it for Android and iOS, and as a bonus, we will also write a Flutter plugin using the very same code. As you can see, the scope of this article is quite broad, so we’ll try to keep everything organized. You can also read this post while taking a look at the associated GitHub repository.

Scaffolding our project

Let’s start by creating a folder called rust-for-android-ios-flutter and create four folders in it (android, ios, flutter & rust):

mkdir rust-for-android-ios-flutter
cd rust-for-android-ios-flutter
mkdir ios android rust flutter

Once we have it, just cd into the rust folder and create a new Rust library called rustylib:

cd rust
cargo init --name rustylib --lib

This library will have only one function that will get a string as its argument and will return a new string. Basically, just a Hello, World!. But just think of it as a function that could work as the entry point to a more complex process completely written in Rust.
Let’s install some targets

In order to compile our rustylib library for Android and iOS we will need to have some targets installed on our machine:

# Android targets
rustup target add aarch64-linux-android armv7-linux-androideabi i686-linux-android x86_64-linux-android

# iOS targets
rustup target add aarch64-apple-ios armv7-apple-ios armv7s-apple-ios x86_64-apple-ios i386-apple-ios

Tools for iOS

For iOS we have to be sure that we have Xcode installed on our computer and the Xcode build tools already set up.

# install the Xcode build tools.
xcode-select --install

# this cargo subcommand will help you create a universal library for use with iOS.
cargo install cargo-lipo

# this tool will let you automatically create the C/C++11 headers of the library.
cargo install cbindgen

As you can see, we have also installed cargo-lipo and cbindgen.

Tools for Android

For Android, we have to be sure that we have correctly set up the $ANDROID_HOME environment variable. On macOS this is typically set to ~/Library/Android/sdk. It is also recommended that you install Android Studio and the NDK. Once you have everything installed, ensure that the $NDK_HOME environment variable is properly set. On macOS this should typically be set to ~/Library/Android/sdk/ndk-bundle.

Finally, we’re just going to install cargo-ndk, which handles finding the correct linkers and converting between the triples used in the Rust world and the triples used in the Android world:

cargo install cargo-ndk

Rust library configuration

The next step is to modify our Cargo.toml. Make sure it looks similar to this:

[package]
name = "rustylib"
version = "0.1.0"
authors = ["Roberto Huertas <roberto.huertas@outlook.com>"]
edition = "2018"
# See more keys and their definitions at

[lib]
name = "rustylib"
# this is needed to build for iOS and Android.
crate-type = ["staticlib", "cdylib"]

# this dependency is only needed for Android.
[target.'cfg(target_os = "android")'.dependencies]
jni = { version = "0.13.1", default-features = false }

iOS project

Now, let’s create an iOS project using Xcode. In iOS you can use 2 different types of user interface. As we want to show how to use both of them, let’s create two different kinds of projects.

Storyboard

We’re choosing Storyboard as our user interface and we’re naming the project rusty-ios-classic. Save it in the previously created ios folder.

SwiftUI

Let’s now create a new iOS project. But this time we’re going to select SwiftUI as our user interface and name it rusty-ios. Save it again in the ios folder. Your treeview should look similar to this one:

Writing our first Rust code

Now, go to the Rust project, open the lib.rs file and make sure it looks exactly like this:

use std::ffi::{CStr, CString};
use std::os::raw::c_char;

#[no_mangle]
pub unsafe extern "C" fn hello(to: *const c_char) -> *mut c_char {
    let c_str = CStr::from_ptr(to);
    let recipient = match c_str.to_str() {
        Ok(s) => s,
        Err(_) => "you",
    };
    CString::new(format!("Hello from Rust: {}", recipient))
        .unwrap()
        .into_raw()
}

#[no_mangle]
pub unsafe extern "C" fn hello_release(s: *mut c_char) {
    if s.is_null() {
        return;
    }
    CString::from_raw(s);
}

The #[no_mangle] attribute is vital here to avoid the compiler changing the name of the function. We want the name of the function to be exported as it is. Note also that we’re using extern "C". This tells the compiler that this function will be called from outside Rust and ensures that it is compiled using C calling conventions.

You may be wondering why on Earth we need this hello_release function. The key here is to take a look at the hello function. Using CString and returning the raw representation keeps the string in memory and prevents it from being released at the end of the function. If the memory were to be released, the pointer provided back to the caller would now be pointing to empty memory or to something else entirely.
In order to avoid a memory leak, because we now have a string that sticks around after the function has finished executing, we have to provide the hello_release function, which takes a pointer to a C string and frees that memory. It’s very important not to forget calling this function from the iOS code if we don’t want to get into trouble. If you look closely at this function, you’ll notice that it leverages the way memory is managed in Rust by using the function’s scope in order to free the pointer. This code will be the one that we’ll use for our iOS projects.

Compiling for iOS

Before we compile the library for iOS we’re going to generate a C header that will work as a bridge for our Swift code to be able to call our Rust code. We’ll be leveraging cbindgen for this:

cd rust
cbindgen src/lib.rs -l c > rustylib.h

This should generate a file called rustylib.h containing the following code:

#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

char *hello(const char *to);

void hello_release(char *s);

Note that cbindgen has automatically generated the C interface for us in a very convenient way. Now, let’s proceed to compile our Rust library so it can be consumed in any iOS project:

# it's important to not forget the release flag.
cargo lipo --release

Once the build finishes, open the target/universal/release folder and look for a file called librustylib.a. That’s the binary we’re going to use in our iOS projects.

Using the iOS binary

First, we’re going to copy our librustylib.a and rustylib.h files into the ios folder:

# we're still in the `rust` folder so...
inc=../ios/include
libs=../ios/libs
mkdir ${inc}
mkdir ${libs}
cp rustylib.h ${inc}
cp target/universal/release/librustylib.a ${libs}

You should be seeing a treeview like this one, with an include and a libs folder:

As you can imagine, having to manually do this every time you have to compile a new version of your Rust library would be very tedious.
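To make this ownership contract concrete, here is a small self-contained sketch of the same into_raw/from_raw round trip, exercised entirely from Rust. The function names (make_greeting, release_greeting) are made up for this illustration; they follow the same pattern as hello and hello_release above:

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Hands ownership of a heap-allocated C string to the caller.
// into_raw() "forgets" the CString so it is NOT freed on return.
unsafe extern "C" fn make_greeting(to: *const c_char) -> *mut c_char {
    let name = CStr::from_ptr(to).to_str().unwrap_or("you");
    CString::new(format!("Hello from Rust: {}", name))
        .unwrap()
        .into_raw()
}

// Reclaims ownership: from_raw() rebuilds the CString, and dropping
// it at the end of this scope frees the allocation.
unsafe extern "C" fn release_greeting(s: *mut c_char) {
    if !s.is_null() {
        drop(CString::from_raw(s));
    }
}
```

Every pointer produced by make_greeting must eventually go through release_greeting (and only once): skipping the call leaks the allocation, and freeing it from the Swift side with a different allocator would be undefined behaviour.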
Fortunately, you can automate this process by using a simple bash script like this one.

Now, the following is something that you will need to do only once (twice if you have created two iOS projects as the article described). Let’s open our rusty-ios-classic project in Xcode and do the following:

Add the librustylib.a file in General > Frameworks, Libraries and Embedded Content. Ensure that you see the name of the library there. If it doesn’t show, try it again. I’m not sure if it’s an Xcode bug, but most of the time you’ll need to add it twice for it to work correctly.

After that, go to the Build Settings tab, search for search paths and add the header and library search paths. You can use relative paths or use the $(PROJECT_DIR) variable to avoid hardcoding your local path.

Finally, let’s add the Objective-C bridging header. Search for bridging header in the Build Settings tab:

Repeat the same for our rusty-ios project in case you want to try both types of iOS projects.

In our rusty-ios project

If you’re using the project that uses SwiftUI as a user interface, then open the ContentView.swift file and make it look like this:

import SwiftUI

struct ContentView: View {
    let s = getName()
    var body: some View {
        Text(s)
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

func getName() -> String {
    let result = hello("Rob")
    let sr = String(cString: result!)
    // IMPORTANT: once we get the result we have to release the pointer.
    hello_release(UnsafeMutablePointer(mutating: result))
    return sr
}

Run the project in Xcode. In this case, you should be able to see Hello from Rust: Rob in the emulator or device you’re using to test the app 🚀.
In our rusty-ios-classic project

In case you’re using the project with a Storyboard user interface, open the ViewController.swift file and make it look like this:

import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let result = hello("Rob")
        let s_result = String(cString: result!)
        // IMPORTANT: once we get the result we have to release the pointer.
        hello_release(UnsafeMutablePointer(mutating: result))
        print(s_result)
    }
}

If everything is fine, you should be able to see Hello from Rust: Rob in the output pane. 😉

Android project

Let’s open Android Studio and create our Android project: File > New... > New Project > Basic Activity. Name it rusty-android and set the package name. We’ll choose Kotlin as our default language and minimum API 22. You should end up with a treeview similar to this one:

JNI

If you remember, when we discussed how to create our iOS project, we needed to create a C header working as a bridge. In Android we will leverage the Java Native Interface, or JNI for short, and we will expose our functions through it.

The way that JNI constructs the name of the function that will be called follows a specific convention: Java_<domain>_<qualified_classname>_<methodname>. In our case, that would be Java_com_robertohuertas_rusty_1android_MainActivity_hello (note that _1 represents underscores _ in the qualified class name).

As you can imagine, if we have to name our functions in such a specific way, this can pose a problem if we want to reuse this very same code in other Android apps. We have several alternatives, though. We can use some sort of proxy class that follows the same specific domain and class naming and include it in every project, or we can create an Android Library and use it everywhere. In our case, we’re going to create an Android Library.

Creating an Android Library

In Android Studio, File > New > New Module....
Then choose Android Library. Your Android Studio project pane should look similar to the one below:

Adding a little bit more Rust

Ok, so let’s create our Android functions by leveraging the JNI naming conventions. Let’s cd into our rust/src folder and create a new android.rs file:

cd rust/src
echo > android.rs

Once you have it, copy this code into it:

#![cfg(target_os = "android")]
#![allow(non_snake_case)]

use crate::hello;
use jni::objects::{JClass, JString};
use jni::sys::jstring;
use jni::JNIEnv;
use std::ffi::CString;

// NOTE: RustKt references the name rusty.kt, which will be the kotlin file exposing the functions below.
// Remember the JNI naming conventions.

#[no_mangle]
pub extern "system" fn Java_com_robertohuertas_rusty_1android_1lib_RustyKt_helloDirect(
    env: JNIEnv,
    _: JClass,
    input: JString,
) -> jstring {
    let input: String = env
        .get_string(input)
        .expect("Couldn't get Java string!")
        .into();
    let output = env
        .new_string(format!("Hello from Rust: {}", input))
        .expect("Couldn't create a Java string!");
    output.into_inner()
}

#[allow(clippy::similar_names)]
#[no_mangle]
pub extern "system" fn Java_com_robertohuertas_rusty_1android_1lib_RustyKt_hello(
    env: JNIEnv,
    _: JClass,
    input: JString,
) -> jstring {
    let java_str = env.get_string(input).expect("Couldn't get Java string!");
    // we call our generic func for iOS
    let java_str_ptr = java_str.as_ptr();
    let result = unsafe { hello(java_str_ptr) };
    // freeing memory from CString in the iOS function
    // if we called hello_release we wouldn't have access to the result
    let result_ptr = unsafe { CString::from_raw(result) };
    let result_str = result_ptr.to_str().unwrap();
    let output = env
        .new_string(result_str)
        .expect("Couldn't create a Java string!");
    output.into_inner()
}

Wait, what’s going on here? 🤔 We’d better stop for a moment and explain the previous code a little.
First of all, at the top of the file we can see two different directives:

#![cfg(target_os = "android")]
#![allow(non_snake_case)]

The first one will enable this code only when we’re compiling for Android, and the second one will allow us to name our functions however we want. Rust enforces snake_case, but we need to opt out of this in order to comply with the JNI naming conventions.

Ok, but then, why have you created two different functions (helloDirect and hello) and not just one? 🤔 Well, the answer is that I wanted to show you two ways of handling the Android part and let you decide which one is more convenient for your kind of project. The first function uses the jni crate without interacting with the lib.rs code (a.k.a. the iOS code) and the second one reuses the same code we have in the lib.rs file.

The difference is clear. The first function is way clearer and more succinct than the second one. Plus, in the second one, we have to deal with freeing the CString and with unsafe, while in the first one we don’t. So, what should we do? In my opinion, I would use the first one. This is a super simple example where we’re just building a string and returning it. Ideally, this logic should also be encapsulated in a pure Rust library that would be consumed by both the iOS and the Android code. These pieces of code should only be concerned with providing the glue to the iOS and Android lands via C headers and JNI, and that’s it. So, ideally, in our example, instead of duplicating the logic in the Java_com_robertohuertas_rusty_1android_1lib_RustyKt_helloDirect function we would call another library. Anyway, for the sake of knowing that you have several options, I think it’s good to explore all the approaches. 😜

One more important thing. Note that we’re exporting our functions with system instead of C. This is just to stop cbindgen from generating signatures for these Android functions.

But wait, this won’t work! We haven’t exposed our android module.
Add this to the lib.rs file:

// add it below the use declarations.
#[cfg(target_os = "android")]
mod android;

The cfg attribute will prevent the android module we just created from being compiled in case we’re not targeting Android.

Compiling for Android

Let’s get ready to compile our code for Android. Create a scripts folder inside the rust folder and add a file called android_build.sh with the following content:

#!/usr/bin/env bash

# set the version to use the library
min_ver=22

# verify before executing this that you have the proper targets installed
cargo ndk --target aarch64-linux-android --android-platform ${min_ver} -- build --release
cargo ndk --target armv7-linux-androideabi --android-platform ${min_ver} -- build --release
cargo ndk --target i686-linux-android --android-platform ${min_ver} -- build --release
cargo ndk --target x86_64-linux-android --android-platform ${min_ver} -- build --release

# moving libraries to the android project
jniLibs=../android/rusty-android/rusty-android-lib/src/main/jniLibs
# the crate is named rustylib, so the compiled shared object is librustylib.so
libName=librustylib.so

rm -rf ${jniLibs}

mkdir ${jniLibs}
mkdir ${jniLibs}/arm64-v8a
mkdir ${jniLibs}/armeabi-v7a
mkdir ${jniLibs}/x86
mkdir ${jniLibs}/x86_64

cp target/aarch64-linux-android/release/${libName} ${jniLibs}/arm64-v8a/${libName}
cp target/armv7-linux-androideabi/release/${libName} ${jniLibs}/armeabi-v7a/${libName}
cp target/i686-linux-android/release/${libName} ${jniLibs}/x86/${libName}
cp target/x86_64-linux-android/release/${libName} ${jniLibs}/x86_64/${libName}

Similarly to the previously suggested build script for iOS, this script will help us compile and move the needed files to our previously created Android Library.
If you execute this bash script, once the compilation process ends you should be able to find a treeview similar to this one, with a newly created folder called jniLibs containing several subfolders, one per architecture:

Writing the Android Library

Finally, we’re going to write our Android Library code and consume it from our Android application. Let’s create a new file under android/rusty-android-lib/src/main/java/com/robertohuertas/rusty_android_lib called rusty.kt. Note that the name must be the same that we used when defining our JNI functions in our Rust library. Copy the following code in it:

package com.robertohuertas.rusty_android_lib

external fun hello(to: String): String
external fun helloDirect(to: String): String

fun loadRustyLib() {
    System.loadLibrary("rustylib")
}

Here, we just declared two signatures mirroring our two Rust functions (remember the name we gave them in our Rust code) and a function that will be called in order to dynamically load the library. Note that we’re not using the file name of the library (librustylib.so) but the name we gave to the crate.

Compiling the Android Library

If you want to generate an .aar file ready to be consumed by any Android app, just use the Gradle tab of your Android Studio and look for a task called assemble. Right-click on it and select Run. This will compile your library and you’ll be able to find it at android/rusty-android-lib/build/outputs/aar/rusty-android-lib-release.aar.

Consuming the Android Library

In this case, we don’t need to consume the library as an .aar file because we have it in the same project. In case you want to know how to consume it like that, just take a look at the Android documentation. In our example, we just need to add the library as a dependency in our android/app/build.gradle:

dependencies {
    implementation project(':rusty-android-lib')
}

And then, in our Android Studio, choose File > Sync project with Gradle files.
Now, open the file content_main.xml located at android/app/src/main/res/layout and add an id to the TextView, so we can reference it later and programmatically change its value:

android:id="@+id/txt"

After this, we’re going to use our Android Library from our MainActivity.kt file located at android/app/src/main/java/com/robertohuertas/rusty_android. Open it and write this:

package com.robertohuertas.rusty_android

import android.os.Bundle
import com.google.android.material.snackbar.Snackbar
import androidx.appcompat.app.AppCompatActivity
import android.view.Menu
import android.view.MenuItem
import android.widget.TextView
import kotlinx.android.synthetic.main.activity_main.*
import com.robertohuertas.rusty_android_lib.*

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        setSupportActionBar(toolbar)

        loadRustyLib()

        findViewById<TextView>(R.id.txt).let {
            it?.text = hello("Rob")
        }

        var greeting2 = helloDirect("Rob Direct")

        fab.setOnClickListener { view ->
            Snackbar.make(view, greeting2, Snackbar.LENGTH_LONG)
                .setAction("Action", null).show()
        }
    }
}

We’re done! Run it in your emulator and you should see a text in the app saying Hello from Rust: Rob. Furthermore, if you click the button below, a snackbar will show Hello from Rust: Rob Direct. As you can see, we’re using both functions: the one calling the iOS function and the one using only the jni-rs crate.

Flutter Project

And now… let’s go for the bonus points! As we already have an Android Library and a working iOS project, making this work in a Flutter project shouldn’t be very difficult. The basic idea is to create a Flutter plugin package so we can share our code in several Flutter projects. So let’s start!
😀

# let's use the flutter folder
cd flutter
# create a plugin project, set its namespace and its name
flutter create --template=plugin --org com.robertohuertas rusty_flutter_lib
# now you'll have a folder called rusty_flutter_lib inside the flutter folder
# for convenience, we'll move everything to the parent directory (flutter)
# this last step is completely optional.
mv rusty_flutter_lib/{.,}* .
rm -rf rusty_flutter_lib

One of the cool things about Flutter plugin packages is that the template comes with an example project, so we can use it to test that our plugin works as expected. There's no need to create a new project to test our Flutter plugin. We're basically going to take our previous Android and iOS code and use it in our Flutter project. This should be very straightforward.

Importing the Android library

In order to use the Android Library that we built before, we're going to use Android Studio. Let's open the flutter/android project. Then, in order to import the Android Library, select File > New... > New Module.
import .JAR/.AAR Package:

And use the previously generated .AAR Package path:

By doing this, we should see a new folder called rusty-android-lib-release with our .aar package inside, along with some other files:

Finally, open the build.gradle file in the flutter/android folder and add a new dependency:

dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
    // this is the line to add, including the directory holding our .aar package:
    implementation fileTree(include: '*.aar', dir: 'rusty-android-lib-release')
}

Adding the Android platform code

Open flutter/android/src/main/kotlin/com/robertohuertas/rusty_flutter_lib/RustyFlutterLibPlugin.kt in your favorite IDE and replace the code in it with this one:

package com.robertohuertas.rusty_flutter_lib

// importing the Android library
import com.robertohuertas.rusty_android_lib.*
import io.flutter.plugin.common.MethodCall
import io.flutter.plugin.common.MethodChannel
import io.flutter.plugin.common.MethodChannel.MethodCallHandler
import io.flutter.plugin.common.MethodChannel.Result
import io.flutter.plugin.common.PluginRegistry.Registrar

class RustyFlutterLibPlugin: MethodCallHandler {
    companion object {
        @JvmStatic
        fun registerWith(registrar: Registrar) {
            val channel = MethodChannel(registrar.messenger(), "rusty_flutter_lib")
            channel.setMethodCallHandler(RustyFlutterLibPlugin())
            // dynamically loading the android library
            loadRustyLib()
        }
    }

    override fun onMethodCall(call: MethodCall, result: Result) {
        when {
            call.method == "getPlatformVersion" ->
                result.success("Android ${android.os.Build.VERSION.RELEASE}")
            call.method == "getHello" -> {
                val to = call.argument<String>("to")
                if (to == null) {
                    result.success("No to parameter found")
                } else {
                    // we're using the helloDirect function here
                    // but you could also use the hello function, too.
                    val res = helloDirect(to)
                    result.success(res)
                }
            }
            else -> result.notImplemented()
        }
    }
}

Importing the iOS code

Importing the iOS code is fairly easy:

# copy the header to the Classes folder
cp ../rust/rustylib.h ios/Classes
# create a new libs folder
mkdir ios/libs
# copy the universal library into the libs folder
cp ../rust/target/universal/release/librustylib.a ios/libs/

Then open the rusty_flutter_lib.podspec file and add this line:

s.ios.vendored_library = 'libs/librustylib.a'

Adding the iOS platform code

Open the flutter/ios/Classes/RustyFlutterLibPlugin.swift file and add the following code:

import Flutter
import UIKit

public class SwiftRustyFlutterLibPlugin: NSObject, FlutterPlugin {
    public static func register(with registrar: FlutterPluginRegistrar) {
        let channel = FlutterMethodChannel(name: "rusty_flutter_lib", binaryMessenger: registrar.messenger())
        let instance = SwiftRustyFlutterLibPlugin()
        registrar.addMethodCallDelegate(instance, channel: channel)
    }

    public func handle(_ call: FlutterMethodCall, result: @escaping FlutterResult) {
        if (call.method == "getPlatformVersion") {
            result("iOS " + UIDevice.current.systemVersion)
        } else if (call.method == "hello") {
            let res = hello("Rob")
            let sr = String(cString: res!)
            hello_release(UnsafeMutablePointer(mutating: res))
            result(sr)
        } else {
            result("No method found")
        }
    }
}

Connect the API and the platform code

Open the flutter/lib/rusty_flutter_lib.dart file and add a new static method in the RustyFlutterLib class:

static Future<String> hello({String to}) async {
    final String greetings = await _channel.invokeMethod('hello', {'to': to});
    return greetings;
}

Note that the named parameter is declared as {String to}; writing {to: String} would instead declare an untyped parameter named to with String as its default value.

Testing the Flutter example app

Let's open the flutter/example/lib/main.dart file and consume our recently created Flutter package:

import 'package:flutter/material.dart';
import 'dart:async';
import 'package:flutter/services.dart';
import 'package:rusty_flutter_lib/rusty_flutter_lib.dart';

void main() => runApp(MyApp());

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  String _platformVersion = 'Unknown';
  String _greeting = '';

  @override
  void initState() {
    super.initState();
    initPlatformState();
  }

  // Platform messages are asynchronous, so we initialize in an async method.
  Future<void> initPlatformState() async {
    String platformVersion;
    String greeting;
    // Platform messages may fail, so we use a try/catch PlatformException.
    try {
      platformVersion = await RustyFlutterLib.platformVersion;
      greeting = await RustyFlutterLib.hello(to: 'Rob');
    } on PlatformException {
      platformVersion = 'Failed to get platform version.';
      greeting = 'Failed to get hello';
    }

    // If the widget was removed from the tree while the asynchronous platform
    // message was in flight, we want to discard the reply rather than calling
    // setState to update our non-existent appearance.
    if (!mounted) return;

    setState(() {
      _platformVersion = platformVersion;
      _greeting = greeting;
    });
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Plugin example app'),
        ),
        body: Center(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: <Widget>[
              Text(
                _greeting,
                style: TextStyle(fontSize: 20),
              ),
              Text('Running on: $_platformVersion\n'),
            ],
          ),
        ),
      ),
    );
  }
}

After that, go to the example/ios folder and execute this:

pod install

Then, open example/android/app/build.gradle and change the minSdkVersion from 16 to 22, as our library was built with this setting, too. And finally, open the example/pubspec.yaml file and add the line below to avoid this issue when building the iOS version:

version: 1.0.0+1

Congratulations on arriving here! 👏 Let's run the example app and see if it's working. In Android you should see something similar to the screenshot below:

Similarly, in iOS:

Thanks for finishing this long read! I hope you found it useful! 😊

Bibliography

- Building and Deploying a Rust library on iOS
- Building and Deploying a Rust library on Android
- Rust on iOS
- Rust on Android
- cargo-ndk
- jni-rs
- JNI tips
- Create an Android library

--

Originally published at robertohuertas.com on October 27, 2019.

Discussion (19)

Great article! Flutter (Dart) has experimental FFI support that should allow you to integrate Rust without the intermediate JNI step.

You're right! I'll probably dedicate a specific post to that. As it is experimental and the API can still change, I thought it would be better to focus on a more stable way to integrate Rust with Flutter. Thanks for your insight! 😊

I was thinking the same thing. It's a shame that most of the portability to Flutter comes from wraps around Android code, while it's possible to achieve the same in a cleaner and more efficient way.

I love what Rust is doing in the multiplatform space (especially with Wasm).
Thank you for highlighting more of what's possible. Have you also looked into Kotlin Multiplatform? It can also output an iOS framework and a C library. On Android, it's nice because you can benchmark whether your code would work just fine in JVM-land (ART) or if you'd be better off compiling to native code and dealing with JNI overhead. Jake Wharton gave a talk that broached this topic at Droidcon NYC 2019: droidcon.com/media-detail?video=36... Rust may outperform compiled Kotlin code, but it's something to keep in mind.

Well done! I love seeing stuff like this. Rust is the perfect language for bridging all of these gaps, IMHO.

Very interesting indeed, well done! I wonder what setbacks you see in going multiplatform with Rust. I have seen non-conventional languages used for write-once-run-everywhere before, and even though it looks cool, how maintainable is this in terms of finding developers willing to work with such a project, third-party libraries, community? I see it as a cool experiment but not something you could do in the long run.

Hi Javier, I certainly wouldn't write a whole app using this approach, but for a very specific need in which you want to share code in the form of a native library, I would. This, of course, would ultimately depend on the specifics of your team and your requirements. I don't think it would be a difficult task to find some devs interested in maintaining this part of the app. There's always someone willing to tackle these kinds of complexities. The alternative in these kinds of cases is having a dedicated team per platform, or doing it in C++. Code duplication is costly, also in terms of possible bugs, and if you're willing to jump into C++, then why not use a safer programming language if possible. As usual, there's no single correct answer, and it's ultimately you who has to decide, considering lots of factors, whether it would be suitable for you or not.
Great Read :)

Thank you for this interesting post. I'm currently developing apps with Flutter and Dart. Would I see a noticeable improvement on mobile by writing some code in Rust instead of Dart? What are the real benefits here?

The main objective of the article was to show that you could share the same code between different platforms, including Flutter. That being said, I guess it would only make sense to use Rust for a very intense process or calculation, so you could leverage Rust's memory management and the ability to work in a parallel way without having to use isolates.

Did I overread it, or did you forget to mention that Xcode is only available on an Apple computer?

I didn't mention it as I thought it was obvious. But maybe I should, in case some people don't know this. Thanks for your comment.

Excellent article. I wonder if we can share a network layer this way between domains?

Fantastic article, Rust is powerful!!!

Noice! How about adding a NativeScript plugin too 🤓
https://dev.to/robertohuertasm/rust-once-and-share-it-with-android-ios-and-flutter-286o
Putting two strings together

In Python, there are numerous techniques to combine string values. The '+' operator is the simplest way to combine two string values in Python. To learn how to merge two strings, create a Python script with the following. Two string values are allocated to two variables, with a third variable used to store the joined values, which will be printed later.

string1 = "Code"
string2 = "Underscored"
joined_string = string1 + string2
print(joined_string)

After running the script from the editor, the following output will show. The words "Code" and "Underscored" are combined here, and the result is "CodeUnderscored."

Format floating point in the string

In programming, a floating-point number is necessary to generate fractional values, and formatting a floating-point number is sometimes required. There are numerous ways to format a floating-point number in Python. The following script formats a floating-point value using string formatting and string interpolation. The format() function with a format width is used in string formatting, and string interpolation uses the percent (%) symbol with the format width. In both cases the specification 5.2f requests a minimum field width of five characters with two digits after the decimal point.

# Use of String Formatting
float_var_one = 863.2378
print("{:5.2f}".format(float_var_one))

# Use of String Interpolation
float_var_two = 863.2378
print("%5.2f" % float_var_two)

Raising a number to a power

There are numerous ways to calculate a_var^n_var in Python. Three techniques to calculate a_var^n_var are shown in the script below: the double-asterisk (**) operator, the pow() function, and the math.pow() function. Numeric values are used to initialize a_var and n_var. The ** operator and pow() are used to calculate the power of integer values.
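As a quick sanity check (not part of the original script), the two integer-based techniques always agree with each other:

```python
# Quick check that the ** operator and the built-in pow()
# agree for integer bases and exponents.
a_var, n_var = 4, 3

by_operator = a_var ** n_var     # exponentiation operator
by_builtin = pow(a_var, n_var)   # built-in pow() function

print(by_operator, by_builtin)   # 64 64
```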
math.pow() can be used to calculate the power of fractional values, as seen in the script's last section.

import math

# Assign values to a_var and n_var
a_var = 4
n_var = 3

# Method One: the ** operator
power_var = a_var ** n_var
print("%d to the power %d is %d" % (a_var, n_var, power_var))

# Method Two: the built-in pow() function
power_var = pow(a_var, n_var)
print("Value of %d to the power %d : %d" % (a_var, n_var, power_var))

# Method Three: math.pow(), which also handles fractional values
power_var = math.pow(2, 6.5)
print("Value of 2 to the power 6.5 : %5.2f" % power_var)

Using boolean data types

The following script illustrates various applications of Boolean types. The first output will report the value of val_one, the Boolean value True. Only zero yields False as a Boolean value, while all positive and negative numbers yield True, so the second and third outputs print True for the positive and negative numbers. The fourth output displays False for 0, and the fifth output also prints False because the comparison 6 < 3 is false.

# Boolean value
val_one = True
print(val_one)

# Number to Boolean
num_var = 10
print(bool(num_var))
num_var = -5
print(bool(num_var))
num_var = 0
print(bool(num_var))

# Boolean from comparison operator
val_1 = 6
val_2 = 3
print(val_1 < val_2)

Use of the If else clause

The following Python script demonstrates how to utilize a conditional statement. The if-else statement is declared slightly differently in Python than in other languages. Unlike other languages, curly brackets are not required to define the if-else block in Python, but the indentation must be used appropriately, or the script will fail. The script uses a simple if-else statement to check whether the value of the number variable is greater than or equal to 80 or not. A colon (:) after the 'if' and 'else' clauses marks the beginning of each block.
# Assigning a numeric value
num_var = 85

# Check whether the value is more than or equal to 80
if (num_var >= 80):
    print("You are successful")
else:
    print("You have failed")

Using the AND and OR operators

The following script demonstrates the AND and OR operators in a conditional statement. The AND operator returns True if both conditions are true, and the OR operator returns True if either of the two conditions is true. Two floating-point numbers are taken as the Computer and theory marks, and both AND and OR operators are used in the 'if' statement. According to the condition, the 'if' statement will return True if the Computer marks are greater than or equal to 40 and the theory marks are greater than or equal to 30, or if the sum of the Computer and theory marks is greater than or equal to 80.

# Take Computer marks
computer_marks = float(input("Enter the student's Computer marks: "))

# Input the student's theory marks
theory_marks = float(input("Enter the student's theory marks: "))

# Using the AND and OR operators, determine whether the condition is met.
if (computer_marks >= 40 and theory_marks >= 30) or (computer_marks + theory_marks) >= 80:
    print("\nYou are successful")
else:
    print("\nYou have failed")

switch case statement

Python does not have a switch-case statement like other programming languages, but a custom function can be used to create one. In the following script, the student_info() function is built to mimic a switch-case expression. The function has a single parameter and a switcher dictionary named info_dic. The value of the function parameter is checked against each dictionary index. If a match is found, the function returns the corresponding value of that index; otherwise, it returns the second argument passed to info_dic.get().
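A variation on the same idea (a sketch, not taken from the original article) is a dictionary whose values are callables, which lets each "case" run its own code instead of returning a fixed string:

```python
# Emulating switch-case with a dictionary of callables.
# Each branch is a small function; .get() supplies the default case.
def ken():
    return "Student Name: Ken White"

def sky():
    return "Student Name: Sky Smith"

def rob():
    return "Student Name: Rob Joy"

dispatch = {"S001": ken, "S002": sky, "S003": rob}

def student_lookup(ID):
    # the lambda plays the role of the default "nothing" branch
    return dispatch.get(ID, lambda: "nothing")()

print(student_lookup("S002"))  # Student Name: Sky Smith
print(student_lookup("S999"))  # nothing
```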
# Switcher for implementing switch-case options
def student_info(ID):
    info_dic = {
        "S001": "Student Name: Ken White",
        "S002": "Student Name: Sky Smith",
        "S003": "Student Name: Rob Joy",
    }
    '''The matching value is returned if a match is found,
       and "nothing" is returned if no match is found.'''
    return info_dic.get(ID, "nothing")

# Take the Student ID
ID = input("Enter the Student ID: ")
# Print the output
print(student_info(ID))

Using a While Loop

The following shows how to utilize a while loop in Python. A colon (:) establishes the loop's starting block, and all loop statements must be indented appropriately; otherwise, an indentation error will show. In the following script, the count value is set to 1 and used in the loop. The loop will iterate seven times, printing the count value each time. The count value is incremented by one in each iteration so the loop eventually reaches its termination condition.

# Initialize counter
count = 1
# Iterate the loop 7 times
while count < 8:
    # Print the count value
    print("The current count value: %d" % count)
    # Increment the count
    count = count + 1

Using the "for Loop"

In Python, the for loop is used for a variety of applications. The starting block of this loop must be declared with a colon (:), and the statements must be indented properly. A list of laptop names is defined in the following script, and a for loop iterates over and prints each item in the list. The len() method counts the total number of items in the list and sets the limit of the range() function.

# Initialize the list
laptops = ["HP", "DELL", "Chromebook", "Lenovo", "IBM", "Apple", "Toshiba"]
print("The trendiest laptops in the market are:\n")
# Iterating through the list above using a for loop
for lap in range(len(laptops)):
    print(laptops[lap])

Run a Python script from another Python script

It is occasionally necessary to use the script of one Python file in another Python file.
It’s simple to accomplish, just like importing any module with the import keyword. Two variables are initialized with string values in the tour.py file. This file is imported with the alias ‘t’ in the implementation.py file. It is where you’ll find a list of month names. The flag variable is used to print the value of the local_tourism variable for June and July only once. The overseas_tourism variable’s value will be printed for the month of ‘December.’ When the else section of the if-else if-else sentence is completed, the other nine-month names will be printed. # tour.py # Initialize values local_tourism = "Summer Vacation" overseas_tourism = "Winter Vacation" # implementation.py # Another Python script should be imported. import tour as t # months' list initialization months_var = ["Jan", "Feb", "March", "April", "May", "June", "July", "Aug", "Sept", "Oct", "Nov", "Dec"] # One-time printing of the summer vacation flag variable flag = 0 # Iterating a list using a for loop for m in months_var: if m == "June" or m == "July": if flag == 0: print("Now",t.local_tourism) flag = 1 elif month == "Dec": print("Now",t.overseas_vacation) else: print("The month currently is",m) Using Regex In Python, a regular expression, or regex, is used to match or search for and replace any part of a string based on a pattern. In Python, there’s the module used to use a regular expression. The script below demonstrates how to use regex in Python. The script’s pattern will match strings with a capital letter as the first character. The pattern will be matched with a string value using the match() method. A success message will be printed if the method returns true; else, an instructional message will be printed. # Importing the re module import re # Taking any data string. string = input("Entering a string value: ") # Create a search pattern. 
pattern = '^[A-Z]'

# Match the pattern against the input value
result_var = re.match(pattern, string)

# Print a message based on the return value
if result_var:
    print("The input value begins with a capital letter.")
else:
    print("The input value must begin with a capital letter.")

In the following output, the script is run twice. The match() method finds no match in the first execution and a match in the second execution.

Application of getpass

getpass is a handy Python module for gathering password input from the user. The getpass module is demonstrated in the following script. The getpass() method reads the input without echoing it to the screen, and the 'if' statement compares the input value to the defined password. If the password matches, the message "You are authenticated" will appear; otherwise, "You are not authenticated" will appear.

# import getpass module
import getpass

# Take password from the user
passwd = getpass.getpass('Password:')

# Check the password
if passwd == "codeunderscored":
    print("You are authenticated")
else:
    print("You are not authenticated")

When the script is run from the terminal, the input value is not displayed, just as with other Linux password prompts. As seen in the output, the script is run twice from the terminal: once with an invalid password and once with a valid password.

Application of the date format

In Python, the date value can be formatted in various ways. The following script uses the datetime module to set the current date and a custom date. The current system date is read using the today() function. The formatted date value is then printed using the date object's various property names. The next section of the script demonstrates how to assign and print a custom date value.

from datetime import date

# Read the current date
date_today = date.today()

# Print the formatted date
print("Today is: %d-%d-%d" % (date_today.day, date_today.month, date_today.year))

# Creation of a custom date
custom_date = date(2022, 5, 10)
print("The specified date is:", custom_date)

After running the script, the following output will show.

Adding or removing an item from a list

Python's list object is used to solve a variety of problems, and Python includes several built-in functions for working with the list object. The following example demonstrates how to add and remove items from a list. The script declares a list of four elements. The insert() method adds a new item in the list's second position, and the remove() method finds and deletes a specific item from the list. The list is printed after the insertion and after the deletion.

# Declaration of computer companies
comp_companies = ["Apple", "Google", "Uber", "IBM"]

# Insert an item in the 2nd position
comp_companies.insert(1, "Lenovo")

# Displaying the list after inserting
print("The Computer Companies list after insert:")
print(comp_companies)

# Removing an item from the list
comp_companies.remove("Google")

# Print the list after delete
print("The Computer Companies list after delete:")
print(comp_companies)

After running the script, the following output will show.

List comprehension

In Python, a list comprehension builds a new list from a string, tuple, or another list. The same goal can be accomplished with a for loop or the lambda function. The script below demonstrates two different applications of list comprehension. First, a list comprehension turns a string value into a list of characters. Then a tuple is turned into a list in the same manner.

# Using list comprehension, make a character list
code_char_list = [char for char in "codeunderscored"]
print(code_char_list)

# Define a tuple of companies
comp_companies = ("Apple", "Google", "Uber", "IBM")

# Using list comprehension, create a list from a tuple
companies_list = [site for site in comp_companies]
print(companies_list)

Adding and searching data in a dictionary

Like the associative array in other programming languages, the dictionary object is used in Python to store multiple data items. The following script adds a new item to the dictionary and searches for an item. The script declares a dictionary of student information, with the index containing the student ID and the value containing the student name. After that, a new student record is added at the end of the dictionary. A student ID is taken as input for the search, and a 'for' loop with an 'if' condition traverses the dictionary's indexes looking for the input value.

# Define a dictionary
student_info = {'67': 'Tom Keen', '35': 'Ali Flex',
                '16': 'Sam Mike', '23': 'Ann White',
                '69': 'Joy Brown'}

# Append a new record (using a new key so no existing entry is overwritten)
student_info['45'] = 'Ferdous Smiles'

print("The student names are:")
# Print the values of the dictionary
for stu in student_info:
    print(student_info[stu])

# Take a student ID as input to search
search_id = input("Enter the student's ID: ")

# Search for the ID in the dictionary
for stu in student_info:
    if stu == search_id:
        print(student_info[stu])
        break

Conclusion

The example scripts we have looked at in this article are must-knows for any beginner who intends to excel in Python. We hope the examples on this page will help you quickly accelerate your Python learning.
https://www.codeunderscored.com/python-scripts-beginners-guide/
These lists identify all of the APIs which are obsolete in the .NET Framework V2.0. These lists are meant to serve as a simple reference for obsoleted members and types. Where applicable, an alternate or improved version of the API that provides a more robust set of functionality is listed. If no alternative is suggested then there is no replacement for the API. It is perfectly valid to continue to use and consume a type/member which is obsoleted as a warning. In each list, there are two separate counts specified for each assembly or namespace: Note: only assemblies that contain obsoleted types or members are listed. Obsolete List: By Assembly Obsolete List: By Namespace
http://msdn.microsoft.com/es-es/netframework/aa497286(en-us).aspx
ICalendarObjects

Since: BlackBerry 10.0.0

#include <bb/pim/calendar/ICalendarObjects>

To link against this class, add the following line to your .pro file:

LIBS += -lbbpim

The ICalendarObjects class represents a container for iCalendar objects. This class holds objects that are constructed by functions that read data in an iCalendar file, such as CalendarService::readICalendarFile() and CalendarService::retrieveICalendarAttachment(). You must parse an iCalendar file using these types of functions before you can retrieve event information from it.

This class contains information that represents both events and tasks (or to-dos). You can call events() to retrieve a QList of events, and you can manipulate the events in an ICalendarObjects object using functions such as addEvent(), setEvents(), and resetEvents(). Similarly, you can call todos() to retrieve a QList of tasks, and you can manipulate the tasks using functions such as addTodo(), setTodos(), and resetTodos().

Public Functions

Constructs a new ICalendarObjects. Since: BlackBerry 10.0.0

Copy constructor. This function constructs an ICalendarObjects containing exactly the same values as the provided ICalendarObjects. Since: BlackBerry 10.0.0

Destructor. Since: BlackBerry 10.0.0

void
Adds an event to the list of iCalendar events. This function adds a new event at the end of the existing list of iCalendar events. Since: BlackBerry 10.0.0

void
Adds a task (to-do) to the list of iCalendar tasks. This function adds a new task at the end of the existing list of iCalendar tasks. Since: BlackBerry 10.0.0

bb::pim::message::AttachmentKey
Retrieves the attachment ID that provided the iCalendar objects. This function returns the ID for the attachment that provided the iCalendar objects. The ICalendarObjects instance returned by CalendarService::retrieveICalendarAttachment() will have a non-zero value if the message has an iCalendar attachment that is not yet on the device.
It's possible to request the download of the attachment by calling bb::pim::message::MessageService::downloadAttachment().
Returns: The attachment ID that provided the iCalendar objects. Since: BlackBerry 10.3.0

QList< CalendarEvent >
Retrieves the events in the iCalendar file. This function returns the calendar events that were parsed from the iCalendar file. Events in an iCalendar file are specified using the VEVENT identifier.
Returns: A list of events from the iCalendar file. Since: BlackBerry 10.0.0

bool
Indicates whether this ICalendarObjects is valid. This function determines whether the attributes of this ICalendarObjects object have acceptable values.
Returns: true if this ICalendarObjects is valid, false otherwise. Since: BlackBerry 10.0.0

ICalendarObjects &
Assignment operator. This operator copies all values from the provided ICalendarObjects into this ICalendarObjects.
Returns: A reference to this ICalendarObjects. Since: BlackBerry 10.0.0

void
Removes all events. This function clears the list of iCalendar events. Since: BlackBerry 10.0.0

void
Removes all tasks (to-dos). This function clears the list of iCalendar tasks. Since: BlackBerry 10.0.0

void
Sets the attachment ID that provided the iCalendar objects. This function assigns an attachment ID to this ICalendarObjects object. Since: BlackBerry 10.3.0

void
Sets the list of events. This function changes the list of iCalendar events to the provided set of events. Since: BlackBerry 10.0.0

void
Sets the list of tasks (to-dos). This function changes the list of iCalendar tasks to the provided set of tasks. Since: BlackBerry 10.0.0

QList< ICalendarTodo >
Retrieves the tasks (to-dos) in the iCalendar file. This function returns the tasks (to-dos) that were parsed from the iCalendar file. Tasks in an iCalendar file are specified using the VTODO identifier.
Returns: A list of tasks from the iCalendar file. Since: BlackBerry 10.0.0
http://developer.blackberry.com/native/reference/cascades/bb__pim__calendar__icalendarobjects.html
In my last post I tried to outline some of the contrasts between the popular Python and Clojure. A repeated comment from the Python community was 'Why dont you let the code speak for itself, go to Rosetta'... So I did, and this is what I found.

The Python community is very active and sizeable, which is good. Quite a few of them let me know that the examples which I had picked from the PyEuler project were not showing off Python's best side, and looking at their counter-examples I cannot deny that. On the encouragement of several Python users I've now visited Rosetta Code for the first time, and here I'll share my experiences, solving 3 of their challenges.

If you read the comments in the last post you'll see that tempers tend to rise when you're contrasting languages, and that's not what I'm going for. I want this to be a fun apples-to-apples comparison, where hopefully both Python and Clojure users take away something useful. But please remember before you start ripping my comment-box to pieces: inverting the example does not always invert the conclusion. If we determine that certain codepaths are error-prone and 'dangerous', providing a safe/correct example won't necessarily change the principle, so let's try to keep the debate principle-oriented. Makes sense?

Now, I won't claim any expertise on the Rosetta project; I visited it yesterday for the first time. It's basically a place where you can go and show off solutions in every programming language you can think of, which is quite insightful. I'll try to comment on size, readability, boilerplate and correctness in the following examples. My method was simple: I saw a link to a task from the front page, solved that, clicked around until I found something that had a Python solution, solved that, and so on.

If the name rings a bell and you can't quite place it, perhaps it's because it relates to Mersenne primes, which were mentioned on Slashdot earlier this week.
The 48th number was found, winning some $100,000 USD for a lucky (patient?) university. Mathematically speaking, the test is this: Lucas-Lehmer Test: for p an odd prime, the Mersenne number 2^p − 1 is prime if and only if 2^p − 1 divides S(p − 1), where S(n + 1) = (S(n))^2 − 2 and S(1) = 4. So it's an iterative walk through S to determine if p is a Mersenne prime; it's some big-integer math which (until the next release) is a bit slow on the JVM. We'll start with the Python team's solution:

from sys import stdout
from math import sqrt, log

def is_prime(p):
    if p == 2: return True  # Lucas-Lehmer test only works on odd primes
    elif p <= 1 or p % 2 == 0: return False
    else:
        for i in range(3, int(sqrt(p)) + 1, 2):
            if p % i == 0: return False
        return True

def is_mersenne_prime(p):
    if p == 2:
        return True
    else:
        m_p = (1 << p) - 1
        s = 4
        for i in range(3, p + 1):
            s = (s * s - 2) % m_p
        return s == 0

precision = 20000
long_bits_width = precision * log(10) / log(2)
upb_prime = int(long_bits_width - 1) / 2
upb_count = 45

print " Finding Mersenne primes in M[2..%d]:" % upb_prime

count = 0
for p in range(2, upb_prime + 1):
    if is_prime(p) and is_mersenne_prime(p):
        print "M%d" % p,
        stdout.flush()
        count += 1
    if count >= upb_count:
        break
print

Weighing in at 36 lines, Team Python has produced a working solution. In the last post we were specifically looking at Python's functional programming (FP) capabilities, but since this is a comparison in a broader sense I won't deduct points for mutability and/or side effects :) Methodically speaking their prime tester is similar to mine; there's a couple of special cases but otherwise we look for divisors up until the square root of n. The biggest difference in both testers is that Python code often relies on breaking out of nested loops when certain conditions occur, but apart from that we're using the same approach. To newcomers their translation of 2^p - 1 to (1 << p) - 1 might be a little confusing, but it's a great performance booster. << means bit-shift-left and n << x = n * 2^x. So, using the same methodology, allow me to present 11 lines of Clojure:

(defn prime? [i]
  (cond (< i 4) (>= i 2)
        (zero? (rem i 2)) false
        :else (not-any? #(zero? (rem i %))
                        (range 3 (inc (Math/sqrt i))))))

(defn mersenne? [p]
  (or (= p 2)
      (let [mp (dec (bit-shift-left 1 p))]
        (loop [n 3 s 4]
          (if (> n p)
            (zero? s)
            (recur (inc n) (rem (- (* s s) 2) mp)))))))

(filter mersenne? (filter prime?
(iterate inc 1)))

One thing you need to be mindful of is that functions suffixed "?" in Clojure are often (by convention) predicates, meaning they will return either true or false. My prime? asks "are we below 4?" If we are, we're looking at the numbers 1, 2 and 3, whereof only the first isn't a prime, so the question is really "are we looking at 2 or 3?", shortened to (>= i 2). Mersenne? is exactly similar to Python's, and the filters should read like plain English. (iterate inc 1) iterates the function inc(rement) starting with one: 1, 2, 3, 4, 5, 6 ... n. Python: Reads easy, well laid out - 36 lines is a bit much though. Clojure: Reads easy, 11 lines is excellent, no boilerplate. If we're agreed, Clojure takes a tender lead of 1 - 0. If we're not agreed, I'm sure you'll let me know :) It's worth noting though, that in the world outside Clojure and Python we see a C++ solution weighing about 45 lines, Java at about 40 lines and Ruby at 42 - so Clojure's 11 lines are impressive. This was a fun little challenge. Basically you get a URL which feeds you some XML giving you the titles of all programming tasks on Rosetta; you then crawl these looking for examples, counting them as you go along. This is one of the times where I have to say Team Python's solution really impressed me. It weighs in at a trimmed 12 lines:

import urllib, xml.dom.minidom
x = urllib.urlopen("")
tasks = []
for i in xml.dom.minidom.parseString(x.read()).getElementsByTagName("cm"):
    t = i.getAttribute('title').replace(" ", "_")
    y = urllib.urlopen("" % t)
    tasks.append( y.read().lower().count("{{header|") )
    print t.replace("_", " ") + ": %d examples." % tasks[-1]
print "\nTotal: %d examples." % sum(tasks)

Quite nice don't you think? First they get the XML into x and then they define a list called tasks. Here they inject the number of examples they crawl in their for-loop.
They took the advice of Rosetta and match "{{header|" in all subpages, which I've found actually doesn't give quite the correct result, because not everybody uses this tag like they should! Shame on them. Now, this had me pressed up against the wall in terms of line numbers, so I knew I had to come up with something good. May I present 11 long lines of Clojure:

(use 'net.cgrand.enlive-html)

(let [xml-url ""
      title-url ""
      task-count #(try (dec (count (select (html-resource (java.net.URL. (str title-url %))) [[:h2]])))
                       (catch Exception _ 0))
      results (pmap #(assoc % :tasks (task-count (.replace (:title %) \space \_)))
                    (map :attrs (select (html-resource (java.net.URL. xml-url)) [:cm])))]
  (doseq [r results]
    (println (:title r) (apply str (repeat (- 55 (count (:title r))) \space)) (:tasks r)))
  (println "Total: " (reduce + (map :tasks results))))

I know that I couldn't possibly cram more code into that small space :) Allow me to explain: I define task-count as an anonymous function which counts the number of h2 tags on a given URL and subtracts 1 from this count. The reason for this is that when looking through the pages it's obvious that the '{{header' tag is being misused, but it was also obvious that the h2 tag is reserved for 2 things: 1x 'Contents' and the code examples, so subtract the 1 and you've got the number of examples. Then comes the fun part - I run a parallel map (that means multithreaded) on a function which builds a hash-map. assoc(iate) means to connect a key-value pair. Here the hash-map is whatever it's being fed, meaning I work directly on the one select returns; the key is :tasks and the value is (task-count) of pulling out the :title tag from parsing the attributes of :cm tags on the URL. Cryptic? Let me show you a boiled-down example:

user> (first (select (html-resource (java.net.URL. xml-url)) [:cm]))
{:tag :cm, :attrs {:title "100 doors", :ns "0", :pageid "2151"}, :content nil}
user> (:attrs (first (select (html-resource (java.net.URL.
xml-url)) [:cm])))
{:title "100 doors", :ns "0", :pageid "2151"}

Line 1: The select method returns as many of those maps as there are occurrences of [:cm] in the XML. Then I pull out the attributes, giving me :title, :ns and :pageid. When that's fed into the main pmap I replace spaces with underscores, making "100 doors" => "100_doors". That title is then fed to task-count, which returns the number of tasks on that page. That number is then tied to the hash-map with the key :tasks. Then finally you have the printing, which is straightforward, except I threw in that (repeat (- 55 (count (:title r))) \space) expression, which just aligns everything very nicely, injecting as many spaces as are needed to print 55 characters in. (It's a hack, but it's much prettier than the alternative.) As an encouragement, know that once you're comfortable with Clojure, these routines take 2 - 5 minutes to write and half of that to read - but if you're coming from another language, I know that it's harder. Python: Gets almost correct count, with very concise and readable code - zero boilerplate. Runtime: 1 hour 53 minutes. Clojure: Gets almost correct count, with very concise and somewhat complex code, zero boilerplate. Runtime: 12 minutes !! Behold the power of pmap! Deciding on a multithreaded strategy was key to such a huge performance gain, opening as many threads as makes sense on my dual-core system (pmap takes care of that automatically). Also note, I say 'almost correct' with Clojure as well because user-generated HTML is user-generated; it's error prone. Although I tested dozens of pages manually, finding Clojure to be right every time, that is no guarantee. Sidenote: For this problem which both languages solved beautifully in ~10 lines, Java solves in 60+ lines, Perl about 50 and Ruby at about 35 - so quite impressive on both counts. Ok, last one. This one is more of a data juggling act than anything else.
You get some data and are asked to represent it in a way that's idiomatic to your language, not mutating it at run-time (which I did first). The data consists of 4 columns and n rows. Team Python weighs in at 12 lines (not counting the data definition):

from collections import defaultdict

data = [('Employee Name', 'Employee ID', 'Salary', 'Department'),
        ('Tyler Bennett', 'E10297', 32000, 'D101'),
        ('John Rappl', 'E21437', 47000, 'D050'),
        ('George Woltman', 'E00127', 53500, 'D101'),
        ('Adam Smith', 'E63535', 18000, 'D202'),
        ('Claire Buckman', 'E39876', 27800, 'D202'),
        ('David McClellan', 'E04242', 41500, 'D101'),
        ('Rich Holcomb', 'E01234', 49500, 'D202'),
        ('Nathan Adams', 'E41298', 21900, 'D050'),
        ('Richard Potter', 'E43128', 15900, 'D101'),
        ('David Motsinger', 'E27002', 19250, 'D202'),
        ('Tim Sampair', 'E03033', 27000, 'D101'),
        ('Kim Arlich', 'E10001', 57000, 'D190'),
        ('Timothy Grove', 'E16398', 29900, 'D190')]

departments = defaultdict(list)
for rec in data[1:]:
    departments[rec[-1]].append(rec)

N = 3
format = "%-15s " * len(data[0])
for department, recs in departments.iteritems():
    print "Department", department
    print " ", format % data[0]
    for rec in sorted(recs, key=lambda rec: -rec[-2])[:N]:
        print " ", format % rec
    print

Starting with the data definition: in Clojure terms that would be a vector containing lists (in Python, a list of tuples). It's low ceremony, but not being used to all the commas they kinda stick out. They sort the data using 'sorted', which takes a lambda as the comparator. I think that's what Graham would call a 'near Lisp experience' and it is quite cool. Despite coming in at 12 lines, it's a slim read because all the functionality is packed into that 1 line - very powerful.
(albeit a little Perl-like: "rec: -rec[-2])[:N]:") Weighing in at 6 lines, we have Clojure:

(def data [{:name "Tyler Bennett" :id "E10297" :salary 32000 :department "D101"}
           {:name "John Rappl" :id "E21437" :salary 47000 :department "D050"}
           {:name "George Woltman" :id "E00127" :salary 53500 :department "D101"}
           {:name "Adam Smith" :id "E63535" :salary 18000 :department "D202"}
           {:name "Claire Buckman" :id "E39876" :salary 27800 :department "D202"}
           {:name "David McClellan" :id "E04242" :salary 41500 :department "D101"}
           {:name "Rich Holcomb" :id "E01234" :salary 49500 :department "D202"}
           {:name "Nathan Adams" :id "E41298" :salary 21900 :department "D050"}
           {:name "Richard Potter" :id "E43128" :salary 15900 :department "D101"}
           {:name "David Motsinger" :id "E27002" :salary 19250 :department "D202"}
           {:name "Tim Sampair" :id "E03033" :salary 27000 :department "D101"}
           {:name "Kim Arlich" :id "E10001" :salary 57000 :department "D190"}
           {:name "Timothy Grove" :id "E16398" :salary 29900 :department "D190"}])

(let [n 3
      departments (reduce #(assoc %1 (keyword (:department %2))
                                  (conj ((keyword (:department %2)) %1) %2))
                          {} (sort-by :salary data))]
  (doseq [d departments]
    (println (key d) (map #(str "[" (:name %) ": " (:salary %) "]") (take n (val d))))))

The data is a vector containing hash-maps. In the 'let' statement I bind n to 3; this is a requirement of the task and it states how many employees we extract (max). Then I do all my data-mangling in the binding to 'departments' and if you're comfortable with Clojure it should read like plain English... well, almost :) sort-by orders the raw data by the :salary values, lowest first. That gets passed to reduce, which produces a hash-map grouping all the departments, like {:D101 [entries]}, growing it every time a new employee from that department comes through - and since conj prepends to a list, each department's entries end up ordered with the highest salary first.
And with that little functional beauty I'm actually done. The entries went in sorted by salary, so when grouped by departments they're already lined up the way I want. All I have to do is call (take n (:D101 ..)) and I've got the result I wanted. Python: Very concise, suffers a little on readability (-rec[-2] for instance), no boilerplate. Clojure: Very concise and readable, basically 2 expressions handle everything. So for a job such as this I'd say they are neck and neck. In the same amount of space I show off heavy use of reduce, assoc and sort, so I feel I should get an extra point for that :) Where both languages packed all their power into 1 expression/statement, notice how C++ takes over 70 lines for this simple problem (again, not counting definitions) - that's 700% more room for bugs, 69 lines wasted on ceremony, perhaps 2 hours straight into the waste basket. I only spent a few minutes on my solution and I assume it's the same for Team Python. So I felt it was good to do a little more informal comparison after my first post earlier this week. Both Clojure and Python are tools and so I don't want to create an atmosphere where people get all worked up about tools. There are some facts that we should consider and then there's an element of style and taste. If I had to ascribe a 'coolness' factor to Python, I'd say it's pretty cool. Without hesitation I would pick it above C++, Perl, Ruby, TCL etc, just to name a few odd balls. It can come in very concise chunks and it's easy to dive into. Regarding boilerplate, clearness of syntax and ceremony it's virtually unmatched. When I reviewed Scala I was told that it had very thin boilerplate, but if that's true then Python's is invisible. But like I also told one disgruntled Scala user, it's not always fun being compared to Clojure and I think a few Python users felt that they got more competition than they hoped for. Oh well, as long as the competition is friendly, it's mutually beneficial.
Not forgetting, I was asked to let the code speak for itself, so I won't say another word... I hope you enjoyed the read :)
http://www.bestinclass.dk/index.clj/2009/10/python-vs-clojure-reloaded.html
Web scraping in Python to look up stock prices

Objective: We are going to scrape the FT.com website for current stock prices for selected stocks. You'll learn: - scraping a web page - fetching a price/quote (with regex) - presenting results - error handling - coloring the output

Step 1: Taking input, opening the URL and grabbing data

# import 'urllib2' to open the url and 'sys' so we can take input from the console
import urllib2, sys
# take input from the console and save it as 'symbol'
symbol = sys.argv[1]
# we are using FT.com for the stock pricing
url = ''
# open the URL, and save the output in a variable called 'content'
content = urllib2.urlopen(url+symbol)
# read the 'content' and save it in a variable called 'data'
data = content.read()
print(data)

We are going to import urllib2 and sys so we can a) open a URL and b) take input from the console. Next, we'll take the input (i.e. the stock symbol we want to look up) from the console and save it in a variable called symbol. sys.argv[1] basically means the first argument that was given to the script. The URL we are going to scrape is from the Financial Times website. We'll save that URL in a variable aptly named url. Then, we are going to open that URL and save its content in a variable called content. We'll then read that content and save whatever it read in a variable called data. Lastly, we'll print that data to make sure everything is working. At this point it'll output a whole lot of jumbled up code in your terminal (which is basically what you'll get if you 'View Page Source' in the browser).
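As a side note, the snippets in this post are Python 2 (urllib2 no longer exists in Python 3). A rough Python 3 sketch of this first step — keeping the base URL blank, as in the original, and with a fetch helper name of my own — might look like:

```python
import urllib.request

# base quote URL, left blank here just as in the original post
url = ''

def fetch(full_url):
    # urlopen returns a response object; .read() gives bytes in Python 3,
    # so decode to a str before running text regexes over it
    with urllib.request.urlopen(full_url) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

You would then call fetch(url + symbol), with symbol taken from sys.argv[1] exactly as above.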
But first, we need to know what we are looking for. To do that, I'm going to save the data in a text file, search for the price and check for something unique that represents that price so I can look for it every time with the regex I write.

Save the data to a file: use the >> redirection operator in the shell to redirect the results (output) of running stock_portfolio.py and save that in a text file called output.txt. You need to run the following command in the terminal (console):

python stock_portfolio.py >> output.txt

Searching for a unique identifier: now open that output in your favorite code editor. The reason for opening it in a code editor is so that I can view it with syntax highlighting, which makes it easier to make sense of the file and look for what we want. Start by searching (Ctrl+F) for the exact stock price. Then look for code before and after that stock price that makes it unique. For example, I looked up the stock APL; I knew the stock price was 593.76 at the time I extracted the data, so I searched for '593.76' in the output. That gave me the position in the file where the latest stock price is mentioned. Then I looked for unique identifiers before and after the price and found that _last_lastPrice" data- was always mentioned exactly before the latest price and it was only used for the price and nothing else. So, I used that to base my regular expression on.

Adding regex and getting the price: to be able to use regular expressions in our script, we'll import the re library.

import urllib2, sys, re

symbol = sys.argv[1]
url = ''
content = urllib2.urlopen(url+symbol)
data = content.read()

# Regex
# 'm' is for match, it is frequently used to represent the return of a match
m = re.search('_last_lastPrice" data-(\d*[.]\d*)', data)
# we are searching for the latest stock price (\d*[.]\d*) placed after a specific span tag, in whatever is stored in 'data'.
# if we have a match, then quote is equal to the first group we looked up.
the () represent a group in regex.

quote = m.group(1)
print(quote)

We are running the search method of the re module to search for _last_lastPrice" data-(\d*[.]\d*), where (\d*[.]\d*) represents the format of the price 593.76 (\d matches a digit and * means zero or more of them). Look for this price — it comes immediately after _last_lastPrice" data- — in the data variable. Save the result of the search in a variable called m. Use RegExr to find the exact match in text and learn about writing regex queries. Getting it right may take you a few tries if you are new to regex; it took me three tries to get the expression right. This might be the most difficult part of this whole tutorial if you have never used regex before. At this point the result of print(m) would be <_sre.SRE_Match object at 0x1086c1990>, which basically means that it searched and a match exists. Now we save the result of the match in a variable called quote. And finally, print that quote.

print(quote)
593.76

Error handling: What if the symbol given has a typo, is incorrect, or doesn't exist? Python by default will stop running the script after the error, and if you had any other symbols after the error they won't show since the script stopped running. To counter that, we are going to add an if/else statement so that if the value exists, it is shown, and if not, an error message is shown. In case of an error, the script will print the error message and keep running.

if m:
    # if we have a match, then quote is equal to the first group we looked up.
    quote = m.group(1)
else:
    quote = symbol + ' is not a correct symbol or it does not exist'
print(quote)

Getting prices for multiple stocks in one go (looping): Instead of taking input from the system, I'm going to save the multiple symbols I want to look up in a list. That way I won't have to type the symbols every time I want to look up the prices.
# SYMBOLS TO LOOK FOR:
symbol_list = ['APL', 'PPL', 'HUBC', 'FFC']

# for every symbol in the list, look it up and print the result
for symbol in symbol_list:
    url = ''+symbol+':KAR'
    # (FT.com adds :KAR at the end of symbols in KSE)
    # I'm permanently adding it to the URL because I'm only going to be looking up KSE stocks
    # and I am too lazy to add :KAR with every symbol manually

Final script

import urllib2, re
# urllib2 is required to open the url
# re is required for regular expressions (regex)

# SYMBOLS TO LOOK FOR:
symbol_list = ['APL', 'PPL', 'HUBCO', 'EFERT']

# for every symbol in the list, look it up and print the result
for symbol in symbol_list:
    url = ''+symbol+':KAR'  # (FT.com adds :KAR at the end of symbols in KSE)
    data = urllib2.urlopen(url).read()
    m = re.search('_last_lastPrice" data-(\d*[.]\d*)', data)
    if m:
        quote = m.group(1)
    else:
        quote = symbol + ' is not a correct symbol or it does not exist'
    print(symbol + ': ' + quote)

Error handling: - when the provided symbol isn't correct or it doesn't exist

Resources: RegExr; Print in terminal with colors using Python?
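One item from the objective — coloring the output — never gets shown above. Here is a minimal sketch using raw ANSI escape codes (these work in most Unix terminals; the helper name and color choices are my own, not from the original post):

```python
GREEN = "\033[92m"
RED = "\033[91m"
RESET = "\033[0m"

def colorize(text, ok=True):
    # wrap the text in an ANSI color code, then reset the terminal color
    return (GREEN if ok else RED) + text + RESET

print(colorize("APL: 593.76"))                         # price found: green
print(colorize("XYZ is not a correct symbol", False))  # error: red
```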
http://tldrdevnotes.com/python/2014-08-26-Web%20scraping%20in%20Python%20to%20look%20up%20stock%20prices.html
Sample pie chart

Below is the source code that produces the above chart. ../demos/pietest.py

from pychart import *
import sys

data = [("foo", 10), ("bar", 20), ("baz", 30), ("ao", 40)]

ar = area.T(size=(150,150), legend=legend.T(),
            x_grid_style = None, y_grid_style = None)
plot = pie_plot.T(data=data, arc_offsets=[0,10,0,10],
                  shadow = (2, -2, fill_style.gray50),
                  label_offset = 25,
                  arrow_style = arrow.a3)
ar.add_plot(plot)
ar.draw()

This class supports the following attributes: You can draw each pie "slice" shifted off-center. This attribute, if non-None, must be a number sequence whose length is equal to the number of pie slices. The Nth value in arc_offsets specifies the amount of offset (from the center of the circle) for the Nth slice. The value None will draw all the slices anchored at the center. The style of arrow that connects a label to the corresponding "pie". The location of the center of the pie. Specifies the data points. See Section 5. The column, within "data", from which the data values are retrieved. The fill style of each item. The length of the list should be equal to the length of the data. The column, within "data", from which the labels of items are retrieved. The fill style of the frame surrounding each label. The format string of the label. The style of the frame surrounding each label. The distance of each label from the center. The style of the outer edge of each pie slice. The radius of the pie. The value is either None or a tuple. When non-None, a drop-shadow is drawn beneath the object. The x-offset and y-offset specify the position of the shadow relative to the object, and fill specifies the style of the shadow (see Section 16). The angle at which the first item is drawn.
http://home.gna.org/pychart/doc/module-pie-plot.html
Installed build 300 yesterday. All of the Resharper-prefixed color options are gone from the Fonts & Colors dialog in Visual Studio 2005. I restored my .vssettings file and all the colors are missing from the ReSharper section. Also, the new "highlight current line" feature highlights in yellow on my black background, breaking ClearType. There doesn't seem to be an option to change the background color it highlights with or the method it draws with. My current line is unreadable with this option on.

Installed build 300 yesterday. All of the Resharper-prefixed color options are gone from the fonts & colors dialog in Visual Studio 2005. I restored my .vssettings file and all the colors are missing from the ReSharper section.

Please use the jetbrains.resharper.eap forum for posts related to Early Access versions of ReSharper. This group is about released ReSharper versions only. Valentin Kipiatkov, CTO and Chief Scientist, JetBrains, Inc "Develop with pleasure!"

Hello Wil, possible ways to resolve this issue: 1) Try closing all instances of VS, and run 'devenv /setup' from the command line. 2) If this doesn't help, try to re-install ReSharper 300. Regards, Dmitry Shaporenkov, JetBrains, Inc "Develop with pleasure!"

Hi Dmitry, I have the same issue and I tried both solutions. Neither works. Any other ideas? Also the key bindings seem to be strange and ReSharper won't let me rename namespaces anymore. I don't think I can get a test case because it seems to be tied to my machine and fairly random. Just FYI. Regards, Petrik

Also missing the ReSharper colors from the dialog, and I'm running v2.0 build 259. My custom ReSharper colors were still painted in the editor, but 'devenv /setup' removed them and still didn't fix the problem. I was able to get the colors back using VS settings import. I have a funny feeling this has something to do with import/export settings in VS, but I might be completely wrong.
Just my 2 cents :) "Wil Welsh" <no_reply@jetbrains.com> wrote in message news:4610268.1162393584441.JavaMail.itn@is.intellij.net... > devenv /setup fixed all of my problems. Cheers.
https://resharper-support.jetbrains.com/hc/en-us/community/posts/206056309-Build-300-Colors
5 Different Types of Document Ready Examples

By Sam Deering

These are the different types of Document Ready functions typically used in jQuery (aka jQuery DOM Ready). A lot of developers seem to use them without really knowing why. So I will try to explain why you might choose one version over another. Think of the document ready function as a callback which fires after the page's DOM has loaded. See Where to Declare Your jQuery Functions for more information on how to use the Document Ready Functions.

Document Ready Example 1

$(document).ready(function() {
  //do jQuery stuff when DOM is ready
});

Document Ready Example 2

$(function(){
  //jQuery code here
});

This is equivalent to example 1… they literally mean the same thing.

Document Ready Example 3

jQuery(document).ready(function($) {
  //do jQuery stuff when DOM is ready
});

Adding jQuery can help prevent conflicts with other JS frameworks. Why do conflicts happen? Conflicts typically happen because many JavaScript libraries/frameworks use the same shortcut name, the dollar symbol $. If they then have functions with the same names, the browser gets confused! How do we prevent conflicts? Well, to prevent conflicts I recommend aliasing the jQuery namespace (i.e. by using example 3 above). Then when you call $.noConflict() to avoid namespace difficulties (as the $ shortcut is no longer available), we are forcing it to write jQuery each time it is required.
jQuery.noConflict(); // Reverts the '$' variable back to other JS libraries
jQuery(document).ready( function(){
  //do jQuery stuff when DOM is ready with no conflicts
});

//or the self-executing function way
jQuery.noConflict();
(function($) {
  // code using $ as alias to jQuery
})(jQuery);

Document Ready Example 4

(function($) {
  // code using $ as alias to jQuery
  $(function() {
    // more code using $ as alias to jQuery
  });
})(jQuery);
// other code using $ as an alias to the other library

This way you can embed a function inside a function where both use the $ as a jQuery alias.

Document Ready Example 5

$(window).load(function(){
  //initialize after images are loaded
});

Sometimes you want to manipulate pictures, and with $(document).ready() you won't be able to do that if the visitor doesn't have the image already loaded. In that case you need to initialize the jQuery alignment function when the image finishes loading. You could also use plain JavaScript and append a function call to the body tag in the HTML; use this only if you're not using a JS framework. Read more:
https://www.sitepoint.com/types-document-ready/
I have often read a common question in forum posts of how to set the values of a User Control from a parent page using a method, but no one has provided the proper solution. Considering the preceding requirements, I have decided to write this article to provide the step-by-step solution to create a User Control. So let us start creating an application so beginners can also understand.

Step 2: Create the User Control. Name the project "CallingUsercontrolMethod" or anything you wish and specify the location. Then right-click on the project in the Solution Explorer, select "Add New Item", and select the Web User Control template as follows: Now open the design mode and add the two textboxes. After adding them, the User Control source code will look as in the following:

public void SetData(string Name, String City)
{
    txtName.Text = Name;
    txtcity.Text = City;
}

In the preceding code we are creating the method SetData that will set the values of the textboxes from the parent form. After creating the method, the student.ascx class file will look as follows:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class student : System.Web.UI.UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    public void SetData(string Name, String City)
    {
        txtName.Text = Name;
        txtcity.Text = City;
    }
}

Now we are done with the method. Let us use it in an .aspx page. As already stated, a User Control does not run directly on its own; to render a User Control you must use it in an .aspx page. So let us add the Default.aspx page by right-clicking on the project in the Solution Explorer.
After adding the .aspx page, the Solution Explorer will look as follows: In the preceding UI, on the Save button we will pass the values from the page's TextBox controls and show them in the User Control's textboxes. Now let us see how the method call will look; it's similar to calling a method on any standard TextBox control.

Step 6: Set the values of the User Control from the .aspx page using the User Control method. In the preceding control you saw that the method of the User Control we created is used the same way as with standard controls. We can also create a method to set the height and width in a similar manner. Double-click on the Save button and write the following code in the Default.aspx.cs file to set the values:
http://www.compilemode.com/2015/05/call-user-control-method-from-parent-in-Asp-net.html
ListView - OverBounds is freezing at bottom when content is full

Hello, I'm using Qt v5.7.0. I have an issue with ListView: when its content height is bigger than its own height, the movement freezes when you scroll past the bottom of the list. Here is an example:

import QtQuick 2.7
import QtQuick.Controls 2.0

/// [...]
ListView {
    id: myList
    width: parent.width
    height: parent.height
    boundsBehavior: Flickable.DragOverBounds
    flickableDirection: Flickable.VerticalFlick
    ScrollBar.vertical: ScrollBar {}
    verticalLayoutDirection: ListView.BottomToTop
    spacing: 20
    cacheBuffer: 0
    model: 20
    delegate: ItemDelegate {
        width: myList.width
        height: 30
        Rectangle {
            width: parent.width
            height: parent.height
            color: "red"
        }
    }
}

Is it a bug? Or could it depend on the highlight item? Thanks
https://forum.qt.io/topic/70927/listview-overbounds-is-freezing-at-bottom-when-content-is-full
Our little web browser now renders web pages that change. That means our browser is now doing styling and layout multiple times per page. Most of that styling and layout, however, is a waste: even when the page changes, it usually doesn't change much, and layout is expensive. In this chapter we'll put the brakes on new features and implement some speed-ups instead. Before we start working on speeding up our browser, let's find out what's taking up so much time. Let's take a moment to list the stuff our browser does. First, on the initial load: And then every time it does layout: I'd like to get timing measurements on both of these, so that we know what to work on. To do that, I'm basically going to check the wall clock time at various points in the process and do some subtraction. To keep it all well-contained, I'm going to make a Timer class to store that wall-clock time, with a method to report how long a phase took.

import time

class Timer:
    def __init__(self):
        self.phase = None
        self.time = None

    def start(self, name):
        if self.phase: self.stop()
        self.phase = name
        self.time = time.time()

    def stop(self):
        print("[{:>10.6f}] {}".format(time.time() - self.time, self.phase))
        self.phase = None

That wacky string in stop is a format string: it right-aligns the elapsed time in a ten-character column with six digits after the decimal point. I store a Timer in the timer field on browsers. Then we call start every time we start doing something useful. For example, in browse, I start the Downloading phase: I'm just going to go ahead and insert one of these for each of the bullet points above. Then, at the end of render, I stop the timer: Your results may not match mine (these results were recorded on a 2019 13-inch MacBook Pro with a 2.4GHz i5, 8 GB of LPDDR3 memory, and an Intel Iris Plus Graphics 655 with 1.5 GB of video memory, running macOS 10.14.6), but here's what I saw on my console on a full page load for this web page.
Your results may not match mine (these results were recorded on a 2019 13-inch MacBook Pro with a 2.4GHz i5, 8 GB of LPDDR3 memory, and an Intel Iris Plus Graphics 655 with 1.5 GB of video memory, running macOS 10.14.6), but here's what I saw on my console on a full page load for this web page:

    [  0.225331] Download
    [  0.028922] Parse HTML
    [  0.001835] Parse CSS
    [  0.003599] Run JS
    [  0.007073] Style
    [  0.517131] Layout
    [  0.022553] Display List
    [  0.275645] Rendering
    [  0.008225] Chrome

The overall process takes about one second (60 frames), with layout consuming half, and rendering and the network consuming most of the rest. Moreover, the downloading only takes place on initial load, so it's really layout and rendering that we're going to optimize.

By the way, keep in mind that while networking in a real web browser is similar enough to our toy version (granted, with caching, keep-alive, parallel connections, and incremental parsing to hide the delay), rendering is more complex in real web browsers (since real browsers can apply many more stylistic effects), and layout is much more complex in real browsers! Of course, they're also not written in Python… Now is a good time to mention that benchmarking a real browser is a lot harder than benchmarking ours. A real browser will run most of these phases simultaneously, and may also split the work over multiple CPU and GPU processes. Counting how much time everything takes is a chore. Real browsers are also memory hogs, so optimizing memory usage is just as important for them, which is a real pain to measure in Python. Luckily our browser doesn't have tabs, so it's unlikely to strain for memory!

By the way, this might be the point in the book where you realize you accidentally implemented something super-inefficiently. If something other than the network, layout, or rendering is taking a long time, look into that. (The exact speeds of each of these phases can vary quite a bit between implementations, and might depend, for example, on the exercises you ended up implementing, so don't sweat the details too much.) Whatever it is your browser needs help with, this chapter will only address layout and rendering.
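To see the Timer in action outside the browser, here is a self-contained sketch; the phase names mirror the output above, and the three driving calls stand in for the hooks in browse and render:

```python
import time

# The Timer from the chapter: start() closes the previous phase,
# stop() prints how long the current phase took.
class Timer:
    def __init__(self):
        self.phase = None
        self.time = None

    def start(self, name):
        if self.phase: self.stop()
        self.phase = name
        self.time = time.time()

    def stop(self):
        print("[{:>10.6f}] {}".format(time.time() - self.time, self.phase))
        self.phase = None

timer = Timer()
timer.start("Download")    # e.g. at the top of browse
timer.start("Parse HTML")  # implicitly stops "Download" first
timer.stop()               # e.g. at the end of render
```

Note that start implicitly closes the previous phase, so the call sites only ever need to mark phase boundaries, never phase ends.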
:hover styles

But to really drive home the need for faster layout and rendering, let's implement a little browser feature that really taxes layout and rendering: hover styles. CSS has a selector, called :hover, which applies to whichever element you are currently hovering over. In real browsers this is especially useful in combination with other selectors: you might have the a:hover selector for links that you're hovering over, or section:hover h1 for headings inside the section you're hovering over. We have a pretty limited set of selectors, but we can at least try out the style:

    :hover {
        border-top-width: 1px;
        border-right-width: 1px;
        border-bottom-width: 1px;
        border-left-width: 1px;
    }

This should draw a box around the element we are currently hovering over. Let's add this rule to our browser stylesheet and try to implement it. First, we need to parse the :hover selector. To do that, we create a new kind of selector:

    class PseudoclassSelector:
        def __init__(self, cls):
            self.cls = cls

        def matches(self, node):
            return self.cls in node.pseudoclasses

        def score(self):
            return 0

Note that this expects an ElementNode.pseudoclasses field, which I'll initialize to an empty set. Now that we have this class, we need to parse these pseudo-class selectors. That just involves copying the bit of code we have for class selectors and replacing the period with a colon:

    def css_selector(s, i):
        # ...
        elif s[i] == ":":
            name, i = css_value(s, i + 1)
            return PseudoclassSelector(name), i
        # ...

Finally we need to handle hover events. In Tk, you do that by binding the <Motion> event. The handle_hover method is pretty simple; it calls find_element, walks up the tree until it finds an ElementNode, and then sets its hover pseudoclass. Also, it has to unset the hover pseudoclass on the previously-hovered element. I store a reference to that element in the hovered_elt field, initialized to None in the constructor.
    class Browser:
        def handle_hover(self, e):
            x, y = e.x, e.y - 60 + self.scrolly
            elt = find_element(x, y, self.nodes)
            while elt and not isinstance(elt, ElementNode):
                elt = elt.parent
            if self.hovered_elt:
                self.hovered_elt.pseudoclasses.remove("hover")
            if not elt:
                self.hovered_elt = None
                return
            elt.pseudoclasses.add("hover")
            self.hovered_elt = elt
            self.relayout()

Note that handle_hover calls relayout, because by changing the pseudoclasses it potentially changes which rules apply to which elements, and thus which borders are applied where. Try this out! You should see black rectangles appear over every element you hover over, except of course that it'll take about a second to do so and the refresh rate will be really, really bad. Now it's really clear we need to speed up layout.

Actually, this is not how :hover works, because in normal CSS if you hover over an element you probably also hover over its parent, and both get the :hover style. Because of how limited our selector language is, there's no style change that incrementalizes well that I can apply on hover. So I'm instead bowdlerizing how :hover works. In other words, this chapter is a good guide to incremental reflow but a bad guide to hover selectors. My deepest apologies. Please learn how to use CSS from some other source.

How can we make layout faster? In the intro to this chapter, I mentioned that the layout doesn't change much, and that's going to be the key here. But what exactly do I mean? When the page is reflowed (that is, laid out again, excluding the initial layout) due to some change in JavaScript or CSS (like with hovering), the sizes and positions of most elements on the page change. For example, changing which element you're hovering over will change the height of that element (due to the added border), and that will move every later element further down the page. However, intuitively, even if some part of the page moves down, the relative positions of its innards won't change much.
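The hover bookkeeping above can be exercised without Tk at all. In this sketch, HoverState is a hypothetical stand-in for the Browser's hovered_elt handling, and ElementNode is stripped down to the one field the selector machinery needs:

```python
# Stripped-down ElementNode: only the pseudoclasses set matters here.
class ElementNode:
    def __init__(self, tag):
        self.tag = tag
        self.pseudoclasses = set()

class HoverState:
    """Hypothetical headless version of the browser's hover bookkeeping."""
    def __init__(self):
        self.hovered_elt = None

    def hover(self, elt):
        # Mirror handle_hover: clear the old element, mark the new one.
        if self.hovered_elt:
            self.hovered_elt.pseudoclasses.remove("hover")
        self.hovered_elt = elt
        if elt:
            elt.pseudoclasses.add("hover")

state = HoverState()
a, b = ElementNode("a"), ElementNode("b")
state.hover(a)
print("hover" in a.pseudoclasses)                              # True
state.hover(b)
print("hover" in a.pseudoclasses, "hover" in b.pseudoclasses)  # False True
state.hover(None)
print("hover" in b.pseudoclasses)                              # False
```

The invariant this protects is that at most one element ever carries the hover pseudoclass, which is exactly what the remove/add pair in handle_hover guarantees.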
My goal will be to leverage this intuition to skip as much work as possible on reflow. The idea is going to be to split layout into two phases. In the first, we'll compute relative positions for each element; in the second, we'll compute the absolute positions by adding the parent offset to the relative position. This will also involve splitting the layout function into two, which I will call layout1 and layout2 (look, I've got a bit of a cold, I'm not feeling very creative), for each of our five layout types (block, line, text, input, and inline). I'll start with block layout.

In block layout, the x position is computed from the parent's left content edge and then changed in layout to account for the margin. Likewise, the y position is initialized from the argument in layout and then changed once to account for margins. In other words, neither changes much, so it's safe to move these absolute position fields to layout2:

    def layout1(self):
        y = 0
        # ...

    def layout2(self, y):
        self.x = self.parent.content_left()
        self.y = y
        self.x += self.ml
        self.y += self.mt

Note that layout1 doesn't take any arguments any more, since it doesn't need to know its position to figure out the relative positions of its contents. Meanwhile, layout2 still needs a y parameter from its parent. Now, layout1 creates children and then makes recursive calls to child.layout to lay out its children. But since we've split layout into two phases, we need to change those recursive calls to call either layout1 or layout2. Actually we need to do both: let's do a recursive call to layout1 in layout1, and add a loop to layout2 to do the recursive layout2 call:

    y = self.y
    for child in self.children:
        child.layout2(y)
        y += child.h + (child.mt + child.mb if isinstance(child, BlockLayout) else 0)

We also need to make sure that our layout objects don't do anything interesting in the constructor.
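The two-phase split can be seen in miniature with a toy vertical-stacking layout. This is an illustrative sketch, not the browser's real classes: layout1 does only relative work (sizes, bottom-up), while layout2 assigns absolute positions top-down from the parent offset.

```python
class Box:
    def __init__(self, h=0):
        self.h = h            # leaves set their height directly
        self.children = []

    def layout1(self):
        # Phase 1: sizes only; positions may not be read or written.
        for child in self.children:
            child.layout1()
        if self.children:
            self.h = sum(child.h for child in self.children)

    def layout2(self, y):
        # Phase 2: absolute positions, derived from the parent offset.
        self.y = y
        for child in self.children:
            child.layout2(y)
            y += child.h

root = Box()
root.children = [Box(h=10), Box(h=20), Box(h=30)]
root.layout1()
root.layout2(0)
print(root.h)                        # 60
print([c.y for c in root.children])  # [0, 10, 30]
```

On reflow, the expensive size-computing pass only needs to be rerun for the subtree that changed, while the cheap layout2 pass re-derives every absolute position.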
The hope, after all, is to keep layout objects around between reflows, and that means we won't call the constructor on reflow. If we do anything interesting in the constructor, it won't get updated as the page changes. In block layout, the margin, padding, and border values are computed in the constructor; let's move that computation into layout1. Plus, the children array is initialized in the constructor, but it's only modified in layout1. Let's move that children array to layout1 as well.

These few changes are complex, and we'll be doing the same thing again for each of the other four layout types, so let's review what the code we've written guarantees:

layout1: This method is responsible for computing the twelve margin, padding, and border fields, for creating the child layouts, and for assigning the w and h fields on the BlockLayout. Plus, it must call layout1 on its children. It may not read the x or y fields on anything.

layout2: This method may read the w and h fields and is responsible for assigning the x and y fields on the BlockLayout. Plus, it must call layout2 on its children.

Now that we've got BlockLayout sorted, let's get text, input, and line layout working. I'll save inline layout for the very end. In TextLayout, the layout function does basically nothing, because everything happens in the constructor. We'll rename layout to layout2 and move the computation of the font, color, w, and h fields from the constructor into layout1. In InputLayout, the width and height are computed in the constructor (so let's move them to layout1), while layout computes x and y but also creates a child InlineLayout for textareas. Let's put the child layout creation in layout1 and leave the x and y computation in layout2. Plus, layout2 will need to call layout2 on the child layout, if there is one.

Now let's get to LineLayout. Like in BlockLayout, we'll need to split layout into two functions.
But one quirk of LineLayout is that it does not create its own children (InlineLayout is in charge of that) and it also does not compute its own width (its children do that in their attach methods). So unlike the other layout modes, LineLayout will initialize the children and w fields in its constructor, and its layout1 won't recursively call layout1 on its children or compute the w field:

    class LineLayout:
        def __init__(self, parent):
            self.parent = parent
            parent.children.append(self)
            self.w = 0
            self.children = []

        def layout1(self):
            self.h = 0
            leading = 2
            for child in self.children:
                self.h = max(self.h, child.h + leading)

The layout2 method looks more normal, however:

    class LineLayout:
        def layout2(self, y):
            self.y = y
            self.x = self.parent.x
            x = self.x
            leading = 2
            y += leading / 2
            for child in self.children:
                child.layout2(x, y)
                x += child.w + child.space

Finally, inline layout. For inline layout we want to accomplish the same split as above. The current layout function computes the x, y, and w fields, then creates its children with the recurse method, and then calls layout on each child. If we peek inside recurse and its helper methods text and input, we'll find that it only reads the w field, which is allowed in layout1. So all of recurse can happen in layout1. Hooray!

    class InlineLayout:
        def layout1(self):
            self.children = []
            LineLayout(self)
            self.w = self.parent.content_width()
            self.recurse(self.node)
            h = 0
            for child in self.children:
                child.layout1()
                h += child.h
            self.h = h

As mentioned above, there's a quirk with inline layout, which is that it is responsible not only for its children (line layouts) but for their children (text and input layouts) as well. So, we need to update the text and input helpers to call layout1 on the new layout objects they create.
Meanwhile, layout2 computes x and y and will need a new loop that calls layout2 on the children:

    class InlineLayout:
        def layout2(self, y):
            self.x = self.parent.content_left()
            self.y = self.parent.content_top()
            y = self.y
            for child in self.children:
                child.layout2(y)
                y += child.h

Note that I'm accepting a y argument in layout2, even though I don't use it. That's because BlockLayout passes one. Probably it would be good to accept both x and y in every layout2 method, though that's not how I've written my code. Also don't forget to update InputLayout to pass a y argument in layout2 when it handles its child layout.

I want to emphasize that recurse is by far the slowest part of our browser's layout algorithm (because it calls font.measure, which has to do a slow and expensive text rendering to get the right size; text is crazy), and since it happens in layout1 we will be able to mostly skip it. This will be our big performance win.

Finally, we need to update our browser to actually call all of these functions. In Browser.relayout, where we used to call layout, let's call layout1 followed by layout2. I cannot overemphasize how important it is right now to stop and debug. If you're anything like me, you will have a long list of very minor bugs, including forgetting to add self to stuff when moving it out of constructors, not moving the children array, and passing the wrong number of arguments to one of the two layout phases. If you don't get things working bug-free now, you'll spend three times as long debugging the next phase, where we make things yet more complicated by only calling layout1 sometimes. Read through all of the constructors for the layout classes to make sure they're not doing anything interesting.

If you time the browser, now that we've split layout into two phases, you'll find that not much has changed; for me, layout got about five percent slower. To find out more, I made layout1 and layout2 separate phases in the timer.
Here's what it looks like:

    [  0.498251] Layout1
    [  0.006418] Layout2

So really layout1 is taking all of the time and layout2 is near-instantaneous. (Heh heh, near-instantaneous. Anyone that's worked on a high-performance application would like you to know that 6.418 milliseconds is almost 40% of your one-frame time budget! But this is a toy web browser written in Python. Cut me some slack.) That validates our next step: avoiding layout1 whenever we can.

The idea is simple, but the implementation will be tricky, because sometimes you do need to lay an element out again; for example, the element you hover over gains a border, and that changes its width and therefore potentially its line breaking behavior. So you may need to run layout1 on that element. But you won't need to do so on its siblings. The guide to this nonsense will be the responsibilities of layout1 and layout2 outlined above. Because layout1 reads the node style, it only needs to be called when the node or its style changes.

Let's start by making a plan. Look over your browser and make a list of all of the places where relayout is called. For me, these are:

- parse, on initial page load.
- js_innerHTML, when new elements are added to the page (and old ones removed).
- edit_input, when the contents of an input element is changed.
- handle_hover, when an element is hovered over.

Each of these needs a different approach:

- In parse, we are doing the initial page load, so we don't even have an existing layout. We need to create one, and that involves calling layout1 on everything.
- In js_innerHTML, the new elements and their parent are the only elements that have had their nodes or style changed.
- In edit_input, only the input element itself has had a change to node or style.
- In handle_hover, only the newly-hovered and newly-not-hovered elements have had a change to node or style.

Let's split relayout into pieces to reflect the above. First, let's move the construction of self.page and self.layout into parse.
Then let's create a new reflow function that calls style and layout1 on an element of your choice. And finally, relayout will just contain the call to layout2 and the computation of the display list. Here's parse and relayout:

    class Browser:
        def parse(self, body):
            # ...
            self.page = Page()
            self.layout = BlockLayout(self.page, self.nodes)
            self.reflow(self.nodes)
            self.relayout()

        def relayout(self):
            self.timer.start("Layout2")
            self.layout.layout2(0)
            self.max_h = self.layout.h
            self.timer.start("Display List")
            self.display_list = self.layout.display_list()
            self.render()

The new reflow method is a little more complex. Here's what it looks like after a simple reorganization:

    class Browser:
        def reflow(self, elt):
            self.timer.start("Style")
            style(self.nodes, self.rules)
            self.timer.start("Layout1")
            self.layout.layout1()

Note that while reflow takes an element as an argument, it ignores it, and restyles and re-lays-out the whole page. That's clearly silly, so let's fix that. First, the style call only needs to be passed elt. Second, instead of calling layout1 on self.layout, we only want to call it on the layout object corresponding to elt. The easiest way to find that is with a big loop. (There is a subtlety in the code below. It's important to check the current node before recursing, because some nodes have two layout objects, in particular block layout elements that contain text and thus have both a BlockLayout and an InlineLayout. We want the parent, and doing the check before recursing guarantees us that.)

    def find_layout(layout, elt):
        if not isinstance(layout, LineLayout) and layout.node == elt:
            return layout
        for child in layout.children:
            out = find_layout(child, elt)
            if out: return out

This is definitely inefficient, because we could store the element-layout correspondence on the node itself, but let's run with it for the sake of simplicity.
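find_layout will matter again in a moment, so here is a self-contained sketch of it — the stub classes are hypothetical stand-ins for the browser's real ones — extended with one extra guard: a layout object that has never had layout1 called on it has no children field yet, so hasattr doubles as an "is it laid out?" check.

```python
class LineLayout:
    pass

class BlockLayout:
    """Hypothetical stub: the real class lives in the browser."""
    def __init__(self, node):
        self.node = node
        # Note: no self.children here; layout1 creates it later.

    def layout1(self):
        self.children = []

def find_layout(layout, elt):
    # Guard: never laid out, so it must be (re)laid out itself.
    if not hasattr(layout, "children"):
        return layout
    # Check the node *before* recursing: a node with both a BlockLayout
    # and an InlineLayout should yield the parent BlockLayout.
    if not isinstance(layout, LineLayout) and layout.node == elt:
        return layout
    for child in layout.children:
        out = find_layout(child, elt)
        if out: return out

root = BlockLayout("html")
print(find_layout(root, "body") is root)  # True: fresh object, no children yet
root.layout1()
print(find_layout(root, "body"))          # None: laid out, and no match below
print(find_layout(root, "html") is root)  # True
```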
We can now change the self.layout.layout1() line to call layout1 on the layout object that find_layout returns for elt. This mostly works, but find_layout won't be happy on initial page load, because at that point some of the layout objects don't have children yet. Let's add a line for that. (I recognize that in many languages, unlike in Python, you can't just add fields in some method without declaring them earlier on. I assume that in all of those cases you've been initializing the fields with dummy values, and then you'd check for that dummy value instead of using hasattr. Just make sure your dummy value for children isn't the empty list, since that is also a valid value. Better to use a null pointer or something like that, whatever your language provides.) The logic of returning any layout object without a children field is that if some layout object does not have such a field, it definitely hasn't had layout1 called on it, and therefore we should call it, which we do by returning it from find_layout.

Finally, let's go back to the places where we call relayout and add a call to reflow. The only complicated case is handle_hover, where you need to call reflow both on the old hovered_elt and on the new one. (You only need to call relayout once, though.) With this tweak, you should see layout1 taking up almost no time, except on initial page load, and hovering should be much more responsive. For me, rendering now takes up roughly 89% of the runtime when hovering, and everything else takes up 32 milliseconds total. That's not one frame, but it's not bad for a Python application!

Let's put a bow on this lab by speeding up render. It's actually super easy: we just need to avoid drawing stuff outside the browser window; in the graphics world this is called clipping. Now, we need to make sure to draw text that starts outside the browser window but has a part inside the window, so I'm going to update the DrawText constructor to compute where the text ends, by setting the bottom edge to the top edge plus a constant 50. Ok, wait, that's not the code you expected. Why 50?
Why not use font.measure and font.metrics? Because font.measure and font.metrics are quite slow: they actually execute text layout, and that takes a long time! So I'll be using only the y position for clipping, and I'll be using an overapproximation to font.metrics. The 50 is not a magic value; it just needs to be bigger than any actual line height. If it's too big, we render a few too many DrawText objects, but it won't change the resulting page. Now both DrawText and DrawRect objects have top-left and bottom-right coordinates, and we can check those in render:

    for cmd in self.display_list:
        if cmd.y2 - self.scrolly < 0: continue
        if cmd.y1 - self.scrolly > 600: continue
        cmd.draw(self.scrolly - 60, self.canvas)

(Note that the first test uses the bottom edge y2 and the second uses the top edge y1, so commands that straddle either edge of the viewport are still drawn.) That takes rendering down from a quarter-second to a hundredth of a second for me, and makes the hover animation fairly smooth. A hover reflow now takes roughly 25 milliseconds, with the display list computation 44% of that, rendering 39%, and layout2 13%. We could continue optimizing (for example, tracking invalidation rectangles in rendering), but I'm going to call this a success. We've made interacting with our browser more than 30 times faster, and in the process made the :hover selector perfectly usable.

With the changes in this chapter, my toy browser became roughly 30× faster, to the point that it now reacts to changes fast enough to make simple animations. The cost of that is a more complex, two-pass layout algorithm.

Exercises:

- When you create the DrawText command in TextLayout.display_list, you already know the width and height of the text to be laid out. Use that to compute y2 and x2 in DrawText, and update the clipping implementation to clip horizontally as well as vertically.
- The font.measure function is quite slow! Change TextLayout.add_space to use a cache for the size of a space in a given font.
- Measure how much of layout time is spent in BlockLayout objects, how much in InlineLayout, and so on. What percentage of the layout1 time is spent handling inline layouts? If you did the first exercise, measure its effect.
- Implement the setAttribute method in JavaScript. Note that this method can be used to change the id, class, or style attribute, which can change which styles apply to the affected element; make sure to handle that with reflow. Furthermore, you can use setAttribute to update the href attribute of a <link> element, which means you must download a new CSS file and recompute the set of CSS rules. Make sure to handle that edge case as well. (If you change the src attribute of a <script> tag, oddly enough, the new JavaScript file is not downloaded or executed.)
- Implement the setTimeout command in JavaScript; setTimeout(ms, f) should run the function f in ms milliseconds. In Python, you can use the Timer class from the threading library, but be careful: the callback on the timer is called in a separate thread, so you need to make sure no other JavaScript is running when calling the timeout handler. Use setTimeout to implement a simple animation; for example, you might implement a "typewriter effect" where a paragraph is typed out letter-by-letter. Check that your browser handles the animation relatively smoothly.
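As a coda to the clipping discussion (and a starting point for the first exercise), the predicate from render can be checked in isolation. The Command class here is hypothetical; it just bakes in the y1 + 50 overapproximation described in the chapter and the viewport height of 600:

```python
class Command:
    """Hypothetical display-list command with the chapter's y2 trick."""
    def __init__(self, y1):
        self.y1 = y1
        self.y2 = y1 + 50   # overapproximate bottom edge; 50 > any line height

def visible(cmd, scrolly, height=600):
    # Skip commands entirely above or entirely below the viewport.
    if cmd.y2 - scrolly < 0: return False
    if cmd.y1 - scrolly > height: return False
    return True

print(visible(Command(-100), 0))  # False: entirely above the viewport
print(visible(Command(10), 0))    # True: on screen
print(visible(Command(580), 0))   # True: straddles the bottom edge
print(visible(Command(700), 0))   # False: entirely below
```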
https://browser.engineering/reflow.html
How would one build this in MEL? I've almost finished an asset manager but want a better system to browse through folders like the one above.

Is there any way we can put the tabs upside down, just like in this example picture… click image for different example

QTTabWidget.setPosition('South') would make the tabs appear on the bottom. North, West and East are also valid options. /Christian

Hi all, I'm trying to add a new button to the Status line:

    global proc toggleMyToolsButton()
    {
        global string $gStatusLine;
        global string $gStatusLineForm;

        $statusLine = `formLayout -parent $gStatusLineForm`;
        $gStatusLine = `flowLayout`;
        setParent $statusLine;
        $buttonForm = `formLayout`;

        // Just a button
        global string $gMyToolsButton;
        $gMyToolsButton = `iconTextCheckBox
            -image1 "bullet_selectSolverNode.png"   // test
            -annotation ("My tools")
            -changeCommand ("ToggleChannelBox")     // test
            channelBoxButton`;

        // Set up the attachments.
        formLayout -edit
            -attachForm $gMyToolsButton top 1
            //-attachControl $gMyToolsButton left 0 $gLayerEditorButton  // Why doesn't this work?
            -attachNone $gMyToolsButton bottom
            -attachNone $gMyToolsButton right
            $buttonForm;
    }
    toggleMyToolsButton();

But I can't align the new button with the other buttons, like the "Show/hide Attribute editor/Layer editor/Channel box" buttons, etc.

Hi. I'm back to studying Python. Now reading up on classes and how to utilize them with UIs. I'm having some problems figuring out how to get exactly what I want, though, and so I turn to you. I'm using the book "Maya Python for Games and Film".
They are distributing a base window class found here: I then created another class which inherits from the base class, here:

    import optwin
    reload(optwin)
    from optwin import AR_OptionsWindow

    class newWin(AR_OptionsWindow):
        def __init__(self):
            AR_OptionsWindow.__init__(self)
            self.title = 'Dag\'s window'
            self.window = 'dag\'s window'
            self.actionName = 'Huey'

        def displayOptions(self):
            mc.setParent(self.optionsForm)
            self.btnA = mc.button(label="button A")
            mc.formLayout(self.optionsForm, edit=True,
                attachForm=(
                    [self.btnA, 'left', 0],
                    [self.btnA, 'right', 0],
                    [self.btnA, 'top', 0],
                    [self.btnA, 'bottom', 0],
                ),
                attachPosition=(
                    [self.btnA, 'right', 0, 50],
                )
            )

    newWin.showUI()

It's really simple. However, I really can't figure out how to align the button in the formLayout, which is parented to the tab layout, so it will scale with the window =/ Tried parenting the button to the mainForm instead and sure, that works, but I want it to be attached under the tab layout. Thank you for reading!

Ok, I figured this one out. However I still don't get it… :banghead: By editing the base class, the elements resize according to the formLayout settings. On line 42 I changed scrollable=True to scrollable=False. However I really wouldn't want to go in and edit the class, but it seems nothing else works. :rolleyes: I tried to edit this in my inherited class:

    def displayOptions(self):
        mc.tabLayout(self.optionsBorder, e=True,
            scrollable=False,
            tabsVisible=True,
            tli=(1, 'tab 1')
        )

This works for the tabsVisible flag and the tab label, but does nothing for the scrollable function. :argh: Could anyone shed some light on this? I'm posting both the edited class and my full inherited one for you to try.

I am completely stumped with this. I'm trying to build a character control GUI with a background image and buttons or some other clickable object laid over the top, but I can't get it working. The buttons are not clickable and sit behind the image, and I have no idea why.
I've basically pieced this code together from other character GUIs, so I'm at a loss. Any help would be appreciated.

    string $title = "Control Window";
    int $width = 640;
    int $height = 700;

    // create window
    if (`window -q -exists UIWin`) {
        deleteUI UIWin;
    }
    string $win = `window -title $title -widthHeight $width $height`;

    // create tabLayout
    string $tabs = `tabLayout -innerMarginWidth 5 -innerMarginHeight 5`;

    // Body tab
    string $tab_cl1 = `columnLayout -w $width -h $height -adj 0 Body`;

    // create a formLayout
    string $form = `formLayout -numberOfDivisions 640`;

    // create buttons
    string $bodyButton = `button -w 70 -h 20 -label "Body" -ann "Select body control" -c "print \"body select\""`;
    string $chestButton = `button -w 70 -h 20 -label "chest" -ann "Select chest control" -c "print \"chest select\""`;

    // create image
    string $BodyPage = `image -image "guiTail.png"`;

    // formLayout edit
    formLayout -edit
        -attachForm $bodyButton "top" 300
        -attachForm $bodyButton "left" 200
        -attachForm $chestButton "top" 40
        -attachForm $chestButton "left" 400
        -attachForm $BodyPage "top" 0
        -attachForm $BodyPage "left" 0
        $form;

    setParent ..;
    setParent ..;

    // show window
    showWindow $win;
http://forums.cgsociety.org/t/mel-maya-ui-building/664014/348
On vacation, with my hands full of books to read, I found myself committed to a serious SICP reading that is taking longer than predicted (specifically while doing the exercises in Clojure in the meantime). The promise I made was also to start learning Erlang, while starting to learn a Haskell for my Great Good (fans will understand :)). On the edge of opening the Haskell book this weekend, I already watched the Francesco Cesarini and Simon Thompson videos on Erlang, which provided me with some meat for Akka. A very short project, indeed, but food for the mind, still being with no pet project, nor Scala or Clojure master for guidance.

One of the exercises proposed by Cesarini and Thompson consists in the creation of a ring of actors, each one creating the following, then sending an acknowledgement message. The last created actor sends a message to the source actor, notifying the end of the ring process. The implementation can be summarized in the following diagram:

The sbt build.scala file used for the project is contained in the following template:

    import sbt._
    import sbt.classpath._
    import Keys._
    import Process._
    import System._

    object BuildSettings {
      val buildSettings = Defaults.defaultSettings ++ Seq (
        fork in run := true,
        javaOptions in run += "-server",
        javaOptions in run += "-Xms384m",
        javaOptions in run += "-Xmx512m",
        organization := "com.promindis",
        version := "0.1-SNAPSHOT",
        scalaVersion := "2.9.1",
        scalacOptions := Seq("-unchecked", "-deprecation")
      )
    }

    object Resolvers {
      val typesafeReleases = "Typesafe Repo" at ""
      val scalaToolsReleases = "Scala-Tools Maven2 Releases Repository" at ""
      val scalaToolsSnapshots = "Scala-Tools Maven2 Snapshots Repository" at ""
    }

    object TestDependencies {
      val specs2Version = "1.6.1"
      val testDependencies = "org.specs2" %% "specs2" % specs2Version % "test"
    }

    object AKKADependencies {
      val akkaVersion = "1.2"
      val actorDependencies = "se.scalablesolutions.akka" % "akka-actor" % akkaVersion
    }

    object MainBuild extends Build {
      import Resolvers._
      import TestDependencies._
      import AKKADependencies._
      import BuildSettings._

      lazy val algorithms = Project(
        "Ring",
        file("."),
        settings = buildSettings ++
          Seq(resolvers += typesafeReleases) ++
          Seq(libraryDependencies ++= Seq(testDependencies, actorDependencies))
      )
    }

The BuildSettings object flags the execution of the Scala main methods as forked processes, so there will be no interference between the sbt process and the execution of the ring. I added memory sizing info for the execution in order not to be constrained by undersized estimates, and imposed the -server flag, as I am running an old 32-bit dual-core laptop (yes, shame on me, but a good laptop can reach far more than 2000 euros). The addition of the -server flag proved to be a nice idea, dividing the time of execution by two.

While working on this small kata, I fell into a few traps. One of them, which I was not expecting, was the chained creation of the Nodes in the ring, one after the other. I could not chain them on construction without generating a big stack overflow. So, on a first try, I overrode the preStart method, naively expecting some asynchronous invocation of the method. The stack overflow took me by surprise. I then cheated and asynchronously ordered, via message sending, the creation of the next actor. The idea was to reproduce the following Erlang BIF invocation:

    NPid = spawn(ring, start_proc, [Num - 1, Pid])

The Node class template is a classic. As one can see, I tried to reduce the volume of exchanged messages, using only literal symbols during exchanges. The 'start and 'ok symbols reproduce the Start, OK symbols on the ring schema. While receiving a 'start message, a Node actor creates a new Node actor, after notifying the source that it has been created; that allowed me to check that all my actors were created. On receiving an 'ok message, the actor poisons itself so as to free its resources.
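The Node listing did not survive here, so the following is a reconstruction sketched purely from the surrounding description: it uses the Akka 1.2-era API, and the exact fields and message handling are my guesses, not necessarily the original author's code.

```scala
import akka.actor.{Actor, ActorRef, PoisonPill}

// Hypothetical reconstruction of the Node actor described in the text.
class Node(source: ActorRef, number: Int) extends Actor {
  def receive = {
    case 'start =>
      source ! 'ping                        // notify the source we were created
      if (number == 1) source ! 'ok         // last node: close the ring
      else Node(source, number - 1) ! 'start
    case 'ok =>
      self ! PoisonPill                     // free the actor's resources
  }
}

object Node {
  // The companion object takes charge of creating and starting new actors.
  def apply(source: ActorRef, number: Int) =
    Actor.actorOf(new Node(source, number)).start()
}
```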
The reference to the source and the number of expected actors are communicated as constructor parameters. On receiving a 'start message, a Node actor, creates her follower decreasing the number of Node to be created., the last Node, matching the number 1, sends the 'ok message to the source. The Node companion object, takes in charge the creation and start of new actors. The Source actor, is slightly different as collecting the ping notifications from the ring, tracing the complete execution of the ring and relaunching a ring execution. The ability to relaunch a ring process was important as the JVM warm up can take one or two ring processes before exposing a stable time of execution. + " for " + total) decreaseCounter() if (counter != 0) { println("Retrying... " + counter + " times") start = currentTimeMillis() total = 0 Node(self, number) ! 'ok } else { self ! PoisonPill } } } object Source { def apply (number: Int, maximum: Int) = Actor.actorOf(new Source(number, maximum)).start() } On receiving a 'ping message,the (missnamed ) total number of created actors is incremented On receiving an 'ok message, that flags the end of the ring execution, a new ring process is initiated again until reaching the expected number of ring executions. Here is the complete code content: import akka.actor.{PoisonPill, ActorRef, Actor} import System._ import Integer._ object Ring { + "ms for " + total) decreaseCounter() if (counter != 0) { println("Retrying... " + counter + " times") start = currentTimeMillis() total = 0 Node(self, number) ! 'ok } else { self ! PoisonPill } } } object Source { def apply (number: Int, maximum: Int) = Actor.actorOf(new Source(number, maximum)).start() } def main(arguments: Array[String]) { Source(parseInt(arguments(0)), parseInt(arguments(1))) } } where the Ring object main method takes as input parameters respectively the number of node in a ring, and the number of ring processes to execute. 
With an underlying JDK 7, the kind of samples I got is typically:

```
> run 1000 5
[info] 201ms for 1000
[info] Retrying... 4 times
[info] 127ms for 1000
[info] Retrying... 3 times
[info] 203ms for 1000
[info] Retrying... 2 times
[info] 90ms for 1000
[info] Retrying... 1 times
[info] 74ms for 1000
[success] Total time: 3 s, completed 29 oct. 2011 15:05:16
> run 10000 5
[info] 1079ms for 10000
[info] Retrying... 4 times
[info] 511ms for 10000
[info] Retrying... 3 times
[info] 94ms for 10000
[info] Retrying... 2 times
[info] 85ms for 10000
[info] Retrying... 1 times
[info] 108ms for 10000
[success] Total time: 5 s, completed 29 oct. 2011 15:05:26
> run 100000 5
[info] 2289ms for 100000
[info] Retrying... 4 times
[info] 967ms for 100000
[info] Retrying... 3 times
[info] 754ms for 100000
[info] Retrying... 2 times
[info] 739ms for 100000
[info] Retrying... 1 times
[info] 752ms for 100000
[success] Total time: 8 s, completed 29 oct. 2011 15:05:45
> run 1000000 5
[info] 9184ms for 1000000
[info] Retrying... 4 times
[info] 7834ms for 1000000
[info] Retrying... 3 times
[info] 8163ms for 1000000
[info] Retrying... 2 times
[info] 7470ms for 1000000
[info] Retrying... 1 times
[info] 7585ms for 1000000
[success] Total time: 43 s, completed 29 oct. 2011 15:06:34
```

The performance seems weaker compared to the Erlang one:

```erlang
2> timer:tc(ring, start, [1000]).
{5000,ok}
3> timer:tc(ring, start, [10000]).
{52000,ok}
4> timer:tc(ring, start, [100000]).
{246000,ok}
5> timer:tc(ring, start, [1000000]).
{1535000,ok}
```

The execution time unit in Erlang is microseconds, so the million-node ring takes about 1.5 s there. As I do not have enough knowledge (yet!!) about the Akka internals nor the Erlang ones, I do not want to bring onto the scene any conclusion in favour of one of the experiments or the other. I would rather welcome both explanations and criticism of the code sample, in order to increase my knowledge of the framework and to understand better why the performance should be better for one or the other.
In addition, one should remember that:

- the machine is undersized for this kind of experiment;
- the number of messages exchanged during the Akka ring experiment (3,000,000) is greater than the number of messages exchanged during the Erlang test (1,000,000);
- after all, creating 1,000,000 actors and exchanging around 3,000,000 messages in about 7 s could be considered good performance on a three-year-old laptop.

So what are the results on your machines? Certainly better than mine? Be seeing you !!! :)

5 comments:

Output on my desktop (2 cores, 3 GHz) with JDK 7:

```
201ms for 1000
Retrying... 9 times
126ms for 1000
Retrying... 8 times
183ms for 1000
Retrying... 7 times
250ms for 1000
Retrying... 6 times
76ms for 1000
Retrying... 5 times
99ms for 1000
Retrying... 4 times
[ERROR] [14/11/11 18:55] [akka:event-driven:dispatcher:global-11] [LocalActorRef] Actor has not been started, you need to invoke 'actor.start()' before using it
akka.actor.ActorInitializationException: Actor has not been started, you need to invoke 'actor.start()' before using it [PC2_6eaa71ca-0ee1-11e1-a597-001a4d5595d4]
        at akka.actor.ScalaActorRef$class.$bang(ActorRef.scala:1399)
        at akka.actor.LocalActorRef.$bang(ActorRef.scala:605)
        at Ring$Node$$anonfun$receive$1.apply(Ring.scala:16)
        at Ring$Node$$anonfun$receive$1.apply(Ring.scala:9)
        at akka.actor.Actor$class.apply(Actor.scala:545)
        at Ring$Node.apply(Ring.scala:7)
        at akka.actor.LocalActorRef.invoke(ActorRef.scala:905)
        at akka.dispatch.MessageInvocation.invoke(MessageHandling.scala:25)
        at akka.dispatch.ExecutableMailbox$class.processMailbox(ExecutorBasedEventDrivenDispatcher.scala:216)
        at akka.dispatch.ExecutorBasedEventDrivenDispatcher$$anon$4.processMailbox(ExecutorBasedEventDrivenDispatcher.scala:122)
        at akka.dispatch.ExecutableMailbox$class.run(ExecutorBasedEventDrivenDispatcher.scala:188)
        at akka.dispatch.ExecutorBasedEventDrivenDispatcher$$anon$4.run(ExecutorBasedEventDrivenDispatcher.scala:122)
        at akka.dispatch.MonitorableThread.run(ThreadPoolBuilder.scala:184)
112ms for 1000
Retrying... 3 times
47ms for 1000
Retrying... 2 times
67ms for 1000
Retrying... 1 times
205ms for 1000
```

Whaoooooooo, nice snapshot. Give me a few days to reproduce it on my "Ford T" computer and investigate. It will be a change from my boring day job :) I am going to post the problem on the akka-user list today while starting the analysis of the problem.

I am trying to reproduce your problem with a build 1.7.0_02-ea-b02 of JDK 7. I have not succeeded for the moment and am upgrading to the u1 version. A first answer from the Akka user group suggested changing the "self ! PoisonPill" simply to a self.stop. Nice advice, because, as you might imagine, the performance got better (20% to 30% on my machine). I am going to correct the code. Keep me informed if you reproduce it with this change. Would you be so kind as to tell me more about your JDK 7 version?

It was build 1.7.0_01-b08 for win32, but for now I cannot reproduce that exception. Let us think that it was some phantom bug. Also I tried to play ping-pong with Akka actors, and found that they perform close to Erlang actors when they are pre-created:

```
18,155,070,440 ns
1,815 ns/op
550,810 ops/s
```

```scala
import akka.actor.Actor._
import akka.actor.{Actor, ActorRef}

val n = 10000000
val t = System.nanoTime

def printResult() {
  val d = System.nanoTime - t
  printf("%,d ns\n", d)
  printf("%,d ns/op\n", d / n)
  printf("%,d ops/s\n", (n * 1000000000L) / d)
}

abstract class Player extends Actor {
  def adversary: ActorRef

  def receive = {
    case 0 => printResult(); self.stop()
    case 1 => adversary ! 0; self.stop()
    case i: Int => adversary ! i - 1
  }
}

object Pong extends Player { def adversary = Ping.self }
object Ping extends Player { def adversary = Pong.self }

actorOf(Pong).start()
actorOf(Ping).start() ! n
```
http://patterngazer.blogspot.com/2011/10/one-ring-to-rule-my-akka-actors.html
[quantal] 5.100.82.112+bdcom-0ubuntu1 - wl_linux.c:43:24: fatal error: asm/system.h: No such file or directory

Bug Description

A rather weird way to hit the bug, but **I think it affects quantal** as well (as system.h doesn't exist anymore) -- that's why I'm reporting it as a bug. I installed the latest 3.4.4 kernel on Ubuntu 12.04 (from the kernel mainline PPA) and the package bcmwl-kernel-source from the Ubuntu quantal repositories:

```
$ apt-cache policy bcmwl-kernel-source
bcmwl-kernel-
Installed: 5.100.82.
Candidate: 5.100.82.
Version table:
*** 5.100.82. 100 /var/lib/
5. 500 http://
5. 500 http://
```

This happened when I installed/tried to reconfigure the package:

```
$ sudo dpkg-reconfigure bcmwl-kernel-source
Removing all DKMS Modules
Done.
Building for 3.2.0-26-generic and 3.4.4-030404-
Building for architecture i686
Building initial module for 3.2.0-26-generic
Done.
wl:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/
depmod....
DKMS: install completed.
Building initial module for 3.4.4-030404-
ERROR (dkms apport): kernel package linux-headers-
Error! Bad return status for module build on kernel: 3.4.4-030404-
Consult /var/lib/
update-initramfs: deferring update (trigger activated)
savvas@
```

DKMS make.log for bcmwl-5. (Mon Jul 2 13:16:56 CEST 2012):

```
make: Entering directory `/usr/src/
Wireless Extension is the only possible API for this kernel version
Using Wireless Extension API
LD /var/lib/
CC [M] /var/lib/
CC [M] /var/lib/
/var/lib/
compilation terminated.
make[1]: *** [/var/lib/
make: *** [_module_
make: Leaving directory `/usr/src/
```

The solution was to apply a patch similar to the one from comment #6 on bug https:/

```c
#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 4, 0)
#include <asm/system.h>
#endif
```

Again, I believe that this affects normal quantal installations; that's why I'm reporting it.
Hello Savvas, the precise-proposed version compiles fine via dkms against linux-lts-quantal's 3.5.

Looks ok now, thank you!

This bug was fixed in the package bcmwl - 5.100.82.112+bdcom-0ubuntu2

---------------
bcmwl (5.100.82.112+bdcom-0ubuntu2) quantal; urgency=low

  * 0004-Add-support-for-Linux-3.2.patch, dkms.conf.in:
    - Make sure the patch can always be applied.
  * debian/:
    - Always apply all patches (LP: #1020059).

 -- Alberto Milone <email address hidden>  Thu, 05 Jul 2012 13:20:00 +0200
https://bugs.launchpad.net/ubuntu/+source/bcmwl/+bug/1020059
import of glyphNameFormatter.reader crashes RF

- RafaŁ Buchner, last edited by gferreira

`import glyphNameFormatter.reader` prints something in a very long loop, so if I want to do something with this module, it takes a long time to import it.

hi, thanks for the bug report. this has already been fixed in glyphNameFormatter (see this commit) and in the current 3.3 alpha. in the meantime, you can download the latest glyphNameFormatter from GitHub and use that instead of the embedded one. see Overriding embedded libraries

- RafaŁ Buchner, last edited by

thanks, works

This was my mistake, I left a print statement in and rushed to make a release. It's a nasty bug; I fixed it, but that was after Frederik made the release. My apologies for the inconvenience.. will post a public beta later on today!
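Overriding an embedded library usually comes down to making the newer checkout win the import resolution. A minimal sketch (the helper and the checkout path are my own illustration, not RoboFont's actual mechanism):

```python
import sys

def prefer_local_library(path):
    """Put a local library checkout ahead of any embedded copy on sys.path,
    so that the next `import` resolves to it. The path is an assumption."""
    if path in sys.path:
        sys.path.remove(path)
    sys.path.insert(0, path)
    return sys.path[0]

# e.g. prefer_local_library("/Users/me/code/glyphNameFormatter/Lib")
# followed by: import glyphNameFormatter.reader
```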
https://forum.robofont.com/topic/580/import-of-glyphnameformatter-reader-crashes-rf/?page=1
Lucio Piccoli-2 wrote:
>
> 2) MessageConsumers
> Deploying both the broker and the app inside a single WAR works OK. However,
> since the client is a tight ass and using WLS Express, I can't use MDBs. So I
> am trying to instantiate MessageConsumers inside a servlet. The servlet gets
> preloaded using the <load-on-startup> in the web.xml. However, the servlet
> loads before the broker is loaded and it complains as follows.
> It seems like the servlet instance is trying to create a consumer before the broker
> is ready and complains badly.
> Is there a simple mechanism that allows a POJO to be configured as a
> MessageConsumer within a WebApp?
>

Looking through the SpringBrokerContextListener code, it seems that one can create a new ServletContextListener class that calls the broker setup and then initialises the MessageConsumers. This will ensure that the broker is started before the queues are created. Does this approach seem reasonable?

public class SpringBrokerContextListener implements ServletContextListener {

-lp
http://mail-archives.apache.org/mod_mbox/activemq-users/200605.mbox/%3C4654043.post@talk.nabble.com%3E
The C++11 random normal_distribution produces random numbers x using the probability density function of the distribution; the function is shown at the end of the post. The distribution class declaration is shown below.

```cpp
template<class RealType = double>
class normal_distribution;
```

The class's default type is double, and note that this distribution can generate only floating-point values (real numbers). The distribution is based on the normal probability distribution.

The types and member functions of the class are shown below.

Types

RealType is a type definition of the template type, and param_type is a structure; note that the definition of param_type will vary from compiler to compiler.

Constructors and reset function

The first constructor accepts two parameters, 'mean' and 'stddev', whose default values are 0 and 1. These default values are the same in all compilers. These two parameters are used in determining the probability of the random values in the distribution. The relation 0 < stddev must hold for 'stddev'.

The second constructor accepts a param_type object, and in this case the values of 'mean' and 'stddev' are deduced from the 'mean' and 'stddev' values of the param_type object.

Code example

```cpp
normal_distribution<float> nd1;
normal_distribution<float>::param_type pt(56.01, 6.7);
normal_distribution<long double> nd2(pt); // error! pt's param_type is for float, but nd2 is long double
```

reset()

The reset() function resets the state of the distribution.

Generating functions

the first operator() function

The generated random sequence is obtained using the operator() function. The first overloaded operator() accepts a URNG (Uniform Random Number Generator), or engine.

Code example

```cpp
default_random_engine dre;
cout << nd(dre) << " " << nd(dre) << endl;
```

Output in Code::Blocks,

```
-1.08682 -0.121966
```

the second operator() function

The second overloaded operator() function accepts a URNG and a param_type object.
Link : C++11 random linear_congruential_engine

Code example

```cpp
normal_distribution<float>::param_type pt(56.01, 6.7);
linear_congruential_engine<unsigned int, 193703, 0, 83474882> lce;
cout << nd(lce, pt) << " " << nd(lce, pt) << endl;
```

Output in Code::Blocks,

```
55.132 55.9866
```

Property functions

mean() function

This function returns the 'mean' value of the distribution.

Code example

```cpp
normal_distribution<> nd, nd1(900, 10);
cout << nd.mean() << endl << nd1.mean();
```

Output,

```
0
900
```

stddev() function

This function returns the 'stddev' value of the distribution.

Code example

```cpp
normal_distribution<> nd, nd1(900, 10);
cout << nd.stddev() << endl << nd1.stddev();
```

Output,

```
1
10
```

param()

This function returns the param_type object.

Code example

```cpp
cout << nd.param().mean() << endl << nd.param().stddev();
```

Output,

```
123
893
```

param(param_type)

Using this function we can change the 'mean' and 'stddev' values of the distribution to the 'mean' and 'stddev' values of a param_type object, by passing the param_type object.

Code example

```cpp
cout << nd.mean() << endl;
normal_distribution<float>::param_type pt(56.01, 6.7);
nd.param(pt);
cout << nd.mean();
```

Output,

```
5000
56.01
```

min() function

The min() function returns the smallest value the distribution can generate. Since the normal distribution is unbounded below, the exact value returned is implementation-dependent; here the printed value was 0.

Code example

```cpp
cout << nd.min();
```

Output,

```
0
```

max() function

The max() function returns the largest value the distribution can generate. It returns the value of numeric_limits<result_type>::max().

Code example

```cpp
cout << nd.max();
```

Output,

```
3.40282e+038
```

operator== and operator!= functions

These two functions compare the parameters of two distribution objects. If the two parameter sets are equal, operator== returns 1 and operator!= returns 0. Note that operator== always returns true (and operator!= always false) as long as the 'mean' and 'stddev' of the two compared objects are equal, no matter what state the two objects are in.
Code example

```cpp
normal_distribution<> nd, nd1, nd2(0.23, 5.69), nd3(90, 9.56);
cout << (nd == nd1) << endl
     << (nd != nd1) << endl
     << (nd == nd3);
```

operator>> and operator<< functions

These two operators allow you to save the state of the engine and the distribution. Using the operator<< function we can save the state of the distribution to a 'stringstream' object, and using the operator>> function we can read the state of the distribution or engine saved in the 'stringstream' object and reassign it to the distribution's or engine's object. The same state can help in reproducing the same sequence which was generated earlier, when that state was first reached. Note that to reproduce the same sequence, not only the distribution's state but also the engine's state must be the same.

In the code below we will save the distribution state and also the engine's state; this way we will be able to reproduce the same random sequence.

Code example

```cpp
#include <iostream>
#include <random>
#include <sstream>

using namespace std;

int main()
{
    stringstream engState,  // object to save the engine state
                 disState;  // object to save the distribution state

    normal_distribution<> ndIO1(25.6, 90.4), ndIO2(25.6, 90.4);
    default_random_engine dre1, dre2;

    cout << ndIO1(dre1) << endl; // first random number

    disState << ndIO1; // save the 2nd distribution state of ndIO1 to disState
    engState << dre1;  // save the 2nd engine state of dre1 to engState

    cout << "Second and third state output of ndIO1 and dre1 \n";
    cout << ndIO1(dre1) << " ";
    cout << ndIO1(dre1) << endl;

    disState >> ndIO2; // reassign the distribution state saved in disState to ndIO2
    engState >> dre2;  // reassign the engine's state saved in engState to dre2

    cout << "\n\nOutputting the random sequence using ndIO2 and dre2 current state\n";
    cout << ndIO2(dre2) << " ";
    cout << ndIO2(dre2) << endl;

    cin.get();
    return 0;
}
```

Output in Code::Blocks,

```
14.5743
Second and third state output of ndIO1 and dre1
-72.6484 87.4598

Outputting the random sequence using ndIO2 and dre2 current state
-72.6484 87.4598
```

The second state of ndIO1 and dre1 has been assigned to ndIO2 and dre2, so the numbers generated by the second state (and its successive states) of ndIO1 and dre1 are the same as the sequence generated by the current state (and its successive states) of ndIO2 and dre2. Note that reproducing the same sequence can be useful for debugging purposes.

normal_distribution produces random numbers x distributed according to the probability density function

f(x; mean, stddev) = (1 / (stddev · √(2π))) · exp(−(x − mean)² / (2 · stddev²))

Note that in normal_distribution the generated sequence will have more values clustered around the 'mean' value.

Link : C++ cmath exp function
https://corecplusplustutorial.com/cpp11-random-normal_distribution/
I have two SQL database tables with a 1:n relationship. For my ASP.NET MVC solution I have enabled EF code-first migration and established the proper DbContext and classes. I would like to write an MVC controller that joins both tables in order to select specific records for display in a view. Here are the two classes:

```csharp
public class Tbl_Group_Communities : Entity
{
    public string GKZ { get; set; }
    public int G_ID { get; set; }
}

public class Katastralgemeinden : Entity
{
    public string KGNr { get; set; }
    public string KGName { get; set; }
    public string GKZ { get; set; }
    public string GemeindeName { get; set; }
}

public IEnumerable<Tbl_Group_Communities> Get()
{
    var entities = UnitOfWork.GetAll<Tbl_Group_Communities>().ToList();
    return entities;
}
```

You can do an inner join as shown below. Assumption: your table names are Tbl_Group_Communities and Katastralgemeinden, in other words the same names as the classes.

```csharp
from s in db.Tbl_Group_Communities
join sa in db.Katastralgemeinden on s.GKZ equals sa.GKZ
where s.G_ID == 1
select s
```

You can learn more about joins here: Join Operators
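The same inner join can also be written in LINQ method syntax. A hypothetical sketch, assuming `db` is your DbContext with the two DbSets:

```csharp
// Hypothetical: "db" is assumed to be your DbContext instance.
var records = db.Tbl_Group_Communities
    .Join(db.Katastralgemeinden,
          s  => s.GKZ,            // outer key
          sa => sa.GKZ,           // inner key
          (s, sa) => new { Community = s, KG = sa })
    .Where(x => x.Community.G_ID == 1)
    .Select(x => x.Community)
    .ToList();
```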
https://codedump.io/share/HV5rcBf9RpUc/1/how-to-join-two-tables-with-linq-in-an-mvc-controller
With the free Turbo Explorer line, Borland brings programming to the masses. Each of the four Turbo Explorer 'personalities' targets a different combination of programming language and platform: Turbo Delphi and Turbo C++ for Windows, and Turbo Delphi and Turbo C# for .NET. There are over 200 components for building programs, but the abundance of features may be overwhelming to new users.

Review: Borland Turbo Explorer, by Thom Holwerda.

2006-09-06 10:53 am, memson:

Um.. are you sure about that? NeXT was around in the late 80's. They had certainly released NextStep well before 1995. Delphi was released in 1995. AFAIK, InterfaceBuilder was always part of NextStep. In fact:

2006-09-05 9:25 pm, MollyC:

samad, I'll have to disagree with you regarding Cocoa. I'm fairly new to Cocoa, but here's my take on it. I think Cocoa is outdated. Objective C is basically Smalltalk C. There was a goal to create an object-oriented C, and at the time, it was thought that Smalltalk was the epitome of object-oriented"ness". So they added a bunch of Smalltalkisms to C and thus begat Objective C. The problem is that there's been a lot of advancement in object-oriented thinking since then, and Objective C has not kept up with the times. I think the .NET runtime and framework blow Cocoa away. I did see that Apple is introducing garbage collection to Cocoa with Leopard, which is a step in the right direction (although I kind of like Cocoa's "autorelease" mechanism). The problem here is that while Leopard's Cocoa runtime might support garbage collection, the Cocoa API does not, so it seems that it'll only be useful in code that doesn't use the Cocoa API. Similar to how the runtime supports exception handling, but the API doesn't, so nobody actually uses exceptions. Cocoa's lack of namespaces also needs to be addressed. I don't even like InterfaceBuilder, even though many praise it. It seems overengineered to me and clunky.

As for the API itself, the newer "NS" APIs are pretty clean, but the old "NS" APIs that came from the NextStep days are a mess, with horrible documentation to boot.

Caveat to the above: I only started Cocoa programming as a hobby during the last year or so, and am not an expert. So feel free to correct me and call me an idiot. 🙂

2006-09-05 9:42 pm, zetsurin:

"I don't even like InterfaceBuilder, even though many praise it. It seems overengineered to me and clunky."

Totally agree with you there. Tried it for a while then gave it a miss. It's about the only programming tech I gave up on, because I simply don't agree with how it works; it's just too over the top. All this dragging little connectors about and whatnot seems daft.

2006-09-05 11:20 pm, samad:

You're definitely right about Obj-C being outdated in many ways (i.e., no namespaces, lack of exceptions). However, I think Obj-C's loose bindings benefit the realities of GUI OO programming. Obj-C allows you to query an object's methods and members dynamically during runtime. You can even write a class that impersonates another class during runtime. Qt has its own C++ preprocessor (moc) in order to create a somewhat looser bindings environment, and thus easier programming, via signals and slots.

As for memory management, I think it's a matter of preference. Complete garbage collection almost removes the need for the programmer to handle memory management, but it comes with a cost: performance and space. Some programmers want complete control over the memory and like to manually allocate/deallocate objects. Obj-C is sort of a middle path, where using release, retain, and autorelease allows some degree of the programmer's intervention in the way memory is handled, with little performance impact.

2006-09-06 2:20 pm, steve_s:

WRT the "lack of exceptions" criticism of ObjC:… I've used exceptions myself quite a few times… Lack of namespaces is indeed a bit of a flaw in ObjC.

Personally I like the relative simplicity of ObjC when compared to some other OO languages. As you point out, the loose bindings and general dynamic nature of the language do indeed make it a good fit for the realities of GUI programming. I am personally very interested in seeing how garbage collection works in ObjC 2 (on Leopard). The existing release/retain/autorelease mechanism works pretty well (better than malloc at least 🙂 ) but it does take a bit of getting used to. I look forward to this.

I was interested in dabbling in Delphi a couple years ago, only to find that the cheapest Delphi distro was $1500. Borland did offer a "personalized" Delphi for free, but it only came on a CD packaged with some obscure European magazine. I'm glad Borland saw the light and is appealing to hobbyists again (maybe it took Microsoft's Express line to wake Borland up).

2006-09-05 10:42 pm, Workn:

None of the other solutions mentioned in the comments even come close to Delphi in terms of productivity and ease of use; not even VS 2005 is as easy to use or productive as even Delphi 5, which is 7 years old. Delphi also blows VS away when it comes to database application development: you can write full-featured database apps in Delphi with very little code, while in VS you have to write tons and tons of code. Another area where Delphi shines is 3rd-party library/component support; go to SourceForge once and do a search on Delphi and see how many hits you get. It's impressive 🙂 It's just a shame so many IT managers simply follow the MS bandwagon without ever realizing Delphi is the best.

I have used Delphi all the way back to version 1.0 in Win16. The VCL is one of the most enjoyable, and easy to use, libraries I've ever seen. Very feature-full. It does suck that Borland chose to cripple the free version at the visual component creation and registration level, since that is imo what has always made Delphi so flexible and useful.

I switched earlier this year from VS2005 to BDS2006 using Delphi, and I have to say it's so much faster and less buggy to use than VS. I've never programmed Delphi before this, and it was very easy to learn and get up to speed quickly. I just hope this does take off, because I really think it's a nice IDE and app developer for non-.NET apps. The 3rd-party components make a world of difference as well. TMS Software's AdvStringGrid is by far the fastest, most versatile grid I've ever used. ProfGrid is about its equal, with awesome built-in formulas like Excel's. TeeChart Pro is also a very excellent charting package. This comes from a guy that used ComponentOne's FlexGrid and Dundas Charts. These controls are just way faster and more feature-packed than even those ones, period. Too bad they didn't have something this good on Linux. Really, when you can drag and drop and have an app up quickly doing data analysis/charting, this is the bomb. In a production environment it's very awesome for the quick turnarounds.

There is actually another program called Lazarus, which is an IDE very similar to Delphi. It uses Free Pascal as the underlying language, which can be compatible with Borland Pascal. It's open source and nearing its version 1 release. Those interested in Turbo Explorer may also want to check Lazarus (), as its code has a strong advantage on portability, since Free Pascal can compile the same code for a number of different OSes (Windows, Linux, Mac, etc). I think using both Turbo Explorer and Lazarus at the same time can be complementary, as Lazarus has the potential of being more feature-rich than the free version of Turbo Explorer, while Turbo Explorer can be a good way of learning how to do things the Pascal way, with its more reliable IDE and better help files.

2006-09-06 9:08 am, AkiFoblesia:

Last time I checked, Lazarus IDE didn't support copy/paste from external applications… Has this changed?

…yup, it has changed. Copy/paste is now supported. Moreover, lots of bugfixes have been made, though a lot are yet to be done as well. Also, in comparison to Turbo Explorer, Lazarus is able to freely add in components. For the moment, though, I think I'm going to check what the Turbo Explorer free edition has in store this weekend. As a hobbyist who prefers the Pascal language, I'd say these two are great to have at hand.

You just can't install them in the component palette. You can create and use them in your code at run time; you simply have to add the correct units to the uses list. For example, you can easily use Synapse () for TCP/IP programming like so:

```pascal
uses blcksock;

var
  sock: ttcpblocksocket;
  response: string;
begin
  sock := ttcpblocksocket.create;
  sock.connect('bla.com', 5432);
  if sock.lasterror = 0 then
  begin
    sock.sendstring('bla bla bla' + CRLF);
    // wait for a response
    response := sock.recvstring(5000);
  end
  else
  begin
    // error occurred, let's see it
    showmessage(sock.lasterrordesc);
  end;
end;
```

The point here is that you can use most 3rd-party components with a little extra work. You would also have to create the event handlers by hand, but it can be done.

It's good to see Borland getting back to what made it an up-and-coming development company: inexpensive, strong software. I'm just hoping it's not the latest in a long string of schemes to get back to a stable business. Going back to the days of CP/M, Turbo Pascal filled a need for all of us who had been using some BASIC interpreter or some version of UCSD Pascal. For $50, you got a good, fast compiler that produced native code and worked pretty well. By version 3.0, they'd included a menu and the first commercially-available IDE. Seeing Pascal and C++ in inexpensive versions (no longer free) that are hopefully well-supported should move more schools and hobbyists to Borland. It's good to see that they're embracing .NET as well, because the world could use some sanity there too.

I'd have to say Cocoa is probably the best GUI toolkit/environment I have ever worked with, and I have had significant exposure to the Win32 API, Java Swing, and GTK/GNOME. A lot of what this article talks about comes pretty standard with Cocoa. Cocoa has a bit of a learning curve, but the API, language, and tools (i.e., Interface Builder) fit together very well. I wish more developers would consider it. Even if you hate Xcode (I do), it is possible to write Makefiles that build OS X apps properly.
https://www.osnews.com/story/15738/review-borland-turbo-explorer/
public class SyncInfo extends Object implements IAdaptable

IResourceVariantComparator is used to decide which comparison type to use.

For two-way comparisons, a SyncInfo node has a change type. This will be one of IN_SYNC, ADDITION, DELETION or CHANGE, determined in the following manner:

- ADDITION if it exists locally and there is no remote.
- DELETION if it does not exist locally and there is a remote.
- CHANGE if both the local and remote exist but the comparator indicates that they differ. The comparator may be comparing contents or timestamps or some other resource state.
- IN_SYNC in all other cases.

For three-way comparisons, the sync info node has a direction as well as a change type. The direction is one of INCOMING, OUTGOING or CONFLICTING. The comparison of the local and remote resources with a base resource is used to determine the direction of the change.

Methods inherited from class Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait

String getLocalAuthor(IProgressMonitor monitor)
Returns the author of the local resource, or null if it doesn't have one. For example, if the local file is shared in CVS, this would be the revision author.
Parameters: monitor - the progress monitor (may be null)

equals - Overrides: equals in class Object
public int hashCode() - Overrides: hashCode in class Object
public Object getAdapter(Class adapter)
String toString() - Overrides: toString in class Object

Copyright (c) 2000, 2017 Eclipse Contributors and others. All rights reserved. Guidelines for using Eclipse APIs.
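The two-way change-type rule above can be sketched as a small helper. This is a hypothetical illustration, not part of the real Team API: `local`/`remote` stand in for the resource variants, and `differ` stands in for the IResourceVariantComparator's verdict.

```java
// Hypothetical sketch of the two-way change-type rule described above.
// Not part of the Eclipse Team API.
class TwoWayKind {
    static String kind(Object local, Object remote, boolean differ) {
        if (local != null && remote == null) return "ADDITION";  // exists locally, no remote
        if (local == null && remote != null) return "DELETION";  // no local, remote exists
        if (local != null && remote != null && differ) return "CHANGE"; // both exist, comparator says they differ
        return "IN_SYNC";                                        // all other cases
    }

    public static void main(String[] args) {
        System.out.println(kind(new Object(), null, false)); // prints ADDITION
    }
}
```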
http://help.eclipse.org/oxygen/nftopic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/team/core/synchronize/SyncInfo.html
This is your resource to discuss support topics with your peers, and learn from each other.

08-25-2009 11:17 AM

Hi all, how can I make use of a Sprite class object in my Manager class? Is it possible to use the Graphics class of javax.microedition.lcdui in my app that extends UiApplication and pushes a screen extending MainScreen? I tried, but I didn't get any results; the screen remains a white screen. So let me know if there is any approach or other alternative to make this possible.

Thanks and Regards,
PraveenG

08-25-2009 02:16 PM

Blanc,

You should not (or cannot) mix the javax.microedition.lcdui classes and the net.rim.device.api.ui classes. This will not work. You will have to implement your own sprite class if you want to use the MainScreen or FullScreen. This is not that difficult to do... I have a sprite class that has the bitmap name, and when this sprite is loaded, it checks to see if the bitmap (a PNG file) is already loaded in my graphics library class. If so, my graphics library class returns the reference to the bitmap and increases a usage counter by 1. If not, it loads the PNG from my resource and increases the usage counter by one. When the sprite is destroyed, it subtracts 1 from the usage counter and, if it is 0, it unloads the PNG file. This saves memory.

The class should have x and y coordinates as well as any other attributes you think your sprite may need. Width, height, energy, speed? Keep in mind that when you draw your sprites, you start with the y value and subtract the height. This way, you can sort your sprites so that a sprite in the background does not overlap a sprite in the foreground.

Another thing to do is to make a draw(Graphics g) method in your main sprite class. All of the different sprite classes will extend from it and will override the draw(Graphics g) method.

Let me know if you have any questions; as you see, I've been doing this for a while.
Also, I do not know your background and I don't know if I have to get more specific or not. Thanks 08-26-2009 12:44 AM Lord, Can I have a detailed explanation or steps to implement my own sprite class? Is it the same thing as shown in the code below, or is there some specific implementation?

class MySprite extends javax.microedition.lcdui.Sprite {
    // some code
}

class Myscreen extends MainScreen {
    MySprite mysprite;
    // what is the code to be placed to use it??
}

If you have any sample implementations, let me know; they will be of great use to me. Thanks and Regards, PraveenG 11-10-2009 05:53 AM Lord, Sorry to reply to such an old post, but I too am porting from J2ME to net.rim.device.api.ui. I understand what you said about Sprites, and thank you. What did you do for TiledLayer, LayerManager and Layer? Did you extend the Field manager? Thanks 11-12-2009 01:48 PM I must have missed this post... Blanc, My Sprite class has some methods like:

public class MySprite {
    private Bitmap m_bitmap;
    private MySprite m_player;
    private int m_nX;
    private int m_nY;
    private int m_nWidth;
    private int m_nHeight;

    public void setPlayerSprite(MySprite playerSprite) {
        m_player = playerSprite;
    }

    public void process() {
        if (m_player.getX() > m_nX + m_nWidth)
            m_nX += 4;
        if (m_player.getX() + m_player.getWidth() < m_nX)
            m_nX -= 4;
    }

    public void draw(Graphics g) {
        m_bitmap.draw(....);
    }
}

Main Engine Loop in the run method:

..
int nLen = sprites.size();
for (int i = 0; i < nLen; ++i) {
    // Call process() on each sprite
    // Maybe you can sort the sprites for the Z-order
    // Draw the sprites
}
...

in the paint event:

..
int nLen = sprites.size();
for (int i = 0; i < nLen; ++i) {
    // Call sprite.draw(g); for all the sprites.
}
...

This is a small snippet of how the sprite class can be, and notice how I'm not extending from the J2ME sprite class. dsmaltz, I always managed my layers myself.
What I would do is create one large bitmap with the different frames and then I would reference each frame in the actual code.
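The usage-counter scheme described in this thread can be sketched as a small reference-counted cache. This is a hypothetical, platform-neutral sketch; the class name ImageCache, and the String standing in for the decoded Bitmap, are illustrative inventions, not part of the BlackBerry API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical reference-counted image cache, mirroring the scheme above:
// load the PNG on first use, bump a usage counter on each acquire, and
// unload it when the counter falls back to zero.
class ImageCache {
    private static class Entry {
        Object image;   // stand-in for the decoded Bitmap
        int refCount;
    }

    private final Map<String, Entry> entries = new HashMap<>();

    // Return the cached image, loading it on first use.
    public Object acquire(String name) {
        Entry e = entries.get(name);
        if (e == null) {
            e = new Entry();
            e.image = "decoded:" + name; // stand-in for loading the PNG resource
            entries.put(name, e);
        }
        e.refCount++;
        return e.image;
    }

    // Drop one reference; unload the image when nobody uses it any more.
    public void release(String name) {
        Entry e = entries.get(name);
        if (e == null) {
            return;
        }
        if (--e.refCount <= 0) {
            entries.remove(name); // frees the memory, as described in the post
        }
    }

    public boolean isLoaded(String name) {
        return entries.containsKey(name);
    }

    public static void main(String[] args) {
        ImageCache cache = new ImageCache();
        cache.acquire("hero.png");
        cache.acquire("hero.png");   // second sprite shares the same bitmap
        cache.release("hero.png");
        System.out.println(cache.isLoaded("hero.png")); // still one user left
        cache.release("hero.png");
        System.out.println(cache.isLoaded("hero.png")); // unloaded now
    }
}
```

When the last sprite referencing a bitmap is destroyed, release() drops the entry and the image becomes collectable, which is the memory saving the post describes.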
https://supportforums.blackberry.com/t5/BlackBerry-World-Development/Using-Sprite-Class/m-p/317951
CC-MAIN-2016-36
refinedweb
711
82.75
Hi David, David Trudgett wrote: > > Some questions I have in my mind: > > 1. I used "raw" and you used "PrincipiaSearchSource()" . The reason I used > raw was because I wanted to be sure that what I got wasn't munged in any > way, and that what I put back reflects what was actually there before. Sounds fair enough. I really shouldn't have used PrincipiaSearchSource(), as the method name doesn't reflect the function I wanted it to perform. The method read_raw() would have been more appropriate. However, my background developing Java applications programs causes me to favour calling methods to directly pulling in attributes. I *know* the Python idiom, but I'm not completely comfortable with it yet :-) > 2. You used the "manage_edit()" method, whereas I just assigned to the > "raw" property. My way seems to work OK, but I'm not sure how yours works: > I assume it brings up an edit page in the browser for each document? My external method isn't returning anything, and isn't passing a REQUEST or a RESPONSE, so there is nothing returned to the browser at all. A better version of the external method would return a nicely formatted status message to the browser. The difference between setting "raw" directly and using manage_edit() is that the latter will parse and check the syntax of and save a cooked version of the DTML. As you just directly set the attribute "raw", you *might* find that your change aren't all reflected in the operation of the methods you've changed. However, as you've only changed some HTML formatting, this shouldn't be a problem with what you've done as yet. Using manage_edit() will also alert you to invalid syntax in your changed version by raising a ParseError, that will be visible in the browser. If there are additional triggers in a class to get it recatalogued in various special ways, these might only be triggered from methods like manage_edit(), whereas setting an attribute will only trigger standard catalogue awareness. > 3. 
I don't like resorting to testing the "meta_type" for a particular > string value. As you noted in your code, it doesn't allow for subclassing, > so it's not fully general. I agree somewhat. However, I think that testing the meta_type is the most Zope-friendly way to do it :-) For example, in a pathological case, I could write a Python class in a Product that ostensibly inherits from DTML Method, but completely changes the way the attribute "raw" is used. > 4. I was surprised that the import statement (not to mention > "re.compile()") could be put outside of the method definition, considering > that Zope's external methods grab on to individual methods within a Python > module. Think about the way Python loads in functions and classes: the file gets read into the interpreter, and statements get executed (which runs them), whilst function definitions get executed (which causes their definitions to appear in the namespace somewhere). -- Steve Alexander Software Engineer Cat-Box limited _______________________________________________ Zope maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! **
https://www.mail-archive.com/zope@zope.org/msg03333.html
CC-MAIN-2019-04
refinedweb
548
59.33
On Sun, Nov 11, 2007 at 07:39:54PM +0330, Ali Gholami Rudi wrote:
> idxoftag("x") fails if x is a tag name; it compares the pointers and
> not the strs.
>
> -- Ali
> diff --git a/dwm.c b/dwm.c
> --- a/dwm.c
> +++ b/dwm.c
> @@ -863,7 +863,7 @@ idxoftag(const char *tag) {
> idxoftag(const char *tag) {
> 	unsigned int i;
>
> -	for(i = 0; (i < LENGTH(tags)) && (tags[i] != tag); i++);
> +	for(i = 0; (i < LENGTH(tags)) && strcmp(tags[i], tag); i++);
> 	return (i < LENGTH(tags)) ? i : 0;
> }

Regards,
-- Anselm R. Garbe >< >< GPG key: 0D73F361
Received on Sun Nov 11 2007 - 17:26:38 UTC
This archive was generated by hypermail 2.2.0 : Sun Jul 13 2008 - 15:05:51 UTC
http://lists.suckless.org/dwm/0711/4244.html
CC-MAIN-2019-30
refinedweb
121
75.1
hi, in the code below cin.get() doesn't work correctly (maybe I'm mistaken). The first time I press Enter, the cursor comes back to the beginning of the line, and the next time, the new line overwrites what would be output on the next line.

Code:

#include <iostream>
#include <string>
using namespace std;

string getPassword();
bool Login(string, string);

int main()
{
    string username;
    string password;
    int cmd;
    while (true)
    {
        cout << "Select your choice: " << endl
             << " 1. Login" << endl
             << " 2. Whatever!" << endl
             << " 3. Exit" << endl;
        cin >> cmd;
        switch (cmd)
        {
        case 1:
            cin.ignore();
            cout << "Enter username: ";
            getline(cin, username);
            cout << "Enter Password: ";
            password = getPassword();
            if (!Login(username, password))
            {
                cout << "> Username or Password was wrong!" << endl
                     << "> Press Enter to continue" << endl;
                cin.get(); // problem is here: I should press Enter twice!!?
                cout << endl; // and whatever I cout here will be ignored
                break;
            }
            break;
        case 2:
            // do something
            break;
        case 3:
            return 0;
        }
    }
}

I have tried different solutions I found over the internet but cannot solve this. Thanks in advance.
http://cboard.cprogramming.com/cplusplus-programming/132591-cin-get-problem.html
CC-MAIN-2014-52
refinedweb
187
73.37
jsp online exam... jsp online exam... Plz provide me front end jsp online exam jsp online exam i have designed a html page with 20 multiple choice questions ...options are in radio button for each question .... now i have to retrieve all the selected answers and save it to database match it with correct how to create online exam in jsp and database how to create online exam in jsp and database learing stage ,want to know how to create online exam Free Java Books Free Java Books Sams Teach Yourself Java 2 in 24 Hours As the author of computer books, I spend a lot...; Noble and Borders, observing the behavior of shoppers browsing through the books Free JSP Books Free JSP Books Download the following JSP books... it is sent by POST. How to using JSP HTML Form Free JSP Books Free JSP Books  ... Servlet 2.3 filtering, the Jakarta Struts project and the role of JSP and servlets... Insiders, all free, no registration required. If you are interested in links to JSP online examination system project in jsp online examination system project in jsp How many and which data tables are required for online examination system project in jsp in java. please give me the detailed structure of each table Struts Books ; Free Struts Books The Apache.... Informit Safari Tech Books Online... Many good books on the framework are now available, and the online online examination system project in jsp online examination system project in jsp How to show the status bar which shows how much time is remaining in online examination system in jsp.my Download Button - JSP-Servlet Download Button HI friends, This is my maiden question at this site. I am doing online banking project, in that i have "mini statement" link. And i want the Bank customers to DOWNLOAD the mini statement. Can any one help me Servlets Books ; Books : Java Servlet & JSP Cookbook...; Holborn Books Online Core Servlets... 
leading free servlet/JSP engines- Apache Tomcat, the JSWDK, and the Java Web Server bookstore - JSP-Servlet online bookstore i want to display some books like online shoping.please send me code for that using jsp and servlets Programming Books download the entire book in PDF format for free, and you will also find... Java Programming Books  ... As the author of computer books, I spend a lot of time loitering in the computer JSF Books ; O'Reilly Safari Online Books of JSF Over the last few.... Safari Books Online JSF...JSF Books   jsp project - JSP-Servlet jsp project sure vac call management project JSP Programming Books JSP Programming Books  ... using Servlet 2.3 filtering, the Jakarta Struts project and the role of JSP... content authored by the JSP Insiders, all free, no registration required. If you Project in jsp Project in jsp Hi, I'm doing MCA n have to do a project 'Attendance Consolidation' in JSP.I know basic java, but new to jsp. Is there any JSP source code available for reference...? pls Tomcat Books , there are currently few books and limited online resources to explain the nuances of JSP... Tomcat Books  ... and Code Download Tomcat is an open source web server that processes JavaSer PDF books | Free JSP Books | Free JSP Download | Authentication... uing JDBC in JSP | Download CSV File from Database in JSP JSP... | Download images from Database in JSP | How to Create JSP Page | How Need E-Books - JSP-Servlet jsp online ex jsp online ex wat is the parametr u passed ...ques jsp online ex jsp online ex how to sum up the count of a cloumn in mysql and display ot on jsp when asked to calculate Free J2EE Online Training Free J2EE Online Training The Enterprise Edition of Java popularly known... professionals free online J2EE training there are various websites but the quality.... So give a boost to your career with free online J2EE training Project JSP Project Register.html <html> <body > <form...; <%! 
%> <jsp:useBean <jsp:setProperty < JAZZ UP - Free online Java magazine JAVA JAZZ UP - Free online Java magazine  .... API Updates The Apache Project has... and JavaServer Pages (JSP) for creating enterprise-grade web applications. Earlier jsp sir i want to jsp code for online examination like as bank po,,,,,,plz help me sir JSP JSP FILE UPLOAD-DOWNLOAD code USING JSP Ajax Books Ajax Books AJAX - Asynchronous JavaScript and XML - some books and resource links These books... there have been few books on DHTML and only 2-3 that cover AJAX. But that will change jsp Jsp pagination - JSP-Servlet Jsp pagination Iam doing a online exam application for this i need... tell me the solution Hi friend, For Jsp pagination application visit to : Thanks secure online payment code for jsp secure online payment code for jsp how to implements online payment in jsp download - JSP-Servlet download here is the code in servlet for download a file. while...; /** * * @author saravana */ public class download extends HttpServlet...(); System.out.println("inside download servlet"); BufferedInputStream online examination system mini project online examination system mini project i developed a project on online examination system using jsp and java script . I am getting the quetion... in finding out that..only that part had left for my project completion.please help me online bookstore - JSP-Servlet online bookstore i want code for online bookshop.please send mo soon.......... Hi Friend, Please specify some details. Thanks download image using url in jsp download image using url in jsp how to download image using url in jsp JSP Tutorials Resource - Useful Jsp Tutorials Links and Resources Pages (JSPs) has typically required the use of several books as well as online...JSP Tutorials JSP Tutorial The prerequisites for the tutorial Online Quiz Application in JSP Online Quiz Application in JSP  ... are going to implement of Online quiz application using of JSP. Step 1: Create... 
quiz question and answer form using with JSP or JDBC database. Here programming - JSP-Servlet programming hello, I am doing online exam project using jsp-servlet... to retrieve next question when click on next button in jsp page and that question will come in same jsp page. please help me how can I do how to make exampage in jsp ? how to make exampage in jsp ? how to make a online exam page in jsp and servelet online shopping code using jsp online shopping code using jsp plz send me the code of online shopping using jsp or jdbc or servlets plz plz help me Need Help in creating online quiz application using JSP - JSP-Servlet Need Help in creating online quiz application using JSP Hi, i am creating online Quiz application using JSP and MySQl ,Apache 6 in Netbeans IDE. i..., JSP Page Online Quize project - JSP-Servlet project i have to do a project in jsp... plz suggest me some good topics. its an mini project.. also mention some good projects in core java... reply urgently Hi friend, Please visit for more Jsp Jsp Hi, I am doing a project in that i want to upload a resume.In resume we should restrict mailId and mobile number send me the solution as soon as possible jsp hello sir.....i am doing one web application project in which i have one module called user management and into usermanagement two submodules are there user category and user........both having their own add,edit and delete project project how to code into jsp of forgot password
http://roseindia.net/tutorialhelp/comment/96204
CC-MAIN-2014-41
refinedweb
1,237
63.09
There are a lot of moving parts in Redux and its associated libraries. It's challenging for newcomers to figure out what's going on -- especially when you mix in React, React-Router, React-Redux, React-Router-Redux, etc. (Yes, it's common to have all four of those libraries in the same project along with Redux). Fortunately, the individual parts are small and easy to understand. And... I want to help you understand. But rather than risk you rage-quit programming because of the (N+1)th TodoMVC tutorial, I've tried to put together a game-like tutorial to make learning Redux more fun. My goal is to teach you Redux and -- equally important -- make you aware of the awesome third-party libraries in the ecosystem. This is Part 1 in the series. Forget about React, Gulp, Webpack, Express, and every other crazy dependency. There is no need to set up a build pipeline and a web server with hot reloading, etc. (in game parlance, Nightmare mode). We're going to do this on super easy mode. We'll code directly in a JSFiddle sandbox. Let's start with a very, very simple game. Not even really a game. We are going to start by writing a tiny program that lets you level up. We'll build on it from there. Take a minute to study the following program.

const { createStore } = Redux;

//
// Actions
//
const Actions = {
  LEVEL_UP: 'LEVEL_UP',
};

//
// Action Creators
//
const levelUp = () => ({
  type: Actions.LEVEL_UP
});

//
// Reducers
//
const levelReducer = (state = 1, action) => {
  switch (action.type) {
    case Actions.LEVEL_UP:
      return state + 1;
  }
  return state;
};

//
// Bootstrapping
//
const store = createStore(levelReducer);

//
// Run!
//
console.log(store.getState());
store.dispatch(levelUp());
console.log(store.getState());

const { createStore } = Redux;

Import the createStore function from Redux. This style of assignment is called Destructuring. Also note that JSFiddle puts Redux in the global namespace.
In a real app we'd write:

import { createStore } from 'redux';

const Actions = {
  LEVEL_UP: 'LEVEL_UP',
};

Here we declare all the possible types of actions our game has. We are starting small and will... level up.

const levelUp = () => ({
  type: Actions.LEVEL_UP
});

This is called an 'action creator' in Redux. It's a function that creates a plain Javascript object that describes an action to take -- i.e. a state change. To make the action actually happen, we need to dispatch it to the store (see below). Our level-up action doesn't take any parameters but we'll see examples of that later. If you're new to ES6... here's what the ES5 Javascript equivalent would look like:

function levelUp() {
  return {
    type: Actions.LEVEL_UP
  };
}

const levelReducer = (state = 1, action) => {
  switch (action.type) {
    case Actions.LEVEL_UP:
      return state + 1;
  }
  return state;
};

'Reducer' is a fancy name for a function that, given the current state and an action, calculates the next state. In our example, the state is the current level (which defaults to 1). When our reducer processes the LEVEL_UP action, it will return an incremented state (i.e. level). For reasons I will gloss over here, we must return the state untouched if our reducer ignored the action (line 25).
Next time we'll use that rather than manually printing the state. So... you now have a fully-functioning Redux application. Yaay. But hey, you can write the same thing without Redux in 4 lines. Well sure you can. But the example functionality is trivial. Let's add some complexity. We'll make this a little more interesting by introducing our main character. In addition to having a level, our hero exists at some position in the world, has a concept of current health and maximum health, and some kind of capacity to carry items. I've added these new elements to the following code. Take a few minutes to read it and try to understand what's happening. (The highlighted sections show the additions) const { createStore, combineReducers } = Redux;const initialState = { xp: 0, level: 1, position: { x: 0, y: 0, }, stats: { health: 50, maxHealth: 50, }, inventory: { potions: 1, }};//// Actions//const Actions = { GAIN_XP: 'GAIN_XP', LEVEL_UP: 'LEVEL_UP', MOVE: 'MOVE', DRINK_POTION: 'DRINK_POTION', TAKE_DAMAGE: 'TAKE_DAMAGE',};//// Action Creators//const gainXp = (xp) => ({ type: Actions.GAIN_XP, payload: xp});const levelUp = () => ({ type: Actions.LEVEL_UP});const move = (x, y) => ({ type: Actions.MOVE, payload: { x, y }});const drinkPotion = () => ({ type: Actions.DRINK_POTION});const takeDamage = (amount) => ({ type: Actions.TAKE_DAMAGE, payload: amount});//// Reducers//const xpReducer = (state = 0, action) => { switch (action.type) { case Actions.GAIN_XP: return state + action.payload; } return state;};const levelReducer = (state = 1, action) => { switch (action.type) { case Actions.LEVEL_UP: return state + 1; } return state;};const positionReducer = (state = initialState.position, action) => { switch (action.type) { case Actions.MOVE: let { x, y } = action.payload; x += state.x; y += state.y; return { x, y }; } return state;};const statsReducer = (state = initialState.stats, action) => { let { health, maxHealth } = state; switch (action.type) { case Actions.DRINK_POTION: 
health = Math.min(health + 20, maxHealth); return { ...state, health, maxHealth }; case Actions.TAKE_DAMAGE: health = Math.max(0, health - action.payload); return { ...state, health }; } return state;};const inventoryReducer = (state = initialState.inventory, action) => { let { potions } = state; switch (action.type) { case Actions.DRINK_POTION: potions = Math.max(0, potions - 1); return { ...state, potions }; } return state;};//// Bootstrapping//const reducer = combineReducers({ xp: xpReducer, level: levelReducer, position: positionReducer, stats: statsReducer, inventory: inventoryReducer,});const store = createStore(reducer);store.subscribe(() => { console.log(JSON.stringify(store.getState()));});//// Run!//store.dispatch(move(1, 0));store.dispatch(move(0, 1));store.dispatch(takeDamage(13));store.dispatch(drinkPotion());store.dispatch(gainXp(100));store.dispatch(levelUp()); Here is the console output: Ok, so still not a game yet. But we're moving towards the goal. The code already has a few problems (aside from not being a game yet). Stay tuned for the next episode where I discuss the problems and how to solve them. We'll also continue to add features to the 'game'. Think you spot a problem or room for improvement? Leave a comment and let me know.
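One detail the article defers ("for reasons I will gloss over here") is why a reducer must return the state untouched, rather than a copy, when it ignores an action. A minimal sketch (plain JavaScript, no Redux; the names are illustrative) shows the payoff: change detection becomes a cheap reference comparison:

```javascript
// A reducer that returns a brand-new object only when something actually
// changed, and the *same* reference otherwise. Subscribers (or libraries
// like React-Redux) can then detect change with ===, no deep comparison.
const reducer = (state = { level: 1 }, action) => {
  switch (action.type) {
    case 'LEVEL_UP':
      return { ...state, level: state.level + 1 }; // new object: state changed
  }
  return state; // ignored action: hand back the identical reference
};

const s0 = reducer(undefined, { type: 'INIT' });
const s1 = reducer(s0, { type: 'LEVEL_UP' });
const s2 = reducer(s1, { type: 'IGNORED' });

console.log(s1.level);   // 2
console.log(s0 === s1);  // false: something changed
console.log(s1 === s2);  // true: nothing happened, same reference
```

Mutating the state in place would break this: the old and new references would always be equal, so nothing downstream could tell that anything changed.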
https://decembersoft.com/posts/redux-hero-part-1-a-hero-is-born-a-fun-introduction-to-redux-js/
CC-MAIN-2018-43
refinedweb
1,093
52.76
On Thu, Jan 26, 2006 at 09:50:29AM -0200, Sidnei da Silva wrote: > On Wed, Jan 25, 2006 at 04:37:45PM -0200, Sidnei da Silva wrote: > | On Wed, Jan 25, 2006 at 01:30:41PM -0500, Stephan Richter wrote: > | | Please revert this revision. As you admit yourself in CHANGES.txt, this > is a > | | new feature. You cannot add new features to a dot release branch. > | > | Well, I would go as far as calling it a bugfix. Will revert it then we > | can discuss. > > Sorry, I've removed some context from this email, so I guess nobody > understood what it is about. > > I had added a call to pkg_resource.declare_namespace() to Zope 3's > zope/__init__.py, which enables one to have sub packages of 'zope' > distributed as eggs. > > Stephan complained that this is not a 'bugfix' but a 'new feature' and > asked me to revert, which I promptly did. > > I suspect that no new Zope 3.2.x release will come out soon anyway, so > maybe it doesn't make sense to add it there really, but it would be > really nice to have this feature available before the 3.3 release. > > Does anybody care about using eggs with Zope 3.2? Advertising I care about not adding this functionality in a stable release as I am totally unsure of how it will affect the Debian packaging of Zope. Debian packaging and eggs do not see eye to eye sometimes. > > -- > Sidnei da Silva > Enfold Systems, LLC. > > _______________________________________________ > Zope3-dev mailing list > Zope3-dev@zope.org > Unsub: > > -- Brian Sutherland Metropolis - "it's the first movie with a robot. And she's a woman. And she's EVIL!!" _______________________________________________ Zope3-dev mailing list Zope3-dev@zope.org Unsub:
https://www.mail-archive.com/zope3-dev@zope.org/msg03641.html
CC-MAIN-2017-51
refinedweb
288
76.42
Subject: Re: [boost] Towards a Warning free code policy proposal From: Dave Abrahams (dave_at_[hidden]) Date: 2010-08-27 14:26:31 On Fri, Aug 27, 2010 at 8:26 AM, vicente.botet <vicente.botet_at_[hidden]> wrote: > Hi all, > > Last year there where two long threads about Boost Warning policy. If I'm not wrong nothing was concluded. I think that we can have two warning policies: one respect to the Boost users and one internal for Boost developement. > > We can consider that Boost headers can be used by the users as system headers, so no warning is reported for these files. The main advantage I see is that it allows the users to don't mix the warnings comming from Boost and their own warnings. > > Next follows a way to disable completly warnings for 3 compilers: Any Boost header file could be surrounded If we want to do something like that, it should be encapsulated in #include files: #include <boost/detail/header_prefix.hpp> ... #include <boost/detail/header_suffix.hpp> -- Dave Abrahams BoostPro Computing Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2010/08/170289.php
CC-MAIN-2019-51
refinedweb
195
64.1
Issues opening textastic to view source
Am I missing something here? It is not opening the Textastic app.

# coding: utf-8
import appex
import webbrowser
from urlparse import urlsplit

def main():
    if not appex.is_running_extension():
        print 'This script is intended to be run from the sharing extension.'
        return
    url = appex.get_url()
    if not url:
        print 'No input URL found.'
        return
    url = urlsplit(url)
    url = 'textastic://' + url.netloc + url.path + url.query + url.fragment
    print url
    webbrowser.open(url)

if __name__ == '__main__':
    main()

@miwagner1 Could you post the typical output of the share
app extensions cannot launch other apps. Only Today Widgets. It's an Apple thing.
Marcus67 the typical output looks like textastic://forum.omz-software.com/topic/2616/can-pythonista-push-itself-into-the-foreground
Well damn, if an extension can't open an app, I'm going to have to get fancy for viewing source code.
Ok so Workflow can open other apps from the app extension. The user just had to confirm it. So how would I do this from the Pythonista extension? I think Workflow has a today extension? So maybe that's it?
Alternatively, you could post a notification. If you set Pythonista notifications to alerts instead of banners, then you would effectively get prompted to open the app.
scratch that... this does work

import appex
from objc_util import *
app = UIApplication.sharedApplication()
url = nsurl('pythonista://')
app.openURL_(url)
appex.finish()  # optional to close appex in the containing app, otherwise it will be there when you return.

based on this... except that when I walked up the responder chain it turned out to be the sharedApplication()
This is fantastic. There are so many possibilities I can do now with Pythonista and Editorial with webpages. Right in the Safari app.
- Webmaster4o @omz In the thread to which @JonB linked, people reported success with putting this method through Apple review. You should implement this as the default behavior for webbrowser.open when called from appex.
No wonder he was surprised it worked. Apple's docs say that UIApplication is not available in app extensions. It's possible this will break in later iOS versions.
https://forum.omz-software.com/topic/2660/issues-opening-textastic-to-view-source
CC-MAIN-2017-26
refinedweb
351
61.73
jobs with the REST or WebDriver API. There are three status badges that correspond to the three states of a finished test: Passing, Failed, and Unknown. With the browser matrix, you can keep track of the test status for various browser/platform/operating system combinations. Choose a Sauce account to associate with your project. If you just have one project, you can use your main Sauce account name. If you have multiple projects, you will want to create a sub-account for each project. Run your tests for a given project on Sauce using that account's username and access key. If you are logged in as the account you want to use, you can find your credentials on the account page. If you are logged in as a parent account, you can see your sub-account usernames and access keys on the sub-accounts page. Make sure to set a build number a pass/fail status and a visibility (to either 'public', 'share' or 'public restricted') for every test that runs. You will be able to see that these are set correctly by seeing that your tests say "Pass" or "Failed" instead of "Finished" and that a build number is visible in the UI. Note: If your tests don't have a build or pass/fail status, you'll get the "Unknown" image for security reasons. Adding the Standard Badge You can copy/paste the following Markdown into your GitHub README: []() Or you can add the following HTML to your project site: <a href=""> <img src="" alt="Sauce Test Status"/> </a> Adding the Browser Matrix Widget You can copy/paste the following Markdown into your GitHub README: []() Or you can add the following HTML to your project site: <a href=""> <img src="" alt="Sauce Test Status"/> </a> Status Images for Private Accounts To display the build status of a private Sauce account, you need to provide a HMAC token generated from your username and access key. 
Here is how you could generate one in python: Note: if you don't have python on your system, check out this link for HMAC

First start the python interpreter with the following command:

python

Then run the following code in the interpreter to generate a query param to add to badge image URLs:

from hashlib import md5
import hmac
"?auth=" + hmac.new("philg:45753ef0-4aac-44a3-82e7-b3ac81af8bca", None, md5).hexdigest()

Once the auth token has been obtained, it can be appended to one of the above badge URLs as the value of the auth query like this:

?auth=AUTH_TOKEN
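The interpreter session above is Python 2. Under Python 3, hmac.new() requires the key to be bytes and the digest to be passed explicitly, so a rough equivalent (reusing the example credentials from the snippet, which are placeholder values) would be:

```python
import hmac
from hashlib import md5

# Python 3 version of the Python 2 interpreter snippet above: the HMAC key
# is "username:access_key" encoded to bytes, with no message body and an
# MD5 digest, matching hmac.new("user:key", None, md5) from the docs.
def sauce_auth_token(username, access_key):
    key = "{0}:{1}".format(username, access_key).encode("utf-8")
    return hmac.new(key, None, md5).hexdigest()

token = sauce_auth_token("philg", "45753ef0-4aac-44a3-82e7-b3ac81af8bca")
print("?auth=" + token)
```

The result is a 32-character hex string, appended to the badge URL exactly as described below.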
https://wiki.saucelabs.com/plugins/viewsource/viewpagesrc.action?pageId=48366187
CC-MAIN-2021-10
refinedweb
428
62.72
Creating Ruby-Inspired Modules In ColdFusion A couple of months ago, I read Practical Object-Oriented Design in Ruby: An Agile Primer, by Sandi Metz. I haven't reviewed the book yet, but I wanted to explore one of the concepts Metz talked about: Ruby Modules. From what I understood (and this may be somewhat off-base, I'm not a Ruby programmer), a module is a way to "inherit" behavior without using classical inheritance. Essentially, behavior is "included" into a class, rather than "inherited" into a class. In a language that only allows for single-class inheritance, a Module provides a mechanism for including behavior from several different sources. Furthermore, it allows the developer to borrow behaviors without worrying about the "is-a-type-of" inheritance relationship. It seems fairly interesting, so I wanted to see how this kind of behavior could be used in ColdFusion. In the past, I've looked at "mixins" in ColdFusion. Essentially, a mixin is just the "include" of a code block into another container. ColdFusion's CFInclude tag allows you to include both .cfm and .cfc files. However, with a .cfc file, the CFComponent tag is completely ignored (treating the .cfc file as if it were a .cfm file). I think we can use the CFInclude tag to mimic (in part) the Module concept in Ruby. To explore this, I wanted to create a publish-and-subscribe module. This module would expose two public methods, on() and off(), for event binding and unbinding, respectively; it would expose one private method, trigger(), such that the container component could announce (ie. publish) events. To start, I created my PubSub.cfc ColdFusion component - this component will be our "Ruby Module." Since this component is not meant to be instantiated, I am excluding any init() method. Therefore, any instance variables that the module will create need to be defined in the component's pseudo-constructor. 
The module methods can then be defined the same way any ColdFusion component methods could be defined:

PubSub.cfc - Our Ruby-Inspired "Observable" Module

    <cfscript>

    component
        output = false
        hint = "I provide simple publish + subscribe module features."
        {

        // Each event type will have any number of subscriptions associated
        // with it. Each subscription will be characterized as a type and a
        // function callback.
        eventTypes = {};


        // ---
        // PUBLIC METHODS.
        // ---


        // I remove the subscription from the given event type.
        public any function off(
            required string eventType,
            required any eventHandler
            ) {

            // If the event type doesn't exist, nothing else to do.
            if ( ! structKeyExists( eventTypes, eventType ) ) {
                return( this );
            }

            var filteredSubscriptions = [];

            // Copy over all of the subscriptions that do not match the
            // given event handler.
            for ( var subscription in eventTypes[ eventType ] ) {

                if ( isSameHandler( subscription.handler, eventHandler ) ) {
                    continue;
                }

                arrayAppend( filteredSubscriptions, subscription );

            }

            // Persist the filtered subscription collection.
            eventTypes[ eventType ] = filteredSubscriptions;

            // Return the reference to the consumer of the module.
            return( this );

        }


        // I set up the subscription to the given event type.
        public any function on(
            required string eventType,
            required any eventHandler
            ) {

            // Make sure that we have a subscription channel for the given
            // event type. Currently, event types are completely arbitrary.
            if ( ! structKeyExists( eventTypes, eventType ) ) {
                eventTypes[ eventType ] = [];
            }

            // Add the subscription.
            arrayAppend(
                eventTypes[ eventType ],
                {
                    type = eventType,
                    handler = eventHandler,
                    uuid = getEventHandlerID( eventHandler )
                }
            );

            // Return the reference to the consumer of the module.
            return( this );

        }


        // ---
        // PRIVATE METHODS.
        // ---


        // I param a module-based UUID for the event handler meta data
        // such that different handlers can be compared for equality. This
        // data is persisted in the function's meta data.
        private string function getEventHandlerID( required any eventHandler ) {

            var uuidKey = "pubsub-id";
            var metaData = getMetaData( eventHandler );

            if ( ! structKeyExists( metaData, uuidKey ) ) {
                metaData[ uuidKey ] = createUUID();
            }

            return( metaData[ uuidKey ] );

        }


        // I determine if the two event handlers are the same.
        private boolean function isSameHandler(
            required any eventHandlerA,
            required any eventHandlerB
            ) {

            var uuidA = getEventHandlerID( eventHandlerA );
            var uuidB = getEventHandlerID( eventHandlerB );

            return( uuidA == uuidB );

        }


        // I trigger the given event on the given channel.
        private void function trigger( required string eventType ) {

            // If there are no bindings on this channel, there is nothing
            // more that we need to do.
            if ( ! structKeyExists( eventTypes, eventType ) ) {
                return;
            }

            // Invoke each handler with the same argument collection. This
            // way, the event can be triggered with multiple arguments.
            for ( var subscription in eventTypes[ eventType ] ) {
                subscription.handler( argumentCollection = arguments );
            }

        }

    }

    </cfscript>

The details of the code are not so important. Just note that the component (ie. our Module) has defined an instance variable - eventTypes - and several public and private methods.

NOTE: I could have left out the CFComponent tag in our PubSub.cfc module; however, I wanted to keep it there as a way to philosophically define the module as one cohesive and isolated set of features.

Now, let's create a normal ColdFusion component that includes this module file in order to leverage the publish and subscribe behavior.
In this case, we'll create a Friend.cfc ColdFusion component that exposes a setName() method; every time the setName() method is called, a "nameChanged" event is triggered.

Friend.cfc - Our Consumer Of The PubSub.cfc Module

    <cfscript>

    component
        output = false
        accessors = true
        hint = "I model a Friend with evented setters."
        {

        // Define the properties for accessor generation.
        property name="name" type="string";

        // Include the PubSub behaviors, on(), off(), trigger().
        include "PubSub.cfc";


        // I initialize the component with the given default values.
        public any function init( required string initialName ) {

            name = initialName;

            return( this );

        }


        // ---
        // PUBLIC METHODS.
        // ---


        // Set the new name - triggers "nameChanged" event.
        public any function setName( required string newName ) {

            var oldName = name;

            name = newName;

            // Announce the name-change event to all callback handlers
            // that have asked to subscribe.
            trigger( "nameChanged", newName, oldName );

            return( this );

        }

    }

    </cfscript>

Notice that once the PubSub.cfc module is included into the Friend.cfc component, the component can make use of the "inherited" trigger() method as if it were a native method of the component.

Ok, now it's time to test this out. In the following code, I've created two different event handlers for the "nameChanged" event. Both are bound using the "inherited" on() method. Then, one is unbound using the "inherited" off() method. I do this to make sure that both the on() and off() methods are working properly.

    <cfscript>

        // Define one name-change handler.
        function nameChangeHandler( eventType, newName, oldName ) {

            writeOutput( "[1] Name Changed: " & oldName & " to " & newName );
            writeOutput( "<br />" );

        }

        // Define another name-change handler so that we can test to see
        // if the off() method unbinds the correct handler.
        function anotherNameChangeHandler( eventType, newName, oldName ) {

            writeOutput( "[2] Love your new name: " & newName );
            writeOutput( "<br />" );

        }

        // NOTE: In ColdFusion 9, the above event handlers will NOT have
        // access to this page's variables' scope. ColdFusion 9 does not
        // create closures; however, in ColdFusion 10, you CAN create a
        // closure that does keep the proper variables reference.

        // ------------------------------------------------------ //
        // ------------------------------------------------------ //

        // Create our friend instance - this instance includes the PubSub
        // module and should have publicly exposed methods for on() and
        // off() binding and unbinding respectively.
        friend = new Friend( "Sarah" );

        writeOutput( "Initial Name: " & friend.getName() & "<br />" );

        // Bind our two event handlers.
        friend.on( "nameChanged", nameChangeHandler );
        friend.on( "nameChanged", anotherNameChangeHandler );

        // Change the name to see that BOTH event handlers are triggered.
        friend.setName( "Tricia" );

        // Unbind one of the event handlers.
        friend.off( "nameChanged", nameChangeHandler );

        // Change the name again to make sure that only ONE of the event
        // handlers is triggered, ensuring that the other event handler
        // was properly unbound via off().
        friend.setName( "Joanna" );

    </cfscript>

When we run the above code, we get the following page output:

Initial Name: Sarah
[1] Name Changed: Sarah to Tricia
[2] Love your new name: Tricia
[2] Love your new name: Joanna

As you can see, the event handlers were triggered when the setName() method was called. Furthermore, we know that the off() method worked properly since only one of the event handlers was invoked at the second calling of the setName() method.

NOTE: In ColdFusion 9 (and earlier), our event handlers are not bound to the Variables scope of our test code. At the time of invocation (ie. triggering), they are bound to the context of the Friend component instance.
As of ColdFusion 10, however, closures can be used to maintain the intended Variables scope binding.

I believe that Ruby modules are actually a bit more robust than this. I believe that a Ruby module can have both "module methods" and "instance methods." This could probably be mimicked using a private "namespace" variable; but even then, defining module methods would be a bit messy. If you really wanted to copy the Ruby Module behavior, you'd probably have to use something more like a ColdFusion custom tag mixin, which has its own sandboxed Variables scope.

This kind of Ruby module architecture is very interesting. It allows you to create inheritable behavior without having to think about an actual class inheritance chain. Since I'm not a good Object-Oriented developer, I won't try to talk about the pros and cons of such an approach; but, with behavior like Publish and Subscribe, I can easily see the value of mixing-in behavior without having to worry as to whether or not "Friend is a type of Observable," as you would with classical inheritance.

Ben.

I've never thought about using a CFINCLUDE for a CFC. Unfortunately, I'm still on CF9. I have just started to pick up Ruby in the last few months. Being able to include functionality so easily with a Gem or a module makes extending functionality so much easier. I like the idea of applying that rationale to CF.

Hi Ben, actually I am working on a framework which takes a totally event-driven approach using a similar concept, but with a little bit of around advice on an event. For example:

    user.around( "save", "Transaction" )
        .on( "save", "email.userConfirmation" )
        .on( "change:['password']", "notify.admin" )

And I do have more functionality planned. I happen to support a legacy application and was running into a lot of issues making changes; luckily the application is running on CF10, so closures came to the rescue. I don't know Ruby; I get a lot of concepts from JavaScript and implement them in CF just for fun. Luckily I got approval to use it in a new project.
Here is another JS library I was copying; good to see I am not the only one crazy about implementing JS concepts in CF. Love your blog posts.

@Billy, this code is running in CF9 - the comment about CF10 was that it has "closures" which make the function-passing a bit different.

@Brad, I've played with Ruby for about 3 days (in total). So, I have very little understanding about how it all works. I'd be curious to hear about a few (or even just one) Gems that you have worked with; and, how they work well as modules. I'm just interested in how they get used.

@Manithan, that looks really interesting. But, what is the second argument in the methods you are calling? Is that supposed to be a String? Or would that be a reference to an object/function? Like, I understand that you want to implement Transaction around the save method; but, who "does" the Transaction? Is this like AOP - aspect-oriented programming? I think in AOP, they call those wrapper-features "advice"? Anyway, looks really interesting!

@Ben, the second argument could be a callback function or a string (in my framework it is an event - user.save would map to user.cfc and its save function, just like ColdBox or FW/1) or an array of events. around() is the AOP call where I add the transaction; in this case "Transaction" is an interceptor which has an aroundAdvice function that the event call goes through from the event dispatcher. I was basically trying to make simple AOP without creating a proxy object or CFC the way ColdSpring and WireBox do, using closures - which I think is the perfect use case for me. I was planning to implement before and after advice as well, but aroundAdvice fits perfectly, since I can control the event there. I have a lot of things going on in it, like JavaScript-style events. I am not sure how it fits for new applications, since there are a lot of frameworks out there, but for me it fits a legacy application where I don't have to change much code: just configure the events and dispatch them from the right place.
@Manithan, ah, OK, I completely see what you're saying. The strings map to component methods. Seems pretty cool! I don't know much about how frameworks currently implement this kind of stuff - I'm still getting my feet wet with object technologies.
Welcome to the first article in a series that will examine the practicalities of using new XML-based technologies. In these columns, I'll take a look at an XML technology, and at attempts to deploy it in a practical system. In addition to reporting on the deployment experience, I expect to have some fun along the way too. I won't expect too much prior knowledge from the reader, but a grounding in basic Web standards such as XML and HTTP will help. For starters, the next few columns will look at BEEP, one of the many acronyms lurking in the alphabet soup of Web services. The purpose of this article is to introduce the BEEP protocol framework and to suggest where it may be appropriately used. BEEP stands for Blocks Extensible Exchange Protocol, an expansion which makes almost as little sense as just saying BEEP, and frankly is far less entertaining. Nevertheless, XML users will likely find themselves drawn to the word extensible, and indeed it's extensibility that makes BEEP worth looking at in the first place. More of that later; first let's look at the problem that BEEP solves. You're writing a networked application, and you want instances of your programs to be able to communicate via TCP/IP. Before you can even get around to the logic of your application itself, you need to figure out how your programs are going to connect, authenticate themselves, send messages, receive messages, and report errors. The cumulative time you'll spend on this may well outweigh the effort needed for the application logic itself. In a nutshell, this is the problem BEEP solves. It implements all the "hygiene factors" of creating a new network protocol so you don't have to worry about them. At this point you may well be wondering, and not without justification, why we need another type of distributed computing protocol to add to CORBA/IIOP, SOAP, XML-RPC, and friends. In answer, you need to recognize that BEEP sits at a different level. It's a framework. 
SOAP can happily be implemented on top of BEEP. BEEP takes care of the connections, the authentication, and the packaging up at the TCP/IP level of the messages -- matters that SOAP deliberately avoids. BEEP really competes on the same level as HTTP. Designers of recent application protocols have looked upon HTTP, and seen that it was good. Well, good enough. So WebDAV, the protocol underlying the "net folders" feature in Windows, added a few verbs to HTTP in order to allow distributed authoring. The Internet Printing Protocol invented some HTTP headers in order to use HTTP/1.1 to do its work, and the binding of SOAP to HTTP has done a similar thing (see Resources for background on these three uses of the HTTP protocol). In principle, the right thing has been done. A ubiquitous and widely implemented protocol, HTTP, has been reused in an efficient way. There are some unfortunate consequences: the first of these is the resultant overloading of port 80. Since not just Web page requests, but potentially security-critical business requests are passing through port 80 now, increased vigilance is required. The many interactions with Web caches and other devices which affect port 80 must be taken into account. These issues have been rehearsed extensively elsewhere (see Resources), so I won't go into detail here. The second consequence of reusing HTTP is that you're tied to using its model of interaction. HTTP is a stateless request-response-oriented protocol. There can be no requests without a response, and there can be no response without a request. Additionally, no state is preserved between requests. Unfortunately, this isn't good enough for many interaction schemes, as it precludes things like asynchrony, stateful interaction, and peer-to-peer communication. These problems can and have been circumvented by layering on top of HTTP, but most of these solutions feel awkward at best. 
It is at this point that the seasoned programmer would tell you it's time to refactor, that is, to place the responsibilities of a system at their correct level and abstract out common functionality. This is the best way to look at BEEP: it is essentially a refactoring of an overloaded HTTP to support the common requirements of today's Internet application protocols.

Enough scene setting; it's time to look at what BEEP can and cannot do, so you can get an idea of why you might want to use it. In a presentation given by Marshall Rose (see Resources), the author of the BEEP specification, BEEP's "target market" of applications is described in the following terms:

- Connection-oriented: Applications passing data using BEEP are expected to connect, do their business, and then disconnect. This gives rise to the characteristics of communication being ordered, reliable, and congestion sensitive. (Drawing a parallel at the IP level, this shares many of the same characteristics as using TCP rather than UDP.)
- Message-oriented: Applications passing data using BEEP are expected to communicate using defined bundles of structured data. This means that the communicating applications are loosely coupled and don't require extensive knowledge of each others' interfaces.
- Asynchronous: Unlike HTTP, BEEP is not restricted to a particular ordering of requests and responses. Asynchronicity allows for peer-to-peer style communication, but it doesn't rule out conventional client/server communication either.

While these characteristics encompass a large number of potential applications (for instance, they would happily permit the re-implementation of HTTP, FTP, SMTP, and various instant-messaging protocols), a number of applications fall outside of BEEP's scope. These include one-shot exchanges such as DNS lookup, where the cost introduced by BEEP would be disproportionate, or tightly coupled RPC protocols like NFS. Given that an application falls into the target market, what can BEEP offer?
Its main areas of functionality are:

- Separating one message from the next (framing)
- Encoding of messages
- Allowing multiple asynchronous requests
- Reporting errors
- Negotiating encryption
- Negotiating authentication

The fact you don't have to worry about these things leaves you free to add the other ingredients to your networked application. You can start thinking about message types and structures, for instance.

BEEP is a peer-to-peer protocol, which means that it has no notion of client or server, unlike HTTP. However, as with arguments and romance, somebody has to make the first move. So for convenience I'll refer to the peer that starts a connection as the initiator, and the peer accepting the connection as the listener. When a connection is established between the two, a BEEP session is created.

Figure 1. BEEP sessions, channels, and profiles

All communication in a session happens within one or more channels, as illustrated in Figure 1. The peers require only one IP connection, which is then multiplexed to create channels. The nature of communication possible within a channel is determined by the profiles it supports (each channel may have one or more). The first channel, channel 0, has a special purpose: it supports the BEEP management profile, which is used to negotiate the setup of further channels. The supported profiles determine the precise interaction between the peers in a particular channel. Defining a protocol using BEEP comes down to the definition of profiles. After the establishment of a session, the initiator asks to start a channel for the particular profile or set of profiles it wishes to use. If the listener supports the profile(s), the channel will be created. Profiles themselves take one of two forms: those for initial tuning, and those for data exchange. Tuning profiles, set up at the start of communication, affect the rest of the session in some way.
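To make the channel-management profile concrete, here is roughly what the XML exchanged on channel 0 looks like when the initiator asks for a new channel; the profile URI is an arbitrary example, and RFC 3080 defines the exact element syntax:

```xml
<!-- Initiator: request to open channel 1 with a given profile. -->
<start number="1">
    <profile uri="http://example.com/profiles/echo" />
</start>

<!-- Listener: positive reply, confirming the profile in use. -->
<profile uri="http://example.com/profiles/echo" />
```

If the listener does not support the requested profile, it instead returns an error element, and no channel is created.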
For instance, requesting the TLS profile ensures that channels are encrypted using Transport Layer Security. Other tuning profiles perform steps such as authentication. Data-exchange profiles set expectations between the two peers as to what sort of exchanges will be allowed in a channel, similar to the way Java interfaces set expectations between interacting objects as to what methods are available. As with XML namespaces, a profile is identified by a URI; for instance, the example "Echo" profile from the BEEP Java tools has its own URI.

BEEP puts no limits on the kind of data a channel can carry. BEEP uses the MIME standard to support payloads of arbitrary type. This approach neatly sidesteps the sorts of issues raised by SOAP of how to send an XML document or a binary file inside a SOAP message.

At the beginning of this article I promised you that BEEP made use of XML, and by this point you'd be forgiven for wondering where. In fact, the BEEP management profile, responsible for channel initiation, is defined as an XML DTD (see Resources for a pointer to the management profile definition). This is why XML and BEEP fit so well together: as BEEP takes care of protocol infrastructure, XML takes care of data structuring. Hence XML is a natural choice in which to define the syntax of messages in BEEP profiles (although, as noted above, profiles can use any MIME type). Aside from the channel-management profile, many emerging BEEP application profiles have used XML as an encoding for their messages. This is a boon, as it means that any existing messaging standard defined in terms of XML documents has a reasonably straightforward mapping into a BEEP profile.

In this article I've explained the rationale for using BEEP and outlined its target application areas. I've given a very high-level overview of how BEEP interactions take place. The next column will go into more detail on how communication is achieved through channels and profiles, with an example implementation in Java.
- beepcore.org is the home of the BEEP specifications and tools on the Web.
- The BEEP core, as discussed in this article, is defined in RFC 3080.
- BEEP defines its management profile, responsible for channel initiation, as an XML DTD.
- Using SOAP in BEEP specifies how SOAP 1.1 envelopes can be transmitted using a BEEP profile.
- The xml-dist-app mailing list contains many lengthy debates on the merits or otherwise of reusing HTTP.
- Keith Moore's Internet Draft "On the use of HTTP as a Substrate for Other Protocols" makes recommendations as to the appropriate reuse of HTTP.
- The developerWorks Web services zone has plenty of information on SOAP and associated technologies.
- WebDAV.org has more information about WebDAV.
- Internet Printing Protocol gives complete background on the Internet Printing Protocol (IPP).
- You can contact the author, Edd Dumbill, at edd@xml.com.
03 February 2010 08:28 [Source: ICIS news]

TOKYO (ICIS news)--Tosoh Corp has posted a nine-month operating profit of Y5.16bn in its petrochemical segment for the period ending 31 December 2009, up from Y832m in the year-ago period on increased cumene shipments, it said on Wednesday.

Shipments of ethylene and propylene decreased, while shipments of cumene increased due to the capacity expansion of its manufacturing plant in the previous year, the Japanese chemicals producer said. The plant also did not have to shut down for maintenance in 2009, which allowed it to produce at full capacity during the period, Tosoh added.

Net sales in the segment decreased 32% to Y121.4bn from Y178.7bn, it said.

During this period, the company incurred a net loss of Y4.25bn ($47m), an improvement on the net loss of Y13bn that it suffered in the nine months ended December 2008. Operating profits for the period stood at Y3.8bn against a loss of Y7.36bn in the year-ago period on decreased fixed expenses, it added.

Nine-month net sales decreased 24% to Y453.5bn from Y594.6bn year on year due to lower product prices.
Large Image Scrolling Using Low Level Touch Events What you learn: You will learn how to scroll (pan) images larger than the display surface of your device using touch gestures. These touch gestures will be processed by low level touch event handlers, resulting in a light-weight smooth scrolling implementation. Tested with Android 2.0 and 2.0.1 platforms on Droid handset (firmware: Android 2.0.1). Difficulty: 1.5 of 5 What it looks like: Description: Imagine a rectangle as a window through which we can see a portion of an image currently loaded into memory. This window is the same size as our display. We use touch events to move the window over the surface of the image. What we need to do is: - load a large image into memory - set up a scroll rectangle the same size as our display - use touch events to move our scroll rectangle over our image - draw the portion of the image currently within the scroll rectangle to the display This tutorial describes how to work with images that fit into memory. For larger images, you will need a solution that loads portions of an image, either via streaming/caching (from an external source such as a web server) or compression/decompression (local file store). Many examples exist on the web. If you are working with map data, consider MapView. What about GestureDetector and Gesture Builder? GestureDetector is a good choice for handling different kinds of gestures like fling, long-press or double-tap. You can certainly use it in place of the material presented here. But there may be times (for whatever reason) that you can't use GestureDetector, or you may not need all the functionality it offers. Gesture Builder is for handling of more complex gestures and managing groups of gestures. Implementation: The full source is available at the end. One caveat before we get started: for simplicity I haven't handled activity lifecycle or bitmap recycling. 
You will run out of memory fairly quickly if you force a lifecycle refresh by, for example, opening and closing the keyboard.

0.) In Eclipse, create a new Android Project, targeting Android 2.0 (older versions may work too, but the folders may be slightly different from those shown here). For consistency with this tutorial you may wish to name your main activity LargeImageScroller, and make your package name com.example.largeimagescroller.

1.) Obtain an image resource: The image resource I used is 1440x1080 - tested with the Droid handset - but you can use a smaller one if you want; it just has to be larger than the display size so you can scroll it. Be careful if you go too much larger, as you may run out of memory. If you are using a different device your memory requirements may vary, and you may need to adjust the size of the image accordingly. (For testing purposes I tested this with a huge image - 3200x2300 - one I was sure would take up a lot of memory, just to make sure the scrolling was smooth, but this isn't something you'd normally want to do.) Add the image resource (I've named mine testlargeimg.png) to your /drawable-hdpi folder (may also be named /drawable depending on which Android version you are using).

2.) For convenience, we will run our simple application fullscreen and in landscape mode. To do this, modify the project's manifest: edit AndroidManifest.xml and set a fullscreen theme on the application tag. While you are here, you can also mark the application debuggable if you wish to debug on an actual device (this also goes inside the application tag). Finally, in the activity tag, set the screen orientation to landscape.

3.) Now we need an activity and a custom view: We will create a custom view and add it to our activity so that we can handle our own drawing and touch events.
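The three manifest snippets for step 2 were not preserved in this copy of the tutorial. The following is a sketch using the standard Android manifest attributes for fullscreen theme, debuggability, and landscape orientation (the icon/label values are placeholders):

```xml
<application android:icon="@drawable/icon"
             android:label="@string/app_name"
             android:theme="@android:style/Theme.NoTitleBar.Fullscreen"
             android:debuggable="true">
    <activity android:name=".LargeImageScroller"
              android:screenOrientation="landscape">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>
</application>
```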
The standard way of doing this is:

    public class LargeImageScroller extends Activity {

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(new SampleView(this));
        }

        private static class SampleView extends View {

            public SampleView(Context context) {
                super(context);
            }

            @Override
            public boolean onTouchEvent(MotionEvent event) {
            }

            @Override
            protected void onDraw(Canvas canvas) {
            }
        }
    }

There are numerous examples of this type of custom view creation in the Android SDK if you'd like more information. In onCreate() the only additional code we need is for getting our display width and height, which we can do with the getDefaultDisplay() API.

You might think that getDefaultDisplay() will always return the same values for a given hardware device. Actually, the values will change depending on, for example, screen orientation. On my Droid in landscape mode I see a width of 854 and a height of 480, but in portrait mode these values are reversed. If you have an application that needs to know when the display settings change, you can hook the onSizeChanged() API (see the Android docs for more). For our application, we are always in landscape mode so these values won't change after we retrieve them.

That's it for the activity. Everything else happens in our custom view.
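The original snippet for retrieving the display size was not preserved in this copy; a sketch of what it likely contained, using the Display APIs that were current on Android 2.0 (getWidth()/getHeight() were deprecated in later releases):

```java
// Inside onCreate(): ask the window manager for the default display
// and cache its dimensions for later use by the view.
Display display = getWindowManager().getDefaultDisplay();
displayWidth = display.getWidth();
displayHeight = display.getHeight();
```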
The bitmap loader code is one of many standard ways to load Android bitmap resources. Touch event handler: Our touch event handler is where we process our touch gestures. A gesture is broken into a series of actions, the most common of which are down, move and up, though there are others (see the Android docs for MotionEvent for a complete reference). Information about an event is contained in MotionEvent. For our application we only care about down and move, as you will see. Using java Syntax Highlighting - @Override - public boolean onTouchEvent(MotionEvent event) { - switch (event.getAction()) { - case MotionEvent.ACTION_DOWN: - startX = event.getRawX(); - startY = event.getRawY(); - break; - case MotionEvent.ACTION_MOVE: - float x = event.getRawX(); - float y = event.getRawY(); - scrollByX = x - startX; - scrollByY = y - startY; - startX = x; - startY = y; - invalidate(); - break; - } - return true; - } - } When you first touch the display a single ACTION_DOWN event is generated. Thereafter as you move your finger you will generate a chain of ACTION_MOVE events. The number of ACTION_MOVE events generated over a given time period is controlled by the Android OS. When either an ACTION_DOWN or an ACTION_MOVE event occurs, you can retrieve the coordinates of the event's location, using getRawX() and getRawY(). This gives us a way to determine how far our finger has traveled. We store the coordinates of the ACTION_DOWN event in startX and startY. Side note: getRawX() and getRawY() always return absolute screen coordinates. Another way to retrieve coordinates is with getX() and getY() but beware: depending on the event you call getX() and getY() with, they may return either absolute (relative to device screen) or relative (relative to view) coordinates. For more see the Android docs. We use getRawX() and getRawY() for this application. We build small moves from consecutive ACTION_MOVE events and then apply these small moves to our scroll rectangle. 
This will occur several-to-many times a second and so will give the appearance of smooth scrolling. scrollByX and scrollByY keep incremental totals of the amount we need to scroll by the next time our view is redrawn. startX and startY are updated also, so that we can keep tracking these movements as increments. Given that the ACTION_MOVE event may get generated many times, it is best to keep the code that executes from within the event handler to a minimum. This is true for any event handler. Invalidate() indicates to the Android OS that we'd like our view to be redrawn. Our redraw happens in onDraw() (discussed below), where we update the coordinates of the scroll rectangle and repaint the enclosed bitmap portion. The return true at the end indicates that we've processed the touch event to our liking and have no more use for it, so we tell the Android OS not to process it further. Draw updater: Our draw handler, onDraw(), is responsible for calculating the updated scroll rectangle coordinates and drawing the area of the bitmap within this newly updated rectangle. When you slide your finger to the left, you can think of this as 'pulling' the bitmap towards the left, under the scroll rectangle – this is exactly the same as sliding the scroll rectangle to the right. So in our ACTION_MOVE event handler, if we calculate a move update that indicates that we are moving to the left, we need to update the scroll rectangle to move to the right. This means we need to add the negative sense of the move update to our current scroll rectangle coordinates: Using java Syntax Highlighting We need to do one more thing before we are ready to draw. 
We must check our coordinates to make sure no part of our scroll rectangle will be off the bitmap.

The checks against 0 are straightforward: since our left (or top) coordinate is 0 for both the scroll rectangle and the bitmap, this is simply a check to make sure our scroll rectangle x (or y) coordinate is not to the left (or top) of the left (or top) edge of our bitmap.

To understand the check against the right edge, it is helpful to work through an example. Since we perform our scroll rectangle x coordinate check using the left x coordinate, then in order to perform a check that uses the right edge of the scroll rectangle, we have to take the scroll rectangle's width into account. This will allow us to check the right edge of the scroll rectangle against the right edge of the bitmap. Suppose we have a bitmap width of 8 and a scroll rectangle width of 3; then the largest allowed left x coordinate for our scroll rectangle is 5 (= 8 - 3). y behaves similarly. In our case the scroll rectangle width is also our display width, so the x limit works out to (bitmap width - display width). The same applies to the height variables.

The hard part is done. Set the newly calculated coordinates into our scroll rectangle, and draw it:

    scrollRect.set(newScrollRectX, newScrollRectY,
        newScrollRectX + displayWidth, newScrollRectY + displayHeight);
    Paint paint = new Paint();
    canvas.drawBitmap(bmLargeImage, scrollRect, displayRect, paint);

The last detail in our draw handler is to update the original scroll coordinates with the new ones, so we can start over with the same process as we continue to update our drawing in response to user move gestures:

    scrollRectX = newScrollRectX;
    scrollRectY = newScrollRectY;

5.) To build and run, you will need to add the variable declarations; see the full source at the end. When you run the example you should be able to smoothly scroll around your image.

One final note: We never create a new bitmap once the original is loaded into memory. Continuously creating bitmaps on the fly will kill your performance.
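The range checks described in this step amount to clamping the scroll rectangle's top-left corner between 0 and (bitmap size minus display size) on each axis. Here is a small, language-neutral sketch of that logic in Python; the function and variable names are mine, not from the tutorial:

```python
def clamp_scroll(new_x, new_y, bitmap_w, bitmap_h, display_w, display_h):
    """Keep the display-sized scroll rect fully inside the bitmap."""
    max_x = bitmap_w - display_w  # rightmost allowed left edge
    max_y = bitmap_h - display_h  # bottommost allowed top edge
    x = min(max(new_x, 0), max_x)
    y = min(max(new_y, 0), max_y)
    return x, y

# Bitmap width 8, scroll rect width 3 -> left x can be at most 5 (= 8 - 3).
print(clamp_scroll(9, -2, 8, 8, 3, 3))  # (5, 0)
```

The same clamping applies unchanged whether the rectangle overshoots because of a fast drag or a single large move update.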
In particular, avoid creating bitmaps in response to ACTION_MOVE events. We simply redraw the correct portion of the already loaded bitmap, which is defined by the scroll rectangle's coordinates.

Takeaways:

- The implementation described here is for simple scrolling needs, and for use with images that will fit into memory.
- ACTION_DOWN and ACTION_MOVE events can be used to calculate scroll move updates; you will get several-to-many ACTION_MOVE events for one move gesture.
- To avoid poor performance, try to keep the code that executes from within the event handlers to a minimum.
- To avoid poor performance, don't create bitmaps on the fly (in this tutorial we only create one bitmap on startup).
- If you need to handle different kinds of gestures (fling, long-press, double-tap, etc.) consider an alternative such as GestureDetector.

You'll need a single large image as described in step 1; recommended size is 1440x1080, though if your device is other than a Droid, your mileage may vary. You will also need to edit your AndroidManifest.xml as described in step 2.

"/src/your_package_structure/LargeImageScroller.java"

    package com.example.largeimagescroller;

    import android.app.Activity;
    import android.os.Bundle;
    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.Rect;
    import android.view.Display;
    import android.view.MotionEvent;
    import android.view.View;
    import android.view.WindowManager;

    public class LargeImageScroller extends Activity {

        // Physical display width and height.
        private static int displayWidth = 0;
        private static int displayHeight = 0;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            // displayWidth and displayHeight will change depending on screen
            // orientation. To get these dynamically, we should hook onSizeChanged().
            // This simple example uses only landscape mode, so it's ok to get them
            // once on startup and use those values throughout.
            Display display = ((WindowManager)
                getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
            displayWidth = display.getWidth();
            displayHeight = display.getHeight();

            // SampleView constructor must be constructed last as it needs the
            // displayWidth and displayHeight we just got.
            setContentView(new SampleView(this));
        }

        private static class SampleView extends View {
            private static Bitmap bmLargeImage; //bitmap large enough to be scrolled
            private static Rect displayRect = null; //rect we display to
            private Rect scrollRect = null; //rect we scroll over our bitmap with
            private int scrollRectX = 0; //current left location of scroll rect
            private int scrollRectY = 0; //current top location of scroll rect
            private float scrollByX = 0; //x amount to scroll by
            private float scrollByY = 0; //y amount to scroll by
            private float startX = 0; //track x from one ACTION_MOVE to the next
            private float startY = 0; //track y from one ACTION_MOVE to the next

            public SampleView(Context context) {
                super(context);

                // Destination rect for our main canvas draw. It never changes.
                displayRect = new Rect(0, 0, displayWidth, displayHeight);
                // Scroll rect: this will be used to 'scroll around' over the
                // bitmap in memory. Initialize as above.
                scrollRect = new Rect(0, 0, displayWidth, displayHeight);

                // Load a large bitmap into an offscreen area of memory.
                bmLargeImage = BitmapFactory.decodeResource(getResources(),
                    R.drawable.testlargeimg);
            }

            @Override
            public boolean onTouchEvent(MotionEvent event) {
                switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN:
                    // Remember our initial down event location.
                    startX = event.getRawX();
                    startY = event.getRawY();
                    break;
                case MotionEvent.ACTION_MOVE:
                    float x = event.getRawX();
                    float y = event.getRawY();
                    // Calculate move update. This will happen many times
                    // during the course of a single movement gesture.
                    scrollByX = x - startX; //move update x increment
                    scrollByY = y - startY; //move update y increment
                    startX = x; //reset initial values to latest
                    startY = y;
                    invalidate(); //force a redraw
                    break;
                }
                return true; //done with this event so consume it
            }

            @Override
            protected void onDraw(Canvas canvas) {
                // Our move updates are calculated in ACTION_MOVE in the opposite direction
                // from how we want to move the scroll rect. Think of this as dragging to
                // the left being the same as sliding the scroll rect to the right.
                int newScrollRectX = scrollRectX - (int)scrollByX;
                int newScrollRectY = scrollRectY - (int)scrollByY;

                //);

                // We have our updated scroll rect coordinates, set them and draw.
                scrollRect.set(newScrollRectX, newScrollRectY,
                    newScrollRectX + displayWidth, newScrollRectY + displayHeight);
                Paint paint = new Paint();
                canvas.drawBitmap(bmLargeImage, scrollRect, displayRect, paint);

                // Reset current scroll coordinates to reflect the latest updates,
                // so we can repeat this update process.
                scrollRectX = newScrollRectX;
                scrollRectY = newScrollRectY;
            }
        }
    }

Hope this helps! XCaf
I'm trying to extract integers from a string and use them to scan through a YAML file like so:

FORMS = YAML.load_file('../email/lib/lists/form_links.yml')

def get_form(form)
  form_num = form.scan(/\d+/)
  data = FORMS['esd_forms'][form_num]
  begin
    if data != nil
      "Form link: #{data}"
    else
      raise StandardError
    end
  rescue StandardError
    "** Form: #{form} is not a valid form name **"
  end
end

The YAML file looks like this:

esd_forms:
  1: ''
  2: ''
  3: ''
  4: ''
  5: ''
  6: ''
  7: ''
  8: ''
  9: ''
  10: ''
  11: ''
  03: ''
  07: ''
  10: ''
  13: ''
  14: ''

The error I get is:

wrong argument type Array (expected Regexp)

An irb session shows what scan returns:

irb(main):001:0> form = 'esd-2'
=> "esd-2"
irb(main):002:0> form_num = form.scan(/\d+/)
=> ["2"]
irb(main):003:0> puts form_num
2

String#scan returns all the occurrences in the String that match the regular expression, in an array. You can see in your irb session that when you execute form_num = form.scan(/\d+/), it actually returns an array with one element, ["2"]. If you want just the first matched segment, you can use String#[]:

form = 'esd-2'
form_num = form[/\d+/] #=> "2"

Besides, if you need to examine what is stored in a variable, p will be a better choice than puts; irb actually uses p-style output for the expression result by default, as you see in your irb session:

form = 'esd-2'
form_num = form.scan(/\d+/)
puts form_num #=> 2
p form_num    #=> ["2"]
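As a cross-language aside (not part of the original answer), the same all-matches-versus-first-match distinction exists elsewhere, and it also highlights a second pitfall lurking in the YAML above: its keys are integers, so even the matched string needs converting before it can index the mapping. A small Python parallel, with an illustrative dict standing in for the YAML:

```python
import re

# Illustrative stand-in for the integer-keyed YAML mapping.
forms = {1: 'link-one', 2: 'link-two'}

form = 'esd-2'
all_nums = re.findall(r'\d+', form)      # ['2'] -- a list, like String#scan
first = re.search(r'\d+', form).group()  # '2'  -- first match, like form[/\d+/]

# The list is not a usable key, and the string '2' misses the integer key 2;
# converting the first match to int locates the entry.
print(forms.get(int(first)))  # link-two
```

In Ruby the equivalent step would be calling .to_i on the first match before the hash lookup.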
Data Points - A New Option for Creating OData: Web API

By Julie Lerman | June 2013 | Get the Code: C# VB

Microsoft .NET developers have been able to create OData feeds since before there was even an OData spec. By using WCF Data Services, you could expose an Entity Data Model (EDM) over the Web using Representational State Transfer (REST). In other words, you could consume these services through HTTP calls: GET, PUT, DELETE and so on. As the framework for creating these services evolved (and was renamed a few times along the way), the output evolved as well and eventually became encapsulated in the OData specification (odata.org). Now there's a great variety of client APIs for consuming OData—from .NET, from PHP, from JavaScript and from many other clients as well. But until recently, the only easy way to create a service was with WCF Data Services.

WCF Data Services is a .NET technology that simply wraps your EDM (.edmx, or a model defined via Code First) and then exposes that model for querying and updating through HTTP. Because the calls are URIs (such as), you can even query from a Web browser or a tool such as Fiddler. To create a WCF Data Service, Visual Studio provides an item template for building a data service using a set of APIs.

Now there's another way to create OData feeds—using an ASP.NET Web API. In this article I want to provide a high-level look at some of the differences between these two approaches and why you'd choose one over the other. I'll also look at some of the ways creating an OData API differs from creating a Web API.

API vs. Data Service at a High Level

A WCF Data Service is a System.Data.Services.DataService that wraps around an ObjectContext or DbContext you've already defined. When you declare the service class, it's a generic DataService of your context (that is, DataService<MyDbContext>).
Because it starts out completely locked down, you set access permissions in the constructor to the DbSets from your context that you want the service to expose. That’s all you need to do. The underlying DataService API takes care of the rest: interacting directly with your context, and querying and updating the database in response to your client application’s HTTP requests to the service. It’s also possible to add some customizations to the service, overriding some of its query or update logic. But for the most part, the point is to let the DataService take care of most of the interaction with the context. With a Web API, on the other hand, you define the context interaction in response to the HTTP requests (PUT, GET and the like). The API exposes methods and you define the logic of the methods. You don’t necessarily have to be interacting with Entity Framework or even a database. You could have in-memory objects that the client is requesting or sending. The access points won’t just be magically created like they are with the WCF Data Service; instead, you control what’s happening in response to those calls. This is the defining factor for choosing a service over an API to expose your OData. If most of what you want to expose is simple Create, Read, Update, Delete (CRUD) without the need for a lot of customization, then a data service will be your best choice. If you’ll need to customize a good deal of the behavior, a Web API makes more sense. I like the way Microsoft Integration MVP Matt Milner put it at a recent gathering: “WCF Data Services is for when you’re starting with the data and model and just want to expose it. 
Web API is better when you’re starting with the API and want to define what it should expose.” Setting the Stage with a Standard Web API For those with limited experience with Web API, prior to looking at the new OData support I find it helpful to get a feel for the Web API basics and then see how they relate to creating a Web API that exposes OData. I’ll do that here—first creating a simple Web API that uses Entity Framework as its data layer, and then converting it to provide its results as OData. One use for a Web API is as an alternative to a standard controller in a Model-View-Controller (MVC) application, and you can create it as part of an ASP.NET MVC 4 project. If you don’t want the front end, you can start with an empty ASP.NET Web application and add Web API controllers. However, for the sake of newbies, I’ll start with an ASP.NET MVC 4 template because that provides scaffolding that will spit out some starter code. Once you understand how all the pieces go together, starting with an empty project is the right way to go. So I’ll create a new ASP.NET MVC 4 application and then, when prompted, choose the Empty template (not the Web API template, which is designed for a more robust app that uses views and is overkill for my purposes). This results in a project structured for an MVC app with empty folders for Models, Views and Controllers. Figure 1 compares the results of the Empty template to the Web API template. You can see that an Empty template results in a much simpler structure and all I need to do is delete a few folders. Figure 1 ASP.NET MVC 4 Projects from Empty Template and Web API Template I also won’t need the Models folder because I’m using an existing set of domain classes and a DbContext in separate projects to provide the model. I’ll then use the Visual Studio tooling to create my first controller, which will be a Web API controller to interact with my DbContext and domain classes that I’ve referenced from my MVC project. 
My model contains classes for Airline, Passengers, Flights and some additional types for airline-related data. Because I used the Empty template, I'll need to add some references in order to call into the DbContext—one to System.Data.Entity.dll and one to EntityFramework.dll. You can add both of these references by installing the EntityFramework NuGet package.

You can create a new Web API controller in the same manner as creating a standard MVC controller: right-click the Controllers folder in the solution and choose Add, then Controller. As you can see in Figure 2, you now have a template for creating an API controller with EF read and write actions. There's also an Empty API controller. Let's start with the EF read/write actions for a point of comparison to the controller we want for OData that will also use Entity Framework.

Figure 2 A Template for Creating an API Controller with Pre-Populated Actions

If you've created MVC controllers before, you'll see that the resulting class is similar, but instead of a set of view-related action methods, such as Index, Add and Edit, this controller has a set of HTTP actions. For example, there are two Get methods, as shown in Figure 3. The first, GetAirlines, has a signature that takes no parameters and uses an instance of the AirlineContext (which the template scaffolding has named db) to return a set of Airline instances in an Enumerable. The other, GetAirline, takes an integer and uses that to find and return a particular airline.

public class AirlineController : ApiController
{
    private AirlineContext db = new AirlineContext2();

    // GET api/Airline
    public IEnumerable<Airline> GetAirlines()
    {
        return db.Airlines.AsEnumerable();
    }

    // GET api/Airline/5
    public Airline GetAirline(int id)
    {
        Airline airline = db.Airlines.Find(id);
        if (airline == null)
        {
            throw new HttpResponseException(
                Request.CreateResponse(HttpStatusCode.NotFound));
        }
        return airline;
    }

The template adds comments to demonstrate how you'd use these methods.
After providing some configurations to my Web API, I can check it out directly in a browser using the example syntax on the port my app has assigned. This is the default HTTP GET call and is therefore routed by the application to execute the GetAirlines method. Web API uses content negotiation to determine how the result set should be formatted. I'm using Google Chrome as my default browser, which triggered the results to be formatted as XML. The request from the client controls the format of the results. Internet Explorer, for example, sends no specific header information with respect to what format it accepts, so Web API will default to returning JSON. My XML results are shown in Figure 4.

<ArrayOfAirline xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <Airline>
    <Id>1</Id>
    <Legs/>
    <ModifiedDate>2013-02-22T00:00:00</ModifiedDate>
    <Name>Vermont Balloon Transporters</Name>
  </Airline>
  <Airline>
    <Id>2</Id>
    <Legs/>
    <ModifiedDate>2013-02-22T00:00:00</ModifiedDate>
    <Name>Olympic Airways</Name>
  </Airline>
  <Airline>
    <Id>3</Id>
    <Legs/>
    <ModifiedDate>2013-02-22T00:00:00</ModifiedDate>
    <Name>Salt Lake Flyer</Name>
  </Airline>
</ArrayOfAirline>

If, following the guidance of the GetAirline method, I were to add an integer parameter to the request, then only the single airline whose key (Id) is equal to 3 would be returned. If I were to use Internet Explorer, or a tool such as Fiddler where I could explicitly control the request to the API to ensure I get JSON, the result of the request for Airline with the Id 3 would be returned as JSON. These responses contain simple representations of the airline type with elements for each property: Id, Legs, ModifiedDate and Name.

The controller also contains a PutAirline method that Web API will call in response to a PUT HTTP request. PutAirline contains code for using the AirlineContext to update an airline. There's also a PostAirline method for inserts and a DeleteAirline method for deleting.
These can’t be demonstrated in a browser URL but you can find plenty of getting-started content for Web API on MSDN, Pluralsight and elsewhere, so I’ll move on to converting this to output its result according to the OData spec. Turning Your Web API into an OData Provider Now that you have a basic understanding of how Web API can be used to expose data using the Entity Framework, let’s look at the special use of Web API to create an OData provider from your data model. You can force your Web API to return data formatted as OData by turning your controller into an OData controller—using a class that’s available in the ASP.NET and Web Tools 2012.2 package—and then overriding its OData-specific methods. With this new type of controller, you won’t even need the methods that were created by the template. In fact, a more efficient path for creating an OData controller is to choose the Empty Web API scaffolding template rather than the one that created the CRUD operations. There are four steps I’ll need to perform for this transition: - Make the controller a type of ODataController and implement its HTTP methods. I’ll use a shortcut for this. - Define the available EntitySets in the project’s WebAPIConfig file. - Configure the routing in WebAPIConfig. - Pluralize the name of the controller class to align with OData conventions. Creating an ODataController Rather than inherit from ODataController directly, I’ll use EntitySetController, which derives from ODataController and provides higher-level support by way of a number of virtual CRUD methods. I used NuGet to install the Microsoft ASP.NET Web API OData package for the proper assemblies that contain both of these controller classes. Here’s the beginning of my class, now inheriting from EntitySetController and specifying that the controller is for the Airline type: I’ve fleshed out the override for the Get method, which will return db.Airlines. Notice that I’m not calling ToList or AsEnumerable on the Airlines DbSet. 
The Get method needs to return an IQueryable of Airline and that’s what db.Airlines does. This way, the consumer of the OData can define queries over this set, which will then get executed on the database, rather than pulling all of the Airlines into memory and then querying over them. The HTTP methods you can override and add logic to are GET, POST (for inserts), PUT (for updates), PATCH (for merging updates) and DELETE. But for updates you’ll actually use the virtual method CreateEntity to override the logic called for a POST, the UpdateEntity for logic invoked with PUT and PatchEntity for logic needed for the PATCH HTTP call. Additional virtual methods that can be part of this OData provider are: CreateLink, DeleteLink and GetEntityByKey. In WCF Data Services, you control which CRUD actions are allowed per EntitySet by configuring the SetEntitySetAccessRule. But with Web API, you simply add the methods you want to support and leave out the methods you don’t want consumers to access. Specifying EntitySets for the API The Web API needs to know which EntitySets should be available to consumers. This confused me at first. I expected it to discover this by reading the AirlineContext. But as I thought about it more, I realized it’s similar to using the SetEntitySetAccessRule in WCF Data Services. In WCF Data Services, you define which CRUD operations are allowed at the same time you expose a particular set. But with the Web API, you start by modifying the WebApiConfig.Register method to specify which sets will be part of the API and then use the methods in the controller to expose the particular CRUD operations. You specify the sets using the ODataModelBuilder—similar to the DbContext.ModelBuilder you may have used with Code First. 
Here’s the code in the Register method of the WebApiConfig file to let my OData feed expose Airlines and Legs: Defining a Route to Find the OData Next, the Register method needs a route that points to this model so that when you call into the Web API, it will provide access to the EntitySets you defined: You’ll see that many demos use “odata” for the RoutePrefix parameter, which defines the URL prefix for your API methods. While this is a good standard, you can name it whatever you like. So I’ll change it just to prove my point: Renaming the Controller Class The application template generates code that uses a singular naming convention for controllers, such as AirlineController and LegController. However, the focus of OData is on the EntitySets, which are typically named using the plural form of the entity name. And because my EntitySets are indeed plural, I need to change the name of my controller class to AirlinesController to align with the Airlines EntitySet. Consuming the OData Now I can work with the API using familiar OData query syntax. I’ll start by requesting a listing of what’s available using the request:. The results are shown in Figure 5. <service xmlns="" xmlns: <workspace> <atom:titleDefault</atom:title> <collection href="Airlines"> <atom:titleAirlines</atom:title> </collection> <collection href="Legs"> <atom:titleLegs</atom:title> </collection> </workspace> </service> The results show me that the service exposes Airlines and Legs. Next, I’ll ask for a list of the Airlines as OData with. OData can be returned as XML or JSON. The default for Web API results is the JSON format: { "odata.metadata": "","value":[ { "Id":1,"Name":"Vermont Balloons","ModifiedDate":"2013-02-26T00:00:00" },{ "Id":2,"Name":"Olympic Airways","ModifiedDate":"2013-02-26T00:00:00" },{ "Id":3,"Name":"Salt Lake Flyer","ModifiedDate":"2013-02-26T00:00:00" } ] } One of the many OData URI features is querying. 
By default, the Web API doesn’t enable querying, as that imposes an extra load on the server. So you won’t be able to use these querying features with your Web API until you add the Queryable annotation to the appropriate methods. For example, here I’ve added Queryable to the Get method: Now you can use the $filter, $inlinecount, $orderby, $sort and $top methods. Here’s a query using the OData filter method: The ODataController allows you to constrain the queries so that consumers don’t cause performance problems on your server. For example, you can limit the number of records that are returned in a single response. See the Web API-specific “OData Security Guidance” article at bit.ly/X0hyv3 for more details. Just Scratching the Surface I’ve looked at only a part of the querying capabilities you can provide with the Web API OData support. You can also use the virtual methods of the EntitySetController to allow updating to the database. An interesting addition to PUT, POST, and DELETE is PATCH, which lets you send an explicit and efficient request for an update when only a small number of fields have been changed, rather than sending the full entity for a POST. But the logic within your PATCH method needs to handle a proper update, which, if you’re using Entity Framework, most likely means retrieving the current object from the database and updating it with the new values. How you implement that logic depends on knowing at what point in the workflow you want to pay the price of pushing data over the wire. It’s also important to be aware that this release (with the ASP.NET and Web Tools 2012.2 package) supports only a subset of OData features. That means not all of the API calls you can make into an OData feed will work with an OData provider created with the Web API. The release notes for the ASP.NET and Web Tools 2012.2 package list which features are supported. There’s a lot more to learn than I can share in the limited space of this column. 
I recommend Mike Wasson’s excellent series on OData in the official Web API documentation at bit.ly/14cfHIm. You’ll learn about building all of the CRUD methods, using PATCH, and even using annotations to limit what types of filtering are allowed in your OData APIs and working with relationships. Keep in mind that many of the other Web API features apply to the OData API, such as how to use authorization to limit who can access which operations. Also, the .NET Web Development and Tools Blog (blogs.msdn.com/webdev) has a number of detailed blog posts about OData support in the Web API. “Programming Entity Framework” books from O’Reilly Media and numerous online courses at Pluralsight.com. Follow her on Twitter at twitter.com/julielerman. Thanks to the following technical experts for reviewing this article: Jon Galloway (Microsoft) and Mike Wasson (Microsoft) Jon Galloway (Jon.Galloway@microsoft.com) is a Technical Evangelist on the Windows Azure evangelism team, focused on ASP.NET MVC and ASP.NET Web API. He speaks at conferences and international Web Camps from Istanbul to Bangalore to Buenos Aires. He's a co-author on the Wrox Professional ASP.NET MVC book series, and is a co-host on the Herding Code podcast. Mike Wasson (mwasson@microsoft.com) is a programmer-writer at Microsoft. For many years he documented the Win32 multimedia APIs. He currently writes about ASP.NET, focusing on Web API. Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
I spend a good chunk of time today researching. I had hoped to find an easy solution that would allow me to create and modify the "exth" header in mobi files. If I can just do that, then I can set the "cdetype" in that header to "EBOK", which would signal to Kindles that books like Recipes with Backbone are ebooks and not "personal documents" ("PDOC").

Last night I was able to get this working with the GUI tool Calibre. The problem is that it is a GUI tool. A big appeal of git-scribe is that it automates so much of the ebook chain. In my fork of git-scribe, I have it to the point that I can generate PDF, epub and mobi versions of a book and zip them all into a single file with just one command. I do not relish adding a manual, GUI step to that process.

The best I can come up with for command-line solutions is mobiperl. It seems like it would do the trick -- it was the inspiration for Calibre's EBOK support. But I am not going to pursue that. There are already an alarming number of dependencies in git-scribe; I do not think adding perl and all of mobiperl's requirements is going to help.

In ruby-land (git-scribe is written in ruby), there is the mobi gem, but that is for reading mobi, not writing. And it does not seem to be exth-aware. I could try writing my own, but that just feels like too much of a rabbit hole.

So it seems as though I am stuck with Calibre. It turns out that there is a set of command-line tools that are installed along with the GUI. One in particular, ebook-meta, seems like it could be of some use, except... it does not support updating the "cdetype" field in a mobi.

Another command-line tool installed along with Calibre is ebook-convert. It does not support editing the cdetype either, but I have already used the GUI to configure it to always generate EBOK output. So I give the command a try, converting the mobi version of Recipes with Backbone to... mobi:

➜ output git:(master) ebook-convert book.mobi book_ebok.mobi
1% Converting input to HTML...
InputFormatPlugin: MOBI Input running on /home/cstrom/repos/backbone-recipes/output/book.mobi
Parsing all content...
Forcing Recipes_with_Backbone.html into XHTML namespace
34% Running transforms on ebook...
Merging user specified metadata...
Detecting structure...
Detected chapter: 1. Who Should Read this Book
Detected chapter: 3. How this Book is Organized
Detected chapter: Chapter 1. Writing Client Side Apps (Without Backb
Detected chapter: Chapter 2. Writing Backbone Applications
Detected chapter: Chapter 3. Namespacing
Detected chapter: Chapter 4. View Templates with Underscore.js
Detected chapter: Chapter 5. Instantiated View
Detected chapter: Chapter 6. Collection View
Detected chapter: Chapter 7. View Signature
Detected chapter: Chapter 8. Fill-In Rendering
Detected chapter: Chapter 9. Actions and Animations
Detected chapter: Chapter 10. Reduced Models and Collections
Detected chapter: Chapter 11. Non-REST Models
Detected chapter: Chapter 12. Changes Feed
Detected chapter: Chapter 13. Pagination and Search
Detected chapter: Chapter 14. Constructor Route
Detected chapter: Chapter 15. Router Redirection
Detected chapter: Chapter 16. Evented Routers
Detected chapter: Chapter 17. Object References in Backbone
Detected chapter: Chapter 18. Custom Events
Flattening CSS and remapping font sizes...
Source base font size is 12.00000pt
Removing fake margins...
Cleaning up manifest...
Trimming unused files from manifest...
Trimming 'images/0000i...

Hrm... it really seems to do quite a bit of changing and re-arranging of things. I had expected, based on a cursory reading of the mobi-to-mobi options for ebook-convert, that it would do less. Still, it does the job. When I copy the generated book back to my Kindle Fire, it shows up under the "Books" section rather than the "Documents" section.

Hrm... perhaps I can add an optional "clean-up" step to git-scribe that can invoke an arbitrary command-line script.
I cannot explicitly include Calibre since it requires the GUI configuration from last night. But first, I think, I need to compare the output from ebook-convert with the original git-scribe mobi. I worked hard three months ago to get it just right. I will pick back up with that tomorrow.

Day #243
Walk straight over the input array. Let best hold the best sum found overall and curr hold the best sum ending with the current element. For the latter, we can either use the current element alone (that's n) or also use the best sum ending with the previous element (that's curr+n, with the previous value of curr).

Ruby:

def max_sub_array(nums)
  best, curr = nums[0], 0
  nums.each { |n| best = [best, curr = [n, curr+n].max].max }
  best
end

C++:

int maxSubArray(vector<int>& nums) {
    int best = nums[0], curr = 0;
    for (int n : nums)
        best = max(best, curr = max(n, curr+n));
    return best;
}
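The same recurrence (often called Kadane's algorithm) ports directly to Python; this sketch is mine, not from the thread:

```python
def max_sub_array(nums):
    best, curr = nums[0], 0
    for n in nums:
        curr = max(n, curr + n)  # best sum of a subarray ending at this element
        best = max(best, curr)   # best sum seen at any position so far
    return best

print(max_sub_array([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from the subarray [4, -1, 2, 1]
```

Initializing best to nums[0] (rather than 0) is what keeps the answer correct for all-negative arrays.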
https://discuss.leetcode.com/topic/16242/3-4-lines-ruby-and-c
14 September 2008 18:08 [Source: ICIS news]

HOUSTON (ICIS news)--The destructive winds of hurricane Ike were gone, but rain, power blackouts and flooding continued to beset Houston as the city struggled on Sunday to recover.

Houston Mayor Bill White urged area residents - who have been sheltering in their homes since Friday - to continue to stay off the city’s streets. He discouraged people from going out to view storm damage or to try to travel anywhere by car.

“There’s no need to be out on the roads,” White said during a press conference. He cautioned residents against going out and possibly getting stranded or injured and thereby creating more work for already overwhelmed rescue and medical teams. “There’s no need for emergency rescues” for motorists attempting to brave high water on the road, he said.

Water pressure remained low in many areas of the city, including some assisted living centres, White said.

Debris removal crews were concentrating on clearing main thoroughfares, White said. Work to clear smaller avenues and residential streets may take some time, he indicated.

White urged residents to practice self-sufficiency when possible. “I would just ask the citizens of the community to look after your neighbour,” White said.

Some 2.2m

In addition to wide-scale damage and disruption to city and suburban property throughout southern

Almost all US offshore oil and gas production in the Gulf of Mexico remains shut-in, and federal officials said two US drilling rigs were torn from their moorings and are adrift in the Gulf.
http://www.icis.com/Articles/2008/09/14/9156107/houston-struggles-to-recover-from-ikes-wrath.html
A module for showing and filtering vectors

Project description

A tool for visualization and filtering of vectors.

Features

- Simple to use API
- Clear visualization of vectors in a star chart.
- Filtering of vectors.
- Only show Pareto-optimal vectors.
- Constrain the vectors by value ranges for each dimension.

How to use

After you have installed VecVis you are ready to go. The only way to start the GUI and add vectors is via the API. The available functions are listed at readthedocs. This version does not work on Mac OS X.

Example use:

```python
from vecvis import API

app = API(3)
app.add_vector((1, 0.4, 1.3))
```
https://pypi.org/project/vecvis/