text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Configure parameters
You can change the configuration of Planning Analytics Workspace Local by modifying a paw configuration file.
Do not change the values in defaults.env. Use paw.env to override a value in defaults.env.
- ADMINTOOL_PORT Added in v2.0.44
- In IBM® Planning Analytics Workspace Local version 2.0.44 or later, you can access the Planning Analytics Workspace administration tool remotely on Windows Server by specifying the ADMINTOOL_PORT environment variable in the config/paw.ps1 file.
- For example:
$env:ADMINTOOL_PORT="8888"
- This configuration parameter applies to Planning Analytics Workspace Local installed on a Windows Server only.
- API_ALLOW_ORIGINS Added in v2.0.46
- This parameter allows cross-origins to access API endpoints. Set to a space-separated list of domains. You can use * for global matching. By default, only same-origin is allowed. For example,
API_ALLOW_ORIGINS="*.example.com http://*.enterprise.com"
This parameter is required to embed URL links to Planning Analytics Workspace within an iframe in another product such as IBM Cognos® Analytics. This technique is an example of Cross-Origin Resource Sharing (CORS).
- For more information, see Access-Control-Allow-Origin and Same-origin policy.
- CSP_FRAME_ANCESTORS Added in v2.0.46
- This parameter enables the HTTP Content Security Policy frame-ancestors directive. Enter values as the list of valid parent frame sources separated by a space. The default is self.
This parameter is required to embed URL links to Planning Analytics Workspace within an iframe in another product such as IBM Cognos Analytics. This technique is an example of Cross-Origin Resource Sharing (CORS).
For more information, see CSP: frame-ancestors.
- EnableIPv6
- Flag to enable IPV6 on the bridge network. Value is false.
- ENABLE_INTENT_BAR
- Set to false to disable the natural language search on the intent bar. You might want to set this parameter to false to avoid long running search processes that are created with the intent bar.
- Default is
True.
- ENABLE_PASTE_SPECIAL
- Set to true to enable mixed cell paste. For more information, see Paste values to a mixed range of leaves and consolidated cells.
- Default is
False.
- EnableSSL
- Set to true if you are using SSL. Default is false. Leave all other SSL options at default values if you want to run by using a self-signed test certificate.
- ENABLE_USER_IMPORT
- Default is
true.
- If set to
true, when a user logs in, they are immediately added as a user in Planning Analytics Workspace. When this parameter is set to true, you cannot activate, deactivate, or delete users from the Administer page of Planning Analytics Workspace.
- If set to false, a user must first be added to Planning Analytics Workspace before they can log in to Planning Analytics Workspace. If a user has not been added and tries to log in, they see an error message. Users are added by an administrator. For more information, see Add users.
- When this parameter is set to false, an administrator can activate, deactivate, and delete users. For more information, see Activate or deactivate a user and Delete a user.
- ENABLE_VIEW_EXCHANGE Added in v2.0.44
- Set to
trueto enable Exploration View exchanges between Planning Analytics Workspace and Planning Analytics for Microsoft Excel in the Content Store.
- For more information, see Save to the Planning Analytics Workspace Content Store.
- Note: If you are using Planning Analytics for Microsoft Excel version 2.0.43 or earlier, setting this parameter to
truewill prevent Planning Analytics for Microsoft Excel from connecting to TM1 and authentication servers with security modes 2 or 3 enabled.
-.
- LOG_DIR
- Host directory for storing service logs. Ensure that services can create directories here. Value is log.
- PAGatewayHTTPPort
- HTTP port that is mapped to the host by pa-gateway. Value is 80.
- PAGatewayHTTPSPort
- HTTPS port that is mapped to the host by pa-gateway. Value is 443.
- PAW_NET
- Name of the PAW bridge network. Value is paw_net.
- PAW_V6_SUBNET
- IPV6 subnet for Docker containers. Value is fdfb:297:e511:0:d0c::/80.
- ProxyTimeoutSeconds
- Maximum number of seconds the gateway waits for a backend service response. Value is 120.
- REGISTRY
- Docker registry. Value is pa-docker:5000/planninganalytics.
- ServerName
- Domain name that is used to access Planning Analytics Workspace. This value is used by the gateway as the redirect target for non-SSL requests. Value is pa-gateway.
- SessionTimeout
- The amount of time a Planning Analytics Workspace login session can go unused before it is no longer valid. Specify a positive integer followed by a unit of time, which can be hours (h), minutes (m), or seconds (s).
- For example, specify 30 seconds as 30s. You can include multiple values in a single entry. For example, 1m30s is equivalent to 90 seconds.
- Default is 60 minutes.
- For example,
export SessionTimeout="60m".
- SslCertificateFile
- Path to a PEM-encoded file that contains the private key, server certificate, and optionally, the entire certificate Trust Chain. Value is config/ssl/pa-workspace.pem on Microsoft Windows Server 2016 OS or config/pa-workspace.pem on Linux OS.
- TM1APIPort
- Port for the TM1® Admin Host. The value is empty, which means to use the default port.
- TM1CredentialStoreKeyFile
- Path to and name of the random credential store key, which is generated the first time that you start Planning Analytics Workspace. Value is config/credential_store.key.
- VALIDATE_HOST
- Indicates whether to perform host validation and repair.
- Set to "true" to validate until Start.ps1 is successful and then don't validate when Start.ps1 is run again. The default is "true".
- Set to "always" to always validate.
- Set to "false" to never validate.
-.
- VIRTUAL_BOX_AS_SERVICE
- If you are running the VM as a service using
VBoxVmService, set this parameter to true to suppress scripts from probing or starting the VM by using VirtualBox tools. Value is false.
- X_FRAME_OPTIONS Added in v2.0.46
- This parameter enables the X-Frame-Options header as an alternative to Content-Security-Policy (CSP) frame-ancestors for browsers that don't support CSP (Internet Explorer). The default is sameorigin.
This parameter is required to embed URL links to Planning Analytics Workspace within an iframe in another product such as IBM Cognos Analytics. This technique is an example of Cross-Origin Resource Sharing (CORS).
For more information, see X-Frame-Options.
- CAMLoginNamespace
- IBM Cognos Analytics CAM authentication namespace ID. Specify only when PAAuthMode = cam.
- IBMCognosGateway
- Gateway URI of the IBM Cognos Analytics server. Specify only when PAAuthMode = cam. To enable SSO for Planning Analytics Workspace, you must enter a value in this field.
- IBMCognosServlet
- Dispatcher URI of your IBM Cognos Analytics server. Specify only when PAAuthMode = cam.
- PAAuthMode
- Supported authentication modes. Value must be
camfor IBM Cognos Analytics security authentication or
tm1for standard TM1 authentication.
- TM1ApplicationsLocation
- URI of the TM1 Application Server. Value is.
- TM1Location
- URI of the TM1 Admin Host. Value is.
- TM1LoginServerURL
- URI of the TM1 server to be used for Planning Analytics Workspace authentication. Specify only when PAAuthMode = tm1. | https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=local-configure-parameters | CC-MAIN-2022-05 | refinedweb | 1,136 | 53.07 |
Introduction: Arduino Bluetooth Tank With Custom Android Application (V1.0)
Hello! this is a guide for how to build a simple and cheap Arduino based Bluetooth tank. This is the first stage of this project and only involves the development of a moving tank track based vehicle. The second stage is the Android Application that will be used to control the tank. I intend to build on this project in the future by adding either a controllable turret or a robotic claw! So follow if you want to see future updates! :D
so what you will need to start this project is:
9V Battery Button Clip to 1mm DC Power Plug ~ £2.60
9V Battery (Rechargeable is a good idea) ~ £6.00
L293D Motor Driver Shield Arduino Expansion Board ~ £2.89
Tamiya 70168 Double Gearbox Kit ~ £7.00
Tamiya Tracked Vehicle Chassis ~ £13.00
Arduino HC-06 Bluetooth Transceiver Module 4-Pin serial ~ £6.99
Arduino Uno R3 (Other Uno versions or the duemilanove should be fine) ~ £3.50 - £10.00
This entire project should cost around £42 (Sorry for the UK links at some point i'll add US & CA ones as well)
This is my first Instrucatbles guide so apologies if anything is unclear, please feel free to message me or email me directly at js702@kent.ac.uk
Step 1: Chassis Construction
So the first step of this project is to construct the Tamiya Tracked Vehicle Chassis.
Once the chassis has been constructed instead of installing the single motor gear box you then want to install the Tamiya 70168 Double Gearbox Kit
I apologise that i do not have any images of me constructing the chassis, i advice you follow the instructions included with both of these kits especially with the gear box ones. I constructed my gear box with a gearing ratio of 38.2:1 you can chose another if you wish.
Alternatively you could build your own chassis or use another one as long as it has two separate DC motors the rest of the project onwards should still be applicable!
Step 2: Circuit Design
The Circuit diagram for this project can be seen above, its extremely simple.
First you want to solder wires to each of the motors in the gear box.
You then want to connect one motor to the M1 connections on the L293D Shield then the second motor to the M2 connections. Don't worry about which one goes to which right now you will be able to flip the variable names in the Arduino code later if your tanks motors are connected in reverse.
Now that the motors are connected you want to attach the HC-06 Bluetooth Module. As the L293D Shield i chose did not have a pass through for the Rx & Tx connections on the Arduino i had to resort to soldering directly to the header pins as shown in the images above.
You want to connect the Rx of the Bluetooth module to the Tx of the Arduino and the Tx of the Bluetooth module to the Rx of the Arduino.
I wanted to be able to quickly detach and reattach my HC-06 Module so that i could use it in other projects as well so i used some female header pins soldered to some spare Proto-board i had laying around.
Now you want to connect your 9V battery to your DC connection. Optionally you can cut the positive wire and solder an On/Off switch in. Make sure you do this with the positive wire not the neutral, it is good practise to always have switches attached to circuits on the high connection.
Step 3: Finalising Hardware
Now that you have the circuit and chassis assembled its time to put it together.
As i intend to build upon this project and i want to keep the costs down i decided to use some cardboard for mounting and holding the Arduino in place.
I first measured the size of card that i would need to lift the Arduino above the tank tracks, i then cut it out and secured it using electrical tape. This can be seen in the images above.
I then used some sticky Velco tape for holding my Arduino board in place.
To ensure that the Bluetooth module and power cable do not short with the Arduino or Motor Shield i then cut out another small piece of card as a separator seen in the images above.
All of the hardware for this project has now been completed!
Step 4: Arduino Code
The code that was uploaded to the Arduino can be seen bellow. It requires you to install the AFMotor.h Library, this will allow you to use the Motor Shield properly.
In case you do not know how to install Arduino Libraries click here for a quick tutorial.
The way the the program currently works is by setting up the serial communication through the Bluetooth Module. The Arduino monitors the Rx pin to check for changes in state. Once the Bluetooth Module has been sent data from a connected Bluetooth device. If the Arduino recognised the received data as an instruction for motor control it will enter the corresponding part of the if statement I.E if Rx receives "0" the tank will enter the forward motor state until told otherwise. The system flow diagram above roughly shows how this code functions.
Once you have uploaded this code to the Arduino, you can keep the USB-B connection attached and open the Serial-Monitor, you can type 0 - 9 into the command line to test each of the motor states to make sure everythings working as intended.
#include <AFMotor.h> AF_DCMotor motor2(2, MOTOR12_64KHZ); // create motor #2, 64KHz pwm AF_DCMotor motor1(1, MOTOR12_64KHZ); int state = 0; void setup() <br>{ Serial.begin(9600); // set up Serial library at 9600 bps Serial.println("Welcome: Forward = 1 Left = 2 Right = 3 Backwards = 4 Stop = 0"); motor2.setSpeed(200); // set the speed to 200 motor1.setSpeed(200); // set the speed to 200 } void loop(){ //if some data is sent, read it and save it in the state variable if(Serial.available() > 0) { state = Serial.read(); Serial.print("I received: "); Serial.println(state); delay(10);</p> if (state == '0') { motor2.setSpeed(200); // set the speed to 200/255 motor1.setSpeed(200); // set the speed to 200/255 motor2.run(RELEASE); // turn it on going forward motor1.run(RELEASE); // turn it on going forward Serial.println("Stopped"); delay(100); state = 0; } else if (state == '1') // If 1 Forwards { motor2.run(RELEASE); // turn it on going forward motor1.run(RELEASE); // turn it on going forward motor2.setSpeed(200); // set the speed to 200 motor1.setSpeed(200); // set the speed to 200 motor2.run(FORWARD); // turn it on going forward motor1.run(FORWARD); // turn it on going forward Serial.println("Forward"); delay(100); state = 0; } else if (state == '2') // If 2 Turn Left { motor2.run(RELEASE); // turn it on going forward motor1.run(RELEASE); // turn it on going forward motor1.setSpeed(255); // set the speed to 200/255 motor1.run(FORWARD); //motor2.run(BACKWARD); Serial.println("Left"); delay(100); state = 0; } else if (state == '3') { // If 3 Turn Right motor2.run(RELEASE); // turn it on going forward motor1.run(RELEASE); // turn it on going forward motor2.setSpeed(255); // set the speed to 255 motor2.run(FORWARD); //motor1.run(BACKWARD); Serial.println("Right"); delay(100); state = 0; } else if (state == '4') // If 4 Backwards { motor2.run(RELEASE); // turn it on going forward motor1.run(RELEASE); // turn it on going forward motor2.setSpeed(200); // set the speed to 200 motor1.setSpeed(200); // set the speed to 200 motor2.run(BACKWARD); // Motor 2 backwards motor1.run(BACKWARD); // Motor 1 backwards Serial.println("Backward"); delay(100); state = 0; } else if (state == '5') { motor2.run(RELEASE); // turn it on going release motor1.run(RELEASE); // turn it on going release motor2.setSpeed(255); // set the speed to 255 motor1.setSpeed(140); // set the speed to 140 motor2.run(FORWARD); // Motor 2 forward motor1.run(FORWARD); // Motor 1 forward Serial.println("Forward Right"); delay(100); state = 0; } else if (state == '6') { motor2.run(RELEASE); // turn it on going release motor1.run(RELEASE); // turn it on going release motor1.setSpeed(255); // set the speed to 255 motor2.setSpeed(140); // set the speed to 140 motor2.run(FORWARD); // Motor 2 forward motor1.run(FORWARD); // Motor 1 forward Serial.println("Forward Left"); delay(100); state = 0; } else if (state == '7') // If 4 Backwards { motor2.run(RELEASE); // turn it on going forward motor1.run(RELEASE); // turn it on going forward motor1.setSpeed(255); // set the speed to 255 motor2.setSpeed(140); // set the speed to 140 motor2.run(BACKWARD); // Motor 2 backwards motor1.run(BACKWARD); // Motor 1 backwards Serial.println("Backward Right"); delay(100); state = 0; } else if (state == '8') // If 4 Backwards { motor2.run(RELEASE); // turn it on going forward motor1.run(RELEASE); // turn it on going forward motor2.setSpeed(255); // set the speed to 255 motor1.setSpeed(140); // set the speed to 140 motor2.run(BACKWARD); // Motor 2 backwards motor1.run(BACKWARD); // Motor 1 backwards Serial.println("Backward Left"); delay(100); state = 0; } else if (state >= '9') { Serial.println("Invalid"); delay(100); state = 0; } } } <br>
Attachments
Step 5: Android Application
So now that you hopefully have your working Bluetooth Tank its time to work out how to control it.
On the Google Play store there are many options for Bluetooth serial communication applications that would be capable of controlling your tank such as BlueTerm or ArduDroid and many more.
If you wish to download my application that currently offers 8 directional controls you can download it from the Play Store here: Bluetooth Arduino Tank
How i created this application was through App Inventor 2, which is a really awesome way for you to quickly develop Android Applications.
The images above how both the "Designer" & "Block" view of my application so you can build and change thing up if you wish!
If you have never used App Inventor 2 before heres a link to some helpful tutorials to get you started: App Inventor Tutorials
Once you have the application installed there are a few steps before it will work. The images above show all the steps needed to get the Bluetooth Arduino Tank application working. Make sure that the Arduino is powered on before attempting this!
- Go to setting
- Click on the Bluetooth Tab
- Turn Bluetooth on
- Wait for your phone to find the HC-06 Bluetooth module
- Once it has been found click it and input the password by default it should be either "1234" or "1111"
- Now it should say the time that you were last paired with the HC-06 module
- Now enter the Bluetooth Arduino Tank Application and click the "Pick Bluetooth Module" button
- You should see the HC-06 Module (If not re-try the steps above) click on the Module
- The Application will then return automatically to the main screen with directional controls and now under the "Pick Bluetooth Module" Button it should now say "connected"
- At this point the HC-06 Modules Red LED should now be constantly on instead of pulsing meaning a device is currently connected
Step 6: Project Conclusion
So at this point you should have a cool, simple and cheap Bluetooth controlled tank, that you can now use as a starting point for more development.
The Motor shield i selected allows another 2 Motors and 2 Servos to be added, as i mentioned in the Intro i intend to update this guide with any developments i make on this build (Maybe a turret or grabbing claw).
If any of you guys decide to add on to this i'd love to see how it worked out for you!
If you had any issues or questions about this project please feel free to ask, i will do my best to help you!
Thanks for reading!
I have a few ideas for other future projects so i hope you hear from me again soon!
Second Prize in the
Robotics Contest 2016
Participated in the
First Time Author Contest 2016
Participated in the
Make it Move Contest 2016
4 People Made This Project!
Recommendations
21 Discussions
4 years ago
Massive thanks to anyone who voted for my in the ROBOTICS CONTEST 2016 contest! I can't believe i came second place! Please follow me to make sure you see my upcoming projects!
3 years ago
hi I have a question for the arduino tank. I've built one for myself but with a army tank tracks and motors and gearboxes. I've done everything as in your build. my Bluetooth module does connect as it should but I can't get the tank to do anything at all. is there any advice or ideas of what I need or should do. thanks again I'll be looking forward to hearing from you soon.
3 years ago
would it be possible to make the bluetooth module connect to a dualshock 4 for control or would you need to make a program on an android device to translate the dualshock 4 to connect to the arduino bluetooth ???
Reply 3 years ago
Hello, yes it would be possible to do this the functions would just need to be called by the dualshocks outputs instead of my apps output which was just integers. I found this guide for using the HC-06 module with a dualshock controller. I would advice trying something like this first then modding my code a little.
3 years ago
Hi jack
My android phone is not picking up the bluetooth hc-06 module signals what could be wrong since I have checked the connections and nothing seems to be odd.
Reply 3 years ago
hey Homers just a few little questions, have you ever used this Bluetooth module before in any other projects? (Just so we can rule out if the modules working or not) if this is the first time your using it I would advice following a guide for a simple Bluetooth test like one here:
If it works on there then it must be down to something on the wiring up of the tank or the tanks code, maybe the library you have used is different maybe re-download that.
I assume that the blinking red LED is on when you have tested this and it is not constantly on (meaning that it has connected to a device )
Also check that the module is getting 3v3 not 5v if you have a multimeter
hope one of these helps you solve this problem if it still is an issue feel free to message again, but if you check these things hopefully it can resolve the problem, thanks for the comment Jack
Reply 3 years ago
Hi Jack
I have narrowed down theh issue to communication between the bluetooth HC-06 and phone app I tried it on two different apps now.
The arduino motor shield works on serial monitor and the bluetooth connection and power-supply are fine .
4 years ago
Hello again guys. Questionmy motors run very slow is there a way to increase the power to the motor and if so how you modify the code to do so?.
Reply 3 years ago
Hey sorry for the really slow reply! must have missed this you can do this both in the code and with the gearing ratio.
With the code you can:
See these lines in each of the statements bellow,
motor2.setSpeed(200); // set the speed to 200
motor1.setSpeed(200); // set the speed to 200
you can change the 200 to 255 this will effectively change the PWM to increase the speed. So it should look like this for max speed:
motor2.setSpeed(255); // set the speed to 255
motor1.setSpeed(255); // set the speed to 255
With the hardware you can:
Change the gearing ratio from 38.2:1 to another one which will reduce torque but increase speed.
Or you can add two batteries like you did :)
4 years ago
Good job. Great first Instructable.
Reply 4 years ago
Hey thanks Ohoilett! had a busy few weeks with uni, hopefully should get a new one out soon!
Reply 4 years ago
Cool. I look forward to seeing it.
4 years ago
Hi Jack, yes it is a 3d printed arm. I found it, also on instructables.
theGHIZmo is the author and also did a great job explaining it's operation. I followed his recommendation and an controlling it with a Pololu USB servo Mini Maestro. If you haven't played with one, they are fun, not cheap, but fun. The arm is pretty jumpy at full speed, but this controller does a great job slowing it down so it's a lot smoother. This controller is very easy to program, No C+ code!!! It also can remember a pre-programmed routine and repeats it. If I understand correctly if can be used with an Arduino so it should be able to be blue toothed somehow. This is where I get lost... I don't know if you have access to a printer. I have 2 and would be happy to help you out with printing if you need it, in exchange for helping teach me the Arduino control side of it.
Thanks again, Lee
Reply 4 years ago
Sorry for the late reply Lee the last week has been crazy. This robotic arm looks awesome, now i have some free time I'll go through this guide and see what were working with here :) sure it can be adapted to make this tank even better
4 years ago
Hello again Jack, The newbie figured out my problem... Step 1 follow Jack's schematic :) lol I had the Tx to TX and Rx to Rx I switch them, as your schematic shows...lol and it works great! Thanks again for the great instructable!
Lee
Reply 4 years ago
Hey Lee, glad to hear you got it working! yeah i've made that mistake many times :)
4 years ago
Hello, last time I'll bother you tonight! Just wanted to add, it works great through the serial monitor when I type in the numbers... Thanks again
Reply 4 years ago
Just updated the guide to show how to properly sync your mobile to the HC-06 and how to use my Android Application with images, i hope this solves your issue if not please message me again! :)
Reply 4 years ago
Hey glad you liked the guide Lee! So are you having issues with syncing to the HC-06 module, I just realised i did forget to explain that before using my Android application you want to go to - Settings -> Bluetooth (Make sure its on) -> scan for devices -> connect to the HC-06 Module, the password should be either 1234 or 1111 once you have paired retry my Android Application i will update my guide when i get a change with this. To use my Android Application you first want to click the "Pick Bluetooth Module" button select the HC-06 (It will take a second to sync) once it successfully is connected it will come up with a pop-up message confirming this. The red LED on the HC-06 module should change from pulsing to solid at this point, then you will be able to control the tank
4 years ago
Hello again, I hope you can help a poor newbie... I'm having some trouble. I have mine wired just like your schematic. only difference is I'm running on a 9 volt power supply instead of a battery and this power is connected to the 9v and gnd connections on the motor shield. When I jump to the power feed to the motor terminals on the motor shield, they run. I had to disconnect the HC-06 to get the .ino file to load, but once I did, It loaded perfectly with no errors. I am using your app and am connected to the HC-06 with a solid red light. But I get no response... Any help would be appreciated. Thank you! | https://www.instructables.com/id/Arduino-Bluetooth-Tank-With-Custom-Android-Applica/ | CC-MAIN-2020-29 | refinedweb | 3,366 | 71.95 |
Recently I received a bunch of emails asking how to install java libraries. So, in this tutorial I’ll show you how.
Since I use Eclipse in all of my tutorials, I’ll show you how to install java libraries with Eclipse. I use Eclipse because it is free and looks the same on every operating system.
Specifically I’ll show you how to install the very popular Apache Commons Java Library.
Get Apache Commons Java Library
This is real simple. Go to this page Apache Commons and download and unzip this commons-lang3-3.1-bin.zip.
Install Java Library in Eclipse
Here you’ll see a series of images that will walk you through the installation of the Apache Commons library. As you can see, this series of steps will work for any Java Library.
Of course you need to have Eclipse. I use the version of Eclipse named Eclipse IDE for Java Developers. Now open Eclipse and follow what you see in the pictures below.
Right click on the top most folder in the Eclipse Package Explorer and create a new folder
Create a new folder named lib as you see below.
Right click on the top most folder again in package explorer and click import.
Select General and then File System.
Browse to the location for your Java jar files and select them. Click Finish
Right click on the top most folder again in package explorer and click properties.
Click Libraries, Add Jars, Select the jar files and hit OK.
Now if you want to use the Apache Commons library for example, just add the following line of code to your code import org.apache.commons.lang3.*;
That’s it! If you want to learn java, I have a pretty nice Java Video Tutorial.
I hope that helped you out. If you have any questions leave them below.
If you like videos like this tell Google
To make me extra happy, feel free to share it
Till Next Time.
Thank you in advance,
Matt
Try reinstalling eclipse Indigo. The Juno version (the newest) seems to be buggy. Just search for Eclipse Indigo and you’ll find it. I hope that helps
Hi there — Thanks very much for this.
How do you determine what to include in the import line? In your example, you have:
import org.apache.commons.lang3.*;
For other libraries, how would you figure that out?
Thanks!
The easy thing to do is to let Eclipse tell you what library is needed. If you set up the libraries the right way, Eclipse will do just that. I show you how to do that in How to Install Java Libraries. I hope that helps
hello there, thanks for your nice turorials, I found all of them great!
I wanted to install java library in Netbeans, since I work in Netbeans not Eclipise. Can I just do the same way as you taught here? Thanks again.
Thank you 🙂 Basically you do the same thing, but the interface is different. This page should help NetBeans How To
There are two recommendations that I would like to make to your great set of instructions:
1. At the very beginning: “Create a new Java Project. Name it Java Code.”
2. Just before the import instructions, “unzip the file commons-lang3-3.1-bin.zip”.
Thanks.
There are to suggestions that I would like to make to your great set of instructions.
1. At the beginning add, “Create a new Java project in Eclipse. Call it Java Code.”
2. Before the import instructions add, “unzip the file commons-lang3-3.1-bin.zip
Thanks.
Thank you very much 🙂 I will add those
This is amazing! Your tutorials are clear and more than comprehensible! I’m still mulling over the underrated views you get in YouTube.
Thank you very much 🙂 I don’t care much about views. There are many things that I do that cause me to lose views. For example if I made a bunch of 5 minute videos instead of 25 minute videos that are edited down from 45 minute videos I could quadruple my view count. That doesn’t matter to me. I just do this to help and I don’t care about money
The world need more man like you sir!
Thank you 🙂 You are very kind to say that.
I can find that folder!
Help I don’t know where to look.
Which folder? Sorry I get a ton of questions and I don’t remember everything
Hi Derek, I add the the jar files to the library but every time I try to import the Apache Commons library it says that The import org.apache cannot be resolved. Am I suppose to add the files to the System Library itself?
This should help. You need to add the required jar to your build path.Right click on your project in the Package Explorer,then select Build Path -> Configure Build Path, click on Libraries then click on Add JARs. Now point your required.jar from the Add JARs dialog.
I had the same problem and you probably saved me an hour or two of research with that response. Thanks. I’m enjoying the videos.
Great I’m glad I could help 🙂
For anyone who gets a NoClassDefFoundError when trying to use these, do this.
Thanks for helping 🙂
Thank you very much!
You’re very welcome 🙂
Thanks mate!!!
I was hoping to start learning java
and i found a good tutor. 🙂
You’re very welcome 🙂 I’m glad I could help | http://www.newthinktank.com/2012/01/how-to-install-java-libraries/ | CC-MAIN-2016-50 | refinedweb | 926 | 84.17 |
We’re in the second day of the lambda week. Today you’ll learn about the options you have when you want to capture things from the external scope. Local variables, global, static, variadic packs,
this pointer… what’s possible and what’s not?
The Series
This blog post is a part of the series on lambdas:
- The syntax changes (Tuesday 4th August)
- Capturing things (Wednesday 5th August) (this post)
- Going generic (Thursday 6th August)
- Tricks (Friday 5th August)
The basic overview
The syntax for captures:
[&]- capture by reference all automatic storage duration variables declared in the reaching scope.
[=]- capture by value (create a copy) all automatic storage duration variables declared in the reaching scope.
[x, &y]- capture
xby value and
yby a reference explicitly.
[x = expr]- a capture with an initialiser (C++14)
[args...]- capture a template argument pack, all by value.
[&args...]- capture a template argument pack, all by reference.
[...capturedArgs = std::move(args)](){}- capture pack by move (C++20)
Some examples:
int x = 2, y = 3; const auto l1 = []() { return 1; }; // No capture const auto l2 = [=]() { return x; }; // All by value (copy) const auto l3 = [&]() { return y; }; // All by ref const auto l4 = [x]() { return x; }; // Only x by value (copy) // const auto lx = [=x]() { return x; }; // wrong syntax, no need for // = to copy x explicitly const auto l5 = [&y]() { return y; }; // Only y by ref const auto l6 = [x, &y]() { return x * y; }; // x by value and y by ref const auto l7 = [=, &x]() { return x + y; }; // All by value except x // which is by ref const auto l8 = [&, y]() { return x - y; }; // All by ref except y which // is by value const auto l9 = [this]() { } // capture this pointer const auto la = [*this]() { } // capture a copy of *this
// since C++17
It’s also worth mentioning that it’s best to capture variables explicitly! That way the compiler can warn you about some misuses and potential errors.
Expansion into a Member Field
Conceptually, if you capture
str as in the following sample:
std::string str {"Hello World"}; auto foo = [str]() { std::cout << str << '\n'; }; foo();
It corresponds to a member variable created in the closure type:
struct _unnamedLambda { _unnamedLambda(std::string s) : str(s) { } // copy void operator()() const { std::cout << str << '\n'; } std::string str; // << your captured variable };
If you capture by reference
[&str] then the generated member field will be a reference:
struct _unnamedLambda { _unnamedLambda(std::string& s) : str(s) { } // by ref! void operator()() const { std::cout << str << '\n'; str = "hello"; // can modify values references by the ref... } std::string& str; // << your captured reference };
The
mutable Keyword
By default, the
operator() of the closure type is marked as
const, and you cannot modify captured variables inside the body of the lambda.
If you want to change this behaviour, you need to add the
mutable keyword after the parameter list. This syntax effectively removes the
const from the call operator declaration in the closure type. If you have a simple lambda expression with a
mutable:
int x = 1; auto foo = [x]() mutable { ++x; };
It will be “expanded” into the following functor:
struct __lambda_x1 { void operator()() { ++x; } int x; };
On the other hand, if you capture things by a reference, you can modify the values that it refers to without adding
mutable.
Capturing Globals and Statics
Only variables with automatic storage duration can be captured, which means that you cannot capture function statics or global program variables. GCC can even report the following warning if you attempt to do it:
int global = 42; int main() { auto foo = [global]() mutable noexcept { ++global; }; // ...
warning: capture of variable 'global' with non-automatic storage duration
This warning will appear only if you explicitly capture a global variable, so if you use
[=] the compiler won’t help you.
Capture with an Initialiser
Since C++14, you can create new member variables and initialise them in the capture clause. You can access those variables inside the lambda later. It’s called capture with an initialiser or another name for this feature is generalised lambda capture.
For example:
#include <iostream> int main() { int x = 30; int y = 12; const auto foo = [z = x + y]() { std::cout << z << '\n'; }; x = 0; y = 0; foo(); }
In the example above, the compiler generates a new member variable and initialises it with
x+y. The type of the new variable is deduced in the same way as if you put
auto in front of this variable. In our case:
auto z = x + y;
In summary, the lambda from the preceding example resolves into a following (simplified) functor:
struct _unnamedLambda { void operator()() const { std::cout << z << '\n'; } int z; } someInstance;
z will be directly initialised (with
x+y) when the lambda expression is defined.
Captures with an initialiser can be helpful when you want to transfer objects like
unique_ptr which can be only moved and not copied.
For example, in C++20, there’s one improvement that allows pack expansion in lambda init-capture.
template <typename ...Args> void call(Args&&... args) { auto ret = [...capturedArgs = std::move(args)](){}; }
Before C++20, the code wouldn’t compile and to work around this issue, and you had to wrap arguments into a separate tuple.
Capturing
*this
You can read more about this feature in a separate article on my blog:
Lambdas and Asynchronous Execution
Summary
Next Time
In the next article, you’ll see how to go “generic” with lambdas. See here: Going Generic.) | https://www.bfilipek.com/2020/08/lambda-capturing.html | CC-MAIN-2021-04 | refinedweb | 902 | 53.44 |
Viewing Structure of a Source File
You can examine the structure of the file currently opened in the editor using the Structure tool window or the Structure pop-up window.
By default, RubyMine shows all the namespaces, classes, methods, and functions presented in the current file.
To have other members displayed, turn
In JavaScript files, it is possible to view fields.
- Turn on the Show Fields option on the context menu of the title bar.
To have inherited members displayed
In JavaScript files, it is possible to view inherited members.
- Click Show Inherited option on the context menu of the title bar.
By default, RubyMine shows only methods, constants, and fields defined in the current class. If shown, inherited members are displayed gray. | https://www.jetbrains.com/help/ruby/8.0/viewing-structure-of-a-source-file.html | CC-MAIN-2018-05 | refinedweb | 123 | 64.41 |
When I first began to learn React, I didn't even realize there was a difference between class components and stateless functional components. I thought they were just different ways to write the same thing.
In some ways, they are. In many ways, they aren't the same. In this article, I'll explain the differences between the two as well as when and why to use the different types.
What is a "Class Component"?
A class component is a component that takes advantage of ES6 classes to manage various pieces of the component. State is something we use a lot in React and I'll write more about it in a later post. For now, just know that it's how we manage data within our component. In addition to state, we can create custom functions to use in our component and take advantage of lifecycle methods.
These things can be useful when we are storing or maniplating data within our component. Cases such as these will be our primary use cases for class components. I have provided an example of a class component which will render "Hello World" below using state:
class HelloWorld extends React.Component { constructor(props) { super(props); this.state = { greeting: "Hello World" } } render() { return ( <div> <p>{ this.state.greeting }</p> </div> ) } }
What is a "Stateless Functional Component"?
I know, I know. "Stateless Functional Component" sounds like something big and scary, but I promise: it's not. A stateless functional component is just a function that returns JSX. It's very simple but incredibly useful.
There are two ways to create a stateless functional component. Both are similar and do the same thing, it's just a matter of conciseness. I will be using ES6 arrow functions to create the components. If you haven't used them, I highly recommend you check ES6 out.
The first way: Put it in a variable
If we are putting all of our components in a single file, then this should be how we create stateless functional components. The ability choose how succinctly we want to create our functional components comes into play when we have a different file for each component. The code below illustrates how we can create a functional component within a variable and export it for use in another area of our app.
const HelloWorld = (props) => ( <div> <p>{ props.greeting }</p> </div> ); export default HelloWorld; === <HelloWorld greeting="Hello World!" />
The second way: export the function
When we have a stateless functional component in a file by itself, we don't need to name the component. I know, this saves us, like, 10 characters but hey I'll take what I can get. We can simply create the function and export it like the code below.
export default (props) => ( <div> <p>{ props.greeting }</p> </div> ); === <HelloWorld greeting="Hello World!" />
As you can see, these two functional components look almost identical and they do the same thing. It's really just a matter of personal preference.
A Quick Note:
With ES6 arrow functions, we can use curly braces and put a return inside of those. To keep things concise, we
can also write the function on one line. If no curly braces are placed after arrow, the function will
automatically return whatever is behind the arrow. If the JSX we are returning takes more than one line, we can
wrap our code in parenthesis like the code above.
Which one should I use?
Typically I see that "best practice" is to use stateless functional components whenever possible to reduce code bloat. On Syntax.fm they discussed just using class components all the time because they find that they change a lot of their components from functional to class and don't want to keep converting. Unless you're building an app that is going to be HUGE, I don't see this really causing any problems in terms of performance, so that's completely up to you.
I would love to hear your thoughts about when to use each of these components. How often do you use stateless function components vs class components?
Posted on by:
Tim Smith
I’m a full stack developer who has experience with several front-end tools like Reactjs, Vuejs, and jQuery as well as some back-end tools like PHP, Laravel, Node, and Express.
Discussion
In the context of React “Stateless Functional Component (SFC)” is actually the correct term. There is a documented difference between properties and state. While all components in React may take input, that input is always referred to as properties. State is specifically data that a component manages internally. SFCs are thus named due to their inability to possess/manage state.
Technically, one could return a closure and manage state in a SFC, but that would be unusual.
Touché. Typically, the nomenclature I see around this is “stateless functional components” so I figured i’d stick with it.
Isomorphic just means 'same shape's so it should be the same shape code on both server and client.
Hey, that's great — not doubting you. Portable tends to be used for software that doesn't need to be recompiled for different machines.
No words going to be perfect though.
What do you mean by side effect free? | https://practicaldev-herokuapp-com.global.ssl.fastly.net/iam_timsmith/class-components-vs-stateless-functional-components-51he | CC-MAIN-2020-34 | refinedweb | 878 | 65.42 |
I read sometimes in newsgroups about problems with Rave and printer drivers, especially HP, e.g. 2600.
The interesting thing is, that the customer can print without any problems with MS-office or other application, but with Nevrona Rave Reports there is a Division by zero exception with some drivers. An update of the driver sometimes solves the problem but not 100%.
With the following solution is a workaround available that create the reports without exceptions.
Try this and if you found another problem (or solution, of course
Some programming libraries (and perhaps MS-Office?!) contain such procedure
call: Set8087CW($133f);
this disables FPU exceptions....
Normally there is some bad code (in the driver) that is changing the FPU control word to cause it to ignore some exceptions and not properly resetting it.
The default value of Default8087CW is $1332, with the following code you can check it in your pascal-code.
If (Get8087CW and $1F3F) <> $1332)
then ShowMessage(Format('CW=$%4.4x'
To make your rave-reporting with "every" driver stable, the following code should work;
var
CW: Word;
begin
CW := Get8087CW;
try
Set8087CW($133f);
RvProject1.ExecuteReport('Report1');
Set8087CW(CW);
except
..
end;
end;
or the asm-way
asm
FLDCW cw
end;
with C++ the workaround should be the secureFPU (I’m not the C++-expert, I hope this is correct !?)
#include "float.h";
void secureFpu() { _control87(PC_64|MCW_EM,MCW_PC
Another trick especially for HP-printer driver is setting in the application the SkipAbortProc to true:
RPDev.SkipAbortProc := true;
(you must insert the RpDevice-unit)
Now your customer can print with Rave and don't get an exception....
Kommentare:
I've had this problem for years. My workaround was to install the postscript driver, rather than the PCL driver, onto the client's PC. I've had no problems since doing it this way; the only problem I see is when IT set up a new PC and install the PCL driver by mistake.
Plase give me your e-mail id. thomas.pfister _ @_ gmail.com this is not working.
firstname dot lastname at gmail.com is correct;
I'm having problems with the printer driver matrix Epson FX-2190. He brings a BottomWaste of 0.42cm in one Form continuous de 8.5in width x 11in height when it should bring 0, and I need to print on the last line of the form. You can change the BottomWaste?
Tank´s
Kommentar veröffentlichen | http://rave-notes.blogspot.com/2008/01/rave-reports-and-problems-with-hp.html | CC-MAIN-2018-34 | refinedweb | 405 | 66.13 |
TL;DR I start trying to write a library and get sidetracked into learning about Haskell’s type system.
So last time, I talked about Wai and how you could use it directly. However, if you’re going to do that, you’ll need a routing library. So, let’s talk about how we could build one up. One of the first things you’d need to do is to provide simple boolean conditions on the request object.
It turns out that this raises enough questions for someone at my level to fill more than one blog post.
{-# LANGUAGE TypeSynonymInstances #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE GADTs #-}
import qualified Network.Wai as Wai import qualified Network.HTTP.Types as H
So, how should we define conditions? Well, the Clojure model of keyword and string isn’t going to work here, because the
Wai.Request object is heavily strongly typed. So how about instead we just use the expected values and deduce the key from the type?
So, we’re going to want to implement the same method for several different types. There’s several different ways of doing that: * Create a union/enum class. This is a good approach, but not extensible. * Create a typeclass, which is extensible. * Create a type family, which is also extensible, but I don’t really understand.
You Can’t Buy Class
With that in mind, let’s create our first typeclass!
class RequestCondition rc where isMatch :: Wai.Request -> rc -> Bool
So, in English this says “If the type
rc is a
RequestCondition then there is a method
isMatch which takes a
Wai.Request and an
rc and returns a
Bool.” This is pretty interesting from an OO standpoint. The OO representation would look like
rc.isMatch(request). A Clojure protocol would change this to
(isMatch rc request). In practice, it doesn’t matter: what’s happening is that there’s dynamic dispatch going on on the first parameter.
In the Haskell case, there’s no dynamic dispatch in sight and the first parameter isn’t special.
isMatch on
HTTPVersion and
isMatch on
Method are different functions.
We can now implement the
RequestCondition for some obvious data types.
instance RequestCondition H.HttpVersion where isMatch = (>=) . W.httpVersion
So, here we’ve said “calling
isMatch with a
HttpVersion as a parameter calls
(>=) . W.httpVersion i.e. checks the request is using the version specified. We’d probably need a more sophisticated way of dealing with this if we were writing a real system.
instance RequestCondition H.Method where isMatch = (==) . W.requestMethod
This is much the same, with one wrinkle:
H.Method isn’t actually a type. It’s a type synonym. In C++ you’d introduce one with
typedef, in C# with
using. Haskell, because it likes to confuse you, introduces something that is not a type with the keyword
type. If you look up method on Hackage you see:
type Method = ByteString
You might wonder why this matters. The answer is that the Haskell standard doesn’t allow you to declare instances of synonyms. You can understand why when you realize that you might have multiple synonyms for
ByteString and shoot yourself in the foot. However, for now I’m going to assume we know what we’re doing and just switch on
TypeSynonyms in the header.
Let’s do one more, because three’s a charm.
instance RequestCondition H.Header where isMatch = flip elem . W.requestHeaders
We’d need (a lot) more functionality regarding headers, but let’s not worry about that now. However, again this will fail to compile. This time H.Header is a type synonym, but a type synonym for a very specific tuple.
type Header = (CIByteString, ByteString)
Problem is, Haskell doesn’t like you declaring instances of specific tuples either. This time, you need
FlexibleInstances to make the compiler error go away. To the best of my knowledge,
FlexibleInstances is much less of a hefalump trap than
TypeSynonyms could be.
For fun, let’s throw in a newtype
newtype IsSecure = IsSecure Bool
isSecure :: IsSecure isSecure = IsSecure True
instance RequestCondition IsSecure where isMatch r (IsSecure isSecure) = W.isSecure r == isSecure
Under Construction
How about when we’ve got multiple conditions to apply? Well, if we were writing Java, we’d be calling for a composite pattern right now. Let’s declare some types for these.
newtype And rc = MkAnd [rc] newtype Or rc = MkOr [rc]
I described
newtypes back in Fox Goose Corn Haskell. Note that there’s no reference to
RequestCondition in the declaration. By default, type variables in declarations are completely unbound.
Before we go any futher, let’s fire up a REPL (if you’re in a Haskell project right now you can type
cabal repl) and take a look at what that does:
data And rc = MkAnd [rc] :t MkAnd MkAnd :: [rc] -> And rc
Yes,
MkAnd is just a function. (Not exactly, it can also be used in destructuring, but there isn’t a type for that.) Let’s try expressing it a different way while we’re here:
:set -XGADTs data And2 rc where MkAnd2 :: [rc] -> And2 rc
(You’ll need to hit return twice) Now we’re saying “
And2 has one constructor,
MkAnd2, which takes a list of
m. The GADTs extension does way more than this, some of which I’ll cover later on, but even then I’m only really scratching the surface of what this does. For now I’ll just observe how the GADTs extension provides a syntax that is actually more regular than the standard syntax.
Incidentally, I could have called
MkAnd just
And, but I’ve avoided doing so for clarity.
Composing Ourselves
With the data types, we can easily write quick functions that implement the
RequestCondition typeclass.
instance (RequestCondition rc) => RequestCondition (And rc) where isMatch r (MkAnd l) = all (isMatch r) l
instance (RequestCondition rc) => RequestCondition (Or rc) where isMatch r (MkOr l) = any (isMatch r) l
The most interesting thing here is that we haven’t said that
And is an instance of
RequestCondition, we’re say that it is if its type parameter is an instance of
RequestCondition. Since data types normally don’t have type restrictions themselves, this is the standard mode of operation in Haskell.
So, now I can write
Or [H.methodGet, H.methodPost]
and it’ll behave. So we’re finished. Right? Not even close.
What if we wanted to write
And [H.methodGet, H.http10]
It’s going to throw a type error at you because HTTP methods aren’t HTTP versions. If you take a look at the declaration, it says “list of
rcs that are instances of
RequestCondition” not “list of arbitrary types that are instances of
RequestCondition“. If you’re used to OO, (and I have some bad news for you if you’re a Clojure programmer, that means you) this makes no sense at all. If you’re a C++ programmer, this is going to make a lot more sense. You see, when you do that in Java you’re telling Java to call through a vtable to the correct method. Haskell doesn’t have pervasive vtables in the same way. If you want one, you’re going to have to ask nicely.
Pretty Please and Other Existential Questions
What we want, then, is a function that boxes up a
RequestCondition and returns a type that isn’t parameterized by the original type of the
RequestCondition. What would that function look like?
boxItUp :: (RequestCondition rc) => rc -> RequestConditionBox
Hang on, that looks like the type of a constructor! Except for one really annoying little detail: as I said before, you can’t put type restrictions in
data declarations.
Except you can, if you enable GADTs.
data RequestConditionBox where RC :: (RequestCondition rc) => rc -> RequestConditionBox
RequestConditionBox is what’s known as an “existential type”. As I understand it that should be interpreted as “
RequestConditionBox declares that it boxes a
RequestCondition, but declares nothing else”. So its quite like declaring a variable to be an interface.
Since I wrote this, I’ve learned that existential types are indeed very like interfaces in C#/Java: they are bags of vtables for the relevant type classes. They don’t expose their parameterization externally, but destructuring them still gets the original type out. This is bonkers.
It just remains to actually implement the typeclass:
instance RequestCondition RequestConditionBox where isMatch r (RC m) = isMatch r m
And now we can finally write
And [RC H.methodPost, RC isSecure]
And the compiler will finally accept it. Not quite as pretty as in an OO language where polymorphism is baked into everything, but keeping the character count low isn’t everything. We’ve traded implicit polymorphism for explicit polymorphism.
So we’re done, right? Well, we could be, but I want to go further.
The Power of Equality
If you take a look, what we’ve built looks very much like a toy interpreter (because it is one). What if we wanted a toy compiler instead? In particular, imagine that we really were building a routing library and we had thousands of routes. We might want to only check any given condition once by grouping, for example, all of the
GET routes togther.
Now, you could leave that to the user of the library, but let’s pose the question: given two
RequestConditions, both of which may be composite, how do you determine what conditions are common between the two?
One route is to backtrack, and look at HLists. I think that’s probably an extremely strong approach, but I really haven’t got my head around the type equality proofs-as-types stuff. Another approach is add some stuff to
RequestCondition to track the types in some way. It turns out there’s a way to get the compiler to do most of the work here, so I’ll talk about that next time.
FOOTNOTE: On the Reddit discussion it was pointed out that
RequestConditionBox is an example of the existential type anti-pattern. To summarize: if all you’ve got is a bunch of methods, why not just have a record with those methods as properties? If all you’ve got is one method, why not just use a function.
This is a completely valid criticism of the code in this post as a practical approach. However, we wouldn’t have learned about existential types in the first place, and we couldn’t make functions implement
Eq and
Show. Implementing
Eq is the subject of the next post.
The commenter also added an elegant implementation of the functionality given above in terms of pure functions.
EDIT: Lennart Augustsson clarified that existential types do indeed construct vtables. So “boxing” something in an existential type is very like casting a struct to an interface it implements in C#. I should also clarify that the word bonkers used in the above text was meant as a good thing. 🙂 | https://colourcoding.net/2015/03/02/a-route-to-learning-the-haskell-type-system/ | CC-MAIN-2021-49 | refinedweb | 1,816 | 65.22 |
.
One of the highlights of BUILD last week was the announcement that SignalR, a framework for real-time client to server (or cloud, if you will) communication, would be a real supported thing now with the weight of Microsoft behind it. Love the open source flava!>() method to get a reference to the hub, and then makes a call to clients in the group matched by the topic ID. It’s calling the notifyNewPosts method on the client. The TopicService class, which handles the reply data from the MVC controller, has an instance of the broker new’d up by dependency injection, so it took literally one line of code in the reply action method to get things moving.
.
var parkRepo = new ParkRepo(); var results = await parkRepo.GetAllParks(); // bind results to some UI or observable collection or something
Hopefully this saves you a little time.
Being the ever cautious fan of technology, I ordered a Surface RT within minutes of it going live on Microsoft’s store. I received it Friday, and spent the weekend with it, and wrote a review. I posted that review on my personal blog.
TL;DR: It’s a pretty cool device, but has spots of weirdness that need to be addressed..
When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire.
You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure.
The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism.
So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly.
First, design an interface to handle the cache insertion, fetching and removal. Mine looks like this:
public interface ICacheProvider { void Add(string key, object item, int duration); T Get<T>(string key) where T : class; void Remove(string key); }
Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime:; }
Feel free to expand these to use whatever cache features you want.
I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject call is a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:
ObjectFactory.Initialize(x => { x.Scan(scan => { scan.AssembliesFromApplicationBaseDirectory(); scan.WithDefaultConventions(); }); if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable) x.For<ICacheProvider>().Use<AzureCacheProvider>(); else x.For<ICacheProvider>().Use<LocalCacheProvider>(); });
If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider.
Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository. Let’s say your repository class looks like this:
public class MyRepo : IMyRepo { public MyRepo(ICacheProvider cacheProvider) { _context = new MyDataContext(); _cache = cacheProvider; }
private readonly MyDataContext _context; private readonly ICacheProvider _cache;
public SomeType Get(int someTypeID) { var key = "somename-" + someTypeID; var cachedObject = _cache.Get<SomeType>(key); if (cachedObject != null) { _context.SomeTypes.Attach(cachedObject); return cachedObject; } var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID); _cache.Add(key, someType, 60000); return someType; } ... // more stuff to update, delete or whatever, being sure to remove // from cache when you do so
When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it.
The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-“ plus the ID of the object, and then remove it from cache, in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database.
So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime.
That’s something a lot of people debate about using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences. | http://weblogs.asp.net/jeff/default.aspx?PageIndex=2 | CC-MAIN-2013-48 | refinedweb | 1,403 | 61.06 |
From: Preston A. Elder (prez_at_[hidden])
Date: 2005-01-29 22:40:20:
In this code, the validate functions are only there to handle how the
value being converted is stored, NOT how to convert the value. The
validate_internal class does the actual conversion by default. However a
substitute for this may be passed to the typed_value class - and it will
not matter whether its being stored in a vector or whatever.
As you can see, many of the user-provided validators often simply call
validate_internal to get the value, then do their own processing on it
(for example, see validate_range, which uses the default validator to get
the item before doing the range comparison).
It doesn't have to, as with validate_space (which uses some of my own code
to convert a textual space ("3k", or "5m") to a boost::uint64_t - it is
left as an exercise for the user to ensure that the typed_value class is
typed for boost::uint64_t).
You will also note I specialized the bool version of validate_internal,
just as was done with the bool version of validate in the original -
again, this calls my own function which looks for a textual version of a
boolean, and returns a tribool (the code is in 'utils.h' in the same tree
if you care :P).
This is the kind of system I REALLY hope to see inside program_options, as
it takes care of multiple data stores and custom validators all in one
fell swoop. Including 'daisy chaining' validators (such as what
validate_range does).
Please note, some stuff in here (such as the 'duration' stuff, and
get_bool, and validate_host, etc) are kind of specific to my application,
but should not be too hard to either replicate or remove. I am using this
in my own application right now (which is why it is in my own namespace),
and I can confirm it works :)).
-- PreZ :) Founder. The Neuromancy Society ()
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2005/01/79416.php | CC-MAIN-2021-43 | refinedweb | 343 | 59.23 |
Microsoft Security Bulletin MS01-030 - Critical
Incorrect Attachment Handling in Exchange OWA Can Execute Script
Published: June 06, 2001 | Updated: June 13, 2003
Version: 3.1
Originally posted: June 06, 2001
Updated: June 13, 2003
Summary
Who should read this bulletin:
System administrators who have deployed Outlook Web Access using Microsoft® Exchange 5.5 Server or Exchange 2000 Server.
Impact of vulnerability:
Run code of attacker's choice.
Recommendation:
Customers with OWA implementations should install the patch immediately.
Affected Software:
- Microsoft Exchange 5.5 Server Outlook Web Access
- Microsoft Exchange 2000 Server Outlook Web Access
General Information
Technical description:
On June 06, 2001 Microsoft released the original version of this bulletin. We subsequently identified two issues that necessitated updating the patch and the bulletin on June 08, 2001:
- The vulnerability was found to affect Exchange Server 5.5. We have developed a patch that eliminates the vulnerability, and recommend that customers offering OWA services using Exchange 5.5 install it.
- A regression error was identified in the patch that was originally provided for Exchange 2000. We have corrected the error and provided an updated version of the patch. We recommend that customers who installed the original version of the Exchange 2000 patch install the updated version..
OWA is a service of Exchange 5.5 and 2000.
Vulnerability identifier: CAN-2001-0340
Tested Versions:
Microsoft tested Exchange 5.5 and 2000 to assess whether they are affected by this vulnerability. Previous versions are no longer supported and may or may not be affected by this vulnerability.
Why was this bulletin updated?
After releasing the updated version of this bulletin on June 08, 2001, we discovered that the updated patch for Exchange 2000 contained outdated files that could cause performance problems on the server in certain instances. We have eliminated the error and provided an updated patch.
This bulletin was originally updated because shortly after releasing the original version of this bulletin on June 06, 2001, we discovered two problems that necessitated updating it:
- Contrary to the original version of the bulletin, Exchange 5.5 is affected by the vulnerability. We have developed a patch for Exchange 5.5.
- The patch that was originally provided for Exchange 2000 contained a regression error that could cause performance problems on the server. We have eliminated the error and provided an updated patch.
What's the scope of the vulnerability?
This vulnerability could enable an attacker to run script of his choice against a user's Exchange mailbox by embedding script in any attachment to a mail message. In order for the attack to be successful, the attachment would have to be viewed using OWA. The attachment need not be an HTML attachment. When activated, such a malicious attachment would be capable of taking any action that the user himself could take on the mailbox, including adding, changing, or deleting data in the mailbox.
The vulnerability only affects attachments received via Outlook Web Access. In order for an attacker to successfully attack a user via this vulnerability, she would need to be able to persuade the user to open a specially crafted attachment to a mail message using Outlook Web Access. As a general security practice, users should only open attachments from a trusted source.
What causes the vulnerability?
If a mail message is read in OWA and contains an attachment, and that attachment contains HTML content, a flaw in the interaction between OWA and Internet Explorer causes the browser to render the HTML in the namespace of the server. If the HTML contains scripting, that script may be executed without warning.
What is Outlook Web Access (OWA)?
OWA is a feature that first shipped with Exchange 5.0. When OWA is installed and configured, users can use a web browser as their mail client to access Exchange. OWA is installed by default with Exchange 2000 Server.
What's the problem with how OWA handles attachments when using IE?
By design, when a user double-clicks on a mail attachment in OWA the user should see a dialogue asking whether to save the attachment or to open it. If the user chooses to open it, the file should be handed off to the Operating System and opened using the application that's appropriate for the file type.
The vulnerability results because the dialogue isn't displayed and the file is instead automatically opened. Moreover, the file is opened using IE, which will parse any script it finds in the file.
Are all versions of OWA are vulnerable?
No. The vulnerability only affects OWA in Exchange 5.5 and Exchange 2000.
Does this vulnerability affect Outlook or Outlook Express?
No. The vulnerability only affects Outlook Web Access. It does not affect any of the Outlook or Outlook Express clients.
Does this vulnerability affect all browsers using OWA?
No, the issue only occurs when using IE with OWA. No other browsers are affected.
What would this vulnerability enable an attacker to do?
The attachment would be able to take any action that the user could take on his Exchange mailbox. This could include manipulating messages or folders with complete control.
How might an attacker use this vulnerability?
To exploit this vulnerability, an attacker would have to construct a specially crafted attachment and send it to the intended victim in a mail message. The intended victim would have to use OWA to open the mail message and then the attachment. It's important to note that if the user were to open the attachment in the Outlook client, the attack would fail. Because the attack would require a user to use a specific mail client, a significant degree of social engineering would be required to successfully exploit this vulnerability.
Is there any way to exploit this vulnerability just by causing the user to open a mail message?
No. The vulnerability affects attachments only, not mail messages. It's important to note that OWA strips potentially dangerous content from mail messages.
What does the patch do?
The patch eliminates the vulnerability by changing the way that OWA handles attachments After the patch is applied, OWA sends information that causes IE to prompt the user to download attached documents before they are opened. The user can then save the document locally, or cancel the download.
What Exchange Servers should I install the patch on?
This patch is intended only for Exchange 5.5 and Exchange 2000 servers that are running OWA. You do not need to install this patch on Exchange Servers that are not running OWA.
I've installed earlier versions of the Exchange 2000 patch, what's the best way to install the updated patch?
You can install the updated patch by performing a normal install of the patch. You do not need to uninstall previous versions of the Exchange 2000 patch to update your system.
Download locations for this patch
- Microsoft Exchange Server 5.5:
- Microsoft Exchange 2000 Server:
Additional information about this patch
Installation platforms:
This patch can be installed on systems running Exchange 2000 Gold and Exchange 5.5 Service Pack 4.
Inclusion in future service packs:
The fix for this issue will be included in Exchange 2000 Service Pack 1.
Superseded patches:
None.
Verifying patch installation:
To verify that the patch has been installed on the machine, confirm that the files listed in the knowledge base article have been installed.
Caveats:
In some cases, Internet Explorer will prompt users twice to open an attachment once this fix is applied. To work around this issue, the attachment may be saved to a folder then opened from that location.
Localization:
- The Exchange 5.5 patch can be installed on any language platform.
- Localized versions of the Exchange 2000 patch are available from the Microsoft Download Center. Joao Gouveia for reporting this issue to us and working with us to protect customers.
Support:
- Microsoft Knowledge Base article Q2995 06, 2001): Bulletin Created.
- V2.0 (June 08, 2001): Bulletin updated to advise customers that Exchange 5.5 is also affected by the vulnerability and that the version of the Exchange 2000 patch released on June 06, 2001, contained a regression error that has been corrected.
- V3.0 (June 13, 2001): Bulletin updated to advise customers that the updated version of the Exchange 2000 patch released on June 08, 2001, contained outdated files that has been corrected.
- V3.1 (June 13, 2003): Updated download links to Windows Update.
Built at 2014-04-18T13:49:36Z-07:00 | https://technet.microsoft.com/en-us/library/security/ms01-030.aspx | CC-MAIN-2015-27 | refinedweb | 1,408 | 56.66 |
Python client for Solr
Project description
solr-dsl
A high-level library for querying Solr with Python. Built on the lower-level Pysolr. Supports Python 2 and 3.
Example
from pysolr import Solr from solr_dsl import Field, Range, Search solr = Solr(' query = (Field('doc_type', 'solution') & Range("date", '2018-01-01T00:00:00Z', 'now')) search = Search(solr, query) for document in search.scan(): ...
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
solr-dsl-0.0.15.tar.gz (2.5 kB view hashes) | https://pypi.org/project/solr-dsl/ | CC-MAIN-2022-21 | refinedweb | 105 | 60.51 |
Heap is an efficient data structure used for managing data during the execution of algorithms like heapsort and priority queue. It supports insert and extract min (or max) operations efficiently. Heap achieves this by maintaining a partial ordering on the data set. This partial ordering is weaker than the sorted order but stronger than random or no order. Heap is a binary tree in which key on each node dominates key of its children.
There are two variations of heap, min-heaps, and max-heaps(as shown below).
10 < 45,75 2001 > 1045,1175
45 < 70,55 1045 > 70,55
Property of Heap
- The tree is completely filled on all levels except the lowest level which gets filled from left up to a point. So, it can be viewed as the nearly complete binary tree.
- Both heaps(min and max) satisfy a heap property. In max-heap, the key of each node is larger than the key of its children nodes. A min-heap is organized the opposite way.
- Smallest element in min-heap is at the root. Largest element in max-heap is at the root.
- Heap is NOT a binary search tree (it is a binary tree though ). So binary search doesn't work on the heap.
Heap Implementation
- Binary Tree: The obvious implementation of heap is binary tree (evident from above diagram). Each key will get stored in node with two pointers to its children. Height of heap is the height of root node; and height of a node is number of edges on the longest downward path from node to the leaf. So height of above trees are 3 ( lg8 ).
- Array: Nearly complete binary tree in case of heap enables it to represent it without any pointers. Data can be stored as an array of keys, and use the position of the keys to implicitly satisfy the role of the pointers.
Node(i)left = 2* Node(i)
Node(i)Right = 2*Node(i)+1
As shown above, root of the tree is stored at the first position of the array, and its left and the right children in the second and third positions, respectively. Assumed that array starts with index 1 to make calculations simpler. So left child of i sits in position and right child in 2i+1. This way you can easily move around without pointers.
int first_child(int i){
return 2*i;
}
int parent(int i){
if(i == 1) return -1
else return (int)i/2;
}
Array representation of heap saves memory but is less flexible than tree implementation. Updating nodes in array implementation is not efficient. It helps to implement heap as array as its fully packed till last level. But it will not be so efficient for normal binary trees.
Heap ConstructionThe fundamental point which needs to be taken care of is; the heap property should be retained during each new insertion. Let's take the case of the above min-heap. Now assume that you want to add one more item to the heap with key 5.
Step 1: Add 5 to the next slot.
The new element will get added to the next available slot in the array. This ensures the desired balanced shape of the heap-labeled tree, but heap property might no longer be valid.
Step 2: Swap Node 5 and 70 to satisfy the min-heap property.
After swap min-heap property test passes for node 5. But min-heap property fails for the parent node of 5.
Step 3: Swap node 45 with 5.
Now node with key 5 is satisfied as its children have key values > 5.
Step 4: Swap root node with its left child.
Bingo. Heap is perfect now.
Insertion Complexity: Swap process takes constant time at each level. So each insertion takes at most O(log n) time.
Creation Complexity: Each insertion takes O(log n) time; then creation (or insertion of n nodes will take O(n log n) time.
Extracting Minimum from min-heap
Minimum of a min-heap sits at first position; so finding first minimum value is easy. Now the tricky part is how to find next minimum element.
To find the next minimum; let's remove the first element. After this to make sure that binary tree is maintained, let's copy the last element to the first position. This is required to ensure that the tree is filled at all levels except the lowest level. But after this, the heap property might go for a toss as now a bigger element may sit at the root. Now the heap should re-arrange itself so that the min-heap property satisfies.
Heapify: This is a process in which a dissatisfied element bubbles-down to appropriate position. To achieve this the dominant child should be swapped with the root to push down the problem to the next level. This should be continued further till the last level.
The heapify operation for each element takes lgn steps (or height of tree) so complexity is O(log n) time.
Heapsort
Exchanging the minimum element with the last element and calling heapify repeatedly gives an O( n logn) sorting algorithm, named as Heapsort. Worst case complexity is O( n logn ) time, which is best that can be expected from any sorting algorithm. It is an in-place sorting i.e. uses no extra memory | http://geekrai.blogspot.com/2013/05/heap-data-structure.html | CC-MAIN-2019-04 | refinedweb | 894 | 74.08 |
Odin
Monkey: ARM support
RESOLVED FIXED in mozilla23
Status
()
People
(Reporter: luke, Assigned: mjrosenb)
Tracking
Firefox Tracking Flags
(Not tracked)
Details
(Whiteboard: [gdc2013])
Attachments
(2 attachments, 15 obsolete attachments)
Marty has graciously volunteered to add ARM support to Odin. This is mostly adding codegen for new IM nodes and ARM trampolines. One challenge is catching out-of-bounds heap access: we can neither use the x86 segment register trick nor the x64 large-address space trick. The cheapest solution I can find is to use the 'usat' instruction to clamp all out-of-bounds access to a single address which we ensure will safely fault.
this is still throws a few assertions. I'm pretty sure most of them are due to stack issues. I'll look into it a bit more as I'm able.
hacks on top of hacks. I haven't rebased in a while (coming soon), and I'm pretty sure I've totally broken x64 support, but that should be trivial to fix (just it isn't yet) however, the good news: /home/mrosenberg/src/odin/odin/js/src/jit-test/lib/asm.js:5:0 TypeError: redeclaration of const ASM_OK_STRING mrosenberg@tegra-ubuntu:~/src/odin/odin/js/objs/arm-dbg$ ../../src/jit-test/jit_test.py -f ./shell/js asm.js [17| 0| 0| 0] 100% ==================================================>| 8.6s PASSED ALL
Attachment #713997 - Attachment is obsolete: true
Nice work! I'm guessing you'll work on rebasing now and then we can look over the patch together in MV next week?
Is it possible for us to use the ThumbEE 'chka' when we have it instead of saturating?
I think we're generating ARM, not Thumb code, although there may be future plans for Thumb.
Yes. Currently the Ion assembler is ARM only, If code size becomes a problem for odin, I can probably write a Thumb-2 assembler.
Adds in arm support, passes all jit-tests, and was rebased late last week. There is currently a bug exposed by the banana bread test, but it looks like that is in the assembler rather than Odin.
Attachment #719297 - Attachment is obsolete: true
You're probably right, but just in case: if you see what looks like a corrupted instruction, I'd suspect patching bugs due to offset/actualOffset since that is an ARM-only thing that is only going to show up at scale. That should only be happening in ModuleCompiler::finish, so it might be worth taking a slow pass over that.
The Android NDK does not appear to defined mcontext_t and ucontext_t which is a problem for a Fennec build. See: and The first link has some definitions that may be appropriate, and these at least get AsmJSSignalHandlers.cpp compiling.
ok, rebased off of mozilla-central. passes all jit-tests bananabread has some issues. major cleanups needed, I'll get on them this evening. I just left some of the methods that were recently merged as seperate ARM functions, because there was enough to rebase that I didn't want to also risk
Awesome work! In the meantime, could you build a browser and verify that it runs and?
This doesn't build; same problem as what Doug said in comment #9 -- there is no mcontext_t defined in the Android NDK.
A possible patch to get AsmJSSignalHandlers.cpp compiling on Android ARM.
Comment on attachment 726260 [details] [diff] [review] /home/mjrosenb/patches/addARMSupport-r3.patch Review of attachment 726260 [details] [diff] [review]: ----------------------------------------------------------------- Great progress! Mostly code organization comments below: It'd be nice if you took a pass over this patch; there are a bunch of printfs, #if 0, #if TODO_REMOVE, etc to be cleaned out. Could you reuse AsmJSHeapAccess instead of adding an AsmJSBoundsCheck? I realize that ARM doesn't need opLength or isFloat32Load, but we can just #ifdef those fields in AsmJSHeapAccess as we've done with x86/x64. There seems to be a lot of code duplication resulting from this. ::: js/src/ion/AsmJS.cpp @@ +4389,5 @@ > // CodeGenerator so we can destory it now. > return true; > } > > +//static const unsigned CodeAlignment = 8; rm @@ -4761,5 @@ > // we must align StackPointer dynamically. Don't worry about restoring > // StackPointer since throwLabel will clobber StackPointer immediately. > masm.andPtr(Imm32(~(StackAlignment - 1)), StackPointer); > - if (ShadowStackSpace) > - masm.subPtr(Imm32(ShadowStackSpace), StackPointer); That needs to stay. @@ +4984,5 @@ > #else > + > + //on ARM, we should always be aligned, just do that stuff. > + LoadAsmJSActivationIntoRegister(masm, IntArgReg0); > + LoadJSContextFromActivation(masm, IntArgReg0, IntArgReg0); Comment is rather ambiguous... @@ +4988,5 @@ > + LoadJSContextFromActivation(masm, IntArgReg0, IntArgReg0); > + > + void (*pf)(JSContext*) = js_ReportOverRecursed; > + masm.call(ImmWord(JS_FUNC_TO_DATA_PTR(void*, pf))); > + masm.ma_b(throwLabel); Can't the call/jump-to-throw path be shared with x86/64? @@ +5116,1 @@ > masm.mov(Operand(activation, AsmJSActivation::offsetOfErrorRejoinSP()), StackPointer); Is there not a portable assembler function here and for the 'ret'? ::: js/src/ion/AsmJSLink.cpp @@ +11,5 @@ > #include "jstypedarrayinlines.h" > > #include "AsmJS.h" > #include "AsmJSModule.h" > +#include "Ion.h" \n after #include @@ +200,5 @@ > } > +#elif defined(JS_CPU_ARM) > + // Now the length of the array is know, patch all of the bounds check sites > + // with the new length. > + ion::IonContext ic(cx, cx->compartment, NULL); I'm curious which IonContext constructor this is calling, I only see IonContext constructors taking 1 and 2 arguments... ::: js/src/ion/AsmJSModule.h @@ +519,5 @@ > + ion::AutoFlushCache afc("patchBoundsCheck"); > + int bits = -1; > + for (bits = -1; bits < 31; bits++) { > + if (heapSize >> (bits + 1) == 0) > + break; use JS_CEILING_LOG2 ::: js/src/ion/CodeGenerator.cpp @@ +4327,5 @@ > // according to the system ABI. The MAsmJSParameters which represent these > // parameters have been useFixed()ed to these ABI-specified positions. > // Thus, there is nothing special to do in the prologue except (possibly) > // bump the stack. > + if (!generateAsmPrologue()) Now the name to use is generateAsmJSPrologue and generateAsmJSEpilogue. @@ +5738,3 @@ > JS_ASSERT((AlignmentAtPrologue + masm.framePushed()) % StackAlignment == 0); > +#else > + JS_ASSERT((masm.framePushed()) % StackAlignment == 0); Could you fix either AlignmentAtPrologue or StackAlignment so that the assertion doesn't need an #ifdef? @@ +5770,5 @@ > > bool > CodeGenerator::visitAsmJSParameter(LAsmJSParameter *lir) > { > +#if defined(JS_CPU_ARM) && ! defined(JS_CPU_ARM_HARDFP) !defined @@ +5790,5 @@ > // Don't emit a jump to the return label if this is the last block. > +#if defined(JS_CPU_ARM) && !defined(JS_CPU_ARM_HARDFP) > + if (lir->getOperand(0)->isFloatReg()) { > + masm.ma_vxfer(d0, r0, r1); > + } No { } ::: js/src/ion/IonAllocPolicy.h @@ +94,5 @@ > { > public: > void *malloc_(size_t bytes) { > + void *ret = GetIonContext()->temp->allocate(bytes); > + return ret; Undo ::: js/src/ion/IonLinker.h @@ +18,5 @@ > > namespace js { > namespace ion { > > +//static const int CodeAlignment = 8; Where is CodeAlignment coming from? ::: js/src/ion/LIR.cpp @@ +9,5 @@ > #include "MIRGraph.h" > #include "LIR.h" > #include "IonSpewer.h" > #include "LIR-inl.h" > +#include "shared/CodeGenerator-shared.h" \n after @@ +340,5 @@ > for (size_t i = 0; i < moves_.length(); i++) > JS_ASSERT(*to != *moves_[i].to()); > +#if 0 > + if (!from->isGeneralReg() && ! from->isFloatReg()) > + JS_ASSERT((ToStackOffset(from) & 3) == 0); Remove ::: js/src/ion/RegisterAllocator.h @@ +312,5 @@ > #ifdef JS_CPU_X64 > if (mir->compilingAsmJS()) > allRegisters_.take(AnyRegister(HeapReg)); > #endif > +#ifdef JS_CPU_ARM #if defined(JS_CPU_X64) #elif defined(JS_CPU_ARM) #endif ::: js/src/ion/arm/Architecture-arm.h @@ +32,5 @@ > // components of a js::Value. > static const int32_t NUNBOX32_TYPE_OFFSET = 4; > static const int32_t NUNBOX32_PAYLOAD_OFFSET = 0; > > +static const uint32_t ShadowStackSpace = 0; \n after ::: js/src/ion/arm/Assembler-arm.h @@ +67,4 @@ > static const Register CallTempNonArgRegs[] = { r5, r6, r7, r8 }; > static const uint32_t NumCallTempNonArgRegs = > mozilla::ArrayLength(CallTempNonArgRegs); > +#ifdef REDEFINED REDEFINED? ::: js/src/ion/arm/CodeGenerator-arm.cpp @@ +1645,5 @@ > return true; > } > +#if 0 > +bool > +CodeGeneratorARM::generateAsmPrologue(const MIRTypeVector &argTypes, MIRType returnType, Remove this code and the code below. ::: js/src/ion/arm/IonFrames-arm.h @@ +8,5 @@ > #ifndef jsion_ionframes_arm_h__ > #define jsion_ionframes_arm_h__ > > #include "ion/shared/IonFrames-shared.h" > +#include "ion/arm/Assembler-arm.h" \n ::: js/src/ion/arm/Lowering-arm.cpp @@ +42,5 @@ > bool > LIRGeneratorARM::lowerConstantDouble(double d, MInstruction *mir) > { > uint32_t index; > +#ifdef WHY_DO_I_NEED_THIS good question ::: js/src/ion/arm/Lowering-arm.h @@ +62,5 @@ > bool visitGuardShape(MGuardShape *ins); > bool visitRecompileCheck(MRecompileCheck *ins); > bool visitStoreTypedArrayElement(MStoreTypedArrayElement *ins); > bool visitInterruptCheck(MInterruptCheck *ins); > + //bool visitAsmCheckStackAndInterrupt(MAsmCheckStackAndInterrupt *ins); rm ::: js/src/ion/shared/CodeGenerator-shared.cpp @@ +70,5 @@ > +#else > + bool forceAlign = false; > +#endif > + > + if (gen->performsAsmJSCall() || forceAlign) { As before, it'd be nice if this just fell out of the constants chosen for AlignmentAtPrologue and StackAlignment. ::: js/src/methodjit/BaseAssembler.h @@ +1147,5 @@ > } > > template <typename T> > +#if defined(__GNUC__) && __GNUC__ == 4 && __GNUC_MINOR__ == 7 && defined(JS_CPU_ARM) > + __attribute__((optimize("-O1"))) Wasn't this fix already landed? ::: memory/mozalloc/mozalloc.h @@ +189,5 @@ > #define MOZALLOC_THROW_IF_HAS_EXCEPTIONS /**/ > #define MOZALLOC_THROW_BAD_ALLOC_IF_HAS_EXCEPTIONS > #else > #define MOZALLOC_THROW_IF_HAS_EXCEPTIONS throw() > +#define MOZALLOC_THROW_BAD_ALLOC_IF_HAS_EXCEPTIONS abort() Umm, I don't think you can do that..
(In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #15) >. Yes, I have been getting crashes on the timer tests and am trying to trace the callback exit patch to find the problem. The signal handler seems to obtain the correct program counter, so perhaps it's in the return path.
The stack does not appear to be restored after a callback. The example code below appears to have been generated by GenerateOperationCallbackExit and stack change annotations have been added and do not balanced. I have some concern about state being exposed to corruption on the stack by another signal, but need to double check the ARM s/w manuals. 0x400df090: b 0x400df098 0x400df094: bkpt 0x00fb 0x400df098: sub sp, sp, #60 ; 0x3c // sp -= 60 0x400df09c: stm sp, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, lr, pc} 0x400df0a0: sub sp, sp, #0 0x400df0a4: mrs r0, CPSR 0x400df0a8: vmrs r1, fpscr 0x400df0ac: strd r0, [sp, #-12]! // sp -= 12 0x400df0b0: movw r0, #4096 ; 0x1000 0x400df0b4: movt r0, #16466 ; 0x4052 0x400df0b8: ldr r0, [r0, #68] ; 0x44 0x400df0bc: ldr r1, [r0, #24] 0x400df0c0: str r1, [sp, #68] ; 0x44 0x400df0c4: ldr r0, [r0] 0x400df0c8: b 0x400df0d0 0x400df0cc: bkpt 0x00fc 0x400df0d0: sub sp, sp, #128 ; 0x80 // sp -= 128 0x400df0d4: vstmia sp!, {d0-d15} // sp += 128 (is the state expose on the stack?) 0x400df0d8: sub sp, sp, #128 ; 0x80 // sp -= 128 0x400df0dc: movw r12, #38117 ; 0x94e5 0x400df0e0: movt r12, #6 0x400df0e4: blx r12 // Call into C. Stack preserved on return. 0x400df0e8: cmp r0, #0 0x400df0ec: beq 0x400df118 0x400df0f0: vpop {d0-d15} // sp += 128 0x400df0f4: sub sp, sp, #128 ; 0x80 // sp -= 128 (is this combination necessary?) 0x400df0f8: add sp, sp, #128 ; 0x80 // sp += 128 0x400df0fc: ldrd r0, [sp], #12 // sp += 12 0x400df100: vmsr fpscr, r1 0x400df104: msr CPSR_fs, r0 0x400df108: sub sp, sp, #0 0x400df10c: ldm sp, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, lr, pc} 0x400df110: add sp, sp, #60 ; 0x3c // ?? not reached 0x400df114: nop {0} 0x400df118: movw r5, #4096 ; 0x1000 0x400df11c: movt r5, #16466 ; 0x4052 0x400df120: ldr r5, [r5, #68] ; 0x44 0x400df124: ldr sp, [r5, #16] 0x400df128: sub sp, sp, #0 0x400df12c: ldm sp, {r4, r5, r6, r7, r8, r9, r10, r11, r12, lr} 0x400df130: add sp, sp, #40 ; 0x28 0x400df134: mov r0, #0 0x400df138: bx lr
To expedite this work, I've put both of Marty's and Douglas's (thanks btw!) up on a user repo: We can continue to iterate on this repo and avoid the terrible patch game. Douglas, feel free to post patches here and I'll land them on armodin for you. Running the asm.js unit test suite (run: js/src/jit-tests/jit_tests.py $(SHELL) asm.js) under qemu (very easy to do:) shows failures in asm.js/testTimeout{1-4}.js, so hopefully this is what we're seeing in comment 17. I'll dig into this more tomorrow if noone beats me to it. << 0x403cb098: sub sp, sp, #4 <<< modified 0x403cb09c: b 0x403cb0a4 << are these needed? 0x403cb0a0: bkpt 0x012d << 0x403cb0ac: sub sp, sp, #0 << redundant? << 0x403cb0dc: sub sp, sp, #128 ; 0x80 << 0x403cb0e0: vstmia sp!, {d0-d15} << looks broken. Just use vpop? 0x403cb0e4: sub sp, sp, #128 ; 0x80 << Douglas Crosher from comment #19) > Created attachment 727068 [details] [diff] [review] > ARM Callback exit part fix. > > << not needed, just for debugging, quite easy to remove. > 0x403cb098: sub sp, sp, #4 > <<< modified > 0x403cb09c: b 0x403cb0a4 << are these needed? > 0x403cb0a0: bkpt 0x012d << ditto. > IIRC, nrc has a patch that does this. I'll look into it asap. He wrote the code to use stm rather than pushing each register individually. > 0x403cb0ac: sub sp, sp, #0 << redundant? should also get removed with nrc's patch. > << same debugging code > Marty Rosenberg [:mjrosenb] from comment #20) > (In reply to Douglas Crosher from comment #19) > > Created attachment 727068 [details] [diff] [review] > > ARM Callback exit part fix. ... > >? The 'vstmia sp!, {d0-d15}' instruction increases the sp above the data it saves, so if the code is interrupted before the next instruction then the state may be corrupted. Could use a 'vpush' (not vpop sorry), and it would replace all three instructions and avoid exposing the state on the stack. ... > >. Yes. The ARM manual states 'ARM instructions that include both the LR and the PC in the list are deprecated', so it would be wise to separate them. Your call.
Whiteboard: [gdc2013]
Fixed about 90% of luke's complaints, and rolled in the callback exit fix.
Attachment #724264 - Attachment is obsolete: true
Attachment #726260 - Attachment is obsolete: true
Attachment #727068 - Attachment is obsolete: true
Try run for b997bed88738 is complete. Detailed breakdown of the results available here: Results (out of 23 total builds): warnings: 1 failure: 22 Builds (or logs if builds failed) available at:
Here's a possible patch to correct the FP register push sequence which exposed the state on the stack to corruption. The patch also optimizes the instruction sequences, making use of the ability of the instructions to update the destination pointer. This patch could stand on it's own, so let me know if a separate bug should be submitted for it? With the latest patch set plus this patch, the callback exit becomes: 0x40124070: push {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, lr, pc} 0x40124074: mrs r0, CPSR 0x40124078: vmrs r1, fpscr 0x4012407c: strd r0, [sp, #-12]! 0x40124080: movw r0, #0 0x40124084: movt r0, #16450 ; 0x4042 0x40124088: ldr r0, [r0, #28] 0x4012408c: ldr r1, [r0, #24] 0x40124090: str r1, [sp, #68] ; 0x44 0x40124094: ldr r0, [r0] 0x40124098: vpush {d0-d15} 0x4012409c: movw r12, #23229 ; 0x5abd 0x401240a0: movt r12, #3 0x401240a4: blx r12 0x401240a8: cmp r0, #0 0x401240ac: beq 0x401240c8 0x401240b0: vpop {d0-d15} 0x401240b4: ldrd r0, [sp], #12 0x401240b8: vmsr fpscr, r1 0x401240bc: msr CPSR_fs, r0 0x401240c0: pop {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, lr} 0x401240c4: pop {pc} ; (ldr pc, [sp], #4) 0x401240c8: movw r5, #0 0x401240cc: movt r5, #16450 ; 0x4042 0x401240d0: ldr r5, [r5, #28] 0x401240d4: ldr sp, [r5, #16] 0x401240d8: pop {r4, r5, r6, r7, r8, r9, r10, r11, r12, lr} 0x401240dc: mov r0, #0 0x401240e0: bx lr
In case it helps others, here's my set of patches against nightly for building the Android Fennec ARM with the asm.js support. It is seems to be improving in reliability as more test code here is working without crashes. The performance improvement is great for some code, over 10x faster than Ion nightly.
Here's another version of the callback exit that takes care to align the stack. Code could be interrupted without the stack aligned, so it might be necessary to align it here. The ma_msr instruction emitter was also broken, except when using r0. The ARM specific code: masm.setFramePushed(0); // set to zero so we can use masm.framePushed() below masm.PushRegsInMask(RegisterSet(GeneralRegisterSet(Registers::AllMask & ~(1<<Registers::sp)), FloatRegisterSet(uint32_t(0)))); // save all GP registers,excep sp // Save both the APSR and FPSCR in non-volatile registers. masm.as_mrs(r4); masm.as_vmrs(r5); // Save the stack pointer in a non-volatile register. masm.mov(sp,r6); // Align the stack. masm.ma_and(Imm32(~7), sp, sp); // Store resumePC into the return PC stack slot. LoadAsmJSActivationIntoRegister(masm, IntArgReg0); masm.loadPtr(Address(IntArgReg0, AsmJSActivation::offsetOfResumePC()), IntArgReg1); masm.storePtr(IntArgReg1, Address(r6, 14 * sizeof(uint32_t*))); // argument 0: cx masm.loadPtr(Address(IntArgReg0, AsmJSActivation::offsetOfContext()), IntArgReg0); masm.PushRegsInMask(RegisterSet(GeneralRegisterSet(0), FloatRegisterSet(FloatRegisters::AllMask))); // save all FP registers JSBool (*pf)(JSContext*) = js_HandleExecutionInterrupt; masm.call(ImmWord(JS_FUNC_TO_DATA_PTR(void*, pf))); masm.branchTest32(Assembler::Zero, ReturnReg, ReturnReg, throwLabel); // Restore the machine state to before the interrupt. this will set the pc! masm.PopRegsInMask(RegisterSet(GeneralRegisterSet(0), FloatRegisterSet(FloatRegisters::AllMask))); // restore all FP registers masm.mov(r6,sp); masm.as_vmsr(r5); masm.as_msr(r4); // Restore all GP registers masm.startDataTransferM(IsLoad, sp, IA, WriteBack); masm.transferReg(r0); masm.transferReg(r1); masm.transferReg(r2); masm.transferReg(r3); masm.transferReg(r4); masm.transferReg(r5); masm.transferReg(r6); masm.transferReg(r7); masm.transferReg(r8); masm.transferReg(r9); masm.transferReg(r10); masm.transferReg(r11); masm.transferReg(r12); masm.transferReg(lr); masm.finishDataTransfer(); masm.ret(); Corrected as_msr: BufferOffset Assembler::as_msr(Register r, Condition c) { // hardcode the 'mask' field to 0b11 for now. it is bits 18 and 19, which are the two high bits of the 'c' in this constant. JS_ASSERT((r.code() & ~0xf) == 0); return writeInst(0x012cf000 | int(c) | r.code()); } The resulting callback exit code becomes: 0x40411070: push {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, lr, pc} 0x40411074: mrs r4, CPSR 0x40411078: vmrs r5, fpscr 0x4041107c: mov r6, sp 0x40411080: bic sp, sp, #7 0x40411084: movw r0, #0 0x40411088: movt r0, #16466 ; 0x4052 0x4041108c: ldr r0, [r0, #28] 0x40411090: ldr r1, [r0, #24] 0x40411094: str r1, [r6, #56] ; 0x38 0x40411098: ldr r0, [r0] 0x4041109c: vpush {d0-d15} 0x404110a0: movw r12, #23229 ; 0x5abd 0x404110a4: movt r12, #3 0x404110a8: blx r12 0x404110ac: cmp r0, #0 0x404110b0: beq 0x404110d0 0x404110b4: vpop {d0-d15} 0x404110b8: mov sp, r6 0x404110bc: vmsr fpscr, r5 0x404110c0: msr CPSR_fs, r4 0x404110c4: pop {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, lr} 0x404110c8: pop {pc} ; (ldr pc, [sp], #4)
Comment on attachment 727258 [details] [diff] [review] /home/mjrosenb/patches/addARMSupport-r4.patch Not sure if Doug's most recent patch is this rebased or one of my previous patches rebased, Either way, there shouldn't be a huge amount of variance between them.
Comment on attachment 727258 [details] [diff] [review] /home/mjrosenb/patches/addARMSupport-r4.patch Perhaps this isn't the right patch; this one contains a bunch of printfs and I can see a lot of the comments from above aren't addressed.
Revised patch re-based to mozilla-central which now includes the fix from bug 849489. This started out with addARMSupport-r4.patch plus: fixes to get it to compile; some further cleanups suggested in Luke's review; a reworked callback exit that aligns the stack; and Android support.
Attachment #727615 - Attachment is obsolete: true
Oh wow, thanks! If you want slightly more rapid responses than bugzilla supports, feel free to hop onto IRC, I am basically always responsive on IRC.
This patch attempts to also address build issues for non-ARM systems, and completed a x64 build. > +++ b/js/src/ion/shared/CodeGenerator-shared.h ... - inline int32_t ToStackOffset(const LAllocation *a) const { + int32_t ToStackOffset(const LAllocation *a) const { Reverted this change in Marty's patch. Looks like a debug mod and should not be committed. Put it back if necessary. +++ b/memory/mozalloc/mozalloc.h ... -#define MOZALLOC_THROW_BAD_ALLOC_IF_HAS_EXCEPTIONS throw(std::bad_alloc) +#define MOZALLOC_THROW_BAD_ALLOC_IF_HAS_EXCEPTIONS abort() Reverted this change in Marty's earlier patches. Was this every necessary for some ARM asm.js builds? > +++ b/js/src/ion/shared/IonAssemblerBuffer.h ... struct AssemblerBuffer - : public IonAllocPolicy { public: - AssemblerBuffer() : head(NULL), tail(NULL), m_oom(false), m_bail(false), bufferSize(0) {} + AssemblerBuffer() : head(NULL), tail(NULL), m_oom(false), m_bail(false), bufferSize(0), LifoAlloc_(8192) {} ... +++ b/js/src/ion/shared/IonAssemblerBufferWithConstantPools.h ... - Pool(int maxOffset_, int immSize_, int instSize_, int bias_, int alignment_, + Pool(int maxOffset_, int immSize_, int instSize_, int bias_, int alignment_, LifoAlloc &LifoAlloc_, ... Might the LifoAlloc changes have been a separate enhancement or do they add infrastruture needed for the ARM asm.js support? Might it be better to split them out?
Attachment #728598 - Attachment is obsolete: true
Update re-basing after bug 850070 landed.
Attachment #728617 - Attachment is obsolete: true
Sorry for the large amount of divergent development without rebasing. I'll merge the changes and bugfixes into your rebased patch shortly. This one passes all jit-tests, as well as the emscripten tests, including bananabread.
Attachment #727258 - Attachment is obsolete: true
got rebased, and it still passes all jit-tests, as well as bananabread.
Attachment #728859 - Attachment is obsolete: true
Attachment #728878 - Attachment is obsolete: true
Marty: could you give me the backstory on these changes to Load() and all the associated test changes? On first glance, it seems like these should be in a separate bug...
Comment on attachment 729115 [details] [diff] [review] /home/mjrosenb/patches/addARMSupoort-rebased-r0.patch Review of attachment 729115 [details] [diff] [review]: ----------------------------------------------------------------- Great work getting the Emscripten tests to pass! I think it's almost done, I'd just like to see the below small fixes and see the answer to my question in comment 35. ::: js/src/ion/CodeGenerator.cpp @@ +5806,3 @@ > JS_ASSERT((AlignmentAtPrologue + masm.framePushed()) % StackAlignment == 0); > +#else > + JS_ASSERT((masm.framePushed()) % StackAlignment == 0); Could you change AlignmentAtPrologue/StackAlignment so that the original assertion is valid on ARM? ::: js/src/ion/LIR.cpp @@ +339,5 @@ > JS_ASSERT(*from != *to); > for (size_t i = 0; i < moves_.length(); i++) > JS_ASSERT(*to != *moves_[i].to()); > +#if 0 > + if (!from->isGeneralReg() && ! from->isFloatReg()) rm ::: js/src/ion/arm/Lowering-arm.cpp @@ +50,5 @@ > bool > LIRGeneratorARM::visitConstant(MConstant *ins) > { > if (ins->type() == MIRType_Double) { > +#ifdef WHY_DO_I_NEED_THIS rm ::: js/src/ion/arm/MacroAssembler-arm.h @@ +1194,5 @@ > + > + void lea(Operand addr, Register dest) { > + ma_add(addr.baseReg(), Imm32(addr.disp()), dest); > + } > +#ifdef TODO rm ::: js/src/ion/shared/CodeGenerator-shared.cpp @@ +75,1 @@ > unsigned alignmentAtCall = AlignmentAtPrologue + frameDepth_; Can you fix AlignmentAtPrologue/frameDepth etc so that forceAlign isn't necessary? That is, it seems like that if you defined these with the right values, you'd get 'rem == 0' below. ::: js/src/ion/x64/MacroAssembler-x64.h @@ +940,5 @@ > JS_ASSERT(nextInsn <= code + codeBytes); > uint8_t *target = code + codeBytes + globalDataOffset; > ((int32_t *)nextInsn)[-1] = target - nextInsn; > } > +#if 0 rm
w.r.t. comment 35, that was me backing out sfink's changes, since they rendered me unable to run the shell version of bananbread at all. Somehow or other, I managed to roll them up into my patch. They have been stripped out now.
Attachment #729115 - Attachment is obsolete: true
Attachment #729426 - Flags: review?(luke)
Seems to be running great, fantastic work!
Comment on attachment 729426 [details] [diff] [review] /home/mjrosenb/patches/addARMSupoort-rebased-r1.patch Review of attachment 729426 [details] [diff] [review]: ----------------------------------------------------------------- Great work! ::: js/src/ion/AsmJS.cpp @@ +5365,5 @@ > JS_ASSERT(masm.framePushed() == 0); > > masm.mov(Imm32(0), ReturnReg); > + masm.abiret(); > + rm extra \n
Attachment #729426 - Flags: review?(luke) → review+
The ARM Android signal handler support re-based after bug 851880 landed.
Attachment #726389 - Attachment is obsolete: true
The stack alignment changes appear to have problems for the non-ARM ports. * This change does not appear to preserve the original intent of the code. The AlignmentAtPrologue was changed to 0 for the ARM, so perhaps this was just a typo mistake and the original was intended to be maintained. js/src/ion/AsmJS.cpp: bool CodeGenerator::visitAsmJSCall(LAsmJSCall *ins) ... - JS_ASSERT((AlignmentAtPrologue + masm.framePushed()) % StackAlignment == 0); + JS_ASSERT((masm.framePushed()) % StackAlignment == 0); * This change introduces AlignmentMidPrologue which is only defined for the ARM but touched by all the ports. js/src/ion/shared/CodeGenerator-shared.cpp + if (gen->performsAsmJSCall() || forceAlign) { + unsigned alignmentAtCall = AlignmentMidPrologue + frameDepth_; On a separate matter, bug 854045 includes the Android ARM signal context support and it is more comprehensive then the patch proposed here, so could I suggest that bug 854045 be landed first.
This rebased patch set has a alternative native stack layout for the ARM asm.js support. The approach used here is to expand the native frame size to 8 bytes: the 4 byte return pc, plus 4 bytes of unused fill. The framePushed base then starts after this block and is aligned to an 8 byte boundary, and this simplifies some uses of the framePushed. This also aligns the local stack on an 8 byte boundary and should speed access to double words in this area of the stack. The AlignmentAtPrologue becomes zero for the ARM. The patch makes the function 'adjustFrame' un-protected - perhaps a new function to allocate stack and set framePushed would be better. There are other ways to address the issues. The return pc slot could be managed by the Ion stack allocator, and then the 'unused fill' slot might be used. Or the Ion stack allocator could be extended to account for the alignment. Have some other patches to extend the ARM macro assembler to account for a framePushed alignment offset, etc. However making the native frame size 8 bytes might be a simplest approach for now. This patch set assumes that bug 854045 has landed so does not include the Android signal content support. The Banana Benchmark completes when run from the shell, even with a debug build, but fails when run in the browser (fails in the browser even with asm.js disabled). The Ammo benchmark runs for a non-debug build.
Attachment #727591 - Attachment is obsolete: true
Attachment #730548 - Attachment is obsolete: true
.)
I'm going to land the currently r+'ed patch, then doug, could you submit the stack fixes for review as a separate patch? thanks.
(In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #43) > .) The browser test can fail even using the 'headless' path. It reports errors loading the images from the decrunched blobs. Moved the banana bench code to a local server and added some checksums along the blob paths, but it works now, so it needs further analysis. Anyway it does not appear to be a asm.js specific issue, so should not hold up asm.js, and I'll open a separate bug if it can be narrowed down.
sorry, didn't land last night because doug pointed out that i'd totally break x86 and x64.
landed (apparently, safely):
Status: ASSIGNED → RESOLVED
Last Resolved: 6 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla23 | https://bugzilla.mozilla.org/show_bug.cgi?id=840285 | CC-MAIN-2019-18 | refinedweb | 4,311 | 56.55 |
Hyper-V VHDX support
Registered by Alessandro Pilotti
Hyper-V uses the VHD and VHDX formats for virtual disks.
VHDX has been introduced with Windows Server 2012, providing better performance and the ability to resize differential disks (aka CoW in the driver's implementation). VHDX disks are not supported on Windows Server 2008 and 2008 R2.
The Nova Hyper-V driver currently supports the VHD format only. The aim of this blueprint in to provide support for VHDX as well.
Note: VHDX support in Hyper-V requires the WMI root \virtualization\v2 namespace.
Gerrit topic: https:/
Addressed by: https:/
Adds Hyper-V VHDX support
Dependency tree
* Blueprints in grey have been implemented. | https://blueprints.launchpad.net/nova/+spec/hyper-v-vhdx | CC-MAIN-2021-39 | refinedweb | 112 | 56.25 |
Hello I must merge two text files which contain a variable number of numbers. Both of these text files are sorted from lowest to highest value and when they are merged into one list must retain this property.
My friend and I were working on developing code for this but we have no idea if this is right. This is what we have so far...
We're really quite confused and so we just tried to sort if using a really,really large array but when we try to compile it we receive these errors:We're really quite confused and so we just tried to sort if using a really,really large array but when we try to compile it we receive these errors:Code:#include <iostream> #include <string> #include <fstream> #include <sstream> #include <cstdlib> #include <vector> #include <list> int main(int argc, char argv[]) { using namespace std; ifstream in_stream1; ofstream out_stream; ifstream in_stream2; in_stream1.open("prob2list1.txt"); in_stream2.open("prob2list2.txt"); out_stream.open("prob2merged.txt"); int num1; int num2; int i; int j; int mergeList[10000]; while(in_stream1.eof!=0) { in_stream1>>num1; mergeList[i]= num1; i++; } while(j<i) { in_stream2>>num2; if(mergeList[j]>num2) j++; else { if(j==0) { mergeList.insert(j,1); mergeList[0]=num2; } else { mergeList.insert(j,1); mergeList[j-1]=num2; } j++; } } return 0; }
Line 30 - Error:Invalid use of member<did you forget the &?>
Line 46 - Error:Request for member 'insert' in 'mergeList', which is of non-class type 'int[10000]'
Line 51 - Error:Request for member 'insert' in 'mergeList', which is of non-class type 'int[10000]'
I know this code probably won't do the job in the first place and so any assistance would be greatly appreciated,thanks.
I'm quite new at this so please excuse any errors. | https://cboard.cprogramming.com/cplusplus-programming/83776-sorting-two-lists-into-one.html | CC-MAIN-2018-05 | refinedweb | 300 | 57.1 |
Living With Arduino and the L298N H-Bridge for Bi-Polar Stepper Motor Control
The above module is an L298N daughter board I bought off Amazon a week ago, and getting it working has been a rather frustrating journey that is finally seeing some resolution today. This was a major reason I thought I would put together an instructable to allow people to follow in my footsteps. The L298N is an H-Bridge, this is a fancy name for a set of electronic switches that allows you to reverse the polarity of the output without a more complex circuit. If you look up the capabilities of the L298N chip, it's a very powerful and versatile circuit, however there is a ton of information about how to use it to control DC motors for robots, I am only dealing with stepper motors. It breaks down like this: The board (and chip) have 4 inputs shown/labeled as IN1-4 and four outputs OUT1-4. When you apply a positive TTL signal to any pin it will then send positive voltage from the supply line to that output port. However, if you set IN1 and IN2 high, you are creating a dead short, which may damage both the chip and your motor, therefore it's necessary to make sure both pins are never held high.
Step 1: What You Will Need
This project is going to require a few things, if you're reading this, you're likely to already have:
* An arduino of some flavor (I'm using an UNO)
* A stepper motor (go look up it's spec sheet)
* An L298N driver board similar to the one shown in the picture
* Some kind of power supply that provides at least 5V but less than the max of your motor
* Hookup wire, wire strippers, wire clippers, etc (no soldering on this project)
A digital multi-meter may be helpful
Step 2: Wiring the L298N to the Arduino
I described some of this in the last section, but lets get detailed:
There are four pins on the L298N module IN1-4, there are four output connections OUT1-4. There is also a +V and GND in a terminal block on the module. There is also a +5V terminal (we will not be using this).
The IN pins may be connected to any of the control pins on the Arduino. In my case I have an LCD shield on the UNO, so I used the analog pins (A1 through 4 This will be important later). These were then connected as follows:
A1 -> IN1
A2 -> IN2
A3 -> IN3
A4 -> IN4
Importantly, you must also connect a ground pin from the arduino, into the common ground terminal, otherwise this will not work!!!!
I then attached +V to a variable power supply, and again Ground was connected to ground through the terminal.
Step 3: Wiring the Stepper
If you looked up the spec sheet, it should list which wires are A+ A- B+ and B- sometimes, they don't use this specific language, but what you have is a box with 4 wires coming out of it, these are broken up into two sets. You can check with your Multi-Meter by measuring resistance (or connection) to see which two are paired (if you don't have documentation). In order for the motor to actually spin, you need to make sure A+ and B+ are hooked to OUT1 and OUT3 respectively. If you wire up the motor, and it just vibrates, one of your pairs is reversed.
Basically the wiring diagram is:
A+ (Black) -> OUT1
A- (Green) -> OUT2
B+ (Blue) -> OUT3
B- (Red) -> OUT4
Step 4: Programming: Overview and Warnings
So when I first started this odyssey, the documentation was rather poor, and worse still most of the examples depend on using either digitalWrite for handling pin manipulation, or worse still depend on the built in arduino stepper library, which essentially implements the same thing. However, there is a massive problem with doing it this way. The digitalWrite system is ungodly slow, and worse still, if you're executing a digitalWrite followed by another digitalWrite it's really ugly ugly horrible slow kludgy code. DON'T DO THIS!
If you're not already familiar with it, you should read this:...
What this allows us to do is to rather than writing pins high or low one at a time, simply write a whole set of pins high or low just by addressing the register that controls these pins.
So the warning: the L298N H-Bridge is essentially 4 individual switches operating as one, and has one major bad habit if you use it with the existing arduino Stepper library, or if you use the repeated digitalWrite() statements, that is, because of the wait time before setting pins, it's likely you may put IN1 and IN2 high at the same time. This creates a dead short, and after perhaps no more than a minute or two, will likely smoke-check your bridge. It took me several days of debugging to figure out why the bridge was pulling 4 amps, and after about 5 seconds of running, the heat sink became too hot to touch.
Step 5: Programming Example
So, there's some extra stuff in here that you may or may not need, like the code for the LCDShield, or the code that checks the execution time of the main loop.
If you're using the A1-A4 this code should compile (Arduino IDE 1.6.5) and get your motor to spin. Most of this code is my own, with a bit borrowed or altered from the Stepper.h file.
Notes:
The delay on "StepFast" is in microseconds, so 2000 is only 2 milliseconds, most of the time if you try stepping the motor with a delay of less than 1200 it will skip steps, and despite 800 steps being 4 full revolutions for most motors, you may find your motor only makes maybe a quarter turn.
This code is intended as an example, it currently doesn't reverse, nor does it take feedback from the L298, or do a lot of other things I would like. In looking at the existing Stepper.h, I may re-write it in the coming weeks using this method for handling the steps, as the existing method will likely damage the L298 or any other H-bridge configuration.
#include <Arduino.h> #include <LiquidCrystal.h> //Keypad Shield LCD pins LiquidCrystal lcd(8, 9, 4, 5, 6, 7); long unsigned int lasttime; long unsigned int timer; int timeuntil; float exectime; int smallcount; void setup() { lcd.begin(16, 2); lcd.print("Motor Test"); delay(2500); lcd.clear(); } void StepFast(long int steps,long unsigned wait) { DDRC = B00011110; //set arduino ports A1-A4 output remember this works backwards! // ^-pin 7^-pin 0 int pattern = 0; int mydelay = 0; for (int i = 0;i < steps; i++) { switch (pattern) { case 0: // 1010 PORTC = B00001010; //arduino analog port we're using pins A1-A4 So we're only going to change those break; case 1: // 0110 PORTC = B00001100; break; case 2: //0101 PORTC = B00010100; break; case 3: //1001 PORTC = B00010010; break; } pattern++; if (pattern > 3) {pattern = 0; } delayMicroseconds(wait); } PORTC = B00000000; //de-energize to motor } void loop() { lasttime = timer; timer = millis(); exectime = (timer - lasttime)/1000; if (timeuntil < timer) { lcd.clear(); lcd.setCursor(0,0); lcd.print("Clockwise "); lcd.print(exectime); lcd.setCursor(0,1); lcd.print(timer); lcd.print(" "); lcd.print(lasttime); timeuntil = timer + 1500; } StepFast(800,2000); //steps,delay in microseconds delay(5000); }
Notes:
This code works fairly well at moderate step speeds of ~300RPM (step rate of 1ms or so) as you try to get more towards 1000RPM, it will start missing steps unless the voltage increases, however if you run the motor at 60RPM (5ms) at >5V the L298N will start to get quite hot.
Step 6: Afterthoughts and Additions
So there are a few things I didn't really address in the original write up that I'm very much in the process of dealing with. Of these, the main issues are:
- As step speed increases, the supply voltage also must increase
- Using the pins ENA and ENB as PWM inputs to keep voltage low at low step speeds, and raise it as step speed increases
- Dealing with Acceleration and Inertia | http://www.instructables.com/id/Living-With-Arduino-and-the-L298N-H-Bridge-for-Bi-/ | CC-MAIN-2017-26 | refinedweb | 1,391 | 58.05 |
History |
View |
Annotate |
Download
(4.57 KB)
/*
* $OpenBSD: dup2test.c,v 1.3 2003/07/31 21:48:08 deraadt Exp $
* $OpenBSD: dup2_self.c,v 1.3 2003/07/31 21:48:08 deraadt Exp $
* $OpenBSD: fcntl_dup.c,v 1.2 2003/07/31 21:48:08 deraadt Exp $
*
* Written by Artur Grabowski <art@openbsd.org> 2002 Public Domain.
* $FreeBSD: src/tools/regression/file/dup/dup.c,v 1.2.2.1.2.1 2008/11/25 02:59:29 kensmith Exp $
*/
/* Modified for Prex by Kohsuke Ohtani */
* Test #1: check if dup(2) works.
* Test #2: check if dup2(2) works.
* Test #3: check if dup2(2) returned a fd we asked for.
* Test #4: check if dup2(2) cleared close-on-exec flag for duped fd.
* Test #5: check if dup2(2) allows to dup fd to itself.
* Test #6: check if dup2(2) returned a fd we asked for.
* Test #7: check if dup2(2) did not clear close-on-exec flag for duped fd.
* Test #8: check if fcntl(F_DUPFD) works.
* Test #9: check if fcntl(F_DUPFD) cleared close-on-exec flag for duped fd.
* Test #10: check if dup2() to a fd > current maximum number of open files
* limit work.
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
static int getafile(void);
static int
getafile(void)
{
int fd;
char temp[] = "/tmp/dup2XXXXXXXXX";
if ((fd = mkstemp(temp)) < 0)
err(1, "mkstemp");
remove(temp);
if (ftruncate(fd, 1024) != 0)
err(1, "ftruncate");
return (fd);
}
int
main(int argc, char *argv[])
struct rlimit rlp;
int orgfd, fd1, fd2, test = 0;
orgfd = getafile();
printf("1..17\n");
/* If dup(2) ever work? */
if ((fd1 = dup(orgfd)) < 0)
err(1, "dup");
printf("ok %d - dup(2) works\n", ++test);
/* Set close-on-exec */
if (fcntl(fd1, F_SETFD, 1) != 0)
err(1, "fcntl(F_SETFD)");
/* If dup2(2) ever work? */
if ((fd2 = dup2(fd1, fd1 + 1)) < 0)
err(1, "dup2");
printf("ok %d - dup2(2) works\n", ++test);
/* Do we get the right fd? */
++test;
if (fd2 != fd1 + 1)
printf("no ok %d - dup2(2) didn't give us the right fd\n",
test);
else
printf("ok %d - dup2(2) returned a correct fd\n", test);
#if 0
/* Was close-on-exec cleared? */
if (fcntl(fd2, F_GETFD) != 0)
printf("not ok %d - dup2(2) didn't clear close-on-exec\n",
printf("ok %d - dup2(2) cleared close-on-exec\n", test);
#endif
/*
* Dup to itself.
*
* We're testing a small tweak in dup2 semantics.
* Normally dup and dup2 will clear the close-on-exec
* flag on the new fd (which appears to be an implementation
* mistake from start and not some planned behavior).
* In todays implementations of dup and dup2 we have to make
* an effort to really clear that flag. But all tested
* implementations of dup2 have another tweak. If we
* dup2(old, new) when old == new, the syscall short-circuits
* and returns early (because there is no need to do all the
* work (and there is a risk for serious mistakes)).
* So although the docs say that dup2 should "take 'old',
* close 'new' perform a dup(2) of 'old' into 'new'"
* the docs are not really followed because close-on-exec
* is not cleared on 'new'.
* Since everyone has this bug, we pretend that this is
* the way it is supposed to be and test here that it really
* works that way.
* This is a fine example on where two separate implementation
* fuckups take out each other and make the end-result the way
* it was meant to be.
*/
if ((fd2 = dup2(fd1, fd1)) < 0)
printf("ok %d - dup2(2) to itself works\n", ++test);
if (fd2 != fd1)
printf("not ok %d - dup2(2) didn't give us the right fd\n",
printf("ok %d - dup2(2) to itself returned a correct fd\n",
if (fcntl(fd2, F_GETFD) == 0)
printf("not ok %d - dup2(2) cleared close-on-exec\n", test);
printf("ok %d - dup2(2) didn't clear close-on-exec\n", test);
/* Does fcntl(F_DUPFD) work? */
if ((fd2 = fcntl(fd1, F_DUPFD, 0)) < 0)
err(1, "fcntl(F_DUPFD)");
printf("ok %d - fcntl(F_DUPFD) works\n", ++test);
printf(
"not ok %d - fcntl(F_DUPFD) didn't clear close-on-exec\n",
printf("ok %d - fcntl(F_DUPFD) cleared close-on-exec\n", test);
if (getrlimit(RLIMIT_NOFILE, &rlp) < 0)
err(1, "getrlimit");
if ((fd2 = dup2(fd1, (int)(rlp.rlim_cur + 1))) >= 0)
printf("not ok %d - dup2(2) bypassed NOFILE limit\n", test);
printf("ok %d - dup2(2) didn't bypass NOFILE limit\n", test);
return (0); | http://roboticsclub.org/redmine/projects/scoutos/repository/revisions/03e9c04a454a17f4ca7c67eb5a87c251d9e8fac0/entry/prex-0.9.0/usr/test/dup/dup.c | CC-MAIN-2014-15 | refinedweb | 773 | 83.46 |
Now that we understand controllers we’re going to create one for Posts.
The Posts handler should have the following actions:
With what we’ve learned already you could write all of these actions yourself. Fortunately, the bulk of this functionality is already provided to you by Ferris via the Scaffold.
To demonstrate the full extent of what scaffolding can do, we’re going to use the scaffold to create an admin interface for Posts.
Create app/controllers/posts.py:
from ferris import Controller, scaffold class Posts(Controller): class Meta: prefixes = ('admin',) components = (scaffold.Scaffolding,) admin_list = scaffold.list #lists all posts admin_view = scaffold.view #view a post admin_add = scaffold.add #add a new post admin_edit = scaffold.edit #edit a post admin_delete = scaffold.delete #delete a post
Open in your browser. You’ll see that you now have a complete interface for managing posts.
Note
If you receieve an Access Denied page, make sure you choose the ‘Sign in as Administrator’ option when you logged in.
Let’s walk through what we did:
Let’s take a moment to explore the features that the admin scaffold provides us. Open and click ‘Add’ on the top navigation.
Here we see that a form has been automatically generated for the two fields in our Post model. Recall that we made the title property required. If we try to submit this form without putting anything in the title field, we’ll see that we get a nicely formatted error.
Let’s go ahead and create an actual post with a title and content. Once we’ve submitted the form, we’ll be redirected to the list. Our new post has appeared (if it hasn’t appeared, refresh, the datastore is eventually consistent).
On the right side of our new Post there are three buttons: view, edit, and delete. These map to the actions admin_view, admin_edit, and admin_delete.
Our requirements state that we need to have two different lists: one of everyone’s posts, and one of just our posts. This means that we need to build on top of the scaffold’s list action to conditionally add a filter.
First, create a few posts as one user and a few posts as another user using the Admin scaffold so that we have some data to test with.
Note
You can sign in as a different user via the url. Make sure to check the ‘Sign in as Administrator’ checkbox. You’ll want keep being an admin so you can add posts via the Admin scaffold.
First, we want to add an non-prefixed list action so that we can access a list of everyone’s posts at. We can do that exactly like we did with admin_list:
list = scaffold.list
If we open up we’ll see that the scaffold does indeed list everyone’s posts. However, they’re in the wrong order. We need to modify it to use the all_posts method from our Posts class.
The scaffold’s logic for list is very simple. It just sets the self.context['posts'] to our Model’s default query. We can easily set that variable ourselves using our all_posts() method. Remove the scaffolded list and add this:
def list(self): self.context['posts'] = self.meta.Model.all_posts()
Notice that we’re still able to use the scaffold’s template; all we had to do was set the posts template variable and the scaffold knew what to display. You’ll also notice that the scaffold automatically determined that the Posts controller uses the Post model and provides that via self.meta.Model.
With that in place all that’s left is to add the ability for list to show just our posts using the all_posts_by_user method. Modify the list method again:
def list(self): if 'mine' in self.request.params: self.context['posts'] = self.meta.Model.all_posts_by_user() else: self.context['posts'] = self.meta.Model.all_posts()
Now if we open up it will show only the posts for the currently logged-in user.
As nice as the admin scaffold is, we don’t want to have to give every user admin rights to be able to add a new post. We can give all users that ability by adding a non-prefixed add action just like we did intially with list:
add = scaffold.add
We’ll just use the scaffold’s behavior since it is perfectly acceptable for this case. If we open up we’ll see a form like the one in the admin scaffolding.
At this point users can add posts but they can’t edit any of the posts they’ve already created. Let’s add the edit using the scaffold like we did with add:
edit = scaffold.edit
At this point we have a problem: a user can edit any post, even those created by other users. While this could be slightly amusing, this behavior is undesirable. We need to add a check to make sure the user is editing a post that they created:
def edit(self, key): post = self.util.decode_key(key).get() if post.created_by != self.user: return 403 return scaffold.edit(self, key)
Let’s walk through this:
Now users can only edit posts that they themselves have created. | http://ferris-framework.appspot.com/docs21/tutorial/4_scaffolding.html | CC-MAIN-2017-13 | refinedweb | 871 | 74.29 |
Investors in CRISPR Therapeutics AG (Symbol: CRSP) saw new options become available this week, for the June 19th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the CRSP options chain for the new June 19th contracts and identified one put and one call contract of particular interest.
The put contract at the $50.00 strike price has a current bid of $4.10. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $50.00, but will also collect the premium, putting the cost basis of the shares at $45.90 (before broker commissions). To an investor already interested in purchasing shares of CRSP, that could represent an attractive alternative to paying $52.96.20% return on the cash commitment, or 53.45% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for CRISPR Therapeutics AG, and highlighting in green where the $50.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $55.00 strike price has a current bid of $4.70. If an investor was to purchase shares of CRSP stock at the current price level of $52.96/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $55.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 12.73% if the stock gets called away at the June 19 $55.00 strike highlighted in red:
Considering the fact that the $55.87% boost of extra return to the investor, or 57.84% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 79%, while the implied volatility in the call contract example is 77%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 252 trading day closing values as well as today's price of $52.96) to be 63%.. | https://www.breathinglabs.com/monitoring-feed/genetics/first-week-of-crsp-june-19th-options-trading/ | CC-MAIN-2020-45 | refinedweb | 356 | 65.22 |
and load an egg in Trac. In the advanced parts you'll learn how to serve templates and static content from an egg.
You should be familiar with component architecture and plugin development. This plugin is based on the example in the plugin development article. Here we extend it a bit further.
Required items
First you need setuptools. For instructions and files see EasyInstall page.
You also need Trac 0.9.5. Download it from the TracDownload page.
Directories
To develop a plugin you need to create a few directories to keep things together.
So let's create following directories:
./helloworld-plugin/ ./helloworld-plugin/helloworld/
Main plugin
The first step is to generate the main module for this plugin. We will construct a simple plugin that will display "Hello world!" on the screen when accessed through the /helloworld URL. The plugin also provides a "Hello" button that is, by default, rendered on the far right in, Markup class HelloWorldPlugin(Component): implements(INavigationContributor, IRequestHandler) #): req.send_response(200) req.send_header('Content-Type', 'text/plain') req.end_headers() req.write('Hello world!')
To help understand how that works, read the INavigationContributor and IRequestHandler interface specifications.
Make it a module
To make the plugin a module, you simply create an __init__.py in ./helloworld-plugin/helloworld/:
# Helloworld module from helloworld import *
Make it an egg
Now it's time to make it an egg. For that we need a chicken called setup.py in ./helloworld-plugin/:
from setuptools import setup PACKAGE = 'TracHelloworld' VERSION = '0.1' setup(name=PACKAGE, version=VERSION, packages=['helloworld'], entry_points={'trac.plugins': '%s = helloworld' % PACKAGE}, )
You will also have to add special egg metadata to cater to Trac's plugin loader. the main navigation bar when accessing your site. Click it.
Aftermath
Now that 11 years ago.
First version of tutorial plugin
- helloworld-plugin-1-trac-0.9.5.tar.gz (774 bytes) - added by maxb 11 years ago.
Tutorial files, updated for Trac 0.9.5
Download all attachments as: .zip | https://trac-hacks.org/wiki/EggCookingTutorial/BasicEggCooking?version=31 | CC-MAIN-2017-04 | refinedweb | 330 | 52.97 |
This book is about describing the meaning of programming languages. The author teaches the skill of writing semantic des
276 12 2MB
English Pages 240 [229] Year 2021
This book is about describing the meaning of programming languages. The author teaches the skill of writing semantic des
446 112 2MB Read more
From the Introduction: """ My Philosophy Many people would regard this as being two books in one. One b
756 151 7MB Read more
This clearly written textbook provides an accessible introduction to the three programming paradigms of object-oriented/
919 163 18MB Read more
This International Standard specifies the form and establishes the interpretation of programs written in the C programmi
509 133 1MB Read more
Table of contents :
Preface
Using this book
Writing style
Acknowledgements
Contents
Chapter 1 Programming languages and their description
1.1 Digital computers and programming languages
1.2 The importance of HLLs
1.3 Translators, etc.
1.4 Insights from natural languages
1.5 Approaches to describing semantics
1.6 A meta-language
1.7 Further material
1.7.1 Further reading
1.7.2 Classes of languages
1.7.3 Logic of Partial Functions
Chapter 2 Delimiting a language
2.1 Concrete syntax
2.2 Abstract syntax
2.3 Further material
Projects
Further reading
Historical notes
Chapter 3 Operational semantics
3.1 Operational semantics
3.2 Structural Operational Semantics
3.2.1 Relations
3.2.2 Inference rules
3.2.3 Non-deterministic iteration
3.3 Further material
Projects
Alternatives
Further reading
Historical notes
Chapter 4 Constraining types
4.1 Static vs. dynamic error detection
4.2 Context conditions
4.3 Semantic objects
4.3.1 Input/output
4.3.2 Arrays
4.3.3 Records
4.4 Further material
Projects
Further reading
Chapter 5 Block structure
5.1 Blocks
5.2 Abstract locations
5.3 Procedures
5.4 Parameter passing
5.4.1 Passing “by reference”
5.4.2 Passing “by value”
5.5 Further material
Projects
Further reading
Chapter 6 Further issues in sequential languages
6.1 Own variables
6.2 Objects and methods
6.3 Pascal variant records
6.4 Heap variables
6.5 Functions
6.5.1 Marking the return value
6.5.2 Side effects
6.5.3 Recursion
6.5.4 Passing functions as parameters [*]
Procedure variables/results
6.6 Further material
Projects
Chapter 7 Other semantic approaches
7.1 Denotational semantics
7.2 Further material
7.3 The axiomatic approach
7.3.1 Assertions on states
7.3.2 Hoare’s axioms
7.3.3 Specification as statements
7.3.4 Formal development
7.3.5 Data abstraction and reification
7.4 Further material
7.5 Roles for semantic approaches
Chapter 8 Shared-variable concurrency
8.1 Interference
8.2 Small-step semantics
8.3 Granularity
8.4 Rely/Guarantee reasoning [*]
8.5 Concurrent Separation Logic [*]
8.6 Further material
Projects
Further reading
Chapter 9 Concurrent OOLs
9.1 Objects for concurrency
9.1.1 An example program
9.1.2 Semantic objects
9.2 Expressions
9.3 Simple statements
9.4 Creating objects
9.5 Method activation and synchronisation
9.5.1 Method activation
9.5.2 Method synchronisation
9.5.3 Delegation
9.6 Reviewing COOL
The example class
COOL summary
9.7 Further material
Chapter 10 Exceptional ordering [*]
10.1 Abnormal exit model
10.2 Continuations
10.3 Relating the approaches
10.4 Further material
Projects
Historical notes
Chapter 11 Conclusions
11.1 Review of challenges
11.2 Capabilities of formal description methods
11.3 Envoi
Appendix A Simple language
A.1 Concrete syntax
A.1.1 Dijkstra style
A.1.2 Java-style statement syntax
A.2 Abstract syntax
A.3 Semantics
Statements
Expressions
Appendix B Typed language
B.1 Abstract syntax
B.2 Context conditions
B.3 Semantics
Appendix C Blocks language
C.1 Auxiliary objects
Objects needed for context conditions
Semantic objects
C.2 Programs
C.3 Statements
C.4 Simple statements Assignment
C.5 Compound statements
C.6 Blocks
C.7 Call statements
C.8 Expressions
Appendix D COOL
Abbreviations
D.1 Auxiliary objects
Types for context conditions
Types for semantics
D.2 Expressions
D.3 Statements
D.3.1 Assignments
D.3.2 If statements
D.4 Methods
D.4.1 Activate method
D.4.2 Call method
D.4.3 Rendezvous
D.4.4 Method termination
D.4.5 Delegation
D.5 Classes
D.5.1 Creating objects
D.5.2 Discarding references
D.6 Programs
Appendix E VDM notation
E.1 Logical operators
E.2 Set notation
E.3 List (sequence) notation
E.4 Map notation
E.5 Record notation
E.6 Function notation
Appendix F Notes on influential people
References
Index
Cliff B. Jones
Understanding Programming Languages
Understanding Programming Languages
Cliff B. Jones
Understanding Programming Languages
Cliff B. Jones School of Computing Newcastle University Newcastle upon Tyne, UK
ISBN 978-3-030-59256-1 ISBN 978-3-030-59257-8 (eBook) © Springer Nature Switzerland AG 2020 principal objective of this book is to teach a skill; to equip the reader with a way to understand programming languages at a deep level. There exist far more programming languages than it makes sense even to attempt to enumerate. Very few of these languages can be considered to be free from issues that complicate –rather than ease– communication of ideas. Designing a language is a non-trivial task and building tools to process the language requires a significant investment of time and resources. The formalism described in this book makes it possible to experiment with features of a programming language far more cheaply than by building a compiler. This makes it possible to think through combinations of language features and avoid unwanted interactions that can confuse users of the language. In general, engineers work long and hard on designs before they commit to create a physical artefact; software engineers need to embrace formal methods in order to avoid wasted effort. The principal communication mode that humans use to make computers perform useful functions is to write programs — normally in “high-level” programming languages. The actual instruction sets of computers are low-level and constructing programs at that level is tedious and unintuitive (I say this from personal experience having even punched such instructions directly into binary cards). Furthermore these instruction sets vary widely so another bonus from programming in a language like Java is that the effort can migrate smoothly to computer architectures that did not even exist when the program was written. General-purpose programming languages such as Java are referred to simply as “High-Level Languages” (HLLs). Languages for specific purposes are called “Domain Specific” (DSLs). HLLs facilitate expression of a programmer’s intentions by abstracting away from details of particular machine architectures: iteration can be expressed in an HLL by an intuitive construct — entry and return from common code can be achieved by procedure calls or method invocation. Compilers for HLLs also free a programmer from worrying about when to use fast registers versus slower store accesses. Designing an HLL is a challenging engineering task: the bigger the gap between its abstraction level and the target hardware architecture, the harder the task for the v
vi
Preface
compiler designers. A large gap can also result in programmers complaining that they cannot get the same efficiency writing in the HLL as if they were to descend to the machine level. An amazing number of HLLs have been devised. There are many concepts that recur in different languages but often deep similarities are disguised by arbitrary syntactic differences. Sadly, combinations of known concepts with novel ideas often interact badly and create hidden traps for users of the languages (both writers and readers). Fortunately, there is a less expensive way of sorting out the meaning of a programming language than writing a compiler. This book is about describing the meaning (semantics) of programming languages. A major objective is to teach the skill of writing semantic descriptions because this provides a way to think out and make choices about the semantic features of a programming language in a costeffective way. In one sense a compiler (or an interpreter) offers a complete formal description of the semantics of its source language. But it is not something that can be used as a basis for reasoning about the source language; nor can it serve as a definition of a programming language itself since this must allow a range of implementations. Writing a formal semantics of a language can yield a far shorter description and one about which it is possible to reason. To think that it is a sensible engineering process to go from a collection of sample programs directly to coding a compiler would be naive in the extreme. What a formal semantic description offers is a way to think out, record and analyse design choices in a language; such a description can also be the basis of a systematic development process for subsequent compilers. To record a description of the semantics of a language requires a notation — a “meta-language”. The meta-language used in this book is simple and is covered in easy steps throughout the early chapters. The practical approach adopted throughout this book is to consider a list of issues that arise in extant programming languages. Although there are over 60 such issues mentioned in this book, there is no claim that the list is exhaustive; the issues are chosen to throw up the challenges that their description represents. This identifies a far smaller list of techniques that must be mastered in order to write formal semantic descriptions. It is these techniques that are the main takeaway of the current book. Largely in industry (mainly in IBM), I have worked on formal semantic descriptions since the 1960s1 and have taught the subject in two UK universities. The payoff of being able to write formal abstract descriptions of programming languages is that this skill has a far longer half-life than programming languages that come and go: one can write a description of any language that one wants to understand; a language designer can experiment with combinations of ideas and eliminate “feature interactions” at far less cost and time than would be the case with writing a compiler. The skill that this book aims to communicate will equip the reader with a way to understand programming languages at a deep level. If the reader then wants to 1
This included working with the early operational semantic descriptions of PL/I and writing the later denotational description of that language. PL/I is a huge language and, not surprisingly, contains many examples of what might be regarded as poor design decisions. These are often taken as cautionary tales in the book but other languages such as Ada or CHILL are not significantly better.
Preface
vii
design a programming language (DSL or HLL), the skill can be put to use in creating a language with little risk of having hidden feature interactions that will complicate writing a compiler and/or confuse subsequent users of the language. In fact, having mastered the skill of writing a formal semantic description, the reader should be able to sketch the state and environment of a formal model for most languages in a few pages. Communicating this practical skill is the main aim of this book; it seeks neither to explore theoretical details nor to teach readers how to build compilers.
Using this book The reader is assumed to know at least one (imperative) HLL and to be aware of discrete maths notations such as those for logic and set theory — [MS13], for example, covers significantly more than is expected of the reader. On the whole, the current book is intended to be self-contained with respect to notation. The material in this book has been used in final-year undergraduate teaching for over a decade; it has evolved and the current text is an almost complete rewrite. Apart from a course environment, it is hoped that the book will influence designers of programming languages. As indicated in Chapter 1, current languages offer many unfortunate feature interactions which make their use in building major computer systems both troublesome and unreliable. Programming languages offer the essential means of expression for programmers — as such they should be as clean and free from hidden traps as possible. The repeated message throughout this book is that it is far cheaper and more efficient to think out issues of language design before beginning to construct compilers or interpreters that might lock in incompletely thought-out design ideas. Most chapters in the book offer projects, which vary widely in their challenge. They are not to be thought of as offering simple finger exercises — some of them ask for complete descriptions of languages — the projects are there to suggest what a reader might want to think about at that stage of study. Some sections are starred as not being essential to the main argument; most chapters include a section of “further material”. Both can be omitted on first reading.
Writing style “The current author” normally eschews the first person (singular or plural) in technical writing; clearly, I have not followed this constraint in this preface. Some of the sections that close each chapter and occasional footnotes also use the first person singular when a particular observation warrants such employment.
viii
Preface
Acknowledgements I have had the pleasure of working with many colleagues and friends on the subject of programming language semantics. Rather than list them here, their names will crop up throughout the book. I have gained inspiration from students who have followed my courses at both Newcastle University and the University of Manchester. I’m extremely grateful to Jamie Charsley for his insertion of indexing commands. I owe a debt to Troy Astarte, Andrzej Blikle, Tom Helyer, Adrian Johnson and Jim Woodcock, who kindly offered comments on various drafts of this book. (All remaining errors are of course my responsibility.) My collaboration with Springer –especially with Ronan Nugent– has been a pleasure. I have received many grants from EPSRC over the years — specifically, the “Strata” Platform Grant helped support recent work on this book.
Contents
1
Programming languages and their description . . . . . . . . . . . . . . . . . . . . . 1 1.1 Digital computers and programming languages . . . . . . . . . . . . . . . . . . 1 1.2 The importance of HLLs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3 Translators, etc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.4 Insights from natural languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.5 Approaches to describing semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.6 A meta-language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.7 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2
Delimiting a language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Concrete syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Abstract syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19 19 25 31
3
Operational semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Operational semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Structural Operational Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
33 33 38 45
4
Constraining types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 Static vs. dynamic error detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Context conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Semantic objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
51 52 53 57 62
5
Block structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Abstract locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4 Parameter passing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
65 65 68 73 76 80 ix
x
Contents
6
Further issues in sequential languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.1 Own variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Objects and methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Pascal variant records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4 Heap variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.5 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.6 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7
Other semantic approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 7.1 Denotational semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 7.2 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 7.3 The axiomatic approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 7.4 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 7.5 Roles for semantic approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
8
Shared-variable concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 8.1 Interference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 8.2 Small-step semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 8.3 Granularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 8.4 Rely/Guarantee reasoning [*] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 8.5 Concurrent Separation Logic [*] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 8.6 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
9
Concurrent OOLs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 9.1 Objects for concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 9.2 Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 9.3 Simple statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 9.4 Creating objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 9.5 Method activation and synchronisation . . . . . . . . . . . . . . . . . . . . . . . . . 141 9.6 Reviewing COOL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 9.7 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
10
Exceptional ordering [*] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 10.1 Abnormal exit model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 10.2 Continuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156 10.3 Relating the approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156 10.4 Further material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
11
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 11.1 Review of challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 11.2 Capabilities of formal description methods . . . . . . . . . . . . . . . . . . . . . 160 11.3 Envoi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
83 83 84 85 87 89 93
Contents
xi
A
Simple language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 A.1 Concrete syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 A.2 Abstract syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 A.3 Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
B
Typed language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 B.1 Abstract syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 B.2 Context conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 B.3 Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
C
Blocks language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 C.1 Auxiliary objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 C.2 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 C.3 Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 C.4 Simple statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 C.5 Compound statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 C.6 Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 C.7 Call statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 C.8 Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
D
COOL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 D.1 Auxiliary objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 D.2 Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 D.3 Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188 D.4 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 D.5 Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 D.6 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
E
VDM notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 E.1 Logical operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 E.2 Set notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202 E.3 List (sequence) notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 E.4 Map notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204 E.5 Record notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 E.6 Function notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
F
Notes on influential people . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Chapter 1
Programming languages and their description
This chapter sets the scene for the rest of the book. Sections 1.1–1.3 outline the problems presented by programming languages and their related tools; Section 1.4 points out that there is material from the study of natural languages that is relevant to the problems of describing artificial languages such as those used to program computers; an overview of a range of techniques for recording the meaning of programming languages is given in Section 1.5 and Section 1.6 introduces the specific notation used throughout this book. In common with most of the following chapters, this one closes with a section (1.7) that contains further material — in particular such sections point to related reading.
1.1 Digital computers and programming languages Consider the phrase “high-level languages for programming general-purpose computers”; starting at the end: • The focus in this book is on digital –rather than analogue– computers. The qualification of “general-purpose” indicates that the behaviour of the computer is controlled by a stored program. The crucial idea that machines can be devised that are in some sense universal is credited to Alan Turing’s famous paper [Tur36] on a technical issue in logic but few people would choose to program a “Turing machine”. • An essential question underlying this book is what is meant by “programming”. A position can be taken that a program for device x should extend what can be expressed in terms of the basic repertoire of x. Thus an early computer that had no support for, say, floating-point numbers had to be programmed to simulate such calculations; even modern computers1 do not normally offer instructions 1
Many modern machine architectures follow the idea of the “Reduced Instruction Set Computer” (RISC); design and programming at the RISC level often requires re-building concepts that were in the instruction sets of earlier machine architectures. © Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
1
2
1 Programming languages and their description
that compute, say factorial, so this is programmed in terms of multiplication etc.; similarly many sorting algorithms can be realised as programs. • Even leaving aside the arcane language of Turing machines, programming at the level of machine code is tedious and error-prone (see Section 1.2). The reason for designing “high-level” languages (HLLs) is to get above this finicky way of expressing programs and make it easier and more productive for programmers to express their ideas. Thus high-level languages for programming general-purpose computers are a means of generating (via a compiler) a series of instructions that can be executed on a digital computer and that realise concepts that are not directly available as expressions of the language.
1.2 The importance of HLLs Picking up the point about the productivity of programmers from the preceding section, there was a panel discussion2 on Programming Languages and their Semantics at a conference in Pittsburgh in May 2004 to which Vaughan Pratt put the intriguing question of how much money the panelists thought that high-level programming languages had saved the world. Pratt was aware of the difficulty of the question because he added a subsidiary query as to whether an answer to the main question would qualify for a Nobel Prize in economics. Without hoping –even with time to reflect– to provide a number, considering Pratt’s question is illuminating. There are almost certainly millions of people in the world for whom programming forms a significant part of their job. Programmers tend to be well paid. A good programmer today can create –using a high-level programming language such as Java– systems that were unthinkable when programs could only be communicated in machine code. A good programming language can, moreover, ensure that many mistakes made by even an average-level programmer are easily detected and corrected. To these powerful savings, ease of program migration can be added: avoiding the need to write versions of essentially the same program for different machine instruction sets must itself have saved huge wastage of time and money. It is also important to appreciate the distribution of costs around programs: even back in the days of mainframes, the majority of programs cost more to develop than their lifetime machine usage costs. Since that period, decades of tracking Moore’s Law [Moo65] have dramatically reduced the cost of executing programs. With modern interactive applications, and factoring in the human cost of user time, the actual machine time costs are close to irrelevant. The productivity of programmers and their ability to create systems that are useful to the end user are the paramount concerns. 2
The panelists were John McCarthy, John Reynolds, Dana Scott and the current author.
1.2 The importance of HLLs
3
The mere fact that there are thousands3 of programming languages is an indication that their design is a subject of interest. The additional observation that there is no one clear “best buy” suggests that designing a high-level programming language is non-trivial. One tension in design is between offering access to the power of the underlying machine facilities so that programs can be made to be efficient versus providing abstractions that make it easier to create programs. The problems of writing programs are also likely to be exceeded by the costs of their maintenance, where the intentions of the original programmer must be understood if the person changing the program is to do so safely. A good programming language offers several aids to the programmers who use it to express their ideas: • data structuring • common forms of control can be built into a language (with only the compiler having to fiddle with the specific machine-level instruction sequences that realise the higher-level expressions) • protecting programmers from mistakes It is worth expanding on the issue of how programming languages provide abstractions. Most computers have a small number of registers, in which all basic computations are performed and instructions can only access one additional storage cell. A calculation involving several values must –at the machine level– involve a series of instructions and might require the storage of intermediate results. A first level of abstraction allows programmers to write arbitrary expressions that have to be translated into strings of machine operations. Clear layers of abstraction can be seen with regard to data representation. The storage of a computer can most easily be viewed as a sequence of small containers (bits, bytes or words).4 From its inception, the FORTRAN language supported declaring arrays of multiple dimensions whose elements could be addressed in a style familiar to mathematicians (e.g. A[I, J ∗ 3]). Such operands have to be translated into sequences of machine instructions that compute the machine address in the sequence of addressable storage cells of the machine. The APL language pushed arrays to extremes and even PL/I provides ways of manipulating slices of ndimensional arrays — such sub-arrays can then be manipulated as if they had been declared to be arrays of lesser dimensions. Array elements are all of one type. Both the COBOL and Pascal languages offer ways of defining inhomogeneous records5 that facilitate grouping data elements whose types differ from each other. Furthermore the whole inhomogeneous object can be used (e.g. as parameters or in input/output) as a single data item or its components can be addressed separately. 3
As early as 1969, Jean Sammet’s book [Sam69] recognised 500 programming languages; a web site that claimed to be listing all known languages got to 8,512 in 2010 then gave up. 4 Of course, computer architectures normally include registers and might themselves provide abstraction such as “virtual memory” (supported by “paging”). Some of these features are discussed when issues relating to code generation are considered in subsequent chapters. 5 Some languages, including PL/I, use the term “structures” rather than records.
4
1 Programming languages and their description
List processing facilitates the creation of arbitrary graphs of data by allowing the manipulation of something like machine addresses as data. Early languages such as IPL-V, Lisp and Scheme had to develop a lot of techniques for garbage collection before list processing could be adopted into more mainstream languages. The concept of objects is a major contribution to the abstractions, as offered in object-oriented languages such as Simula, Smalltalk and Java. Object orientation is discussed in Section 6.2 and its role in taming concurrent computation is the main topic of Chapter 9. A similar story of programming languages developing abstraction mechanisms above the raw conditional jumps of machine level programming could be developed: conditional if constructs, compound statement lists, for and while looping constructs and –above all– recursion make it possible to present complicated programs as a structured and readable text. Tony Hoare reported [Hoa81, p.76] that he could not express his innovative Quicksort [Hoa61] algorithm until he learned the ALGOL 60 programming language.6 This almost certainly contributed to his judgement on ALGOL 60 in [Hoa74b]: Here is a language so far ahead of its time, that it was not only an improvement on its predecessors, but also on nearly all its successors. Of particular interest are its introduction of all the main program structuring concepts, the simplicity and clarity of its description, rarely equalled and never surpassed.
A final, but crucial, area where programming language designers have sought to offer abstractions is that of concurrency.7 Most computers offer rather low-level primitives such as a compare-and-swap instruction; programming even at the level of Dijkstra’s semaphores [Dij62, Dij68a] is extremely error-prone. The whole subject of concurrency in programming languages is still in evolution and its modelling occupies several chapters later in this book. References to histories of the evolution of programming languages are given in Section 1.7. Here, the concern is with the problem of knowing how to record the meaning –or semantics– of useful abstractions such as those sketched above. Semantic description is a non-trivial problem and occupies Chapters 3–10 of this book. The payoff for mastering these techniques is large and can have effects far beyond the language design team. It is just not realistic to expect anyone to be able to design a programming language that will overcome the challenges listed above by sketching sample programs and then proceeding to compiler writing. In fact, such a procedure is a denial of everything known about engineering. The ability to record the meaning of a language at an abstract level means that designers can think out, document and refine their ideas far less expensively than by coding processors. Furthermore, the 6
Although initially designed as a publication language (and the vast majority of algorithms published in the Algorithms section of Communications of the ACM were written in ALGOL 60) the language contributed so many fundamental concepts to programming language design that it has had enormous influence (see [AJ18, §1.4]). 7 In contrast, so-called “weak” (or “relaxed”) memory is a hardware feature which might inflict considerable damage on software development because it is hard to find apposite abstrac+ 13, LV16]. ˇ tions [SVZN
1.3 Translators, etc.
5
far bigger bonus is that users of better thought-through languages will become more productive and stumble into far fewer unexpected “feature interactions”. Before techniques for describing semantics and the case for formalism are discussed in Section 1.5, it is worth considering the software tools that process programming languages.
1.3 Translators, etc. Given that a programmer elects to write a program in a high-level language –say L – and that the hardware on which the program is to run only obeys instructions in its particular machine language,8 some tool is required to bridge the gap. Two key tools are translators and interpreters. Perhaps the more obvious tool is to write an interpreter program in machine language. Such an interpreter would read in the program written in L and simulate its behaviour step by step. This can be done, but pure interpreters tend to run rather slowly. The alternative is to write a program (again in machine language) that can translate any program of L into a sequence of machine instructions that have the effect of the original program. The obvious name for such a program is a translator but the commonly used term is to refer to “compilers” for L .9 In practice, the description above is a simplification to make the distinctions clear.10 Rather than producing pure machine code, a compiler can translate to the language of a virtual machine which is in turn interpreted on actual hardware. This idea has significant advantages in reducing the task of supporting multiple languages. Furthermore, compilers or interpreters can at least partially be written in high-level programming languages; this process is referred to as “bootstrapping”. It is worth reinforcing the point about translation by considering just a few examples. The focus in this book is on imperative programming languages.11 In general-purpose languages, the key imperative statement is the assignment. (Other 8
Grace Hopper said [Hop81]: In the early years of programming languages, the most frequent phrase that we heard was that the only way to program a computer was in octal.
9
The origin of this word is explained in [Bey09] as deriving from the first attempts to automate program construction by collecting (compiling) a group of subroutines. It is surprising that this is the term that continues to be more commonly used for what is clearly a process of translation. 10 In a detailed study [vdH19] of the ALGOL 60 implementation by Dijkstra and his colleagues, Gauthier van den Hove makes clear that, even early in the history of language processing, the question of compiling and interpreting was seen less as a dichotomy and more as a spectrum. 11 Functional languages such as Miranda [Tur85] and Haskell [Hut16] make it easier to reason about programs as though they were mathematical functions. Early implementations of functional languages tended to perform considerably more slowly than imperative languages but this gap has reduced and some serious applications are now written in functional languages. Logic languages such as Prolog [SS86] make a further step both in expressiveness and in their challenge to offer performance. (In fact, Prolog still has imperative features.) The techniques presented in this book
6
1 Programming languages and their description
imperative languages might move the arm of a robot, project an image or update a database.) Most of the statement types actually only orchestrate the order in which updates to variables are made by assignments. As outlined above, straightforward expression evaluation has to be implemented by loads, stores and single-address operations of the machine. But a compiler will often try to optimise expression evaluation. For example “common sub-expressions” might be evaluated only once. Even in early FORTRAN compilers, expressions that occurred inside FOR loops but did not depend on variables whose values were changed in the loop could be evaluated once before the loop. More subtly, expressions such as those which compute array indexes in the loop could be subject to strength reduction so that the effect of multiplication could be achieved by addition each time round a loop. Many of these optimisations are known as “sourceto-source” in the sense that there is an equivalent source program that represents the optimised code. There are other optimisations such as those concerned with maximising efficiency by minimising the number of loads and saves for registers (especially index registers for address calculation) that cannot be expressed in the source language. In either case, it is clearly essential that the “optimisations” do not result in a program doing something different from the programmer’s legitimate expectations. In other words, any optimisations must respect the semantics of the given high-level language. Similar points could be made about compiling control constructs. Most machines provide a primitive (conditional) jump instruction. High-level languages offer far more structured control constructs. The task of a compiler is to translate the latter into the former in a way that results in efficient code. But, again, that low-level code must respect the semantics of the programming language. Three final points can be made about tools for executing high-level languages: • The cost of building a translator for a high-level programming language is normally significant.12 Many arguments can be advanced against undertaking this step too early in the design process for a new programming language. Clearly, if some language deficiencies or irregularities can be resolved less expensively by writing a formal semantics, this is a wise move. More worryingly, once a mistaken design choice is cemented in the code of a translator, the designer might be far more reluctant to undertake the rework to correct the infelicity. • The division of a compiler into lexical analysis, parsing, dictionary building, . . . , code generation provides useful analogies for the task of formal description of languages — these are used in later chapters. • There are also other important tools such as those that assist a programmer in debugging programs written in some language L . for describing the semantics of imperative languages would apply to both functional and logic languages (see for example [AB87, And92]). 12 This can be reduced if translation to the language of an existing virtual machine is possible.
1.5 Approaches to describing semantics
7
1.4 Insights from natural languages The languages that are spoken by human beings were not designed by committees13 — they just evolved and they continue to change. The evolution process is all too obvious from the irregularities in most natural languages. The task of describing such natural languages is therefore very challenging but, because they have been around longer, it is precisely on natural languages that the first systematic studies were undertaken. Charles Sanders Peirce (1839-1914) used the term Semiotics for the study of languages. Peirce14 divided the study of languages into: • syntax: concerning the structure of utterances in a language • semantics: about the meaning of the language • pragmatics: covering the intention of using the language Heinz Zemanek applied the terminology to programming languages in [Zem66] but work on formalising the syntax of programming languages goes back to the ALGOL effort in the late 1950s. As shown in Chapter 2, it is not difficult to write syntax rules for parts of natural languages but, because of their irregularity, a complete syntax is probably not a realistic objective. It is far harder to give a semantics of a language (than to write syntactic rules) and Section 1.5 expands on this topic, which then occupies the majority of the following chapters. One way of appreciating the distinction between syntax and semantics is to consider some questions that arise with programming languages: • Ways to fix the nesting of expressions (e.g. 2 + 3 ∗ 4 vs. (2 + 3) ∗ 4) is a syntactic question. • Another syntactic question is how to fix to which test the else clause in if a = b then if c = d then x: = 1 else x: = 2 applies. • Choosing which of the many possibilities of parameter passing modes is to be included in a language is a semantic question. • Deciding whether to under-determine the results expected by a program is certainly a semantic decision and the role of non-determinism in programming languages is particularly important in the context of concurrency (see Chapter 8).
1.5 Approaches to describing semantics Some general points about semantics can be made concrete by looking at a few specific programs. The following program (in a syntax close to that of ALGOL 60): 13 14
Of course, there are a small number of exceptions such as Volap¨uk and Esperanto. Pronounced “Purse”.
8
1 Programming languages and their description begin integer n, fn;
...
begin integer t;
t : = 0; fn : = 1; while t 6= n do t : = t + 1; fn : = fn ∗ t; od end
end
computes the factorial of the (integer) value of the variable n and places the result in variable fn. A formal justification of this fact is contained in Section 7.3 but it is apparent that such a claim can only be based on an understanding of the meaning of the statements of the language in which the program is written. Thus it is clear that one role for a semantic description of a language –say L – is to be able to reason about the claim that a program written in language L meets its specification. (This notion is examined in detail in Chapter 7.) But, if it were the case that a programmer presented an argument –possibly even a formal proof– that a program met its specification and then a purported compiler for L introduced errors in the sense that the machine code created by the compiler was not in accord with the semantics of L , the programmer would have every right to be aggrieved. So it is clear that a key role for a semantic description of a language is to remove this risk: any compiler or interpreter for L must realise the semantics of L . (A formal statement of this notion is given in Section 3.3.) Thus the semantics should mediate the bridge between programmers using L and implementers of language L so that the former group expect and the latter group deliver the same meaning. Faced with the large challenge of getting a computer to behave in an intended way, the semantics provides a “division of concerns”: programs should be written on the assumption of the recorded semantics and processors for the language should reflect that semantics. There are also other interesting questions. Consider the following alternative program: begin integer n, fn;
.. . fn : = 1; while n 6= 0 do fn : = fn ∗ n; n : = n − 1;
od end
This also has the effect of placing in fn the factorial of the initial value of n. If the earlier program included n : = 0 after the while loop, the two programs might
1.5 Approaches to describing semantics
9
be considered to be equivalent in some sense. Making such notions of equivalence precise is another semantic issue. (These issues of what it means for programs to be equivalent become clearer with larger programs such as those for sorting: many sorting algorithms have been devised (see [Knu73]) but two such programs can be unrecognisably different from each other.) It is time to discuss different approaches to recording the semantics of programming languages. If one knows one natural language, another language might be explained by translating it into the known language (although nuances and beauty might be lost in translation). So an option for giving the semantics of language L is to record a translation into a known language. This is, of course, exactly what a compiler does but unfortunately machine code is not an attractive lingua Franca for recording semantics. The problem with machine code as a vehicle for explaining semantics to a human being is that its own semantics might be opaque. Of more concern –and a clearer criteria– is that machine code is not tractable in the sense that there is no natural way of reasoning about texts in machine language. An approach that was originally called mathematical semantics, but is now commonly referred to as denotational semantics, defines semantics by mapping into mathematically tractable objects. This approach is described in Section 7.1 and commonly uses mathematical functions from states to states as denotations. Denotational semantics is mathematically elegant but requires some fairly sophisticated mathematical concepts in order to describe programming languages of the sort that are used to build real applications. Taking a cue from the class of language tools that are known as interpreters points to a more direct approach to providing the semantics of a programming language. A so-called operational semantics provides a way of taking a program and a starting state and computing its final state. Again, it could be said that this is exactly what an interpreter written in machine code does but the key to a clear operational semantics is to write an abstract interpreter in a limited meta-language. Such meta-languages should themselves be tractable so that it is straightforward to reason about the description. The majority of chapters in this book employ operational methods of semantic description and Section 1.6 describes the meta-language used throughout this book. The notion of a state is central to both denotational and operational semantics. The two approaches share the need to choose states that are as abstract as possible; both can therefore be viewed as offering model-oriented language descriptions. In contrast, it is possible to consider property-oriented approaches to semantics. One obvious way to convey information about the meaning of complex features of a programming language is to describe their meaning by relating them to simpler constructs in the same language. Early attempts to use this idea formally include [Bek64] and it can be seen in less formal descriptions in, for example, the way that [BBG+ 63] describes the for statement of ALGOL 60. This approach can be compared to the way in which a mono-lingual dictionary defines the more arcane words in terms of a limited subset of the vocabulary of a natural language. There is of course an inherent circularity about this approach in that a reader must be able to understand some expressions in a language but it is certainly possible to explain less familiar features such as “call by name” in ALGOL 60 by their translation to
10
1 Programming languages and their description
simpler subsets of a language. Notice that equivalences provide semantic knowledge without any notion of the state of a computation having to be described A more radical –and now more widely used– approach that can be viewed as giving a property-oriented semantics is to provide ways of of reasoning about programs in a language L . The idea of adding assertions to programs has a surprisingly long history (see Section 7.3.1) but the really important step was made by Tony Hoare who introduced an approach known as axiomatic semantics. Essentially, this approach provides rules of inference that facilitate proofs about programs and their specifications. Section 7.3 goes into some detail on such axiomatic approaches. The tasks of reasoning about programs in a language L and justifying the correctness of a translator for L are distinguished above. It can be argued that one or another approach to semantics is better suited to the different tasks (this topic is reviewed in Section 7.5). But it should be clear that, if more than one approach is used, they must be shown to be coherent in the sense that they fix the same semantics. The bulk of this book uses operational semantic descriptions because the aim is to equip the reader with the ability to describe useful programming languages. (Chapter 7 reviews the other potential approaches.) The above division of approaches to giving semantics can even be discerned when looking at informal descriptions of programming languages that are contained in most textbooks, although it is also true that most modern textbooks put heavy reliance on examples that the reader has to somehow “morph” to solve their actual problem.15 What then is the case for promoting formality in descriptions of programming languages? Although main-line programming languages (and –today– their indispensable libraries) are orders of magnitude larger than the formal metalanguage described in Section 1.6, the desirability of them being as tractable as practical carries over to programming languages. For example, programmers are unlikely to use language constructs safely if a phrase in a language has unexpectedly different meanings depending on the context in which the phrase is embedded. A clear test of a language being tractable is the ease of writing out formal rules of inference for the language. Furthermore, if users can be given a clear mental model of the state of a programming language, they can understand the meaning of each construct by seeing how it affects that state. With both considerations (properties and models of a language), one way of testing clarity is to be able to record the ideas formally. As with any piece of knowledge, such a record makes it available for review and subsequent study. A compiler or interpreter for language L is itself “formal” — but its construction costs far more than does a formal language description and a compiler is certainly not tractable in the sense that it would be convenient to reason from the compiler about programs in L .16 15
This “use case” approach applies to many physical objects and their manuals fail to give the user any picture of the internal state of the object with which they are trying to interact. 16 A more subtle facet of the question of relying on a compiler comes from the thorny issue of non-determinism: most HLLs actually permit a range of results (e.g. because of concurrency or to leave implementors some flexibility): even if a user is interested in the result of a program on
1.6 A meta-language
11
As indicated in Section 1.2, a premature leap from example programs for a new language to beginning to write a compiler for the language does not constitute sound engineering. A process of experimenting with language choices within a formal model can iron out many potential consistencies more quickly and far less expensively. The formal model can also serve as the starting point for a systematic design process for compilers once the language has been thought out. Chapter 11 lists a number of formal descriptions of extant programming languages. One possibility opened up by making these descriptions formal is to provide tools that use them. There is quite good support for reasoning about program correctness from various forms of property-oriented semantics, although this normally applies to restricted subsets of major languages such as SPARK-Ada. There are far more tools based on formal ideas that check specific properties of programs in languages (e.g. possible dereferencing of null pointers, deadlock detection). Having listed the technical criteria of being able to reason about programs written in L and acting as a base for compilers for L , there remains a bigger objective. The main aim of this book is to ensure that formal models are more widely used in the design of future programming languages. It is to be regretted that most of the current main-line programming languages have semantic traps that surprise programmers and/or complicate the task of providing compilers for the language. Such anomalies can be detected early by writing formal semantic descriptions before tackling the far more costly job of programming a compiler. What then is the impediment to writing, for example, an operational semantics of a language? Section 1.6 introduces a meta-language that should not prove difficult for any programmer to understand. With that one meta-language, he or she can describe any (imperative) programming language. Chapter 3 covers the basic method of writing an operational semantics and subsequent chapters consider new challenges and eight known techniques for coping with them. Experience of teaching these methods over the years suggests that the real hurdle is learning to employ the right degree of abstraction in tackling language descriptions. That can probably only be learned by looking at examples, and Chapters 3–9 provide many such examples.
1.6 A meta-language The term object language can be used to refer to the language whose syntactic and semantic description is to be undertaken17 and the script letter L is used when making a general point rather than discussing a specific object language such as FORTRAN or Java. In contrast, languages that are used in the description of an object language are referred to as meta-languages. a single input item, knowing that the result is as required in one implementation of L does not guarantee that the program will give the same result on a different correct implementation of L . 17 The qualification “object” indicates that it is the object of study; this is not to be confused with “object code”, which is what a translator generates when it compiles a source program.
12
1 Programming languages and their description
A meta-language is itself a formal language. To serve the purpose of describing a range of programming languages: • A useful meta-language must be capable of describing a large class of object languages — it must be “rich enough” for its intended task. • A meta-language should be far smaller than most programming languages. • Crucially, any meta-language should be tractable in the sense that there are clear rules for reasoning about its expressions. The meta-language used in this book is derived from (but is a subset of) the notation used in the Vienna Development Method (VDM). A brief outline of the origins of VDM is given in Section 3.3. The reader is assumed to be familiar with set notation;18 Figure 1.1 indicates the symbols used as logical operators in VDM. Fortunately, textbooks use fairly uniform set notation and the only points worth mentioning from Figure 1.1 are: • rather than use a special symbol (φ ), VDM uses { } for the empty set because it is just a special case of an enumerated set with no elements; • the type of all finite subsets of some type X is written X-set; • the set of all (finite or infinite) subsets is written (conventionally) as P(X); • the name of the set of natural numbers is N = {0, 1, · · ·}, the set of all integers Z = {· · · , −1, 0, 1, · · ·} and the set of Boolean values B = {true, false}. T-set {t1 , t2 , . . . , tn } {} B N Z {x ∈ S | p(x)} {i, · · · , j} t∈S t∈ /S S1 ⊆ S2 S1 ⊂ S2 S1 ∩ S 2 S1 ∪ S 2 S1 − S 2 card S P(X)
all finite subsets of T set enumeration empty set {true, false} {0, · · ·} {· · · , −1, 0, · · ·} set comprehension subset of integers (from i to j inclusive) set membership ¬ (t ∈ S) set containment (subset of) strict set containment set intersection set union set difference cardinality (size) of a set power set
Fig. 1.1 Set notation Notation for VDM sequences is introduced in Section 2.2 (see Figure 2.2) and maps in Section 3.1 (Figure 3.1) when they are needed. 18
Many useful textbooks exist on the notations of discrete mathematics including [Gro09].
1.6 A meta-language
13
A call for abstraction pervades work on modelling computer systems in general and in giving the semantics to programming languages in particular: as such, “abstraction” is a Leitmotiv of this book. For example, if the order of a collection of objects has no influence on the semantics of a language, it is far better to employ a set than a sequence. It might be possible to express the semantics in terms of a sequence but the description will have to cope with messy details; far more tellingly, any reader of the formal description is left to determine whether the order does in fact influence the semantics by reading every line of the description. In contrast, as soon as a reader sees that something is modelled as a set, it is abundantly clear that the order of its elements can have no semantic effect. B ¬E E 1 ∧ E2 E1 ∨ E2 E 1
Fig. 1.2 Logic symbols The symbols used for logic (technically, first-order predicate calculus) vary between textbooks and Figure 1.2 indicates the symbols used in this book. It is common to set out proof rules that define valid deductions about the logical operators. Examples that are assumed below include a definition of implication: ⇒ -I
¬ E1 ∨ E2 E1 ⇒ E2
equivalence as bi-implication: E1 ⇒ E2 E ⇒ E1 ⇔ 2 E1 ⇔ E2
What can be thought of as a definition of disjunction in terms of conjunction is characterised by the bi-directional rule: de-Morgan-1
¬ (¬ E1 ∧ ¬ E2 ) E1 ∨ E2
is one of de Morgan’s laws; another is: de-Morgan-2
¬ (E1 ∨ E2 ) ¬ E1 ∧ ¬ E2
14
1 Programming languages and their description
There is, however, an interesting extension in VDM to conventional logic in that a Logic of Partial Functions (LPF) is used. The reader should have no difficulty with an innocent reading of the propositional connectives listed in Figure 1.2 but LPF offers a principled extension to cope with the situation where operands can be “undefined”. An obvious example is the proposition 5/0 = 0 which –since division by zero means that the arithmetic term (5/0) fails to denote a number– is taken as “failing to denote a truth value”. Far more useful examples of partial expressions arise in both specifications and reasoning about such documents and attention is drawn to examples in later chapters — but it is important that the obvious interpretation of such logical expressions works; more detail is given in Section 1.7.3.
1.7 Further material 1.7.1 Further reading An excellent source of material on the history of programming languages themselves is the series of conferences on History of Programming Languages [Wex81, BG96, RH07] — the first of these is a real gem. The PL/I language is mentioned several times throughout the current book and some of these mentions indicate the ways in which the interactions between language design decisions can result in confusion. George Radin’s [Rad81] account of the creation of PL/I throws some light on how committee compromises can complicate a language. However, the use of PL/I to illustrate excesses in language design comes more from the current author’s familiarity with PL/I than as a claim that it is the language that suffers the worst interaction of language features. Another way to read the words of the masters is to access the Turing Award talks — printed versions include [Knu74a, Hoa74b, Sco77, Bac78, Wir85, Mil93, Ive07]; further wise words are in [Hoa81, Wir67]; specific language discussions include: [Knu67], [PJ03] and [Hut16]. An extremely useful book on concepts in programming languages is [Sco00] and it could be useful to read Michael Scott’s book alongside the current text. Another book that covers a range of languages and provides useful historical background is [Seb16]. References on approaches to the formal description of programming languages are given in the closing sections of later chapters of the current book. There are actually two aspects of VDM itself: it offers a formal development approach for any form of program and it has specific support for the denotational description of programming language semantics. The current book uses only the simplest common features of these two aspects. A general book on VDM is [Jon90] and the ISO standard for VDM is described and referenced in [PL92]. The semantics of full VDM is complicated by the fact that it was designed to write denotational semantic descriptions (see Chapter 7); the subset used here for operational semantics should be clear.
1.7 Further material
15
1.7.2 Classes of languages This book tackles the semantics of imperative languages such as ALGOL and Java. Descriptions of functional and logic programming languages (e.g. Scheme [ASS85], Prolog [SS86]) would use the same ideas but it is worth saying a few words about the differences. Rather than design algorithmic solutions to solve problems, it would be attractive to write logical assertions and have a program find solutions. Even in the restricted world of mathematics or logic19 this is impossible in general, but Prolog-style logic programming moves in this direction. Unfortunately, in order to circumvent the problems of massive search spaces, imperative features have been included in Prolog itself [SS86]. An intermediate approach between fully imperative and logic languages is the class of functional programming languages. In the extreme, such languages avoid all imperative constructs such as assignment. This makes it possible to reason about functional programs as though they are mathematical functions. Among other things, this avoids the need for a Hoare-style logic. Most functional languages actually offer some limited ability to “change the world”.
1.7.3 Logic of Partial Functions As indicated above, the logic used in the VDM meta-language is an extension of standard first-order predicate calculus (see for example [MS13]). The need for a Logic of Partial Functions (LPF) comes from the frequency with which reasonable expressions in specifications involve terms that can fail to denote a value. Writing the specific expression 5/0 = 0 would be bizarre but the following is more reasonable:20 i > 0 ∧ j 6= 0 ⇒ i/j ≤ j
where the troublesome i/0 is one instance of i/j. Far more useful examples arise in both specifications and reasoning about such documents and attention is drawn to examples in later chapters, but it is important that the obvious interpretation of such logical expressions works — for example, while the antecedent of the preceding implication does not guard the consequent in classical logic, the implication in LPF has the desired effect. For example, the head (first element) of an empty list is undefined and applying a mapping (finite function) to an element not in its domain fails to denote a value. LPF offers obvious extensions to the normal meanings of propositional operators as in Figure 1.3 where an operand that fails to denote a truth value is marked as an asterisk. 19
It is by no means obvious how to develop or validate specifications of systems that interface to the physical world. Some work in this area is described in [JHJ07, JGK+ 15, BHJ20]. 20 Such an expression is better written with type constraints and these are used below but do not cover all cases.
16
1 Programming languages and their description
The reason that the reader should have no difficulty with these extended meanings is that key properties such as the symmetry of conjunction and disjunction hold: ∨ -sym
E1 ∨ E2 E2 ∨ E1
∧-sym
E1 ∧ E2 E2 ∧ E1
In fact, the key difference with conventional propositional logic is that the so-called “law of the excluded middle” (P ∨ ¬ P) only holds in LPF where it is established that P denotes a truth value. a true ∗ false true ∗ false true ∗ false
b ¬a true false true ∗ true true ∗ ∗ ∗ false false false
a∧b true ∗ false ∗ ∗ false false false false
a∨b a ⇒ b a ⇔ b true true true true true ∗ true true false true ∗ ∗ ∗ ∗ ∗ ∗ true ∗ true false false ∗ ∗ ∗ false true true
Fig. 1.3 LPF extensions of propositional operators Quantifiers are in no way mysterious. Over finite sets, they are just convenient abbreviations: (∃i ∈ {1, · · · , 3} · p(i)) ⇔ (p(1) ∨ p(2) ∨ p(3)) (∀i ∈ {1, · · · , 3} · p(i)) ⇔ (p(1) ∧ p(2) ∧ p(3))
Even the infinite cases should present no difficulty: ∀i ∈ N · ∃j ∈ N · i < j
With all of the quantifiers, the scope is assumed to extend as far as possible to the right; parentheses are not required for this case but they can be used to define different grouping. This leaves only the end cases with the empty range for the bound variable to note: ∃i ∈ { } · p(i) ⇔ false ∀i ∈ { } · p(i) ⇔ true
which are obviously related from the quantifier versions of de Morgan’s laws: de-Morgan-3
¬ (∃x · p(x)) ∀x · ¬ p(x)
1.7 Further material
de-Morgan-4
17
¬ (∀x · p(x)) ∃x · ¬ p(x)
There are other logics that attempt to handle terms that fail to denote a value and a comparison is given in [CJ91]. Details of the specific LPF used in VDM are addressed in [BCJ84, JM94]. Kleene (in [Kle52]) attributes the propositional operator definitions in Figure 1.3 to [Łuk20]. Other papers that address the issue of undefinedness include [Kol76, Bla86, KTB88, Bli88].
Chapter 2
Delimiting a language
The body of this book addresses the task of describing –or designing– the semantics of programming languages. This chapter prepares the way for that task by introducing the division between syntax and semantics. A tiny language is introduced which, because it has few surprises, can be used to explain the description method. As more complicated language features are considered in later chapters of this book, they are treated independently as far as is possible (e.g. input/output is modelled in Section 4.3.1 and a similar extension could be made to the concurrent object-oriented language in Chapter 9). For an extant language,1 it is necessary to delimit the set of allowed programs before their meaning can be discussed: Section 2.1 outlines “concrete syntax” notations for fixing the textual strings of an object language (such strings include various marks that make it possible to parse the strings); Section 2.2 addresses a way of defining the “abstract syntax” of programs without the symbols needed to facilitate parsing. This chapter also covers most of the VDM notation used in the current book. The topic of semantics is first tackled in Chapter 3 and runs throughout the remainder of this book.
2.1 Concrete syntax Any interesting language allows an unbounded number of different “utterances”. It is therefore not possible to enumerate all of the allowable strings of characters in a language. It is not difficult to devise meta-languages with which to define the set of plausible programs of a language. Many of the notations used are close relatives of Backus 1
A Leitmotiv of the remaining chapters of this book is the value of using formal models in the design of languages but this topic is postponed until the basic description tools have been covered in the current chapter.
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
19
20
2 Delimiting a language
Normal Form2 (also known as Backus-Naur Form).3 BNF was used in [BBG+ 60] to define the concrete syntax of ALGOL 60. A slight elaboration of BNF is Niklaus Wirth’s Extended BNF — EBNF is described in [Wir77]. Despite claiming that devising syntactic meta-languages is relatively simple, the first “challenge” is: Challenge I: Delimiting a language (concrete representation) How can the set of valid strings of an object language be delimited? Syntax is about content and “concrete syntax” concerns the linear sequence of characters that are allowed in the object language. A well-designed concrete syntax can also suggest a structure that points to the semantics of the object language. Crucially, a concrete syntax must be devised that makes it possible to “parse” strings4 but it is easier to consider the generation of strings first. Starting with a simple example from natural language, a grammar can be written as a set of BNF rules: hSimpleSentencei : : = hPronounihVerbihSpreadi. hPronouni : : = I | You hVerbi : : = like | hate hSpreadi : : = Marmite | Peanut Butter
This defines a set of eight sentences: I like Marmite., I like Peanut Butter., I hate Marmite., I hate Peanut Butter., You like Marmite., You like Peanut Butter., You hate Marmite., You hate Peanut Butter. Looking more carefully at the BNF rules, each rule starts with a “non-terminal”, which is marked by enclosure in h· · ·i; the non-terminal that is being defined is separated from its definition by “: : =”; this is followed by the definition, which is intended to fix the set of possible strings; all but the first rule above list options separated by a vertical bar (|). Such a definition can be made of a sequence of items that can be either “terminal” strings or non-terminal symbols that should be defined in other rules. Terminal (in the sense that no further production is needed) symbols just stand for themselves; non-terminal symbols can be replaced by any string that is valid from their production rule. The rules above are not unique in generating the strings — all of the above and more are generated by: hSimpleSenti : : = hWordihWordihWordi. hWordi : : = I | You | like | hate | Marmite | Peanut Butter 2
Marking the insight of John Warner Backus (1924–2007) who proposed the notation. Acknowledging Peter Naur’s (1928–2016) contribution to the development and use of BNF. 4 It is noted below that designing languages that are easy to parse is itself a technical challenge but is not within the scope of the current book — references to this material are given in Section 2.3. 3
2.1 Concrete syntax
21
One way to move from a finite language like hSimpleSenti to languages with an unbounded number of possible strings5 is to use recursion — for example: hParagraphi : : = hSimpleSenti | hParagraphihSimpleSenti
A potential problem can be seen if a pronoun (“He”) that requires a different form of the verb (“likes”) is added to the language. Writing: hPronouni : : = I | You | He hVerbi : : = like | likes | · · ·
can generate the ungrammatical string “He like Marmite.” This could be resolved by splitting hSimpleSentencei but the more general issue of “context dependancy” is addressed below in Section 4.2. Moving to describing the concrete syntax of an example programming language, it has to be recognised that there is a huge variety of syntax styles across the many known programming languages and debates about such stylistic differences often generate more heat than light. Interestingly for the main purpose of this book, most such syntactic argument has no impact at all on semantics. It is for this reason that, from Section 2.2 onwards, semantic discussions are based on abstract syntaxes. When however example programs are presented, concrete syntax is used. The stylistic choice here is that a vaguely ALGOL/Pascal flavour of concrete syntax is used for sequential programs and a move towards Java syntax is made for concurrent (object-oriented) languages. For the initial simple language, a complete hProgrami might be bracketed by keywords and contain (yet to be defined) lists of allowed variable names and statements: hProgrami : : = program vars hIdsi: hStmtsi end The design choice recorded in this rule is that the names of variables used in hStmtsi of a hProgrami must be declared in the list of identifiers given after the keyword vars. For now, variables can only contain natural numbers (N) — multiple types are addressed in Chapter 4. Lists of identifiers are separated by commas: hIdsi : : = hIdi [, hIdsi] Here, the square brackets of EBNF are used to show that the bracketed portion can be omitted. This could equally be written as: hIdsi : : = hIdi | hIdi, hIdsi No grammar is given here for hIdi — were this done, it would probably require that the first character was a letter followed by a string of digits or letters.6 5
EBNF provides the alternative of showing that a group of things can be repeated: hParagraphi : : = hSimpleSenti∗
But recursion is stronger than iteration because the former can describe arbitrary nesting such as bracketing in (()()) 6 Some languages limit the length of identifiers.
22
2 Delimiting a language
Statements are separated by semicolons — notice that this syntax eliminates writing a semicolon after the last statement in a list:7 hStmtsi : : = [hStmti [; hStmtsi]]
Statements can be one of three types (for now): hStmti : : = hAssigni | hIfi | hWhilei
The idea that the left- and right-hand sides of hAssigni should be separated by an equality sign is anathema to mathematicians who point out that: x = x+1 is a nonsense. ALGOL 60 uses : = between the left-hand side reference and the expression which is to be evaluated and assigned to that reference:8 hAssigni : : = hIdi : = hArithExpri
There is an interesting parsing issue associated with hIfi — but this is discussed below — for now a closing bracket (fi) is given to complete the conditional statement: hIfi : : = if hRelExpri then hStmtsi [else hStmtsi] fi
This is far from the only way to make a grammar unambiguous but some such device is necessary to ensure that parsing is unique.9 Similarly: hWhilei : : = while hRelExpri do hStmtsi od
Moving on to the definition of the two forms of expression: hArithExpri : : = hBinArithExpri | hNaturalNumberi | hIdi | (hArithExpri)
notice that the fourth option for hArithExpri allows the insertion of parentheses to distinguish the priority of operators between x ∗ y + z and x ∗ (y + z).
7
Writing the semicolon between hStmti is in contrast to terminating every statement with the punctuation symbol (as is done in most (European) languages, where a full stop marks the end of a sentence). Many years ago, Jim Horning reported an experiment in which he compared the number of mistakes made by programmers under each rule: in his experiment, the terminating semicolon was actually the cause of fewer slips than its placing as a separator. 8 Some languages use “←” between the left- and right-hand sides of assignments. 9 Appendix A.1.2 shows how Java disambiguates conditional (and looping) statements. Tony Hoare proposed the more radical idea of writing the test between the two statement sequences: hIfi : : = hStmtsi / hRelExpri . hStmtsi
This proves convenient for showing algebraic properties of conditionals.
2.1 Concrete syntax
23
Notice that hRelExpri offers only a restricted way of obtaining a Boolean value; it would be easy to add a more general form of logical expression to the syntax. The intention is that any given hProgrami is finite (i.e. instead of taking recursive options all of the time, the option to use terminal symbols is eventually chosen) but the set of possible programs is infinite. As with the natural language example above, there are other ways of defining hArithExpri. One option would be to have a single set of strings for hExpri that allowed all four operators. This would define a larger class of strings. Grammars like those above can be used to generate the strings of a language; in fact, a single generator program can easily be written that takes a grammar as input and can generate (random) strings of the given syntax. Christopher Strachey showed that a simple grammar could be used to generate English paragraphs that could pass as love letters and IBM had a project in the 1960s that created random PL/I test cases: “APEX” generated random strings from a grammar of PL/I.10 Using grammars as generators of languages finesses the issue of “ambiguity”. Even writing the deliberately ambiguous grammar: hAmbiguousi : : = a | ahAmbiguousi | hAmbiguousia allows many different ways of generating strings of the letter “a” but their unique generation can be recorded. If however the task is to analyse a string to determine how the rules could be used to generate such a string, the ambiguity issue is serious. Such analysis programs are called parsers and writing general-purpose parsers is a significant challenge.11 Language issue 1: Avoiding syntactic ambiguity The concrete syntax of a programming language should be unambiguous. Consider a grammar for conditional statements as in ALGOL 60: hA-Ifi : : = if hRelExpri then hStmtsi [else hStmtsi] This is intended to permit programs that omit else parts of conditional statements but it can generate the ambiguous string: if a = b then if c = d then x: = 1 else x: = 2 in two distinct ways. Indentation can be used to suggest these options: if a = b then if c = d then x: = 1 else x: = 2 10
In fact, the grammars for APEX used a “dynamic syntax” to cope with context dependancies — see Section 4.4 for references. 11 This can be compared to the fact that determining the factors of a composite number is far harder than multiplying numbers together.
24
2 Delimiting a language
and: if a = b then if c = d then x: = 1 else x: = 2 and these have different meanings.12 This is why some form of bracketing is added to the syntax of conditionals — here it is the closing keyword fi but it can equally be (as in Java) some explicit bracketing around the two sequences of statements contained in each conditional statement — see Appendix A.1.2. The front end of a compiler or interpreter for a language has to: • decompose the sequence of characters into distinct “tokens” (keywords, identifiers and constants); • create a parse tree that shows how the sequence of tokens can be generated from the grammar of the language (and produce, hopefully useful, diagnostics if the input string is not valid with respect to the grammar of the language). The grammar of a language must not be ambiguous in such a way that different generating sequences have different meanings. Language issue 2: Defining priority of operators The concrete syntax given above for hArithExpri makes no attempt to show the priority of multiplication and addition; in for example [BBG+ 63] the syntax makes clear that the potentially ambiguous a + b ∗ c should be parsed as though a + (b ∗ c) had been written; this is achieved by adding extra phrase classes for htermi/hfactori. An alternative approach to disambiguate expressions is by adopting a canonical linearisation of a tree such as that known as “reverse Polish” notation (e.g. +a ∗ bc). There is a further issue when considering the parsing problem for compilers and that is the efficiency of parsing. The PL/I language designers declined to reserve the keywords of the language (and there were rather a lot!) and had no way of distinguishing keywords from identifiers. This has the consequence that a sequence of characters that looks as though it might be a keyword might actually be the start of an assignment to a variable of that name. Such ambiguities in a grammar complicate the design of its processors and damage their performance (this is further complicated in PL/I because declarations are not required to be at the start of a block). Language issue 3: Avoiding syntactic inefficiency The concrete syntax of a programming language should be such that parsing can be efficient — this amounts to minimising backtracking. 12
This is sometimes referred to as the “dangling else” problem. With his typical pragmatism, Niklaus Wirth simply ordained that, in Pascal, the else clause related to the closest if.
2.2 Abstract syntax
25
Section 2.2 shows how the use of an “abstract syntax” avoids these complications, so they are left aside here. There are however many concrete syntax issues for language designers to resolve. Language issue 4: Style of syntax Many issues have to be resolved in designing the style of the concrete syntax of a programming language — a few examples are: • are blanks significant? • are keywords to be reserved or distinguished by some special markers? • how are comments to be distinguished from the intended program in which they occur? • should two-dimensional layout have an effect on parsing? The payoffs from studying grammars include: • General-purpose parsing algorithms can be written that handle arbitrary grammars — as distinct from writing a hand-crafted parser for each language. • Parser generators take as input a concrete syntax and produce a parser that yields efficient parsing speeds. • Tools can be constructed that transform grammars into ones that are equivalent but admit more efficient parsing. • The important topic of error recovery in parsing can be studied systematically. Syntactic meta-languages used to define concrete syntax (such as BNF) are of course languages themselves and, as such, have their own syntax and semantics. Without going into a fully formal definition, the semantics in terms of the set of strings is outlined above. To make the point about the syntax of a syntactic metalanguage, Niklaus Wirth’s “railroad diagrams” can define exactly the same sets — an example is given in Figure 2.1 and they are employed in the description of the Modula-3 language. One important issue is being postponed and that is that the BNF syntax notation cannot describe the requirement that the statements in a program can only use identifiers that have been declared; Section 4.2 describes ways to handle such “context dependancies”.
2.2 Abstract syntax The concrete syntax notations covered in Section 2.1 give both a way of producing the texts of programs and a way of parsing such texts. But, even for simple languages, such a concrete syntax can be “fussy” in that it is concerned with details which make it possible to parse strings (e.g. the commas, semicolons, keywords and those marks that serve to bracket strings that occur in recursive definitions). Peter Landin used the lovely term “syntactic sugar” (which can be sprinkled on the essential content).
26
2 Delimiting a language
ArithExpr
ArithExpr
ArithExpr
BinArithOperator
NaturalNumber
Id
(
ArithExpr
)
Fig. 2.1 Concrete syntax using “railroad diagrams” For a programming language such as C or Java, there are many different ways of writing semantically indistinguishable programs. In PL/I, variable declarations can be placed anywhere in a block but their position has no meaning; most languages allow comments that have no influence on semantics. It should thus be clear that the concrete syntax is not a convenient basis for semantic descriptions. The concrete syntax does have to be defined but, since the syntactic variants have identical semantics, basing a semantic description on a concrete syntax clouds it in an unnecessary way. Here, the first big dose of abstraction is deployed and all of the subsequent work is based on an “abstract syntax”. Challenge II: Delimiting the abstract content of a language Semantic descriptions mostly ignore textual details of programs. Furthermore, an abstract syntax can make it clear what cannot happen (e.g. order of declaration of identifiers cannot have an influence because the abstract declaration contains a set). How can the abstract syntax of a language be defined? It is again useful to relate what is done in a language description to the tools that process programs. The parsing phase of a compiler or interpreter generates (a representation of) a tree form of the program to be compiled. An abstract syntax defines a class of objects that retain as little superfluous information as possible. In most cases, such objects are tree-like in that they are (nested) VDM composite objects. To achieve abstraction, sets, sequences and maps are used whenever appropriate. This section reviews more VDM notation. The concepts will almost certainly be familiar to any reader. The VDM notation for sets is listed in the previous chap-
2.2 Abstract syntax
27
ter (Figure 1.1). Sequences (sometimes referred to as lists) provide another useful abstraction whose operators are listed in Figure 2.2. T∗ len s [t1 , t2 , . . . , tn ] [] s1 y s2 hd s tl s inds s elems s
type defining finite sequences (elements are of type T) length of a sequence sequence given by enumeration the empty sequence sequence concatenation the element at the head of a sequence the sequence comprising the tail of a sequence the set of indexes to a sequence the set of elements in a sequence
Fig. 2.2 Sequence notation Just as with X-set, instances of X ∗ are always finite. Notice that the head of a sequence is the first element, thus hd [a, b] = a, whereas the tail of a sequence is a sequence without its first element: tl [a, b] = [b]. Either of these operators applied to the empty sequence is undefined and this serves as a reminder of VDM’s use of a Logic of Partial Functions — see Section 1.6. The indexes of a list are the set of natural numbers that can be used as indexes: inds s = {1, · · · , len s}
and that makes the elements of a list easy to define: elems s = {s(i) | i ∈ inds s}
Selecting an indexed element of a sequence uses a notation identical to function application: s(i). Notice that s(len s + 1) or s(0) again both fail to denote values. Consider the task of checking that a list does not contain the same element more than once. In VDM, it is standard to be explicit about the type of a function which is recorded in its signature: uniquel: X ∗ → B
A definition of uniquel that follows the verbal description above is: uniquel(s) 4 ∀i, j ∈ inds s · i = 6 j ⇒ s(i) 6= s(j) The 4 is an equality but is used for a definition rather than a simple assertion such as 1 + 1 = 2. An equivalent definition of uniquel would be uniquel : X ∗ → B
uniquel(s) 4 len s = card elems s
28
2 Delimiting a language
The definition can even be written recursively to illustrate that VDM function notation allows both case constructs and conditionals: uniquel : X ∗ → B
uniquel(s) 4 cases s of [] → true [hd] y rest → if hd ∈ elems rest
then false else uniquel(rest) fi
end
Functions such as uniquel that yield Boolean results are referred to as predicates. The concept of records is present in many programming languages (PL/I called them structures); its use in meta-languages dates back to John McCarthy’s [McC66]. In VDM these sources are pulled together to make several facets automatic. An example record class is defined as follows: Example :: field-1 : X field-2 : Y Such a record definition automatically defines a function mk-R for record type R — for building objects of Example from values of type X and Y: mk-Example: X × Y → Example The set of all Example objects is: Example = {mk-Example(x, y) | x ∈ X ∧ y ∈ Y}
These implicit mk- functions can be thought of as labelling the values of the objects such that mk-A(· · ·) can never yield the same values as mk-B(· · ·) — thus the sets A and B are disjoint. The record definition also automatically defines selectors (which are written as suffixes) so that: e = mk-Example(x, y) e.field-1 = x e.field-2 = y (Section 3.1 shows how the use of the mk- constructors in parameter lists can almost eliminate the need to use selectors explicitly.) Record definitions often occur in places where recursion can lead back to the record type; here again, the intention is that only finite instances are considered. The use of | to define unions of types is carried over from concrete syntax notation. In abstract syntax: [X] = X | {nil}
2.2 Abstract syntax
29
where nil is a unique elementary object. Notice that, in contrast to record values where the constructor makes the set unique, equality rules are genuine set equalities. So with: X=A|B Y =B|C
it is not true that X and Y are disjoint. | N | Id
Fig. 2.3 Abstract syntax of SimpleProgram The abstract syntax of SimpleProgram is given in Figure 2.3. Remember that Stmt∗ includes empty sequences — this obviates the need for a special skip statement in the language. So, if 1 ∈ N, i ∈ Id then: mk-BinArithExpr(i, P LUS, 1) ∈ ArithExpr mk-RelExpr(i, L ESS T HAN E Q, 1) ∈ RelExpr
and then with:
s1 = mk-If (mk-RelExpr(i, L ESS T HAN E Q, 1), [mk-Assign(i, mk-BinArithExpr(i, T IMES, 1))], [mk-Assign(i, i)]) s1 ∈ Stmt
And, finally
mk-SimpleProgram({i}, [s1 ]) ∈ Program
30
2 Delimiting a language
Writing out such objects is clearly longwinded for actual programs so example programs will normally be given in (some arbitrary) concrete syntax. The case for using abstract syntax as a basis for semantic descriptions becomes ever clearer as the languages covered in later chapters become more realistic. Looking at Figure 2.3, notice that: • the fact that the vars component of SimpleProgram is defined as a set (of Id) makes it immediately clear that their order has no semantic significance;13 • Stmt is defined recursively (via If and While); • the concrete syntax markings in hIfi statements are no longer required because the record nesting resolves ambiguity; • ArithExpr is also defined recursively; • the nesting of records fixes the tree structure of expressions, obviating the need for parentheses; • the use of the mathematical set N as an option for ArithExpr is reasonable because there is a simple finite representation for any natural number (more care would be needed if an abstract syntax used real numbers (R)); • P LUS etc. are constants that stand for unit sets containing a string. Comparison of Figure 2.3 with the concrete syntax developed in Section 2.1 is useful but, on such a simple language, shows only limited progress; the difference between the abstract and concrete syntaxes of a language like Java or PL/I is much more marked because they offer many alternative ways of writing semantically identical programs. Another advantage of basing semantic descriptions on abstract syntaxes is that it is possible to envisage programs being printed in differing concrete forms (e.g. the original FORTRAN syntax for DO statements was built around coding pads and statement numbers but it could be generated from an abstract syntax that covers the for of ALGOL 60). The description is, in a sense, getting closer to the underlying “language concept”. Christopher Strachey’s observation was: “one should work out what one wants to say before fixing on how to say it”. Notice that the problem identified with concrete syntaxes that there is no constraint that the statements in a SimpleProgram should only use identifiers that are declared in its vars part also pertains to the abstract syntax. A reader who objects that the declaration of variable names in the vars part of SimpleProgram is not being used for checking is asked to remain patient until he or she reaches Section 4.2. There is however an important language issue: 13
The ECMA/ANSI PL/I standard [ANS76] eschewed the use of sets — its authors arguing that sequences would be more familiar to programmers. Unfortunately this leaves the reader with no choice but to read hundreds of pages to determine whether the order of a sequence actually has any influence on the semantics. If not, this fact could have been made completely obvious by employing a set.
2.3 Further material
31
Language issue 5: Constructive redundancy Mistyping an identifier in a language with no declarations can result in it being treated as the name of a distinct variable. Where possible, a simple typo should not make a program execute wrongly — simple mistakes should be detected by a compiler before execution is undertaken. One form of redundancy is that all variable names should be listed so that any undeclared names can be identified as errors. Early versions of FORTRAN had no (redundant) variable declarations — the type (integer or real) of a variable was determined by the first letter of its identifier. Furthermore, blanks had no influence on strings punched as programs. A valid FORTRAN DO statement might be written: DO 15 I = 1,100 where 15 is the statement number of the end of the loop, “I” is the intended control variable, 1 its initial value and 100 the intended final value of the iteration. Unfortunately, given the above decisions on FORTRAN, typing the comma as a full stop allowed the interpretation of an assignment of a floating point number to an (undeclared) variable (i.e. DO15I = 1.100). The claim that this resulted in the loss of a Mariner space probe has been discredited but the fact remains that lack of redundancy is extremely perilous.
2.3 Further material Projects Even at this stage where only syntax description is covered, there are many projects that the reader could pursue: 1. Many programming languages include some form of iterative for statement; an example in ALGOL 60 is: for j: = I + G, L, 1 step 1 until N, C + D do A[k, j]: = B[k, j]
A concrete syntax can be found in [BBG+ 60] or [BBG+ 63]; write an appropriate abstract syntax. 2. Add a class of Boolean expressions to SimpleProgram. The basic elements of Boolean expressions could be elements of RelExpr; a few simple propositional operators would be for (unary) negation and (binary) disjunction. 3. SimpleProgram has conditional statements; it is easy to envisage conditional expressions such as: x := 5 + if a = b then 2 else 3 fi It is even possible to have conditional references as in: if a = b then x else y fi := 5
32
2 Delimiting a language
Write appropriate abstract syntax definitions. 4. Languages such as C/C++/Java allow constructs of the form x + +; as statements such increments might be viewed as useful clues to a compiler; written within expressions their semantics causes side effects. Discuss how this complicates even the abstract syntax of the language.14 More ambitiously, readers could find the concrete syntax of a favourite programming language and write out a complete abstract syntax for that language. A word of warning is in order: the reader might have to hunt beyond books bearing titles such as “Programming in L ”. Unfortunately, it is now far less common to find BNF in language textbooks; they tend to limit themselves to covering “use cases” of programs that one has to perturb. The best source of a formal concrete syntax is likely to be the standard for L .
Further reading Although the next chapter turns to the main subject of this book (i.e. semantics), there are many interesting publications on syntax. A good starting point might be [MP99]; the subject of efficient general-purpose parsing is covered in [SJv19].
Historical notes The argument that it is better to base a semantic description on an abstract –rather than a concrete– syntax is set out in [McC66]. McCarthy defined the constructors and selectors explicitly; the group in IBM Lab Vienna moved to these functions being implicit as soon as the record description is given. Oxford researchers often based the semantic description of (small) languages on a concrete syntax. Where they moved closer to an abstract syntax, they used a disjoint union operator rather than records with constructor functions.
14
More exotic forms might tax the understanding of someone who has to maintain a program containing statements such as x := + +x ∗ x + +.
Chapter 3
Operational semantics
Chapter 2 shows how to delimit texts in a language by using a syntax meta-language. This chapter moves on to the problem of fixing the meaning of texts in languages by using a semantic meta-language. Three main object languages are described formally in subsequent chapters. In order to introduce the meta-language involved, a very simple language is described in this chapter. This initial language has only one type of variable and a limited repertoire of statements. An initial approach to describing the semantics of deterministic languages is contained in Section 3.1; this is enriched, in Section 3.2, to the style used to tackle the description of all of the remaining language concepts in the book. This enriched style copes with non-determinism in languages and the need is illustrated by the inclusion of a non-deterministic iterative statement.
3.1 Operational semantics The presentation of a semantic description should be related closely to the abstract syntax of the language: in this case SimpleProgram, whose syntax is given in Figure 2.3. This leaves the question of the order in which it is easiest to read a semantic description.1 Rather than start from the top of the grammar it is easier to see what needs doing in the semantic description by working first on the expression constructs and working up to the program level. Challenge III: Recording semantics (deterministic languages) How can the semantics of a deterministic language be recorded? The abstract syntax of ArithExpr in Figure 2.3 is recursive because both operands of BinArithExpr are themselves elements of ArithExpr. Any instance of ArithExpr 1
There is no single best solution to the question of order. The appendices of this book illustrate different orders of presentation of language descriptions. Clearly some interactive tool could escape the linear constraint of printed pages. © Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
33
34
3 Operational semantics
is finite because its leaves are either natural numbers or members of Id. A recursive function that evaluated arithmetic expressions whose leaves were all constants (N) would require only the expression as an argument. The fact that variable names occur in expressions (Id ⊆ ArithExpr) indicates that the meaning of an expression depends on the current values of any such names: the fact that such values are changed by assignment statements is the essence of imperative programming languages. Associations of values with some sort of key are a useful tool in most abstract m models and their types are recorded in VDM as Key −→ Value. As with VDM’s X-set, maps are always finite. The basic operators on maps are given in Figure 3.1. A map value is really just a finite set of pairs:2 the domain of a map (dom m) is the set of values that are the first element of any pair in m; similarly, rng m is the set of values contained as the second element of any pair in m. m
D −→ R dom m rng m m(d) {d1 7→ r1 , d2 7→ r2 , . . . , dn 7→ rn } {7→} {d 7→ f (d) ∈ D × R | p(d)} m1 † m2
finite maps from D to R domain of a map range of a map map application map enumeration empty map map defined by comprehension map overwrite
Fig. 3.1 Map notation (basic operators) Applying a map to a value (m(d)) yields the second element of a pair whose first element is equal to d. Because maps are like functions, there can be at most one such pair. Map application m(d) is undefined if there is no pair in m whose first element is d.3 Further intuition about maps comes from noting that sequences are a special case m of maps: T ∗ can be thought of as N −→ T. The sequence operator inds s gives the domain of the map m and elems s yields its range; selecting an element of a sequence (s(i)) is the same as applying the appropriate map to i. It is somewhat of a tradition in semantic descriptions to use the Greek letter Sigma (Σ) for the set of all states and σ ∈ Σ for specific values; this convention is followed here — thus the state required for SimpleProgram is: m
Σ = Id −→ N
Maps are many:one associations in that different identifiers can map to the same value but any identifier can, in any particular σ ∈ Σ, only map to one value in N. Given this, the signature of the function to evaluate expressions is: eval: ArithExpr × Σ → N 2
General functions (e.g. square) can be used where infinite associations are required; these can be understood to define infinite sets of pairs. 3 Limiting the propagation of such undefined terms is one of the benefits of the “Logic of Partial Functions” (LPF) mentioned in Section 1.5 and described in more detail in Section 1.7.3.
3.1 Operational semantics
35
In any large body of formulae, it is useful to record the types or signatures of functions along with their definition. It would be possible to write a definition of the eval function that evaluates arithmetic expressions with respect to a state as: eval : ArithExpr × Σ → N
eval(e, σ ) 4 if e ∈ BinArithExpr then if e.operator = P LUS then eval(e.operand1, σ ) + eval(e.operand2, σ ) else eval(e.operand1, σ ) ∗ eval(e.operand2, σ ) fi else if e ∈ Id then σ (e) else e fi fi
Map application is written in the same way as function application; thus the value of a given Id (say a) can be accessed in σ by writing σ (a). Thus: eval(mk-BinArithExpr(1, P LUS, i), {i 7→ 2, j 7→ 4}) = 3
The meaning of VDM’s conditional expressions should be clear but the next convention removes the need for them in this context. Even for such a small language, the definition of eval above looks slightly heavy and this style becomes inconvenient for larger descriptions. A pattern matching style was used in early VDM language descriptions and it is much more convenient to split the definition of a function like eval by cases (and use the pattern to define local names for the values of fields) so that the definition of eval is given in cases such as: eval(mk-BinArithExpr(op1, P LUS, op2), σ ) 4 eval(op1, σ ) + eval(op2, σ ) The six cases are given in full in Figure 3.2. This style becomes very natural for larger language description. Formally, the constructors (mk-) and any constants (e.g. P LUS) are defining when a match occurs and any other identifiers become bound to the values present in the specific argument passed. Thus: ∃op1, op2 ∈ ArithExpr · e = mk-BinArithExpr(op1, P LUS, op2) ⇒ eval(e, σ ) = eval(op1, σ ) + eval(op2, σ ) Repeating the example above: eval(mk-BinArithExpr(1, P LUS, i), {i 7→ 2, j 7→ 4}) = eval(1, {i 7→ 2, j 7→ 4}) + eval(i, {i 7→ 2, j 7→ 4})
36
3 Operational semantics 4
eval(mk-BinArithExpr(op1, P LUS, op2), σ )
eval(mk-BinArithExpr(op1, T IMES, op2), σ )
eval(mk-RelExpr(op1, E QUALS, op2), σ )
eval(op1, σ ) + eval(op2, σ )
4
4
eval(mk-RelExpr(op1, L ESS T HAN E Q, op2), σ )
eval(op1, σ ) ∗ eval(op2, σ )
eval(op1, σ ) = eval(op2, σ )
4
eval(op1, σ ) ≤ eval(op2, σ )
e ∈ N ⇒ eval(e, σ ) = e e ∈ Id ⇒ eval(e, σ ) = σ (e)
Fig. 3.2 The function eval given by cases It is a useful by-product of employing the pattern-matching style that the names of the object fields are rarely needed as selectors. So far, eval looks as though it is doing little more than identifying the syntactic mark P LUS with the mathematical notion of addition. This is partly the result of the object language in the current chapter having been kept deliberately simple in order to explain the meta-language. Notice however that the definition could be modified to take the modulus (with respect to the word length of a physical machine) of the mathematical result.4 There is a snag with the final case in Figure 3.2: applying a map to a value that is not in its domain does not denote a value (this is one of the common causes of non-denoting expressions, as mentioned in Section 1.5). The obvious purpose of the vars part of SimpleProgram in Figure 2.3 is that all variables should be declared. It would then be possible for the semantics to initialise all declared variables5 (otherwise variables having values before they are accessed in expressions depends on the flow through assignment statements). The question of context dependancies such as requiring that all identifiers are declared is addressed in Section 4.2; handling issues such as run-time errors is discussed in Section 4.1. Moving to the description of the semantics of statements (Stmt in Figure 2.3), the description can again be split into cases. As mentioned in the general discussion of what constitutes an imperative language, the key state transitions are brought about by assignments with conditional (If ) and iterative (While) statements orchestrating 4
Interestingly, few formal language descriptions do this. The issue of “computer arithmetic” is addressed in [vW66a] and then [Hoa69]; and [Sit74, CH79] tackle proofs about “clean termination” — see Section 7.4. 5 An argument against initialisation is the cost (to a programmer who is careful).
3.1 Operational semantics
37
the order in which the assignments are executed. Given that statements are to change the state, it is reasonable that a semantic function exec should have the signature: exec: Stmt × Σ → Σ
The map operator that can be used to update a map is the overwrite (†) and it is used as follows: exec(mk-Assign(lhs, rhs), σ ) 4 σ † {lhs 7→ eval(rhs, σ )} Thus: exec(mk-Assign(j, mk-BinArithExpr(1, P LUS, i)), {i 7→ 2, j 7→ 4}) = {i 7→ 2, j 7→ 3}
It is important to appreciate that where an identifier is placed affects how it is interpreted: on the right of an assignment, the identifier denotes its value in the current store; on the left, the identifier is the name of the variable to be updated. Christopher Strachey used the terms “right-hand value” and “left-hand value” to distinguish these uses. Having precise terms is useful because there are other places where the distinction is important (e.g. different parameter mechanisms use either the left- or right-hand value — see Section 5.4). This discussion is returned to in Section 5.2. The abstract syntax of Stmt in Figure 2.3 is recursive (via Stmt∗ ) so –just as with eval– the exec semantic function is recursive: it is straightforward to define: exec(mk-If (b, th, el), σ ) 4 if eval(b, σ ) then exec-list(th, σ ) else exec-list(el, σ ) fi
exec(mk-While(b, body), σ ) 4 if eval(b, σ ) then exec(mk-While(b, body), exec-list(body, σ )) else σ fi
The topic of non-termination is discussed in Section 3.3. Finally, an exec-list function can also be defined by cases: exec-list([ ], σ ) 4 σ exec-list([s] y rl, σ ) 4 exec-list(rl, exec(s, σ ))
38
3 Operational semantics
The function exec-list fixes the semantic order of execution of statements as left to right. The way this is done fits exactly the description that the semantics is given by an abstract interpreter. An operational description mimics the operation of an abstract machine. As the object languages being described acquire more features in later chapters, extra constraints on the preferred operational style come into play.
3.2 Structural Operational Semantics The preceding section provides an intuition for operational semantics using functions. As pointed out in Section 3.3, this fits the historical development of ideas. The approach is also convenient with respect to providing tool support for a semantic description: such an operational semantic description could be implemented using a functional programming language such as Haskell [Hut16] or Scala [OS16] or supported by the VDM Toolset.6 This section addresses the challenge of nondeterminism where a language description has to define the range of acceptable answers for acceptable implementations of a language. Challenge IV: Operational semantics (non-determinism) How can an operational semantics describe a non-deterministic language in a way that clearly relates the structure of the semantics to its abstract syntax? The most compelling case for non-determinism in programming languages comes from languages that support concurrent execution: such languages do not fix the relative progress of threads and this, in general, means that a program can legitimately deliver different results even from the same starting state. The task of a language description is to define all allowable outcomes. The way in which relations can cope with non-deterministic language features and notations to describe relations conveniently are introduced in this section and illustrated on the semantic description of a non-deterministic loop construct. Functions have the crucial limitation that they define a unique result for their arguments — thus writing f (x) denotes a unique value. Although uniqueness of result might be thought of as an appealing property for program semantics, there are a number of ways in which non-determinism arises in programming languages: 1. many languages leave the order of expression evaluation up to the implementation — where expression evaluation can give rise to side effects (e.g. because of function calls), a language description has to show the resulting nondeterminism; 2. some (even sequential) languages include specific non-deterministic constructs (e.g. Edsger Dijkstra’s so-called “guarded commands” — see Section 3.3); 3. most importantly, it is difficult to add meaningful concurrency to a language without introducing non-determinism. 6
See
3.2 Structural Operational Semantics
39
The first category is messy and discussion of this language feature is postponed until Section 6.5. Category two can be introduced at will and an example different from “guarded commands” is given below. It is really the third of these categories that is both most interesting and is the strongest reason for switching to a way of presenting semantics that copes naturally with any form of non-determinism. The material on concurrency (including a concurrent object-based language) is addressed in Chapters 8 and 9; here an alternative way is chosen to introduce non-determinism into the simple language.
3.2.1 Relations A mathematical function defines an infinite set of pairs. For example, the function: square : Z → Z
square(i) 4 i ∗ i
gives the set of pairs: {(i, i ∗ i) | i ∈ Z}
Although the VDM type Z-set only includes finite subsets, the mathematical notation for power sets can be used: square ⊆ P(Z × Z) The function exec in the preceding section takes a pair of arguments but it is straightforward to extend the notion of a function to one whose domain is also a set of pairs: exec ⊆ P((Stmt × Σ) × Σ) Although a function applied to different arguments can denote the same value: square(2) = square(−2) = 4 {(2, 4), (−2, 4)} ⊆ square
the set of pairs has the many-to-one property that prohibits two possible results for the same argument value. One part of the specification for sorting mentioned in Chapter 1 is the concept of a permutation. Here, a function cannot be defined that yields any permutation of its inputs because there are many such results. It is, of course, possible to define a function that yields all permutations: permutations: X ∗ → (X ∗ )-set Alternatively this can be formalised by defining a predicate: is-perm = X ∗ × X ∗ → B
40
3 Operational semantics
Similarly, it would be possible to define a non-deterministic statement using a function that yields a set of possible result states: execS: Stmt × Σ → Σ -set but composing such functions becomes notationally messy and relations offer a more natural mathematical extension. Functions are a special case of relations and relations do allow many:many connections. To illustrate the idea, suppose that the top-level notion of a program in Figure 2.3 were changed to contain a set of statements that could be executed in arbitrary order: NonDeterministicProgram :: vars : Id-set body : Stmt-set One program (in an unimportant concrete syntax) might be: vars x:
{x := 1, x := 2}
The valid executions of the body of this program give rise to a many:many relationship between states: {(σ , σ † {x 7→ i}) | i ∈ {1, 2}} The semantics shifts from being given by a function: SimpleProgram × Σ → Σ to a relation between pairs of NonDeterministicProgram × Σ and Σ. This relation will in general be an infinite set.
3.2.2 Inference rules There are many ways in which such a relation could be defined — as with the permutation example (is-perm), it could be characterised by a predicate: valid-transition: Stmt-set × Σ × Σ → B
and defined as follows:
s ∈ stmts ∧ exec(s, σ ) = σ 0 ∧ valid-transition((stmts − {s}, σ 0 ), σ 00 ) ⇒ valid-transition((stmts, σ ), σ 00 ) A more readable notation (than writing the predicate valid-transition in functional style) is to define the relation as an infix operator marked by an arrow that indicates the type of the first argument — for statements: st
(stmts, σ ) −→ σ 0 this can be thought of as stating that configuration (stmts, σ ) “can transition to” state σ 0 . For:
3.2 Structural Operational Semantics
41
st
−→: P((Stmt × Σ) × Σ) it is possible that both: st
({x := 1, x := 2}, {x 7→ 3, y 7→ 4}) −→ {x 7→ 1, y 7→ 4} and: st
({x := 1, x := 2}, {x 7→ 3, y 7→ 4}) −→ {x 7→ 2, y 7→ 4} hold. “Natural Deduction” presentations of logic [Pra65] use inference rules such as: ∧-E
E1 ∧ E2 Ei
∨ -I
E E ∨ E0
Even in logic, some of the inference rules require multiple hypotheses (Ei ` E means that E can be deduced from Ei ): E1 ∨ E2 E1 ` E E `E ∨ -E 2 E Similar rules can be used to define semantic relations conveniently — for example: s ∈ stmts st (s, σ ) −→ σ 0 st (stmts − {s}, σ 0 ) −→ σ 00 st (stmts, σ ) −→ σ 00
Just as with Natural Deduction, the rules are “schema” into which matching values can be substituted. Figure 3.3 shows two valid deductions from the rule above and establishes formally that the simple program {x := 1, x := 2} can give rise to different final states. The essence of the rule notation is that the relation under the line in a rule holds if all of the hypotheses above the line hold. More is said about this reading as inference rules below but for now the rule above can be read as: Given that stmts is a set of Assign statements and an s ∈ stmts can be found (thus stmts is a non-empty set), if (s, σ ) can transition to σ 0 and stmts without s can transition from σ 0 to σ 00 , then it is valid to conclude that (stmts, σ ) can transition to σ 00 .
In fact, the rule style becomes so natural that most people use it even for simple sequential (deterministic) languages and it is first applied to the language as given in Figure 2.3. The same pattern-matching idea is used. With:
42
3 Operational semantics (x := 1) ∈ {x := 1, x := 2} st (x := 1, {x 7→ 3}) −→ {x 7→ 1} st ({x := 2}, {x 7→ 1}) −→ {x 7→ 2} st ({x := 1, x := 2}, {x 7→ 3}) −→ {x 7→ 2} (x := 2) ∈ {x := 1, x := 2} st (x := 2, {x 7→ 3}) −→ {x 7→ 2} st ({x := 1}, {x 7→ 2}) −→ {x 7→ 1} st ({x := 1, x := 2}, {x 7→ 3}) −→ {x 7→ 1}
Fig. 3.3 Two deductions from a non-deterministic rule
m
Σ = Id −→ N st
−→: P((Stmt × Σ) × Σ) stl
−→: P((Stmt∗ × Σ) × Σ) st
(s, σ ) −→ σ 0 stl
(rl, σ 0 ) −→ σ 00 stl ([s] y rl, σ ) −→ σ 00
Unlike the description with functions in Section 3.2.1, the description here is given top-down. The style of semantic description used in Figure 3.4 was dubbed “Structural Operational Semantics” by its originator Gordon Plotkin.7 (The figure only gives the description of the statements of the language; Appendix A includes the SOS rules for expressions.) The adjective “structural” indicates that the semantic rules should follow closely the structure of the (abstract) syntax of the language. This objective offers greater benefits when the semantic objects of the language need to be more complicated, such as with environments in Chapter 5. st stl ex The idea that SOS rules define inference relations (−→ / −→ / −→) is very important and Figure 3.5 illustrates two possible chains of inference for: fn := 1; while n = 6 0 do fn := fn ∗ n; n := n − 1
od
ex
The SOS rules for expressions (−→) are given in Appendix A. Apart from the argument that the use of inference rules offers uniformity of presentation, there is ex a technical reason for defining −→ as a relation: the fact that it can be undefined (in the case where a reference is made to an undeclared identifier) means that it is 7
See Section 3.3 for references and a sketch of the history.
3.2 Structural Operational Semantics
stl
([ ], σ ) −→ σ st
(s, σ ) −→ σ 0
stl (rest, σ 0 ) −→
σ 00
([s] y rest, σ ) −→ σ 00 stl false st (mk-While(test, body), σ ) −→ σ ex
(test, σ ) −→ true stl
(body, σ ) −→ σ 0 st (mk-While(test, body), σ 0 ) −→ σ 00 st (mk-While(test, body), σ ) −→ σ 00 ex
eval(rhs, σ ) −→ v st (mk-Assign(lhs, rhs), σ ) −→ σ † {lhs 7→ v}
Fig. 3.4 SOS of the statements in SimpleProgram ex
(e, {fn 7→ 1, n 7→ 0}) −→ false st (mk-While(e, b), {fn 7→ 1, n 7→ 0}) −→ {fn 7→ 1, n 7→ 0} ex
(e, {fn 7→ 1, n 7→ 1}) −→ true st (b, {fn 7→ 1, n 7→ 1}) −→ {fn 7→ 1, n 7→ 0} st (mk-While(e, b), {fn 7→ 1, n 7→ 0}) −→ {fn 7→ 1, n 7→ 0} st (mk-While(e, b), {fn 7→ 1, n 7→ 1}) −→ {fn 7→ 1, n 7→ 0}
where:
e = mk-RelExpr(n, N OT E QUALS, 0) b = [mk-Assign(fn, mk-BinArithExpr(fn, T IMES, n)), mk-Assign(n, mk-BinArithExpr(n, M INUS, 1))]
Fig. 3.5 Two traces of inferences from the SOS in Figure 3.4
43
44
3 Operational semantics
actually wrong to define eval as a function. With the inference rules, there is simply no rule whose hypotheses are all dischargeable. In this case, the computation is undefined. It is also worth repeating the point that, so far, expression evaluation cannot change the state and this is made clear by the form of the semantic relation: ex
−→: P((Expr × Σ) × N)
This situation changes with the inclusion of functions in a language8 — see Section 6.5.
3.2.3 Non-deterministic iteration The idea in NonDeterministicProgram above of just executing a set of statements in arbitrary order is somewhat artificial. This section introduces and describes formally a more plausible construct for non-deterministic iteration. (Although the different instances of the body are executed separately here, this construct is developed further in Chapter 8 to exhibit concurrency.) Because assignments change the store, the left-to-right order of statement evaluation is part of the essence of imperative programming languages. There is, however, a subtle danger of writing programs that are unnecessarily ordered. Although formal description of arrays is tackled in Section 4.3.2, the reader should readily spot that an initialisation such as: for i := 1 to 100 do
A(i) := 0
od
does not need to be executed in any particular order. A non-deterministic iteration such as: for all i ∈ {1, .., 100} do
A(i) := 0
od
would leave a compiler freedom to choose an optimal order or even make it easier to spot that some form of bulk write would be much more efficient. Furthermore, because multiplication is commutative, the factorial example could be written: fn := 1; for all i ∈ {1, .., n} do fn := fn ∗ n
od 8
Also by allowing pseudo-expressions such as x + + or + + x.
3.3 Further material
45
The above examples actually determine a unique final state but: for all i ∈ {1, .., 10} do
result := i
od
is certainly non-deterministic. An abstract syntax for such a non-deterministic looping construct could be: ND-For :: control low high body
: : : :
Id ArithExpr ArithExpr Stmt∗
The semantics could be fixed by the following SOS rules:9 ex
(low, σ ) −→ lv ex (high, σ ) −→ hv st (mk-Repeat(c, {i ∈ N | lv ≤ i ≤ hv}, body), σ ) −→ σ 0 st (mk-ND-For(c, low, high, body), σ ) −→ σ 0 set = { } st (mk-Repeat(c, set, body), σ ) −→ σ v ∈ set
stl
(body, σ † {c 7→ v}) −→ σ 0 st (mk-Repeat(c, set − {v}, body), σ 0 ) −→ σ 00 st (mk-Repeat(c, set, body), σ ) −→ σ 00
3.3 Further material Projects Semantics can now be tackled for all of the enumerated syntactic projects listed in Section 2.3.10 The final unnumbered challenge of looking at a whole language is likely to present the challenges discussed in subsequent chapters and thus should be postponed until description techniques for handling these challenges have been understood. 9
This way of recording the semantics requires that Repeat is an acceptable first element of the st iter pairs in the −→ relation. An alternative would be to define a separate relation −→. 10 There is an interesting interaction between for loops and declarations in that some languages treat the control variable as being “bound” within the loop — this aspect is picked up in Section 5.5. An alternative adopted in ALGOL 60 is to say that the value of the control variable is undefined on termination of the loop.
46
3 Operational semantics
Alternatives The semantics given so far are often referred to as “big-step” (or “natural” semantics). The applicability of this term is seen in the rule for statement sequences: st
(s, σ ) −→ σ 0 stl
(rest, σ 0 ) −→ σ 00 stl ([s] y rest, σ ) −→ σ 00
where the conclusion of the SOS rule gives the relation for the whole sequence. (It is shown in Chapter 8 that describing the merging of threads in concurrency requires a “small-step” semantics. In a small-step semantics, a “configuration” keeps track of the activity remaining in each thread and a step makes an atomic transition of one thread at a time; SOS rules both show the state change and update the text remaining to be executed. Such a semantics is necessarily non-deterministic.) An important issue arises with big-step semantics and termination. Consider the rule given for while statements: ex
(test, σ ) −→ true stl
(body, σ ) −→ σ 0 st (mk-While(test, body), σ 0 ) −→ σ 00 st (mk-While(test, body), σ ) −→ σ 00
For a non-terminating loop (in the extreme, while true do x := x + 1 od) it will never be possible to discharge the third hypothesis of the rule. Thus non-termination in the program is modelled by non-termination in the semantics.11 This fits with the notion that an operational semantics is providing an abstract interpreter. There is, of course, no way in general that the semantics could solve the “halting problem”.12 The situation is even more complicated with non-deterministic programs where –from the same state– a program might either diverge or converge. A yet further issue involves “fairness” — see [Fra86, vGH15]. There are many ways of recording the relations between (program + initial state) and final state. An obvious candidate is to write the program as though it is a relation symbol (as could be done for permutations; [A, A, B, B] permutes [A, B, B, A]); thus σ [[S]]σ 0 — this does not however extend cleanly to “small-step” semantics (see Chapter 8) because the text of the program has to be updated as well as the state. Other options for recording the semantic relation include writing the program text on the relation arrow — see also Peter Mosses’ “Modular SOS” [Mos04, Mos09] 11
A more mathematically pleasing treatment is possible with denotational semantics and this is discussed in Section 7.1. 12 Although the “Halting Problem” is often associated with Alan Turing’s name, the undecidability result in [Tur36] is different because it concerns infinite number representations and Turing’s programs were not in general intended to terminate. The impossibility of constructing a program that will determine whether an arbitrary machine will halt is given in [Dav65a]. Useful historical notes are [Pet08, pp.328–329] and further discussion by Post and Davis is in [Dav65b].
3.3 Further material
47
and [HC12] from Rob Colin and Ian Hayes. What is common to these presentations is the necessity of using relations and the convenience of defining such relations by inference rules. It is worth noting that the advantages of mathematical abstractions like objects, functions and relations do not guarantee ease of implementation. This point becomes clearer when more advanced object language features like “heap storage” and the consequent need for “garbage collection” are considered in Chapter 6. One point that can be made here is that mathematical abstractions such as natural numbers cannot be implemented — for example, the non-terminating program given above is bound to overflow on any actual computer.
Further reading There are a number of books that treat operational semantics including [RNN92, Hen90]. They could be said to go deeper into the theory whereas the current book aims to show how to apply the ideas of operational semantics to realistic programming languages. (References to books on other semantic approaches are given in Chapter 6.)
Historical notes As far as tackling the semantic description of typical imperative programming languages, John McCarthy’s “Micro-ALGOL” description in [McC66] is a key historical reference. In the talk given at the famous Formal Language Description Languages working conference in 1964, he both introduces a form of abstract syntax and gives an operational semantics using a function. McCarthy’s choice of language constructs is interesting: he could have made his life easier by doing as Edsger Dijkstra later did and selecting only the structured language features described here in Appendix A. In fact, Micro-ALGOL has no while loop but does have a goto statement. The reader should avoid jumping to the conclusion that this was a mistake that would not have been made had Dijkstra’s goto letter [Dij68b] been written earlier. It is at least plausible that McCarthy made a deliberate choice to include labelled statements and it certainly had a significant impact on work that followed. The state of McCarthy’s semantics had to include the full text of the program and a program counter; the semantics of a goto changed the program counter. That same working conference was held at Baden bei Wien and its proceedings [Ste66] are an invaluable source because all of the formal discussions were recorded and transcribed. Professor Heinz Zemanek became the leader of the IBM Laboratory in Vienna and made sure that many of his colleagues took part in the
48
3 Operational semantics
process of capturing this valuable material.13 The relevance of this is that the IBM Vienna Laboratory went on to produce operational semantic descriptions of the PL/I language. This was a huge undertaking: PL/I had most of the features found in any of FORTRAN, COBOL and ALGOL (sadly without the elegance and the taste of the last of those three). Not only did the Vienna group have to find ways of modelling all of the features of ALGOL omitted from McCarthy’s Micro-ALGOL, they also had to model features such as the concurrency that came with “tasking” in PL/I, “exceptions” and under-determined storage mapping. The PL/I formal descriptions were labelled “ULD-III” which stood for “Uniform Language Description” — the Roman numbering three gave absolute precedence to IBM’s official natural-language document that claimed to describe the language and the existence of a semi-formal ULD-II written by the IBM UK Lab in Hursley. There were three complete versions of ULD-III.14 Probably the most useful firsthand account is [LW69]. J.A.N. Lee coined the name “Vienna Definition Language” (VDL is not to be confused with VDM, which came later — see [Jon01]) for the description method. As is so often the case, hindsight provides rather unfair judgements but it is true that the VDL description method had unfortunate properties that complicated its use. The Vienna group acknowledged their main influences as: • John McCarthy — including [McC66], • Cal Elgot — especially [ER64] and • Peter Landin — [Lan66b], which was also presented at the Baden bei Wien IFIP Working Conference. Perhaps the most troublesome features of VDL descriptions can be grouped under McCarthy’s term of “grand-state” descriptions:15 1. The full text of a PL/I program is included in the ULD-III state because of abnormal sequencing (whether from goto statements or exceptions). This decision is a magnified version of the program counter in McCarthy’s [McC66] — but the magnification makes a monster of the original. 2. The state of VDL descriptions included a stack of “environments” (see Section 5.2). This fits with Landin’s SECD machine idea but –as explained in Section 5.5– is a definite impediment to proofs about VDL descriptions. McCarthy’s phrase pinpoints the fact that there are serious disadvantages in putting things in the state that are not changed by executing (normal) statements of the program. In passing, it is worth noting how ALGOL 60 became a testbed for semantic description techniques — in [JA16, AJ18] four more-or-less complete descriptions 13
More is said on this meeting in [AJ18] and at greater length in [Ast19]. These have all been scanned and, along with key working documents, are available from my web site: 15 McCarthy used this term in several discussions — an early reference is in an exchange at an IFIP Working Group discussion [Wal69, p.33] — but it is also clear that the concern is behind Strachey’s comment on McCarthy’s talk at the 1964 Working Conference (see [McC66, p.11]). 14
3.3 Further material
49
of that language are described along with references to other attempts. The earliest of these ([Lau68]) was undertaken by Peter Lauer because Heinz Zemanek wanted evidence to show that the criticism levelled at the ULD descriptions of PL/I had far more to do with the chosen object language than being a valid objection to the VDL meta-language. (The problem referred to as having a grand state did however also dog Lauer’s ALGOL description.) The comment in Section 1.5 about a language description being used as a criterion for compiler correctness can now be expressed formally. If a compiler comp maps source programs to the language of some machine whose semantics is demc st scribed by −→, that compiler is correct with respect to the language semantics −→ 16 providing: ∀s ∈ Stmt · mc st ∀σ , σ 0 · (comp(s), σ ) −→ σ 0 ⇒ (s, σ ) −→ σ 0
Notice that this is not an equivalence (i.e. the reverse implication is not required) because the language semantics can be non-deterministic and describe a range of permissible results. Two of the earliest publications on using a formal description in compiler correctness proofs are [MP66, Pai67]. Details of the IBM Vienna Lab work on developing compilers from formal language descriptions (particularly that on the “block concept” such as [Luc68, JL70]) are given in Section 5.5; a useful overview is contained in [Jon82a] and extensive work on Ada in [BO80b, BO80a]. The work on the Texas “stack” is published in [Moo19]. Unhappiness with gratuitous difficulties in basing reasoning about compiler designs on such (grand-state operational) definitions led the Vienna group to move to denotational semantic descriptions (see Sections 7.1–7.2). Again, ALGOL was used as a demonstration [HJ78, HJ82] and among other object languages, Pascal [AH82] and Modula-II [AGLP88] have been described using VDM (see Chapter 11). Using the denotational approach certainly prompts the use of “small-state semantics”. Gordon Plotkin had made significant contributions to the domain theory that underlies the denotational approach when he took a sabbatical (from Edinburgh) to Aarhus University. There he taught an operational approach and his course notes [Plo81] provide the foundation of Structural Operational Semantics. These notes were widely circulated and eventually republished as [Plo04b]; the accompanying [Plo04a] provides a fascinating commentary on this period.
16
This is a slight simplification in that –in reality– the states of the machine will differ from those of the language description. Expanding this is not difficult but does show the interesting need to relate the run-time state to the abstraction (see [Jon76] for further details).
Chapter 4
Constraining types
In early versions of FORTRAN, the type of a variable was determined by the first letter of its identifier. This chapter shows how to describe an object language that expands on the idea of listing the permissible names of variables and looks at the advantages of declaring a specific type for each variable. This makes it possible to detect –before a program is executed– some mistakes that a programmer might make. More generally, all forms of context dependancy offer ways of avoiding an attempt to give semantics to meaningless programs. There is nothing in formal semantic approaches that requires that object languages be strongly typed. Methods for describing languages at various points on the strength-of-typing spectrum are presented in Section 4.1. Language issue 6: Type declarations Fixing the types of variables in declarations makes it possible to detect programming errors statically (before execution). Maximising the idea of types results in a “strongly typed” language. The availability of static type information can also make it easier for a compiler to generate efficient code for the statements in the language. This is an extension of the redundancy concept addressed in Issue 5. A meta-objective of this and subsequent chapters is to show that the metalanguage introduced in Chapters 2 and 3 copes easily with new features in the object language. In fact, the semantic tools in SOS do not need to change at all in this chapter; it is the idea of using “context conditions” to define the context dependancies of a language which is new (Section 4.2). Section 4.3 goes on to explore the key role of state descriptions. As in other chapters, a concluding section on further material (Section 4.4) is included.
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
51
52
4 Constraining types
4.1 Static vs. dynamic error detection There are many things that can be wrong with a program. In general, they cannot all be detected by any algorithm. There are, however, some aspects of languages for which it can be said that certain programs make no sense. In fact, the language descriptions in Chapters 2 and 3 have not so far excluded SimplePrograms that use identifiers in statements that are not listed in the vars list. This can be checked statically for a SimpleProgram: all uses of Id in expressions or on the left-hand side of assignments must have been declared in the vars list. This is a simplified case of the more general context dependancies considered in Section 4.2. There is a related issue that requires a dynamic –or run-time– check and that is the danger that a program attempts to access a variable before any assignment has been made to it. Of course, one way of avoiding this problem is for the language designer to arrange that all declared variables are given initial values at the start of execution. This is however a design choice. If a language does not decree that such initialisation occurs, there needs to be a check on variable access such as the second hypothesis in the following SOS rule: e ∈ Id e ∈ dom σ ex (e, σ ) −→ σ (e)
The description in Section 3.2.2 of reading SOS descriptions as inference rules indicates that, if no rule can be found whose hypotheses are fulfilled, the execution is undefined and thus the program is erroneous. It might, of course, be desirable in a full language to include some form of exception-raising and -handling mechanism but this topic is deferred to Chapter 10. Language issue 7: Initial values Programming languages that require all variables to be initialised avoid some programming errors (or, alternatively, obviate the need for run-time checks). There is however a trade-off here because a careful programmer who always makes sure that variables are set before use is paying for the redundant initialisation. Note that checking for initialisation by (meaningful) assignment requires flow tracing. Language issue 8: Initial expressions An alternative language design decision is to allow programmers to write initial expressions as part of variable declarations. Notice that until there are nested blocks (see Chapter 5) such expressions can only yield constant initial values.
4.2 Context conditions
53
4.2 Context conditions A prime example of context dependancy concerns the use of variables in ways which do not correspond to their intended types. Language designers can choose to make compile-time detection of such errors possible by recording rules that are statically checkable. For example, a language might rule out adding a character string to a number. If variables are untyped and can be assigned any values, the consequences of such violations can only be detected at run-time. In contrast, if variables are typed, it is possible for a compiler to flag any program which applies an arithmetic operator to a character string operand. There are, of course, cases where operators are intended to be polymorphic in the sense that the same operator can be applied to operands of different types. For example, the plus symbol might be overloaded and –as well as being applied to numbers– used in a programming language to mean concatenation of strings. A language can also be designed to be permissive in the sense that some type differences in operands to a binary operator are resolved by coercing the type of one operand to that of the other. Thus an arithmetic operator with operands that are integers or reals could convert an integer operand to a floating-point number before applying the operator. Language issue 9: Type information The designer of a language must decide the extent to which type information is required in programs. Type information provides redundancy and makes it possible to detect some program errors from the static text of a program. The availability of type information can also facilitate generation of more efficient code than is possible if no information is available about the intended roles of different variables. In order to introduce the alternative description techniques, a rather coarse distinction is made in this chapter between static and dynamic checking. This degree of difference can be demonstrated by allowing variables in BasePrograms to take values that are either integers (Z) or Booleans (B). The full description of BaseProgram is contained in Appendix B; the text of this chapter picks out the main decisions; after understanding these points, the reader should have no difficulty in reading the description in the appendix. Note that some distinctions from the abstract syntax of SimpleProgram of Chapter 2 have been dropped in that the test parts of both If and While here appear to allow any Expr — the checking on types is now done in the context conditions. Given that there are two types of variables, meaningless programs can arise in a number of ways — for example: • the test expression in an If or While statement might be an arithmetic expression; • an arithmetic operator might be applied to expressions (RelExpr) that yield a Boolean result. If there were no type indications in programs, it would be possible to say that the type of a variable was determined by the first assignment to its name. In this case,
54
4 Constraining types
it would not be possible –in general– to detect type errors statically. It is true that egregious cases of these errors such as: if 42 then n := 1 else n := 2 fi n := true + 7
could be detected statically but, since assignments to variables can depend on the flow of control, types cannot –in general– be determined statically. The checking would then have to be made in the SOS rules as in Section 4.1 above. Because both BNF and the basic description of objects can essentially only cope with context-free languages, the challenge faced by those describing –or designing– a programming language is: Challenge V: Context dependancy A convenient notation is required to cut down the objects defined by the abstract syntax of a language to the “meaningful” texts (i.e. those which are in accord with any type restrictions that the language imposes on valid programs).
wf-Program wf-BaseProgram
Program BaseProgram
Fig. 4.1 Context conditions define a subset of objects given by an abstract syntax Defining the subset of objects (satisfying an abstract syntax) to which meaning will be given is pictured in Figure 4.1. There are several ways in which the class of well-formed objects can be defined. Here a predicate is written: wf -BaseProgram: BaseProgram → B Only objects of BaseProgram for which wf -BaseProgram yields true are considered in the semantic description. This test can be defined in terms of three recursive predicates that check statements and expressions. These predicates require an extra argument that carries down the type information from BaseProgram: m
TypeMap = Id −→ ScalarType
The signatures of the two statement-level predicates are: wf -StmtList: Stmt∗ × TypeMap → B wf -Stmt: Stmt × TypeMap → B
4.2 Context conditions
55
The first of these is trivial, just requiring that all elements of the list of statements are well formed (with respect to the same type map): wf -StmtList : Stmt∗ × TypeMap → B
wf -StmtList(sl, tpm) 4 ∀i ∈ inds sl · wf -Stmt(sl(i), tpm) The well-formedness of an assignment statement depends on two things: • that the expression forming its right-hand side is well formed — a function such as: wf -Expr: Expr × TypeMap → B
could be used to determine this; • a type match between the lhs and rhs components of the statement. Because it is a single identifier in this simple language, the type of lhs is simply tpm(lhs). To complete the check of matching, there is an obvious need for a function that determines the type of an expression but there are also expressions that are in themselves obviously inconsistent. Rather than define two separate functions for checking the well-formedness of an expression (wf -Expr) and for computing its type, a single function can be defined that performs both tasks (with the result E RROR indicating non-well-formed expressions): c-type: Expr × TypeMap → (I NT T P | B OOLT P | E RROR)
With this, it is straightforward to define:
wf -Stmt(mk-Assign(lhs, rhs), tpm) 4 lhs ∈ dom tpm ∧ c-type(rhs, tpm) = tpm(lhs) The test for well-formed If and While in Appendix B should be obvious and the full definition of c-type is easy to read. Finally, it has to be shown how the type information in a BaseProgram provides the TypeMap:1 wf -BaseProgram : BaseProgram → B
wf -BaseProgram(mk-BaseProgram(types, body)) wf -StmtList(body, types) 1
4
In languages with nested blocks/procedures (see Chapter 5) or classes and methods (see Chapter 9), the TypeMap has to be updated with declaration information that is contained in nested texts.
56
4 Constraining types
Because semantics are only given for well-formed texts (see Figure 4.1), the appropriate wf predicate can be thought of as an extra hypothesis for any SOS rule. The elision of such lines fits with the position in VDM that data type invariants restrict types. It is useful to think of the context conditions in relation to the type checking done in a compiler. Just as with the diagnostics in a compiler, it is not always obvious how far such checks should extend. For example, a while statement whose test is true will definitely loop forever — but so would one that looked for a counterexample to Fermat’s last theorem. An indication of what is reasonable to check in context conditions is whether the test depends only on symbols (such as operators) rather than their meaning. Language issue 10: Types as assertions Type definitions are essentially a form of assertion: they constrain what can be done to variables declared to be of a particular type. Whereas general assertions (see Section 7.3.1) require proof of their consistency, typing declarations are checkable by simple static rules. Language issue 11: Stong type systems Type systems can be much richer than that used in this chapter. For example, variables might be declared to hold values that are intended to represent lengths or areas so that it is valid to multiply two lengths to compute an area but incorrect to add a length to an area. Such types provide extra compile-time checking even though both types might use the same machine representations of numbers at run-time. Furthermore, types can be restricted by predicates. A trivial example is that values representing minutes might be limited to be integers between 0 and 59; once records are introduced (see Section 4.3.3), predicates can use a value in one field to restrict the range of admissible values in another. A reader who has followed the semantics in Section 3.2 will have no difficulty in reading the semantics of BaseProgram in Appendix B. In that description, the class of Id has not been divided; a completely equivalent description of the language could be written separating, say, IntId/BoolId. This point becomes more pressing when identifiers for procedures (or classes and methods) are present in a language. The decision on sub-dividing the class of identifiers is really a matter of taste. The position adopted here is that, if the written forms of the identifiers are not distinguished, the work of defining their use should be done in the context conditions and a single class of Id used in the abstract syntax. It is an important property that –for elements of BaseProgram that satisfy wf -BaseProgram– there will be no run-time type errors. This type safety property is useful because it shows that type information is not needed in the run-time state.2 2
There were a number of technical reports written around the VDM description of PL/I [BBH+ 74] that addressed the use of the formal description of a language as a basis for compiler design. One that relates to the current topic is [Izb75], in which Herbert Izbicki proves that –because of checks
4.3 Semantic objects
57
Language issue 12: Writeability versus readability There are situations where un-typed scripting languages can be justified for productivity but, if such programs are to be used for a long time, the cost of maintenance is likely to increase because changes cannot benefit from a record of the intentions of the original designer.
4.3 Semantic objects In the introductory languages of Chapter 3 and Appendix B, the states (Σ) are simple mappings that associate values with identifier names. Even there the notion of the state as the objects underlying the semantic description is informative but the role of semantic objects is even clearer with richer languages. In fact, the notion of the abstract state space used in a description of a language tells a skilled reader an enormous amount about the language. This is extremely valuable because the description of the semantic objects –even for a large language– is likely to be rather short (e.g. the VDM description of PL/I [BBH+ 74] comprises over 100 pages of formulae but the definition of the semantic objects is less than two pages long). More importantly, returning to the Leitmotiv of this book, states are a crucial starting point for the designer of a programming language. This section outlines the way in which semantic objects are enriched for several modest extensions to the language of Appendix B; more ambitious languages are modelled in Chapters 5–10.
4.3.1 Input/output The key characteristic of imperative languages is that their core statements change something. That something could be a database, a projection of some virtual reality or the position of a robot arm. In the languages considered above, assignment statements can change the values of variables. In order to illustrate how other sorts of state can be manipulated, BasePrograms can be extended to include input/output (I/O) by adding Read and Write statements. This can be illustrated by rather simple forms of I/O but extensions to multiple named files etc. are considered below. The syntax and semantics of While and If statements are unchanged — they continue to play the role of orchestrating which core (state-changing) statements are executed and in what order. The simple message here is that, if the core statements of a language can change something, the state (Σ) must contain fields in which the current values are stored. Adding a simple Write as an option to the abstract syntax for Stmt is trivial: that have been made in the context conditions– no run-time type mismatch errors can occur. Such “type soundness” arguments are also discussed in [Sym99].
58
4 Constraining types
Stmt = · · · | Write Write :: value : Expr The semantics needs to arrange that the value of the expression is appended to a state component that records all such values written. Thus the underlying semantic objects need to be: m
Σ :: store : Id −→ N output : N∗ Notice that, in addition to the new output field, the values of variables are still in the store component of the state. An SOS rule for Write is: ex
(e, store) −→ v st (mk-Write(e), mk- Σ(store, out)) −→ mk- Σ(store, out y [v])
Notice that executing a Write statement does not change the store. The SOS rules for accessing values from identifiers and changing them by assignments need straightforward revision:3 ex
(rhs, store) −→ v st (mk-Assign(lhs, rhs), mk- Σ(store, out)) −→ mk- Σ(store † {lhs 7→ v}, out)
Again, what does not change is important: executing an assignment does not change the output file. Adding an input statement poses only one extra question — the syntax is straightforward: Stmt = · · · | Read Read :: lhs : Id Of course, the Read identifies a place where the next input value should be placed and thus the lhs field is an identifier rather than an Expr as in Write. The extension of the semantic objects is also obvious: m
Σ :: store : Id −→ ScalarValue output : N∗ input : N∗ The additional consideration is that, presumably, a Read from an empty input should fail — see the discussion on run-time errors in Section 4.1. So the SOS rule might be: in 6= [ ] st (mk-Read(lhs), mk- Σ(store, out, in)) −→ mk- Σ(store † {lhs 7→ hd in}, out, tl in) 3
Peter Mosses’ M-SOS is discussed in Section 4.4: his approach attempts to record semantic rules in a way which enhances their re-usability in different contexts.
4.3 Semantic objects
59
To return to the point about the extent to which semantic objects can contribute disproportionally to understanding a language, consider the semantic object: m
Σ :: store : Id −→ ScalarValue m files : FileId −→ N∗ This would immediately prompt a reader to think about a language in which I/O statements can create and access any number of named files. Furthermore, an extension to: m
Σ :: store : Id −→ ScalarValue m files : FileId −→ File
File :: contents : ScalarValue∗ index : N would support a language with statements that operate at indexed points within files. Further extensions might include ownership and (read/write) permissions as in Unix. Language issue 13: What are the underlying objects of a language? The designer of a programming language must decide what can be changed by the core statements of the language; a description of the chosen language must use semantic objects that reflect what can be changed.
4.3.2 Arrays The simple languages considered so far have manipulated only ScalarValues. The message about semantic objects being helpful in grasping what can –and cannot– be done in a programming language is reinforced when composite values such as arrays and records are considered. The ability to manipulate some form of array value is present in most programming languages. Language issue 14: The role of arrays Arrays are fundamental to many mathematical and engineering problems. Their inclusion in the first versions of FORTRAN was probably key to its adoption and APL [Ive62] pushed array handling to its limit. Interestingly, few programming languages support classical matrix algebra support — instead the most common approach is to offer minimal ways of grouping elements into arrays and to provide statements such as for loops with which a programmer can define algorithms over array values by manipulating their elements. Hardware index registers make it possible to generate efficient code for referencing array elements. Compilers can further improve code by using techniques such as “strength reduction” [GvRB+ 12]. A simple form of an array is a one-dimensional vector that can be modelled with: Vector = ScalarValue∗
60
4 Constraining types
Arrays can then be defined as vectors of things that could either be scalar or array: Array = (ScalarValue ∪ Array)∗
But this has the disadvantage that it would appear to allow a form of “ragged array” in which different indexes at the outermost level select sub-objects of varying dimensionality.4 A better basic model might be: m
Array = N∗ −→ ScalarValue
For simplicity, this assumes that the indexing of any dimension starts at one but it is easy to change this so that, for example, defining array dimensions A(5: 15, −10: 20) is permitted. Language issue 15: Dynamically defining array bounds Without some outer context, array bounds can only be constants. Blocks are covered in Chapter 5 and they make it possible –as in ALGOL 60– to declare arrays in inner blocks whose bounds are defined by variables (or expressions that use such variables) from outer blocks. Furthermore, array parameters can be declared whose bounds are determined by the size of a passed argument array. Normal arrays might have a denseness requirement that all valid index lists are in the domain of the array model but there are also applications for “sparse arrays” that allow gaps. Language issue 16: Mapping arrays A multi-dimensional array has to be mapped onto the linear addresses of the target computer and this requires that either “row major” or “column major” order is adopted. This can have significant impact on the concept of accessing “slices” of an array; this issue becomes more interesting when combined with parameter passing and is left to Chapter 5. There is, however, a clear question for the language designer of whether to prescribe the layout of an array as part of a language description. FORTRAN’s COMMON storage made it possible to declare a two-dimensional array in one sub-program and to view it as a one-dimensional vector in another sub-program that actually shares the same storage.
4.3.3 Records Viewed abstractly, there are two differences between records (known as “structures” in PL/I) and arrays. Firstly, array elements are homogenous in the sense that all elements are of the same type, whereas the fields of a structure can be of different types. Secondly, array elements are accessed by numerical indexing whilst the fields of a structure are identified by identifiers. In a sense, 4
APL [Ive62] did actually allow such raggedness.
4.3 Semantic objects
61
A: array(3) of N and S: struct one: N two: N three: N could serve the same purpose. Language issue 17: Supporting records Many programming languages offer the ability to define records (or structures). Furthermore, the fields of such records –as well as being scalar values– can be arrays and the elements of arrays can be records. There are issues around declaring record types that are addressed in Section 6.3. It should come as no surprise that a model of the store can be built around:5 m
Store = Id −→ Value Value = ScalarValue | ArrayValue | RecordValue m
ArrayValue = N∗ −→ Value m
RecordValue = Id −→ Value
But there is a potential difficulty beyond the two obvious differences between records and arrays, and this points to a more general warning. Language issue 18: Storage mapping of records Given that the fields of a record are inhomogeneous, there can be short fields followed by ones that take more machine store; the latter are likely to need alignment on store boundaries (be they bytes, words or double words) — this provides flexibility in projecting the abstract record onto the linear machine addresses. This language issue was present in PL/I structures and led to the need to implicitly define the storage mapping in the formal description — this is described in [BW71] by Hans Bekiˇc and Kurt Walk. The generic warning here is that mathematical abstractions do not necessarily expose all of the problems faced in implementations. It is, however, true that difficulty in finding a clean mathematical model is a clear indication that it will also be difficult to implement a feature (or interaction between several language features). One of the advantages in constructing a formal model of a language is that it is much less time-consuming than building a full implementation; ironing out problems with a formal description can save much wasted effort. The message of this subsection is that thinking carefully about the semantic objects of a language description is extremely cost-effective. 5
Pascal has an intriguing with construct that “opens up” the names of a record — this is discussed in Section 5.5.
62
4 Constraining types
4.4 Further material Projects Interesting extensions to the language in Appendix B include: 1. Adding a string type to the language with a set of appropriate operators including ones that take, say, strings and integers as operands (basing the semantic description on an abstract syntax separates out the issue of defining the concrete syntax). 2. Assignment statements that have multiple left-hand sides and the same number of expressions on the right-hand side were allowed in, for example, CPL [BBHS63]. They can be used, for example, to switch the values of two variables as in: x, y := y, x Notice that, for this to have the desired effect, the above is not the same as two assignments: x := y; y := x 3. A fairly ambitious project is to add proper array expressions and assignments to the given language description. 4. An even more ambitious –but very interesting– language extension is to add relational database features to the language — there are many other decisions to be made: a “tuple” can be represented as a vector and a relation is a set of tuples — an alternative is to bring field names into play and model a tuple as a mapping. The role of types and field names in defining “join” requires thought. Relational division is a fun exercise. (Related reports are [Dat82, Han76, Owl79].)6
Further reading Despite the emphasis put here on the distinction between problems that can be detected statically (compile-time) and dynamically (run-time), many formal semantic descriptions (e.g. [Lau68, Mos74]) handle them together. This has the unfortunate result that the semantic description is further complicated by errors that are manifest in the source text alone. Furthermore, what is here referred to as “context conditions” is sometimes called “static semantics” (and what is here just “semantics” is termed “dynamic semantics”). These terms are avoided in the current book. The term “context conditions” probably made its first appearance in [vWMPK69]. Aad van Wijngaarden’s own approach involved “two-level grammars” (see [Sin67, vWMPK69]) but any comparison is complicated by the fact that, although such grammars could clearly describe context conditions, Van Wijngaarden chose, in the description of ALGOL 68 to minimise the distinction between syntax and semantics [vWSM+ 76, LvdM80, Lin93]. 6
The major issues would concern concurrency control — this point is picked up in Section 8.6.
4.4 Further material
63
Another approach is the “dynamic syntax” idea of [HJ73], in which the process of parsing declarations dynamically creates appropriate syntax rules for parsing statements. It is also worth noting that well-formedness could be expressed using inference rules such as: lhs ∈ Id rhs ∈ Expr c-type(rhs, tpm) = tpm(lhs) mk-Assign(lhs, rhs) ∈ Assign The subject of types has its own wide literature — see for example [Pie02]. A valuable approach that is not covered here is “type inference”, in which as much type information as can be deduced from use of identifiers etc. is used to determine a sensible typing for the whole text — see [Mil78b].
Chapter 5
Block structure
Section 4.3 emphasises the important role that semantic objects can play in understanding or, indeed, designing a programming language. This point becomes more obvious as the language challenges increase. This chapter examines ideas used to model the way in which blocks can be used to define different scopes for names of variables and a variety of parameter passing modes for procedures. It is again the case that the meta-languages introduced for simple languages cope with describing the new language features.
5.1 Blocks Language issue 19: Scoping Most programming languages offer ways to define different “scopes” for variables so that the same name can be used to refer to different variables in various contexts. ALGOL 60 employed blocks to define different scopes and the block concept is present in a wide variety of languages that appeared subsequently. Figure 5.1 presents an example program (in a simple but arbitrary concrete syntax) in which the name a in the inner block denotes a different variable than the a in the outer block — the example emphasises this by giving the two variables different types. The outer block also introduces i and j, which remain visible in the inner block because these names are not redeclared; the inner block also declares b, which name is only visible in that inner block. A full description of a small language that includes blocks and procedures is contained in Appendix C. The text of this chapter brings out only the main modelling points. One possible abstract syntax for BlocksProgram is: BlocksProgram :: body : Stmt As well as assignments, conditionals etc., Block is now an option for Stmt: Stmt = · · · | Block © Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
65
66
5 Block structure program begin bool a; int i int j; a := true; i := 1; j := 2; begin int a; int b; a := 1; j := 3; b := 7 end; if i = j then a := false fi end end
Fig. 5.1 An example with blocks defining scopes of variables Thus any sequence of statements can contain blocks and blocks can be nested to any depth the programmer chooses. In an initial abstract syntax for Block, only the local variable declarations are considered (in Section 5.3 the ellipses are replaced by procedure declarations):1 m
Block :: var-types : Id −→ ScalarType ··· body : Stmt∗ As in Section 4.2, a TypeMap is needed to define the context conditions — this is also extended in Section 5.3 to cover procedures — but, as far as the scalars are concerned, it suffices to have: m
TypeMap = Id −→ (ScalarType | · · ·)
The interesting context condition shows how the local type information overwrites that of the context in which the block is located:2 wf -Stmt(mk-Block(vm, · · · , body), tpm) 4 ··· wf -Stmt-List(body, tpm † vm) At the outermost (BlocksProgram) level, there are no variables declared. wf -BlocksProgram : BlocksProgram → B
wf -BlocksProgram(mk-BlocksProgram(b))
1
4
wf -Stmt(b, {7→})
Again, there are alternative ways of defining the same language content. It might for example be sensible to insist that the Stmt in the body of BlocksProgram is always a Block. In contrast to earlier chapters, a Compound statement is introduced in Appendix C — as becomes clear, this is the same as a Block with no local declarations. Such choices have only minor influence on the semantics. 2 Remember that the VDM map overwrite operator gives precedence to pairs from its second operand.
5.1 Blocks
67
Language issue 20: Pre-defined constants A language could include pre-defined constants such as (an approximation to) π; such names would be installed in the initial type map. It is possible to couch the semantics in terms of the state from Appendix B: m
Σ = Id −→ ScalarValue
The important semantic point that must be fixed is that the block structure dictates a “nesting” discipline on the variables; for this reason, they are often called “stack variables”. Thus the body of the Block is executed (see σi in the SOS rule below) with local variables even if the names were known in the encompassing list of statements; once that body has been executed, values are taken from σi0 for those identifiers that remained visible from the outer block and the values that were hidden are recovered from σ : σi = σ † ({id 7→ 0 | id ∈ dom vm ∧ vm(id) = I NT T P} ∪ {id 7→ true | id ∈ dom vm ∧ vm(id) = B OOLT P}) stl
(body, σi ) −→ σi0 st (mk-Block(vm, body), σ ) −→ ((dom vm) C − σi0 ) ∪ ((dom vm) C σ )
Figure 5.2 provides an annotated version of Figure 5.1 showing the states at key points.
program begin bool a; int i int j; a := true; i := 1; j := 2; σ = {a 7→ true, i 7→ 1, j 7→ 2} begin int a; int b; a := 1; j := 3; b := 7 σi0 = {a 7→ 1, i 7→ 1, j 7→ 3, b 7→ 7} end; σ 0 = {a 7→ true, i 7→ 1, j 7→ 3} if i = j then a := false fi end end
Fig. 5.2 The example of Figure 5.1 annotated with states A way of simplifying this description is given in Section 5.2 but, before that is done, the useful VDM map operators C/− C are explored. The state resulting from a block is defined in the SOS rule above to be the union m of two mappings (of type Id −→ ScalarValue). Looking firstly at the left operand of the union, assume for the moment that: dom σi0 = dom σi
68
5 Block structure
Installing the initial values in σi gives: dom σi = dom σ ∪ dom vm
− expands to give: The definition of C
(dom vm) C / dom vm} − σi0 = {id 7→ σi0 (id) | id ∈ dom σi0 ∧ id ∈
Then it follows that:
dom σi0 = dom σi = (dom σ ∪ dom vm) ⇒ / dom vm} (dom vm) C − σi0 = {id 7→ σi0 (id) | id ∈ dom σ ∧ id ∈
Turning to the second operand of the union:
(dom vm) C σ = {id 7→ σ (id) | id ∈ dom σ ∧ id ∈ dom vm} Combining the results shows that the initial assumption:
dom σ 0 = dom σ
holds. Moreover, if all variables from the outer context are redeclared in the inner block: dom σ ⊆ vm
st
then (writing σ 0 for the right-hand side of −→ in the SOS rule):3 σ0 = σ
Language issue 21: Arrays whose size depends on data There are many language features modelled in this book but different chapters try to focus on one feature at a time. Extensions such as adding I/O or arrays can be applied to most languages so formal descriptions are only written out where the combination is non-obvious. It is, however, worth making a point on arrays here. There are obviously many applications where a programmer would want to create arrays whose bounds depend on input data. Section 4.3.2 points out that this is not possible in the outermost scope of a program. Inner blocks offer an easy way to define the bounds of arrays in terms of variables whose values are computed in embracing contexts.
5.2 Abstract locations The preceding section addresses the language issues around the same name referring to different variables (in different scopes); this section moves towards a more delicate feature of many programming languages. Section 5.4.1 shows how to model 3
Of course, this throws doubt on the perceivable effect of such a block. In fact, if the body of a BlocksProgram is a Block, the conclusion has to be that executing the program has no effect. It would, however, be straightforward to add some form of I/O to this language as described in Section 4.3.1.
5.2 Abstract locations
69
one way in which different identifiers can refer to the same variable. (Different parameter passing modes for the example program in Figure 5.6 are discussed along with Issue 25. Other ways of explicitly manipulating addresses include “heap variables” — see Section 6.4.) Challenge VI: Modelling sharing How is a language description to model sharing? In the case in hand, multiple identifiers sharing access to the same variable (and the pattern of sharing varying over time). This sharing problem is in fact far more general and crops up again with objects in Chapter 9 and would, for example, play a role in the description of a Unix-style file system. Two important modelling points can be extracted before “by reference” parameter passing is tackled: • locations serve as an abstraction of machine addresses; and • the fact that the relationship between identifiers and locations can be held in an environment that changes less often than the store (Σ). Both of these points can also be used to provide a clearer model of the semantics of Block than is given in the preceding section. The set ScalarLoc is an infinite set of objects about which nothing is known other than the ability to test them for equality. The modelling decision is to split the mapping from identifiers to their values into two mappings: m
Env = Id −→ ScalarLoc m
Σ = ScalarLoc −→ ScalarValue st
The semantic relation −→ now has the type: st
−→: P((Stmt × Env × Σ) × Σ)
This makes clear that the environment (Env) cannot be changed by executing a statement (Stmt): the i + 1th statement in a list is executed in the same environment as the ith statement even if the ith statement is a Block.4 (Moreover, sharing can be modelled as in Figure 5.3, where two different identifiers can be mapped (in an Env) to the same location.) The abstract syntax of Assign is unchanged from that in Appendix B: Assign :: lhs : Id rhs : Expr but its semantics now has to obtain the location corresponding to the identifier (lhs); the state (σ ) is updated at the appropriate location: ex
(rhs, env, σ ) −→ v st (mk-Assign(lhs, rhs), env, σ ) −→ σ † {env(lhs) 7→ v} 4
As discussed in Section 5.5, not making this clear was a serious disadvantage of the form of operational semantics used in the early Vienna Lab formalisation of PL/I.
70
5 Block structure
Similarly, the option that a simple form of expression can be just an identifier is unchanged in the abstract syntax: Expr = · · · | Id
ex
but the type of the −→ relation becomes: ex
−→: P((Expr × Env × Σ) × ScalarValue)
and the SOS rule has to obtain the location (from env) before it can use the location to access the value (from σ ): e ∈ Id ex (e, env, σ ) −→ σ (env(e))
Christopher Strachey –who made many wise observations about programming languages– referred to env(id) as the left-hand value and σ (env(id)) as the righthand value of id. These terms obviously derive from assignment statements but are useful in discussing differing evaluation modes in other contexts including parameter passing (see Section 5.4).
Id
ScalarLoc
ScalarVal
x l
v
y
Env
!
Fig. 5.3 Identifiers sharing a location
5.2 Abstract locations
71
The following SOS rule shows that newlocs is a one:one mapping5 whose domain is exactly the set of identifiers for local variables of the Block and whose range is disjoint from any locations in use in the current σ . After executing the body of a block, the state that results from executing the whole Block is found simply by restricting σ 0 to the set of locations that existed before the Block was executed. There is less to do than in the model above because identifiers from the context of the block that were redeclared within the block had different locations under which their values were stored: m
newlocs ∈ (Id ←→ ScalarLoc) dom newlocs = dom vm rng newlocs ∩ dom σ = { } env0 = env † newlocs † · · ·
The definition of env0 is completed below when procedures are added to the language. Figure 5.4 provides an annotated version of Figure 5.1 showing the environments and states at key points. As predicted in Section 5.1, deriving (from σi0 ) the state that results from the whole block is easier in the presence of env because the locations form the appropriate bridge (notice that, in Figure 5.4, σi0 retains the value for la but that this location is not in env0 ). It is important that ScalarLoc is a set of unanalysed tokens. Were it, for example, to be made equal to some form of number (N) it would be unclear whether a program could perform address arithmetic. Whilst there are programming languages that allow such manipulation, this is not the intention here and the constraint is again made completely clear by the choice of semantic objects. The choice of new locations in the SOS rule for blocks is non-deterministic but, given the preceding point about the set ScalarLoc being just tokens, there is no program that can be influenced by the choice. Furthermore, there is no difference in the resulting state after the execution of the block terminates, whatever choice is made m
5
A one:one mapping Id ←→ ScalarLoc is employed in preference to a normal many:one mapping m Id −→ ScalarLoc, which would require a data type invariant: m
one-one: (X −→ Y) → B one-one(m)
4
card rng (m) = card dom m
one-one(m)
4
∀a, b ∈ dom m · m(a) = m(b) ⇒ a = b
or:
72
5 Block structure program begin bool a; int i int j; a := true; i := 1; j := 2; env = {a 7→ la, i 7→ li, j 7→ lj} σ = {la 7→ true, li 7→ 1, lj 7→ 2} begin int a; int b; a := 1; j := 3; b := 7 env0 = {a 7→ ln, i 7→ li, j 7→ lj, b 7→ lb} σi0 = {la 7→ true, ln 7→ 1, li 7→ 1, lj 7→ 3, lb 7→ 7} end; env = {a 7→ la, i 7→ li, j 7→ lj} σ 0 = {la 7→ true, li 7→ 1, lj 7→ 3} if i = j then a := false fi end end
Fig. 5.4 The example of Figure 5.1 annotated with environments and states for rng newlocs. Interestingly, there is a strong technical reason for showing the nondeterminism. The obvious way to compile blocks is to reflect the stack structure of blocks6 and to allocate the next n machine addresses for the local variables of any block; on exit from the block, the stack pointer is simply set back to the machine address on block entry and a sibling block would reuse the same addresses for its local variables. But this is not the only possibility: a compiler could compute different addresses for all blocks contained in one scope (this would create space for all contained blocks regardless of the fact that sibling blocks cannot be active at the same time). It is straightforward to show that this is an allowable implementation of the non-deterministic choice of locations; a much messier equivalence proof would be required if natural numbers were used for locations and the description essentially “bolted in” the stack implementation. As observed, careful choice of abstractions such as tokens for ScalarLoc makes properties of a language description manifest without needing to draw out consequences from detailed sequences of state transitions.
6
Stack variables (in blocks and procedures) can be contrasted to “heap” variables, which are discussed in Section 6.4.
5.3 Procedures
73
5.3 Procedures Language issue 22: Procedures and functions The idea of separating out portions of a program that can be invoked at different points in a program has been around since the time of Alan Turing (Gauthier van den Hove [vdH19] discusses Turing’s use of bury/disinter and shows need for “modifying programs” (or indirect jump)). In languages with nested scopes, this idea becomes more interesting and careful models fix essential issues about the binding of variables. Procedures are normally invoked using some form of call statement (being invoked in a statement context, procedures do not normally return values); functions are used in expression contexts and normally return at least one result value. Not only does the use of procedures and functions make it easier for a reader to understand a program, they also ensure that subsequent modifications are applied to all uses. Procedures are considered first — functions are discussed in Section 6.5. Procedures are named and their definitions (ProcDef ) are local to a block: m
Block :: var-types : Id −→ ScalarType m proc-defs : Id −→ ProcDef body : Stmt∗ In the same way as variable names are local to the block in which they are declared, any procedures declared in a block are only known within their declaring block. As in Section 4.2, no attempt is made to subdivide the class of Id; since their written forms are taken to be the same, checking for disjointness of variable and procedure names is left to the context conditions. A possible abstract syntax for procedure definitions lists the names of parameters and separates their types.7 The body of a procedure is shown as a Stmt: ProcDef :: params : Id∗ paramtypes : ScalarType∗ body : Stmt The TypeMap needed for the context conditions can now be completed (from that in Section 5.1): m
TypeMap = Id −→ (ScalarType | ProcType) ProcType :: paramtypes : ScalarType∗ Notice that only the types of parameters are needed to type check calls. The completed (formal) context condition for blocks is contained in Appendix C — it essentially: 7
This is another place where there is not one single abstract syntax that works well for all purposes: for instance, there are advantages and disadvantages in separating the parameter types from their names.
74
5 Block structure
• checks that names of variables and procedures are disjoint;8 • checks each ProcDef is well formed with respect to a type map containing both the variables known in the context and the local parameter names; • checks that the body of the Block is well formed with respect to a type map containing the local variables and procedures (this includes checking that the argument lists in call statements match the types of the respective parameters). So far –and as in Appendix C– procedures cannot be called recursively. The neatest way of modelling recursion needs the concept of “fixed points” (see Section 7.1) over environments. (There is a messy alternative with labelling environments that is not pursued here.) It was noted above that the association between identifiers and locations should be kept in an environment (Env) because it cannot be changed by assignments. The same train of thought makes it sensible to place procedure denotations (ProcDen) in the environment: m
Env = Id −→ Den Den = ScalarLoc | ProcDen ProcDen :: params : Id∗ body : Stmt context : Env There is an important and profound language issue in the binding of non-local names in procedure definitions. Consider the procedure p in the block depicted in Figure 5.5: the (parameterless) procedure p has a reference to the non-local identifier i; nearly all programming languages are defined so that this is taken to refer to the declaration in the closest embracing block. To emphasise this point, notice that p is called from an inner block that declares a separate variable i. The reference to i in the procedure definition has no connection with this new variable.9 Language issue 23: Bindings of variables in procedures A language designer must take a position on static (lexicographic) versus dynamic (call chain) semantics. ALGOL 60 chose lexicographic binding and most subsequent languages adopted this convention. Early versions of Lisp implemented dynamic binding in spite of the fact that McCarthy was motivated by the lambda calculus (which he frankly confessed that he did not fully understand at the time). Later versions of Lisp and Scheme support static binding. Turning to the Call statements, which invoke procedures: Stmt = · · · | Call 8
Notice the decision to forbid variables and procedures having the same name: this restriction is not essential and is shown solely as an illustration. 9 Informal (and even some formal) descriptions of procedure call semantics with lexicographic binding of non-local identifiers often use a “copy rule” that replaces the procedure call with a copy of its definition. This poses a serious danger of getting the wrong binding and the danger is only avoided by a rather complicated (and oft-times imprecise) renaming of clashing variable names.
5.3 Procedures
75 begin int i; proc p() · · · ; i := 1; · · · end .. . begin int i; .. . call p() .. . end
.. . end
Fig. 5.5 Lexicographic binding of non-local names in procedures For now, arguments are restricted to be identifiers:10 Call :: procedure : Id arguments : Id∗ Well-formedness is checked by:) The semantic rules for different parameter passing modes are discussed in the remainder of this chapter and the two predominant modes are described in Appendix C. What is common to the two rules for a call to a procedure, say p, is that its denotation is retrieved from the environment in which the call is written; that procedure denotation includes the environment of the context where the procedure was declared; the denotation also contains the list of parameter names and the body of the procedure. A local environment is generated (differently in the two parameter passing modes) and the state extended in the case of call-by-value parameter passing. The body of the block is then executed using this environment and state. Notice that type information is not required in the procedure denotations because the body of the procedure has been type checked against the information about the types of the parameters. 10
Restriction to identifiers means that the same syntax can be used for call by reference (see Section 5.4.1) and call by value (see Section 5.4.2); the latter case can use general expressions as arguments.
76
5 Block structure
Language issue 24: Array arguments Issue 21 explains how nested blocks can declare arrays whose bounds are defined in terms of variables whose values are set in outer blocks. Arrays can be passed as arguments to procedures so that the bounds of the argument determine those of the parameter (see Section 5.5). Although not the main topic of the current book, there are points at which it is worth drawing attention to the challenges of implementing features in high-level programming languages. As pointed out in [vdH19], finding a way of keeping track of the accessible stack variables in a language (ALGOL 60) with nested blocks and recursive procedures required the invention of Dijkstra’s “display” mechanism. This idea became a key test bed for the justification of implementation ideas with respect to formal language descriptions — see [JL71, HJ70, HJ71].
5.4 Parameter passing There are many modes in which arguments can be passed to procedures.11 The two most widely used of these are to pass the address of the argument or to pass its value; these are modelled in Sections 5.4.1 and 5.4.2 respectively. Further alternative parameter passing modes are discussed in Section 5.5. The outline program in Figure 5.6 provides a basis for comparing parameter passing modes.
program begin int i, j; proc p(int x, int y) i := i + 1; x := x + 1; y := y + 1 end .. . i := 1; j := 2; call p(i, j); write(i, j); call p(i, i); write(i, j) end end
Fig. 5.6 Parameter passing modes to procedures
11
The terminology used here is: “arguments” are what occur in the call statement (or function reference); “parameters” are the names used within the header of the procedure or function definition. ALGOL 60 uses the terms “actual parameter” and “formal parameter”.
5.4 Parameter passing
77
Language issue 25: Parameters in object-oriented languages Language designers normally select one or two different parameter passing modes to be available to programmers (see Issue 27 on how to distinguish the passing mode of each parameter if there is more than one mode). This issue is somewhat different in object-oriented languages — see Chapter 9.
5.4.1 Passing “by reference” One option for parameter passing is to pass the address of the argument — at least for scalar values, this is a very simple idea. Different languages employ the terms “by reference” or “by location” for this mode of parameter passing.12 There are several reasons why languages allow some form of “by reference” parameter passing: • it avoids copying values (this point is more important when considering arrays etc. — see below); • it makes it possible, for example, to switch the values of arguments;13 • it provides a way of returning more than one result (effect) of a procedure invocation.
program begin int i, j; proc p(int x, int y)
cenv = {i 7→ li, j 7→ lj}
lenv = {i 7→ li, x 7→ li, j 7→ lj, y 7→ lj} σ = {li 7→ 1, lj 7→ 2} i := i + 1; x := x + 1; y := y + 1 σ 0 = {li 7→ 3, lj 7→ 3} end
.. . i := 1; j := 2; call p(i, j) end end
cenv = {i 7→ li, j 7→ lj} σ = {li 7→ 1, lj 7→ 2} σ 0 = {li 7→ 3, lj 7→ 3}
Fig. 5.7 Parameter modes: pass by reference 12
This is also a restricted form of ALGOL 60’s “by name” parameter passing mode — see Section 5.5. 13 This is known as “Jensen’s device”.
78
5 Block structure
The program in Figure 5.7 is decorated with the environment (env) and state (σ ) at key points in the execution. The invocation of p(i, j) passes the location of the outer variables i and j to x and y respectively; within this invocation of p, references to x are effectively the same as references to i (and the same is true of y/j). Therefore after the execution of p(i, j) the values of i and j are both 3. A subsequent invocation of p(i, i) would essentially equate all of the addresses of i, x and y and the resulting value of i would be 6. The SOS rule for Call in the case of “by location” parameter passing is actually simpler than that in Section 5.4.2 because the left-hand value of an identifier can be obtained with env(args(i)). Thus:
It should be noted that there is a serious complication with this mode of parameter passing in that a reader of a program cannot assume that different identifiers denote distinct variables; someone reading a program (even its original author after some elapsed time) might miss quite subtle errors deriving from an assumption of separation. It is useful now to see how clean ideas can be combined. If the model of arrays discussed in Section 4.3.2 is changed so that: m
Env = Id −→ Den Den = ScalarLoc | ArrayDen | ProcDen m
ArrayDen = N∗ −→ ScalarLoc
then it is trivial to change the semantics for Call so that elements of arrays can be passed as arguments in the “by reference” mode. Such generalisations become mandatory for clear presentations of algorithms such as those that manipulate “Btrees” (see [Knu73, §6.2.4]) where B-tree nodes need to be passed by reference (or by value-return) to achieve efficient updates. Language issue 26: Array slices It is, in fact, possible to go much further. PL/I allows, for example, a “slice” of a two-dimensional array to be passed as an argument to a parameter that is declared to be a (one-dimensional) vector. This facility can be completely general: arbitrary dimensions can be sliced to access arrays of any lesser dimension. The preceding language issue indicates another place where it is easy to define things on a mathematical abstraction that are non-trivial to implement. To take the two-dimensional case, an n × m array must be mapped onto the linear addresses of the hardware either in row-dimensional or column-dimensional order. Whichever mapping order is chosen, a slice on one dimension will be in contiguous store but
5.4 Parameter passing
79
the other will be fragmented. By-reference passing of fragmented slices requires considerable ingenuity on the part of the compiler writer. The meta-point here is that failure to find a neat mathematical abstraction is almost certainly an indication that user comprehension and/or the ability to compile a language feature will be compromised; successful mathematical abstractions might still be challenging to map onto an unforgiving von Neumann architecture.
5.4.2 Passing “by value” Parameter passing “by value” does just what its name suggests. To use Strachey’s terminology (see Section 5.2) it is the right-hand value that is passed to the called procedure. The semantics of Call essentially creates a local block introducing new locations for the arguments; the values of the arguments from the call are then installed as the initial values of these new locations. In contrast to call by reference, assignment to a named parameter has no effect outside the block. (However, assignments to non-local variables can cause side effects and this point becomes important when considering functions — see Section 6.5.) Just as in a block written by the programmer, the local variables disappear on exit from the called procedure. The semantics for this form of call are included in Appendix C, and Figure 5.8 indicates the values of env/σ for the specific program under call-by-name parameter passing. In the semantics of Call, the value of a single identifier can be obtained by σ (env(args(i))) for each argument. Figure 5.6 has used simple identifiers as arguments so that the contrast with passing “by location” from Section 5.4.1 can be made, but there is, in fact, no reason with “by value” why the arguments should not be general expressions. In this case, the semantics would have to construct a list of ex values using (args(i), env, σ ) −→ vl(i). Language issue 27: Marking parameter passing modes In a programming language that offers more than one way of passing arguments to parameters, there must be a way of marking which mode is to be selected. Interestingly, ALGOL 60 makes “by name” parameter passing the default and any parameter names that are to be passed by value must be explicitly listed in the hvalue-parti. Pascal uses “by value” as its default and requires that by reference parameters are marked var in the parameter list. PL/I takes a different path: the mode of parameter passing is determined by the form of the argument: an expression argument is passed by value whereas a simple identifier is passed by reference.
80
5 Block structure program begin int i, j; proc p(int x, int y)
cenv = {i 7→ li, j 7→ lj}
lenv = {i 7→ li, j 7→ lj, x 7→ lx, y 7→ ly} σ = {li 7→ 1, lj 7→ 2, lx 7→ 1, ly 7→ 2} i := i + 1; x := x + 1; y := y + 1 σ = {li 7→ 2, lj 7→ 2, lx 7→ 2, ly 7→ 3} end
.. . i := 1; j := 2; call p(i, j)
cenv = {i 7→ li, j 7→ lj} σ = {li 7→ 1, lj 7→ 2} σ = {li 7→ 2, lj 7→ 2}
end end
Fig. 5.8 Parameter modes: pass by value
5.5 Further material Projects Because the languages being considered are themselves getting interesting, there are many projects that the reader could now enjoy. For example: 1. The syntax of iterative for loops is a suggested project in Section 2.3; the semantics of the option to bind the control variable as local to the statement could now be fully explored. 2. It is relatively straightforward to write out the semantics of parameter passing “by value/return” because it works like a combination of “by value” parameter passing and the creation of a new block (but remember that a value must be passed back at the end of the procedure execution). There is a need for a context condition to avoid two different values being passed back to the same variable. 3. ALGOL 60’s full by name parameter passing is slightly trickier and requires that there is a check that the argument is only a single identifier if the parameter is used in a “left-hand” context. 4. Modify the description in Appendix C to allow arrays to be passed as by location arguments. A slightly more ambitious version of this project would include the ability to pass “slices” of arrays (the reader might also want to think about the attendant need for functions that make it possible to determine the lower and higher bounds of a dimension lbound/hbound). 5. Thinking about “separate compilation” is interesting because it focuses on what information is needed in descriptions of the interface.
5.5 Further material
81
6. Pascal offers an intriguing with statement that unfolds the names of the fields in a record.
Further reading It was indicated in Section 3.3 that there are various ways of recording a semantic relation (between pairs of Program/ Σ and Σ); now that environments (Env) are involved it would also be possible to emphasise their relative constancy by writing them separately from the main relation — e.g. ex
(rhs, σ ) −→ v env ` st (mk-Assign(lhs, rhs), σ ) −→ σ † {lhs 7→ v} The clear separation of the environment (Env) from the state (Σ) is important and has an interesting history. The early operational semantics work in the IBM Vienna Lab focussed on the PL/I language. This was a huge undertaking; an outline of the effort is given in [AJ18, §3] and a first-hand account in [LW69]; further connections are also traced in Chapter 11. Jan Lee introduced the term Vienna Definition Language (VDL) [Lee72] (which is not to be confused with VDM). The state of the VDL operational descriptions of PL/I was huge and among other things included a stack of environments for all contexts that had been entered but not completed. This had unforeseen consequences when it came to basing proofs on VDL semantics: the property alluded to above that the environment is the same after any statement as it was before that statement required a messy argument because of the presence of the stack of Envs. Peter Lucas wrote the first such “twin machine” proof [Luc68] but even the more developed [JL71] spent more space on this lemma than on the real core of the design. These difficulties were identified as shortcomings of the operational approach and contributed to the move to denotational semantics (see Section 7.1), where the separation of the environment from the state was almost mandatory. Subsequent to the Vienna group moving to the denotational approach [BBH+ 74] (see [Jon01] for more details), Gordon Plotkin proposed Structural Operational Semantics (SOS), where the separation of environment from store is also clear [Plo81].
Chapter 6
Further issues in sequential languages
As made clear at the beginning of this book, it is not the aim to cover all possible language challenges. This chapter mentions some interesting extensions to languages and either sketches an approach to their models or provides references to where such models are developed. The material here concerns only sequential features of languages — mainly in the ALGOL or Pascal families; material on concurrency in the spirit of Eiffel [Mey88], Go [DK15] or even Java [GJSB00] (i.e. concurrency in an object-oriented context) is deferred to Chapter 9. Here again, an important message is that the meta-language introduced to cope with the semantics of languages as simple as that in Chapter 3 suffices to describe programming languages that have been –or are still– used to build significant computer applications.
6.1 Own variables There is an obvious issue of making the effect of executing a program visible beyond its execution. Language issue 28: Effects of a program There are many ways in which programs can have an influence beyond their execution — for example: • Input and output are discussed in Section 4.3.1; • Updating databases is touched on in Section 4.4 and further discussion is in Section 9.7; • Object-oriented programs can be linked to object stores (see Chapter 9). Within a program, there is a related issue of how to retain the values of blocklocal variables beyond an execution of a Block. Note that, in Chapter 5, even the locations of local variables are discarded at block exit.
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
83
84
6 Further issues in sequential languages
Language issue 29: Retaining values of block-local variables It is sometimes useful to arrange that the value of a variable on entry to a block is influenced by the value associated with that name from the previous execution of that block. The own variable feature of ALGOL 60 proved to be one of its most contentious (see, for example, [Knu67]). The intention is clear: a programmer can add the qualification own to a declaration and such variables do, precisely, retain their values between block executions. Nor, in the simple cases, is there any difficulty in providing a model: all that is necessary is to retain the location of the variable in the store and find a way of recording that location in a place where it can be retrieved at block entry. In fact, some descriptions of ALGOL 60 (see [AJ18]) simply add a fictitious outermost block containing the whole program and generate the locations of own variables in that encompassing block. This trick neatly finesses some of the “trouble spots” relating to own variables. Recall however that inner blocks in ALGOL can declare array dimensions that depend on the values of non-local variables. If an own array variable has such dynamic bounds, there is no obvious way in which the value of the array can be retained between block executions. This is a case where attempting to write a formal description of a language could have readily located an issue that needed resolving by its designers.
6.2 Objects and methods The idea of “object-oriented” languages has become a major topic of programming language theory and practice. The reason that such languages are important in this book is the role they play with concurrency and, after that general issue is tackled in Chapter 8, Chapter 9 is devoted to a concurrent object-oriented language. It is, however, useful to build links between the material in Chapter 5 and object-orientation independently of the topic of concurrency and that is done in this section. Looking at what might be loosely called members of the ALGOL family of languages, blocks are effectively executed in the same way as other statements in that, when execution reaches the begin bracket of the block, the whole block is executed to its end bracket1 at which point the block is complete. During the execution of the block, any local variables are created (on the stack) but they disappear on completion of the block. The situation with procedure calling is slightly more complicated but in essence similar. On encountering a Call, the appropriate procedure body is located and (after parameter installation) executed to its end at which point all trace of any variables within the procedure is expunged. 1
This ignores the possibility of abnormal exit: this issue is tackled in Chapter 10 but it is both true and important that it does not change the discussion here.
6.3 Pascal variant records
85
This forgetting of variables from blocks or procedures is made clear by the fact that the statement after a block or a call uses the environment that sets the context prior to that action. Objects were first implemented in the Simula language [DMN68]. As its name hints, the application was writing simulation programs. The specific challenge was writing Monte Carlo simulations of ships docking at piers. Faced with the challenge of writing programs that represented arbitrary numbers of physical-world objects, the designers of SIMULA (Ole-Johan Dahl and Kristen Nygaard) realised that blocks could have multiple instantiations to stand for such things as the boats in their simulations. Thus blocks can be thought of as defining classes, whose instances are objects. Furthermore, procedures associated with a block provide a guide to what become the methods of the class. But there are three essential differences to normal blocks: • Firstly, for the intended purpose, objects must retain their values between uses. (It is perhaps worth comparing this with the idea of own variables in ALGOL 60 which are discussed in Section 6.1.) This means that the semantic objects for a language like Simula need at least a mapping from object References to a VDM record that has as one component a mapping from (local) variable names to their values. • Secondly, since objects do not have the same ephemeral existence as Blocks, there will need to be a way of garbage collecting objects that can no longer be used. • Thirdly and, perhaps most interestingly, the scope of method names is deliberately external to the objects (in contrast to procedures in ALGOL-like languages, which methods otherwise resemble). A full description of such a sequential object-oriented language is straightforward to construct but, as is made clear above, the real interest here is to use objectoriented concepts in tandem with concurrency (see Chapter 9). A major problem with concurrency is “data races” and it turns out that the localisation of state in OOLs helps ameliorate this problem. It is worth observing one further advantage of OOLs: the aim of abstract data types is to insulate users from the representation of complicated data types. The fact that data is internal to objects and that the only interface is through the methods ensures that the representation details are hidden from users of the class and they can be changed without affecting users of the class.
6.3 Pascal variant records Pascal is a well-designed and clean language from one of the world’s greatest designers of programming languages: Niklaus Wirth and Tony Hoare provided key insights for ALGOL W [WH66, BBG+ 68] and Wirth was the major designer of
86
6 Further issues in sequential languages
both the Modula series [CDG+ 89] and the Oberon series [Fra00] of languages.2 The Pascal feature of variant records has however some messy consequences and it is interesting to look at how a formal description pinpoints the problems. Language issue 30: Variant records It is sometimes useful to declare a record type that, as well as some common fields, indicates that there are variants of the type; these variants can have different numbers of fields with distinct sets of names and their types (and thus sizes) do not need to match. An example declaration of a variant record might be: var r: record
f : Type1; case b: {OPTION 1, OPTION 2} of OPTION 1: {x: Type2} OPTION 2: {y: Type3} end
(Here Type2 and Type3 could themselves be complicated records — even variant records.) There are both implementation efficiency and program clarity arguments for some form of variant record feature. One of Wirth’s strengths as a language designer is that he always appears to have a clear idea of how a language feature can be implemented and one of the initial arguments for variant records was that there was a way of saving store by overlaying different record types. With ever-cheaper store, the more enduring argument is that a programmer’s intentions are clearer with a variant record than with two separate record declarations that a reader has to compare line by line to spot what is common and where the differences are located. The form of variant record in the example above is a tagged variant record. It is correct to write statements such as: r.b := OPTION 2; r.y := · · · ;
But this shows that the model of records envisaged in Section 4.3.3 is no longer adequate. Changing the model is not entirely trivial. The deeper problem is that the distinction that says that environments are fixed at the block level no longer holds: the first assignment above changes the identifiers and their respective locations. Furthermore, an assignment: r.x := · · · ;
implicitly changes the tag. If this were not enough complications, untagged variant records are allowed where there is no explicit tag field within the record. Coupled with the ability to pass values of such types by say a value/return parameter mechanism, this collection of decisions greatly complicates the model of Pascal in [AH82] 2
Perhaps he would prefer to have Euler [WW66] forgotten because one of its key design objectives was rather limiting.
6.4 Heap variables
87
and re-reading pages 180–186 of that reference brings out the pain that Derek Andrews and Wolfgang Henhapl experienced in having to model the interaction of such a collection of features.
6.4 Heap variables Tackling applications related to artificial intelligence (AI) in general and machineassisted reasoning in particular prompted Herb Simon and Allen Newell to develop the IPL-V language [New63, SN86] and John McCarthy to design Lisp 1.5 [ML65], which are both languages that support list processing. In contrast to arrays (or records), lists are seen as dynamic data structures where individual data items are linked. Such links have to be in close correspondence with machine addresses.3 In fact, Lisp used the terms car/cdr to stand for “contents of address register” and “contents of decrement register” which directly reflected the structure of the IBM 7xx machine that was used for the initial implementation. Language issue 31: Dynamic data topology Computer applications that are best implemented with arbitrary and dynamic graphs of elements indicate a need for programming languages that directly support list processing. Variables declared in Blocks are accessed via their names and are referred to as stack variables because their lifetime is governed by entering and completing blocks, which means that they can be allocated on a (last-in-first-out) stack. In contrast, dynamically created values are referred to as heap variables. Lisp-like languages organise everything in lists.4 Pascal supports both stack and heap variables. A natural programming style is to have records whose values can contain both application data and pointers to other instances of records. In the meta-language being used in this book (VDM), records can be nested: R :: v : N n : R In a programming language, a pointer to the record is used (that pointer can of course be nil). Language issue 32: Declaring and using pointer types Languages with both stack and heap data need a way to distinguish a variable that contains a value of a given type from a variable that contains the address of a value of that type. Similarly there need to be ways of distinguishing the use of an identifier to access the value of a pointer variable in contrast to using it as a pointer to another element. 3
It is of course possible to simulate list processing with arrays using the array index as a surrogate for machine addresses, but the same difficulties recur. 4 This includes programs themselves, which makes it possible for AI applications to change programs dynamically.
88
6 Further issues in sequential languages
In Pascal, the dynamic creation of a data element of some type is achieved by executing a new statement. At the implementation level, this implies the existence of a free-storage manager that tracks unused storage and allocates free space on each call. There are a number of related programming pitfalls associated with list processing. An obvious difficulty is that the free-storage manager cannot create arbitrary numbers of new addresses because any machine has a finite store and address space. This problem is particularly acute for programs that execute for a long period of time. One way of ameliorating this problem is to offer a dispose statement that a programmer should use to return surplus addresses to the free-storage manager. But this approach has its own dangers. A program might be designed to create a complicated graph-like data structure where pointers are copied and can occur in many places. It should be a programming error to dispose of an address that can still be accessed via another path: such dangling pointers can result in unpredictable behaviour that –especially in the presence of concurrency– can be extremely difficult to debug. An alternative way forward is to make the implementation responsible for recognising when elements of data can no longer be reached. So-called concurrent garbage collectors are highly delicate pieces of code whose requirement is that they should not affect the semantics of the program they are meant to assist (see for example [JL96]). Given that the idea of having a surrogate for machine addresses has been introduced in Section 5.2, it is not difficult to devise a semantic model of a language that includes heap variables. The key is to recognise that the set of Values must include Locs. Such a definition is not written out in the current book because most of the issues (including garbage collection) are discussed in the context of object-oriented languages in Chapter 9. It is worth mentioning that garbage collection is one of the places where it is not difficult to write a mathematical description but its implementation can be rather expensive (in both running time and programmer ingenuity). There is a variety of language features relating to heap storage. For example PL/I has a notion of regions, which are disjoint spaces that can be managed as whole collections. PL/I also includes the unwise decision that programmers can obtain the addresses of stack variables. This last point emphasises that making machine addresses data items that programmers can manipulate is extremely dangerous because a program can access or change storage that is completely outside its own collection of variables. Reading or changing data that belongs to the operating system has probably cost more money than any other feature of programming languages. Introducing a proper type structure that forbids any modification of addresses is one useful step but it does not prevent malicious code being written in a language like C. Running all programs on top of a virtual machine can provide a level of security but hardware implementation of something like capabilities [Lev84] is the only really safe solution.
6.5 Functions
89
6.5 Functions In most respects, the issues around modelling functions are the same as those addressed in Chapter 5 for procedures but there are some additional points that are of interest for models of languages. One important distinction is that, whereas procedures are invoked in a statement context, functions are activated as expressions. As such, functions should obviously be given a return type in a language which is strongly typed. Language issue 33: Pre-defined functions ALGOL 60 defined some functions (at a notional outermost scope) that can be used anywhere within a program. This can be modelled as an encompassing block. An abstract syntax for programmer-defined functions can be given as a simple extension of the language in Chapter 5: FunDef :: type : ScalarType params : Id∗ paramtypes : ScalarType∗ body : Stmt There is no difficulty in extending this to return non-scalar values. Some care with regard to matching dimensions is required if array values can be returned.
6.5.1 Marking the return value A related question is how the portion of the program that defines the function (its body) should identify the value to be returned: • A language can require that an explicit return statement identifies the value to be returned; the programmer would write something like: function f (· · ·)
···
return(e); end
···
As well as causing evaluation of e to yield the value to be returned to the calling context, executing the return terminates execution of the body. • In some languages (including ALGOL 60) the return value is indicated by an assignment to the name of the function as in:
90
6 Further issues in sequential languages function f (· · ·)
end
··· f := · · · ···
The value returned is that of the last assignment executed before the body completes (i.e. the assignment does not cause execution of the body to terminate). • In languages where statements have values, the value of the function can be the value of its defining body. In languages that have an explicit return statement, that statement can be placed inside other phrases such as loops, blocks etc. Modelling abnormal termination is itself a problem whose discussion is postponed to Chapter 10.
6.5.2 Side effects Both procedures and functions can, in most procedural languages, give rise to side effects in that statements to be executed in their body part can assign to variables that are non-local to that body or even perform input/output. Procedure calls in a sequential language are executed in a clearly defined order. Function calls are initiated in expression contexts such as: x := f (x) + g(y) This can open up a form of non-determinacy that must be faced in a semantic description. A –possibly unexpected– feature interaction is with the fact that language designers do not typically constrain the order in which sub-expressions are evaluated. There are good reasons for this: • compiler writers are normally faced with a limited set of fast registers and will want to optimise their use — this can result in evaluating sub-expressions in non-obvious orders; • an even more extreme optimisation is that a compiler might be written to evaluate “common sub-expressions” only once. Thus the fact that function calls occur within expressions gives rise to some messy questions about non-determinacy (because of side effects). Language issue 34: Contrast with pure mathematical functions If a language allows side effects and does not fix the exact order in which terms in expressions are to be evaluated, expressions that look as though they use mathematical functions can give rise to non-determinacy. Pascal is a rather clean language but has a most unpleasant surprise for the writer of its formal description. The compiler writer is given permission to re-order how subexpressions are evaluated because the Pascal documentation says that any program
6.5 Functions
91
that gives different results depending on the order of evaluation is deemed to be erroneous. A faithful formal description of Pascal must therefore be at pains5 to specify all possible results and then check that the set of such results has exactly one element. Language issue 35: Side effects from shorthands As well as bringing assignment-statement-like side effects into expressions, shorthands such as x + +/ + +x introduce similar problems to function calls.
6.5.3 Recursion Most of the topics in this sub-section could be discussed in the context of procedures but they are issues which really beg resolution for functions. In particular, recursive procedures can be useful but recursion is almost ubiquitous for functions. Amending the context conditions for BlocksProgram in Appendix C to allow recursion is straightforward: it is only necessary to add the local proc-tpm to the updated environment used in: ∀p ∈ dom pm · wf -ProcDef (pm(p), tpm0 † proc-tpm)
Changing the semantics for Block is more difficult because of the need to store the environment of the procedures (or functions) within their denotations. One way of solving this is to have a separate labelled collection of environments and to store the label of the environment rather than the object itself. A more elegant solution is to accept the recursive definition of environments and to define the value as the relevant fixed point. This idea is explained in Section 7.1.
6.5.4 Passing functions as parameters [*] A useful way of achieving generic programs is to write functions that accept arguments that are functions. For example, a function that returns a sequence that results from applying its functional argument to every element of an argument sequence might be: apply : (X → Y) × X ∗ → Y ∗
apply(f , s) 4 if s = [ ] then [ ] y apply(f , tl s) else [f (hd s)] fi 5
This would require a “small-step” semantics — see Chapter 8.
92
6 Further issues in sequential languages
The type of apply shows that its first argument is itself a function. The function apply could be used for many different purposes (e.g. doubling every number in a list or reversing every string in a list of sequences of characters). Language issue 36: Higher order programming Higher-order functions are a key to achieving generic programs in purely functional languages. With care on the part of programmers, higher-order functions can also be used in imperative programming languages but indisciplined use of imperative features such as side effects can subvert any advantages that might otherwise be gained by this style of programming. Models of full-blown passing of functions and procedures are given in the ALGOL 60 descriptions cited in [AJ18]. Language issue 37: Function types In order to write (a finite form of) a type for a function that can take itself as an argument, it is necessary to have a way of separating out the naming of the function type.
Procedure variables/results In geometry, orthogonal lines are at right angles; more generally, orthogonality has to do with independence. The term is sometimes used in programming language design to argue that values of various types must be subject to the same rules. Language issue 38: First-class objects It can be argued that since –for example– variables can contain integers, integers can be passed as arguments and expressions can yield integers, values of any type should enjoy the same “first-class” status. Earlier projects have indicated that there is virtue in saying that a concept like conditional statements invites consideration of conditional expressions and even conditional references (see Section 2.3). ALGOL 60 extended this argument to labels: since there were constant labels, there should be switch variables to which labels could be assigned and such switch variables could be used in goto statements. The designers of ALGOL 68 went further and argued that there should be variables that could take procedures as values. It is worth examining this plausible-sounding argument in terms of formal models. As Hans Bekiˇc showed in [Bek73], such variables can result in violations of the normal scoping rules. While it is true that passing procedures or functions as arguments to other procedures or functions can only result in them being called whist their context is still active, procedure variables can be assigned values that exist longer than their context. The same problem can arise with returning procedure values from functions.
6.6 Further material
93
6.6 Further material Projects 1. It is interesting to look at functions that, instead of returning a single value, can return a tuple of results. Ways of passing multiple values from a function include side effects and using parameters that are passed either by name or value/return but there is no reason why function types cannot be extended so that a call could be written as: (x, y) := f 2(· · ·)
2. Looking in detail at the semantics of allowing side-effect-inducing expressions like x + + in C is instructive. 3. The semantics of the sequential OOL envisaged in Section 6.2 are not difficult to write. In its simplest form, run-time exception handling can be viewed as a form of procedure call. But exception handlers can be programmed so as to not return to the source of the exception and are thus better discussed in Chapter 10.
Chapter 7
Other semantic approaches
The main focus in this book is on the operational approach to documenting the semantics of programming languages. There are however other approaches and understanding them is both instructive in itself and also throws light on operational semantics by clarifying their relationship thereto. A broad distinction between semantic methods can be made: • “Model-oriented” methods are built around an explicit notion of an (abstract) state of a machine underlying the semantics. • “Property-oriented” approaches attempt to define the semantics in terms of properties of texts in the language. Operational semantics is clearly model oriented in that meaning is given to texts in a language L by defining how those texts transform an underlying abstract state. Denotational semantics makes an important step of abstraction by fixing the semantics of L by mapping its constructs into functions from states to states. It turns out that the states in operational and denotational approaches can be identical for simple languages and this supports viewing both of these approaches as model oriented. Early in attempts to capture the semantics of programming languages, researchers investigated fixing key aspects of semantics by characterising equivalencies between texts (e.g. [Bek64]). This is certainly one way to define semantics by properties and relates to recent research on “algebraic semantics” (see Section 7.5). More prominent in the property-oriented semantics world is the research on “axiomatic semantics”, in which logics are provided for deducing properties of programs written in a language L . Denotational semantics is outlined in Sections 7.1 and 7.2; Sections 7.3 and 7.4 discuss axiomatic semantics. A full study of these approaches would require far more than this short chapter and fortunately good texts exist already — a selection of these are cited in Sections 7.2 and 7.4.
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
95
7 Other semantic approaches
96
7.1 Denotational semantics The step from operational to denotational semantics can be compared to that from interpreters to translators. An operational semantics provides an abstract interpreter that takes a program and a starting state and –for a deterministic language– computes the final state. (The extension to use relations to final states for non-deterministic languages is introduced in Section 3.2.) Denotational semantics maps a deterministic program to a function from states to states. Just as operational approaches provide abstract interpreters that avoid the details required in a machine-code interpreter, the functions that serve as denotations of programs are more abstract than a machinecode program generated by a translator for the language. As is shown below, the abstract states used in denotational semantics are, in simple cases, the same as would be used in an operational description. The denotational description is more abstract than an operational description because the former abstracts away from the initial state required by the operational description. There is however a cost associated with this abstraction: denotational semantics needs some more sophisticated mathematical concepts than underlie operational descriptions.1 This section only outlines the main objectives of the denotational approach and mentions the mathematical challenges. Starting with the observation made in earlier chapters that the effective statements in an imperative programming language are those that change the state, a way of creating a function from states to states is required — for example a meaning function M applied to Assign statements should yield a function:2 M [[mk-Assign(lhs, rhs)]] 4 · · ·
For a simple language such as that in Chapter 3, the type of this meaning function M is: M [[ ]]: Stmt → (Σ → Σ)
The (M ) semantics for assignment statements could be written with the state (σ ) made explicit: M [[mk-Assign(lhs, rhs)]](σ ) 4 σ † {lhs 7→ eval(rhs, σ )}
The discussion that follows about the need to have a uniform way of defining M as having the type Stmt → (Σ → Σ) for any statement argues for having a direct way of defining M [[mk-Assign(lhs, rhs)]] without applying it to the state argument. Fortunately Alonzo Church’s Lambda notation (see for example [Han04]) provides a way of writing unnamed functions. For example, the identity function can be defined in the Lambda notation as: Id = λ σ · σ 1
During the development by Strachey, Scott and colleagues at the University of Oxford, the term “mathematical semantics” was used; use of the adjective “denotational” came later — see [Sto77]. 2 The use of “Strachey brackets” ([[ ]]) is a convention that has no deep meaning.
7.1 Denotational semantics
97
A Lambda expression follows the Greek letter λ with a list (in this case one) of parameter names with the definition of the function after a dot. The Lambda calculus is more than a notation — its semantics is fixed by a theory of equality given by a small collection of equality rules between Lambda expressions. Lambda functions can have type decorations — a specific identity function could be defined: Id = λ σ : Σ · σ
and this point becomes important below. Although it was clear that the typed Lambda calculus had models, until Dana Scott’s ground-breaking research in Oxford, no one had succeeded in showing that there were underlying models of the untyped Lambda calculus. Unfortunately, features of programming languages like ALGOL 60 relied on the untyped calculus. Using Lambda notation: λ σ · σ † {lhs 7→ eval(rhs, σ )}
is a function from states to states and can be used to avoid writing the σ on the left of the defining 4 above: M [[mk-Assign(lhs, rhs)]] 4 λ σ · σ † {lhs 7→ eval(rhs, σ )}
A key goal of denotational semantics is to express the meaning (denotation) of compound statements in terms of the meaning of the components of the compound object. Technically, this notion is that M is a homomorphic mapping from syntactic objects to their denotations. For simple compounds, this works nicely with mathematical composition of two functions defined as:3 f1 ◦ f2 4 λ x · f2 (f1 (x))
Fixing the meaning of compound statements simply composes the meanings of the two statements: M [[S1; S2]] 4 M [[S1]] ◦ M [[S2]]
The mathematical challenge begins to increase with the denotation of while statements. With the identity function (Id) as above and using an obvious notation for conditionals,4 the denotation of While can be written: M [[mk-While(test, body)]] 4 M [[test]] → M [[mk-While(test, body)]] ◦ M (body) Id The fact that the left-hand side of the definition (i.e. M [[mk-While(test, body)]]) also appears on the right of the definition symbol requires some clarification. Given 3
Mathematicians are divided about which order defines composition — the choice makes no essential difference to the rest of the discussion. 4 The conditional can be encoded as a Lambda function but this detail is not germane to what follows.
7 Other semantic approaches
98
certain conditions, such definitions can be considered to define fixed points. Fixed points of recursive definitions can be built up — consider: WH = while i 6= 0 do i := i − 1 od
Where the test is false, the recursive branch of the definition given above is not needed and the whole function is defined to be the identity function. Therefore the pair (0, 0) ∈ M [[WH]]. But once that base element is in M [[WH]], so must be the pair (1, 0) ∈ M [[WH]]. Iterating this process requires that at least: {(i, 0) | i ∈ N} ⊆ M [[WH]]
The set {(i, 0) | i ∈ N} is a fixed point of M [[WH]] because the recursive definition does not force the addition of any further pairs. It is in fact the least fixed point because arbitrarily adding, say, (−7, 0) results in further values. But the least fixed point is the denotation that makes sense for While. Since the least-fixed-point construction creates infinite objects in general, it does not actually offer a useful tool for calculation. But it is the underlying semantics and in terms of that semantics the useful proof rule of fixed-point induction can be justified. It is useful to return to the comparison between operational semantics as offering an (abstract) interpreter and denotational semantics as defining a translation. The mapping provided in the latter case is really an expression of the Lambda notation. This is what Peter Landin envisioned in his important papers [Lan65a, Lan65b]. It must be understood, however, that obtaining an expression for say the application of the meaning function M to a program for factorial does not immediately yield the mathematical function: {(i, i!) | i ∈ N}
To prove this requires properties of the factorial operator. But given such properties, there is a mathematical rule for such proofs. In contrast, using an operational semantics needs not only the properties of the factorial function but any proof has to be an induction over the steps of the computation. This means that there are proofs that are more elegant when based on denotational semantic descriptions than if they were based on operational semantics. Typical cases where denotational semantics shine are to show that composition is associative or that unwrapping a while loop with a conditional preserves the original meaning. The advantages of abstracting denotations become clearer for languages that are modelled with environments: M [[ ]]: Stmt → (Env → (Σ → Σ)) Firstly, there is an expression for
M [[mk-Assign(· · ·)]]: Env → (Σ → Σ)
The environment has been bundled into the definition. Furthermore, the formula: M [[S1; S2]] 4 M [[S1]] ◦ M [[S2]]
7.2 Further material
99
still provides a homomorphic mapping to the richer denotations. Environments are also encapsulated in procedure definitions so that there is an obvious comparison between call by value: Pden = ScalarValue∗ → (Σ → Σ)
and call by reference:
Pden = ScalarLoc∗ → (Σ → Σ)
Unfortunately, this is precisely the point at which the mathematical underpinning becomes more questionable. If there were a strict hierarchy of procedures, types could be associated with each denotation and the typed Lambda calculus would have sufficed for the space of denotations. But, for languages in which procedures can accept arbitrary procedures as arguments, no such ordering could be defined. Strachey and Landin had rather naively continued to use an untyped Lambda calculus as a way of expressing the semantics of languages like ALGOL 60 and CPL [BBHS63], where self application was allowed. Scott raised the alarm and was for a time convinced that there were no models for the untyped Lambda calculus [Sco69].5 Scott, however, went on to provide precisely such a model and, in a series of monographs from the Oxford Programming Research Group in 1969, established what is now known as domain theory. The essence of Scott’s insight was that restricting denotations to monotone and continuous functions provides a sufficient foundation. Although Scott’s models for the untyped Lambda calculus had resolved a key issue in the foundations of mathematics, this did not mean that finding denotations for programming constructs would always be straightforward. The maligned GoTo statement is an example of exceptional sequencing in that a goto can force the closure of an arbitrary collection of dynamic contexts. Even the concept of a Return statement from within a function can have a similar effect and exception handlers present the same sort of challenge. The homomorphic rule suggests that the denotation of any statement should be constructed from the denotations of its components but it is not obvious how to apply this dictum in the case of exceptional sequencing. One solution is to use continuations and this approach is described in Chapter 10. As explained in Chapter 8, non-determinacy is inherent in modelling concurrency and this posed further challenges for denotational semantics.
7.2 Further material Chris Hankin’s [Han04] is more than adequate to provide the necessary background on the Lambda notation.6 5
Just as Cantor had shown that there are more reals than rational numbers by an enumeration argument, it appeared that a cardinality contradiction existed for functions that could take themselves as arguments. 6 Church’s [Chu41] is the original source and includes the wonderfully clear description:
100
7 Other semantic approaches
The history of the evolution of denotational semantics is addressed in several places: • Joe Stoy’s excellent book [Sto77] is still an invaluable source (the Foreword by Dana Scott is extremely useful). • A masterly general biographical article on Strachey from Martin Campbell-Kelly is [CK85]. • Shortly before Christopher Strachey’s untimely death, he wrote jointly with Robert Milne [MS74], which was a submission for the (Cambridge University) Adams Prize. They did not win the award but, after Strachey “shuffled off this mortal coil”, Milne revised the work into a rather challenging two-volume book [MS76]. Both a formal description of Sal and a proof of correctness of a compiler are covered. • The group at the IBM Lab in Vienna adopted a denotational approach when in 1973 they had the opportunity to tackle developing a compiler from a formal description of PL/I. The areas of VDM that relate to language semantics adopt a denotational approach albeit with differences from the Oxford style — see Chapter 10. This story is told in some detail in [AJ18]. Troy Astarte’s thesis [Ast19] expands on the historical context of these events. • The current author taught denotational semantics at Manchester University up to 1996 but on returning to academia in 1999 switched to teaching SOS at Newcastle University. The argument being –as presented by the current book– that a carefully constructed operational semantics is the perfect tool for thinking about the design of a programming language. • The hundredth anniversary of Strachey’s birth was marked by a conference in Oxford and video recordings of the talks are available.7 Two gems among them are Stoy’s talk and the panel discussion where Roger Penrose describes his attempt to interest Strachey in the Lambda calculus and his delayed acceptance. • The origin of the series of theorem proving assistants from HOL [Gor86] through Isabelle [Nip09] was actually LCF from Robin Milner and colleagues [GMW79]. The “Logic of Computable Functions” was motivated by Scott’s work. The LCF implementation was also notable for being the genesis of the original ML (“Meta-Language”) programming language, which evolved into Standard ML [MTHM97]. The topic of basing compiler designs on denotational descriptions warrants some expansion. The Sal language tackled in [MS76] is certainly substantial. The 1973– 76 efforts at the IBM Lab in Vienna tackled PL/I: a denotational semantics for the ECMA/ANSI subset of PL/I is given in [BBH+ 74]; specific details of the work are in Technical Reports [Wei75, Izb75, BIJW75]; and a summary of the approach A function is a rule of correspondence by which when anything is given (as an argument) another thing (the value of the function for that argument) may be obtained. That is, a function is an operation which may be applied on one thing (the argument) to yield another thing (the value of the function). 7
7.3 The axiomatic approach
101
is in [Jon76]. Key aspects of the approach include viewing the run-time state of execution as a representation of the abstract states (see Section 7.3.5) and relating programs that satisfy the concrete syntax to the abstract syntax of the language description. As described in more detail in [AJ18, JA18], the work was terminated when IBM cancelled the machine for which the compiler was being constructed. First as an LNCS [BJ78] –and later as [BJ82]– the aspects of VDM relating to language description eventually received wider publication. Chapter 11 mentions other uses of VDM as a basis for compiler development including the Danish Ada compiler [BO80a].
7.3 The axiomatic approach If program specifications are presented as pre and post conditions:8 pre: Σ → B post: Σ × Σ → B it is possible to reason about program correctness in terms of an operational semantics as follows: st
∀σ , σ 0 ∈ Σ · pre(σ ) ∧ (s, σ ) −→ σ 0 ⇒ post(σ , σ 0 )
But such proofs can become cumbersome. The term “axiomatic semantics” comes from Tony Hoare’s seminal 1969 paper [Hoa69] entitled “An axiomatic basis for computer programming”. This approach offers a far more natural way to reason about program correctness and, moreover, lends itself to support a development method for programs. This section gives enough of an overview of the approach to relate it to operational semantics.
7.3.1 Assertions on states It is both useful and historically relevant to begin with the idea of recording assertions about the state of a computation on a flowchart. A key reference that had a significant influence on subsequent research is Bob Floyd’s “Assigning meanings to programs”: in [Flo67] a program is presented by its “flowchart” but, as well as the instructions and tests being written in rectangles and ovals, logical assertions are associated with the arcs between boxes. 8
The case is made below that post conditions should be relations between initial and final states; Sections 7.3.1 and 7.3.2 follow the historical development where even post conditions were originally taken to be predicates of a single state.
102
7 Other semantic approaches
Figure 7.1 contains a version of Floyd’s annotated flowchart for an algorithm that computes integer division by successive subtraction. The algorithm is straightforward: x is to be divided by y computing the quotient in q and leaving any remainder in r.9 Looking at the decorating assertions, the overall required effect is associated with the exit from the program (just before the oval marked HALT) as: 0≤r0 Some of the assertions can be shown to be mechanically derivable from others but the assertion within the loop is crucial to establishing correctness: r≥y>0 x≥0 q≥0 x = r+q∗y
START x ≥ 0, y > 0 q := 0 x ≥ 0, y > 0, q = 0 r := x x ≥ 0, y > 0, q = 0, r = x r ≥0, x ≥ 0, y > 0, q ≥ 0, x = r + q*y r 0, x ≥ 0, q ≥ 0, x = r + q*y r := r - y r ≥ 0, y > 0, x ≥ 0, q ≥ 0, x = r + (q+1)*y q := q + 1 r ≥0, x ≥ 0, y > 0, q > 0, x = r + q*y
Fig. 7.1 Integer division example from Floyd’s [Flo67] 9
Notice that this fits with the idea that programs extend the instruction set of a machine: Floyd assumed that there was a subtract instruction but not one for integer division.
7.3 The axiomatic approach
103
Clearly, the annotating assertions need to be consistent with the program on which they are placed and rules for checking this are provided in [Flo67].10 While it is true that adding assertions to a program (in Floyd’s case, on its flowchart) requires extra effort from the programmer, their presence makes it possible to prove that the program satisfies its specification (under a clear set of assumptions). In fact, Floyd’s rules provide a way of deriving some assertions from the code plus a minimal set of assertions. Annotations must at least be provided for the final arc and one to mark some point within any loop. The first of these is anyway the specification of what the program should do and an assertion within the loop captures the intention of the loop.11 Floyd’s paper was circulated privately in 1967 and discussed at the 1968 IBM Yorktown conference on the “Mathematical Theory of Computation”. A copy of Floyd’s original hand-drawn figure is given in Figure 7.2. Apart from the trivial difference between lower- and upper-case identifiers, the obvious addition to Figure 7.1 is that Floyd has two lines on each assertion. The second line of Floyd’s annotations provides an argument for termination of the loop and is not examined in detail here because Hoare chose not to include them in his system. Suffice it to say that the termination argument relies of finding a reducing quantity that is bounded from below (a well-founded ordering) — in Figure 7.2, Floyd uses a lexicographic pair. Section 7.4 mentions that even earlier than Floyd, Alan Turing used the idea of adding annotations to a flowchart in [Tur49]. Turing also saw the need to reason about termination and has the lovely comment: Finally the checker has to verify that the process comes to an end. Here again he should be assisted by the programmer giving a further definite assertion to be verified. This may take the form of a quantity which is asserted to decrease continually and vanish when the machine stops. To the pure mathematician it is natural to give an ordinal number. . . . A less highbrow form of the same thing would be to give the integer . . .
The final ellipses contain an expression in terms of two to the power of the word size of the machine!
7.3.2 Hoare’s axioms Tony Hoare’s paper [Hoa69] includes a generous acknowledgement of the influence of Floyd’s paper but takes a crucial step beyond the idea of assertions as annotations. The key innovation is that a logical system can be created for reasoning about programs and consistent assertions. What are now known as “Hoare triples” con10
The paper includes many other interesting technical ideas — some of which are mentioned in Section 7.4. 11 Such loop-cutting assertions become “loop invariants” in Hoare’s approach — see Section 7.3.2. They can also be thought of as local “data type invariants” that are like context conditions.
104
7 Other semantic approaches
[Floyd67]
44
Fig. 7.2 Floyd’s original version of Figure 7.1 tain two logical assertions (predicates) surrounding a program text — they are now written:12 {P} S {Q}
and are to be read as asserting that, if program S is started in a state that satisfies predicate P, any final state will satisfy predicate Q. The predicate P is referred to as the pre condition and Q as the post condition of S. One of Hoare’s claims was that it was not necessary to pin down more details of the domain of these predicates. It facilitates the comparison with operational semantics to assume that they are predicates on the state of the computation, and this is certainly the way in which Hoare triples are most commonly used. An inference system can be defined for deducing valid judgements that are recorded as Hoare triples. “Axioms” (or rules of inference) for the simple language of Chapter 3 are given in Figure 7.3. The use of inference rules is of course famil12
In fact, Hoare originally (in [Hoa69]) chose to present the triples bracketed as P {S} Q.
7.3 The axiomatic approach
105
{P} S1 {Q} {Q} S2 {R} ; {P} S1 ; S2 {R} if
{P ∧ b} S1 {Q} {P ∧ ¬ b} S2 {Q} {P} if b then S1 else S2 fi {Q}
while :=
{P ∧ b} S {P} {P} while b do S od {P ∧ ¬ b}
{P[e/x]} x := e {P}
P0 ⇒ P Q ⇒ Q0 {P} S {Q} consequence {P0 } S {Q0 }
Fig. 7.3 Hoare’s axioms iar from SOS (see discussion in Section 3.2.2) and –as with SOS rules– those in Figure 7.3 are generic in the sense that any valid substitution is taken to be allowed. The first rule in Figure 7.3 is for the composition of two statements and identifies the predicate that characterises the post state of S1 with the pre condition for S2 . An example inference that corresponds to the body of the loop in Figure 7.1 would be: {x = r + q ∗ y} r := r − y {x = r + (q + 1) ∗ y} {x = r + (q + 1) ∗ y} q := q + 1 {x = r + q ∗ y} {x = r + q ∗ y} r := r − y; q := q + 1 {x = r + q ∗ y}
The two hypotheses of that example can both be justified using the fourth rule in Figure 7.3, which is for assignment statements. That assignment rule uses a notion of substitution of an expression for an identifier: P[e/x] is the predicate expression P with all occurrences of x replaced by e. The axiom (with no hypotheses) says that P[e/x] is a valid pre condition for the assignment x := e to achieve a post state that satisfies P. Thus the first hypothesis of the argument above about the body of the loop follows from: {x = (r − y) + (q + 1) ∗ y} r := r − y {x = r + (q + 1) ∗ y}
The most interesting of the rules in Figure 7.3 is the one (while) that addresses loops because it brings in the important notion of a loop invariant. Ignoring the occurrences of b for the moment, the rule states that, if P is a predicate whose truth is preserved by S, then it follows that any number (including zero) of iterations of S will preserve P. The actual rule makes discharging the hypothesis easier to do by noting that S will only be executed in situations where b holds. Furthermore, the conclusion can be strengthened by noting that, when the loop terminates, b cannot hold.
106
7 Other semantic approaches
A key property of the loop in Figure 7.1 could be justified by the following instance of this rule (which uses the result established above for the body of the loop): x = r + q ∗ y r := r − y; {x = r + q ∗ y} r≥y q := q + 1 while r ≥ y do r := r − y; x = r+q∗y {x = r + q ∗ y} q := q + 1 r 0} DIV {0 ≤ r < y ∧ x ≥ 0 ∧ x = r + q ∗ y}
which follows from:
• simple instances of the assignment and composition axioms to verify the initialisation; and • a composition of that initialisation with the result about the loop. The rule for conditional statements (the second in Figure 7.3) should be obvious. The consequence rule notes that, given {P} S {Q} has been established, a triple with a stronger pre condition and/or a weaker post condition must also hold. Note that the rules as given in Figure 7.3 do not offer a way of establishing termination — this and other comments on the method itself are given in Section 7.4. The conditional result is that a program will satisfy its specification if it terminates; this is sometimes referred to as “partial correctness” but the term is not used further in this book. It is interesting to observe that checking programs which contain assertions does not fit the strict distinction between static context conditions and run-time errors. The idea of program verification is certainly that it should be conducted prior to execution on the static text of a program but checking assertions is not –in general– a decidable process because it requires theorem proving. Of more interest for now is that there are two senses in which axiomatic semantics can be viewed as complementary to model-oriented approaches such as SOS: • It should be possible to reason in a natural way about the correctness of programs written in a language L . An axiomatic semantics for L has formal rules for such reasoning and it is possible to mechanise the checking of such rules in a theorem proving system such as Isabelle [NPW02]. Beyond the question of whether formal proofs will be written for programs in L , it should be realised that difficulties in constructing an axiomatic semantics is a warning that even informal reasoning might be error prone. One obvious example is that the proof rule given for assignments in Figure 7.3 does not hold for a language that permits parameter passing by location because an assignment to one identifier could affect the value of what appears to be a distinct variable. • There are well-known dangers in writing “axioms”. In particular, it is difficult with extended sets of rules to be certain that they are “consistent” in the sense that
7.3 The axiomatic approach
107
inferences cannot yield contradictions. The standard way of establishing “consistency” is to show that a model of the axioms exists, and with programming languages this can be done by showing that the axioms are true of some modeloriented semantics. This task has been undertaken in [Lau71b, HL74, Don76]. There is also the question of the completeness of a set of axioms. For a system such as Hoare’s, this asks whether all true statements about programs can be deduced from the axioms. This question becomes rather technical because of concerns about the expressiveness of predicates and the inevitable undecidability of the predicate calculus over arithmetic. An insightful description of the completeness issue is given in [AO19].
7.3.3 Specification as statements Assertions on states as in the style of Floyd are certainly useful in proving that a given program satisfies a stated specification. With some care, such assertions can also be used in program development. But Hoare-style axioms make it much easier to see how a development process can be based on formalism. The idea is to start with a formal specification13 of the program and to use the inference rules to decompose the task. Thus an overall specification can be realised by a decomposition that introduces putative components that are –at that point– only given as specifications. Such decomposition steps are repeated until all of the specifications have been developed to code. The final executable program is the collection of these expansions. This idea has prompted various authors (e.g. Andrzej Blikle [Bli81], Carroll Morgan [Mor88, Mor90] and Ralph Back [BvW98]) to include a “specification statement” in a programming language and for contracts to be included in the Eiffel language [Mey88]. Morgan uses: frame: [P, R] where frame lists the names of variables that can be changed, P is a predicate of one state as the pre condition and R is a relation over two states that is the post condition.14 It is interesting to see how easy it is to extend the language description in Chapter 3 to allow specification statements embedded in a program; such a specification will contain a pre condition (a predicate of a single state) and a post condition (a relation over two states). The differences between Morgan-style specification statements, Eiffel-style contracts or some two-dimensional layout (with keywords distinguishing the predicates) are just concrete syntax details. Thus, extending Stmt in SimpleProgram of Chapter 3: 13
This does not, of course, answer the question of how a formal specification of a complex system is obtained. Research in this area is contained for example in [Jac00] and given a more formal basis in [JHJ07, BHJ20]. 14 The move to relational post conditions is discussed in Section 7.4.
7 Other semantic approaches
108
Stmt = · · · | Spec Spec :: frame : Id-set pre : LogExpr post : LogExpr The semantics of Spec is both partial and non-deterministic so the hypotheses of the SOS rule for Spec require that the pre condition P is true and that the relational post condition holds for the pair of states σ , σ 0 :15 P(σ ) frame C −σ − σ 0 = frame C Q(σ , σ 0 ) st (mk-Spec(frame, P, Q), σ ) −→ σ 0
So, for example:
st
mk-Spec({y}, true, x ≤ y0 ≤ (x + 2), σ1 ) −→ σ2
non-deterministically allows: σ1 = {x 7→ 1, y 7→ 0} {x 7→ 1, y 7→ 1} σ2 ∈ {x 7→ 1, y 7→ 2} {x 7→ 1, y 7→ 3}
There is a danger with specifications that they ask for something infeasible such as finding the largest prime number; the immediately preceding specification can be made unrealisable by changing its frame: { }: [true, x ≤ y0 ≤ (x + 2)]
which would only be achievable if the initial value of y already satisfied the post condition whereas the pre condition specifies that an implementation should work for any state.
7.3.4 Formal development After Hoare’s 1969 paper, it was realised that there was an even more important use for the axioms than reasoning about finished programs: the stepwise development of programs from their specifications could be formalised using the same rules of inference. Hoare published a stepwise development of his famous Quicksort algorithm [Hoa61] in [Hoa71b, FH71] and a variety of formal development approaches followed. These include the program development aspects of VDM from the early 1970s that were eventually published as a book [Jon80]. The reason that having a formal basis for design decisions is important in that their validity can be checked as they are made — long before all of the code is developed: under the assumption that subsequent steps will find valid implementations 15
Detailed syntax and semantics for LogExpr are omitted here.
7.3 The axiomatic approach
109
of the precisely specified sub-components, a proof that the higher-level component is correct can be constructed and reviewed. This topic moves away slightly from language description but is sufficiently important to warrant the diversion and anyway connects with the development of compilers from semantic descriptions of their source languages. Specifications are then an abstraction of the code that can be developed from them. They record what any user of that code needs to know. (The reader might want to look back at the comments about specifications of factorial and sorting in Section 1.5.) As in the artificial example in Section 7.3.3, and in general, such specifications can be non-deterministic. They are frequently also “partial” in the sense that they record assumptions about the initial state. This is important because programs can rarely achieve their post conditions from arbitrary initial states. For states where the pre condition does not hold, the code is unconstrained. It is thus the responsibility of the programmer to ensure that the context of the specified code establishes the pre condition. Carroll Morgan’s Refinement Calculus works nicely with small examples and has the advantage (over the rules in Figure 7.3) that its post conditions are relations between initial and final states.16 A specification of multiplication might be written as a specification statement:17 {r, i, j}: [0 ≤ i, r0 = i ∗ j]
meaning that any program that satisfies this specification is allowed to change the values of the variables r, i, j and must ensure that the final value of r (thus r0 ) is the product of the initial values of i and j. It is essential that the specification uses the initial values of i, j because otherwise the post condition could be satisfied by: r := 0; i := 0 So-called wide-spectrum languages allow specifications and code to be mixed so that it is possible to record a first mini-step of design as: r := 0; {r, i, j}: [0 ≤ i, r0 = r + i ∗ j]
Motivated by Hoare’s rules, an inference system can be defined for judgements that one such mixed expression satisfies another: S1 satby S2 An obvious substitution for the assignment shows that: {r, i, j}: [0 ≤ i, r0 = i ∗ j] satby r := 0; {r, i, j}: [0 ≤ i, r0 = r + i ∗ j] 16
Technically the rule for while loops used below differs from Morgan’s in its handling of termination; the reason for using the VDM termination argument is given below. 17 To provide a compact example, it assumed that the language does not offer a multiplication operator. Here again, there is an echo of the role of programs as providing the route to extending the expressive power of a language.
7 Other semantic approaches
110
The specification statement on the right can be developed (with no need to modify j) to: {r, i}: [0 ≤ i, r0 = r + i ∗ j] satby while i 6= 0 do {r, i}: [0 < i, r0 + i0 ∗ j0 = r + i ∗ j ∧ 0 ≤ i0 < i] od
And the specification of the body of the loop:
{r, i}: [0 < i, r0 + i0 ∗ j0 = r + i ∗ j ∧ 0 ≤ i0 < i] satby r := r + j; i := i − 1
This gives an algorithm that takes time linear in the initial value of j but it is possible to get a logarithmic performance by taking advantage of the ability to change j: {r, i, j}: [0 < i, r0 + i0 ∗ j0 = r + i ∗ j ∧ 0 ≤ i0 < i] satby {r, i, j}: [0 < i, r0 + i0 ∗ j0 = r + i ∗ j ∧ 0 ≤ i0 ≤ i ∧ ¬ is-even(i)]; r := r + j; i := i − 1
and use shifts to multiply/divide by two:
{r, i, j}: [0 < i, r0 + i0 ∗ j0 = r + i ∗ j ∧ 0 ≤ i0 ≤ i] satby while is-even(i) do i := i/2; j := j ∗ 2 od
There is a crucial property of the satby ordering. The technical expression is that the constructs of the programming language are monotonic in this order. Simply put this says that if a program fragment C has been shown to satisfy a specification S — and C contains a component that is given by a specification scomp — then a development of C where scomp is replaced by anything that satisfies the specification scomp will also satisfy S. So, for example: [P, Q] satby while b do [Pc , Qc ] od ∧ [Pc , Qc ] satby C ⇒ [P, Q] satby while b do C od This justifies collecting the steps above to justify that the program: r := 0; while i 6= 0 do while is-even(i) do i := i/2; j := j ∗ 2 od
od
r := r + j; i := i − 1
satisfies the specification {r, i, j}: [0 ≤ i, r0 = i ∗ j]. In the multiplication example above, the specification: {r, i, j}: [0 < i, r0 + i0 ∗ j0 = r + i ∗ j ∧ 0 ≤ i0 < i]
7.3 The axiomatic approach
111
is non-deterministic in that it does not say by how much the value of i should be reduced. This flexibility is used to develop both the linear algorithm in which the reduction is by one per execution of the loop body and the faster algorithm in which i is halved as long as its value remains even. The rules for VDM differ from those for the refinement calculus only in the way that termination is proved. There is also a difference in concrete syntax because VDM specifications have tended to be used on applications where long pre and post conditions do not fit conveniently into a single-line specification statement. VDM specifications are usually displayed vertically with keywords marking the pre/post conditions (see [Jon90]): Mult ext wr r, i : Z rd j : Z pre 0 ≤ i
post r0 = i ∗ j
The VDM rule for sequential composition can be written: {pre} S1 {interface ∧ rel1 } {interface} S2 {rel2 } ; -I {pre1 } S1 ; S2 {rel1 ;rel2 } Where rel1 ;rel2 denotes composition of relations. The decision to write interface (rather than pre2 ) serves to emphasise that the decomposition should actively divorce the sub-components from each other. A small example of such active decomposition can be extracted from the Mult development. While it would not result in incorrect code, a specification of {r, i}: [0 ≤ i ∧ r = 0, r0 = i ∗ j]
fails to separate the sub-components as well as: {r, i}: [0 ≤ i, r0 = r + i ∗ j]
This point is echoed in Section 7.4 on a more interesting example. The issue of termination is, as always, of interest (and more is said about it in Section 7.4). Morgan follows precedent in giving an argument about a reducing value; Dijkstra calls this a variant function that maps single states to a set like the integers. Given that VDM uses relational post conditions, it is more natural to establish termination by saying that the body of the loop should be specified by a well-founded relation (rel) and use the rule:18 while-I
{inv ∧ B} S {inv ∧ rel} {inv} while B do S od {inv ∧ ¬ B ∧ rel∗ }
A summary of a VDM development of Mult can be written out as in Figure 7.4. 18
The relation rel∗ is the reflexive closure of rel.
7 Other semantic approaches
112 pre 0 ≤ i r := 0; pre 0 ≤ i while i 6= 0 do inv 0 ≤ i rel r0 + i0 ∗ j = r + i ∗ j ∧ i0 < i while is-even(i) do inv 0 ≤ i rel r0 + i0 ∗ j = r + i ∗ j ∧ i0 < i i := i/2; j := j ∗ 2 od; r := r + j; i := i − 1 od post r0 + i0 ∗ j = r + i ∗ j ∧ i0 = 0 post r0 = i ∗ j
Fig. 7.4 Annotated version of the multiplication program.
Developments of larger applications show more clearly the importance of employing non-deterministic specifications. For example, the development of a system that needs a free-storage manager might rely on only outline properties such as never being allocated the same address twice. These properties can be recorded and the main application developed on the assumption that an appropriate free-storage manager will be developed. This effectively delays (or separates the task of) making design choices about the specific organisation of free chains etc.
7.3.5 Data abstraction and reification The material on axiomatic descriptions of constructs of imperative programming languages fits most naturally into the material in the current book. But experience has shown [Jon80, Jon90] that the topic of data reification19 is actually more important in specifying and formally developing programs. There is also a link with language description that is explained at the end of this section. An important part of designing any program is choosing data structures that make algorithms efficient. The details of, for example, doubly linked lists have however no place in a specification; they are neither the first issues to be clarified as to what a program should do nor are they of concern to a user of the program who only wants to understand its functionality. It is therefore wise to describe a program in terms of abstract data objects that fit the concepts being specified and to defer the design of data structures that admit efficient algorithms.
19
Most authors use the term data refinement.
7.4 Further material
113
Two examples are: • The Sieve of Eratosthenes is an algorithm for finding all prime numbers up to some given n by sieving out all of the composite numbers. A program that implements the algorithm will almost certainly use a vector of bits where the ith bit being 1 indicates –in the final state– that i is a prime number. But this is one possible representation and introduces messy implementation details that have no place in a specification. A much more perspicuous specification can be written in terms of sets of numbers. Furthermore, the example lends itself to implementations using concurrency (see Section 8.4) and important early steps of the development can be made and verified with the more abstract data representation. • An application sometimes referred to as union/find provides a way of recording equivalence relations. The specification is clearly and briefly described in [Jon90, Chap. 11] in terms of a partition of some arbitrary set X. There is an algorithm due to Michael Fisher and Bernie Galler that uses an ingenious tree representation of equivalence classes. The taste and efficiency of the representation does not justify its incursion into the specification. The process of choosing appropriate representations (or “reifications”) of abstractions has similar monotonic properties to the rules related to satby and thus fits into a natural formal development process. Despite its importance, even programming languages that allow assertions do not support documentation of data abstractions. The closest approximation is the use of libraries as in Java’s Standard Template Library but this is only for a fixed repertoire of abstractions. The links to semantic language description are both general and specific. Generally, the message of using –for example in state descriptions– objects that are as abstract as possible has been emphasised in earlier chapters. Specifically, the choice of an abstract syntax is a clear attempt to avoid clouding a semantic description with the representation details necessary to support parsing. In the Vienna Lab compiler work, [Wei75] describes the connection between abstract and concrete syntax and its role in compiler development; [Jon76] describes how the relationship between the abstract state of the semantic types and the actual run-time state informs the compiler development.
7.4 Further material The literature on program verification and/or formal development is extensive. One attempt to trace the evolution of the field is [Jon03]; an early assessment of Hoare’s axiomatic approach is given in [Apt81], which has been considerably expanded to [AO19] (which was conveniently published 50 years after [Hoa69]). In view of these sources, only a few key steps are noted here together with additional references to those mentioned in Section 7.3: • John von Neumann’s decision to use a form of assertion box in what became [GvN47] has been pinpointed by Mark Priestley [Pri18] to a letter from
114
7 Other semantic approaches
von Neumann to Herman Goldstine dated March 1947. It must be said that the description in [GvN47] is far from clear. • Alan Turing’s “Checking a large routine” [Tur49] does have a clear programme of annotating a flowchart with assertions. This is a remarkable paper: in just three pages Turing gives an inspired motivation for assertions, a proof of a doubly nested program and an argument for its termination. Sadly, neither of these papers had any significant effect on verification research: [GvN47] introduced the idea which became known as the von Neumann (computer) architecture and was studied mainly by people who were designing early digital computers; [Tur49]20 was not known to Floyd or Hoare until after their key papers were published. As noted in [Jon03], van Wijngaarden was at the 1949 conference where Turing gave his talk but he failed (or refused) to link it to his own [vW66a]. • Bob Floyd’s paper [Flo67] (discussed above) certainly set the stage for many subsequent steps on program verification.21 As published, it used a complicated “forward assignment rule” that requires an existential quantifier. The paper does include termination proofs of the two algorithms considered and gives properties that are required of sensible proof rules for programming constructs. (Dijkstra [Dij76] would later formalise such rules as healthiness conditions for his predicate transformers.) • Jim King’s Ph.D. [Kin69] was supervised at CMU by Floyd — King built the Effigy system [Kin71] that both attempted to check Floyd-style annotations to (PL/I) programs and deploy symbolic execution as an additional tool.22 • As well as acknowledging the influence of Floyd’s paper, Hoare’s [Hoa69] cites Aad van Wijngaarden’s [vW66a], which tackles axioms for finite computer arithmetic, and Peter Naur’s [Nau66], which uses general snapshots to record assertions but expressed more as comments than in a formal logical notation. • Hoare (possibly prompted by Floyd’s form of annotating assertions), Dijkstra and others used post conditions that were predicates of a single state. From early publications, VDM used relational post conditions. The consequent inference rules are bound to be somewhat more complicated but unfortunately those in [Jon80] were (to use Peter Aczel’s understatement) “unmemorable”. Aczel showed in an unpublished note [Acz82] that rules for post conditions of two states (a) were better and (b) could be presented clearly. These rules were then employed in [Jon86] and subsequent publications on VDM. Other specification languages such as Z [Hay87], B [Abr96] and Event-B [Abr10] also use relational post conditions. 20
Not only are these proceedings somewhat inaccessible, Turing’s short paper was printed with many typographical errors that impaired understanding — it was “exhumed” and republished in [MJ84]. 21 Floyd’s paper was first available as a mimeographed copy in 1966 and can be seen at: 22 King moved to IBM Research and the current author used Effigy and showed (around 1976) that it could be used to formally develop programs by using Prove/Assume commands to record specifications of undeveloped sub-components.
7.4 Further material
115
• The SIEVE example mentioned above in connection with data reification provides a more compelling example of the desirability of “active decomposition”. The post condition of the whole program specifies that the final state should contain only primes (up to some given n). A natural decomposition in the development of the Eratosthenes program is to have an initialisation phase that puts all natural numbers from 2..n into the state and a second phase that removes composites. Following a weakest pre condition method computes the pre condition of the sieving phase to be exactly the post condition of initialisation. But the sieving process functions perfectly well on any initial state: it will remove composites if there are any. It is for this reason that the ; -I rule of VDM shown above emphasises finding an interface predicate (interface) that the designer can use to separate the sub-components properly. • Although static proofs about programs provide much more assurance than testing, even without such proofs, run-time evaluation of assertions provides a way of detecting errors much closer to their source than trying to trace back from a program crash resulting from corrupt data. This idea was proposed in Ed Satterthwaite’s thesis [Sat75], is used in Eiffel [Mey88] and GCC23 and is employed informally by many industrial groups. • The topic of termination arguments has an interesting history. Turing and Floyd both used formal arguments about reducing quantities; Dijkstra [Dij76] confined himself to predicates of a single state so formalised the idea of reducing quantities with rules about variant functions; VDM uses the fact that termination follows directly if the relation for the loop is well founded. • In addition to the problem of proving that loops do not run forever, there is a danger that they abort in some way such as division by zero or computer representations of numbers overflowing. (This was why van Wijngaarden looked at the axiomatisation of finite computer arithmetic in [vW66a].) Dick Sites in his beautiful thesis [Sit74] describes needing to prove clean termination — the same problem is tackled in [CH79]. • Since King’s early Effigy system referred to above, huge strides have been made in providing software that supports the task of creating (machine-checked) proofs. General theorem provers include HOL-light [Har09], Isabelle [NPW02]. and Coq.24 ACL-2 (the most recent development of the Texas work that began with [BM81]), KIV25 and Dafny26 are examples of tools more closely geared to software development. Returning to the history of the ideas on axiomatic semantics, there is an interesting connection with the famous (1964) Baden bei Wien Working Conference. Hoare did not present a paper but expressed strongly the idea that a language description should be able to leave some things undefined (more in the tone of the current book, one might say “under-defined”). Hoare went on to produce at least two significant 23 24 25 26 Compiler Collection
116
7 Other semantic approaches
drafts of an approach that attempted to be more axiomatic than say McCarthy’s operational semantics. Floyd’s paper was sent to Hoare by Peter Lucas because the Vienna group had been studying it; Hoare saw that Floyd’s assertions provided a key idea that resolved issues with his earlier attempts and quickly wrote the definitive [Hoa69]. Hoare has reflected on this experience in [HJ89] and talked about how he might have done things differently in an ACM recorded interview.27 Hoare and colleagues went on to tackle various other programming constructs including [Hoa71a, CH72] but the attempt to provide an axiomatisation of Pascal [HW73] is incomplete. The only full language description in the axiomatic style appears to be the Turing language [HMRC87]. A more promising avenue that is pursued in SPARK-Ada [Bar06] and “featherweight Java” [IPW01] is to identify subsets of complicated languages that can be axiomatised.
7.5 Roles for semantic approaches Given the range of semantic approaches, it is worth indicating where this author considers their respective contributions are most likely to be effective. Authors of early operational semantic descriptions tended to put too many things into a monolithic “grand” state. This had the effect of making it hard to establish properties of such definitions. Plotkin’s “Structural Operational Semantics” essentially resolved this issue and, for example, the split between environments and states made in denotational descriptions can be mirrored in operational descriptions. The consistent argument throughout this book is that SOS descriptions provide a very productive tool for both language understanding and design. With little need for sophisticated mathematical concepts, features of modern programming languages can be written and read. The argument is often made that compiler designers should base their developments on denotational descriptions but this author would also use an operational description as a basis for compiler design. An unassailable point is that programming languages have a sequential aspect that can be difficult to express in approaches that might look more mathematically elegant. Model-oriented approaches use an explicit notion of the state of a computation; this affords a way of coping with the fact that so-called variables change their values during a computation. Operational descriptions make this explicit; denotational descriptions do have a neat mathematical model of composing functions from states to states. But concurrent threads updating a shared state (as discussed in Chapter 8) are much harder to cope with denotationally precisely because this sort of interference is inherently operational. A further challenge comes with exceptional ordering (as discussed in Chapter 10). The above praise of operational semantics is in no way intended to deny the advantages of denotational semantics for looking at deeper properties of programming languages. One important example is the way that denotational semantics provides 27
7.5 Roles for semantic approaches
117
an understanding of termination: partial functions (from states to states) neatly capture what it means for a while loop to fail to terminate on some inputs. The fact that an operational semantic description will itself yield no result for a non-terminating program is one manifestation of the fact that proofs based on operational descriptions tend to be inductions over the computation. It is also worth noting that there are other spaces of mathematically tractable denotations than functions over states: several authors have used Robin Milner’s πcalculus [SW01] as a target for mapping concurrent object-oriented languages; and “game semantics” has been used for example in [A+ 97]. Moving on to property-oriented descriptions, that considered in Section 7.3 is axiomatic semantics. Although it is possible to base proofs about programs in some language L , something like Hoare’s axioms –or a variant such as the refinement calculus– provide by far the most natural way to verify or develop programs. It is noted above that there are few languages that have a complete axiomatic semantics but a practical way forward is to identify subsets of larger languages about which an axiomatic style of reasoning is practical. Furthermore, designers of languages are well advised to understand where their design decisions make it difficult to provide proof obligations because such features are likely to present challenges even for informal understanding of programs in the putative language. An example of an issue that might be motivated by considering the axiom of assignment given in Figure 7.3 is that this axiom is not valid for languages that permit parameter passing by location (see Section 5.4) because an assignment to the lefthand-side value of one identifier can affect the right-hand values of other identifiers. This might prompt a language designer to consider incorporating parameter passing by value/return. This latter mode does not however offer avoidance of copying data during procedure or function calls. Similar comments can be made about “algebraic semantics” [Koz97, HvS12, HCM+ 16, DHMS12]: such properties –or their absence– can inform language designs. Investigating algebraic properties of concurrency has proved both challenging and revealing. Tony Hoare and colleagues have looked at “unifying theories” in [HH98] — a useful introduction to the “UTP” approach is [WC04]. In particular, UTP can be used to provide insights into the relationships between semantic approaches.
Chapter 8
Shared-variable concurrency
This chapter moves beyond issues present in sequential languages typified by ALGOL descendants. The topic of concurrency is important and challenging in many ways. There are several reasons why programs need to exploit parallelism. • applications such as those that support many simultaneous users are inherently parallel; • using fast processor cycles when some threads of execution are held up waiting for slower external devices; • as circuits approach atomic limits, hardware speed increase and miniaturisation are unlikely to continue to follow Moore’s law and provide the speedup on which society has relied for decades — fortunately, it is now practical to put many cores on a wafer — but this potential parallelism has to be exploitable via software.
8.1 Interference In some cases, programs can achieve rapid execution using parallel threads with disjoint data.1 However, as soon as there is a need for threads to access and change shared data, the resulting concurrent threads become extremely difficult to design: • The number of paths through a sequential program is exponential with respect to the number of branch points; with concurrency, the number of effective paths explodes because of interference from state changes made by concurrent threads. • It is notoriously difficult to debug concurrent programs since executions starting in identical states can progress differently because of interference from concurrent processes. • One particularly unpleasant consequence of the preceding point is that a programmer who is trying to locate the source of erroneous behaviour can add trac1
Often referred to as “Single Instruction Multiple Data” (SIMD) parallelism.
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
119
120
8 Shared-variable concurrency
ing statements that change the timing behaviour in a way that hides the error (this gives rise to the term “Heisenbugs”). Language issue 39: Concurrency There are actually many issues in concurrency: they include interference and its control for mutual exclusion, synchronisation, the transfer of information and deadlock detection/avoidance. Typically, hardware provides low-level concurrency primitives (e.g. a “compare and swap” instruction) for synchronisation. Programming language designers have devised a range of ideas in an attempt to make the design and justification of concurrent programs somewhat tractable. Dijkstra’s semaphore idea (using p/v) is one of the earliest.2 More structured language extensions followed including: • • • •
Conditional critical sections [Hoa72] Monitors [BH73, Hoa74a] software transactional memory designers of process algebras [Hoa85, Mil89, Bae90] attempted to eschew the notion of shared state but communication-based concurrency does not, in fact, slay the dragon of interference.3
Modelling such constructs is an interesting challenge which is addressed in this chapter. The specific target in Chapter 9 is to show how to use object-oriented ideas as a way of structuring concurrency but the modelling ideas are generic over most concurrency constructs. Describing concurrent programming languages poses some of the same challenges as face the programmer using such languages: the interaction between threads makes it difficult to describe aspects of a language in a structured way. The good news is that there is no need to extend the meta-language developed in earlier chapters. The challenge is to express interference in a reasonably structured way. Challenge VII: Modelling concurrency How can shared-variable concurrency –and the inherent interference that manifests itself by state changes that give rise to massive non-determinacy– be described using SOS? Section 8.2 explains the essential development of the operational semantic descriptions that is required to model concurrent shared-variable threads; a small (and clearly artificial) language is used to explain the core idea with a minimum of distractions. Section 8.3 extends the discussion on granularity; Sections 8.4 and 8.5 pick up the topic of reasoning about programs written in the object languages. The topic of object-oriented languages (initiated in Section 6.2) is resumed in Chapter 9 because such languages can provide an extremely useful way of controlling concurrency and thus provide tractable languages for programmers. 2
It is interesting that Gary Peterson [Pet81] found a way of programming the p and v operations without hardware support. 3 The focus in this book is on shared-variable concurrency — some discussion of process algebras is in Section 8.6.
8.2 Small-step semantics
121
8.2 Small-step semantics The first issue to get clear is the way in which (shared-variable) concurrency gives rise to non-determinacy. With two threads (S1 , S2 ), (S3 , S4 ) running in parallel, (S1 ; S2 ) || (S3 , S4 )
there are six possible orders in which the statements can be executed even if statements are considered to execute atomically4 — the set of sequences is: [S1 ; S2 ; S3 ; S4 ] [S1 ; S3 ; S2 ; S4 ] [S1 ; S3 ; S4 ; S2 ] [S3 ; S4 ; S1 ; S2 ] [S3 ; S1 ; S4 ; S2 ] [S3 ; S1 ; S2 ; S4 ] To see how this affects the results, consider the following instances of the Si : (x := 1; x := x + 3) || (x := 2; x := x ∗ 2)
Again, for the moment, assuming that assignment statements execute atomically, the final value of x is in the set {4, 5, 7, 8, 10}. It is the task of the language description to say that all of these outcomes are allowed — and to make clear that no others are considered to be correct. The point is made in Section 3.2 that SOS rules provide a natural way of describing non-determinacy and they are therefore ideal for concurrency. What has to be recognised, in an operational framework, is that there needs to be a way to record the statements that are still to be executed in each thread. In previous chapters, the SOS rules are written so that they discard executed statements. The most obvious case is the left-to-right evaluation of a list of statements, where the head of the list is executed and the rest of the computation is only affected by the tail of the list. With concurrent threads, there is essentially a tree of putative next steps. In the early Vienna Lab (VDL) operational descriptions, this control tree was completely explicit as a state component. An advantage of SOS is that the selection of next steps is implicit in the selection of SOS rules. The same choices as were explicit in the VDL control tree have to be indicated but SOS succeeds in factoring the nondeterminacy out of the state and into rule selection. The key response to the challenge of describing concurrency is to define the semantic relation over configurations that pair the remaining text to be executed with the state. To illustrate this, a part of an artificial programming language with two parallel threads is considered: threads contain only assignment statements (issues such as blocks and procedures are postponed): Par :: thrd1 : Assign∗ thrd2 : Assign∗ 4
This unrealistic assumption is reconsidered in Section 8.3.
8 Shared-variable concurrency
122
and the state is as in the simplest languages: m
Σ = Id −→ ScalarValue
Configurations are Par × Σ and the semantic relation becomes: par
−→: P((Par × Σ) × (Par × Σ))
The rule that expresses what happens when a statement from thread thrd1 is exepar st cuted uses −→ to reflect the state change of a single assignment statement and −→ shows the executed statement s1 being dropped from the configuration leaving the remaining list of statements (rl1) in the resulting configuration: st
(s1, σ ) −→ σ 0 par (mk-Par([s1] y rl1, sl2), σ ) −→ (mk-Par(rl1, sl2), σ 0 )
The obvious symmetrical rule for thrd2 is: st
(s2, σ ) −→ σ 0 par (mk-Par(sl1, [s2] y rl2), σ ) −→ (mk-Par(sl1, rl2), σ 0 )
If Par is added as an option to the types of Stmt: Stmt = · · · | Par
the effect of a whole Par as a statement requires executing all statements in both threads; this uses the notion of the transitive closure of a relation: par ∗
config −→ config0 par config0 −→ config00 par ∗
config −→ par
par ∗
config −→ config
config00
st
With this −→ can be linked back to −→ as follows: par ∗
(mk-par(thrd1, thrd2), σ ) −→ (mk-Par([ ], [ ]), σ 0 ) st (mk-Par(thrd1, thrd2), σ ) −→ σ 0
To summarise:
• non-determinism is modelled by defining a relation because there can be more than one potential outcome of a program; • small-step semantics have to use configurations that combine the program text that remains to be executed with the state of the variables; • SOS rules provide a natural way of defining such a relation over configurations.
8.3 Granularity It is straightforward to increase the size of language components that are executed atomically — that is, to have coarser granularity of merging concurrent threads. This is achieved by making large steps in the SOS rules. An extreme position is
8.3 Granularity
123
to prohibit any sharing of variables between parallel threads.5 This certainly makes it easy to reason independently about the threads but the constraint is too extreme for many applications. There is a spectrum ranging from low-level system code that often results from intimate access to shared variables through to applications that revolve around large shared databases. Although the detailed language resolutions differ, the general need for ways to control access from separate threads to shared variables is something that must be modelled. The serious challenge for semantic description is to move in the direction of finer granularity. Language issue 40: Granularity Fixing the granularity of interference in a shared-variable concurrent language is an important design issue. The ways in which the programmer can determine granularity must make it easy to understand programs. But it must also be possible to implement the language efficiently on realistic hardware. The comment is made in Section 8.2 that the assumption that assignment statements can be executed atomically is unrealistic. This is because a compiler will typically expand a statement such as x := x ∗ 2 into steps that place the (right-hand) value of the variable x into a register, then perform the multiplication before writing the computed result back into the location (left-hand value) for x. If another thread accesses and changes x between these steps, that update can be overwritten.6 Thus the example threads at the beginning of Section 8.2 could –under the realistic assumption that only variable read and write are atomic– also give rise to the additional outcome that execution of the two threads would result in a final outcome with x0 = 2. To see how this can come about, the following sequence of steps makes explicit the use of a temporary variable t — the two threads might interleave their steps as follows: x := 2; x := 1; t := x; x := x + 3; x := t ∗ 2
This is by no means an arcane detail: leaving aside for the moment that many crucial low-level programs have to be written in terms of such sequences of accesses, it is difficult to avoid similar problems at the level of transferring money between back accounts. This is a clear case against a language with such ill-constrained interference. Although such low-level interference is undesirable, it is worth sketching how it can be modelled. The key to modelling finer-level thread merging is to modify configurations at the appropriate level. Thus a single SOS rule might be needed to show accessing one scalar value and replacing the identifier with the accessed value. 5
Such a standpoint is adopted by many process algebras –see Section 8.6– as explained there, unfortunately this does not get around the problem of interference. 6 One proposal to avoid this problem is known as “Reynolds’ rule” (although John Reynolds told the current author that he had nothing to do with it!), which requires that only one shared variable occurs in any assignment. Unfortunately this fix does not resolve the real problem.
124
8 Shared-variable concurrency
8.4 Rely/Guarantee reasoning [*] This optional section picks up –from Section 7.3– the theme of providing ways of formally reasoning about –or formally developing– programs in an object language. Here, of course, the interest is in how to provide inference rules that support the introduction of concurrent threads whereas Chapter 7 addressed the story for sequential programming languages. The rule for decomposing a specified task into two components that are to be executed sequentially shows that the second statement is initiated in the state that results from executing the first.7 In contrast, parallel threads are initiated in identical states. Assuming that the two threads are specified as: {P1 } S1 {Q1 } {P2 } S2 {Q2 }
then, under rather strong assumptions, it would be true that their specifications can be combined as follows: {P1 ∧ P2 } S1 || S2 {Q1 ∧ Q2 }
The key assumption is that there is no interference between the threads. This is a useful observation (and looks forward to the ideas in Section 8.5). Unfortunately, many interesting uses of concurrency have to cope with interference and the SOS rules covered in the earlier parts of this chapter are aimed at exactly characterising such interference. This leaves the challenge of how a proof-oriented approach can deal with interference. This section outlines one approach that tackles interference head on and Section 8.5 outlines a line of attack that is predicated on avoiding interference. The Rely/Guarantee (R/G) approach extends specification by pre/post conditions both to face interference and to provide ways of reasoning about it in program development. The fact that few programs can achieve their post relation in arbitrary starting states is recognised by recording pre conditions as part of a specification. Almost no useful post condition could be achieved by a program that experienced unconstrained interference on its variables so an R/G specification uses a rely condition that describes the interference that executions of the program must tolerate. Rely conditions are relations over two states; this fits naturally with VDM’s relational post conditions and admits the view of a rely condition as the post condition of a potential interference step. As emphasised by the colouring in Figure 8.1, both the pre and rely conditions are assumptions that the designer can make; ensuring that they are satisfied is a requirement on the context; in other words, the decomposition that introduces the specified components must show that the conditions pertain. To this end, it is also necessary to document –for each component– its guarantee condition that expresses the maximum interference that it can inflict on sibling 7
This holds in either the original Hoare rules as in Figure 7.3 or the VDM style that uses relational post conditions (see Section 7.4).
8.4 Rely/Guarantee reasoning [*]
125
processes. Like post conditions, guarantee conditions are obligations on the running code. Figure 8.1 indicates how the various predicates apply to the execution of the ongoing process and any other processes that can interfere with its variables. The contention is that rely and guarantee conditions offer a useful abstraction of interference. This claim is supported by evidence from a corpus of examples. pre
z}|{ σ0
rely
···
z }| { σi σi+1
···
σj σj+1 | {z }
···
σf
guar
|
{z
}
pre/rely are assumptions the developer can make guar/post are commitments that the code must achieve
Fig. 8.1 A trace of states made by execution of a component and its context An outline of one example of the use of R/G in development can be based on the “Sieve of Eratosthenes” mentioned in Section 7.4. The specification of the interesting part of the algorithm is to remove all composite numbers from a set. The following informal notes indicate how R/G rules are used (see [HJ18]) to formalise the development: • A sequential program could execute Rem(i) for values of i from 2 (to the square root of the maximum value in the set) whose role is to remove multiples of i. This was the core of Eratosthenes’ inspired algorithm. • For such a sequential implementation, the post condition of Rem(i) could require that exactly the products (2 and above) of i should be removed from the set. • If however the Rem(i) procedures can execute concurrently, this exact equality cannot hold because interfering processes are also removing elements from the set. • The post condition of Rem(i) can be weakened to say that each instance is required to ensure that no multiples of i are present at termination of that instance. This is a lower bound on how Rem(i) can affect the set. • But the weakened post condition is not achievable with arbitrary interference on the set: Rem(i) needs a rely condition that the set can only get smaller so the program can remove say j ∗ i and rely on the fact that it will not be re-inserted. • Unfortunately, the weakening of the post condition would admit an implementation that removed elements (e.g. primes) that ought not to be removed — a guarantee condition on Rem(i) can insist that it only removes multiples of i. This is the upper bound on its changes to the set. • Finally, since the Rem processes must co-exist, each must guarantee to never put elements into the set. The above conditions fit into the generic picture in Figure 8.1 and the appropriate proof rule can be used to justify this step of development. The pre, rely, guarantee
8 Shared-variable concurrency
126
and post conditions can be written as a quintuple wrapped around the program text that is to be executed: {P, R} S {G, Q}.8 To indicate how the rely/guarantee rules relate to the non-interfering version of the parallel rule as at the beginning of this section, a slight simplification of the actual rule is:9 {P, R ∨ G2 } S1 {G1 , Q1 } {P, R ∨ G1 } S2 {G2 , Q2 } || -RG {P, R} S1 || S2 {G1 ∨ G2 , Q1 ∧ Q2 ∧ · · ·}
This rule shows that the pre and post conditions of the two parallel components can be combined providing the rely and guarantee conditions of the components agree. The development of the parallel sieve in [HJ18] makes a subsequent data reification of the set into arrays of bits. The “Sieve” example involves a collection of threads that (apart from their parameter) have identical specifications. Applications where the processes differ such as senders and receivers in “Asynchronous Communication Mechanisms” are more interesting (see [JH16]) and can be handled with the same proof rules. R/G specifications can be written as five-tuples (pre, rely, program, guarantee, post) and proof rules given for justifying the introduction of concurrent processes (such a rule is given in [Jon00]). Recent research has embraced the idea of specification statements and records rely and guarantee conditions as clauses to be wrapped around any specification. This way of presenting R/G thinking makes it possible to emphasise algebraic properties such as the distribution of rely conditions over decomposition [JHC15, HCM+ 16]. Ian Hayes presented a tutorial in Chengdu (China) during 2018 and the proceedings [HJ18] include two worked examples (the tutorial itself additionally covered the Treiber stack). Further examples in the literature include: parallel “cleanup” operations for the Fisher/Galler algorithm [CJ00]; Simpson’s “four-slot” implementation of Asynchronous Communication Mechanisms (ACMs) [JP11, JH16]; concurrent garbage collection [JVY17]; and Mergesort [JY15]. The origins of the R/G approach (in particular its relationship to the OwickiGries approach [Owi75, OG76]) are explored in [dRdBH+ 01]. Examples of R/G developments clearly indicate a top-down design approach; finding a compositional approach to developing concurrent programs was a major objective of the research (again see [dRdBH+ 01]).
8.5 Concurrent Separation Logic [*] The key reference for Concurrent Separation Logic (CSL) is [O’H07]. In that paper Peter O’Hearn emphasises that CSL supports reasoning about (data) “race free8
This quintuple version of rely-guarantee obviously follows Hoare triples (see Section 7.3.2). The simplification is that a stronger post condition can use information from the guarantee conditions. 9
8.5 Concurrent Separation Logic [*]
127
dom” and contrasts this with the rely/guarantee approach, which tackles “racy programs”. It is useful to again look at the idealised rule at the beginning of Section 8.4: what this indicates is that a parallel combination can combine the pre and post conditions of its sub-components providing there is no interference. Tony Hoare in [Hoa72] could establish non-interference by looking at the alphabets of the two parallel processes because only normal (i.e. stack) variables were being considered. John Reynolds’ “Separation Logic” [Rey02] tackles reasoning about heap variables (i.e. dynamically allocated variables). This was in itself a bold and important step. Concurrency adds to this the challenge that the ownership of dynamic addresses can be exchanged between concurrent threads. The success of CSL is that it makes it possible to reason about programs that achieve disjointness –and thus avoid data races– even in the presence of such ownership exchanges. The key CSL rule for reasoning about concurrent threads is: {P1} S1 {Q1} {P2} S2 {Q2} || -SL {P1 ∗ P2} S1 || S2 {Q1 ∗ Q2} This differs from the ideal rule at the head of Section 8.4 only in that logical conjunction has been replaced by “separating conjunction” (written as “*”). This operator requires that the addresses in the two operands do not overlap. It is important to remember that both Reynolds’ original Separation Logic and CSL address heap variables.10 CSL owes its origins to detailed analysis of intricate pieces of code and tends to be used in a bottom-up analysis of such programs rather than in top-down design. That having been said and despite their different attitudes to data races, there are many connections between CSL and R/G methods: • RGSep [VP07, Vaf07] and SAGL [FFS07] offer explicit combinations of the approaches; • Local R/G [Fen09] brings local reasoning and information hiding to concurrency verification; • Deny/Guarantee [DFPV09] tackles fork/join concurrency, which is not obviously handled by the original phrase-structured R/G rules; • research on “Views” [DYBG+ 13] provides a common framework for justifying proof obligations. A different sort of connection is exhibited in [JY15], where it is shown that separation can be viewed as yet another abstraction and a (top-down) development requirement is to show that the separation is preserved when mapping onto heap store is undertaken. 10
Another claim for separation logic is the use of a “frame rule” that provides a formal way of promoting an assertion on one state to apply to a larger state. The claims for the uniqueness of this rule tend to ignore that other methods have ways of defining frames. It is however true that VDM and the refinement calculus handle stack –rather than heap– variables.
128
8 Shared-variable concurrency
8.6 Further material Projects The technique of small-step SOS is exploited in the next chapter and any number of projects can be attempted there. The reader might like at this point to experiment with changing the semantics of the non-deterministic for loop from Section 3.2.3 so that all instances are executed concurrently.
Further reading There are many interesting and useful books on the general topic of concurrency including [Sch97, MK99, BA06]. Even for operational semantics, there are further issues around concurrency that are left aside here. One that deserves at least a mention is fairness — consider: x := 0 || while x 6= 0 do i := 1 + 1 od
There is clearly no a priori limit on the value of i but the question of whether the right-hand loop terminates depends on whether the scheduler is fair in the sense that it ensures the left-hand assignment does eventually execute. The standard reference on fairness is [Fra86]; Ian Hayes and Larissa Meinicke have also explored [HM18] the notion of “justness”. Because of its essentially operational nature, concurrency poses strong challenges for denotational semantics: Plotkin [Plo76] showed how to use power domains to handle concurrency; resumptions are described in [BA90]; other approaches include game semantics [Abr13]. A more radical approach to concurrency is to attempt to move away entirely from shared variables. Tony Hoare [Hoa78, Hoa85] and Robin Milner [Mil78a, Mil80] each developed process algebras in which communication was the main focus. Hoare’s CSP is given a semantics in [BHR84] in terms of traces and refusals. It is, however, worth emphasising that process algebras do not avoid the problem of interference as can be seen by the ease with which analogues of shared variables can be programmed in these notations. The question can then be asked whether traces and refusals offer a more convenient way of reasoning about interference than, say, R/G. Further afield, many researchers prefer to reason about concurrent programs using Temporal Logics. Classic texts in this area include [MP95, Mos85] and a recent book is [Fis11]. An interesting combination of interval temporal logic and R/G is [STER11]. So-called true concurrency (as opposed to an interleaving model) has been studied via Petri nets — see [Rei12, Rei13].
8.6 Further material
129
The handling of concurrency in database management systems (DBMS) is interestingly different from the way that most HLLs embody the concept. Programming languages like Java put the onus on the programmer to acquire and release locks in a way that avoids data races. A database is a huge shared variable and a DBMS can run many concurrent transactions but here the detection of –and recovery from– clashing updates is handled entirely by the DBMS. So, although a project in Section 4.4 points out that it is not difficult to add relations as a value type, concurrency would be harder to model (see [BHG87, L+ 94, WV01] and [HW90]).
Chapter 9
Concurrent OOLs
Although it is essential that specification methods are capable of describing languages –such as that outlined in Section 8.2– that permit unconstrained access to shared variables, it is more advantageous to use the description techniques to understand –and potentially design– languages that embody tractable concurrency. The core ideas of object-oriented programming languages (OOLs) were first materialised in Simula [DMN68]; the concept proved to be extremely fruitful, offering advantages over say ALGOL. • As its name suggests, “Simula” is a language for writing simulation programs. The ability to create arbitrary numbers of instances of classes made it easy to have one internal object per physical entity in a simulation. • Objects provide a way of encapsulating “abstract data types”, whose internal representation can be changed without affecting programs that use the prescribed interface. • Because instance variables are local to objects, it is possible to limit data visibility between concurrent threads and thus control data races. • Ideas around object-oriented databases followed from OOLs. Objects can be seen as the culmination of earlier lines of evolution in programming languages. Objects themselves can be seen as multiply instantiated blocks; methods correspond to functions and procedures — albeit with non-ALGOL-like visibility rules; data races on instance variables within objects can be controlled; and the control of interference is governed by the programmer via the sharing of object references. To expand on this last point, a class can be defined whose instances behave like shared variables; the instance variable of the class contains the current value; methods for say read and write can be defined. Although (for each instance of the class) only those methods can access that instance variable, any object that has a (shared) reference to the class would face essentially the same interference issues as are considered in Chapter 8. The control or reference sharing provides, however, a useful intuitive approach. © Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
131
132
9 Concurrent OOLs
Not only is it true that no new meta-language is required to describe concurrent OOLs, there are even no new language challenges. Preceding chapters have, for example, shown both how to introduce surrogates to model sharing and how concurrency is handled by choosing apposite “configurations”. Challenges II–VII have provided the equipment to tackle the non-trivial combination of language features that are brought together in COOL. Tackling concurrent OOLs in Sections 9.1–9.5 also affords the possibility to emphasise just how much the semantic objects can tell a skilled reader about a language. Returning to the message about the use of semantic descriptions in the design of languages, this of course argues for starting language design precisely with such semantic objects.
9.1 Objects for concurrency A language that shows that object-oriented ideas can make concurrency tractable is Pierre America’s POOL [Ame89]; the language COOL introduced in this section and fully described in Appendix D is inspired by POOL. As mentioned in Section 6.2, a central idea in OOLs is that each object has its own copy of the instance variables of a class — this offers separation providing only the methods of the class are allowed to access the (instance) variables. At first sight, this might appear to go too far and make the activity in objects entirely disjoint. Such extreme isolation is overcome by allowing objects to communicate via method calls or invocations. As explained in Section 6.2, to make this communication possible, it is necessary to ensure appropriate visibility of method names. In say ALGOL 60, procedure names declared within a block are visible only inside that block (see Section 5.3); in OOLs method names are visible to other classes. In fact, method invocation is precisely the means by which objects interact. As also pointed out in Section 6.2, instance variables in OOLs preserve their values between method invocations. A number of key questions remain about how to embed concurrency in a manageable OOL and options for COOL are investigated throughout this chapter.
9.1.1 An example program In order to introduce COOL, consider the task of creating a “sorting ladder” that keeps a series of objects in ascending order of values of their v field as in Figure 9.1. Instances of the Sort class are linked via their l field (the final element in the ladder has a nil value in l). Thus the class description might declare variables:1 1
Many years ago (in a Heuriger in Vienna) T.C. Chen outlined a potential use of “bubble memory” which could sort numbers in time (constant) one! The idea is to use parallel logic at each memory
9.1 Objects for concurrency
133
Sort class vars v: N; l: ref(Sort); · · · .. . By insisting that variables that contain references are declared to be specific to one class, it is possible to check statically that only known methods are invoked.2 Language issue 41: Strong typing in OOLs The issue of static type checking (cf. Issue 6) in object-oriented languages can be extended to names of classes and their methods.
v 2
v 4
v 6
v 8
l
l
l
l nil
Fig. 9.1 Picture of a possible state of a sorting ladder The intuition of COOL programs can be given by considering two methods. • The min method simply returns the value of v in its instance. • An insert(x) method either stores the value of the to-be-inserted parameter (x) or, if x is larger than the locally stored v, passes it to the next object in the ladder. The body of this method could contain a conditional (abbreviated below as iif ):3 if v ≤ x then {activate(l.insert(x))} else {activate(l.insert(v)); v := x} fi
This is embedded in a conditional that handles the end of the ladder (abbreviated below as ins): if is-nil(l) then {new (l); v := x; } else {iif } fi cell so that inserted values trickle down to the appropriate place in the ladder; the smallest value can always be obtained from the first element of the ladder (followed by shuffling values up). This is effectively a concurrent algorithm — see Section 9.5. 2 An alternative would be dynamic checking and, hopefully, some form of exception for unknown methods that could be trapped and also be handled dynamically. 3 The full text of Sort is in Figure 9.9. The names iif /ins are used in Figure 9.3 to refer to these pieces of code (as though they were translated into abstract syntax form).
134
9 Concurrent OOLs
Suppose a client object –that has a variable l pointing to the first Sorter object in a ladder– executes: activate(l.insert(7)); activate(l.insert(3)); activate(l.insert(9))
Then, providing each instance of the Sort class is executing as an independent thread, the three insert method calls can be handled concurrently as in Figure 9.2.
insert(9)
insert(3)
insert(7)
v 2
v 4
v 6
v 8
l
l
l
l nil
Fig. 9.2 Picture of activity in a concurrent sorting ladder COOL’s concurrency constructs are described in Section 9.5 and a number of alternatives are suggested in Section 9.7. Clearly, many details need to be pinned down but it is more interesting to look first at the objects that underpin the semantics of COOL. A full description of the language is given in Appendix D (this includes a list of abbreviations used to shorten the names of some records).
9.1.2 Semantic objects The descriptions of the languages in earlier chapters have been followed by an indication of how informative their semantic objects can be. For COOL, the description here starts with the semantic objects in order to emphasise how much insight they convey about a language even before any detailed SOS rules are written. Classes define the shape of objects including their instance variables; each object created for a class has local values for each of the instance variables. If each object is uniquely keyed by a Reference, a prime aspect of the state must be: m
ObjMap = Reference −→ ObjInfo ObjInfo :: · · · σ : VarStore ··· m
VarStore = Id −→ Val
9.1 Objects for concurrency
135
The values allowed include integers and Booleans; in addition values of type Reference can be stored (and nil used to mark an uninitialised variable of type Reference). Val = Z | B | Reference
In the basic version of COOL, any object is created in a quiescent state being R EADY for a method call (this decision is reconsidered in Section 9.7). Objects can also be in an Active state and, as in the language sketch in Section 8.2, the remaining code of the method being executed is the other essential information for Active to function as a “configuration” (compare Section 8.2). Furthermore: • an active method records in client the identity of the object that gave rise to its activity;4 and • method activation requires that the body of a method can be located, so the class field of ObjInfo contains the name of the class. Thus the key semantic objects are:5 ObjInfo :: class : Id σ : VarStore mode : R EADY | Active
∗ Active :: rem : Stmt client : Reference The fact that the rem field has exactly one sequence of statements indicates the important decision in COOL that at most one method can be active in any object at any one time. An example of ObjMap that corresponds to Figure 9.1 is shown in Figure 9.3(b).
Language issue 42: OOLs: data races The advantage that OOLs derive from instance variables only being accessed from methods within the class is that it removes the risk of simple data races. This advantage can be squandered if more than one method can be active in the same object at any time. A safe position is to require that only one method is active per object. If (using a concrete syntax): sl0 = activate(l.insert(7)); activate(l.insert(3)); activate(l.insert(9)) were executed against the ObjMap in Figure 9.3(b), then the result depicted in Figure 9.3(c) would be the ObjMap in Figure 9.3(d) An important property of many object-oriented languages is that the instance variables of any object (ObjInfo) can only be accessed and changed by the methods 4
Details of when this value can be nil are contained in Section 9.5. Alternatively, an Active object could be distinguished from one that is quiescent (R EADY) by saying that an empty statement list marks the latter mode. However, the R EADY mode makes for clearer hypotheses to the SOS rules. 5
136
9 Concurrent OOLs
r1
r2
r3
r4
v 2
v 4
v 6
v 8
l
l
l
l nil
(a) Picture of a possible state of a sorting ladder. r0 7→ mk-ObjInfo(Client, {sort 7→ r1 }, mk-Active(slo , k0 )), r1 7→ mk-ObjInfo(Sort, {v 7→ 2, l 7→ r2 }, R EADY), r2 7→ mk-ObjInfo(Sort, {v 7→ 4, l 7→ r3 }, R EADY), r3 7→ mk-ObjInfo(Sort, {v 7→ 6, l 7→ r4 }, R EADY), r4 7→ mk-ObjInfo(Sort, {v 7→ 8, l 7→ nil}, R EADY) (b) The ObjMap corresponding to the picture above with all objects quiescent.
insert(9)
insert(3)
r1
insert(7)
r2
r3
r4
v 2
v 4
v 6
v 8
l
l
l
l nil
(c) Picture of activity in a concurrent sorting ladder. r 7→ mk-ObjInfo(Client, σ0 , mk-Active([ ], k0 )), 0 r1 7→ mk-ObjInfo(Sort, {v 7→ 2, l 7→ r2 , x 7→ 9}, mk-Active(iif , r0 )), r2 7→ mk-ObjInfo(Sort, {v 7→ 4, l 7→ r3 , x 7→ 3}, mk-Active(iif , r1 )), r3 7→ mk-ObjInfo(Sort, {v 7→ 6, l 7→ r4 }, R EADY), r4 7→ mk-ObjInfo(Sort, {v 7→ 8, l 7→ nil, x 7→ 7}, mk-Active(ins, r3 ))
(d) A possible ObjMap during the three method activations in the picture above.
Fig. 9.3 Examples of ObjMap of that object;6 as observed above, this is what achieves encapsulation of data representations. Providing only one method can be active at any one time, the danger of data races on instance variables is eliminated. This cautious position is taken in the initial form of COOL. (An associated risk that comes from sharing references is discussed in Section 9.7.) There remain two key further questions about adding concurrency to the language discussed in Section 6.2: 6
Some languages (including Java) offer ways of exposing internal details of representations — an extension of this sort is considered in Section 9.7.
9.2 Expressions
137
• how are the concurrent threads to be generated? • what level of granularity of switching between threads is to be chosen? Answers to both of these questions are interesting and could give rise to a variety of models which could be distinguished by studying their semantic objects. Here, a rather conservative view is considered initially — with some alternatives sketched in Section 9.7. As with the configurations in Section 8.2, any active ri can be selected to make the next small step in concurrent execution of the whole ObjMap. COOL sets the granularity of interference at the level of single statements: thus, switching between threads can occur after any Stmt is executed. For this reason, method activation is treated as a statement and cannot occur within expressions. Language issue 43: Limiting interference in OOLs The problems of non-determinacy and granularity that result from invoking functions within expressions are discussed in Section 6.5. In COOL, a conservative position is to require that methods can only by invoked at the statement level (i.e. not from within expressions). Appendix D contains a full description of COOL including its abstract syntax and context conditions. It is however important to note how much of the capability of the language has been brought out by looking at the semantic objects before writing any SOS rules: • ObjInfo clarifies what an object (instance of a Class) needs to contain. • A newly created object is in the R EADY state. • Knowledge of the variables (and later methods) of a class has to be obtained from the text of the Classes. • The values of (instance) variables are local to each object and can only be accessed and changed by methods of that object. • These values are preserved between method calls. • An ObjInfo also records the reference of the client on whose behalf it is executing. • The remaining code to be executed in a method is stored in the ObjInfo. • (Crucially) the granularity of interleaving between threads is set at the level of single statements.
9.2 Expressions The syntax of COOL expressions given in Appendix D has only one extension from the languages considered in earlier chapters and that is the addition of a unary test as to whether a variable contains a nil reference: Expr = · · · | TestNil TestNil :: object : Id
138
9 Concurrent OOLs
The relevant change to the context conditions is to make c-type identify this form of expression as delivering a Boolean result. To emphasise that the expression evaluation is deterministic, the semantics of COOL expressions is given as a function: eval: Expr × VarStore → Val
and the relevant case is:
eval(mk-TestNil(id), σ ) 4 σ (id) = nil
9.3 Simple statements Moving on to the statements in COOL, the concurrency that is explained below requires that the semantic relation for statements is between two ObjMaps. Some constructs in COOL require additional information that is discussed below. For now the discussion is framed around: st
−→: P((· · · × ObjMap) × ObjMap) st
Most −→ transitions select one Reference that is Active and has the relevant statement type as the first element of rem.7 Execution consists of making appropriate state changes and discarding the completed statement. The resulting object is stored under the original reference. Thus SOS rules for simple statements take the form: robj = O(r) mk-ObjInfo(cl, σ , mk-Active([mk-StatementType(· · ·)] y rl, k)) = robj ··· robj0 = mk-ObjInfo(cl, σ 0 , mk-Active(rl, k)) st (· · · , O) −→ O † {r 7→ robj0 }
The simplest statement is Assign: its abstract syntax and semantics only deviate from the equivalent statements in previous chapters insofar as an assignment can only affect the state of the object in which it is executed. Stmt = Assign | If | · · · Assign :: lhs : Id rhs : Expr For such simple statements, a Reference is found for which the remaining text of the corresponding ObjInfo indicates that the appropriate statement type is to be executed. For example, r3 in Figure 9.3(b) can transition to: mk-ObjInfo(Sort, σ , mk-Active([mk-Assign(v, x)] y rl, k)) 7
Section 9.5.2 deals with synchronising objects and requires more than one object to have a suitable status.
9.3 Simple statements
139
with: σ = {v 7→ 6, l 7→ r4 , x 7→ 5}
Executing the assignment should change the variable map of Oi (r) to give: σ 0 = {v 7→ 5, l 7→ r4 , x 7→ 5}
Furthermore, the completed statement should be removed, leaving only rl in the rem field — the class and client fields of Oi (r3 ) are unchanged. Thus the resulting ObjMap would be: Oi+1 (r3 ) = mk-ObjInfo(Sort, σ 0 , mk-Active(rl, k)) The SOS rule for Assign is: (· · · , O) −→ O † {r 7→ robj0 }
The context conditions for Assign should be obvious from earlier language descriptions and are spelled out in Appendix D. It is worth repeating that methods cannot be invoked from expressions in COOL — Call (and Delegate) are statements and their semantics is covered in Sections 9.5.2 and 9.5.3 respectively. Conditional statements should also offer few surprises given the languages covered in Chapters 3 and 4. The abstract syntax is: If :: test : Expr then : Stmt∗ else : Stmt∗ The context condition for If is given in Appendix D and should anyway be obvious: c-type of test must yield B OOLT P and both th and el must be well formed. Turning to the semantics of conditionals in COOL, the ObjMap in Figure 9.3(d) contains two ObjInfos indexed by r1 , r2 that are ready to execute an If (i.e. iif ). Because the concurrency in COOL requires a small-step semantics, iif is unrolled and, if r1 is selected for progress, this would give rise to: Oi+1 (r1 ) = mk-ObjInfo(Sort, {v 7→ 2, l 7→ r2 , x 7→ 9}, mk-Active(p-i, r0 ))
where:
p-i: activate(l.insert(x)) The SOS rule for the then case of If is: robj = O(r) mk-ObjInfo(cl, σ , mk-Active([mk-If (test, th, el)] y rl, k)) = robj eval(test, σ ) = true robj0 = mk-ObjInfo(cl, σ , mk-Active(th y rl, k)) st (· · · , O) −→ O † {r 7→ robj0 }
140
9 Concurrent OOLs
The else case is obvious. The importance of unfolding the branches of the If is for granularity reasons: the semantics should allow other threads to execute between statements in the th or el lists.
9.4 Creating objects The creation of new instances of classes is more interesting than the foregoing simple statements. Objects are created as instances of classes using: Stmt = · · · | New | · · · New :: target : Id No type information is required in the statement itself because the class can be determined from the type of the variable to which the new reference is to be assigned. The ObjInfo in Figure 9.3(d) that is indexed by r4 will evolve to: mk-ObjInfo(Sort, {v 7→ 8, l 7→ nil, x 7→ 7}, mk-Active([mk-New(l), mk-Assign(v, x)], r3 )) The effect of executing the New should change this to: mk-ObjInfo(Sort, {v 7→ 8, l 7→ rn , x 7→ 7}, mk-Active([mk-Assign(v, x)], r3 )) and create a new quiescent thread rn with default initial values: mk-ObjInfo(Sort, {v 7→ 0, l 7→ nil}, R EADY)
In order for the semantics of New (and Call below) to locate information about st classes, the type of the semantic relation −→ needs to be: st
−→: P((ClMap × ObjMap) × ObjMap) with: m
ClMap = Id −→ Class
m
Class :: vars : Id −→ Type m methods : Id −→ Meth The full SOS rule is given in Appendix D — here the routine description of state initialisation is omitted in order to focus on the more interesting aspects of the semantics of New:
9.5 Method activation and synchronisation
141) σn = initial values nobj = mk-ObjInfo(cln , σn , R EADY) st (C, O) −→ O † {r 7→ robj0 , n 7→ nobj}
As can be seen, the new thread (indexed by n) is created in an inactive state. An alternative would be to have an initial method that executes on object creation — this idea is outlined in Section 9.7. A programmer might wish to delete the value of a reference. One way of doing this would be to add nil as an option in the abstract syntax of Expr but this would cause a problem with defining c-type in this case because nil could be a value of any optional reference type. Rather than take this route, Appendix D.5.2 defines a Discard statement that sets the appropriate reference variable to nil. Language issue 44: Anonymous values Any use of values (symbols) that can belong to more than one type can be difficult in a programming language that tries to offer strong typing. One option is to have a form of type hierarchy. Notice that Discard does not destroy the referenced ObjInfo because there could be other reference variables in the current object or in other objects that contain the same reference and might need to invoke methods in the referenced object. The subject of “garbage collection” of objects is also sketched in Section 9.7.
9.5 Method activation and synchronisation Objects (as instances of classes) can be in a quiescent state, which is marked by the mode field in their ObjInfo containing R EADY. As explained in Section 9.4 objects are created in this R EADY state — a server also returns to this state on completion of activity on behalf of a client. Activity in a quiescent object can be started by a request from another (client) object: • One possibility is that –after activation– there is no need for further communication between the client and the server. This scenario is described in Section 9.5.1. • Another possibility is that a result is required by the client and the server must return such a value before the client can progress. There are actually sub-options here depending on whether the client has useful work that it can perform in parallel with the server before the result is available. Section 9.5.2 gives the descriptions of the relevant parts of COOL.
142
9 Concurrent OOLs
• Another way of enhancing concurrency is for the object that acted as initial server to delegate computing a result to another object and thus free itself for work on behalf of some new client. This delegation concept is described in Section 9.5.3.
9.5.1 Method activation Methods are activated using the statement whose abstract syntax is: Activate :: object : Id method : Id args : Expr∗ Notice that the object field contains the name of a variable whose value is the reference of the server object in which the method is to be activated (i.e. a programmer cannot write a Reference in a statement because they are machine generated). The context conditions are routine and are contained in Appendix D.4.1. Executing the first call: activate(l.insert(7))
in the state depicted in Figure 9.3(a) should make thread r1 become active in order to execute its insert method; the previous state of the ObjInfo should be updated with argument values passed (in this case the identifier x gets the value 7). The picture in Figure 9.4 serves to introduce the formal semantics — it indicates the fact that both the activated server object (rs ) and the client (rc ) continue to execute.8
rc
Active
Activate
rs
READY
Active
Fig. 9.4 Activate spawns a concurrent thread To return to the example of activate(l.insert(7)), the resulting state of r1 would be: mk-ObjInfo(Sort, {v 7→ 2, l 7→ r2 , x 7→ 7}, mk-Active(insert, r0 )) 8
The Go language also spawns — but can then communicate over channels.
9.5 Method activation and synchronisation
143
The SOS rule to achieve this step has to update two ObjInfos: cobj = O(c) mk-ObjInfo(clc , σc , mk-Active([mk-Activate(obj, meth, args)] y rlc , k)) = cobj cobj }
The SOS rule above makes clear that both the object whose reference is cobj0 can continue actively executing rlc and sobj0 begins executing body. Notice that this rule can only be used if the server object (sobj indexed by σc (obj)) is in the R EADY mode. (Remember that all hypotheses of an SOS rule have to be satisfied for a rule to confirm the relation in its conclusion.) Any attempt to activate a method in an active object will have to wait. This of course brings danger of “deadlock”, where the server never makes progress (see Section 9.7). In the terminology of Section 5.4, the parameter passing mode in COOL is “by value”: evaluated arguments are installed in local objects of the server. It is however true that passing a Reference confers considerable power to the receiving method: possession of a reference makes it possible to invoke any of its methods.
9.5.2 Method synchronisation An obvious way to write a method for the class Sort that tests whether a value is present anywhere in the ladder is with a Call statement, whose execution has to wait until its server object returns a value: test(x: N) method : B if is-nil(l) ∨ x < v then return (false) elif x = v then return (true) else call(b, l.test(x)); return (b) fi
Such a Call statement has an abstract syntax that is similar to that for Activate but has, in addition, an lhs field to which the server’s return value will be assigned: Call :: lhs object method arguments
: : : :
Id Id Id Expr∗
144
9 Concurrent OOLs
Here again, the context conditions are straightforward and are spelled out in Appendix D.4.2.
Active
rc
Active
Call Return rs
READY
Active
READY
Fig. 9.5 Sequential method call Figure 9.5 indicates the flow of control with the gap in the upper line indicating that rc has to suspend activity until the server rs returns a result. (The fact that rs can continue activity after the return is explained below.) The test method above could evolve into an ObjMap with: r1 7→ mk-ObjInfo(Sort, {v → 7 2, l → 7 r , x → 7 7}, 2 mk-Active([mk-Call(b, l, test, x), mk-Return(b)], r0 )), r2 7→ mk-ObjInfo(Sort, {v 7→ 4, l 7→ r3 }, R EADY), .. . The vertical ellipses are to remind the reader that there could be other threads that are also candidates for execution. This form of call statement is in fact a special case of a more interesting “future call” present in ABC/L [Yon90]. This more general synchorisation starts with a future call whose syntax does not need the lhs field; the semantics allows the client object to continue execution until it needs the result from the server; at this point, the client executes an Await statement that indicates where (in the client) the returned value is to be stored. Await :: lhs : Id The flow of control is pictured in Figure 9.6 but, instead of using future call, Activate serves the same purpose. It can be seen that the client remains active until the Await statement is executed. Furthermore, the server can return a value and continue executing until the method code is exhausted. It is simple to describe Call (which requires an answer before the client can make progress) in terms of the more general case and use Await in the description of Call.9 Thus the SOS rule is: 9
An alternative –but equivalent– model could add a new mode to the ObjInfo semantic object.
9.5 Method activation and synchronisation
rc
145
Active
Active
Activate
Await Return
rs
READY
Active
[] READY
Fig. 9.6 Future call cobj = O(c) mk-ObjInfo(clc , σc , mk-Active([mk-Call(lhs, obj, meth, args)] y rlc , k)) = cobj 0 cob }
Thus the next state of ObjMap would be: r1 7→ mk-ObjInfo(Sort, {v → 7 2, l → 7 r , x → 7 7}, 2 mk-Active([mk-Await(b), mk-Return(b)], r )), 0 r2 7→ mk-ObjInfo(Sort, {v 7→ 4, l 7→ r3 , x 7→ 7}, mk-Active(· · · , r )), 1 . . . After computation further down the ladder the ObjMap will arrive at: r1 7→ mk-ObjInfo(Sort, {v 7→ 2, l 7→ r2 , x 7→ 7}, mk-Active([mk-Await(b), mk-Return(b)], r )), 0 r2 7→ mk-ObjInfo(Sort, {v 7→ 4, l 7→ r3 , x 7→ 7}, mk-Active([mk-Return( false )], r )), 1 . . . This brings the discussion to the return statement: Return :: value : Expr
146
9 Concurrent OOLs
the semantics of which completes the rendezvous with the r1 client: }
In the example: r1 7→ mk-ObjInfo(Sort, {v 7→ 2, l 7→ r2 , x 7→ 7, b 7→ false}, mk-Active([mk-Return(b)], km )), r2 7→ mk-ObjInfo(Sort, {v 7→ 4, l 7→ r3 , x 7→ 7}, mk-Active([ ], nil )), . . .
When a method has no more statements to execute (a Return can occur anywhere in the body of a method) it reverts to the quiescent status: O(s) = mk-ObjInfo(cl, σ , mk-Active([ ], k)) st (C, O) −→ O † {s 7→ mk-ObjInfo(cl, σ , R EADY)}
Thus the ObjInfo for r2 changes to: r2 7→ mk-ObjInfo(Sort, {v 7→ 4, l 7→ r3 }, R EADY), .. .
Notice however that there is a problem type checking the lhs in an Await statement and this would in general have to be a dynamic check. Furthermore, executing multiple future calls before an Await could give rise to dangerous program errors.
9.5.3 Delegation The test method given above is unnecessarily sequential, as is shown in Figure 9.7(a). Nor can either Activate or future call resolve the problem because the initial client needs a value to be returned and must wait for that value. But it is possible to avoid tying up the intermediate objects, as is shown in Figure 9.7(b). The Delegate statement achieves this effect by allowing the flow of control pictured in Figure 9.8. Its abstract syntax is:
9.5 Method activation and synchronisation
test(7)
147
test(7)
test(7)
v 2
v 4
v 6
v 8
l
l
l
l nil
false
false
false
test(7)
test(7)
(a) Picture of sequential test
test(7)
v 2
v 4
v 6
v 8
l
l
l
l nil
(b) Picture of concurrent test
false
Fig. 9.7 Two ways of executing test in Sort Delegate :: object : Id method : Id arguments : Id∗ Delegate statements are much like Call statements — the distinction is that a Delegate passes down its client to be the client of the newly called object whereas the object making a normal Call becomes the client of the called object. A use of Delegate is indicated in the test method in Figure 9.9. Suppose that some client r0 has called the test method passing 3 to r1 : r1 cannot determine the result required by r0 so the code of test reaches Delegate; providing the status of r2 is R EADY, the situation would be:
148
9 Concurrent OOLs
rc
Active
Await
Activate rs
READY
Active READY
Delegate rt
READY
Active
Return Active
READY
Fig. 9.8 COOL delegation r0 7→ mk-ObjInfo(c0 , σ0 , mk-Active([mk-Await(· · ·)], k0 )), r1 7→ mk-ObjInfo(Sort, {v 7→ 2, l 7→ r2 , x 7→ 3}, mk-Active([mk-Delegate(l, test, x)], r0 )), r → 7 mk-ObjInfo(Sort, {v 7→ 4, l 7→ r3 }, R EADY) 2 .. . with the semantics: }
This would step to: r0 7→ · · · r1 7→ mk-ObjInfo(Sort, {v 7→ 2, l 7→ r2 , x 7→ 3}, R EADY), r2 7→ mk-ObjInfo(Sort, {v 7→ 4, l 7→ r3 , x 7→ 3}, mk-Active([· · ·], r0 )), .. . Notice that r1 is now available to act as a server for another client and that the return from r2 goes directly to r0 .
9.6 Reviewing COOL
149
9.6 Reviewing COOL The example class Figure 9.9 depicts the Sort class with three methods (insert, min and test) in what is hopefully a readable concrete syntax. As in the illustrative examples in Sections 9.5.1–9.5.3: • The insert method does not return a result and is invoked via an activate statement which causes the method to run concurrently with the object that caused the activation. In the case that this is the first insert to this object, the parameter value is stored and a new object is created; in all other cases, work is passed on to objects further down the sorting ladder. • The min method is only given to illustrate that the minimum value in the ladder is always available in the first object. • The test method is shown as using delegate to achieve concurrency by passing on the need to return a value to the client that called test.
Sort class vars v: N; l: ref(Sort) insert(x: N) method begin if is-nil(l) then {new (l); v := x; } else if v ≤ x then {activate(l.insert(x))} else {activate(l.insert(v)); v := x} fi fi end min() method : N return(v) test(x: N) method : B begin if is-nil(l) ∨ x < v then return (false) else if x = v then return (true) else delegate (l.test(x)) fi fi end
Fig. 9.9 Concurrent Sort As is shown in Appendix D.6, a CoolProgram contains a collection of named classes. In order to generate an initial ObjMap, an initial method call must also be
150
9 Concurrent OOLs
identified. Thus the semantics of a CoolProgram creates an ObjInfo for start-class and then activate start-method (with no arguments). Adding some form of input/output statements to the language would clearly be one way to make COOL programs more useful. A more interesting alternative would be to have a way of linking a CoolProgram with an existing object store. Doing this is straightforward but is outside the realm of the language itself.
COOL summary A full formal description of COOL is given in Appendix D. This section offers some observations on COOL as a language. COOL is strongly typed with variables that contain object references only being allowed to store references to objects of the declared class (because of this, there is no need to have a class type argument to the new statement). The context conditions for COOL are given for each language feature in Appendix D. The type information (of methods) required is: m
ClTypes = Id −→ ClInfo m
ClInfo = Id −→ MethInfo : Type MethInfo :: restype paramtypes : Type∗ Type = Id | ScType ScType = I NT T P | B OOLT P m
VarEnv = Id −→ Type
Here again (see Section 4.2), no attempt is made to subdivide the class of Id. Since their written forms are taken to be the same, relating variable, method and class names is left to the context conditions. The least conventional aspect of the semantics of COOL is the insistence that at most one method can be active in any particular object at any point in time.10 This decison certainly increases the possibilities of deadlocks in COOL programs. On the other hand, it has the advantage that there is no possibility of data races on instance variables within an object. If a programmer wants to share data, this can be achieved by placing the data in an object whose references can be shared. This puts explicit sharing firmly in the hands of programmers and it should be exploited with great care. Although related to POOL [AdB91], COOL is not intended to be a full language — its features have been chosen to illustrate points about semantic description. 10
A similar approach is taken by Bertrand Meyer in his SCOOP proposal.
9.7 Further material
151
There are many ways (e.g. inheritance) in which COOL could be extended and a number of these are outlined as projects in Section 9.7.
9.7 Further material There are many projects that can be developed from the description in Appendix D including: 1. The semantics for Call in Section 9.5.2 is indirect in that it employs another statement (Await) to put the client into a waiting state. An alternative would be to define another mode for ObjInfo. 2. Rather than have newly created objects start life in the quiescent (R EADY) status, an initialisation method could be added to each class that is activated on creation. This, of course, provides another source of concurrent execution. Options include whether or not New passes arguments to this method. 3. There is no attempt in Appendix D to make the access to objects “fair” in the sense that two attempts to call an active object will not necessarily be served in order when the called object becomes free. It is not difficult to add some form of queue to control the order of invocation. 4. A solution that is perhaps more satisfactory than the preceding point might be to add a form of conditional Call where the client has a list of alternative statements that are executed in the event that the sought-after server object is not Ready. 5. It would be possible to add a limited form of “self call” that either works like a local Delegate or is only allowed to call methods that are limited in some way as to the variables they can access or change. 6. There are several approaches to “garbage collection” of unwanted objects. An explicit Destroy statement should have some pre conditions; automatic collection of objects to which no reference remains is not difficult to write formally but could be expensive to implement; collecting “circular garbage” is more challenging. 7. Java allows access from object ri to internal variables of any object for which ri holds a reference; such an extension is not difficult to add to COOL but it is important to note that it reintroduces the danger of data races. 8. The Go language has an activate statement but activated “Go-routines” are closed when the main routine finishes. Go also offers a form of channel similar to those in the π-calculus [MPW92] in that channel names can be passed. 9. Various forms of inheritance could be added to COOL. 10. Careful addition of arrays (notably of references) can introduce extra ways of generating concurrent activity. There is a rich literature relating to formal models of object-oriented languages. Starting with items close to COOL and working outwards: • The debt to POOL2 has been acknowledged. An overview is given in [Ame89]; a layered semantics is given in [AR92, AdBKR89]; proof theory is considered in [AdB91, dB91].
152
9 Concurrent OOLs
• Bertrand Meyer’s Eiffel language [Mey88] is a fully fledged and interesting language; his SCOOP proposal for simple concurrency shares many features with COOL. • Transformations that preserve observational equivalence and can be used to introduce more concurrent threads were studied for a language referred to as πoβ λ whose semantics were given in [Jon93] by mapping to Robin Milner’s π-calculus [MPW92, SW01]. (David Walker had already published [Wal91].) It proved non-trivial to justify the equivalences via bi-simulation but Davide Sangiorgi settled the issue in [San99].11 The validity of the equivalences links to Hogg’s notion [Hog91] of “islands”. • Simula [DMN68] and Smalltalk [GR83] were early object-oriented languages. The task of providing semantics was addressed in [Wol88]. • A careful look at the basic ideas behind object orientation is [AC12].
11
This links to the issue raised in Section 7.1 about the tractability of denotations when a semantics is given by mapping to another language.
Chapter 10
Exceptional ordering [*]
This short optional chapter addresses the topic of modelling statements that cause execution to occur in orders different from their textual juxtaposition. The controversial goto statement provides the classic example of this modelling challenge but it is not the only manifestation of the difficulty. The discussion can be undertaken without worrying about concurrency and is limited below to consideration of sequential languages. This chapter also moves more freely than earlier parts of the book between operational and denotational description techniques.1 There are two aspects of the challenge of describing the semantics of the goto statement: on the one hand, it is necessary to show the change of order of execution; on the other hand, there is the question of the denotation of the label for a statement. Normal sequential execution of S1; S2 dictates that execution of S1 is followed by execution of S2. This is clear in a “big-step” SOS rule for statement sequencing: st
(s, σ ) −→ σ 0 stl
(rl, σ 0 ) −→ σ 00 stl ([s] y rl, σ ) −→ σ 00
The same sequencing is shown in denotational semantics by composition of functions (from Σ → Σ). This simple model does not work if S1 can be a goto statement that explicitly chooses the next statement to be executed by using its label. Language issue 45: Goto The goto statement proved to be controversial (compare Dijkstra’s original letter [Dij68b] with Knuth’s [Knu74b]) but it was present in nearly all early languages. This was presumably as a direct analogue of the branch instruction in the hardware. Modern programming languages have tended to follow Dijkstra’s plea to provide more structured control constructs but the challenge to be able to model exceptional ordering remains. 1
Gordon Plotkin observes in [Plo04a] that ideas from operational semantics have influenced denotational semantics. © Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
153
154
10 Exceptional ordering [*]
The reason that this optional chapter considers exceptional ordering is that, even without goto statements, there remain features in most languages that present a similar challenge to providing formal models. A key interaction of language features is that abnormal execution order can cut across the structure of the syntax of a language. This requires that the semantics essentially has to perform clean up actions even though previously anticipated executions are abandoned. (Chapter 5 shows that pre-planned entry and exit from procedures and functions can be modelled without difficulty.) This chapter tackles the modelling of programming language constructs that cause execution to deviate from neat composition of statements. Language issue 46: Exceptions A further example of a language issue that complicates semantic description is exception handling. Unlike function calls, exception handlers can abandon previous anticipated execution. The last of the language challenges is: Challenge VIII: Modelling exits How can a neat model be given for the semantics of a language in which features permit abnormal exits from structured text? To return to the point about feature interaction, there are several language features that do cut across the phrase structure of a language but whose modelling is unproblematic. • Premature termination of a looping construct is easy to model because nothing has to be reinstated at the end of a loop. • A return statement from a method in COOL can be embedded within other phrase structures but OOLs leave the values of instance variables alone at method termination so all that the semantics of Return has to ensure is that the rest of the rem code is discarded. These cases contrast with a goto that abnormally terminates a Block or Procedure. As is shown in Chapter 5, there are changes to the state (Σ) that must be made at normal termination of the text of a block or procedure; such resetting steps that clean up the state by removing locations also have to be executed when a goto statement causes abnormal termination. Two distinct approaches to coping with Challenge VIII are described in the following sub-sections; it would however be possible to mix some aspects of the approaches described in Sections 10.1 and 10.2. A key distinguishing aspect of the approaches is whether they work forwards or backwards from labels.
10.1 Abnormal exit model In the old VDL style of operational semantics, one component of the (grand) state of the description contained the text of the program that was being executed. State-
10.1 Abnormal exit model
155
ments such as Goto could then be given a meaning by changing the text component. Any required cleanup operations could be programmed in terms of this tree navigation. Operationally, this does not fit with the objective of being structural and it certainly fails to fit the homomorphic objective of denotational approaches. A much more structural account of the way in which goto statements explicitly appoint their successor statement can be given in terms of relations that are extended (beyond Σ × Σ) to mark any abnormal sequencing that results from executing a portion of program text. The range of the relation can contain an optional “abnormal” component which need only be a label in the case of a language allowing a simple goto statement: st −→: P((Stmt × Σ) × (Σ × Id )) Executing a normal statement returns a nil abnormal component: ex
(rhs, σ ) −→ v st (mk-Assign(lhs, rhs), σ ) −→ (σ † {lhs 7→ v}, nil)
In contrast the effect of something like a goto statement contains an indication of what is to be done next: in the simplest case this is just to continue execution from the appointed label: ex
(rhs, σ ) −→ v st (mk-Goto(id), σ ) −→ (σ , id)
Explicitly showing the propagation of this exit field would require writing:2 st
(s, σ ) −→ (σ 0 , abn) abn 6= nil stl ([s] y rl, σ ) −→ (σ 0 , abn)
and:
st
(s, σ ) −→ (σ 0 , nil) stl
(rl, σ 0 ) −→ (σ 00 , abn) stl ([s] y rl, σ ) −→ (σ 00 , abn)
st
It is, however, simple to adopt a convention that the default for −→ is to return nil unless otherwise marked, which restores the description to: ex
(rhs, σ ) −→ v st (mk-Assign(lhs, rhs), σ ) −→ σ † {lhs 7→ v}
Such conventions are related to the use of the abnormal exit model in denotational descriptions, which is discussed further in Section 10.3. Further combinators that trap abnormal exits are also described there. 2
This becomes cumbersome but was actually carried through in the ALGOL 60 description in [ACJ72].
156
10 Exceptional ordering [*]
As is explained in the next section, the use of the exit approach is the largest difference between VDM denotational descriptions3 and those associated with Oxford (where the “continuation” approach was developed and employed). The connections between these approaches are reviewed in Section 10.3.
10.2 Continuations The approach taken to modelling goto statements in denotational descriptions by the Oxford group looks very different from the exit mechanism described in Section 10.1; language descriptions such as SAL in [MS76] and ALGOL 60 in [Mos74] employ “continuations”. At first meeting, continuations appear to do everything backwards: the meaning of a statement is given in the context of the rest of the computation. Among other things, this immediately lifts the whole definition to: Tr → Tr Tr = Σ → Σ
The meaning of an assignment could be written: M[[mk-Assign(lhs, rhs)]]{θ } 4 assign(· · ·) ◦ θ
to define that the overall denotation is the composition of the obvious Σ → Σ for assignment with the continuation θ . Denotations of labels are stored in the environment and denote the entire computation from the label through to the end of the program. It is then possible to have the denotation of a goto statement simply deploy the denotation of the label. It is, however, possible to question whether the idea that the denotation of a label within a block involves its execution to the end of the surrounding context really respects the homomorphic constraint espoused by denotational semantics researchers.
10.3 Relating the approaches Continuations can describe language constructs for which no obvious exit model has been found. But power is not necessarily an advantage: as observed in other contexts, making clear that something cannot be done can simplify reasoning about a language description. This section relates the exit and continuation approaches to 3
Peter Mosses points out in [Mos11] that the “semicolon combinator” used in [BBH+ 74] can be related to Moggi’s “monads”, which were published as [Mog89]. The use of a semicolon combinator to mean either simple functional composition or a function-level version of the composition sketched above goes some way towards the reuse of formal language descriptions. Mosses goes much further in his work on component-based specifications “funcons” [BSM16],
10.4 Further material
157
denotational language descriptions in the context of languages –such as one containing goto statements– where what needs to be modelled is abnormal termination of part of a program. Once a reader has overcome the feeling that a continuation description gives the meaning backwards, there are two further technical differences between continuation and exit forms of denotational semantics: • The denotation of a label in a continuation semantics reflects the effect of starting execution at the statement with that label and continuing to the end of the entire program. In an exit formulation, the effect of a statement label extends only to the end of its enclosing block. In the former case, any clean up steps have to be composed into the denotation; in the latter, the default clean up intervenes at block end. • The denotations of labels in a continuation description are normally stored in an environment4 whereas an exit description tends to wrap a trapping combinator around each block. There is in fact no reason why an environment could not be used with an exit formulation — this is simply a matter of taste. Denotational semantic descriptions exist for languages as large and complicated as PL/I [BBH+ 74] and Ada [Don80, HM87]. Useful comparisons between the continuation- and exit-style versions of denotational semantics can be made by looking at two descriptions of ALGOL 60: [Mos74] uses classical Oxford-style continuations and [HJ82] uses the exit combinators. Choices such as lengths of function names (and the extent of use of the Greek alphabet) make the definitions appear more different than they really are; [JA16, JA17] get below these surface issues and tease apart the more important decisions. The existence of two ALGOL descriptions shows that both approaches to modelling language features are expressive enough to handle constructs such as goto statements. This makes it possible to look at the formal equivalence of the two approaches on non-trivial language features. An argument of equivalence is given in [Jon82b] (which is not mechanically checked). It is interesting that the observations above can be used to structure a chain of equivalences of which continuations and the most common use of exit combinators are just the extreme points.
10.4 Further material Projects ALGOL 60 introduced switch variables whose values are labels. (This would appear to result again from another spurious argument of orthogonality: making labels into first-class objects.) Modelling switch variables would let the reader explore the snags that this idea brings with it. 4
As a consequence, the environment has to be defined using a fixed point.
158
10 Exceptional ordering [*]
Historical notes The continuation concept is normally listed as having been separately invented by Lockwood Morris [Mor70] and Chris Wadsworth [Wad72, SW74]. In [Rey93], John Reynolds traces even more “inventions” of the idea5 and sees the first hint of the idea in van Wijngaarden’s [vW66b].
5
Reynolds revisited this discussion in a talk given at the BCS Computer Conservation Society in 2004 (a video recording of this talk exists).
Chapter 11
Conclusions
This short concluding chapter begins with a review of the eight challenges discussed in the body of this book; this is followed by comments on some significant formal language descriptions or specifications.
11.1 Review of challenges Despite the many “language issues” that are discussed and modelled in the body of this book, only eight significant “challenges” are teased out: they are summarised here. (I) Delimiting a language (concrete representation) How can the set of valid strings of an object language be delimited? BNF –or a variant such as EBNF– is the most common notation used for describing concrete syntax but less common notations such as Wirth’s “railroad” diagrams have equivalent expressive power. (II) Delimiting the abstract content of a language How can the abstract syntax of a language be defined? In Chapter 2 and on all subsequent examples, simple parts of VDM notation for describing objects are used for describing the abstract syntax of programming languages. (III) Recording semantics (deterministic languages) How can the semantics of a deterministic language be recorded? An operational semantics can be defined by a recursive function (over the abstract syntax) and is used in Section 3.1. (IV) Operational semantics (non-determinism) How can an operational semantics describe a non-deterministic language in a way that clearly relates the structure of the semantics to its abstract syntax? The key step in Section 3.2 is to recognise that the semantics is a relation be-
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
159
160
11 Conclusions
tween initial states and permissible final states. “Structural Operational Semantics” (SOS) defines the requisite relation by using inference rules. (V) Context dependancy How can abstract syntax objects that exhibit type errors be ruled out before semantics are tackled? Section 4.2 shows how recursive predicates over the abstract syntax can define the set of type correct programs. These predicates are conventionally named wf -X for objects of type X. (VI) Modelling sharing How can a language description model sharing? Section 5.2 shows how the introduction of a surrogate such as Loc as an abstraction of machine addresses makes it possible to have an environment (Env) that maps different identifiers to the same location. It is important to indicate that the environment is separate from –and changes less often than– Σ. Similarly, Section 9.1.2 uses object references to record the sharing of objects in an object-oriented language. (VII) Modelling concurrency How can shared-variable concurrency be described using SOS? The fact that the inference rules of SOS provide a natural way of defining semantics as a relation means that non-determinacy is handled easily. The key change from big-step to small-step semantics is described in Section 8.2. (VIII) Modelling exits How can a neat model be given for the semantics of a language in which features permit abnormal exits from structured text? Chapter 10 outlines both the use of abnormal signals and the continuation model.
11.2 Capabilities of formal description methods Although the author has complained that insufficient use has been made of formal language description ideas, it is important to record some of the examples that indicate that formal methods can cope with realistic programming languages. Clearly, the optimal use is in the actual specification of a programming language (rather than a post hoc description). The Modula-II standard uses VDM notation (see [AGLP88]). The situation with PL/I is more complicated: the repeated updatings of VDL operational semantic descriptions of PL/I were followed by the VDM denotational description [BBH+ 74]; the standard [ANS76] builds on this work — it has a formal description of the state of a semantics but then attempts to describe the mapping to denotations in words. The standardisation effort also came many years after IBM had gone through its normal language control processes. A far more promising example of the use of formal methods by language designers is SML. Two versions are [MTHM97] and [HMT88]; there is also a useful web site: ML language
11.2 Capabilities of formal description methods
161
Amongst post hoc descriptions of programming languages, ALGOL 60 has a special place: it has been used as a testbed for several specification methods: • • • •
Peter Lauer’s VDL description a “functional semantics” is given in [ACJ72] Peter Mosses wrote an Oxford-style denotational description [Mos74] A VDM-style denotational description is [HJ82].
These descriptions are analysed in detail in [JA16] (a more accessible but slightly less detailed account is given in [JA18]). ALGOL 60 is a clean and well-thought-out language that presents fewer challenges to post hoc description than less disciplined languages. At the other extreme, PL/I was a nightmare for those who had to write its formal descriptions. For this author, it represents the most convincing example of where complications could have been reduced had the designers themselves employed formal modelling. PL/I is a huge language but it also presents many avoidable feature interactions. The three major versions of the VDL operational semantic descriptions of PL/I are discussed by those involved in [LW69] and with the benefit of hindsight in [JA18]. Thankfully, the ECMA/ANSI subset removed some of the complications and the VDM denotational description [BBH+ 74] is a fraction of the length of those in VDL. Another monster language is CHILL, whose description is in [Stu80]. The story of the Ada language could have been so much more rewarding: • the DoD’s “Ironman” requirements for the language that was to become Ada stated that there should be a formal semantics either as Hoare axioms or in VDL; • in the event, Ichbiah’s winning proposal was designed with, again, formalists attempting to model indisciplined design decisions; • the French efforts are described in [DGKL80, Don80]; • the team around Dines Bjørner in Denmark’s DDC produced a well-respected Ada compiler from their VDM description [BO80b]; the concurrency features used SMoLCS [HM87]. Although the efforts to describe the Pascal language were also conducted post facto, it represents a much better thought-out language: [AH82] does have to cope with some unnecessary feature interactions but fits comfortably in a chapter of a book rather than being a book itself. An important line of operational research that looks rather different is that conducted by J Strother Moore at Austin Texas. Using Lisp, this group has, over many years, provided executable semantic descriptions. Some of these descriptions are of hardware instruction sets and have been used to verify hardware designs. Moreover, the group has tackled the key task of verifying a “stack” of languages that are implemented on the next layer down. A wide-ranging survey of this work is given in [Moo19]. Descriptions of logic programming languages include [AB87] and [And92].
162
11 Conclusions
Moving further afield, descriptions of database systems include [Han76, Owl79] and their interestingly different (from programming languages) concurrency issues are covered in [BHG87, L+ 94, WV01]. The use of techniques for reasoning formally about programs in a language is much more widespread. Some historical material is contained in [Jon03]. Moreover, extremely encouraging examples of use by major corporations are reported in [CDD+ 15, DFLO19] and [Coo18].
11.3 Envoi A huge number of programming languages exist and there is no reason to believe that people will cease to design new ones. Perhaps one reason for there being so many languages is that there is dissatisfaction with the state of play. The challenge of designing a language that makes programs clear and easy to reason about is significant; the effort of building a translator or interpreter for a language is significant (and these days a complete development environment must be added). The argument in this book is that it is less effort to think out the ideas of languages using formal models and the resulting language should possess clearer structure. If this book can play a small part in helping the language design process yield better thought-out languages that –in turn– make it possible to construct better programs, the effort in writing the book will be amply repaid.
Appendix A
Simple language
This appendix separates syntax, context conditions and semantics and is thus in the order in which the topics are introduced in Chapters 2–4 of the current book. Abbreviations used in the description: Σ σ ∈Σ Arith Expr Id lhs Rel rhs Stmt
the set of all “states” a single “state” Arithmetic Expression Identifier left-hand side Relational right-hand side Statement
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
163
164
A Simple language
A.1 Concrete syntax A.1.1 Dijkstra style hProgrami : : = program varshIdsi: hStmtsi end hIdsi : : = hIdi [, hIdsi] hStmtsi : : = [hStmti [; hStmtsi]] hStmti : : = hAssigni | hIfi | hWhilei hAssigni : : = hIdi : = hArithExpri hIfi : : = if hRelExpri then hStmtsi [else hStmtsi] fi hWhilei : : = while hRelExpri do hStmtsi od hArithExpri : : = hBinArithExpri | hNaturalNumberi | hIdi | (hArithExpri)
A.1.2 Java-style statement syntax hStmtsi : : = [hStmti [; hStmtsi]] hStmti : : = hAssigni | hIfi | hWhilei hAssigni : : = hIdi = hArithExpri hIfi : : = if (hRelExpri) hBodyStmtsi [else hBodyStmtsi] hWhilei : : = while (hRelExpri) hBodyStmtsi hBodyStmtsi : : = hStmti | {hStmtsi}
A.2 Abstract syntax
A.2 Abstract syntax | Id | N
165
166
A Simple language
A.3 Semantics Statements m
Σ = Id −→ N
The semantic transition relation for statement lists is st
−→: P((Stmt × Σ) × Σ) st
(s, σ ) −→ σ 0 stl
(rl, σ 0 ) −→ σ 00 stl ([s] y rl, σ ) −→ σ 00 stl
([ ]s, σ ) −→ σ
The semantic transition relation for single statements is given by cases below. stl
−→: P((Stmt∗
A.3 Semantics
167
Expressions Although the evaluation of expressions is deterministic, the semantics is given as a relation for consistency with semantics of statements: Expr = ArithExpr | RelExpr ex
−→: P((Expr × Σ) × (B | N)) Given by cases below. ex
(op1, σ ) −→ v1 ex (op2, σ ) −→ v2 ex (mk-BinArithExpr(op1, P LUS, op2), σ ) −→ v1 + v2 ex
(op1, σ ) −→ v1 ex (op2, σ ) −→ v2 ex (mk-BinArithExpr(op1, T IMES, op2), σ ) −→ v1 ∗ v2 ex
(op1, σ ) −→ v1 ex (op2, σ ) −→ v2 ex (mk-RelExpr(op1, E QUALS, op2), σ ) −→ v1 = v2 ex
(op1, σ ) −→ v1 ex (op2, σ ) −→ v2 ex (mk-RelExpr(op1, L ESS T HAN E Q, op2), σ ) −→ v1 ≤ v2 e ∈ Id ex (e, σ ) −→ σ (e) e∈N ex (e, σ ) −→ e
Appendix B
Typed language
The formulae in this appendix separate abstract syntax, context conditions and semantics. This is not the order used in subsequent appendices but it serves at this stage to emphasise the distinctions. Abbreviations used in the description: Σ σ ∈Σ Arith Expr Id lhs Rel rhs Stmt wf-
the set of all “states” a single “state” Arithmetic Expression Identifier left-hand side Relational right-hand side Statement well-formed
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
169
170
B Typed language
B.1 Abstract syntax m
BaseProgram :: types : Id −→ ScalarType body : Stmt∗ ScalarType = I NT T P | B OOLT P Stmt = Assign | If | While Assign :: lhs : Id rhs : Expr If :: test : Expr then : Stmt∗ else : Stmt∗ While :: test : Expr body : Stmt∗ Expr = ArithExpr | RelExpr | Id | ScalarValue ArithExpr :: operand1 : Expr operator : P LUS | M INUS | · · · operand2 : Expr RelExpr :: operand1 : Expr operator : E QUALS | L ESS T HAN E Q | · · · operand2 : Expr ScalarValue = Z | B
B.2 Context conditions
171
B.2 Context conditions In order to define the Context Conditions below, an auxiliary object is required in which the types of declared identifiers can be stored. m
TypeMap = Id −→ ScalarType wf -BaseProgram : BaseProgram → B
wf -BaseProgram(mk-BaseProgram(types, body)) wf -StmtList(body, types)
wf -StmtList : Stmt∗ × TypeMap → B
4
wf -StmtList(sl, tpm) 4 ∀i ∈ inds sl · wf -Stmt(sl(i), tpm) wf -Stmt : Stmt × TypeMap → B
wf -Stmt(s, tpm) 4 given by cases below
wf -Stmt(mk-Assign(lhs, rhs), tpm) 4 lhs ∈ dom tpm ∧ c-type(rhs, tpm) = tpm(lhs) wf -Stmt(mk-If (test, th, el), tpm) 4 c-type(test, tpm) = B OOLT P ∧ wf -StmtList(th, tpm) ∧ wf -StmtList(el, tpm) wf -Stmt(mk-While(test, body), tpm) 4 c-type(test, tpm) = B OOLT P ∧ wf -StmtList(body, tpm)
172
B Typed language
An auxiliary function c-type is defined c-type : Expr × TypeMap → (I NT T P | B OOLT P | E RROR) c-type(e, tpm) 4
given by cases below
c-type(mk-ArithExpr(e1, opt, e2), tpm) 4 if c-type(e1, tpm) = I NT T P ∧ c-type(e2, tpm) = I NT T P then I NT T P else E RROR fi
c-type(mk-RelExpr(e1, opt, e2), tpm) 4 if c-type(e1, tpm) = I NT T P ∧ c-type(e2, tpm) = I NT T P then B OOLT P else E RROR fi
For the base cases: e ∈ Id ⇒ c-type(e, tpm) = tpm(e) e ∈ Z ⇒ c-type(e, tpm) = I NT T P e ∈ B ⇒ c-type(e, tpm) = B OOLT P
B.3 Semantics
173
B.3 Semantics An auxiliary object is needed to describe the Semantics — this “Semantic Object” (Σ) stores the association of identifiers and their values. m
Σ = Id −→ ScalarValue σ0 = {id 7→ 0 | id ∈ dom types ∧ types(id) = I NT T P}∪ {id 7→ true | id ∈ dom types ∧ types(id) = B OOLT P} stl
(body, σ0 ) −→ σ 0 pr (mk-BaseProgram(types, body)) −→ D ONE
The semantic transition relation for statement lists is stl
−→: P((Stmt∗ × Σ) × Σ) stl
([ ], σ ) −→ σ st
(s, σ ) −→ σ 0 stl
(rest, σ 0 ) −→ σ 00 stl ([s] y rest, σ ) −→ σ 00
The semantic transition relation for single statements is st
−→: P((Stmt
174
B Typed language
The semantic transition relation for expressions is ex
−→: P((Expr × Σ) × ScalarValue) ex
(e1, σ ) −→ v1 ex (e2, σ ) −→ v2 ex (mk-ArithExpr(e1, P LUS, e2), σ ) −→ v1 + v2 ex
(e1, σ ) −→ v1 ex (e2, σ ) −→ v2 ex (mk-ArithExpr(e1, M INUS, e2), σ ) −→ v1 − v2 ex
(e1, σ ) −→ v1 ex (e2, σ ) −→ v2 v1 = v2 ex (mk-RelExpr(e1, E QUALS, e2), σ ) −→ true ex
(e1, σ ) −→ v1 ex (e2, σ ) −→ v2 v1 6= v2 ex (mk-RelExpr(e1, E QUALS, e2), σ ) −→ false ex
(e1, σ ) −→ v1 ex (e2, σ ) −→ v2 v1 ≤ v2 ex (mk-RelExpr(e1, L ESS T HAN E Q, e2), σ ) −→ true ex
(e1, σ ) −→ v1 ex (e2, σ ) −→ v2 v1 > v2 ex (mk-RelExpr(e1, L ESS T HAN E Q, e2), σ ) −→ false e ∈ Id ex (e, σ ) −→ σ (e) e ∈ ScalarValue ex (e, σ ) −→ e
Appendix C
Blocks language
Unlike the preceding appendices (where the whole of the abstract syntax is given before all of the context conditions to be followed by the semantics for every construct), this appendix is in the “preferred order”: that is, it is ordered by language concept. For reference purposes, this order is normally most convenient. There remains the decision whether to present the parts of a language in a top-down (from BlocksProgram to Expr) order or bottom-up: this decision is fairly arbitrary. What is really needed is an interactive support system! Abbreviations σ ∈ Σ a single “state” the set of all “states” Σ Arith Arithmetic Def Definition Den Denotation env a single “environment” Env the set of all “environments” Expr Expression param parameter Proc Procedure Rel Relational Stmt Statement
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
175
176
C Blocks language
C.1 Auxiliary objects The objects required for both context conditions and semantic rules are given first.
Objects needed for context conditions The following objects are needed in the description of the Context Conditions. m
TypeMap = Id −→ (ScalarType | ProcType) ScalarType = I NT T YPE | B OOLT YPE ProcType :: paramtypes : ScalarType∗
Semantic objects The following objects are needed in the description of the Semantics. m
Env = Id −→ Den Den = ScalarLoc | ProcDen
where ScalarLoc is an infinite set chosen from Token. ProcDen :: params : Id∗ body : Stmt context : Env The state only contains a “store”:1 m
Σ = ScalarLoc −→ ScalarValue
A useful predicate is: uniquel : (X ∗ ) → B
uniquel(l) 4 len l = card elems l
1
I/O as in Section 4.3.1 could, of course, be added.
C.3 Statements
177
C.2 Programs Abstract syntax BlocksProgram :: body : Stmt . Context conditions wf -BlocksProgram : BlocksProgram → B
wf -BlocksProgram(mk-BlocksProgram(b))
4
wf -Stmt(b, {7→})
Semantics pr
−→: P(BlocksProgram × D ONE) env0 = {7→} σ0 = {7→} st (b, env0 , σ0 ) −→ σ 0 pr mk-BlocksProgram(b) −→ D ONE
C.3 Statements Abstract syntax Stmt = Assign | If | While | Compound | Block | Call Context conditions wf -Stmt : Stmt × TypeMap → B
wf -Stmt(s, tpm) 4 given by cases below
178
C Blocks language
Semantics The semantic relation (which is also given by cases below) for statements is: st
−→: P((Stmt × Env × Σ) × Σ)
C.4 Simple statements Assignment Abstract syntax
.
Assign :: lhs : Id rhs : Expr
Context conditions wf -Stmt(mk-Assign(lhs, rhs), tpm) 4 lhs ∈ dom tpm ∧ c-type(rhs, tpm) = tpm(lhs)
Semantics ex
(rhs, env, σ ) −→ v st (mk-Assign(lhs, rhs), env, σ ) −→ σ † {env(lhs) 7→ v}
If Abstract syntax
.
If :: test : Expr then : Stmt else : Stmt
C.5 Compound statements
Context conditions wf -Stmt(mk-If (test, th, el), tpm) 4 c-type(test, tpm) = B OOLT P ∧ wf -Stmt(th, tpm) ∧ wf -Stmt(el, tpm)
Semantics ex
(test, env, σ ) −→ true st (th, env, σ ) −→ σ 0 st (mk-If (test, th, el), env, σ ) −→ σ 0 ex
(test, env, σ ) −→ false st (el, env, σ ) −→ σ 0 st (mk-If (test, th, el), env, σ ) −→ σ 0
C.5 Compound statements Abstract syntax Compound :: body : Stmt∗ . Context conditions wf -Stmt : Compound × TypeMap → B
wf -Stmt(mk-Compound(sl), tpm) 4 ∀i ∈ inds sl · wf -Stmt(sl(i), tpm)
Semantics stl
(stl, env, σ ) −→ σ 0 st (mk-Compound(stl), env, σ ) −→ σ 0
179
180
C Blocks language
Statement lists stl
([ ], env, σ ) −→ σ st
(s, env, σ ) −→ σ 0 stl
(rl, env, σ 0 ) −→ σ 00 stl ([s] y rl, env, σ ) −→ σ 00
C.6 Blocks Abstract syntax m
.
Block :: var-types : Id −→ ScalarType m proc-defs : Id −→ ProcDef body : Stmt∗
Context conditions wf -Stmt(mk-Block(vm, pm, body), tpm) 4 dom vm ∩0 dom pm = { } ∧ let tpm = tpm † vm in let proc-tpm = {p 7→ mk-ProcType(pm(p).paramtypes) | p ∈ dom pm} in ∀p ∈ dom pm · wf -ProcDef (pm(p), tpm0 )∧ 0 wf -Stmt(body, tpm † proc-tpm)
C.7 Call statements
181
Semantics m
newlocs ∈ (Id ←→ ScalarLoc) dom newlocs = dom vm rng newlocs ∩ dom σ = { } penv = {p 7→ mk-ProcDen(pm(p).params, pm(p).body, env) | p ∈ dom pm} env0 = env † newlocs † penv
Procedure definition Abstract syntax
.
ProcDef :: params : Id∗ paramtypes : ScalarType∗ body : Stmt
Context conditions wf -ProcDef : ProcDef × TypeMap → B
wf -ProcDef (mk-ProcDef (ps, ptps, body), tpm) 4 uniquel(ps) ∧ len ps = len ptps ∧ wf -Stmt(body, tpm † {ps(i) 7→ ptps(i) | i ∈ inds ps})
C.7 Call statements Abstract syntax
.
Call :: proc : Id args : Id∗
182
C Blocks language
Context conditions)
Semantics (parameter passing by reference) Semantics (parameter passing by value) mk-ProcDen(parms, body, cenv) = env(p) m newlocs ∈ (Id ←→ ScalarLoc) dom newlocs = elems parms rng newlocs ∩ dom σ = { } lenv = cenv † newlocs σi = σ ∪ {newlocs(parms(i)) 7→ σ (env(args(i))) | i ∈ inds parms} st (body, lenv, σi ) −→ σi0 st (mk-Call(p, args), env, σ ) −→ dom σ C σi0
C.8 Expressions
C.8 Expressions The only interesting deviation from earlier languages is the case of identifiers. Abstract syntax Expr = · · · | Id Semantics ex
−→: P((Expr × Env × Σ) × ScalarValue) e ∈ Id ex (e, env, σ ) −→ σ (env(e))
183
Appendix D
COOL
This appendix is –like Appendix C– in the “preferred order”: i.e. the abstract syntax, context conditions and semantics are grouped under each language concept.
Abbreviations Arith Arithmetic Cl Class Expr Expression Obj Object opd operand Meth Method Rel Relational rem remaining Sc Scalar Stmt Statement Val (semantic) Value Var Variable
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
185
186
D COOL
D.1 Auxiliary objects Types for context conditions The following types are needed in the description of the Context Conditions. m
ClTypes = Id −→ ClInfo m
ClInfo = Id −→ MethInfo : Type MethInfo :: restype paramtypes : Type∗ Type = ScType | Id ScType = I NT T P | B OOLT P m
VarEnv = Id −→ Type
The types of the context condition predicates are: c-type: Expr × ClTypes × VarEnv → Type wf -Stmt: Stmt × ClTypes × VarEnv × Type → B wf -CoolProgram: CoolProgram → B
Types for semantics In addition to the abstract syntax of ClMap (see abstract syntax in Section D.6), the following types are needed in the description of the semantics. m
ObjMap = Reference −→ ObjInfo ObjInfo :: class : Id : VarStore σ mode : R EADY | Active m
VarStore = Id −→ Val Val = Z | B | Reference
/ Reference. The set Reference is infinite and nil ∈ ∗ Active :: rem : Stmt client : Reference The types of the semantic relation/function are st
−→: P((ClMap × ObjMap) × ObjMap) eval: Expr × VarStore → Val
D.2 Expressions
187
D.2 Expressions Abstract syntax Expr = ArithExpr | RelExpr | Id | Value | TestNil ArithExpr :: operand1 : Expr operator : P LUS operand2 : Expr RelExpr :: operand1 : Expr operator : E QUALS operand2 : Expr TestNil :: object : Id Value = Z | B Context conditions The context conditions here are similar to those for the typed language (Appendix B) except that an additional parameter (ctps) carries the types of classes so that a check can be made that the argument to TestNil refers to an (instance) variable of type Reference; the result of applying c-type to a TestNil will of course be Boolean. c-type(mk-TestNil(id), ctps, v-env) 4 if v-env(id) ∈ dom ctps then B OOLT P else E RROR fi
Semantics The semantics of Expr also broadly follows those for simpler languages because expression evaluation depends only on the local instance variables (cf. semantics of Assign) and method calls are disallowed within expressions. eval: Expr × VarStore → (Z | B) The semantics of TestNil are: eval(mk-TestNil(id), σ ) 4
σ (id) = nil
188
D COOL
D.3 Statements Stmt = Assign | If | New | Discard | Activate | Call | Return | Await | Delegate
D.3.1 Assignments Remember that method calls cannot occur in an Assign — method invocation is covered in D.4. Abstract syntax
.
Assign :: lhs : Id rhs : Expr
Context conditions wf -Stmt(mk-Assign(lhs, rhs), ctps, v-env, mtp) 4 lhs ∈ dom v-env ∧ c-type(rhs, ctps, v-env) = v-env(lhs)
Semantics (C, O) −→ O † {r 7→ robj0 }
D.3 Statements
D.3.2 If statements Abstract syntax
.
If :: test : Expr then : Stmt∗ else : Stmt∗
Context conditions wf -Stmt(mk-If (test, th, el), ctps, v-env, mtp) 4 c-type(test, ctps, v-env) = B OOLT P ∧ ∀i ∈ inds th · wf -Stmt(th(i), ctps, v-env, mtp) ∧ ∀i ∈ inds el · wf -Stmt(el(i), ctps, v-env, mtp)
Semantics robj = O(r) mk-ObjInfo(cl, σ , mk-Active([mk-If (test, th, el)] y rl, k)) = robj eval(test, σ ) = true robj0 = mk-ObjInfo(cl, σ , mk-Active(th y rl, k)) st (C, O) −→ O † {r 7→ robj0 } robj = O(r) mk-ObjInfo(cl, σ , mk-Active([mk-If (test, th, el)] y rl, k)) = robj eval(test, σ ) = false robj0 = mk-ObjInfo(cl, σ , mk-Active(el y rl, k)) st (C, O) −→ O † {r 7→ robj0 }
189
190
D COOL
D.4 Methods Abstract syntax
.
Meth :: result-type params paramtypes body
: : : :
Type Id∗ m Id −→ Type Stmt∗
Context conditions wf -Meth : Meth × ClTypes × VarEnv → B
wf -Meth(mk-Meth(rtp, ps, ptpm, body), ctps, v-env) 4 (rtp = nil ∨ rtp ∈ ScType ∨ rtp ∈ dom ctps) ∧ (∀id ∈ dom ptpm · ptpm(id) ∈ ScType ∨ ptpm(id) ∈ dom ctps) ∧ uniquel(ps) ∧ elems ps ⊆ dom ptpm ∧ ∀i ∈ inds body · wf -Stmt(body(i), ctps, v-env † ptpm, rtp)
The definition of uniquel is given in Chapter 2. There are no semantics for methods as such — see the semantics of Call etc. below.
D.4 Methods
191
D.4.1 Activate method Abstract syntax
.
Activate :: object : Id : Id method arguments : Expr∗
Context conditions wf -Stmt(mk-Activate len args = len pts∧ ∀i ∈ inds args · c-type(args(i), ctps, v-env) = pts(i)
Semantics rc
Active
Activate
rs
READY
cobj = O(c) mk-ObjInfo(clc , σc , mk-Active([mk-Activate(obj, meth, args)] y rlc , k)) = cobj 0 cob }
Active
192
D COOL
D.4.2 Call method Abstract syntax
.
Call :: lhs object method arguments
: : : :
Id Id Id Expr∗
Context conditions wf -Stmt(mk-Call(lhs, obj, meth, args), ctps, v-env, mtp) 4 lhs ∈ dom v-env ∧
Active
Call Return rs
READY
Active
cobj = O(c) mk-ObjInfo(clc , σc , mk-Active([mk-Call(lhs, obj, meth, args)] y rlc , k)) = cobj cobj }
READY
D.4 Methods
193
D.4.3 Rendezvous An Await in a client thread should be matched with a Return in a server. (But, if no result is to be passed back, the server thread just completes the code of the method.) Abstract syntax Return :: value : Expr Await :: lhs : Id . Context conditions wf -Stmt(mk-Return(val), ctps, v-env, mtp) 4 c-type(val) = mtp
Semantics rc
Active
Activate
Active
Await Return
rs
READY
Active
[] READY }
D.4.4 Method termination When a method has no more statements to execute, it returns to quiescent status. O(s) = mk-ObjInfo(cl, σ , mk-Active([ ], k)) st (C, O) −→ O † {s 7→ mk-ObjInfo(cl, σ , R EADY)}
194
D COOL
D.4.5 Delegation Delegation invokes a method in another object and passes on the responsibility to return a value to its client. Abstract syntax
.
Delegate :: object : Id : Id method arguments : Id∗
Context conditions wf -Stmt(mk-Delegate
Await
Activate rs
READY
Active READY
Delegate rt
READY
Active
Return Active
READY
D.5 Classes
195 }
D.5 Classes Abstract syntax m
.
Class :: vars : Id −→ Type m methods : Id −→ Meth
Context conditions wf -Class : Class × ClTypes → B
wf -Class(mk-Class(vars, meths), ctps) 4 ∀tp ∈ rng vars · (tp ∈ ScType ∨ tp ∈ dom ctps) ∧ ∀m ∈ rng meths · wf -Meth(m, ctps, vars) There are no semantics for classes as such — the semantics of New follows.
196
D COOL
D.5.1 Creating objects Abstract syntax New :: target : Id . Context conditions wf -Stmt(mk-New(targ), ctps, v-env, mtp) 4 targ ∈ dom v-env ∧ v-env(targ) ∈ Id Semantics) mk-Class(vars, meths) = C(cln ) σn = {v 7→ 0 | v ∈ dom vars ∧ vars(v) = I NT T P}∪ {v 7→ false | v ∈ dom vars ∧ vars(v) = B OOLT P}∪ {v 7→ nil | v ∈ dom vars ∧ vars(v) ∈ / ScType} nobj = mk-ObjInfo(cln , σn , R EADY) st (C, O) −→ O † {r 7→ robj0 , n 7→ nobj}
D.5 Classes
D.5.2 Discarding references Abstract syntax Discard :: target : Id . Context conditions wf -Stmt(mk-Discard(targ), ctps, v-env, mtp) 4 targ ∈ dom v-env ∧ v-env(targ) ∈ Id
Semantics robj = O(r) mk-ObjInfo(clr , σr , mk-Active([mk-Discard(targ)] y rl, k)) = robj robj0 = mk-ObjInfo(clr , σr † {targ 7→ nil}, mk-Active(rl, k)) st (C, O) −→ O † {r 7→ robj0 }
197
198
D COOL
D.6 Programs Abstract syntax : ClMap CoolProgram :: class-map start-class : Id start-method : Id m
ClMap = Id −→ Class Context conditions wf -CoolProgram : CoolProgram → B
wf -CoolProgram(mk-CoolProgram(cm, start-cl, start-m)) start-cl ∈ dom cm ∧ start-m ∈ dom (cm(start-cl).methods) ∧ let ctps = {c 7→ c-clinfo(cm(c)) | c ∈ dom cm} in ∀c ∈ dom ctps · wf -Class(cm(c), ctps)
4
The following two functions extract ClInfo and MethInfo respectively. c-clinfo : Class → ClInfo
c-clinfo(mk-Class(tpm, mm))
4
c-minfo : Meth → MethInfo
{m 7→ c-minfo(mm(m)) | m ∈ dom mm}
c-minfo(mk-Meth(ret, pnl, ptm, body)) 4 mk-MethInfo(ret, apply(pnl, ptm)) m
apply : X ∗ × (X −→ Y) → Y ∗
apply(l, m) 4 if l = [ ] then [ ] y apply(tl l, m) else [m(hd l)] fi
D.6 Programs
199
Semantics For mk-CoolProgram(cm, cl0 , meth0 ), the semantics creates an ObjMap containing an ObjInfo for cl0 and activates meth0 with an empty argument list.
Appendix E
VDM notation
E.1 Logical operators The logical operators and quantifiers are written as follows: B ¬E E1 ∧ E2 E1 ∨ E2 E1
The Logic of Partial Functions (LPF) copes with values that fail to denote: a
b
¬a a∧b a ∨ b a ⇒ b a ⇔ b
true true false true true ∗ true ∗ ∗ true false true true false true true ∗ ∗ true
true true true
false false true false ∗ false false
true false
∗
∗ false ∗
true false ∗ false false false
∗
∗ ∗
∗ ∗
∗
true
true
∗
false
∗ ∗ ∗
false
∗
true
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
201
202
E VDM notation
E.2 Set notation The symbols used in VDM for the operators of set theory are: T-set all finite subsets of T {t1 , t2 , . . . , tn } set enumeration {} empty set {true, false} B N {0, · · ·} Z {· · · , −1, 0, 1, · · ·} {x ∈ S | p(x)} set comprehension {i, · · · , j} subset of integers (from i to j inclusive) t∈S set membership ¬ (t ∈ S) t∈ /S set containment (subset of) S1 ⊆ S2 S1 ⊂ S2 strict set containment set intersection S1 ∩ S2 set union S1 ∪ S2 S1 − S2 set difference card S cardinality (size) of a set power set P(X) The following pictorial representation of the set operators can be a useful reminder of their signatures. The ovals contain the names of types; operators are marked with incoming arcs indicating the types of their operands and an outgoing arc indicating the type of the result.
.
∩∪−
.
card
X
X-set
.⊂⊆
.
∈∉
!
ℕ
E.3 List (sequence) notation E.3 List used (sequence) notation The symbols in VDM for the operators of sequence theory are:
203
⇤
T type defining finite sequences (elements are of type T) E.3 List (sequence) notation len s length of a sequence [t1 , t2 , . . . , tn ] sequence given by enumeration
[ ] symbols used the empty sequence The in VDM for the operators of sequence theory are:
s1 y s2 sequence concatenation ∗ sequences (elements are of type T) typeatdefining hd s T the element the head finite of a sequence tl s the tail of a sequence len s the sequence of a sequence lengthcomprising inds s [t1 , t2 , .the indices togiven a sequence tn ] ofsequence . . ,set by enumeration elems [s ] the set of elements a sequence the empty in sequence
s1 y s2 hd s tl s inds s elems s
sequence concatenation the element at the head of a sequence the sequence comprising the tail of a sequence the set of indexes to a sequence the set of elements in a sequence
The pictorial representation of the sequence operators is:
The pictorial representation of the sequence operators is:
.
(X ⇤ )⇤
⌧
y r ⇣ ◆ ◆ r
dconc
tl
. inds
ℕ-set
-
?⌧
X⇤ 6
X*
.
⌃r ⇧
r
len
-
.
. .
hd
r
ℕ
⌧ r ()
. (i)
hd
? -
X-set
N
len
tl
elems
⌧
X
X
⌧
1 7! r1 , 2 7! r2 , . . . , n 7! rn The map theory are: {7! } symbols used in VDM for the operators emptyof map m {d 7! f (d)D2 ! D R⇥ R | p(d)} mapfinite defined by comprehension maps from D to R E VDM notation m204 † m map overwrite domain of a map 1 2 dom m rng m E.4 Map notation m(d)
range of a map map application {d1 7! r1 , d2 7! r2 , . . . , dn 7! rn } map enumeration {7!}used in VDM for the operators emptyofmap map theory are: The symbols {d 7! f (d) 2 D ⇥ R | p(d)} map defined by comprehension m m1−→ † m2R map overwrite finite maps from D to R D
domain of a map range of a map m(d) map application {d1 7→ r1 , d2 7→ r2 , . . . , dn 7→ rn } map enumeration empty map {7→} {d 7→ f (d) ∈ D × R | p(d)} map defined by comprehension m1 † m2 map overwrite m1 ∪ m2 map union The pictorial representation of the map operators is: domain restriction of a map sCm −m sC domain deletion of a map dom m rng m
The pictorial representation of the map operators is:
r ⇣ ◆ ◆
The pictorial representation of the†,map [ operators is:
D-set
⌧ D-set
.
⌧ dom
D-set
✓
r
✓
†, [ r ⇣ ◆ ◆
r
dom
.
D
dom
?⌧
? ⌧ rng m rngr ! mR r D !R r ⌘ ✓
r C,⌘ ✓ C C,C
.
⌧
D
.
rng
D6 6R m
D⌧
D
- - R-setR-set
R-set
.() r ()
r ()
⌧
⌧
-
⌧
R
R
R
⌧
E.6 Function notation
205
E.5 Record notation :: mk-N(. . .) o.s1 Type
record generator selector optional nil omitted object µ(o, s1 7→ t) modify a component Each record definition gives distinct selector and constructor functions. For: m
Program :: vars : Id −→ Type body : Stmt∗ the pictorial representation of its types would be:
.
.
vars
body
Program
Id-set
mk-Program
.
E.6 Function notation f : D1 × D2 → R f (d) if · · · then · · · else · · · let x = · · · in · · · f (d: D) r: R pre · · · d · · · post · · · d · · · r · · ·
signature application conditional local definition
Stmt*
Appendix F
Notes on influential people
This appendix contains brief notes about the main people who have played a part in the development of formal semantics and are mentioned in the body of the book. The notes are not intended to be biographies and are limited to facts related to the subject of this book. A more historical account of many of the interactions between these players can be found in [Ast19]. Hans Bekiˇc (1936–1982) Unlike the majority of his colleagues at the IBM Laboratory in Vienna, who were engineers, Bekiˇc was a mathematician. He worked with Lucas on an early ALGOL 60 compiler and then on the 1960s VDL (operational) semantics of PL/I. He spent a year in London working with Peter Landin and this put him in a position where he encouraged the move towards denotational description methods for VDM. As well as his role in developing and co-authoring the VDM semantics of PL/I [BBH+ 74], Bekiˇc was making seminal contributions to concurrency before his untimely death. A collection of his papers was published posthumously as [BJ84]. Robert W. Floyd (1936–2001) Bob Floyd made many contributions to the theoretical side of computing (see [Knu03] for a fitting tribute). One of his significant contributions to research on semantics was [Flo67], which provides a clear account of one way of verifying programs. Floyd’s approach was predicated on flowcharts but the cited paper had a major influence on Hoare’s [Hoa69], which, in turn, is the foundation stone of 50 years of productive research on the formal development of programs. Charles Antony Richard Hoare (b. 1934) Tony Hoare has made many key contributions to the theory of computing and has also been involved in seeing that theoretical ideas are transferred to practical applications. The major semantic avenue discussed in Section 7.3 derives from his paper [Hoa69] on “Axiomatic semantics”. Hoare has tackled the search for unification of the various approaches to semantics (see [HH98]). His contributions have been recognised by the ACM Turing Award in 1983, the Kyoto Prize in 2000, two “Queen’s Awards to Industry” and numerous honorary degrees. There are recorded video interviews on the ACM web site for many Turing Laureates — that for Hoare is available at: © Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
207
208
F Notes on influential people 4622167.cfm Two Festschriften are [Ros94, JRW10] and a selection of his papers up to 1989 is contained in [HJ89]. Peter John Landin (1930–2009) Peter Landin noted the link between programming languages and Church’s Lambda Calculus in [Lan65a, Lan65b]; he also spoke on the subject at the 1964 “Formal Language Description Languages” conference at Baden bei Wien (Landin’s paper is printed as [Lan66b] and a masterful overview of the approaches presented is [Ste66, p. 290]). His [Lan66a] is a classic paper. Together with Rod Burstall, Landin went on to consider algebraic approaches to reasoning about language descriptions. Peter Lauer (b. 1934) Peter Lauer was a member of the IBM Laboratory in Vienna. His interests were more philosophical than most members of the group. Two key points of contact with the material in the current book are his VDL description of ALGOL 60 [Lau68] (undertaken to show that it was the PL/I language that gave rise to the huge size of its description — not the VDL method) and his Ph.D. research supervised by Tony Hoare that showed that the axioms of a language were consistent with an underlying operational semantics [Lau71a, HL74]. Peter Lucas (1935–2015) was one of the original members of the IBM Laboratory in Vienna. He wrote an early ALGOL 60 compiler together with Bekiˇc and then became a key member of the team that wrote three versions of the VDL (operational) description of PL/I in the 1960s. See [LW69, Luc71]. Importantly, he then started to consider how such a description could be used as the basis from which to design compiling algorithms. He championed the idea of identifying “language concepts” that he hoped could be considered separately. A key output from this was his “twin machine” argument which was used to justify implementations of the “block concept” (locating the correct instance of an identifier with nested blocks, procedure calls etc.). References to this research include [Luc68, JL70] — Lucas managed one of the IBM Vienna groups working on the 1970s PL/I compiler and is a co-author of the VDM (denotational) semantics of PL/I [BBH+ 74]. He left the Vienna Lab to move to IBM Research in the USA. John McCarthy (1927–2011) is best known as a father of Artificial Intelligence (AI) research. He did, however, also make significant contributions to formal description of programming languages: his [McC66] paper was presented at the 1964 Formal Language Description Languages working conference at Baden bei Wien. This paper presented a clear case for operational semantic descriptions using “Micro-ALGOL” as an example. His work on AI and formal approaches to computing coalesced in his research on support for theorem proving. Arthur John Robin Gorell Milner (1934–2010) Robin Milner made many important contributions to computer science including LCF [GMW79] (which influenced nearly all subsequent theorem proving assistants), Type inference and the ML programming language [HMT87]. His study of process algebras created CCS and the π-calculus. Plotkin gives credit to Milner for the way that semantics can
F Notes on influential people
209
be presented using inference rules. Milner’s Turing award acceptance speech is printed as [Mil93]. A Festschrift in his honour is [PST00]. Peter David Mosses (b. 1948) Peter Mosses undertook his doctoral studies with Strachey in Oxford. His thesis [Mos75] on “SIS” showed that a denotational semantics could be used to generate a prototype compiler for a language. In parallel with this research he produced one of the descriptions of ALGOL 60 [Mos74] that is discussed in [JA16]. He continued these interests with research into semantics with “Action Semantics” [Mos92], “Modular Operational Semantics” [Mos04] and attempts to build tools for compiler generation [BSM16]. John von Neumann (1903–1957) is, of course, a legendary figure in computing and mathematics — “von Neumann architecture” is a standard phrase. The specific interest in him in this book is his use of annotations of flowcharts in [GvN47]. Gordon David Plotkin (b. 1946) Gordon Plotkin has made many contributions to theoretical topics including “power domains” [Plo76]. The huge debt that this book owes to Plotkin’s work is however “Structural Operational Semantics” which he used in his Aarhus lectures in 1981 — thankfully the lecture notes were republished as [Plo04b]. Dana Stewart Scott (b. 1932) Dana Scott received (together with Michael Rabin) the ACM Turing Award in 1976 — see [Sco77]. The obvious link from the current book to Dana Scott is his fundamental contribution to –what became known as– “Denotational Semantics”. Scott met Strachey at the April 1969 IFIP WG 2.2 meeting in Vienna and was so impressed by the latter’s insights into programming languages that he immediately arranged to extend his stay in Europe and spend the last part of 1969 in Oxford with Strachey. (Scott had previously worked with Jaco de Bakker in Amsterdam — see [dBS69], which was presented during a visit to the IBM Lab in Vienna in August 1969.) Scott initially warned that the untyped Lambda calculus lacked foundations but then found models that gave birth to a whole research direction. Most of the original material was reported as monographs of the Oxford “Programming Research Group” — an accessible account is [Sco80] and Stoy’s [Sto77] provides insight into this exciting period of research. Christopher S. Strachey (1916–75) Strachey formed and led the Programming Research Group at Oxford University. He wrote many wise words about programming languages (e.g. [Str67, Str73]), was a co-designer of the CPL language [BBHS63] and led the way to what became the denotational approach to language description (e.g. [Str66]). Towards the end of his life, Strachey wrote (together with Robert Milne) a submission to the Cambridge University Adams Essay Prize; after Strachey’s untimely death, Milne completed this as [MS76]. Martin Campbell-Kelly wrote a wonderful survey of Strachey’s life and achievements [CK85]. Interesting videos of the speakers and panel discussion from a conference to mark the hundredth anniversary of Strachey’s birth are online at: Joe E. Stoy (b. 1943) Joe Stoy was a key figure in the Oxford “Programming Research Group” who went on to co-found “Bluespec Inc.”. In his time at Oxford
210
F Notes on influential people
he was a contributor to the development of what became known as “Denotational Semantics” and authored the classic reference on the subject [Sto77]. After Strachey’s untimely death, Stoy held the group together and provided the foundation for the arrival of Tony Hoare. Alan Mathison Turing (1912–1954) Alan Turing’s important contributions are well documented — a perfect biography was provided by Andrew Hodges and little that has been written since comes close to the insight in [Hod83]. Despite the fundamental nature of [Tur36], the link to the current material is Turing’s three-page [Tur49]. ACM’s most prestigious award is, of course, named after Alan Turing. Adriaan van Wijngaarden (1916–1987) Aad van Wijngaarden was a Dutch mathematician who both transitioned to computing and became the father of Dutch computer science. To his credit are his contributions to ALGOL 60; more controversial was his spearheading of the ALGOL 68 effort. His paper [vW66a] tackled the messy issue of reasoning about computer arithmetic (where finite representations of even integers do not match Peano’s natural numbers) — this paper was cited and used in [Hoa69]. Niklaus Emil Wirth (b. 1934) Niklaus Wirth is undoubtedly one of the most influential and successful designers of programming languages (and, in fact, systems including hardware). Landmark languages include ALGOL W, Pascal, the Modula series of languages and Oberon. A video of his Turing Award lecture is available at:1 1025774.cfm A lovely book noting his contributions and superb taste is [BGP00]. Two of his own influential books are [Wir73, Wir76] and he is also a co-author of [DDH72]. Heinz Zemanek (1920–2014) Heinz Zemanek founded the IBM Laboratory in Vienna and led it through the early research on VDL — the operational descriptions of PL/I. Trained as an electrical engineer, he became a leader who furthered the careers of his many colleagues. One notable contribution that relates to the subject matter of the current book was his hosting of the Baden bei Wien IFIP Working Conference on “Formal Language Description Languages”. His own interests moved more into philosophy and his energy into international affairs including the IFIP organisation. A tribute to Zemanek is contained in [FCSR15].
1
Towards the end of this useful interview, Wirth says: “My idea was that programming languages allow programming on a higher level of abstraction compared to machine coding. You can abstract specific properties and facilities of a specific machine. You can abstract to a higher level and create programs that will then be available and runnable on all computers. That’s called abstraction. And the term “higher-level languages” comes exactly from that. . . . Look at today’s situation. People program in C++, the worst disease ever created. Or C# or Java, which are a bit better. But they all suffer from their mightiness. I’m always expecting they’re going to collapse under their own weight.”
References
[A+ 97]
Samson Abramsky et al. Semantics of interaction: an introduction to game semantics. Semantics and Logics of Computation, 14(1):1–32, 1997. [AB87] Bijan Arbab and Daniel M Berry. Operational and denotational semantics of Prolog. The Journal of Logic Programming, 4(4):309–329, 1987. [Abr96] J.-R. Abrial. The B-Book: Assigning Programs to Meanings. Cambridge University Press, 1996. [Abr10] J.-R. Abrial. The Event-B Book. Cambridge University Press, Cambridge, UK, 2010. [Abr13] Samson Abramsky. Semantics of interaction. arXiv preprint arXiv:1312.0121, 2013. [AC12] Martin Abadi and Luca Cardelli. A Theory of Objects. Springer-Verlag, 2012. [ACJ72] C. D. Allen, D. N. Chapman, and C. B. Jones. A formal definition of ALGOL 60. Technical Report 12.105, IBM Laboratory Hursley, 8 1972. [Acz82] P. H. G. Aczel. A note on program verification. (private communication) Manuscript, Manchester, 1 1982. [AdB91] Pierre America and Frank S. de Boer. A proof theory for a sequential version of POOL. CWI, Nationaal Instituut voor Onderzoek op het gebied van Wiskunde en Informatica, 1991. [AdBKR89] Pierre America, Jaco de Bakker, Joost N Kok, and Jan Rutten. Denotational semantics of a parallel object-oriented language. Information and Computation, 83(2):152– 205, 1989. [AGLP88] D.J. Andrews, A. Garg, S.P.A. Lau, and J.R. Pitchers. The formal definition of Modula-2 and its associated interpreter. In Robin E. Bloomfield, Lynn S. Marshall, and Roger B. Jones, editors, VDM ’88 VDM — The Way Ahead, volume 328 of Lecture Notes in Computer Science, pages 167–177. Springer-Verlag, 1988. [AH82] Derek Andrews and Wolfgang Henhapl. Pascal. In Bjørner and Jones [BJ82], chapter 6, pages 175–252. [AJ18] Troy K. Astarte and Cliff B. Jones. Formal semantics of ALGOL 60: Four descriptions in their historical context. In Liesbeth De Mol and Giuseppe Primiero, editors, Reflections on Programming Systems - Historical and Philosophical Aspects, pages 71–141. Springer Philosophical Studies Series, 2018. [Ame89] P. America. Issues in the design of a parallel object-oriented language. Formal Aspects of Computing, 1(4):366–411, 1989. [And92] James H. Andrews. Logic Programming: Operational Semantics and Proof Theory. Distinguished Dissertations in Computer Science. Cambridge, 1992. [ANS76] ANSI. Programming language PL/I. Technical Report X3.53-1976, American National Standard, 1976. [AO19] Krzysztof R. Apt and Ernst-R¨udiger Olderog. Fifty years of Hoare’s logic. Formal Aspects of Computing, 31(6):751–807, 2019. © Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
211
212 [Apt81] [AR92] [ASS85] [Ast19] [BA90] [BA06] [Bac78] [Bae90] [Bar06] [BBG+ 60]
[BBG+ 63]
[BBG+ 68] [BBH+ 74] [BBHS63] [BCJ84] [Bek64] [Bek73] [Bey09] [BG96] [BGP00] [BH73] [BHG87] [BHJ20]
References Krzysztof R. Apt. Ten years of Hoare’s logic: a survey—part I. ACM Transactions on Programming Languages and Systems, 3(4):431–483, 10 1981. Pierre America and Jan Rutten. A layered semantics for a parallel object-oriented language. Formal Aspects of Computing, 4(4):376–408, 1992. Harold Abelson, Gerald Jay Sussman, and Julie Sussman. Structure and Interpretation of Computer Programs. MIT Press, 1985. Troy K. Astarte. Formalising Meaning: a History of Programming Language Semantics. PhD thesis, Newcastle University, 6 2019. Mordechai Ben-Ari. Principles of Concurrent and Distributed Programming. Prentice Hall International Series in Computer Science. Prentice Hall, 1990. Mordechai Ben-Ari. Principles of Concurrent and Distributed Programming. Pearson Education, 2006. John Backus. Can programming be liberated from the Von Neumann style?: A functional style and its algebra of programs. Communications of the ACM, 21(8):613– 641, August 1978. J. C. M. Baeten, editor. Applications of Process Algebra. Cambridge University Press, 1990. John Barnes. High Integrity Software: The SPARK Approach to Safety and Security. Addison-Wesley, 2006. John W. Backus, Friedrich L. Bauer, Julien Green, Charles Katz, John McCarthy, Peter Naur, Alan J. Perlis, Heinz Rutishauser, Klaus Samelson, Bernard Vauquois, et al. Report on the algorithmic language ALGOL 60. Numerische Mathematik, 2(1):106–136, 1960. John W. Backus, Friedrich L. Bauer, Julien Green, Charles Katz, John McCarthy, Peter Naur, Alan J. Perlis, Heinz Rutishauser, Klaus Samelson, Bernard Vauquois, Joseph H. Wegstein, Adriaan van Wijngaarden, and Michael Woodger. Revised report on the algorithmic language ALGOL 60. The Computer Journal, 5(4):349–367, 1963. Henry Bauer, Sheldon Becker, Susan L Graham, Edwin Satterthwaite, and Richard L Sites. Algol W language description. Technical Report CS89, Computer Science Dept., Stanford Univ, 1968. Hans Bekiˇc, Dines Bjørner, Wolfgang Henhapl, Cliff B. Jones, and Peter Lucas. A formal definition of a PL/I subset. Technical Report 25.139, IBM Laboratory Vienna, 12 1974. D. W. Barron, J. N. Buxton, D. F. Hartley, and C. Strachey. The main features of CPL. Computer Journal, 6(2):134–143, 1963. H. Barringer, J.H. Cheng, and C. B. Jones. A logic covering undefinedness in program proofs. Acta Informatica, 21(3):251–269, 1984. Hans Bekiˇc. Defining a language in its own terms. Technical Report 25.3.016, IBM Laboratory Vienna, 12 1964. Hans Bekiˇc. An introduction to ALGOL 68. Annual Review in Automatic Programming, 7:143–169, 1973. Hard copy. Kurt W. Beyer. Grace Hopper and the Invention of the Information Age. The MIT Press, 2009. Thomas J. Bergin and Richard G. Gibson, editors. History of Programming Languages—II. ACM Press, New York, NY, USA, 1996. L´aszl´o B¨osz¨orm´enyi, J¨urg Gutknecht, and Gustav Pomberger, editors. The School of Niklaus Wirth: the art of simplicity. dpunkt. verlag, 2000. P. Brinch Hansen. Operating System Principles. Prentice Hall Series in Automatic Computation. Prentice Hall, 1973. P. A. Bernstein, V. Hadzilacos, and N. Goodman. Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987. Alan Burns, Ian J. Hayes, and Cliff B. Jones. Deriving specifications of control programs for cyber physical systems. The Computer Journal, 63(5):774–790, 2020.
References [BHR84] [BIJW75] [BJ78] [BJ82] [BJ84] [Bla86] [Bli81] [Bli88] [BM81] [BO80a] [BO80b] [BSM16] [BvW98] [BW71] [CDD+ 15]
[CDG+ 89] [CH72] [CH79] [Chu41] [CJ91] [CJ00]
[CK85]
213 Stephen Brookes, Charles Anthony Richard Hoare, and Andrew William Roscoe. A theory of communicating sequential processes. Journal of the ACM, 31(3):560–599, 7 1984. H. Bekiˇc, H. Izbicki, C. B. Jones, and F. Weissenb¨ock. Some experiments with using a formal language definition in compiler development. Laboratory Note LN 25.3.107, IBM Laboratory, Vienna, 12 1975. D. Bjørner and C. B. Jones, editors. The Vienna Development Method: The MetaLanguage, volume 61 of Lecture Notes in Computer Science. Springer-Verlag, 1978. Dines Bjørner and Cliff B. Jones, editors. Formal Specification and Software Development. Prentice Hall International, 1982. Hans Bekiˇc and Cliff B. Jones. Programming Languages and Their Definition: Hans Bekiˇc (1936-1982). Selected papers, volume 177 of Lecture Notes in Computer Science. Springer-Verlag, 1984. Stephen Blamey. Partial-Valued logic. PhD thesis, University of Oxford, 1986. Andrzej J. Blikle. On the development of correct specified programs. IEEE Transactions on Software Engineering, 7(5):519–527, 1981. A. Blikle. Three-valued predicates for software specification and validation. In R. Bloomfield, L. Marshall, and R. Jones, editors, VDM—The Way Ahead, volume 328 of Lecture Notes in Computer Science, pages 243–266. Springer-Verlag, 1988. R. S. Boyer and J. S. Moore. The Correctness Problem in Computer Science. International Lecture Series in Computer Science. Academic Press, London, 1981. D. Bjørner and O. N. Oest, editors. The DDC Ada Compiler Development Project, chapter 0. Volume 98 of Bjørner and Oest [BO80b], 1980. D. Bjørner and O. N. Oest, editors. Towards a Formal Description of Ada, volume 98 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 1980. L Binsbergen, Neil Sculthorpe, and Peter D Mosses. Tool support for componentbased semantics. In Companion Proceedings of the 15th International Conference on Modularity, pages 8–11. ACM, 2016. Ralph-Johan Back and Joakim von Wright. Refinement Calculus: A Systematic Introduction. Springer-Verlag, 1998. H. Bekiˇc and K. Walk. Formalization of storage properties. In E. Engeler, editor, [Eng71], pages 28–61. Springer-Verlag, 1971.. Moving fast with software verification. In NASA Formal Methods, volume 9058 of Lecture Notes in Computer Science, pages 3–11. Springer International Publishing, 2015. Luca Cardelli, James Donahue, Lucille Glassman, Mick Jordan, Bill Kalsow, and Greg Nelson. Modula-3 report (revised). Technical report, DEC SRC, 1989. Maurice Clint and C. A. R. Hoare. Program proving: Jumps and functions. Acta Informatica, 1(3):214–224, 1972. Derek Coleman and Jane W. Hughes. The clean termination of Pascal programs. Acta Informatica, 11(3):195–210, 1979. A. Church. The Calculi of Lambda-Conversion. Princeton University Press, 1941. J. H. Cheng and C. B. Jones. On the usability of logics which handle partial functions. In C. Morgan and J. C. P. Woodcock, editors, 3rd Refinement Workshop, pages 51–69. Springer-Verlag, 1991. Pierre Collette and Cliff B. Jones. Enhancing the tractability of rely/guarantee specifications in the development of interfering operations. In Gordon Plotkin, Colin Stirling, and Mads Tofte, editors, Proof, Language and Interaction, chapter 10, pages 277–307. MIT Press, 2000. Martin Campbell-Kelly. Christopher Strachey, 1916-1975: A biographical note. IEEE Annals of the History of Computing, 1(7):19–42, 1985.
214 [Coo18]
References
Byron Cook. Formal reasoning about the security of Amazon Web Services. In International Conference on Computer Aided Verification, pages 38–47. SpringerVerlag, 2018. [Dat82] C. J. Date. A formal definition of the relational model. ACM Sigmod Record, 13(1):18–29, 1982. [Dav65a] Martin Davis. Computability and Undecidability. Dover, 1965. [Dav65b] Martin Davis, editor. The Undecidable. Raven Press, 1965. [dB91] Frank de Boer. A proof system for the language POOL. In J.W. de Bakker, W.P. de Roever, and G. Rozenberg, editors, Foundations of Object-Oriented Languages, volume 489 of Lecture Notes in Computer Science, pages 124–150. Springer-Verlag, 1991. [dBS69] J. W. de Bakker and D. Scott. A theory of programs. Manuscript notes for IBM Seminar, Vienna, 8 1969. O.-J. Dahl, E. W. Dijkstra, and C. A. R. Hoare, editors. Structured Programming. [DDH72] Academic Press, 1972. [DFLO19] Dino Distefano, Manuel F¨ahndrich, Francesco Logozzo, and Peter W. O’Hearn. Scaling static analyses at Facebook. Communications of the ACM, 62(8):62–70, 2019. [DFPV09] Mike Dodds, Xinyu Feng, Matthew Parkinson, and Viktor Vafeiadis. Deny-guarantee reasoning. In Giuseppe Castagna, editor, Programming Languages and Systems, volume 5502 of Lecture Notes in Computer Science, pages 363–377. Springer Berlin / Heidelberg, 2009. [DGKL80] V. Donzeau-Gouge, G. Kahn, and B. Lang. On the formal definition of Ada. In N.D. Jones, editor, Semantics-Directed Compiler Generation: Proceedings of a Workshop Aarhus, Denmark, January 1980, volume 94 of Lecture Notes in Computer Science, pages 475–489. Springer-Verlag, 1980. [DHMS12] Brijesh Dongol, Ian J. Hayes, Larissa Meinicke, and Kim Solin. Towards an algebra for real-time programs. In W. Kahl and T.G. Griffin, editors, 13th International Conference on Relational and Algebraic Methods in Computer Science (RAMiCS), volume 7560 of Lecture Notes in Computer Science, pages 50–65. Springer-Verlag, 2012. [Dij62] E. W. Dijkstra. Over de sequetialiteit van procesbeschrivjingen. EWD35, 1962. [Dij68a] E. W. Dijkstra. Cooperating sequential processes. In F. Genuys, editor, Programming Languages, pages 43–112. Academic Press, New York, 1968. [Dij68b] E. W. Dijkstra. Go to statement considered harmful. Communications of the ACM, 11(3):147–148, 1968. [Dij76] E. W. Dijkstra. A Discipline of Programming. Prentice Hall, Englewood Cliffs, N.J., USA, 1976. Alan AA Donovan and Brian W Kernighan. The Go programming language. [DK15] Addison-Wesley Professional, 2015. O.-J. Dahl, B. Myhrhaug, and K. Nygaard. SIMULA 67 common base language. [DMN68] Technical Report S-2, Norwegian Computing Center, Oslo, 1968. James Edward Donahue. Complementary Definitions of Programming Language [Don76] Semantics, volume 42 of Lecture Notes in Computer Science. Springer-Verlag New York, Inc., 1976. [Don80] V. Donzeau-Gouge. Formal Definition of the Ada Programming Language. PhD thesis, INRIA, 1980. [dRdBH+ 01] Willem-Paul de Roever, Frank de Boer, Ulrich Hanneman, Jozef Hooman, Yassine Lakhnech, Mannes Poel, and Job Zwiers. Concurrency Verification: Introduction to Compositional and Noncompositional Methods. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 2001. [DYBG+ 13] Thomas Dinsdale-Young, Lars Birkedal, Philippa Gardner, Matthew Parkinson, and Hongseok Yang. Views: compositional reasoning for concurrent programs. In Proceedings of the 40th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 287–300. ACM, 2013.
References
215
E. Engeler. Symposium on Semantics of Algorithmic Languages. Number 188 in Lecture Notes in Mathematics. Springer-Verlag, 1971. C. C. Elgot and A. Robinson. Random access stored-program machines: An ap[ER64] proach to programming languages. Journal of the ACM, 11(4):365–399, 10 1964. [FCSR15] Karl Anton Fr¨oschl, Gerhard Chroust, Johan Stockinger, and Norbert Rozsenich, editors. In Memoriam: Heinz Zemanek. Oesterreichische Computer Gesellschaft, 2015. [Fen09] Xinyu Feng. Local rely-guarantee reasoning. In Proceedings of the 36th annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’09, pages 315–327, New York, NY, USA, 2009. ACM. Xinyu Feng, Rodrigo Ferreira, and Zhong Shao. On the relationship between con[FFS07] current separation logic and assume-guarantee reasoning. In R. De Nicola, editor, ESOP: Programming Languages and Systems, pages 173–188. Springer-Verlag, 2007. Michael Foley and Charles Antony Richard Hoare. Proof of a recursive program: [FH71] Quicksort. The Computer Journal, 14(4):391–395, 1971. [Fis11] Michael Fisher. An Introduction to Practical Formal Methods using Temporal Logic. John Wiley & Sons, 2011. [Flo67] R. W. Floyd. Assigning meanings to programs. In J.T. Schwartz, editor, Mathematical Aspects of Computer Science, volume 19 of Proc. of Symposia in Applied Mathematics, pages 19–32. American Mathematical Society, 1967. [Fra86] N. Francez. Fairness. Springer-Verlag, New York, 1986. Michael Franz. Oberon – the overlooked jewel. In B¨osz¨orm´enyi et al. [BGP00]. [Fra00] James Gosling, Bill Joy, Guy Steele, and Gilad Bracha. The Java Language Specifi[GJSB00] cation. Addison-Wesley Professional, 2000. [GMW79] M. Gordon, R. Milner, and C. Wadsworth. Edinburgh LCF, volume 78 of Lecture Notes in Computer Science. Springer-Verlag, 1979. [Gor86] M. Gordon. Why higher-order logic is a good formalism for specifying and verifying hardware. In G. Milne and P.A. Subrahmanyam, editors, Formal Aspects of VLSI Design, pages 153–177. North-Holland, 1986. [GR83] A. Goldberg and D. Robson. Smalltalk-80: The Language and its Implementation. Addison-Wesley, 1983. Peter Grossman. Discrete Mathematics for Computing. Macmillan International [Gro09] Higher Education, 2009. [GvN47] Herman H. Goldstine and John von Neumann. Planning and coding of problems for an electronic computing instrument. Technical report, Institute of Advanced Studies, Princeton, 1947. [GvRB+ 12] Dick Grune, Kees van Reeuwijk, Henri E Bal, Ceriel JH Jacobs, and Koen Langendoen. Modern Compiler Design. Springer-Verlag, New York, NY, 2nd edition, 2012. A. Hansal. A formal definition of a relational data base system. Technical Report [Han76] UKSC 0080, IBM UK Scientific Centre, Peterlee, Co. Durham, 6 1976. [Han04] Chris Hankin. Lambda Calculi: a guide for Computer Scientists. Oxford University Press, 2004. [Har09] John Harrison. Handbook of Practical Logic and Automated Reasoning. Cambridge University Press, 2009. I. J. Hayes, editor. Specification Case Studies. Prentice Hall International, 1987. [Hay87] [HC12] Ian J. Hayes and Robert J. Colvin. Integrated operational semantics: Small-step, big-step and multi-step. In John Derrick, John Fitzgerald, Stefania Gnesi, Sarfraz Khurshid, Michael Leuschel, Steve Reeves, and Elvinia Riccobene, editors, Abstract State Machines, Alloy, B, VDM, and Z - Third International Conference, ABZ 2012, Pisa, Italy, June 18-21, 2012. Proceedings, volume 7316 of Lecture Notes in Computer Science, pages 21–35. Springer-Verlag, 2012. [Eng71]
216
References
[HCM+ 16] I. J. Hayes, R. J. Colvin, L. A. Meinicke, K. Winter, and A. Velykis. An algebra of synchronous atomic steps. In J. Fitzgerald, C. Heitmeyer, S. Gnesi, and A. Philippou, editors, FM 2016: Formal Methods: 21st International Symposium, Proceedings, volume 9995 of Lecture Notes in Computer Science, pages 352–369. Springer International Publishing, 11 2016. [Hen90] Matthew Hennessy. The Semantics of Programming Languages: an Elementary Introduction using Structural Operational Semantics. John Wiley & Sons, New York, NY, 1990. [HH98] C. A. R. Hoare and Jifeng He. Unifying Theories of Programming. Prentice Hall, 1998. [HJ70] W. Henhapl and C. B. Jones. The block concept and some possible implementations, with proofs of equivalence. Technical Report 25.104, IBM Laboratory Vienna, 4 1970. W. Henhapl and C. B. Jones. A run-time mechanism for referencing variables. In[HJ71] formation Processing Letters, 1(1):14–16, 1971. K. V. Hanford and C. B. Jones. Dynamic syntax: A concept for the definition of the [HJ73] syntax of programming languages. In Annual Review in Automatic Programming, volume 7, pages 115–140. Pergamon, 1973. Wolfgang Henhapl and Cliff B. Jones. A formal definition of ALGOL 60 as described [HJ78] in the 1975 modified report. In D. Bjørner and Cliff B. Jones, editors, The Vienna Development Method: The Meta-Language, volume 61 of Lecture Notes in Computer Science, pages 305–336. Springer-Verlag, 1978. Wolfgang Henhapl and Cliff B. Jones. ALGOL 60. In Dines Bjørner and Cliff B. [HJ82] Jones, editors, Formal Specification and Software Development, chapter 6, pages 141–174. Prentice Hall International, 1982. C. A. R. Hoare and Cliff B. Jones. Essays in Computing Science. Prentice Hall, Inc., [HJ89] 1989. I. J. Hayes and C. B. Jones. A guide to rely/guarantee thinking. In Jonathan Bowen, [HJ18] Zhiming Liu, and Zili Zhan, editors, Engineering Trustworthy Software Systems – Third International School, SETSS 2017, volume 11174 of Lecture Notes in Computer Science, pages 1–38. Springer-Verlag, 2018. C. A. R. Hoare and P. E. Lauer. Consistent and complementary formal theories of [HL74] the semantics of programming languages. Acta Informatica, 3(2):135–153, 1974. [HM87] A. Nico Habermann and Ugo Montanari, editors. System Development and Ada, volume 275 of Lecture Notes in Computer Science. Springer-Verlag, 1987. [HM18] I. J. Hayes and L. A. Meinicke. Encoding fairness in a synchronous concurrent program algebra. In Klaus Havelund, Jan Peleska, Bill Roscoe, and Erik de Vink, editors, Formal Methods, volume 10951 of Lecture Notes in Computer Science, pages 222–239. Springer International Publishing, 2018. [HMRC87] Richard C. Holt, Philip A. Matthews, J. Alan Rosselet, and James R. Cordy. The Turing Programming Language: Design and Definition. Prentice Hall, Inc., 1987. [HMT87] Robert Harper, Robin Milner, and Mads Tofte. The semantics of standard ML: Version 1. Technical Report ECS-LFCS-87-36, University of Edinburgh, Department of Computer Science, Laboratory for Foundations of Computer Science, 1987. Robert Harper, Robin Milner, and Mads Tofte. The definition of standard ML, ver[HMT88] sion 2. Technical Report ECS-LFCS-88-62, LFCS Report Series, 1988. C. A. R. Hoare. Algorithm 63, partition; algorithm 64, quicksort; algorithm 65, find. [Hoa61] Communications of the ACM, 4(7):321–322, 7 1961. [Hoa69] C. A. R. Hoare. An axiomatic basis for computer programming. Communications of the ACM, 12(10):576–580, 1969. C. A. R. Hoare. Procedures and parameters: An axiomatic approach. In E. Engeler, [Hoa71a] editor, Symposium On Semantics of Algorithmic Languages, volume 188 of LNM, pages 102–116. Springer-Verlag, 1971. C. A. R. Hoare. Proof of a program: FIND. Communications of the ACM, 14(1):39– [Hoa71b] 45, January 1971.
References [Hoa72] [Hoa74a] [Hoa74b] [Hoa78] [Hoa81] [Hoa85] [Hod83] [Hog91] [Hop81] [Hut16] [HvS12] [HW73] [HW90] [IPW01] [Ive62] [Ive07] [Izb75] [JA16] [JA17] [JA18]
[Jac00] [JGK+ 15]
[JH16]
217 C. A. R. Hoare. Towards a theory of parallel programming. In C. A. R. Hoare and R. Perrott, editors, Operating System Techniques, pages 61–71. Academic Press, 1972. C. A. R. Hoare. Monitors: An operating system structuring concept. Communications of the ACM, 17(10):549–557, 1974. C.A.R. Hoare. Hints on programming language design. In C.J. Bunyan, editor, State of the Art Report 20: Computer Systems Reliability, pages 505–534. Pergamon/Infotech, 1974. C. A. R. Hoare. Communicating sequential processes. Communications of the ACM, 21(8):666–677, 1978. C.A.R. Hoare. The emperor’s old clothes: The ACM Turing Award Lecture. Communications of the ACM, 24(2):75–83, 2 1981. C. A. R. Hoare. Communicating Sequential Processes. Prentice Hall, 1985. A. Hodges. Alan Turing: The Enigma. Burnett Books, 1983. Vintage edition, 1992. John Hogg. Islands: Aliasing protection in object-oriented languages. ACM SIGPLAN Notices, 26(11):271–285, 1991. Grace Murray Hopper. Keynote address at ACM SIGPLAN History of Programming Languages conference, June c1–3 1978. In Wexelblat [Wex81]. Graham Hutton. Programming in Haskell. Cambridge University Press, 2nd edition, 2016. Tony Hoare and Stephan van Staden. In praise of algebra. Formal Aspects of Computing, 24(4-6):423–431, 2012. C. A. R. Hoare and N. Wirth. An axiomatic definition of the programming language Pascal. Acta Informatica, 2(4):335–355, 1973. Maurice Herlihy and Jeannette M. Wing. Linearizability: A correctness condition for concurrent objects. ACM Trans. Program. Lang. Syst., 12(3):463–492, 1990. Atsushi Igarashi, Benjamin C Pierce, and Philip Wadler. Featherweight Java: a minimal core calculus for Java and GJ. ACM Transactions on Programming Languages and Systems (TOPLAS), 23(3):396–450, 2001. K. E. Iverson. A Programming Language. J. Wiley, 1962. Kenneth E Iverson. Notation as a tool of thought. ACM SIGAPL APL Quote Quad, 35(1-2):2–31, 2007. H. Izbicki. On a consistency proof of a chapter of a formal definition of a PL/I subset. Technical Report TR 25.142, IBM Laboratory Vienna, 2 1975. Cliff B. Jones and Troy K. Astarte. An exegesis of four formal descriptions of ALGOL 60. Technical Report CS-TR-1498, Newcastle University School of Computer Science, 9 2016. Cliff B. Jones and Troy K. Astarte. Challenges for semantic description: comparing responses from the main approaches. Technical Report CS-TR-1516, Newcastle University School of Computer Science, 11 2017. Cliff B. Jones and Troy K. Astarte. Challenges for semantic description: comparing responses from the main approaches. In Jonathan P. Bowen, Zili Zhang, and Zhiming Liu, editors, Proceedings of the Third School on Engineering Trustworthy Software Systems, volume 11174 of Lecture Notes in Computer Science, pages 176–217, 2018. Michael Jackson. Problem Frames: Analyzing and Structuring Software Development Problems. Addison-Wesley, 2000. Jean-Baptiste Jeannin, Khalil Ghorbal, Yanni Kouskoulas, Ryan Gardner, Aurora Schmidt, Erik Zawadzki, and Andr´e Platzer. A formally verified hybrid system for the next-generation airborne collision avoidance system. In C. Baier and C. Tinelli, editors, International Conference on Tools and Algorithms for the Construction and Analysis of Systems, volume 9035 of Lecture Notes in Computer Science, pages 21– 36. Springer-Verlag, 2015. Cliff B. Jones and Ian J. Hayes. Possible values: Exploring a concept for concurrency. Journal of Logical and Algebraic Methods in Programming, 85(5):972–984, 2016.
218 [JHC15] [JHJ07]
[JL70] [JL71] [JL96] [JM94] [Jon76] [Jon80] [Jon82a] [Jon82b] [Jon86] [Jon90] [Jon93] [Jon00] [Jon01] [Jon03] [JP11] [JRW10] [JVY17]
[JY15]
[Kin69]
References Cliff B. Jones, Ian J. Hayes, and Robert J. Colvin. Balancing expressiveness in formal approaches to concurrency. Formal Aspects of Computing, 27(3):465–497, 2015. Cliff B. Jones, Ian J. Hayes, and Michael A. Jackson. Deriving specifications for systems that are connected to the physical world. In Lecture Notes in Computer Science, pages 364–390. Springer Verlag, 2007. C. B. Jones and P. Lucas. Proving correctness of implementation techniques. Technical Report TR 25.110, IBM Laboratory Vienna, 8 1970. C. B. Jones and P. Lucas. Proving correctness of implementation techniques. In E. Engeler, editor, A Symposium on Algorithmic Languages, volume 188 of Lecture Notes in Mathematics, pages 178–211. Springer-Verlag, 1971. Richard Jones and Rafael D Lins. Garbage Collection: Algorithms for Automatic Dynamic Memory Management. Wiley, 1996. C.B. Jones and C.A. Middelburg. A typed logic of partial functions reconstructed classically. Acta Informatica, 31(5):399–430, 1994. C. B. Jones. Formal definition in compiler development. Technical Report 25.145, IBM Laboratory Vienna, 2 1976. C. B. Jones. Software Development: A Rigorous Approach. Prentice Hall International, Englewood Cliffs, N.J., USA, 1980. Cliff B. Jones. Compiler design. In Bjørner and Jones [BJ82], chapter 8, pages 253–270. Cliff B. Jones. More on exception mechanisms. In Dines Bjørner and Cliff B. Jones, editors, Formal Specification and Software Development, chapter 5, pages 125–140. Prentice Hall International, 1982. C. B. Jones. Systematic Software Development Using VDM. Prentice Hall International, 1986. C. B. Jones. Systematic Software Development Using VDM. Prentice Hall International, second edition, 1990. C. B. Jones. A pi-calculus semantics for an object-based design notation. In E. Best, editor, CONCUR’93: 4th International Conference on Concurrency Theory, volume 715 of Lecture Notes in Computer Science, pages 158–172. Springer-Verlag, 1993. C. B. Jones. Compositionality, interference and concurrency. In Jim Davies, Bill Roscoe, and Jim Woodcock, editors, Millennial Perspectives in Computer Science, pages 175–186. Macmillan Press, 2000. C. B. Jones. The transition from VDL to VDM. Journal of Universal Computer Science, 7(8):631–640, 2001. Cliff B. Jones. The early search for tractable ways of reasoning about programs. IEEE Annals of the History of Computing, 25(2):26–49, 2003. Cliff B. Jones and Ken G. Pierce. Elucidating concurrent algorithms via layers of abstraction and reification. Formal Aspects of Computing, 23(3):289–306, 2011. Cliff B Jones, A William Roscoe, and Kenneth R Wood, editors. Reflections on the Work of C.A.R. Hoare. Springer Science & Business Media, 2010. Cliff B. Jones, Andrius Velykis, and Nisansala Yatapanage. General lessons from a rely/guarantee development. In Kim Guldstrand Larsen, Oleg Sokolsky, and Ji Wang, editors, Dependable Software Engineering: Theories, Tools, and Applications, volume 10606 of Lecture Notes in Computer Science, pages 3–24. Springer-Verlag, 2017. Cliff B. Jones and Nisansala Yatapanage. Reasoning about separation using abstraction and reification. In Radu Calinescu and Bernhard Rumpe, editors, Software Engineering and Formal Methods, volume 9276 of Lecture Notes in Computer Science, pages 3–19. Springer-Verlag, 2015. J. C. King. A Program Verifier. PhD thesis, Department of Computer Science, Carnegie-Mellon University, 1969.
References [Kin71] [Kle52] [Knu67] [Knu73] [Knu74a] [Knu74b] [Knu03] [Kol76] [Koz97] [KTB88]
[L+ 94] [Lan65a] [Lan65b] [Lan66a] [Lan66b] [Lau68] [Lau71a] [Lau71b] [Lee72] [Lev84] [Lin93] [Luc68] [Luc71] [Łuk20] [LV16]
[LvdM80]
219 J. C. King. A program verifier. In C. V. Freiman, editor, Information Processing 71, pages 234–249. North-Holland, 1971. Stephen C. Kleene. Introduction to Metamathematics. North-Holland, 1952. D. E. Knuth. The remaining trouble spots in ALGOL 60. Communications of the ACM, 10(10):611–618, 1967. D. E. Knuth. Sorting and Searching, volume III of The Art of Computer Programming. Addison-Wesley Publishing Company, 1973. Donald E Knuth. Computer programming as an art. Communications of the ACM, 17(12):667–673, 1974. Donald E Knuth. Structured programming with go to statements. ACM Computing Surveys (CSUR), 6(4):261–301, 1974. Donald E. Knuth. Robert W. Floyd, in memoriam. ACM SIGACT News, 34(4):3–13, 12 2003. George Koletsos. Sequent calculus and partial logic. Master’s thesis, University of Manchester, 1976. Dexter Kozen. Kleene algebra with tests. ACM Trans. Program. Lang. Syst., 19(3):427–443, May 1997. B. Konikowska, A. Tarlecki, and A. Blikle. A three-valued logic for software specification and validation. In R. Bloomfield, L. Marshall, and R. Jones, editors, VDM— The Way Ahead, volume 328 of Lecture Notes in Computer Science, pages 218–242. Springer-Verlag, 1988. Nancy Lynch et al. Atomic Transactions. MIT Press, 1994. Peter J. Landin. A correspondence between ALGOL 60 and Church’s lambdanotation: Part I. Communications of the ACM, 8(2):89–101, February 1965. Peter J. Landin. A correspondence between ALGOL 60 and Church’s lambdanotation: Part II. Communications of the ACM, 8(3):158–167, March 1965. P. J. Landin. The next 700 programming languages. Communications of the ACM, 9(3):157–166, 1966. Peter J. Landin. A formal description of ALGOL 60. In Steel [Ste66], pages 266– 294. Peter E. Lauer. Formal definition of ALGOL 60. Technical Report 25.088, IBM Laboratory Vienna, 12 1968. P. Lauer. Consistent formal theories of the semantics of programming languages. Technical Report TR 25.121, IBM Laboratory Vienna, 11 1971. P. E. Lauer. Consistent Formal Theories of the Semantics of Programming Languages. PhD thesis, Queen’s University of Belfast, 1971. Printed as TR 25.121, IBM Lab. Vienna. John AN Lee. Computer Semantics. Van Nostrand Reinhold, 1972. Henry M. Levy. Capability-Based Computer Systems. Butterworth-Heinemann, Newton, MA, USA, 1984. C. H. Lindsey. A history of ALGOL 68. In The Second ACM SIGPLAN Conference on History of Programming Languages, HOPL-II, pages 97–132. ACM, 1993. Peter Lucas. Two constructive realisations of the block concept and their equivalence. Technical Report TR 25.085, IBM Laboratory Vienna, 6 1968. P. Lucas. Formal definition of programming languages and systems. In C. V. Freiman, editor, Information Processing 71. Proceedings of the IFIP Congress 1971, volume 1, pages 291–297. North-Holland, 1971. J. Łukasiewicz. O logice tr´ojwarto´sciowej (on three-valued logic). Ruch Filozoficzny, 5:169–171, 1920. Ori Lahav and Viktor Vafeiadis. Explaining relaxed memory models with program transformations. In J. Fitzgerald, C. Heitmeyer, S. Gnesi, and A. Philippou, editors, FM 2016: Formal Methods: 21st International Symposium, Limassol, volume 9995 of Lecture Notes in Computer Science, pages 479–495. Springer-Verlag, 2016. CH Lindsey and SG van der Meulen. Informal Introduction to ALGOL 68. NorthHolland, revised edition, 1980.
220 [LW69] [McC66] [Mey88] [Mil78a] [Mil78b] [Mil80] [Mil89] [Mil93] [MJ84] [MK99] [ML65] [Mog89] [Moo65] [Moo19] [Mor70] [Mor88] [Mor90] [Mos74] [Mos75] [Mos85]
[Mos92] [Mos04] [Mos09] [Mos11] [MP66] [MP95] [MP99]
References Peter Lucas and Kurt Walk. On the formal description of PL/I. Annual Review in Automatic Programming, 6:105–182, 1969. John McCarthy. A formal description of a subset of ALGOL. In Formal Language Description Languages for Computer Programming, pages 1–12. North-Holland, 1966. B. Meyer. Object-oriented Software Construction. Prentice Hall, 1988. Robin Milner. Synthesis of communicating behaviour. Mathematical Foundations of Computer Science 1978, 64:71–83, 1978. Robin Milner. A theory of type polymorphism in programming. Journal of Computer and System Sciences, 17(3):348–375, 12 1978. R. Milner. A Calculus of Communicating Systems, volume 92 of Lecture Notes in Computer Science. Springer-Verlag, 1980. Robin Milner. Communication and Concurrency. Prentice Hall, January 1989. Robin Milner. Elements of interaction. Communications of the ACM, 36(1):78–89, 1993. F. L. Morris and C. B. Jones. An early program proof by Alan Turing. Annals of the History of Computing, 6(2):139–143, 1984. Jeff Magee and Jeff Kramer. State Models and Java Programs. Wiley, 1999. John McCarthy and Michael I Levin. LISP 1.5 Programmer’s Manual. MIT Press, 1965. Eugenio Moggi. An Abstract View of Programming Languages. PhD thesis, Edinburgh University Laboratory for the Foundations of Computer Science, 1989. Gordon E. Moore. Cramming more components onto integrated circuits. Electronics, 38(8), 4 1965. J Strother Moore. Milestones from the Pure Lisp theorem prover to ACL2. Formal Aspects of Computing, 31(6):699–732, 2019. F. L. Morris. The next 700 formal language descriptions. Manuscript, 1970. Carroll Morgan. The specification statement. ACM Trans. Program. Lang. Syst., 10(3):403–419, July 1988. Carroll Morgan. Programming from Specifications. Prentice Hall, 1990. Peter David Mosses. The mathematical semantics of ALGOL 60. Technical report, Programming Research Group, 1 1974. Peter David Mosses. Mathematical semantics and compiler generation. PhD thesis, University of Oxford, 4 1975. Ben Moszkowski. Executing temporal logic programs. In Stephen D. Brookes, Andrew William Roscoe, and Glynn Winskel, editors, Seminar on Concurrency, volume 197 of Lecture Notes in Computer Science, pages 111–130. Springer Berlin Heidelberg, 1985. Peter D. Mosses. Action Semantics. Number 26 in Cambridge Tracts in Theoretical Computer Science. Cambridge Press, 1992. Peter D Mosses. Modular structural operational semantics. The Journal of Logic and Algebraic Programming, 60:195–228, 2004. Peter D Mosses. Component-based semantics. In Proceedings of the 8th International Workshop on Specification and Verification of Component-Based Systems, pages 3–10. ACM, 2009. Peter D. Mosses. VDM semantics of programming languages: combinators and monads. Formal Aspects of Computing, 23(2):221–238, 2011. John McCarthy and James A. Painter. Correctness of a compiler for arithmetic expressions. Technical Report CS38, Computer Science Department, Stanford University, 4 1966. Z. Manna and A. Pnueli. Temporal Verification of Reactive Systems: Safety. SpringerVerlag, 1995. Dale Miller and Catuscia Palamidessi. Foundational aspects of syntax. ACM Comput. Surv., 31(3es):11, 1999.
References
221
R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes. Information and Computation, 100(1):1–77, 1992. [MS74] Robert Milne and Christopher Strachey. A theory of programming language semantics. Privately circulated, 1974. [MS76] R. Milne and C. Strachey. A Theory of Programming Language Semantics (Parts A and B). Chapman and Hall, 1976. [MS13] Faron Moller and Georg Struth. Modelling Computing Systems: Mathematics for Computer Science. Springer Science & Business Media, 2013. [MTHM97] Robin Milner, Mads Tofte, Robert Harper, and David MacQueen. The Definition of Standard ML (Revised). MIT Press, 1997. [Nau66] Peter Naur. Proof of algorithms by general snapshots. BIT Numerical Mathematics, 6(4):310–316, 1966. Allen Newell. Documentation of IPL-V. Communications of the ACM, 6(3):86–89, [New63] 1963. Tobias Nipkow. Programming and Proving in Isabelle/HOL. Springer-Verlag, 2009. [Nip09] Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. Isabelle/HOL — A Proof [NPW02] Assistant for Higher-Order Logic, volume 2283 of Lecture Notes in Computer Science. Springer-Verlag, 2002. S. S. Owicki and D. Gries. An axiomatic proof technique for parallel programs I. [OG76] Acta Informatica, 6:319–340, 1976. [O’H07] P. W. O’Hearn. Resources, concurrency and local reasoning. Theoretical Computer Science, 375(1-3):271–307, 5 2007. Martin Odersky and Lex Spoon. Programming in Scala. Artima, 3rd edition, 2016. [OS16] [Owi75] S. S. Owicki. Axiomatic Proof Techniques for Parallel Programs. PhD thesis, Department of Computer Science, Cornell University, 1975. Published as technical report 75-251. J. Owlett. A Theory of Database Schemata – Studies in Conceptual and Relational [Owl79] Schemata. PhD thesis, Wolfson College, Oxford University, 10 1979. [Pai67] J. A. Painter. Semantic correctness of a compiler for an Algol-like language. Technical Report AI Memo 44, Computer Science Department, Stanford University, 3 1967. [Pet81] G.L. Peterson. Myths about the mutual exclusion problem. Information Processing Letters, 12(3):115–116, 1981. [Pet08] Charles Petzold. The Annotated Turing: a Guided Tour Through Alan Turing’s Historic Paper on Computability and the Turing Machine. Wiley Publishing, 2008. [Pie02] Benjamin C. Pierce. Types and Programming Languages. MIT Press, 2002. [PJ03] Simon Peyton Jones. Wearing the hair shirt: a retrospective on Haskell. Invited talk at POPL, 206, 2003. [PL92] Nico Plat and Peter Gorm Larsen. An overview of the ISO/VDM-SL standard. ACM Sigplan Notices, 27(8):76–82, 1992. [Plo76] G. D. Plotkin. A powerdomain construction. SIAM Journal on Computing, 5:452– 487, 9 1976. [Plo81] G. D. Plotkin. A structural approach to operational semantics. Technical Report DAIMI FN-19, Aarhus University, 1981. [Plo04a] Gordon D. Plotkin. The origins of structural operational semantics. Journal of Logic and Algebraic Programming, 60–61:3–15, July–December 2004. [Plo04b] Gordon D. Plotkin. A structural approach to operational semantics. Journal of Logic and Algebraic Programming, 60–61:17–139, July–December 2004. Dag Prawitz. Natural Deduction: a Proof-Theoretical Study. Dover Publications, [Pra65] 1965. [Pri18] Mark Priestley. Routines of Substitution: John von Neumann’s Work on Software Development, 1945–1948. SpringerBriefs in History of Computing. Springer-Verlag, 2018. G. Plotkin, C. Stirling, and M. Tofte, editors. Proof, Language, and Interaction: [PST00] Essays in Honour of Robin Milner. MIT Press, 2000. [MPW92]
222 [Rad81] [Rei12] [Rei13] [Rey93] [Rey02] [RH07] [RNN92] [Ros94] [Sam69] [San99] [Sat75] [Sch97] [Sco69] [Sco77] [Sco80] [Sco00] [Seb16] [Sin67] [Sit74] [SJv19] [SN86] [SS86] [Ste66] [STER11] [Sto77]
References George Radin. The early history and characteristics of PL/I. In Richard L. Wexelblat, editor, History of programming languages, pages 551–589. Academic Press, 1981. Wolfgang Reisig. Petri Nets: an Introduction, volume 4 of Monographs in Theoretical Computer Science. Springer Science & Business Media, 2012. Wolfgang Reisig. Understanding Petri Nets: Modeling Techniques, Analysis Methods, Case Studies. Springer-Verlag, 2013. John C Reynolds. The discoveries of continuations. Lisp and Symbolic Computation, 6(3-4):233–247, 1993. John Reynolds. A logic for shared mutable data structures. In Gordon Plotkin, editor, Proceedings of the Seventeenth Annual IEEE Symp. on Logic in Computer Science, LICS 2002, pages 55–74. IEEE Computer Society Press, 7 2002. Barbara G. Ryder and Brent Hailpern, editors. HOPL III: Proceedings of the Third ACM SIGPLAN Conference on History of Programming Languages, New York, NY, USA, 2007. ACM. H. Riis Nielson and F. Nielson. Semantics with Applications: A Formal Introduction. Wiley, 1992. Bill Roscoe, editor. A Classical Mind: Essays in Honour of CAR Hoare. Pearson Education, 1994. Jean E Sammet. Programming Languages: History and Fundamentals. Prentice Hall, Inc., 1969. Davide Sangiorgi. Typed π-calculus at work: a correctness proof of Jones’s parallelisation transformation on concurrent objects. Theory and Practice of Object Systems, 5(1):25–34, 1999. Edwin H. Satterthwaite. Source Language Debugging Tools. PhD thesis, Stanford University, 1975. Fred B. Schneider. On Concurrent Programming. Texts in Computer Science. Springer-Verlag, 1997. D. Scott. A type-theoretical alternative to CUCH, ISWIM, OWHY. Typescript – Oxford, 10 1969. Dana S Scott. Logic and programming languages. Communications of the ACM, 20(9):634–641, 1977. Dana Scott. Lambda calculus: some models, some philosophy. Studies in Logic and the Foundations of Mathematics, 101:223–265, 1980. Michael L. Scott. Programming Language Pragmatics. Morgan Kaufmann, 2000. ISBN 1-55860-578-9. Robert W. Sebesta. Concepts of Programming Languages. Pearson, eleventh edition, 2016. Michel Sintzoff. Existence of a van Wijngaarden sytnax for every recursively enumerable set. Annales Soc. Sci. Bruxelles, 81(2):115–118, 1967. R. L. Sites. Proving that Computer Programs Terminate Cleanly. PhD thesis, Computer Science Department, Stanford University, 1974. Printed as STAN-CS-74-418. Elizabeth Scott, Adrian Johnstone, and J. Thomas van Binsbergen. Derivation representation using binary subtree sets. Science of Computer Programming, 2019. Herbert A Simon and Allen Newell. Information Processing Language V on the IBM 650. IEEE Annals of the History of Computing, 8(1):47–49, 1986. L. Sterling and E. Shapiro. The Art of Prolog: Advanced Programming Techniques. MIT Press, 1986. T. B. Steel, editor. Formal Language Description Languages for Computer Programming. North-Holland, 1966. G. Schellhorn, B. Tofan, G. Ernst, and W. Reif. Interleaved programs and relyguarantee reasoning with ITL. In Temporal Representation and Reasoning (TIME), 2011 Eighteenth International Symposium on, pages 99–106, 2011. Joseph E. Stoy. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory. MIT Press, Cambridge, MA, USA, 1977.
References [Str66] [Str67]
223
C. Strachey. Towards a formal semantics. In Steel [Ste66]. Christopher Strachey. Fundamental concepts in programming languages. Notes from a series of lectures given at the Summer School in Computer Programming held in Copenhagen in August 1967, 1967. [Str73] C. Strachey. The varieties of programming language. Technical Monograph PRG-10, Oxford University Computing Lab, 3 1973. Study Group XI. CHILL Language Definition. Technical report, C.C.I.T.T. Period [Stu80] 1977–1980, 5 1980. + 13] Jaroslav Sevˇ ˇ ˇ c´ık, Viktor Vafeiadis, Francesco Zappa Nardelli, Suresh Jagannathan, [SVZN and Peter Sewell. Compcerttso: A verified compiler for relaxed-memory concurrency. Journal of the ACM (JACM), 60(3):22, 2013. [SW74] Christopher Strachey and Christopher Peter Wadsworth. Continuations: A mathematical semantics for handling jumps. Monograph PRG-11, Oxford University Computing Laboratory, Programming Research Group, 1 1974. [SW01] Davide Sangiorgi and David Walker. The π-Calculus: A Theory of Mobile Processes. Cambridge University Press, Cambridge, United Kingdom, 2001. Donald Robert Syme. Declarative theorem proving for operational semantics. PhD [Sym99] thesis, University of Cambridge, 1999. [Tur36] Alan M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42:230– 265, 1936. Correction published: ibid, 43:544–546, 1937. A. M. Turing. Checking a large routine. In Report of a Conference on High Speed [Tur49] Automatic Calculating Machines, pages 67–69. University Mathematical Laboratory, Cambridge, 6 1949. David A Turner. Miranda: A non-strict functional language with polymorphic types. [Tur85] In Conference on Functional Programming Languages and Computer Architecture, volume 201 of Lecture Notes in Computer Science, pages 1–16. Springer-Verlag, 1985. [Vaf07] Viktor Vafeiadis. Modular fine-grained concurrency verification. PhD thesis, University of Cambridge, 2007. [vdH19] Gauthier van den Hove. New Insights from Old Programs: The Structure of the First ALGOL 60 System. PhD thesis, University of Amsterdam, 2019. Rob van Glabbeek and Peter H¨ofner. Progress, fairness and justness in process alge[vGH15] bra. arXiv preprint of ACM Surveys article arXiv:1501.03268, 2015. [VP07] Viktor Vafeiadis and Matthew Parkinson. A marriage of rely/guarantee and separation logic. In Lu´ıs Caires and Vasco Vasconcelos, editors, CONCUR 2007 – Concurrency Theory, volume 4703 of Lecture Notes in Computer Science, pages 256–271. Springer-Verlag, 2007. [vW66a] Adriaan van Wijngaarden. Numerical analysis as an independent science. BIT Numerical Mathematics, 6(1):66–81, 1966. Adriaan van Wijngaarden. Recursive definition of syntax and semantics. In Steel [vW66b] [Ste66], pages 13–24. [vWMPK69] A. van Wijngaarden, B. J. Mailloux, J. E. L. Peck, and C. H. A. Koster. Report on the Algorithmic Language ALGOL 68. Mathematisch Centrum, Amsterdam, 10 1969. Second printing , MR 101. [vWSM+ 76] A. van Wijngaarden, M. Sintzoff, B. J. Mailloux, C. H. Lindsey, J. E. L. Peck, L. G. L. T. Meertens, C. H. A. Koster, and R. G. Fisker. Revised Report on the Algorithmic Language ALGOL 68. Mathematical Centre Tracts 50. Mathematisch Centrum, Amsterdam, 1976. Christopher P. Wadsworth. Notes on continuations. Unpublished, 7 1972. [Wad72] Kurt Walk. Minutes of the 4th meeting of IFIP WG 2.2 on Formal Language De[Wal69] scription Languages, 9 1969. Held in Colchester, Essex, England. Chaired by T. B. Steel.
224 [Wal91] [WC04]
[Wei75] [Wex81] [WH66] [Wir67] [Wir73] [Wir76] [Wir77] [Wir85] [Wol88] [WV01] [WW66] [Yon90] [Zem66]
References D. Walker. π-calculus semantics for object-oriented programming languages. In T. Ito and A. R. Meyer, editors, TACS’91, volume 526 of Lecture Notes in Computer Science, pages 532–547. Springer-Verlag, 1991. Jim Woodcock and Ana Cavalcanti. A tutorial introduction to designs in unifying theories of programming. In E.A. Boiten, J. Derrick, and G. Smith, editors, International Conference on Integrated Formal Methods, pages 40–66. Springer-Verlag, 2004. F. Weissenb¨ock. A formal interface specification. Technical Report TR 25.141, IBM Laboratory Vienna, 2 1975. Richard L. Wexelblat, editor. History of Programming Languages. Academic Press, 1981. N. Wirth and C.A.R. Hoare. A contribution to the development of ALGOL. Communications of the ACM, 9(6):413–432, 6 1966. N. Wirth. On certain basic concepts of programming languages. Technical Report CS 65, Computer Science Department, Stanford University, 5 1967. N. Wirth. Systematic Programming: An Introduction. Prentice Hall, 1973. N. Wirth. Algorithms + Data Structures = Programs. Prentice Hall, 1976. Niklaus Wirth. What can we do about the unnecessary diversity of notation for syntactic definitions? Commun. ACM, 20(11):822–823, 1977. Niklaus Wirth. From programming language design to computer construction. Commun. ACM, 28(2):159–164, 1985. M. I. Wolczko. Semantics of Object-Oriented Languages. PhD thesis, Department of Computer Science, University of Manchester, 3 1988. Also published as Technical Report UMCS-88-6-1. Gerhard Weikum and Gottfried Vossen. Transactional Information Systems: Theory, Algorithms, and the Practice of Concurrency Control and Recovery. Morgan Kaufmann Publishers Inc., 2001. Niklaus Wirth and Helmut Weber. Euler: a generalization of ALGOL and its formal definition: Part I. Communications of the ACM, 9(1):13–25, 1966. Akinori Yonezawa, editor. ABCL: An Object-Oriented Concurrent System. MIT Press, 1990. ISBN 0-262-24029-7. Heinz Zemanek. Semiotics and programming languages. Communications of the ACM, 9(3):5, 3 1966.
Index
Aarhus University, 49 ABC/L, 144 Abnormal Exit Model, 154, 155 Abnormal Termination, 84, 90, 154, 157, 160 Abstract Syntax, 19, 21, 25, 26, 28–33, 37, 38, 42, 45, 47, 53, 54, 56, 57, 62, 65, 66, 69, 70, 73, 89, 101, 113, 137–139, 141–143, 146, 159 Abstraction, 3, 4, 11, 13, 26, 27, 61, 69, 78 ACL-2, 115 ACM, 116 Action Semantics, 209 Active Decomposition, 111, 115 Actual Parameter, see Argument Aczel, P. H. G., 114 Ada, 49, 101, 157, 161 AI, 87, 208 Algebraic Semantics, 95 ALGOL, 7, 15, 48, 49, 83–85, 119, 131, 157 ALGOL 60, 4, 5, 9, 20, 22, 23, 30, 31, 45, 48, 60, 65, 74, 76, 77, 79, 80, 84, 85, 89, 92, 97, 99, 132, 155–157, 161, 208–210 ALGOL 68, 62, 92, 210 ALGOL W, 85, 210 Ambiguity, 22–24, 30 America, P., 132 Andrews, D., 87 APEX, 23 APL, 3, 59, 60 Argument, 35, 38, 39, 54, 60, 74–81, 91, 92, 96, 99, 100, 142 Array, 3, 6, 59–62, 68, 76–78, 80, 84, 87, 126 Assertion, 10, 15, 27, 56, 101–103, 106, 107, 113–115 Assertion Box, 113 Astarte, T. K., 100, 207
Asynchronous Communication Mechanisms, 126 Axiom, 104–108, 114, 115 Axiomatic Semantics, 10, 95, 101, 106, 112, 113, 115, 117, 207 B (Specification Language), 114 B-Tree, 78 Back, R. J., 107 Backus, J. W., 20 Baden bei Wien Working Conference, 115, 208, 210 BCS Computer Conservation Society, 158 Bekiˇc, H., 61, 92, 207 Big-Step Semantics, 46, 153 Bjørner, D., 161 Blikle, A, 107 Block, 24, 26, 52, 55, 60, 65–68, 71, 73, 74, 76, 79, 80, 83–87, 90, 131, 132, 154, 157 BNF, 20, 25, 32, 54, 159 Bootstrapping, 5 Bubble Memory, 132 C (Programming Language), 26, 32, 88 C++, 32 Call By Location, see Call By Reference Call By Name, 9, 77, 79, 80, 93 Call By Reference, 69, 75, 77–80, 99, 106 Call By Value, 75, 79, 80, 99 Call By Value/Return, 78, 80, 86, 93 Cambridge University Adams Prize, 100 Campbell-Kelly, M., 100 Cantor, 99 Capabilities, 88 CCS, 208 Challenge Context Dependancy, 54, 160
© Springer Nature Switzerland AG 2020 C. B. Jones, Understanding Programming Languages,
225
226 Delimiting a Language (Concrete Representation), 20, 159 Delimiting the Abstract Content of a Language, 26, 159 Modelling Concurrency, 120, 160 Modelling Exits, 154, 160 Modelling Sharing, 69, 160 Operational Semantics (Non-Determinism), 38, 159 Recording Semantics (Deterministic Languages), 33, 159 Chen, T. C., 132 CHILL, 161 Church, A., 96, 99 Class (OOL), 55, 56, 85, 131–135, 137, 140, 141, 143, 150, 151 Clean Termination, 36 CMU, 114 COBOL, 3, 48 Communications of the ACM, 4 Compiler, 5, 6, 8–11, 24, 26, 31, 32, 44, 49, 51, 53, 56, 59, 72, 79, 90, 96, 100, 101, 109, 113, 123 Computer Architecture, 3, 6, 209 Computer Arithmetic, 36, 114, 115, 210 Concrete Syntax, 19–21, 23–26, 28, 30–32, 40, 62, 65, 101, 107, 111, 113, 149 Concurrency, 4, 7, 38, 39, 44, 46, 48, 83–85, 88, 99, 113, 119–122, 124, 126, 128, 129, 131–134, 136, 138, 139, 142, 149, 151–153 Concurrent OOL, 21, 131, 132 Concurrent Separation Logic, 126 Configuration, 40, 46, 121–123, 132, 135, 137 Context Condition, 51, 53, 56, 57, 62, 66, 73, 80, 91, 103, 106, 137–139, 142, 144, 150 Context Dependancy, 21, 23, 25, 36, 51–53 Continuation, 99, 156, 157 Contract, 107 COOL, 19, 21, 39, 84, 132–139, 141, 149–151, 154 Coq, 115 Cost of Programming Languages, 2, 6, 10, 57, 61 CPL, 62, 99 CSP, 128 Dafny, 115 Dahl, O.-J, 85 Dangling Else Problem, 24 Dangling Pointer, 88 Data Race, 85, 127, 131, 135, 136, 150 Data Refinement, see Data Reification Data Reification, 112, 115, 126
Index DBMS, 83, 129 Deadlock, 11, 120, 143, 150 Delegation, 142, 146, 149, 151 Denotational Semantics, 9, 14, 46, 49, 81, 95–100, 116, 128, 153, 155–157, 209, 210 Determinism, 138 Deterministic Language, 33, 41, 96 Dijkstra, E. W., 4, 5, 38, 47, 76, 111, 114, 115, 120, 153 Discrete Mathematics, 12 Disjoint Data, 119 Dispose Statement, 88 Domain Theory, 49, 99 Dynamic Error Detection, 36, 52, 53, 56, 58, 62, 106, 133, 146 Dynamic Semantics, 62 Dynamic Syntax, 23, 63 Effigy System, 114, 115 Eiffel, 83, 107, 115 Elgot, C. C., 48 Environment, 42, 48, 69, 71, 74, 75, 78, 81, 85, 86, 91, 98, 156, 157 Equivalence, 9, 10, 27, 49, 56, 72, 95, 113, 144, 152, 157 Esperanto, 7 Euler, 86 Event-B, 114 Example Programs, 21 Exception Handling, 48, 52, 93, 99, 133, 154 Exit Combinator, 157 Extended BNF, 20, 21, 159 Fairness, 46, 128, 151 Featherweight Java, 116 First-Class Object, 92, 157 First-Order Predicate Calculus, 13, 15 Fisher, M., 113 Fixed Point, 74, 98 Fixed-Point Induction, 98 Flowchart, 101–103, 114 Floyd, R. W., 101–103, 107, 114–116, 207 Formal Parameter, see Parameter FORTRAN, 3, 6, 11, 30, 31, 48, 51, 59, 60 Forward Assignment Rule, 114 Free-Storage Manager, 112 Function, 34, 35, 37–40, 42, 44, 47, 55, 73, 79, 89–93, 95–97, 100, 131, 137, 138, 154 Functional Language, 5, 15, 38 Future Call, 144, 146 Galler, B. A., 113 Game Semantics, 128
Index Garbage Collection, 4, 47, 85, 88, 126, 141, 151 GCC, 115 General Snapshot, 114 General-Purpose Computer, 1, 2 General-Purpose Language, 5 Generation of Strings, 20, 23 Go, 83, 142, 151 Goldstine, H. H., 114 GoTo Statement, 47, 92, 99, 153–157 Grand-State Description, 154 Grand-State Semantics, 48, 49, 116 Granularity, 120, 122, 123, 137, 140 Guarantee Condition, 124, 126 Guarded Commands, 38, 39 Halting Problem, 46 Hankin, C., 99 Haskell, 5, 38 Hayes, I. J., 47, 126, 128 Health Conditions, 114 Heap, 47, 87 Heap Variable, 69, 72, 87, 88 Heisenbug, 120 Henhapl, W., 87 High-Level Language, 1–3, 5, 6, 76, 129 Higher-Order Functions, 92 Hoare Triple, 103, 104, 106 Hoare, C. A. R., 4, 10, 22, 85, 101, 103, 107–109, 113–115, 124, 128, 207 Hoare-Style Logic, 15 HOL, 100 HOL-light, 115 Hopper, G. B. M., 5 Horning, J. J., 22 I/O, 3, 19, 57, 59, 68, 90, 150 IBM, 23, 48, 101, 114 IBM Lab Hursley, 48 IBM Lab Vienna, 32, 47, 49, 69, 81, 100, 113, 116, 207, 208, 210 IBM Yorktown Conference, 103 IBM 7xx, 87 Ichbiah, J., 161 Imperative Language, 5, 11, 15, 34, 36, 44, 47, 92, 96, 112 Inference Rule, 10, 40–42, 47, 52, 63, 104, 107, 114, 124 Initialisation, 36, 52, 106, 151 Input/Output, see I/O Interference, 119, 120, 123, 124, 128, 131, 137 Interpreter, 5, 8–10, 24, 26, 96 IPL-V, 4, 87 Isabelle, 100, 106, 115
227 ISO, 14 Izbicki, H., 56 Java, 2, 4, 11, 15, 24, 26, 30, 32, 83, 113, 129, 136, 151 Jones, C. B., 2, 100, 114, 123 Justness, 128 King, J., 114, 115 KIV, 115 Knuth, D. E., 153 Lambda Calculus, 74, 97, 100, 208, 209 Lambda Notation, 96, 98, 99 Landin, P. J., 25, 48, 98, 99, 207, 208 Lauer, P. E., 49, 107, 161, 208 Law of the Excluded Middle, 16 LCF, 100, 208 Lee, J.A.N., 48, 81 Left-Hand Value, 37, 70, 78, 80 Lisp, 4, 74, 87, 161 List, see Sequence List Processing, 4, 87, 88 Location, 69–71, 74, 78, 79, 84, 86, 154 Logic Language, 5, 15 Logic of Partial Functions, see LPF Loop Invariant, 103, 105 LPF, 14–17, 27, 34 Lucas, P., 81, 116, 208 Machine-Assisted Reasoning, 87 Manchester University, 100 Many-to-Many, 40 Many-to-One, 34, 39, 71 Map, 12, 15, 26, 34–37, 57, 66, 67, 69, 85, 95 Mathematical Semantics, see Denotational Semantics McCarthy, J., 2, 28, 32, 47, 48, 74, 87, 116, 208 Meaning Function, 96–98 Meinicke, L., 128 Meta-Language, 9–12, 15, 19, 20, 25, 28, 33, 36, 49, 51, 65, 83, 87, 100, 120, 132 Method, 85 Method Activation, 135, 141–143, 149 Meyer, B., 150 Micro-ALGOL, 47, 48 Milne, R. W., 100 Milner, A. J. R. G., 100, 128, 152, 208 Miranda, 5 ML, 208 Model-Oriented Semantics, 9, 10, 95, 106, 107 Modula, 86, 210 Modula-3, 25
228 Modula-II, 49, 160 Modular Operational Semantics, 209 Modular SOS, 47 Moggi, E., 156 Monads (Moggi), 156 Moore’s Law, 2, 119 Moore, J., 161 Morgan, C. C., 107, 109, 111 Morris, F. L., 158 Mosses, P. D., 46, 58, 156, 161 Natural Language, 1, 7, 9, 20, 23, 48 Natural Semantics, see Big-Step Semantics Naur, P., 20, 114 Nesting, 66, 67, 73, 76, 87 New Statement, 88 Newcastle University, 100 Newell, A., 87 Non-Determinism, 7, 33, 38–40, 42, 44–46, 49, 71, 90, 96, 99, 108, 109, 111, 112, 120, 121, 128, 137, 159 Non-Terminal, 20 Non-Termination, 37, 46, 47 Nygaard, K., 85 O’Hearn, P., 126 Oberon, 86, 210 Object (OOL), 4, 69, 85, 131–135, 137, 138, 140–144, 146, 147, 149–151 Object (VDM), 13, 26, 28–30, 47, 54, 69, 91, 98, 112, 113, 159 Object Language, 11, 12, 20, 33, 36, 38, 47, 49, 51, 120, 124, 159 Object-Oriented Language, 4, 77, 83–85, 88, 93, 120, 131–133, 135, 137, 151, 154 One-to-One, 71 Operational Semantics, 9–11, 14, 33, 38, 46–49, 69, 81, 95, 96, 98, 100, 101, 104, 116, 120, 121, 128, 153, 154, 159 Optimisation, 6 Own Variable, 83–85 Oxford Programming Research Group, 99 Oxford University, 32 Parameter, 60, 73, 75, 76, 78–80, 84, 93, 97, 149 Parameter Passing, 37, 60, 77 Parameter Passing Modes, 7, 65, 69, 70, 75–79 Parse Tree, 24 Parsing, 20, 22–26, 32, 63 Partial Correctness, 106 Pascal, 3, 24, 49, 61, 79, 81, 83, 85–88, 90, 116, 161, 210 Pattern Matching, 35, 36, 41
Index Peirce, C. S., 7 Penrose, R., 100 Peterson, G. L., 120 Petri Nets, 128 Pi-calculus, 208 PL/I, 3, 23, 24, 26, 28, 30, 48, 49, 57, 60, 61, 69, 78, 79, 81, 88, 100, 114, 157, 160, 161, 207, 208 Plotkin, G. D., 42, 49, 81, 128, 209 Polymorphism, 53 POOL, 132, 150, 151 Post Condition, 101, 104, 106–109, 111, 114, 115, 124–126 Pratt, V., 2 Pre Condition, 101, 104–109, 111, 115, 124, 126, 151 Predicate, 28, 39, 40, 54, 56, 104, 105, 107, 114, 115, 125 Predicate Transformer, 114 Priestley, M., 113 Procedural Language, 90 Procedure, 55, 56, 65, 66, 71, 73–77, 79, 84, 85, 89–93, 99, 131, 154 Process Algebras, 120, 123, 128 Prolog, 5, 15 Proof Rules, 13 Property-Oriented Semantics, 9–11, 95 R/G, 124–126, 128 Radin, G., 14 Ragged Array, 60 Railroad Diagram, 25, 159 Record, 3, 28, 30, 32, 56, 60, 61, 81, 85–87, 134 Redundant Variable Declaration, 31 Refinement Calculus, 109 Regions (PL/I), 88 Relation, 38, 40–42, 45–47, 70, 81, 96, 107, 109, 121, 122, 138, 140, 143, 155 Rely Condition, 124, 126 Rely/Guarantee Reasoning, see R/G Resumptions, 128 Return Statement, 89, 99, 154 Reverse Polish Notation, 24 Reynolds’ Rule, 123 Reynolds, J. C., 2, 123, 127, 158 RGSep, 127 Right-Hand Value, 37, 70, 79, 123 RISC, 1 SAGL, 127 SAL, 100, 156 Sammet, J. E., 3 Sangiorgi, D., 152
Index Satterthwaite, E., 115 Scala, 38 Scheme, 4, 15, 74 SCOOP, 150, 152 Scope, 16, 65, 68, 73, 85 Scott, D. S., 2, 96, 97, 99, 100, 209 SECD machine, 48 Selector, 28, 32, 36 Semantic Object, 42, 57–59, 61, 65, 71, 85, 132, 134, 135, 137, 144 Semaphore (Dijkstra), 4, 120 Semicolon Combinator, 156 Semicolons, 22 Semiotics, 7 Separation Logic, 127 Sequence, 12, 13, 15, 20–22, 24, 26, 27, 29, 30, 34, 55, 66, 67, 69, 75, 79, 91, 135 Set, 12, 13, 16, 20, 23, 25–27, 29, 30, 34, 39–41, 62, 69, 71, 88, 91, 98, 103, 113, 121, 126 Shared-Variable Concurrency, 119–121, 123, 128, 129, 160 Side Effect, 32, 38, 78, 79, 90, 92, 93 SIMD, 119 Simon, H. A., 87 Simula, 4, 85, 131 Sites, R., 115 Small-State Semantics, 49 Small-Step Semantics, 46, 91, 128, 139 Smalltalk, 4 SML, 160 SOS, 42, 45, 46, 49, 51, 52, 54, 56, 58, 67, 68, 70, 71, 78, 81, 100, 105, 106, 108, 116, 120–124, 134, 135, 137, 139, 140, 143, 144, 153, 160, 209 Source-to-Source, 6 SPARK-Ada, 11, 116 Sparse Array, 60 Specification Statement, 107, 109, 126 Stack, 72, 84, 87 Stack Variable, 67, 72, 76, 87, 88 Static Error Detection, 31, 51–54, 56, 62, 133 Static Semantics, 62 Stoy, J. E., 100, 209 Strachey, C. S., 23, 30, 37, 48, 70, 79, 96, 99, 100, 209 Strength Reduction, 6, 59 Structural Operational Semantics, 160 Structure, see Record Switch Variable, 157 Symbolic Execution, 114 Syntactic Sugar, 25
229 Tagged Variant Record, 86 Tasking, 48 Temporal Logics, 128 Terminal, 20, 23 Termination, 71, 106, 109, 111, 114, 117 Token, 24, 71 Tractability, 9, 10, 12, 120, 131, 132 Transitive Closure, 122 Translator, see Compiler True Concurrency, 128 Turing Award, 14 Turing Machine, 1, 2 Turing, A. M., 1, 46, 73, 103, 114, 115, 210 Twin-Machine Proof, 81 Two-Level Grammars, 62 Type Inference, 63 Typed Lambda Calculus, 99 ULD-II, 48 ULD-III, 48 Under-Determined Storage Mapping, 48 University of Oxford, 96, 100, 156 Unix, 59, 69 Untagged Varient Record, 86 Untyped Lambda Calculus, 97, 99 van den Hove, G., 5, 73 van Wijngaarden, A., 62, 114, 115, 158, 210 Variant Function, 111, 115 Variant Record, 85, 86 VDL, 48, 49, 81, 121, 154, 207, 208, 210 VDM, 12, 14, 15, 17, 19, 26–28, 34, 35, 39, 49, 56, 57, 66, 67, 81, 85, 87, 100, 101, 108, 111, 114, 115, 124, 159, 207, 208 VDM Toolset, 38 Vector, 59, 62, 78, 113 Virtual Machine, 5, 6, 88 Volap¨uk, 7 von Neumann Architecture, 79, 114 von Neumann, J., 113, 209 Wadsworth, C. P., 158 Walk, K., 61 Walker, D., 152 Well-Formed Object, 54–56, 63, 74, 75, 139 Wide-Spectrum Language, 109 Wirth, N. E., 20, 24, 25, 85, 86, 159, 210 Z (Specification Language), 114 Zemanek, H., 7, 47, 49, 210 | https://dokumen.pub/understanding-programming-languages-1nbsped-9783030592561-9783030592578.html | CC-MAIN-2022-40 | refinedweb | 75,224 | 51.78 |
raise. The program creates four stacks, one for each of the terms. It then interleaves the term computations by iteration, performing in turn one multiply from each. When all have performed five multiplies, the individual products are returned and totaled.
/* * This program demonstrates the use of regsave(), regrest() and restack(). * It is a demonstration program, and does nothing particularly useful. What * it does do is compute * * 5 5 5 5 * 4 + 12 + 3 + 9 = 309148 * * In the strangest possible way. * */ #include <iostream> #include <stdlib.h> #include <tswitch.h> using namespace std; /* This keeps track of my four computers (threads really) other than the main one. Each one has some space to hold the stack and a register save area. */ /* Size of each stack area. */ #define STSIZE 600 struct tstate { regbuf_t regs; // Register save area. unsigned stack[STSIZE]; // Stack space. } states[4]; /* Register save area for the main thread. */ regbuf_t mstate; // We create four copies of this beast in separate threads. We start each one, // then use regrest in the main to step each one through five iterations, // then exit. int raise(int i, int p) { int pwr = 1; // Get a new stack, save a state using the new stack, then // restore the mstate, which will bring us back to the main loop. // Whenever the main loop restores us again, we'll proceed from the // regsave() below. restack(states[i-1].stack, STSIZE, 2, 3); if(regsave(states[i-1].regs) == 0) regrest(mstate, i); // This goes until the boss has us end. while(1) { // One step in raising p to some power. pwr *= p; // Save our state, and switch back to main. If we are // restated with 2, we leave the loop and return. int n = regsave(states[i-1].regs); if(n == 0) regrest(mstate, i); if(n == 2) break; } return pwr; } int main() { // This will fire up four copies of raise(), each with its own stack. // To create each one, we save our state in the global mstate and call // the raise function. The function creates a new stack, then restores // mstate. This returns control our regsave call, which then does the // next one. Each raise restores mstate with its n value, so we'll // know when we've done the last one. Once each is started, we go on // to the for loop below. None of the raise() calls return until // released by that loop, so we won'd to the second half of this if // until then. int base[] = { 4, 12, 3, 9 }; volatile int n = 0; // The volatile should keep from using a register. int tot = 0; // Total returns here. int *totptr = &tot; // Point to tot in main stk from other stacks. if(regsave(mstate) < 4) { // Start a copy of raise on its own stack, then transfer back // to the regsave(). ++n; int pwr = raise(n, base[n-1]); //cout << "tot = " << tot << endl; // The raise() function doesn't return until we restore its // state with return code 2, which is done in the next loop. // When that happens, add into the sum, then resume the loop // below. *totptr += pwr; regrest(mstate, n); } // We now have four raise()s running on four stacks. Run each five // times to perform multiplies, then once more to exit and return. // The state then goes back to the start loop, were we use rerest() // to bring it back here. for(int ct = 1; ct <= 6; ct++) { // Runs each thread for one iteration. n = 0; if(regsave(mstate) < 4) regrest(states[n++].regs, ct == 6 ? 2 : 1); } cout << "Total is: " << tot << endl; } | http://sandbox.mc.edu/~bennet/thrd/tstest4_cc.html | CC-MAIN-2019-09 | refinedweb | 591 | 84.17 |
Next, install the following libraries using pip:
pip install nltk pip install sklearn
Then, create a list of list that represents the rows and columns of the CSV file:
import csv reviews = [row for row in csv.reader(open('reviews.csv'))] print reviews
This generates the following output:
[ ['Text', 'Sentiment', 'Topic'], ['Room safe did not work.', 'negative', 'Facilities'], ['Mattress very comfortable.', 'positive', 'Comfort'], ['No bathroom in room', 'negative', 'Facilities'], ... ]
Now, you’re ready to do some data cleaning. Keep in mind that this is a key step; you cannot build an accurate model with dirty data. With this in mind, we’ve defined a rule to use NLTK to filter out stopwords, remove non-alphabetic characters, and stem each word to its root:
import re import nltk # We need this dataset in order to use the tokenizer nltk.download('punkt') from nltk.tokenize import word_tokenize # Also download the list of stopwords to filter out nltk.download('stopwords') from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer stemmer = PorterStemmer() def process_text(text): # Make all the strings lowercase and remove non alphabetic characters text = re.sub('[^A-Za-z]', ' ', text.lower()) # Tokenize the text; this is, separate every sentence into a list of words # Since the text is already split into sentences you don't have to call sent_tokenize tokenized_text = word_tokenize(text) # Remove the stopwords and stem each word to its root clean_text = [ stemmer.stem(word) for word in tokenized_text if word not in stopwords.words('english') ] # Remember, this final output is a list of words return clean_text
Next, we use the process text function to process the data:
# Remove the first row, since it only has the labels reviews = reviews[1:] texts = [row[0] for row in reviews] topics = [row[2] for row in reviews] # Process the texts to so they are ready for training # But transform the list of words back to string format to feed it to sklearn texts = [" ".join(process_text(text)) for text in texts]
This process transformed the reviews into a nice list of stemmed words without any stopwords, that is, words that don’t provide value to a classification model (e.g. ‘the’, ‘is’, or ‘and’). Now, the hotel reviews looks like this:
['room extrem small practic bed', 'room safe work', 'mattress comfort', 'uncomfort thin mattress plastic cover rustl everi time move', 'bathroom room', ... ]
Now that we have cleaned the data, we can go ahead and train our classifier using scikit-learn. First, we need to transform the texts into something a machine learning algorithm can understand. So, we’ll be transforming the texts into numbers:
from sklearn.feature_extraction.text import CountVectorizer matrix = CountVectorizer(max_features=1000) vectors = matrix.fit_transform(texts).toarray()
Next, we’ll partition the data into two different groups: data that will be used for training the model (AKA training data) and data that will be used for understanding how accurate it is (AKA testing data):
from sklearn.model_selection import train_test_split vectors_train, vectors_test, topics_train, topics_test = train_test_split(vectors, topics)
Finally, the exciting part: training the machine learning model! We’ll train a Naive Bayes classifier that will be able to predict the topics of hotel reviews:
from sklearn.naive_bayes import GaussianNB classifier = GaussianNB() classifier.fit(vectors_train, topics_train)
And voilà! You have trained a text classifier with machine learning! But how accurate is this model? Let’s check by using the testing data to obtain performance metrics:
# Predict with the testing set topics_pred = classifier.predict(vectors_test) # ...and measure the accuracy of the results from sklearn.metrics import classification_report print(classification_report(topics_test, topics_pred))
This outputs the precision, recall, and F1-score for the different categories of the classifier:
Besides the performance metrics, this output shows the number of training samples used for training each category (also known as support). Taking into account the small size of the training dataset, the accuracy of the classifier is pretty good!
Keep in mind that training a machine learning model is an iterative process. From here, you should experiment and tweak the model to get better results. For example, you can add more training data so the algorithm has more information to learn from. Then, you can experiment with the number of maximum features to find the optimal setting for this particular model. You can eventually try another machine learning algorithm (such as SVM) to see if this improves the performance metrics.
This may be a simple classifier, but you have completed all the necessary steps for training a machine learning model from scratch, that is, cleaning and processing data, training a model, and testing its performance.
If you want to keep practicing your skills, you can follow the same step-by-step process with the same dataset to train a classifier for sentiment analysis. Instead of using topics to tag each review, use sentiment categories to train your model.
Topic Modeling in Python
Now, it’s time to build a model for topic modeling! We’ll be using the preprocessed data from the previous tutorial. Our weapon of choice this time around is Gensim, a simple library that’s perfect for getting started with topic modeling.
So, as a first step, let’s install Gensim in our local environment:
pip install gensim
Now, pick up from halfway through the classifier tutorial where we turned reviews into a list of stemmed words without any connectors:
['room extrem small practic bed', 'room safe work', 'mattress comfort', 'uncomfort thin mattress plastic cover rustl everi time move', 'bathroom room', ... ]
With this list of words, we can use Gensim to create a dictionary using the bag of words model:
from gensim import corpora texts = [process_text(text) for text in texts] dictionary = corpora.Dictionary(texts) corpus = [dictionary.doc2bow(text) for text in texts]
Next, we can use this dictionary to train an LDA model. We’ll instruct Gensim to find three topics (clusters) in the data:
from gensim import models model = models.ldamodel.LdaModel(corpus, num_topics=3, id2word=dictionary, passes=15) topics = model.print_topics(num_words=3) for topic in topics: print(topic)
And that’s it! We have trained our first model for topic modeling! The code will return some words that represent the three topics:
0, '0.034*"room" + 0.021*"bathroom" + 0.013*"night"') (1, '0.056*"bed" + 0.043*"room" + 0.024*"comfort"') (2, '0.059*"room" + 0.033*"clean" + 0.023*"shower"')
For each topic cluster, we can see how the LDA algorithm surfaces words that look a lot like keywords for our original topics (Facilities, Comfort, and Cleanliness).
Since this is an example with just a few training samples we can’t really understand the data, but we’ve illustrated the basics of how to do topic modeling using Gensim.
Topic Modeling in R
If you want to do topic modeling in R, we suggest checking out the Tidy Topic Modeling tutorial for the topicmodels package. It's straightforward to follow, and it explains the basics for doing topic modeling using R.
If you’re not familiar with the programming languages we mentioned above, or you don’t have the resources to implement these models, fear not. With MonkeyLearn, building topic classification models is easy, fast, and affordable (in fact, so affordable you can get started for free!).
To get started, sign up for free and follow the steps below to discover how machine learning models can simplify your topic sorting tasks.
1. Create a new classifier
Go to the dashboard, click on ‘create a model’ and choose your model type, in this case a classifier: | https://monkeylearn.com/blog/introduction-to-topic-modeling/ | CC-MAIN-2022-27 | refinedweb | 1,247 | 53 |
Your country is at war and your enemies are using a secret code to communicate with each other. you have managed to intercept a message that reads as follows:
:mmZ\dxZmx]Zpgy
The message is obviously encrypted using the enemy's secret code. You have just learned that their encryption method is based on the ASCII code. Individual characters in a string are encoded using this system. (ex: the letter "A" is encoded using the number 65.)
Your enemy's secret code takes each letter of the message and encrypts it as follows:
if (OriginalChar + key > 126) then EncryptedChar = 32 + ((OriginalChar + key) - 127) else EncryptedChar = (OriginalChar + key)
for example, if the enemy uses key = 10 then the message "Hey" would be encrypted as
H = 72
e = 101
y = 121
Encrypted H = (72 + 10) = 82 = R in ASCII
..
..
you get "Hey" = "Ro$"
I have to write a program that decrypts the intercepted message. I know that they key used is a number between 1 and 100 and the program has to decode the message using all 100 keys.
I found out the correct key is 88 and it says "Attack At Dawn!" but I have to display all of them anyway. I've attached a printscreen of what the run program looks like.
this is what I have so far, and it may not make much sense because I am really confused:
#include <iostream> #include <string> using namespace std; int main() { char EncryptedChar[] = ":mmZ\\dxZmx]Zpgy"; int key; char OriginalChar; if (OriginalChar + key > 126) EncryptedChar = 32 + ((OriginalChar + key) - 127); else EncryptedChar = (OriginalChar + key); for (key = 0; key <= 100, key++) { cout << "Key: " << key << " Decoded Message: " << EncryptedChar; } system("Pause"); return 0; } | https://www.daniweb.com/programming/software-development/threads/174874/project-using-strings-help | CC-MAIN-2022-33 | refinedweb | 278 | 57.1 |
On Sun, 23 Nov 1997, Roy T. Fielding wrote:
> >LocationMatch/Location do collapse double slashes, but I consider this to
> >be a bug. They are documented to work in the URI space, not in the
> >filespace.
>
> Yep, that's a bug. Dean's analysis matches what I would have said.
> >RFC1738, RFC1808, and Roy's new draft appear silent on the issue.
>
> "/" never equals "//". The only reason we collapse them for matches
> against Directory sections is security within the filesystem mapping.
> If the string is modified, the result should be a redirect or rejection.
> A "//" is meaningful for all resource namespaces not aligned with the
> filesystem, and that's the case for what mod_rewrite is doing.
> '?' :-)
Dw. | http://mail-archives.apache.org/mod_mbox/httpd-dev/199711.mbox/%3CPine.GSO.3.96.971124092846.1197E-100000@elect6.jrc.it%3E | CC-MAIN-2018-43 | refinedweb | 117 | 75.81 |
About me
Follow @sdorman
Scott Guthrie has a great blog post about some really cool editor improvements in Visual Studio 2008. As Scott points out, one of the big annoyances with VS2005 as the fact that the intellisense window obscures any code behind it. In VS2008, if you hold down the "Ctrl" while the Intellisense window is visible it will switch to a semi-transparent mode that allows you to see the code beneath it. When you release the "Ctrl" key, it switches back to normal.
There are a lot of improvements for VB developers, but one of the improvements for C# is the "Organize Usings" context menu.
This allows you to better organizing your "using" statements, including the ability to sort them alphabetically (a pet peeve of both Scott and myself) and to remove any un-necessary declarations (another pet peeve of mine). It does this by analyzing the types used in the code file and automatically removes namespaces that are declared but not needed to support them. | http://geekswithblogs.net/sdorman/archive/2007/08/08/Visual-Studio-2008-Code-Editor-Improvements.aspx | CC-MAIN-2018-22 | refinedweb | 169 | 55.68 |
Boxing and unboxing in C#
What is Boxing and unboxing in c# ?
- Boxing.
Sample Code
using System; class Test { static void Main() { Console.WriteLine(3.ToString()); } }
click below button to copy the code. By - c# tutorial - team
- calls the object-defined ToString method on an integer literal.
Sample Code
class Test { static void Main() { int i = 1; object o = i; // boxing int j = (int) o; // unboxing } }
click below button to copy the code. By - c# tutorial - team
- An int value can be converted to object and back again.
Boxing conversions
-:
Sample Code
class vBox { G value; G_Box(G g) { value = g; } }
click below button to copy the code. By - c# tutorial - team
- Boxing of a value v of type G now consists of executing the expression new G_Box(v), and returning the resulting instance as a value of type object.
- Thus, the statements
int i = 12; object box = i;
click below button to copy the code. By - c# tutorial - team
conceptually correspond to
int i = 12; object box = new int_Box(i);
click below button to copy the code. By - c# tutorial - team
-,
int i = 12; object box = i; if (box is int) { Console.Write("Box contains an int"); }
click below button to copy the code. By - c# tutorial - team; } }
click below button to copy the code. By - c# tutorial - team
the following statements
Point p = new Point(10, 10); object box = p; p.x = 20; Console.Write(((Point)box).x);
click below button to copy the code. By - c# tutorial - team
-.
Unboxing conversions
-.
Thus, the statements
object box = 12; int i = (int)box;
click below button to copy the code. By - c# tutorial - team
conceptually correspond to
object box = new int_Box(12); int i = ((int_Box)box).value;
click below button to copy the code. By - c# tutorial - team
-. | https://www.wikitechy.com/tutorials/csharp/boxing-and-unboxing-in-csharp | CC-MAIN-2019-43 | refinedweb | 298 | 66.44 |
Abstract Syntax Tree is a very strong features in Python. Python AST module allows us to interact with Python code itself and modify it.
Table of Contents
Python AST Module
With the Python AST module, we can do a lot of things like modifying Python code and inspect it. The code can be parsed and modified before it is compiled to
bytecode form. It is important to understand that each Abstract Syntax Tree represents each element in our Python code as an object. We will understand this in detail in the coming sections. Let’s try the real code.
Modes for Code Compilation
As we mentioned mode in the last script above, there are three modes in which Python code can be compiled. They are:
- exec: We can execute normal Python code using this mode.
- eval: To evaluate Python’s expressions, this mode will return the result fo the expression after evaluation.
- single: This mode works just like Python shell which execute one statement at a time.
Executing code
We can use AST module to execute Python code. Here is a sample program:
import ast code = ast.parse("print('Hello world!')") print(code) exec(compile(code, filename="", mode="exec"))
Let’s see the output for this program:
As mentioned above, we used
exec mode here.
Evaluating Python Expression
Based on the second mode we mentioned above, AST can be used to evaluate a Python expression and get the response of the expression. Let’s look at a code snippet:
import ast expression = '6 + 8' code = ast.parse(expression, mode='eval') print(eval(compile(code, '', mode='eval')))
Let’s see the output for this program:
It is also possible to see the AST which was formed for the above expression, just add this line with above script:
print(ast.dump(code))
This is what it gives:
Constructing multi-line ASTs
Till now, we made a single line ASTs and in the last example, we also saw how they look using the dump. Now, we will make a transform multi-line Python code to an AST. Here is a sample program:
import ast tree = ast.parse(''' fruits = ['grapes', 'mango'] name = 'peter' for fruit in fruits: print('{} likes {}'.format(name, fruit)) ''') print(ast.dump(tree))
Let’s see the output for this program:
We can visit each node by modifying the script:
import ast class NodeVisitor(ast.NodeVisitor): def visit_Str(self, tree_node): print('{}'.format(tree_node.s)) class NodeTransformer(ast.NodeTransformer): def visit_Str(self, tree_node): return ast.Str('String: ' + tree_node.s) tree_node = ast.parse(''' fruits = ['grapes', 'mango'] name = 'peter' for fruit in fruits: print('{} likes {}'.format(name, fruit)) ''') NodeTransformer().visit(tree_node) NodeVisitor().visit(tree_node)
Let’s see the output for this program:
The Visitor class we made above implement methods that are called for each AST nodes whereas with Transformer class, it first calls the corresponding method for node and finally replaces it with the return value of the method. We can execute the methods here by adding this line:
tree_node = ast.fix_missing_locations(tree_node) exec(compile(tree_node, '', 'exec'))
Now the output will be:
When to use Python AST Module?
Many automation testing tools, code coverage tools rely on the power of the Abstract Syntax Trees to parse the source code and find the possible flaws and errors in the code. Apart from this, ASTs are also used in:
- Making IDEs intelligent and making a feature everyone knows as intellisense.
- Tools like Pylint uses ASTs to perform static code analysis
- Custom Python interpreters
Conclusion
In this lesson, we studied the AST module which is used to evaluate and modify the Python’s code in your program.
I’d like to help with this doc. Fix typos etc. | https://www.journaldev.com/19243/python-ast-abstract-syntax-tree | CC-MAIN-2021-21 | refinedweb | 612 | 64.61 |
Artifact 7f3ea8c4686db8e40b0a0e7a8e0b00fac13aa7a3:
- File ext/userauth/sqlite3userauth.h — part of check-in [ffd61fb4] at 2017-02-25 20:57:46 on branch trunk — Add an 'extern "C"' block to header file sqlite3userauth.h. (user: dan size: 3591) [more...]
/* ** 2014-09-08 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* ** ** This file contains the application interface definitions for the ** user-authentication extension feature. ** ** To compile with the user-authentication feature, append this file to ** end of an SQLite amalgamation header file ("sqlite3.h"), then add ** the SQLITE_USER_AUTHENTICATION compile-time option. See the ** user-auth.txt file in the same source directory as this file for ** additional information. */ #ifdef SQLITE_USER_AUTHENTICATION #ifdef __cplusplus extern "C" { #endif /* ** If a database contains the SQLITE_USER table, then the ** sqlite3_user_authenticate() interface must be invoked with an ** appropriate username and password prior to enable read and write ** access to the database. ** ** Return SQLITE_OK on success or SQLITE_ERROR if the username/password ** combination is incorrect or unknown. ** ** If the SQLITE_USER table is not present in the database file, then ** this interface is a harmless no-op returnning SQLITE_OK. */ int sqlite3_user_authenticate( sqlite3 *db, /* The database connection */ const char *zUsername, /* Username */ const char *aPW, /* Password or credentials */ int nPW /* Number of bytes in aPW[] */ ); /* ** The sqlite3_user_add() interface can be used (by an admin user only) ** to create a new user. When called on a no-authentication-required ** database, this routine converts the database into an authentication- ** required database, automatically makes the added user an ** administrator, and logs in the current connection as that user. ** The sqlite3_user_add() interface only works for the "main" database, not ** for any ATTACH-ed databases. Any call to sqlite3_user_add() by a ** non-admin user results in an error. */ int sqlite3_user_add( sqlite3 *db, /* Database connection */ const char *zUsername, /* Username to be added */ const char *aPW, /* Password or credentials */ int nPW, /* Number of bytes in aPW[] */ int isAdmin /* True to give new user admin privilege */ ); /* ** The sqlite3_user_change() interface can be used to change a users ** login credentials or admin privilege. Any user can change their own ** login credentials. Only an admin user can change another users login ** credentials or admin privilege setting. No user may change their own ** admin privilege setting. */ int sqlite3_user_change( sqlite3 *db, /* Database connection */ const char *zUsername, /* Username to change */ const char *aPW, /* New password or credentials */ int nPW, /* Number of bytes in aPW[] */ int isAdmin /* Modified admin privilege for the user */ ); /* **. */ int sqlite3_user_delete( sqlite3 *db, /* Database connection */ const char *zUsername /* Username to remove */ ); #ifdef __cplusplus } /* end of the 'extern "C"' block */ #endif #endif /* SQLITE_USER_AUTHENTICATION */ | https://sqlite.org/src/artifact/7f3ea8c4686db8e4 | CC-MAIN-2020-16 | refinedweb | 454 | 54.02 |
I'm trying to stop rv from loading files with certain extensions. Just simply ignore them and do nothing. From what I can gather, not loading them is not possible. I'm instead trying to UNload them after they are loaded by calling commands.deleteNode() on a new-source event but this is not working (seg faults instead). Is there a way to do this?
Nick Burdan
For example, I'd like to ignore txt files if they are dragged into rv. Here I am trying to delete them after they are loaded, which causes seg fault:
class NewSourcePathMode(rvtypes.MinorMode):
def __init__(self):
rvtypes.MinorMode.__init__(self)
self.enabled = True
self.init("New Source",
None,
[("new-source", self.newSource, "New Source")],
None
)
def newSource(self, event):
contents = event.contents()
tokens = contents.split(';;')
node = tokens[0]
filepath = tokens[2]
if filepath.endswith('.txt'):
commands.deleteNode(node)
event.reject()
def createMode():
return NewSourcePathMode()
(Michael) Kessler
Hi Nick,
Thanks for writing in! I'm sorry you've been wrestling this, but I think I've come up with something that might work for you. You are indeed correct in looking for a way to rectify loading something that was intended to be loaded; there's not really a way I can see to prevent something from loading that was already requested to be loaded.
The events require a bit more use out of the nodes that are created immediately after the events are fired, so it is unsafe to modify them then, however immediately after source-group-complete you are free to remove the node. I do this with a 0ms timer which effectively places the removal event at the end of the event loop. This ensures that the events are complete. This is my implemention to do what you are asking about.
I hope this finds you well. Please note that there may be some things you want to add to this, but this should at least get you started.
-Kessler
from rv import commands, extra_commands, rvtypes
from rv.commands import NeutralMenuState
def groupMemberOfType(node, memberType):
for n in commands.nodesInGroup(node):
if commands.nodeType(n) == memberType:
return n
return None
class {PACKAGE_NAME}Mode(rvtypes.MinorMode):
def __init__(self):
rvtypes.MinorMode.__init__(self)
globalBindings = [
("source-group-complete", self.newSource, "New Source"),
]
localBindings = None
menu = None
self.init("{PACKAGE_NAME}", globalBindings, localBindings, menu, 'zz_source_setup', 0)
def filterSourcePaths(self, sourcePaths):
# Do your own logic for filtering source paths.
return [i for i in sourcePaths if not i.endswith('.txt')]
def newSource(self, event):
try:
args = event.contents().split(";;")
group = args[0]
fileSource = groupMemberOfType(group, "RVFileSource")
if fileSource:
mmp = "%s.media.movie" % fileSource
sourcePaths = commands.getStringProperty(mmp)
filteredSourcePaths = self.filterSourcePaths(sourcePaths)
if not filteredSourcePaths:
def deleteFileSource():
(_,outputs) = commands.nodeConnections(group)
# Remove all dependencies. See session manager for similar behavior.
for output in outputs:
(inputs, _) = commands.nodeConnections(output)
commands.setNodeInputs(output, [i for i in inputs if i != group])
commands.deleteNode(group)
# Delay this until after the event completes.
from PySide import QtCore
self._deleteTimer = QtCore.QTimer.singleShot(0, deleteFileSource)
else:
# If we have removed media paths from our source, then re-set the source path to remove the garbage.
# This allows garbage to be removed from second eyes or when trying to add audio.
if sourcePaths != filteredSourcePaths:
commands.setStringProperty(mmp, filteredSourcePaths, True)
# Ignore the event if the source isn't deleted, otherwise absorb it to prevent further processing.
event.reject()
except Exception as e:
print e
event.reject()
def createMode():
return {PACKAGE_NAME}Mode()
Nick Burdan
Thanks so much! That works! The console window still pops up with the ERROR: Open of ... failed: unsupported media type but the file itself is successfully removed from the sources. Is there is a way to close/dismiss the console window?
Nick Burdan
I suppose I can add this to the prefs to not open the console by default:
[Console]
showOn=4
I'm curious though if there is a way to close it with the api after it opens? | https://support.shotgunsoftware.com/hc/en-us/community/posts/360009449074-how-to-not-load-or-unload-files-with-certain-extensions?page=1 | CC-MAIN-2019-30 | refinedweb | 662 | 51.75 |
JSF REST GET-request troubles
Gregory Androsov
Greenhorn
Joined: Apr 24, 2013
Posts: 5
posted
Apr 24, 2013 09:00:35
0
Hello!
I'm newbie in
JSF
. Also I'm from Russia and my English is bad. So I hope you'll forgive me if I write stupidities.
I use richfaces 4.3 and Eclipse Juno as my enviroment. I'm trying to use get-post as I've red in book of David Herry.
So I've wrote been-class
@ManagedBean @SessionScoped public class NavigateGetBean implements Serializable{ private static final long serialVersionUID = 1L; private HashMap<String, JsfView> views; private JsfView workZone; public NavigateGetBean() { views = new HashMap<String, JsfView>(); views.put("1", new JsfView("work_zone", "/pages/views/mainView1.xhtml")); views.put("2", new JsfView("work_zone", "/pages/views/mainView2.xhtml")); workZone=views.get("1"); } public String getWorkZone() { return workZone.getWay(); } public void setWorkZone(String workZone) { this.workZone = views.get(workZone); } }
I've wrote xhtml View
<ui:composition <f:metadata> <f:viewParam </f:metadata> <h:link <f:param </h:link> <br/> <h:link<br/> <h:link </ui:composition>
I've added breakpoint on getWorkZone() and setWorkZone() methods. When my page opens this view, getter is called (I don't sure it is right, but it is so). But when I click link, setter is NOT called. So I don't know where I made mistake. And if I made it.
Tim Holloway
Saloon Keeper
Joined: Jun 25, 2001
Posts: 15502
13
I like...
posted
Apr 25, 2013 05:28:04
1
Здравствуйте, Gregory! Welcome to the JavaRanch! Here you will find many people who are bad at English. Many of them are American.
I think your biggest problem with JSF is the same problem that most people have when they first begin JSF.
JSF is based on the paradigm known as
Inversion of Control
, or IoC.
In "normal" programming, the programs go out and find things and do things to what they found. In IoC, the things come to the programs, instead.
For example, a JSF View (xhtml) should properly be a template for how the client's screen will be laid out. It should be declarations, therefore not contain parameters or logic, except for limited logic that manages the View format, such as the "rendered" attribute.
All of the logic and data to support the View should be in the Model (backing bean, which is Managed Bean). JSF does not have you code Controllers, since the Controllers are built into the JSF framework itself. The
FacesServlet
is the master controller.
The backing bean has 2 primary functionalities. 1) As a data repository. 2) As a collection of action methods. It is possible to have only 1 of these 2 functions.
As a data repository, the backing bean is a POJO JavaBean object. In this functionality, the property set/get methods are used by the IoC framework to store and retrieve property values. These methods should do little or no business logic, since they may be invoked an unpredictable number of times for a single HTTP request/response cycle and therefore side-effects or heavy computational requirements would be bad.
As a collection of action methods, the backing bean contains one or more methods which accept no parameters and return either a
String
(navigation destination URL) or null. The reason for no parameters is that all parameters are already internal properties of the backing bean and therefore do not need to be passed in. The JSF lifecycle ensures that the bean properties have already been set with the latest valid values from the View as it was submitted, because the JSF IoC has injected them via the bean's property "set" methods.
The other trap that people new to JSF are likely to fall into is excessive use of Listener methods and control binding. There is a lot of outdated documentation on the Internet with examples using these features. In common use, Listeners are comparatively rare, and binding much, much rarer. The time to use them is when a simpler POJO solution cannot work. Since JSF was designed to do as much as possible using POJO techniques, that should be the exception, not the rule.
Customer surveys are for companies who didn't pay proper attention to begin with.
Gregory Androsov
Greenhorn
Joined: Apr 24, 2013
Posts: 5
posted
Apr 26, 2013 07:59:14
0
Thank you, Tim!
A lot of useful information, but I asked another thing. Or I don't understand you fully.
So problem was that setter wasn't called, but it was made.
So I've resolved it by replayceing metadate-block from View to main.xhtml like so
<?xml version="1.0" encoding="UTF-8"?> <!--main.xhtml--> <ui:composition <f:metadata> <f:viewParam </f:metadata> <ui:define <ui:include </ui:define> <ui:define <ui:include </ui:define> </ui:composition>
But, unfortunally, it works not so I wanted. It refreshes all page not using ajax. So I'm searching for another solving :)
Consider Paul's
rocket mass heater
.
subject: JSF REST GET-request troubles
Similar Threads
JSF navigation problems.
custom converter usage
JEE example with EJB, JSF is not working
Bean value not maintained in RequestScoped
Passing parameters with h: link
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/610249/JSF/java/JSF-REST-request-troubles | CC-MAIN-2014-10 | refinedweb | 882 | 66.03 |
Introduction
In the first part of this lecture, I introduce the structure of memory chunk and the internal implementation of memory allocation in ptmalloc. In this part, I will continue the remaining part in ptmalloc. First, I will give a introduction on deallocation and reallocation procedure in ptmalloc. Then I will introduce the security checks in ptmalloc and their intentions.
Ptmalloc Deallocation
Function __libc_free
Similar to ptmalloc allocation, function _int_free is the internal implementation of pamalloc deallocation and is invoked via the wrapper function __libc_free. Different from __libc_malloc, there are some extra steps to do before calling _int_free.
void __libc_free (void *mem) { mstate ar_ptr; mchunkptr p; /* chunk corresponding to mem */; } ar_ptr = arena_for_chunk (p); _int_free (ar_ptr, p, 0); } libc_hidden_def (__libc_free)
Function __libc_free will first check if the current chunk is directly allocate via mmap. If so, the deallocator will unmap the chunk directly without call _libc_free.
Then it will check if the current chunk belongs to the main_arena. If no, the deallocator will retrieve the arena pointer first. To get the arena pointer, the chunk pointer will be ANDed with 0xfff00000 on X86 system, while on X64 system the chunk pointer will be ANDed with 0xfffffffffc000000.
Function unlink
Unlink function is a very important function in heap chunk management. As demonstrated in part one, all freed chunks except for fastbin chunk are maintained via doubly linked list. Function unlink(P, BK, FD) will remove the chunk P from its current doubly linked list.
Free internal
In general, the procedure can be divided into three steps. In the first step, if the freed chunk belongs to fastbin chunk, the deallocator will insert the chunk into fastbin chunk. In the second step, if size of the freed chunk is larger than fast bin size, the deallocator will process the chunk first and insert the processed chunks into corresponding chunk. The third step is an extra step, if size of the freed chunk is larger than a threshold (0x10000), the deallocator will try to consolidate the freed chunk as much as possible. More details have been given in the picture below and I will introduce the workflow of each step combined with the source code of libc.
Fastbin Chunk Deallocation
/* We might not have a lock at this point and concurrent modifications of system_mem might have let to a false positive. Redo the test after getting the lock. */ if (chunk_at_offset (p, size)->size <= 2 * SIZE_SZ || chunksize (chunk_at_offset (p, size)) >= av->system_mem;) { errstr = "free(): invalid next size (fast)"; goto errout; } if (! have_lock) { (void)mutex_unlock(&av->mutex); locked = 0; } free_perturb (chunk2mem(p), size - 2 * SIZE_SZ); set_fastchunks(av); unsigned int idx = fastbin_index(size); fb = &fastbin (av, idx); /* Atomically link P to its fastbin: P->FD = *FB; *FB = P; */ mchunkptr old = *fb, old2; unsigned int old_idx = ~0u; do { /* Check that the top of the bin is not the record we are going to add (i.e., double free). */ if (__builtin_expect (old == p, 0)) { errstr = "double free or corruption (fasttop)"; goto errout; } /* Check that size of fastbin chunk at the top is the same as size of the chunk that we are adding. We can dereference OLD only if we have the lock, otherwise it might have already been deallocated. See use of OLD_IDX below for the actual check. */ if (have_lock && old != NULL) old_idx = fastbin_index(chunksize(old)); p->fd = old2 = old; } while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2)) != old2);
The deallocator will first check the validity of size of the freed chunk and the first chunk (old) in the fastbin. If the check passes, the deallocator will insert the freed chunk into the fastbin as the new first chunk and set the FD pointer to point to old.
Smallbin Chunk Deallocation
nextchunk = chunk_at_offset(p, size); /* Lightweight tests: check whether the block is already the top block. */ if (__glibc_unlikely (p == av->top)) { errstr = "double free or corruption (top)"; goto errout; } /* Or whether the next chunk is beyond the boundaries of the arena. */ if (__builtin_expect (contiguous (av) && (char *) nextchunk>= ((char *) av->top + chunksize(av->top)), 0)) { errstr = "double free or corruption (out)"; goto errout; } /* Or whether the block is actually not marked used. */ if (__glibc_unlikely (!prev_inuse(nextchunk))) { errstr = "double free or corruption (!prev)"; goto errout; } nextsize = chunksize(nextchunk); if (__builtin_expect (nextchunk->size <= 2 * SIZE_SZ, 0) || __builtin_expect (nextsize >= av->system_mem, 0)) { errstr = "free(): invalid next size (normal)"; goto errout; } free_perturb (chunk2mem(p), size - 2 * SIZE_SZ); /*); /* consolidate forward */ if (!nextinuse) { unlink(nextchunk, bck, fwd); size += nextsize; } else clear_inuse_bit_at_offset(nextchunk, 0); /*)) { errstr = "free(): corrupted unsorted chunks"; goto errout; } p->fd = fwd; p->bk = bck; if (!in_smallbin_range(size)) { p->fd_nextsize = NULL; p->bk_nextsize = NULL; } bck->fd = p; fwd->bk = p; set_head(p, size | PREV_INUSE); set_foot(p, size); check_free_chunk(av, p); } else { size += nextsize; set_head(p, size | PREV_INUSE); av->top = p; check_chunk(av, p); }
The deallocation on a smallbin chunk works as following:
(1) If the adjacent previous chunk is freed, merge the current chunk with the previous chunk. Unlink the previous chunk and set the newly merged chunk as current chunk.
(2) If the next adjacent chunk is top chunk, merge the current chunk into top chunk. Otherwise go to step (3).
(3) If the next adjacent chunk is freed, merge the current chunk with the next adjacent chunk. Unlink the next adjacent chunk and set the newly merged chunk as current chunk. If the next adjacent chunk is not freed, set the P bit of next adjacent chunk to 0.
(4) Insert the current chunk into the unsorted bin. Set header and footer accordingly.
Reduce Fragmented Chunks
if ((unsigned long)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) { if (have_fastchunks(av)) malloc_consolidate(av); if (av == &main_arena) { #ifndef MORECORE_CANNOT_TRIM if ((unsigned long)(chunksize(av->top)) >= (unsigned long)(mp_.trim_threshold)) systrim(mp_.top_pad, av); #endif } else { /* Always try heap_trim(), even if the top chunk is not large, because the corresponding heap might go away. */ heap_info *heap = heap_for_ptr(top(av)); assert(heap->ar_ptr == av); heap_trim(heap, mp_.top_pad); } }
If the size of freed chunk exceeds FASTBIN_CONSOLIDATION_THRESHOLD, the deallocator will merge the fastbin chunks as much as possible. According to the type of mstate, trim the heap respectively.
Ptmalloc Reallocation
Function realloc will try to resize the target chunk and return a new chunk pointer of the requested size. The first step of reallocation extracts the desired chunk from heap. The second step will do the post-processing on the remaining part.
Realloc Internal
The workflow of _int_realloc is shown below.
Resize chunk
if ((unsigned long) (oldsize) >= (unsigned long) (nb)) { /* already big enough; split below */ newp = oldp; newsize = oldsize; } else { /* Try to expand forward into top */ if (next == av->top && (unsigned long) (newsize = oldsize + nextsize) >= (unsigned long) (nb + MINSIZE)) { set_head_size (oldp, nb | (av != &main_arena ? NON_MAIN_ARENA : 0)); av->top = chunk_at_offset (oldp, nb); set_head (av->top, (newsize - nb) | PREV_INUSE); check_inuse_chunk (av, oldp); return chunk2mem (oldp); } /* Try to expand forward into next chunk; split off remainder below */ else if (next != av->top && !inuse (next) && (unsigned long) (newsize = oldsize + nextsize) >= (unsigned long) (nb)) { newp = oldp; unlink (next, bck, fwd); } /* allocate, copy, free */ else { newmem = _int_malloc (av, { /*code of copying data*/ _int_free (av, oldp, 1); check_inuse_chunk (av, newp); return chunk2mem (newp); } } }
(1) If the requested size is smaller than the current size, go to post-processing.
(2) If the next chunk is top chunk, split one chunk from the top chunk and merge the current chunk with the new chunk and return.
(3) In the situation when the next adjacent chunk is not top chunk, the next adjacent chunk is freed and the addition of size of the current chunk and next adjacent chunk is larger than the requested size. Unlink the next chunk, merge current chunk and next chunk, set the merged chunk as current chunk and go to post-processing. Otherwise go to step 4.
(4) Allocate a new chunk of requested size. If the new chunk happens to be the next adjacent chunk of current chunk, merge the current chunk with the new chunk and go to post-processing. Otherwise, free the current chunk and return the newly allocated chunk.
Post-processing
remainder_size = newsize - nb; if (remainder_size < MINSIZE) /* not enough extra to split off */ { set_head_size (newp, newsize | (av != &main_arena ? NON_MAIN_ARENA : 0)); set_inuse_bit_at_offset (newp, newsize); } else /* split remainder */ { remainder = chunk_at_offset (newp, nb); set_head_size (newp, nb | (av != &main_arena ? NON_MAIN_ARENA : 0)); set_head (remainder, remainder_size | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0)); /* Mark remainder as inuse so free() won't complain */ set_inuse_bit_at_offset (remainder, remainder_size); _int_free (av, remainder, 1); } check_inuse_chunk (av, newp); return chunk2mem (newp);
(1) If the size of remainder chunk is smaller than the smallest size, do not split the chunk and return the whole chunk as return value.
(2) Otherwise, split the current chunk into a chunk of requested size and a chunk of remainder size. Free the remainder chunk and return the chunk of requested size.
Security Checks in Ptmalloc
- Now I will introduce the security checks in ptmalloc covering unlink, _int_malloc and _int_free.
Security checks in unlink
Check if the chunk (smallbin chunk) to unlink belongs to an doubly linked list.
if (__builtin_expect (FD->bk != P || BK->fd != P, 0)) malloc_printerr (check_action, "corrupted double-linked list", P, AV);
If the chunk to unlink is a largebin, take the following check if the previous check pass. The following code checks if the largebin belongs to another doubly linked list.
if (__builtin_expect (P->fd_nextsize->bk_nextsize != P, 0) || __builtin_expect (P->bk_nextsize->fd_nextsize != P, 0)) malloc_printerr (check_action,"corrupted double-linked list (not small)", P, AV);
Security checks in _int_malloc
1. The victim chunk to alloc is placed in the fastbin linked list. If the size victim chunk does not fit the fastbin index, it means that metadata of the victim chunk has been corrupted.
if (__builtin_expect (fastbin_index (chunksize (victim)) != idx, 0)) { errstr = "malloc(): memory corruption (fast)"; errout: malloc_printerr (check_action, errstr, chunk2mem (victim), av); return NULL; }
- Check if the backward chunk and victim chunk belong to the same doubly linked list. If the forward pointer of the backward chunk does not point to the victim chunk, it means that the smallbin double linked list is corrupted.
if (__glibc_unlikely (bck->fd != victim)) { errstr = "malloc(): smallbin double linked list corrupted"; goto errout; }
When processing the unsorted chunks, it will first check the metadata of unsorted chunk is legal. If the size of victim chunk is too small or too large, the check fails.
if (__builtin_expect (victim->size <= 2 * SIZE_SZ, 0) || __builtin_expect (victim->size > av->system_mem, 0)) malloc_printerr (check_action, "malloc(): memory corruption", chunk2mem (victim));
When inserting a remainder chunk splitted from large chunk, it will check if the first chunk in unsorted bin list points to the head of unsorted bin list.
if (__glibc_unlikely (fwd->bk != bck)) { errstr = "malloc(): corrupted unsorted chunks"; goto errout; } if (__glibc_unlikely (fwd->bk != bck)) { errstr = "malloc(): corrupted unsorted chunks 2"; goto errout; }
Security checks in _int_free
1. Check if the chunk to free is located at an valid address: (1) The chunk does exceed the bottom of address space. (2) The last 4 (on X64) or 3 (on X86) bits are all zero.
if (__builtin_expect ((uintptr_t) p > (uintptr_t) -size, 0) || __builtin_expect (misaligned_chunk (p), 0)) { errstr = "free(): invalid pointer"; errout: if (!have_lock && locked) (void) mutex_unlock (&av->mutex); malloc_printerr (check_action, errstr, chunk2mem (p), av); return; }
- Check if the size of the chunk is valid: (1) larger than minimal space. (2) The last 4 (on X64) or 3 (on X86) bits are not all zero.
if (__glibc_unlikely (size < MINSIZE || !aligned_OK (size))) { errstr = "free(): invalid size"; goto errout; }
Check if the size of chunk to insert into fastbin is valid.
if (chunk_at_offset (p, size)->size <= 2 * SIZE_SZ || chunksize (chunk_at_offset (p, size)) >= av->system_mem) { errstr = "free(): invalid next size (fast)"; goto errout; }
Check the first chunk in fastbin is not the current chunk that is to be inserted.
if (__builtin_expect (old == p, 0)) { errstr = "double free or corruption (fasttop)"; goto errout; }
Check the size of first chunk in fastbin is the same as the size of the current chunk that is to be inserted.
if (have_lock && old != NULL && __builtin_expect (old_idx != idx, 0)) { errstr = "invalid fastbin entry (free)"; goto errout; }
Check the chunk to be freed is not the top chunk.
if (__glibc_unlikely (p == av->top)) { errstr = "double free or corruption (top)"; goto errout; }
Check the next adjacent chunk is not exceeding the size of current heap.
if (__builtin_expect (contiguous (av) && (char *) nextchunk >= ((char *) av->top + chunksize(av->top)), 0)) { errstr = "double free or corruption (out)"; goto errout; }
Check the P bit of next adjacent chunk is set.
if (__glibc_unlikely (!prev_inuse(nextchunk))) { errstr = "double free or corruption (!prev)"; goto errout; }
Check the size of next adjacent chunk is valid.
if (__builtin_expect (nextchunk->size <= 2 * SIZE_SZ, 0) || __builtin_expect (nextsize >= av->system_mem, 0)) { errstr = "free(): invalid next size (normal)"; goto errout; }
When inserting the chunk into the unsorted bin, check if the backward pointer of first chunk in unsorted bin is pointing to the head of unsorted bin.
if (__glibc_unlikely (fwd->bk != bck)) { errstr = "free(): corrupted unsorted chunks"; goto errout; }
Conclusion
After this part of my tutorial, I recommend to read the Summary section in previous one again. I believe you will find more details hidden there. So far, I have finished the tutorial on ptmalloc memory management. I will move to the exploitation part next. | https://dangokyo.me/2017/12/12/introduction-on-ptmalloc-part-2/ | CC-MAIN-2019-30 | refinedweb | 2,176 | 61.06 |
21 April 2011 11:31 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The plant will be shut for five days, the source said.
Titan’s other facilities at the site includes a 480,000 tonne/year PP plant as well as its No 1 and 2 naphtha crackers, which have a combined capacity of 720,000 tonnes/year.
They were not affected by the disruption in electricity supply as they were running on the company’s power generator, he said.
The regional HDPE market is under pressure because of weak downstream demand. The resultant production loss at Titan's plant shutdown might ease the downward price pressure, regional traders said.
For more on PE | http://www.icis.com/Articles/2011/04/21/9454364/malaysias-titan-shuts-hdpelldpe-swing-plant-on-power.html | CC-MAIN-2014-41 | refinedweb | 114 | 64.81 |
go to bug id or search bugs for
Description:
------------
I suggest adding a function htmlentities_decode() as a replacement for html_entity_decode() and possibly deprecate that one.
It is really misleading and unintuitive because there are functions htmlspecialchars() and htmlspecialchars_decode() doing similar thing.
Add a Patch
Add a Pull Request
php functions uses a lot of different syntax
isset
is_array
isPublic
but aliasing is evil and renaming is not appreciated by users... the best thing you can do is implement your renamed function in your namespace
bye
Yes and that is what I think should change, because current naming conventions are really horrible. For instance, look at differences between str_replace, strlen, parse_str, htmlspecialchars. All work with same type but their names are completely different.
So, string functions should go to String namespace (String\replace()), array functions to Array namespace (Array\search()) and so on.
But unfortunately this will not happen because PHP does not like changes... Think about it.
We don't mind change, but our users really really don't like it when we break
their working applications for cosmetic reasons.
I feel that this is a big problem in PHP. It makes it super hard to remember
function names (especially for newcomers) with these inconsistencies and gives
PHP an ugly syntax reputation.
Please change all function names to:
words_separated_with_underscore()
and then alias the original functions to the new ones.
It may take a long time for everyone to change the functions in their
applications but you can keep them aliased for however long everyone needs.
This naming convention needs to become a standard in PHP at some point, why not
get the ball rolling now.
The core functions’ naming is one the most frowned upon "feature" of PHP and it
is well overdue for a refactor. Old frameworks and application are a pain to
convert, and it pretty pointless to do it for a cosmetic reason as rasmus pointed
out, but I think the core devs are underestimating how much the community wants
it done and how many people are willing to do their part.
Let’s face it:
• htmlentities/html_entity_decode
• str_replace/strtr
• current/array_pop
• array_push($array, $item)/array_search($item, $array)
I believe a very responsible roadmap would be to :
1. Create a PHP library that would essentially just wrap a function in another with consistent naming and arguments order.
2. Get some feedback of the community and work on the names. The guys at FIG would probably be a blessing on that.
3. Implement those using aliasing and a compiled extension.
4. Let it sit for a couple time while people get to know about it.
5. Merge extension into core. Real world application will begin to use it.
6. Drop the deprecated ones in a distant future.
I don't see why this can't be done.
Alias the functions to a single standard and depreciate the old ones.
In the next version of PHP, add a configuration toggle ALLOW_LEGACY_FUNCTIONS set to default false.
If ALLOW_LEGACY_FUNCTIONS is true, all the depreciated functions work as expected.
If ALLOW_LEGACY_FUNCTIONS is false, all the depreciated functions throw errors.
Keep the toggle in all future versions of PHP. Eventually applications using the legacy function names
will either run a search-and-replace or fall out of use. It wouldn't be too difficult to migrate if the
only change is a name change.
Excuse me rasmus but WHY NOT? It's completely normal evolution process. Let's deprecated all things that have inconsistent naming in PHP 5.6 to be able to just remove them in PHP 6.0 where breaking compatibility would be possible. It would be just great to have PHP 6.0 as PHP 5.x with consistent function naming convention, with removed all of deprecated stuff.
It seems silly for any developer to change certain function names even though it
is something in the back of there head. It comes down to, "if it isn't broke,
why
fix it?".
But for a community this large and people that are trying out PHP and learning
best practices, this needs to be done.
However, there needs to be a vote on the naming conventions that are used.
Perhaps following PSR-1 or PSR-2.
I think we should keep current look and
feel of this low level part of PHP,
functions. I don't believe PSR have
anything to do with that naming
conventions. It could end up with some
huge proposal about moving functions to
some namespaces which would be huge
change. I think it's just simple
renaming we have to do. A real proposal
about revolution in functions could be
IMO autoboxing but this is another
topic. And BTW that idea about providing
switches in php.int could make huge
mess. Let's deprecate and in major
release remove, no incompatibility with
same versions of PHP due to some magic
+1 to make function name and parameters order more consistent!
+1 for fixing naming conventions. It would be REALLY easy for most developers to
refactor. The only exception being those who created their own functions with
names that would be used by php core functions. That is if something was
coreFunction() and the user created core_function(/*some custom function with
almost the same name*/) and the new release changed to underscores.
It would be a burden on hosting companies to not break their users' software via
an upgrade though. Someone with an old application and no one maintaining it
could be in trouble if their host upgraded.
I agree for consistency. I've develop at PHP over 7 years and It's annoying that some time I should look up the proper function name or order (when editing without IDE).
By semver 2.0 we can do PHP ver. 6.0 with BC. So there projects can exist at 5.x. There is huge frameworks like Symphony and Zend Framework, Drupal and Joomla that was rewritten almost from scratch to utilize new features and architecture.
So GIVE people to choose which consistency to use CURRENT or NEW. After a one-two years It will appear needs for it or contraries. | https://bugs.php.net/bug.php?id=52424 | CC-MAIN-2018-17 | refinedweb | 1,024 | 65.62 |
I have a Mock Service running in SOAPUI and I want to get the message inside the SOAP Body for the Request and write it out to a file.
I have tried many methods but just can't seem to get the SOAP Body. I can get the Request which is the SOAP Envelope but get stuck here.
I am writing some groovy script in the OnRequest Script for the Mock Service, not sure if this is the right place.
Any help would be most appreciated.
Rich
Yes, I want the XML Payload (ApplicationOutcomeUpdate) in the SOAP Body...
<s:Envelope xmlns:s=""
<s:Body>
<ns0:ApplicationOutcomeUpdate xmlns:....
...and I have a MockService which accepts this Request from an Application and I want to save the XML Payload inside the SOAP Body to a file.
I have got the Request using...
def request = new XmlParser().parseText(mockRequest.requestContent)
I then try and use XmlHolder, XmlSlurper or XmlParser to extract the Xml I need inside the SOAP Body but all I seem to get is Null.
Regards
Stewart
Richie
Apologies for the delay.
What I want to get is the Request which contains the Soap Envelope inside is the Soap Body and inside this is the payload message Its the payload message inside the Soap Body I want to save off to a file.
I have tried the following...
def req = new XmlHolder(messageExchange.requestContent)
def req = new XmlHolder(mockRequest.requestContent)
def req = new XmlHolder(context.request)
def req = new XmlSlurper(messageExchange.requestContent)
def req = new XmlSlurper(mockRequest.requestContent)
def req = new XmlSlurper(context.request)
...but I get either an error or null returned.
Richie
OK, an update.
I have been able to get hold of the Request i.e. the SOAP Envelope using the following...
def request = mockRequest.getRequestContent()
Now I need to extract the payload inside the SOAP Body. That's what I need help on now.
I can confirm I want the Payload inside the SOAP Body for the Request. I have a application that sends the Request to the Mock Service and as part of my Unit Testing I want to compare the Payload with my expected results.
Rich
Yes I am still wanting to extract the Payload from the SOAP Body.
Appreciate if you could provide the groovy code.
Regards | https://community.smartbear.com/t5/SoapUI-Open-Source/Write-SOAP-Body-to-File-for-Mock-Service/m-p/205194/highlight/true | CC-MAIN-2021-25 | refinedweb | 383 | 68.16 |
A while ago I mentioned a voting website I created and a home-grown REST-ful library I created that I use in the site's code. I'm thinking that the library might be useful to others, but I'd like some feedback, both on it's probable utility and on the code in the library itself.
(Update: REST = Representational State Transfer; a good place to start reading up on this, as always, is wikipedia.)
So, here goes! First, the library - REST.pm
package REST;
use strict;
use CGI;
our $DEBUG = 0;
sub new { return bless {CGI => CGI->new}, shift };
sub cgi { shift()->{CGI} }
sub debug {
my ($self, $urls, @problems) = @_;
my $cgi = $self->cgi;
print $cgi->header,
$cgi->start_html, "REST::debug executing: @problems<hr>",
"<dl>", (map { "<dt><tt>$_</tt></dt><dd>$urls->{$_}</dd>" } keys %
+$urls), "</dl>",
"Path info: <tt>", $cgi->path_info, "</tt><br>",
"Request method: <tt>", $cgi->request_method, "</tt><br>",
$cgi->end_html;
}
sub run {
my ($self, %urls) = @_;
my $pathInfo = $self->cgi->path_info;
my $method = $self->cgi->request_method;
my ($package, @pathParams);
foreach my $path ( keys %urls ) {
$package = $urls{$path};
if ( $pathInfo =~ $path ) {
@pathParams = ($1, $2, $3, $4, $5, $6, $7, $8, $9);
}
undef $package;
}
if ( $package ) {
my $dispatcher = $package->new($self->cgi);
if ( $dispatcher->can($method) ) {
$dispatcher->$method(@pathParams);
return;
}
}
if ( $DEBUG ) {
$self->debug(\%urls, $@);
}
else {
print $self->cgi->header, $self->cgi->start_html, $self->cgi->end_
+html;
}
}
1;
[download]
As far as how you'd use REST, the overall idea is to set up a hash of "paths" to class/package names, create a REST instance, and then invoke run on that instance. These paths are actually regular expressions, allowing you to pass paramters as part of the path. E.g., the URL would be mapped via a path like q{/candidate/(\d+)/vote/yes}. The regular expression would match 4 and then pass that value along as a parameter to the handling function.
#!/usr/bin/perl -w
# (index.pl)
use strict;
use REST;
use HTML::Template;
my %urls = (
qr{^/?$} => 'Welcome',
qr{^/hello$} => 'NiceToMeetYou',
qr{^/bye/(\w+)$} => 'Goodbye',
);
REST->new->run(%urls);
[download]
Each class would need to implement GET or POST, or whatever other verbs as appropriate for your HTTP clients. In essence, REST.pm is a dispatcher. Your "application code" would be in classes like the following (which could either be in index.pl or use'd as appropriate):
package Renderer;
# a class w/common functionality in all "application" classes
sub new {
my ($class, $cgi) = @_;
return bless { CGI => $cgi }, $class;
}
sub cgi { return shift->{CGI} }
sub render {
my ($self, $content) = @_;
my $templateText = q{
<html>
<head><title>Welcome</title></head>
<body><TMPL_VAR NAME='CONTENT'></body>
</html>
};
my $template = HTML::Template->new_scalar_ref(\$templateText);
$template->param(CONTENT => $content);
print $self->cgi->header(-type => 'text/html'), $template->output;
}
package Welcome;
use base 'Renderer';
sub GET {
my $self = shift;
$self->render(q{
<p>Welcome. My name is Perl. What's your name?</p>
<form method="POST" action="index.pl/hello">
<input type="text" name="name">
<input type="submit">
</form>
});
}
package NiceToMeetYou;
use base 'Renderer';
sub POST {
my $self = shift;
my $name = $self->cgi->param('name'); # yes, it would be scrubbed in
+ production code
$self->render(qq{
<p>Nice to meet you $name.
<a href="../index.pl/bye/$name">Leaving already?</a></p>
});
}
package Goodbye;
use base 'Renderer';
sub GET {
my ($self, $name) = @_;
$self->render("Goodbye, $name. It was nice visiting.");
}
[download]
There are a couple ideas and possible improvemetns I have in mind. One thing I don't like is that even though this "simulates" REST-fulness, it doesn't really dispatch differently based on different MIME-types. And I don't like how index.pl (or whatever your filename is) appears in the URL in the browser; I've tried playing with mod_rewrite, but have never been able to get it just right.
So, any suggestions? Comments? If this turns out to be useful for others, I'd consider putting it on CPAN. Let me know what you all think.
...every application I have ever worked on is a glorified munger...
The main thing that leaps out at me is the name: in my humble opinion the concept is not so extraordinary that it merits its own top-level namespace. CGI::REST would be more reasonable.
Looking more closely, the $1, $2, $3, ... construct is pretty fugly. I would recommend you investigate what @- and @+ do, in order to build something that works to any arbitrary depth.
The dispatcher package is pretty minimalist: this forces a lot of make-work code into the renderer packages. I think it is important to add more infrastructure in order to make the renderers as simple as possible to write. You need to refactor agressively and pull as much repetitive code out of the renderers as possible.
The module would be entering a fiercely competitive mindshare space. There's a lot of modules out there that already do this sort of thing. You'll need to have a few very good real-life ready-to-run applications that people can tinker with and see what happens.
It would also be a good thing to have a look at what's out there already and discuss how your module is similar to others, and how it differs.
In any event, feel free to publish it on CPAN.
• another intruder with the mooring in the heart of the Perl
Thanks for the idea of looking at @+ and friend; $1 - $9 always bugged me too (though after having spent a little time hacking Forth, I flinch a little at any function that takes more than three parameters :-) , so the usefullness of actually having those extra parameters is lessened somewhat for me).
I agree on factoring out the common, rendering code. I've done that with my own projects, and put it into a utility REST::Dispatchable (not Renderer; I just came up with that name last night) class; I didn't include it here because I thought it might detract from the main subject.
No, I didn't define what I meant with "REST" elsewhere. I really had only two goals when starting this:
The fact that I'm having trouble accomplishing #1 (i.e., unable to hack mod_rewrite sufficiently that index.pl is hidden, or even that it depends on mod_rewrite) is frustrating but minor. From the reading I've done on REST, handling the HTTP verbs differently is pretty important, and I think this approach does it pretty well.
On the value of "pretty" url's, I've recently been thinking that having a url-mapping for functionality within a web application is a useful tool to organize the application.
The biggest drawback I see with my implementation is the extra code you'd have to write (as opposed to "configure" in the url-mapping) to differentiate content-types, e.g.:
# somewhere in a Renderer subclass...
sub POST {
my $self = shift;
my $filename = $self->cgi->param('file');
my $type = $self->cgi->uploadInfo($filename)->{'Content-Type'};
if ( $type eq 'text/html' ) {
# do something with the HTML
}
elsif ( $type eq 'image/gif' ) {
# do something with the GIF
}
# ...
}
[download]
It is nice that you've done reading on REST. Perhaps you could enlighten us with some links to some of your favorite such readings... because perhaps one of the things you read would happen to define what "REST" is for the REST of us.
Even just expanding the acronym "REST" (as you are using it) would be a small help. :)
httpd.conf:
ScriptAliasMatch /rest\b /path/to/index.pl
or:
ScriptAliasMatch /candidate\b /path/to/index.pl
Throwing nuclear weapons into the sun
Making everybody on Earth disappear
A threat from an alien with a mighty robot
A new technology or communications medium
Providing a magic fish to a Miss Universe
Establishing a Group mind
Results (150 votes). Check out past polls. | http://www.perlmonks.org/?node_id=648593 | CC-MAIN-2018-22 | refinedweb | 1,308 | 61.56 |
Lately, I've realized that game state management is always vastly overcomplicated. Here's a brain dead simple system that does everything you probably need in a straightforward way.
Just what the heck is a game state?Well, what happens when you boot up a game? You probably see some credit to the engine, get shown "the way it's meant to be played", and maybe watch a sweet FMV cutscene. Then you get launched into the menu, where you can tighten up the graphics on level 3 and switch the controls to accommodate your DVORAK keyboard. Then you pick your favourite level, and start playing. A half an hour later, you've had too much Mountain Dew, so you have to pause the game for a few minutes to stop the action to be resumed later. That's about 4 game states right there: introduction, menu, gameplay, pause screen.
Alright, how do we start coding?The job of a state is pretty simple. Generally, it needs to update something, and then draw something. Sounds like an interface to me.
public interface State { public void update(float dt); public void draw(); }You'd then have concrete states like Menu or Play that implement this interface. Now, I'm going to put a little spin on it, by changing the type of the update method.
public interface State { public State update(float dt); public void draw(); }Why did I do that? Well, one of the important parts about game states is the ability to change between them. A game wouldn't be very fun if all you could do was watch the intro FMV over and over again. So the update method now returns whichever state should be used next. If there's no change, it should just return itself.
public class Menu implements State { public State update(float dt) { if(newGameButton.clicked()) { return new Play("Level 1"); } return this; } public void draw() { drawSomeButtons(); } }Now, the state management code becomes extremely simple, and doesn't require any separate manager class or anything. Just stick it in your main method or whatever holds the game loop.
State current = new Intro(); while(isRunning) { handleInput(); current = current.update(calculateDeltaTime()); current.draw(); presentAndClear(); }
Wait, that's it?Yup.
For real?Nah, just kidding. Here's something really cool about this method. Take the pause state. You have to be able to unpause and return to what you were doing, unchanged, right? Usually, a stack is advocated. You push the pause state on to the stack, and pop it off when you're done to get back to the play state. You would then only update and draw the topmost state. I say, screw the stack. Have the pause state take a State in its constructor, which is stored, and then returned instead of the pause state itself when the update method detects that the game should be unpaused. If the pause screen needs to be an overlay over whatever was going on before the game was paused, that's really easy, too!
public class Pause implements State { private State previous; public Pause(State previous) { this.previous = previous; } public State update(float dt) { if(resumeButton.clicked()) { return previous; } return this; } public State draw() { previous.draw(); applyFancyBlurEffect(); drawThePauseMenu(); } }
You need to be a member in order to leave a comment
Sign up for a new account in our community. It's easy!Register a new account
Already have an account? Sign in here.Sign In Now | https://www.gamedev.net/articles/programming/general-and-gameplay-programming/brain-dead-simple-game-states-r4262/ | CC-MAIN-2018-51 | refinedweb | 579 | 75.3 |
The New Anonymous Types Feature in C# 3.0
At the PDC 2005, on the eve of the release of C# 2.0 (C# Whidbey), Microsoft previewed its plans for C# 3.0 (C# Orcas). Along with a list of fantastic new features such as Language Integrated Query (LINQ), Redmond also described a new feature called anonymous types. This article takes a deeper look at anonymous types.
Anonymous Types Defined
The C# 3.0 specifications describe anonymous types as tuple types automatically inferred and created from object initializers. Before you can fully understand the meaning of this definition, you need to understand the term "object initializer," which is the basis of the anonymous types feature.
Note: The compiler creates the anonymous type at compile time and not run time.
You can see this class in the disassembly through ILDASM (the IL Disassembler):
var p1 = new {Name = "A", Price = 3};
At compile time, the compiler creates a new (anonymous) type with properties inferred from the object initializer. Hence, the new type will have the properties Name and Price. The Get and Set methods, as well as the corresponding private variables to hold these properties, are generated automatically. At run time, an instance of this type is created and the properties of this instance are set to the values specified in the object initializer.
C# Internals
You might be surprised to learn that you define only the names of the properties and their values and C# 3.0 automatically creates a class from them. How does it do that? Check out how the compiler processes your request.
You started with the following line of code:
var p1 = new {Name = "A", Price = 3};
When the C# 3.0 compiler encounters a request such as this, it converts it in the background into a more verbose declaration, such as this:
class __Anonymous1 { private string name ; private int price; public string Name{ get { return name; } set { name = value ; } } public int Price{ get { return price; } set { price= value ; } } } __Anonymous1 p1 = new __Anonymous1(); p1.Name = "A"; pt.Price =3
Anonymous Types in Action
To get started, you need Visual Studio 2005 with .NET 2.0 installed. Next, you need to install the LINQ technology preview available for free download from MSDN.
If you have Visual Studio 2005 installed, you will have three new project templates titled LINQ Preview under Visual C#: LINQ Console Application, LINQ Windows Application, and LINQ Library.
To launch a project using anonymous types, take the following steps (Click here to download the accompanying source code for the example program.):
- Start the Visual Studio 2005 editor and create a new project, selecting LINQ Console as the project template in the New Project window.
- Name the project AnonTypes and click OK.
- Type the following code in the editor:
// Program.cs using System; using System.Query; using System.Data.DLinq; namespace AnonTypes { class Program { static void Main(string[] args) { var p1 = new {Name = "A", Price = 3}; Console.WriteLine("Name = {0}\nPrice = {1}", p1.Name, p1.Price); Console.ReadLine(); } } }
- Compile the application, which should compile correctly.
- Execute the application. It should print out the following:
Name = A Price = 3
If you don't have Visual Studio 2005, you can still compile this application from the command line by typing the following:
C:\Program Files\LINQ Preview\Bin\Csc.exe /reference:"C:\Program Files\LINQ Preview\Bin\System.Data.DLinq.dll" /reference: System.dll /reference:"C:\Program Files\LINQ Preview\Bin\System.Query.dll" /out:AnonTypes.exe /target:exe Program.cs
Even though you have not explicitly defined a class in the code, the C# compiler automatically performs the following tasks:
- Deciphers the type.
- Creates a new class (that has the property's name and price).
- Uses this class to instantiate a new object.
- Assigns the object the parameters<< | https://www.developer.com/net/csharp/article.php/3589916/The-New-Anonymous-Types-Feature-in-C-30.htm | CC-MAIN-2018-22 | refinedweb | 631 | 57.67 |
Ganglia and Nagios, Part 1
Monitor enterprise clusters with Ganglia
Install, configure, and extend open source Ganglia to effectively monitor a data center
Content series:
This content is part # of # in the series: Ganglia and Nagios, Part 1
This content is part of the series:Ganglia and Nagios, Part 1
Stay tuned for additional content in this series.
As data centers grow and administrative staffs shrink, the need for efficient monitoring tools for compute resources is more important than ever. The term monitor when applied to the data center can be confusing since it means different things depending on who is saying it and who is hearing it. For example:
- The person running applications on the cluster thinks: "When will my job run? When will it be done? And how is it performing compared to last time?"
- The operator in the network operations center (NOC) thinks: "When will we see a red light that means something needs to be fixed and a service call placed?"
- The person in the systems engineering group thinks: "How are our machines performing? Are all the services functioning correctly? What trends do we see and how can we better utilize our compute resources?"
Somewhere in this frenzy of definitions you are bound to find terabytes of code to monitor exactly what you want to monitor. And it doesn't stop there; there are also myriads of products and services. Fortunately though, many of the monitoring tools are open source -- in fact, some of the open source tools do a better job than some of the commercial applications that try to accomplish the same thing.
The most difficult part of using open source monitoring tools is implementing an install and configuration that works for your environment. The two major problems with using open source monitoring tools are
- There is no tool that will monitor everything you want the way you want it. Why? Because different users will define monitoring in different ways (as I mentioned earlier).
- Because of the first problem, there could be a great amount of customization required to get the tool working in your data center exactly how you want it. Why? Because every environment, no matter how standard, is unique.
By the way, these same two problems exist for the commercial monitoring tools also.
So, I'm going to talk about Ganglia and Nagios, two tools that monitor data centers. Both of them are used heavily in high performance computing (HPC) environments, but they have qualities that make them attractive to other environments as well (such as clouds, render farms, and hosting centers). Additionally, both have taken on different positions in the definition of monitoring. Ganglia is more concerned with gathering metrics and tracking them over time while Nagios has focused on being an alerting mechanism.
As the separate projects evolved, overlap developed. For example:
- Ganglia used to require an agent to run on every host to gather information from it, but now metrics can be obtained from just about anything through Ganglia's spoofing mechanism.
- Nagios also used to only poll information from its target hosts, but now has plug-ins that run agents on target hosts.
While the tools have converged in some functional areas, there is still enough different about them so you gain from running both of them. Running them together can fill the gaps in each product:
- Ganglia doesn't have a built-in notification system while Nagios excels at this.
- Nagios doesn't seem to have scalable built-in agents on target hosts (people may argue on that point) while this was part of the intentional, original design of Ganglia.
There are also other open source projects that do things these two do and some are better in certain areas than others. Popular open source monitoring solutions include Cacti, Zenoss, Zabbix, Performance Copilot (PCP), and Clumon (plus I'm sure you've got a favorite I didn't mention). Many of these (including Ganglia and some Nagios plug-ins) make use of RRDTool or Tobi Oetiker's MRTG (Multi Router Traffic Grapher) underneath to generate pretty graphs and store data.
With so many open source solutions for monitoring a data center, I'm often surprised to see how many scale-out computing centers develop their own solutions and ignore the work that has already been done by others.
In this two-part article, I will discuss Ganglia and Nagios since there is some anecdotal evidence that these are the most popular. And I think there is too little written on how to integrate them together even though it is a very prevalent practice. Especially in the large HPC labs and universities.
By the end of this series, you should be able to install Ganglia and make tie-ins with Nagios, as well as answer the monitoring questions that the different user groups will ask you. It will only be a start, but it should help you get your basics down and develop a total vision of your cluster.
In this article I will walk you through:
- Installing and configuring the basic Ganglia setup.
- How to use the Python modules to extend functionality with IPMI (the Intelligent Platform Management Interface).
- How to use Ganglia host spoofing to monitor IPMI.
Our goal -- to set up a baseline monitoring system of an HPC Linux® cluster in which these three different monitoring views above can be addressed at some level:
-.
Introducing Ganglia
Ganglia is an open source monitoring project, designed to scale to
thousands of nodes, that started at UC Berkeley. Each machine runs a
daemon called
gmond which collects and sends the metrics
(like processor speed, memory usage, etc.) it gleans from the operating
system to a specified host. The host which receives all the metrics can
display them and can pass on a condensed form of them up a hierarchy. This
hierarchical schema is what allows Ganglia to scale so well.
gmond has very little overhead which makes it a great piece
of code to run on every machine in the cluster without impacting user
performance.
There are times when all of this data collection can impact node performance. "Jitter" in the network (as this is called) is when lots of little messages keep coming at the same time. We have found that by lockstepping the nodes' clocks, this can be avoided.
Installing Ganglia
There are many articles and resources on the Internet that will show you how to install Ganglia. We will revisit the one I wrote on the xCAT wiki. I will assume for the purposes of this article that the operating system is some flavor of Red Hat 5 Update 2 (although the steps won't be that much different for other enterprise Linux operating systems).
Prerequisites
Provided you have your yum repository set up, installing prereqs should be easy for the most part. Something like this:
(Note: Yum is really supposed to handle most of these dependencies, but in one of my tests I saw failures to compile that were fixed by adding all these packages.)
After getting these, you need another prerequisite that is not in the Red Hat repository. You can get it and build it like this as long as your machine is connected to the Internet:
wget \ SRPMS/libconfuse-2.6-1.fc9.src.rpm rpmbuild --rebuild libconfuse-2.6-1.fc9.src.rpm cd /usr/src/redhat/RPMS/x86_64/ rpm -ivh libconfuse-devel-2.6-1.x86_64.rpm libconfuse-2.6-1.x86_64.rpm
Remember, mirrors often change. If this doesn't work, then use a search engine to find the libconfuse-2.6.-1.fc9 source RPM.
RRDTool
RRDTool means: Round Robin Database Tool. It was created by Tobias Oetiker and provides an engine for many high performance monitoring tools. Ganglia is one of them, but Cacti and Zenoss are others.
To install Ganglia, we first need to have RRDTool running on our monitoring server. RRDTool provides two very cool functions that are leveraged by other programs:
- It stores data in a Round Robin Database. As the data captured gets older, the resolution becomes less refined. This keeps the footprint small and still useful in most cases.
- It can create graphs by using command-line arguments to generate them from the data it has captured.
To install RRDTool, run the following (tested on versions 1.3.4 and 1.3.6):
cd /tmp/ wget tar zxvf rrdtool* cd rrdtool-* ./configure --prefix=/usr make -j8 make install which rrdtool ldconfig # make sure you have the new rrdtool libraries linked.
There are many ways you can use RRDTool as a standalone utility in your environment, but I won't go into them here.
The main Ganglia install
Now that you have all prerequisites, you can install Ganglia. First you need to get it. In this article we are using Ganglia 3.1.1. Download the ganglia-3.1.1.tar.gz file and place it in the /tmp directory of your monitoring server. Then do the following:
cd /tmp/ tar zxvf ganglia*gz cd ganglia-3.1.1/ ./configure --with-gmetad make -j8 make install
You should exit without errors. If you see errors, then you may want to check for missing libraries.
Configuring Ganglia
Now that the basic installation is done, there are several configuration items you need to take care of to get it running. Do the following steps:
- Command line file manipulations.
- Modify /etc/ganglia/gmond.conf.
- Take care of multi-homed machines.
- Start it up on a management server.
Step 1: Command line file manipulations
As shown in the following:
cd /tmp/ganglia-3.1.1/ # you should already be in this directory mkdir -p /var/www/html/ganglia/ # make sure you have apache installed cp -a web/* /var/www/html/ganglia/ # this is the web interface cp gmetad/gmetad.init /etc/rc.d/init.d/gmetad # startup script cp gmond/gmond.init /etc/rc.d/init.d/gmond mkdir /etc/ganglia # where config files go gmond -t | tee /etc/ganglia/gmond.conf # generate initial gmond config cp gmetad/gmetad.conf /etc/ganglia/ # initial gmetad configuration mkdir -p /var/lib/ganglia/rrds # place where RRDTool graphs will be stored chown nobody:nobody /var/lib/ganglia/rrds # make sure RRDTool can write here. chkconfig --add gmetad # make sure gmetad starts up at boot time chkconfig --add gmond # make sure gmond starts up at boot time
Step 2: Modify /etc/ganglia/gmond.conf
Now you can modify /etc/ganglia/gmond.conf to name your cluster. Suppose
your cluster name is "matlock"; then you would change
name = "unspecified" to
name = "matlock".
Step 3: Take care of multi-homed machines
In my cluster, eth0 is the public IP address of my system. However, the
monitoring server talks to the nodes on the private cluster network
through eth1. I need to make sure that the multicasting that Ganglia uses
ties to eth1. This can be done by creating the file
/etc/sysconfig/network-scripts/route-eth1. Add the contents
239.2.11.71 dev eth1.
You can then restart the network and make sure routes shows this IP going
through eth1 using
service network restart. Note: You should
put in 239.2.11.71 because that is the ganglia default multicast channel.
Change it if you make the channel different or add more.
Step 4: Start it up on a management server
Now you can start it all up on the monitoring server:
service gmond start service gmetad start service httpd restart
Pull up a Web browser and point it to the management server at. You'll see that your management server is now being monitored. You'll also see several metrics being monitored and graphed. One of the most useful is that you can monitor the load on this machine. Here is what mine looks like:
Figure 1. Monitoring load
Nothing much happening here, the machine is just idling.
Get Ganglia on the nodes
Up to now, we've accomplished running Ganglia on the management server; now we have to care more about what the compute nodes all look like. It turns out that you can put Ganglia on the compute nodes by just copying a few files. This is something you can add to a post install script if you use Kickstart or something you can add to your other update tools.
The quick and dirty way to do it is like this: Create a file with all your
host names. Suppose you have nodes
deathstar001-
deathstar100. Then you would have a
file called /tmp/mynodes that looks like this:
deathstar001 deathstar002 ...skip a few... deathstar099 deathstar100
Now just run this:
# for i in `cat /tmp/mynodes`; do scp /usr/sbin/gmond $i:/usr/sbin/gmond ssh $i mkdir -p /etc/ganglia/ scp /etc/ganglia/gmond.conf $i:/etc/ganglia/ scp /etc/init.d/gmond $i:/etc/init.d/ scp /usr/lib64/libganglia-3.1.1.so.0 $i:/usr/lib64/ scp /lib64/libexpat.so.0 $i:/lib64/ scp /usr/lib64/libconfuse.so.0 $i:/usr/lib64/ scp /usr/lib64/libapr-1.so.0 $i:/usr/lib64/ scp -r /usr/lib64/ganglia $i:/usr/lib64/ ssh $i service gmond start done
You can restart
gmetad, refresh your Web browser, and you
should see your nodes now showing up in the list.
Some possible issues you might encounter:
- You may need to explicitly set the static route as in the earlier step 3 on the nodes as well.
- You may have firewalls blocking the ports.
gmondruns on port 8649. If
gmondis running on a machine you should be able to run the command
telnet localhost 8649. And see a bunch of XML output scroll down your screen.
Observing Ganglia
Many system engineers have a hard time understanding their own workload or job behavior. They may have custom code or haven't done research to see what their commercial products run. Ganglia can help profile applications.
We'll use Ganglia to examine the attributes of running the Linpack benchmark. Figure 2 shows a time span where I launched three different Linpack jobs.
Figure 2. Watching over Linpack
As you can see from this graph, when the job starts there is some activity on the network when the job launches. What is interesting, however, is that towards the end of the job, the network traffic increases quite a bit. If you knew nothing about Linpack, you could at least say this: Network traffic increases at the end of the job.
Figure 3 and Figure 4 show CPU and memory utilization respectively. From here you can see that we are pushing the limits of the processor and that our memory utilization is pretty high too.
Figure 3. CPU usage
Figure 4. Memory usage
These graphs give us great insight to the application we're running: We're using lots of CPU and memory and creating more network traffic towards the end of the running job. There are still a lot of other attributes about this job that we don't know, but this gives us a great start.
Knowing these things can help make better purchasing decisions in the future when it comes to buying more hardware. Of course, no one buys hardware just to run Linpack ... right?
Extending capability
The basic Ganglia install has given us a lot of cool information. Using Ganglia's plug-ins gives us two ways to add more capability:
- Through the addition of in-band plug-ins.
- Through the addition of out-of-band spoofing from some other source.
The first method has been the common practice in Ganglia for a while. The second method is a more recent development and overlaps with Nagios in terms of functionality. Let's explore the two methods briefly with a practical example.
In-band plug-ins
In-band plug-ins can happen in two ways.
- Use a cron-job method and call Ganglia's
gmetriccommand to input data.
- Use the new Python module plug-ins and script it.
The first method was the common way we did it in the past and I'll more about this in the next section on out-of- band plug-ins. The problem with it is that it wasn't as clean to do. Ganglia 3.1.x added Python and C module plug-ins to make it seem more natural to extend Ganglia. Right now, I'm going to show you the second method.
First, enable Python plug-ins with Ganglia. Do the following:
- Edit the /etc/ganglia/gmond.conf file.
If you open it up, then you'll notice about a quarter of the way down
there is a section called
modules that looks something like
this:
modules { module { name = "core_metrics" } ... }
We're going to add another module to the modules section. The one you should stick in is this:
module { name = "python_module" path = "modpython.so" params = "/usr/lib64/ganglia/python_modules/" }
On my gmond.conf I added the previous code stanza at line 90. This allows
Ganglia to use the Python modules. Also, a few lines below that after the
statement
include ('/etc/ganglia/conf.d/*.conf'), add the
line
include ('/etc/ganglia/conf.d/*.pyconf'). These include
the definitions of the things we are about to add.
- Make some directories.
Like so:
mkdir /etc/ganglia/conf.d mkdir /usr/lib64/ganglia/python_modules
- Repeat 1 and 2 on all your nodes.
To do that,
- Copy the new gmond.conf to each node to be monitored.
- Create the two directories as in step 2 on each node to be monitored so that they too can use the Python extensions.
Now that the nodes are set up to run Python modules, let's create a new one. In this example we're going to add a plug-in that uses the Linux IPMI drivers. If you are not familiar with IPMI and you work with modern Intel and AMD machines then please learn about it (in Related topics).
We are going to use the open source IPMItool to communicate with the IPMI device on the local machine. There are several other choices like OpenIPMI or freeipmi. This is just an example, so if you prefer to use another one, go right on ahead.
Before starting work on Ganglia, make sure that IPMItool works on your
machine. Run the command
ipmitool -c sdr type temperature | sed 's/ /_/g'; if that
command doesn't work, try loading the IPMI device drivers and run it
again:
modprobe ipmi_msghandler modprobe ipmi_si modprobe ipmi_devintf
After running the
ipmitool command my output shows
Ambient_Temp,20,degrees_C,ok CPU_1_Temp,20,degrees_C,ok CPU_2_Temp,21,degrees_C,ok
So in my Ganglia plug-in, I'm just going to monitor ambient temperature. I've created a very poorly written plug-in called ambientTemp.py that uses IPMI based on a plug-in found on the Ganglia wiki that does this:
Listing 1. The poorly written Python plug-in ambientTemp.py
import os def temp_handler(name): # our commands we're going to execute sdrfile = "/tmp/sdr.dump" ipmitool = "/usr/bin/ipmitool" # Before you run this Load the IPMI drivers: # modprobe ipmi_msghandler # modprobe ipmi_si # modprobe ipmi_devintf # you'll also need to change permissions of /dev/ipmi0 for nobody # chown nobody:nobody /dev/ipmi0 # put the above in /etc/rc.d/rc.local foo = os.path.exists(sdrfile) if os.path.exists(sdrfile) != True: os.system(ipmitool + ' sdr dump ' + sdrfile) if os.path.exists(sdrfile): ipmicmd = ipmitool + " -S " + sdrfile + " -c sdr" else: print "file does not exist... oops!" ipmicmd = ipmitool + " -c sdr" cmd = ipmicmd + " type temperature | sed 's/ /_/g' " cmd = cmd + " | awk -F, '/Ambient/ {print $2}' " #print cmd entries = os.popen(cmd) for l in entries: line = l.split() # print line return int(line[0]) def metric_init(params): global descriptors temp = {'name': 'Ambient Temp', 'call_back': temp_handler, 'time_max': 90, 'value_type': 'uint', 'units': 'C', 'slope': 'both', 'format': '%u', 'description': 'Ambient Temperature of host through IPMI', 'groups': 'IPMI In Band'} descriptors = [temp] return descriptors def metric_cleanup(): '''Clean up the metric module.''' pass #This code is for debugging and unit testing if __name__ == '__main__': metric_init(None) for d in descriptors: v = d['call_back'](d['name']) print 'value for %s is %u' % (d['name'], v)
Copy Listing 1 and place it into /usr/lib64/ganglia/python_modules/ambientTemp.py. Do this for all nodes in the cluster.
Now that we've added the script to all the nodes in the cluster, tell Ganglia how to execute the script. Create a new file called /etc/ganglia/conf.d/ambientTemp.pyconf The contents are as follows:
Listing 2. Ambient.Temp.pyconf
modules { module { name = "Ambient Temp" language = "python" } } collection_group { collect_every = 10 time_threshold = 50 metric { name = "Ambient Temp" title = "Ambient Temperature" value_threshold = 70 } }
Save Listing 2 on all nodes.
The last thing that needs to be done before restarting
gmond
is to change the permissions of the IPMI device so that nobody
can perform operations to it. This will make your IPMI interface extremely
vulnerable to malicious people!
This is only an example:
chown nobody:nobody /dev/ipmi0.
Now restart
gmond everywhere. If you get this running then
you should be able to refresh your Web browser and see something like the
following:
Figure 5. IPMI in-band metrics
The nice thing about in-band metrics is they allow you to run programs on the hosts and feed information up the chain through the same collecting mechanism other metrics use. The drawback to this approach, especially for IPMI, is that there is considerable configuration required on the hosts to make it work.
Notice that we had to make sure the script was written in Python, the configuration file was there, and that the gmond.conf was set correctly. We only did one metric! Just think of all you need to do to write other metrics! Doing this on every host for every metric can get tiresome. IPMI is an out-of-band tool so there's got to be a better way right? Yes there is.
Out-of-band plug-ins (host spoofing)
Host spoofing is just the tool we need. Here we use the powerful
gmetric and tell it which hosts we're running on --
gmetric is a command-line tool to insert information into
Ganglia. In this way you can monitor anything you want.
The best part about
gmetric? There are tons of scripts
already written.
As a learning experience, I'm going to show you how to reinvent how to run ipmitool to remotely access machines:
- Make sure ipmitool works on its own out of band.
I have set the BMC (the chip on the target machine) so that I can run IPMI
commands on it. For example: My monitoring hosts name is redhouse. From
redhouse I want to monitor all other nodes in the cluster. Redhouse is
where
gmetad runs and where I point my Web browser to access
all of the Ganglia information.
One of the nodes in my cluster has the host name x01. I set the BMC of x01 to have an IP address that resolves to the host x01-bmc. Here I try to access that host remotely:
# ipmitool -I lanplus -H x01-bmc -U USERID -P PASSW0RD sdr dump \ /tmp/x01.sdr Dumping Sensor Data Repository to '/tmp/x01.sdr' # ipmitool -I lanplus -H x01-bmc -U USERID -P PASSW0RD -S /tmp/x01.sdr \ sdr type Temperature Ambient Temp | 32h | ok | 12.1 | 20 degrees C CPU 1 Temp | 98h | ok | 3.1 | 20 degrees C CPU 2 Temp | 99h | ok | 3.2 | 21 degrees C
That looks good. Now let's put it in a script to feed to
gmetric.
- Make a script that uses ipmitool to feed into
gmetric.
We created the following script /usr/local/bin/ipmi-ganglia.pl and put it on the monitoring server:
#!/usr/bin/perl # vallard@us.ibm.com use strict; # to keep things clean... er cleaner use Socket; # to resolve host names into IP addresses # code to clean up after forks use POSIX ":sys_wait_h"; # nodeFile: is just a plain text file with a list of nodes: # e.g: # node01 # node02 # ... # nodexx my $nodeFile = "/usr/local/bin/nodes"; # gmetric binary my $gmetric = "/usr/bin/gmetric"; #ipmitool binary my $ipmi = "/usr/bin/ipmitool"; # userid for BMCs my $u = "xcat"; # password for BMCs my $p = "f00bar"; # open the nodes file and iterate through each node open(FH, "$nodeFile") or die "can't open $nodeFile"; while(my $node = <FH>){ # fork so each remote data call is done in parallel if(my $pid = fork()){ # parent process next; } # child process begins here chomp($node); # get rid of new line # resolve node's IP address for spoofing my $ip; my $pip = gethostbyname($node); if(defined $pip){ $ip = inet_ntoa($pip); }else{ print "Can't get IP for $node!\n"; exit 1; } # check if the SDR cache file exists. my $ipmiCmd; unless(-f "/tmp/$node.sdr"){ # no SDR cache, so try to create it... $ipmiCmd = "$ipmi -I lan -H $node-bmc -U $u -P $p sdr dump /tmp/$node.sdr"; `$ipmiCmd`; } if(-f "/tmp/$node.sdr"){ # run the command against the cache so that its faster $ipmiCmd = "$ipmi -I lan -H $node-bmc -U $u -P $p -S /tmp/$node.sdr sdr type Temperature "; # put all the output into the @out array my @out = `$ipmiCmd`; # iterate through each @out entry. foreach(@out){ # each output line looks like this: # Ambient Temp | 32h | ok | 12.1 | 25 degrees C # so we parse it out chomp(); # get rid of the new line # grap the first and 5th fields. (Description and Temp) my ($descr, undef, undef, undef,$temp) = split(/\|/); # get rid of white space in description $descr =~ s/ //g; # grap just the temp, (We assume C anyway) $temp = (split(' ', $temp))[0]; # make sure that temperature is a number: if($temp =~ /^\d+/ ){ #print "$node: $descr $temp\n"; my $gcmd = "$gmetric -n '$descr' -v $temp -t int16 -u Celcius -S $ip:$node"; `$gcmd`; } } } # Child Thread done and exits. exit; } # wait for all forks to end... while(waitpid(-1,WNOHANG) != -1){ 1; }
Aside from all the parsing, this script just runs the
ipmitool command and grabs temperatures. It then puts those
values into Ganglia with the
gmetric command for each of the
metrics.
- Run the script as a cron job.
Run
crontab -e. I added the following entry to run every 30
minutes:
30 * * * * /usr/local/bin/ipmi-ganglia.sh. You may
want to make it happen more often or less.
- Open Ganglia and look at the results.
Opening up the Ganglia Web browser and looking at the graphs of one of the nodes, you can see that nodes were spoofed and were updated in each nodes entry:
Figure 6. The no_group metrics
One of the drawbacks to spoofing is that the category goes in the no_group
metrics group.
gmetric doesn't appear to have a way to change
the groupings in a nice way like in the in-band version.
What's next
This article gives a broad overview of what you can get done using Ganglia and Nagios as open source monitoring software, both individually and in tandem. You took an installation/configuration tour of Ganglia, then saw how Ganglia can be useful in understanding application characteristics. Finally, you saw how to extend Ganglia using an in-band script and how to use out-of-band scripts with host spoofing.
This is a good start. But this article has only answered the monitoring question the system engineer posed. We now can now view systemwide performance and see how the machines are being utilized. We can tell if machines are idle all the time, or if they're running at 60 percent capacity. We can now even tell which machines run the hottest and coldest, and see if their rack placement could be better.
Part 2 in this two-part series explores setting up Nagios and integrating it with Ganglia, including:
- Installing and configuring a basic Nagios setup for alerts
- Monitoring switches and other infrastructure
- Tying Nagios into Ganglia for alerts
As a bonus, the second part shows how to extend the entire monitoring system to monitor running jobs and other infrastructure. By doing these additional items, we'll be able to answer the other monitoring questions that different groups ask.
Downloadable resources
Related topics
- Grab Ganglia 3.1.1.
- RDDTool (Round Robin Database Tool) was created by Tobias Oetiker (bio); it provides an engine for many high performance monitoring tools.
- Gmetric is a command-line tool to insert information into Ganglia that sports lots of scripts to help with host spoofing.
- Some other monitoring tools:
- IPMItool is a utility for managing and configuring devices that support the Intelligent Platform Management Interface (IPMI) version 1.5 and version 2.0 specifications.
- The Multi Router Traffic Grapher monitors SNMP network devices and draws pretty pictures showing how much traffic has passed through each interface.
- In the developerWorks Linux hub, find more resources for Linux developers (including developers who are new to Linux.
- See all Linux articles and tutorials on developerWorks. | https://www.ibm.com/developerworks/linux/library/l-ganglia-nagios-1/index.html | CC-MAIN-2019-47 | refinedweb | 4,818 | 64.2 |
Created on 2015-10-15 08:05 by serhiy.storchaka, last changed 2018-11-04 21:31 by rhettinger.
Proposed(!)
These all look good except for perhaps #5 which I need to look at a bit more for its effect on OD subclasses.
Thanks.
Thank you for your review Eric.
As for using Py_TYPE(self) instead of the __class__ attribute in #3, this is consistent with the rest of Python core and stdlib. All C implementations use Py_TYPE(self) for repr and pickling, even if Python implementations can use __class__.
>>> class S(set): __class__ = str
...
>>> s = S()
>>> s.__class__
<class 'str'>
>>> s
S()
>>> s.__reduce_ex__(2)
(<class '__main__.S'>, ([],), {})
>>> s+''
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'S' and 'str'
Note that you can't just set s.__class__ = str, this is forbidden (issue24912). You should set __class__ in class definition or make it a property, and this doesn't have affect object's behavior, the only visible effect is the value of the __class__ attribute itself.
One possible argument for Py_TYPE(self) (besides simplicity and historical reasons) is that it is more reliable. It doesn't cause executing arbitrary Python code and therefore is thread-safe and reentrant. It returns the true type of the object, that can be important for debugging.
We should not care about exactly matching Python implementation, but rather about consistency with the rest of Python. If such type of mismatching is considered a bug, Python is full of such bugs.
About #5, be sure that the new code is exact semantic equivalence to the old code besides copying the dict that is not needed now. I just dropped an iteration on always empty dict and related code.
I don't see re-entrancy problem in #7. Could you please provide an example?. The two implementations should behave identically in nearly every case. The only place I would expect a deviation to not matter is for anything where Python-as-a-whole does not guarantee behavior. However there are *very* few of those when all is said and done. Any other variation should not be made casually and only if we are willing to accept that there may be code out there that relies on the pure Python behavior which the C implementation will break.
So like I said before, as a rule I'm absolutely okay with changing the behavior as long as the pure Python implementation is changed to match and OrderedDict remains backward-compatible (and the change is meaningful, e.g. efficiency, consistency). Otherwise my concerns remain and we have to have sufficient justification for the change..
-----
For this particular case, I think we should still aim for compatibility with the pure Python implementation. To that effect, we could use Py_TYPE(od) only if PyODict_CheckExact() returns true (as a fast path) and use od.__class__ otherwise. That fast path would be safe for the C implementation since code can't change OrderedDict().__class__ (due to #24912).
If we *do* continue supporting "type(od) != od.__class__" in repr then I'd say why bother with a fast path for PyOdict_CheckExact(). That sort of efficiency isn't necessary for repr. If we stop supporting a differing od.__class__ then I'm fine simply using Py_TYPE() throughout.
Likewise, if this is not a case we want to support then we must accept that we may break some code out there, however unlikely that is. In that case perhaps we could be more clear in the documentation that OrderedDict().__class__ should not be changed, though such an addition to the OrderedDict docs might just be clutter. A general FAQ or other broader doc entry about not assigning to obj.__class__ for stdlib types might be more appropriate. But that is where clarification from python-dev would help.
[1] There is also a difference between type(obj) and obj.__class__ in the case of proxies (e.g. see #16251), but that is less of an issue here.
Regarding #5, you're right about OrderedDict().__dict__ being empty for the C implementation. (Nice observation!) So I'm okay with ripping all that code out of odict_reduce(). Since we're still accessing od.__dict__ through _PyObject_GetAttrId() that should not impact subclassing.
Regarding #7, I see what you did now. That looks fine to me.
>.
There is no a difference. io, pickle, ElementTree, bz2, virtually all
accelerator classes was created as replacements of pure Python
implementations. All C implementations use Py_TYPE(self) for repr() and
pickling. I think this deviation is common and acceptable.
Backward compatibility related to __class__ assignment was already broken in C
implementation. In 3.4 following code works:
>>> from collections import *
>>> class foo(OrderedDict):
... def bark(self): return "spam"
...
>>> class bar(OrderedDict):
... pass
...
>>> od = bar()
>>> od.__class__ = foo
>>> od.bark()
'spam'
In 3.5 it doesn't.
>
No, this assignment is forbidden (due to #24912). You can't set __class_ for
an instance of a subclass of non-heap type.
>.
Could you please raise a discussion on Python-Dev? You will formulate the
problem better.
Updated patch addresses Eric's comments. Changes related to #3 are reverted. We will return to this after discussing on Python-Dev.
new patch LGTM
> Backward compatibility related to __class__ assignment was already broken in C
> implementation. In 3.4 following code works:
[snip]
> In 3.5 it doesn't.
Depending on what feedback we get from python-dev, that may need to be
fixed. I'd be inclined to not worry about it. :)
> No, this assignment is forbidden (due to #24912). You can't set __class_ for
> an instance of a subclass of non-heap type.
Ah, I see. So the solution to that issue has *forced* a compatibility break.
> Could you please raise a discussion on Python-Dev? You will formulate the
> problem better.
I will.
Posted to python-dev:
New changeset b6e33798f82a by Serhiy Storchaka in branch '3.5':
Issue #25410: Cleaned up and fixed minor bugs in C implementation of OrderedDict.
New changeset 741ef17e9b86 by Serhiy Storchaka in branch 'default':
Issue #25410: Cleaned up and fixed minor bugs in C implementation of OrderedDict.
Here is a patch that makes both implementations to use type(self) instead of self.__class__ in __repr__(), __reduce__() and copy().
There is a difference between current implementations. Python implementation uses self.__class__ in copy(), C implementation uses type(self).
Seems there is a leak in _odict_add_new_node() when PyObject_Hash(key) fails. Here is a fix.
both patches* LGTM
* odict_type.patch and odict_add_new_node_leak.patch
And thanks again, Serhiy, for taking the time on this. :)
New changeset 93f948120773 by Serhiy Storchaka in branch '3.5':
Issue #25410: Fixed a memory leak in OrderedDict in the case when key's hash
New changeset c3cec0f77eff by Serhiy Storchaka in branch 'default':
Issue #25410: Fixed a memory leak in OrderedDict in the case when key's hash
Thank you for your reviews and discussions (and for your appreciated C acceleration of OrderedDict of course) Eric. I just want to make the code a little cleaner and reliable.
As for odict_type.patch, I would prefer to commit only C part of the patch and left Python implementation unchanged. There few not very strong arguments for __class__ against type() in Python code.
1. Calling type() needs to make globals and builtins lookup. This is more than 2 times slower than access the __class__ attribute. Not critical for __repr__(), __reduce__() and copy().
2. If the code is invoked at shutdown after destroying the builtins module, type can be None. We already had issues with this in the past. In current Python such situation is almost impossible nevertheless, due to different defensive techniques.
Since the python-dev discussion about __class__, leaving the Python implementation alone is fine with me.
New changeset a42c0c1c5133 by Serhiy Storchaka in branch '3.5':
Issue #25410: C implementation of OrderedDict now uses type(self) instead of
New changeset 10b965d59b49 by Serhiy Storchaka in branch 'default':
Issue #25410: C implementation of OrderedDict now uses type(self) instead of
Thanks Eric.
A comment in PyODict_SetItem suggests to revert setting the value on the dict if adding new node failed. Following patch implements this suggestion. After committing the patch in issue25462 PyDict_DelItem can be replaced with _PyDict_DelItem_KnownHash and it will be always successful.
Following patch makes the code for the iterating a little simpler by merging common code..
Could you please make a review of last three patches Eric?
I will review those patches soon.
All 3 patches look fine to me.
In "odict_resize_sentinel.patch", it would be nice if you could accomplish that with a single sentinel. However, fixing the bug is more critical.
New changeset 45ce4c6b4f36 by Serhiy Storchaka in branch '3.5':
Issue #25410: Made testing that od_fast_nodes and dk_entries are in sync more
New changeset c16af48153a4 by Serhiy Storchaka in branch 'default':
Issue #25410: Made testing that od_fast_nodes and dk_entries are in sync more
Thank you for your review Eric.
I made error in commit messages, that is why they are non shown here. odict_revert_setting_on_error.patch and odict_iternext_simpler.patch were committed in 1594c23d8c2f and ad44d551c13c.
od_resize_sentinel2 in odict_resize_sentinel.patch was renamed to od_fast_nodes_size. Now I see that this is not enough. It is possible that ma_keys is located on the same place, has the same size, but has different layout for keys with matched hashes. I'm trying to write more reliable checks.
Following code prints X([(1, 1), (3, 3)]) on 3.4 and X([(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]) on 3.5+.
from collections import OrderedDict
class X(OrderedDict):
def __iter__(self):
for k in OrderedDict.__iter__(self):
if k % 2:
yield k
od = X((i, i) for i in range(5))
print(od.copy())
And even simpler example: list(od.keys()) is [1, 3] in 3.4 and [0, 1, 2, 3, 4] in 3.5.
Proposed patch makes OrderedDict.copy() more consistent between implementations.
Is this issue still relevant?
It might be more appropriate to start a new issue for this, but I'll leave that decision to somehow who would know for sure. Anyway, basically the entire dict/PyDictObject api functions do not appear work at all with OrderedDict. Or rather, OrderedDict doesn't seem to be able to recognize the changes the dict api makes to an object. This is present in both 3.6.0 and 3.7.0 by the way.
from operator import setitem
from collections import OrderedDict
from pprint import pprint
class thing:
def __init__(self):
ns = OrderedDict(a='od.__init__')
vars(__class__)['__dict__'].__set__(self, ns)
dict.__setitem__(ns, 'b', 'dict.__setitem__')
self.c = 'PyObject_SetAttr'
OrderedDict.__setitem__(ns, 'd', 'od.__setitem__')
ns.update(e='od.update')
object.__setattr__(self, 'f', 'PyObject_GenericSetAttr')
setattr(self, 'f', 'PyObject_SetAttr')
setitem(ns, 'g', 'od.__setitem__')
dict.update(ns, h='dict.update')
dict.setdefault(ns, 'i', 'i')
self = thing()
ns = self.__dict__
real_ns = {**ns}
missing = {k: ns[k] for k in real_ns.keys() - ns.keys()}
pprint(ns)
pprint(missing, width=len(f'{missing}')-1)
print(f'"b" in {ns.keys()} == {"b" in ns.keys()}')
print(f'"b" in {*ns.keys(),} == {"b" in [*ns.keys()]}')
del ns['a']
del ns['b']
print(f"ns.get('c', KeyError('c')) == {ns.get('c', KeyError('c'))}")
print(f"ns.pop('c', KeyError('c')) == {ns.pop('c', KeyError('c'))!r}")
ns.get('i')
ns.pop('i')
Maybe it's considered undefined behavior for a subclass to use
a method of one of its bases which it has overriden. That's fair enough, but as this example demonstrates, the silence and arbitrariness of the errors is a real problem when OrderedDict is used as a namespace, since it's a complete coin toss on whether one of the many internal API function will set an attribute or name via PyDict_SetItem or PyObject_SetItem. Only the former can invoke the methods OrderedDict overrides and there isn't any easy-to-find on the subject as far as I know.
> Maybe it's considered undefined behavior for a subclass to use
> a method of one of its bases which it has overriden.
In general, it's true that if OrderedDict is a subclass of dict, then it would have no defense against someone making a direct call to the dict base class. Such a call should be expected to violate the OrderedDicts invariants.
> it's a complete coin toss on whether one of the many internal
> API function will set an attribute or name via PyDict_SetItem
> or PyObject_SetItem.
Not really. The CPython source is supposed to only call PyDict_SetItem when the target is known to be an exact dictionary. If you find a case where that isn't true, please file a bug report and we'll fix it.
> It might be more appropriate to start a new issue for this, but I'll > leave that decision to somehow who would know for sure.
No need. We've known about this sort of problem for years. See for example. There isn't really much we could do about it without causing other issues that would be worse.
FWIW, this doesn't seem to be a problem in practice. Further, OrderedDict is expected to become less relevant now that regular dicts are order preserving. | https://bugs.python.org/issue25410 | CC-MAIN-2020-16 | refinedweb | 2,207 | 68.36 |
Regular expressions (a.k.a. RegEx) appear almost as a separate language inside .Net Framework. Their syntax looks cryptic for beginners thus many ASP.NET and other .Net developers avoid to learn regular expressions as long as possible, while using only classic string manipulation when work with textual data.
Although regular expression could look hard to understand at a first glance, keep in mind that they are just specially formatted strings; with small number of grammar rules you need to follow. This article will explain you the rules of writing regular expressions, so with a little practice you can add this powerful tool under your expertise.
To execute regular expressions against some text, we need to use classes from System.Text.RegularExpressions namespace. Regex class represents regular expression. More about how to use Regex class in C# or VB.NET, including four common uses you can read at Using Regex Class in ASP.NET tutorial. There is also online ASP.NET web application you can use to Test .Net Regular Expressions. This short application includes four web forms: Extract Data, Search and Replace, Data Validation and Split String. It is probably best to start with Extract Data tester and try example expressions from this tutorial.
Even if the term "regular expressions" sounds strange to you (or their short name, RegEx), you are probably already familiar with simple file search on DOS or Windows. For example, if you want to search for files in Windows, you can type something like *.pdf to find all files with .pdf extension. In this case, * character have special meaning and means "any file". Thus, this search returns all files that end with string ".pdf". Regular expressions are similar to this, but they have more rules and more special characters. Special characters in regular expressions are called metacharacters, and other normal characters are called literals.
Regular expressions can be simple or complex. Here is one very simple regular expression:
car
Of course, this is just simple string with three letters, but it is also regular expression which contains three literals. This expression will match string car regardless of its position in text. It could be in the middle of other bigger word, on the beginning or end of word or even whole text etc. This is similar to Find... function in Notepad or many other programs. Type few letters, click Find Next button and application marks where typed character sequence occurs.
To narrow previous search to start or end of the string only, we need ^ (caret) and $ (dollar) metacharacters. Caret ( ^ ) means "start of the text", and dollar ( $ ) metacharacter means "end of the text".
Expression ^car will match "car" only if text starts with "car" and will ignore other occurences in the middle or at the end of the string. In addition to that, Regex class allows using of RegexOptions.Multiline flag. If Multiline option is used in Regex class constructor, text is broken to lines so expression ^car will match the start of every line, not just start of complete text.
Expression car$ will match car only if it is at the end of the string. If RegexOptions.Multiline is used, it will match ends of each line too.
And, if we use both caret and dollar metacharacters to build expression like ^car$, it will match only if text or line (depending is Multiline option is used) is equal to "car".
\ (backslash) metacharacter is used to match metacharacters and also to add special meaning to literals. For example, regular expression ^b will match string if starts with letter b, but escaped \^b will literally search string for s substring "^b". For literals, expression d will just match letter d, but when escaped \d means any digit (from 0 to 9), expression n matches letter n, but escaped \n means new line. \ (backslash) is also a metacharacter, so to search string for this character you need to use \\.
Expression "car" will match string anywhere in text even if it is a part of larger word. To find only whole words use \b sequence. \b means start or end of the word (word boundary). The expression \bcat\b would match whole word cat, but not as subword in category, communication, ducat, scat or location.
Letter b without \ matches "b". By adding \ before, it becomes escape sequence and has special meaning to regular expression engine.
Previous problems, like matching exact word in the middle, start or end of the text are very simple tasks that could be done without regular expressions, for example using common System.String class methods. But, as problem is harder, regular expressions are more useful tool and their power reveals.
Regular expressions use | metacharacter (known as vertical bar character, or pipe) for choice between two or more alternatives, similar to OR in VB.NET or || in C#.
For example, regular expression ^red|blue$ will match both strings "red" or "blue".
Regular expressions are case sensitive by default. You can write alternation like b|B to match both cases of letter B, but .Net RegEx engine offers easier solution. Use RegexOptions.IgnoreCase for case insensitive expressions. Be aware that this option affects complete expression, so if you need just one part of the expression to be case insensitive, use classic way with alternation or using RegEx classes (more on classes in regular expressions in next tutorial).
As you see, regular expressions are not hard at all once you understand these few grammar rules. There is short .Net Regular Expressions Syntax summary you can use to remind in case you forget some metacharacter or escape sequence.
Don't forget to Test .Net Regular Expressions , for beginners is probably best to start with Extract Data page which just search for strings in given text. To use this tester, insert some text in "Input Text" textbox, then write regular expression in "Regular Expression" textbox control and finally click "Find Matches" button. All strings matched by RegEx engine will list bellow. You can test all examples in this tutorial, like extracting complete words, matching start or the end of the line or complete text, alternation, case sensitivity etc.
This is just fist step in understanding of regular expressions, but you already can see that regular expressions are not so difficult. In next tutorial, Writing Regular Expressions Character Classes, I cover regular expressions classes. I hope this tutorial series will be helpful for you and soon you'll impress your chief or coworkers with some "cryptic" but useful regular expressions :). Happy coding!
Tutorial toolbar: Tell A Friend | Add to favorites | Feedback | Google | https://beansoftware.com/NET-Tutorials/Write-Regular-Expressions.aspx | CC-MAIN-2021-49 | refinedweb | 1,091 | 64.61 |
>> outrageously one-sided. The passenger had the option of requesting to board before everybody else, and the option to request a seat up front. He did not do that. He started crying as he was wheeled down the aisle, and was sent to his seat. To accommodate him, they moved him to a different seat where his friend was and where there was an empty middle seat, so he had extra room and his friend. After boarding was complete, and likely after the plane was ready for take-off, somebody wanted to help him, and got someone from first class willing to switch seats. (Which is outlandish and I would be very upset if Delta let people guilt first class passengers into giving up the seats that they paid for/deserved) As per FAA regs, the flight attendants refused the request for the wheel-chaired gentleman and some random woman decides to complain and create a hullabaloo about some issue that is none of her business.
I am very happy to find this post very useful for me, as it contains lot of information. I always prefer to read the quality content and this thing I found in you post. I really enjoyed reading this.
Delta Airlines treats a double amputee veteran with disrespect. Refusing him a seat in First Class offered by two first class passengers because........ it would take up to much time!!!!
On my recent trip from Amsterdam to Boston and back, I can assure that Delta is the one of the worst airlines I ever used. So I am not surprised of their attitudes. On the contrary the KLM which flew me from home to Amsterdam, was much better and humane.
If taking up the offer of the first class passengers would, indeed, cause a delay then the crew did the right thing. What about all of the other passengers who would also be delayed? Neither the crew nor the well-meaning first class passengers had any right to make a unilateral decision that created inconvenience for many other people.
Gulliver's statement - "Most business travellers would surely accept waiting a few minutes to accommodate a soldier with disabilities." - is presumptuous in the extreme. No one has the right to make that decision for others.
And I have read - I cannot vouch for the truth of this - that airlines deliberately put the disabled in out-of-the-way spots so that in the event of an emergency they will not be an impediment, ie they are placed to be last off so that one disabled person doesn't block an exit and cause the deaths of many. If this is true, and I can't vouch for it, it certainly makes sense. To do otherwise, ie arrange things to get the disabled off first, would be ludicrous.
And what is this worship of the armed forces? People join as a career choice, weighing up the pros and cons as with any such choice. They go in knowing the risks, as do firemen, policemen and anyone else in a dangerous or somewhat risky job. They deserve respect and, much more important, they deserve to be adequately cared for by the government's medical and military welfare services, which are often inadequate. But the current automatic adulation is not rational.
It is unbelievable that all members of the flight crew from captain to air hostesses were so insensitive.Delta should have suspended them immediately pending investigation.All personnel carry a name tag on their uniform in many airlines and they should absolutely have given their name when asked.IF first class passengers wanted to accommodate this soldier, what was driving the crew, pure evil and spite?
oh sheeple. the disabled are at the back so they can be attended to easily.
If I was on the crew I'd do the same thing. There is respect and there is workplace policy.
Widening the disabled seats and making them comfortable seems to be a good solution in the long run.
Rubbish
when was the last time you worked? oh thats right, you must have gotten fired for acting like the CEO on a whim.
Ok, sure, the cabin crew behaved like morons and should be dismissed or at least disciplined for sheer stupidity, disrespect and utter lack of common sense; in short, huge professional incompetence. The airline should apologize. And then we should all move on. Please don,t turn this into yet another big existencial problem. If and when people in large numbers start behaving in such a stupid way there´d be reason for collective angst. For now it is only one silly decision taken by one crew in one flight of one airline; it«s not the end of civilization as we kow it!
I flew Delta almost exclusively during my days with the stove company. When I graduated college Delta was a Southern Airline who unabashedly offered Southern hospitality. They only offered Coca-Cola products (no Pepsi thank you) and smiled graciously while serving their customers all during the day and late into the evening. In my observation after acquiring Northwest Airlines (we in the air travel circles referred to them as Northworst), they began to assimilate their acquired partner's well known arrogant flight attendant ways, and the former benchmark smile and Southern hospitality began to quickly to fade away. In return (it seemed to me) the flying public reciprocated by treating the airline employees with same lack of respect, and public temper tantrums on planes and at ticket counters became far more common. The nearly non-existent customer service these days combined with the increased security screening protocols, added to the delayed and/or cancelled arrival and departure times, makes flying the dreaded chore that it is today. This recent display of disrespect for wounded warriors just adds insult to injury.
The act is highly condemnable and a big let down. It demands nothing less than an apology from the airline. They shouldn't allow such socially insensitive crew to man the cabin. Let them take a back seat elsewhere.
>the tone of the article suggest the cause of disability should >matter in how someone is treatd
This is no doubt because the person between Gulliver is clearly an American and has the normal US reaction to "the military".
People who are in a wheelchair because of say MS had no choice in the matter - just very bad luck. Veterans these days volunteered to take the risks being a soldier means. One can feel sorry for the person who because of that choice became wheelchair-bound (and the treatment here was mad when first class passangers were offering) but they should not get or expect to get better treatment than other wheelchair-bound people.
Please, follow up on this, let us know how Delta made things right.
While this view may not be popular, was the traveler in uniform - if yes, why are commercial airlines used for troop transport exposing non-military travelers to war threat. If the individual was traveling as just a citizen, why the fuss? Clearly Delta dropped the ball for this particular traveler, but this has nothing to do with being a veteran as respect is due all mobility challenged travelers regardless of the cause - plenty of air travelers have disabilities not of their own doing from illness etc, and some of the comments below and the tone of the article suggest the cause of disability should matter in how someone is treatd - with respect this is disgraceful.
My wife Shamsun N Atique & me are regular passengers of Delta Air Lines for long . Also we are supposed to fly to Sydney shortly . But we are very much saddened to learn this shocking incident , an inhuman behavior to a Veteran by the Delta's staff . We have been treated by the Delta's staffs very nicely . But I can't accept such a audacity by a staff . Now we are rethinking whether we would fly by Delta Air Lines or some other Air Lines . Thank you .
I never appreciated delta's customer service, they really should be more caring for their customers.
Having been on a American Airlines flight back in 2006 from Orlando -> San Francisco where the 1st class staff verbally abused and harassed a man who had a wheelchair with batteries, and then summarily for no reason jerked him off the plane. I wrote a letter to the American CEO and told him from that day forward (and I was an Exec Platinum traveller) they had lost my custom forever that I'd never fly them reporting the incident. Now with Delta's pathetic show of disrespect does not surprise me so they absolutely deserve what they get in spades!
US carriers just don't get it -- having several friends who are 4-wheel enhanced and they want to travel abroad to see us, the very thought of dealing with the US carriers' idiocy puts them right off travelling.
That is sad, especially considering the excellent and considerate service I have always received as a disabled traveler (mostly Air Canada, but also United and American). Is this a problem with Delta, or is that one crew particularly dysfunctional?
Why is it that the air always seems to smell like a badly maintained public restroom in the really, really bad part of town whenever I read a corporate statement?
Shame on you, Delta!
Delta is missing some very basic core values as a company. It's not what you SAY that matters in life, but rather what you DO that matters. Delta has repeatedly proven that a snake rots from the head down.
Mr Brown is veteran who made a big sacrifice for his country and people understandably respect him and are outraged by Delta's treatment of him. Nevertheless I think his patriotism and sacrifice are irrelevant to Delta's issue: why operationally do they wheel any disabled person to the back of the plane? Mr Brown's humiliation would be the same for any disabled passenger. | http://www.economist.com/comment/1806259 | CC-MAIN-2015-14 | refinedweb | 1,679 | 60.35 |
Terry Reedy wrote: > "Bruno Desthuilliers" <onurb at xiludom.gro> wrote in message > news:452a6a61$0$22844$426a74cc at news.free.fr... >> The current namespace object, of course. > > Implementing a namespace as a Python object (ie, dict) is completely > optional and implementation dependent. For CPython, the local namespace of > a function is generally *not* done that way. I know this, and that's not the point here. The op's question seemed to imply that the hypothetical __assign__ method should belong to the rhs object, which is obviously not the case - it must of course belongs to the lhs 'object'. -- bruno desthuilliers python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for p in 'onurb at xiludom.gro'.split('@')])" | https://mail.python.org/pipermail/python-list/2006-October/404703.html | CC-MAIN-2014-15 | refinedweb | 121 | 69.18 |
US7152983B2 - Lamina comprising cube corner elements and retroreflective sheeting - Google PatentsLamina comprising cube corner elements and retroreflective sheeting Download PDF
Info
- Publication number
- US7152983B2US7152983B2 US10404265 US40426503A US7152983B2 US 7152983 B2 US7152983 B2 US 7152983B2 US 10404265 US10404265 US 10404265 US 40426503 A US40426503 A US 40426503A US 7152983 B2 US7152983 B2 US 7152983B2
- Authority
- US
- Grant status
- Grant
- Patent type
-
- Prior art keywords
- cube
- groove
- angle
- corner
- lamina
-/007—Forming single grooves or ribs, e.g. tear lines, weak spots
-/105—Protective coatings
-/18—Coatings for keeping optical surfaces clean, e.g. hydrophobic or photo-catalytic films
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
- G02B5/00—Optical elements other than lenses
- G02B5/12—Reflex reflectors
- G02B5/122—Reflex reflectors cube corner, trihedral or triple reflector type
-82—Partial cutting bonded sandwich [e.g., grooving or inc
This application claim priority to provisional U.S. Patent Application Ser. No. 60/452,464 filed Mar. 6, 2003
The present invention is directed to a lamina comprising cube corner elements, a tool comprising an assembly of laminae and replications thereof including in particular retroreflective sheeting.
Retroreflective materials are characterized by the ability to redirect light incident on the material back toward the originating light source. This property has led to the widespread use of retroreflective sheeting for a variety of traffic and personal embedded in a binder layer and having associated specular or diffuse reflecting materials (e.g., pigment particles, metal flakes or vapor coats, etc.) to retroreflect incident light. Due to the symmetrical geometry of beaded retroreflectors, microsphere based sheeting exhibits the same total light return regardless of orientation, i.e. when rotated about an axis normal to the surface of the sheeting. Thus, such microsphere-based sheeting has a relatively low sensitivity to the orientation at which the sheeting is placed on a surface. In general, however, such sheeting has a lower retroreflective efficiency than cube corner sheeting.
Cube corner retroreflective sheeting typically comprises a thin transparent a negative (inverted) copy thereof, depending upon whether the finished sheeting is to have cube corner pyramids or cube corner cavities (or both). The mold is then replicated using any suitable technique such as conventional nickel electroforming to produce the manufacture of cube corner retroreflective sheeting.. Pin bundling offers the ability to manufacture a wide variety of cube corner geometries in a single mold, because each pin is individually machined. However, such techniques are impractical for making small cube corner elements (e.g. those having a cube height less than about 1 millimeter) because of the large number of pins and the diminishing size thereof required to be precisely machined and then arranged in a bundle to form the mold.. Techniques that employ laminae are generally less labor intensive than pin bundling techniques because fewer parts are separately machined. For example, one lamina can typically have about 400–1000 individual cube corner elements, in comparison to each pin having only a single cube corner element. However, techniques employing laminae have less design flexibility in comparison to that achievable by pin bundling. Illustrative examples of techniques that employ laminae can be found in EP 0 844 056 A1 (Mimura et al.); U.S. Pat. No. 6,015,214 (Heenan et al.); U.S. Pat. No. 5,981,032 (Sm.
The symmetry axis of a cube corner is a vector that trisects the structure, forming an equal angle with all three cube faces. In the aforementioned truncated cubes of Stamm, the symmetry axis is normal to the equilateral base triangle and the cubes are considered to have no cant or tilt. The nomenclature “forward canting” or “positive canting” has been used in the cube corner arts to describe truncated cube corner elements canted in a manner that increases only one base triangle included angle relative to 60°. Conversely, the nomenclature “backward canting” or “negative canting” has been used in the cube corner arts to describe cube corner elements canted in a manner that increases two of the included angles of the base triangle relative to 60°. See U.S. Pat. No. 5,565,151 (Nilsen) and U.S. Pat. No. 4,588,258 (Hoopman). Canting of PG cube corner elements is described in U.S. Pat. No. 6,015,214 (Heenan et al.).
Canting cube corner elements either backward or forward enhances entrance angularity. Full cube corner elements have a higher total light return than truncated cube corner elements for a given amount of cant, but the full cubes lose total light return more rapidly at higher entrance angles. One benefit of full cube corner elements is higher total light return at low entrance angles, without substantial loss in performance at higher entrance angles.
A common method for improving the uniformity of total light return (TLR) with respect to orientation is tiling, i.e. placing a multiplicity of small tooling sections in more than one orientation in the final production, as described for example in U.S. Pat. No. 4,243,618 (Van Arnam), U.S. Pat. No. 4,202,600; and U.S. Pat. No. 5,936,770 (Nestegard et al.). Tiling can be visually objectionable. Further, tiling increases the number of manufacturing steps in making the tooling employed for manufacture of the sheeting.
In addition to being concerned with the TLR, the performance of retroreflective sheeting also relates to the observation angularity or divergence profile of the sheeting. This pertains to the spread of the retroreflected light relative to the source, i.e. typically, vehicle headlights. The spread of retroreflected light from cube corners is dominated by effects including diffraction, polarization, and non-orthogonality. For this purpose, it is common to introduce angle errors such as described in Table 1 of column 5 of U.S. Pat. No. 5,138,488 (Szczech).
Similarly, Example 1 of EP 0 844 056 A1 (Mimura) describes a fly cutting process in which the bottom angles of V-shaped grooves formed with a diamond cutting tool were slightly varied in regular order, three types of symmetrical V-shaped grooves having depths of 70.6 μm, 70.7 μm and 70.9 μm were successively and repeatedly cut at a repeating pitch of 141.4 μm in a direction perpendicular to the major surfaces of the sheets. Thus, a series of successive roof-shaped projections having three different vertical angles of 89.90°, 90.0°, and 91.0° in a repeating pattern were formed on one edge of the sheets.
Although the art describes a variety of retroreflective designs and their measured or calculated retroreflective performance; industry would find advantage in retroreflective sheeting having new cube corner optical designs and methods of manufacturing, particularly those features that contribute to improved performance and/or improved manufacturing efficiencies.
In one embodiment, the invention discloses a lamina comprising cube corner elements having faces formed from grooves wherein adjacent grooves range from being nominally parallel to nonparallel by less than 1°. The adjacent grooves have included angles that differ by at least 2°. In one aspect the included angles of the grooves are arranged in a repeating pattern. In another aspect, the faces of the elements intersect at a common peak height. In yet another aspect, the grooves have bisector planes that range from being mutually nominally parallel to nonparallel by less than 1°.
In another embodiment, the invention discloses a lamina comprising preferred geometry cube corner elements wherein at least a portion of the cube corner elements are canted having an alignment angle selected from alignment angles between 45° and 135°, alignment angles between 225° and 315°, and combinations thereof. Preferably, a first cube corner element is canted having an alignment angle between 60° and 120° and a second adjacent cube is canted having an alignment angles between 240° and 300°. Further, the alignment angle of the first cube preferably differs from 0° or 180° by substantially the same amount as the alignment angle of the second cube differs.
In each of these embodiments, the cube corner elements preferably comprise faces formed from alternating pairs of side grooves. The included angle of each pair of side grooves preferably has a sum of substantially 180°. Further, the included angle of a first groove is preferably greater than 90° by an amount of at least about 5° (e.g. about 10° to about 20°) and the included angle of a second adjacent groove is less than 90° by about the same amount.
In another embodiment, the invention discloses a lamina having a microstructured surface comprising cube corner elements having faces formed from a side groove set wherein at least two grooves within the set are nonparallel by amounts ranging from greater than nominally parallel to about 1°. The elements preferably comprise dihedral angle errors having magnitudes between 1 arc minute and 60 arc minutes. The dihedral angle errors are preferably arranged in a repeating pattern. The grooves comprise skew and/or inclination that vary in sign and or magnitude.
In all disclosed embodiments, the adjacent grooves are preferably side grooves. Further, the elements preferably each have a face in a common plane that defines a primary groove face. In addition, the elements are preferred geometry cube corner elements.
In other embodiments, the invention discloses a master tool comprising a plurality of any one or combination of described lamina. The laminae are preferably assembled such that cube corner elements of adjacent laminae are in opposing orientations. The elements preferably have a shape in plan view selected from trapezoids, rectangles, parallelograms, pentagons, and hexagons.
In other embodiments, the invention discloses replicas of the master tool including multigenerational tooling and retroreflective sheeting. The retroreflective sheeting may be derived from the laminae or have the same optical features described with reference to a, lamina. Retroreflective sheeting may have cube corner elements, cube corner cavities, or combinations thereof.
Hence, in other embodiments, the invention discloses retroreflective sheeting comprising a row of preferred geometry cube corner elements having faces defined by grooves wherein adjacent side grooves range from being nominally parallel to nonparallel by less than 1° and have included angles that differ by at least 2°. In other embodiments, the retroreflective sheeting comprises a row of cube corner elements wherein a first cube corner element is canted having an alignment angle between 45° and 135° and a second adjacent cube is canted having an alignment angles between 225° and 315°. In yet other embodiments, the retroreflective sheeting comprises a row of preferred geometry cube corner elements having faces defined by a side groove set wherein at least two grooves within the set are nonparallel by amounts ranging from greater than nominally parallel to about 1°. In each of these embodiments, the sheeting preferably further comprises the features described with reference to the lamina or laminae.
In another aspect, the invention discloses retroreflective sheeting comprising a pair of adjacent rows of preferred geometry cube corner elements wherein adjacent elements in a row have at least one dihedral edge that ranges from being nominally parallel to nonparallel by less than 1° and wherein the pair of rows comprise at least two types of matched pairs.
In preferred embodiments, the retroreflective sheeting disclosed has improved properties. In one embodiment, the retroreflective sheeting exhibits a uniformity index of at least 1. Such uniformity can be obtained without tiling in more than one orientation. The uniformity index is preferably at least 3 and more preferably at least 5. In other preferred embodiments, the retroreflective sheeting comprises an array of preferred geometry cube corner elements that exhibits an average brightness at 0° and 90° orientation according to ASTM D4596-1a of at least 375 candelas/lux/m2 for an entrance angle of −4° and an observation angle of 0.5°. Preferably, the sheeting exhibits improved brightness at other observation angles as well.
The invention further discloses any combination of features described herein.
The drawings, particularly of the lamina(e), are illustrative and thus not necessary representative of actual size. For example the drawing(s) may be an enlarged lamina or enlarged portion of a lamina.
The present invention relates to a lamina and laminae comprising cube corner elements, a tool comprising an assembly of laminae and replicas. There invention further relates to retroreflective sheeting.
The retroreflective sheeting is preferably prepared from a master mold manufactured with a technique that employs laminae. Accordingly, at least a portion and preferably substantially all the cube corner elements of the lamina(e) and retroreflective sheeting are full cubes that are not truncated. In one aspect, the base of full cube elements in plan view are not triangular. In another aspect, the non-dihedral edges of full cube elements are characteristically not all in the same plane (i.e. not coplanar). Such cube corner elements are preferably “preferred geometry (PG)), trapezoids or pentagons are examples.
“Entrance angle” refers to the angle between the reference axis (i.e. the normal vector to the retroreflective sample) and the axis of the incident light.
“Orientation” refers to the angle through which the sample may be rotated about the reference axis from the initial zero degree orientation of a datum mark.
Lamina(e) refers to at least two lamina. “Lamina” refers to a thin plate having length and height at least about 10 times its thickness (preferably at least 100, 200, 300, 400, 500 times its thickness). The invention is not limited to any particular dimensions of lamina(e). In the case of lamina intended for use in the manufacture of retroreflective sheeting, optimal dimensions may be constrained by the optical requirements of the final design (e.g. cube corner structures). In general the lamina the lamina is at least about 0.001 inches (0.0254 mm) and more preferably at least about 0.003 inches (0.0762 mm). The lamina ranges in length from about 1 inch (25.4 mm) to about 20 inches (50.8 cm) and is typically less than 6 inches (15.24 cm). The height of the lamina typically ranges from about 0.5 inches (12.7 mm) to about 3 inches (7.62 cm) and is more typically less than about 2 inches (5.08 cm).
With reference to
Lamina 10 can be characterized in three-dimensional space by superimposing a Cartesian coordinate system onto its structure. A first reference plane 24 is centered between major surfaces 12 and 14.. For the sake of clarity, various geometric attributes of the present invention will be described with reference to the Cartesian reference planes as set forth herein. However, it will be appreciated that such geometric attributes can be described using other coordinate systems or with reference to the structure of the lamina.
The lamina(e) of the present invention preferably comprise cube corner elements having faces formed from, and thus comprise ,a first groove set, an optional second groove set, and preferably a third primary groove (e.g. primary groove face).
The direction of a particular groove is defined by a vector aligned with the groove vertex. The groove direction vector may be defined by its components in the x, y and z directions, the x-axis being perpendicular to reference plane 28 and the y-axis being perpendicular to reference plane 24. For example, the groove direction for groove 30 b is defined by a vector aligned with groove vertex 33 b. It is important to note that groove vertices may appear parallel to each other in top plan view even though the grooves are not parallel (i.e. different z-direction component).
As used herein, the term “groove set” refers to grooves formed in working surface 16 of the lamina 10 that range from being nominally parallel to non-parallel to within 1° to the adjacent grooves in the groove set. As used herein “adjacent groove” refers to the closest groove that is nominally parallel or non-parallel to within 1°. Alternatively or in addition thereto, the grooves of a groove set may range from being nominally parallel to non-parallel to within 1° to particular reference planes as will subsequently be described. Accordingly, each characteristic with regard to an individual groove and/or the grooves of a groove set (e.g. perpendicular, angle, etc.) will be understood to have this same degree of potential deviation. Nominally parallel grooves are grooves wherein no purposeful variation has been introduced within the degree of precision of the groove-forming machine. The grooves of the groove set may also comprise small purposeful variations for the purpose of introducing multiple non-orthogonality (MNO) such as included angle errors, and/or skew, and/or inclination as will subsequently be described in greater detail.
Referring to
In another embodiment depicted in
Both these first and second groove sets may also be referred to herein as “side grooves”. As used herein side grooves refer to a groove set wherein the groove(s) range from being nominally parallel to non-parallel to within 1°, per their respective direction vectors, to the adjacent side grooves of the side groove set. Alternatively or in addition thereto, side grooves refers to a groove that range from being nominally parallel to reference plane 28 to nonparallel to reference plane 28 to within 1°. Side grooves are typically perpendicular to reference plane 24 to this same degree of deviation in plan view. Depending on whether the side grooves are nominally parallel or non-parallel within 1°, individual elements in the replicated assembled master typically have the shape of trapezoids, rectangles, parallelograms and pentagons, and hexagons when viewed in plan view with a microscope or by measuring the dihedral angles or parallelism of the side grooves with an interferometer. Suitable interferometers will subsequently be described.
Although the third face of the elements may comprise working surface 12 or 14, such as describe in EP 0 844 056 A1 (Mimura et al.), the lamina preferably comprises a primary groove face 50 that extends substantially the full length of the lamina. Regardless of whether the third face is a working surface (i.e. 12 or 14) of the lamina or a primary groove face, the third face of each element within a row preferably share a common plane. With reference to
A pair of single laminae with opposing orientations and preferably multiple laminae with opposing orientations are typically assembled into a master tool such that their respective primary groove faces form a primary groove. For example, as depicted in FIGS. 6 and 8–9, four laminae (i.e. laminae 100, 200, 300 and 400 are preferably assembled such that every other pair of laminae are positioned in opposing orientations (i.e. the cube corner elements of lamina 100 are in opposing orientation with the cube corner elements of lamina 200 and the cube corner elements of lamina 300 are in opposing orientation with the cube corner elements of lamina 400). Further, the pairs of laminae having opposing orientation are positioned such that their respective primary groove faces 50 form primary groove 52. Preferably the opposing laminae are positioned in a configuration (e.g. 34 b aligned with 42 b) in order to minimize the formation of vertical walls..
In one embodiment, as depicted in
In one aspect, the differing included angles (e.g. of adjacent side grooves) are arranged in a repeating pattern to minimize the number of different diamond cutting tools needed. In such embodiment, the sum of adjacent side groove angles is about 180°. In a preferred embodiment, the lamina comprises a first sub-set of side grooves having an included angle greater than 90° alternated with second sub-set of side grooves having an included angle less than 90°. In doing so, the included angle of a first groove is typically greater than 90° by an amount of at least about 5°, and preferably by an amount ranging from about 10° to about 20°; whereas the included angle of the adjacent groove is less than 90° by about the same amount.
Although, the lamina may further comprise more than two sub-sets and/or side grooves having included angles of nominally 90°, the lamina is preferably substantially free of side grooves having an included angle of nominally 90°. In a preferred embodiment, the lamina comprises an alternating pair of side grooves (e.g. 75.226° and 104.774°) and thus, only necessitates the use of two different diamonds to form the totality of side grooves. Accordingly, with reference to
In another aspect, alternatively or in combination with the differing included angles (e.g. of adjacent side grooves) being arranged in a repeating pattern, the resulting cube corner elements have faces that intersect at a common peak height, meaning that cube peaks (e.g. 36) are within the same plane to within 3–4 microns. It is surmised that a common peak height contribute to improved durability when handling the tooling or sheeting by evenly distributing the load.
Alternatively or in combination thereof, the lamina comprises sideways canted cube corner elements. For cube corner elements that are solely canted forward or backward, the symmetry axes are canted or tilted in a cant plane parallel with reference plane 28. The cant plane for a cube corner element is the plane that is both normal to reference plane 26 and contains the symmetry axis of the cube. Accordingly, the normal vector defining the cant plane has a y component of substantially zero for cube corner elements that are solely canted forward or backward. In the case of cube corner elements that are solely canted sideways, the symmetry axes of the cubes are canted in a plane that is substantially parallel to reference plane 24 and thus, the normal vector defining the cant plane has an x component of substantially zero.
The projection of the symmetry axis in the x-y plane may alternatively be used to characterize the direction of cant. The symmetry axis is defined as the vector that trisects the three cube corner faces forming an equal angle with each of these three faces.
Alternatively, the cube may be canted such that the cant plane normal vector comprises both an x-component and y-component (i.e. x-component and y-component are each not equal to zero). At an alignment angle between 0° and 45° or between 0° and 315° the backward cant component is predominant with the backward cant component and sideways cant component being equal at an alignment angle of 45° or 315°. Further at an alignment angle between 135° and 225°, the forward cant component is predominant with the forward cant component and sideways cant component being equal at 135° and at 225°. Accordingly, cant planes comprising a predominant sideways cant component have an alignment angle between 45° and 135° or between 225° and 315°. Hence, a cube corner element is predominantly sideways canting when the absolute value of the y-component of the cant plane normal vector is greater than the absolute value of the x-component of the cant plane normal vector.
For embodiments wherein the sideways canted cubes are formed from an alternating pair of side grooves having different included angle cubes where the cant plane is parallel to reference plane 24 the adjacent cubes within a given lamina (e.g. α-β or α′-β′) are canted in the same or parallel planes. However, in general, if there is an x component to the cant plane normal vector, then adjacent cubes within a particular lamina are not canted in the same plane. Rather, the cube corner matched pairs are canted in the same or parallel planes (i.e. α-α′ or β-β′). Preferably, the cube corner elements of any given lamina have only two different alignment angles, e.g. derived from adjacent side grooves comprising different included angles. The alignment angle for the sideways canting example in
In contrast, sideways canting results in a cube design comprising two different cube orientations within the same row and thus created by the same side groove set. For a single lamina comprising both the first and second set of side grooves or a pair of adjacent laminae assembled in opposing orientations, the laminae comprise four distinctly different cubes and two different matched pairs, as depicted in
Predicted total light return for a cube corner matched pair array may be calculated from a knowledge of percent active area and ray intensity. Total light return is defined as the product of percent active area and ray intensity. Total light return for directly machined cube corner arrays is described by Stamm U.S. Pat. No. 3,812,706.
For an initial unitary light ray intensity, losses may result from two pass transmissions through the front surface of the sheeting and from reflection losses at each of the three cube surfaces. Front surface transmission losses for near normal incidence-and a sheeting refractive index of about 1.59 are Reflection losses for cubes that have been reflectively coated depend for example on the type of coating and the angle of incidence relative to the cube surface normal. Typical reflection coefficients for aluminum reflectively coated cube surfaces are roughly 0.85 to 0.9 at each of the cube surfaces. Reflection losses for cubes that rely on total internal reflection are essentially zero (essentially 100% reflection). However, if the angle of incidence of a light ray relative to the cube surface normal is less than the critical angle, then total internal reflection can break down and a significant amount of light may pass through the cube surface. Critical angle is a function of the refractive index of the cube material and of the index of the material behind the cube (typically air). Standard optics texts such as Hecht, “Optics”, 2nd edition, Addison Wesley, 1987 explain front surface transmission losses and total internal reflection. Effective area for a single or individual cube corner element may be determined by, and is equal to, the topological intersection of the projection of the three cube corner surfaces on a plane normal to the refracted incident ray with the projection of the image surfaces of the third reflection on the same plane. One procedure for determining effective aperture is discussed for example by Eckhardt, Applied Optics, v. 10, n. 7, July 1971, pg. 1559–1566. Straubel U.S. Pat. No. 835,648 also discusses the concept of effective area or aperture. Percent active area for a single cube corner element is then defined as the effective area divided by the total area of the projection of the cube corner surfaces. Percent active area may be calculated using optical modeling techniques known to those of ordinary skill in the optical arts or may be determined numerically using conventional ray tracing techniques. Percent active area for a cube corner matched pair array may be calculated by averaging the percent active area of the two individual cube corner elements in the matched pair. Alternatively stated, percent active aperture equals the area of a cube corner array that is retroreflecting light divided by the total area of the array. Percent active area is affected for example by cube geometry, refractive index, angle of incidence, and sheeting orientation.
Referring to
A single matched pair of forward or backward canted cubes typically have two planes (i.e. V1 and V2) of broad entrance angularity that are substantially perpendicular to one another. Forward canting results in the principle planes of entrance angularity being horizontal and vertical as shown in
In order to compare the uniformity of total light return (TLR) of various optical designs, the average TLR at orientations of 0°, 45° and 90° may be divided by the range of TLR at orientations of 0°, 45° and 90°, i.e. the difference between the maximum and minimum TLR at these angles, all at a fixed entrance angle. The entrance angle is preferably at least 30° or greater, and more preferably 40° or greater. Preferred designs exhibit the maximum ratio of average TLR relative to TLR range. This ratio, i.e. “uniformity index (UI)” was calculated for a 40° entrance angle for the forward and backward canted cubes of
Improved orientation uniformity results when the uniformity index is greater than 1. Preferably, the uniformity index is greater than 3 (e.g. 4), and more preferably greater than 5 (e.g. 6, 7, 8). Uniformity index will vary as a function of variables such as cube geometry (e.g. amount and type of cant, type of cube, cube shape in plan view, location of cube peak within aperture, cube dimensions), entrance angle, and refractive index.
Preferably, the alignment angle is greater than 50° (e.g. 51°, 52°, 53°, 54°), more preferably greater than 55° (e.g. 56°, 57°, 58°, 59°), and even more preferably greater than 60°. Further the alignment angle is preferably less than 130° (e.g. 129°, 128°, 127°, 126°) and more preferably less than 125° (e.g. 124°, 123°, 122°, 121°), and even more preferably less than 120°. Likewise the alignment angle is preferably greater than 230° (e.g. 231°, 232°, 233°, 234°), and more preferably greater than 235° (e.g. 236°, 237°, 238°, 239°), and even more preferably greater than 240° . Further the alignment angle is preferably less than 310° (e.g. 309°, 308°, 307°, 306°) and more preferably less than 305° (e.g. 304°, 303°, 302°, 301°) and even more preferably less than 300°.
The amount of tilt of the cube symmetry axes relative to a vector perpendicular to the plane of the cubes is at least 2° and preferably greater than 3°. Further, the amount of tilt is preferably less than 90°. Accordingly, the most preferred amount of tilt ranges from about 3.5° to about 8.5° including any interval having end points selected from 3.6°, 3.7°, 3.8°, 3.9°, 4.0°, 4.1°, 4.2°, 4.3°, 4.4° and 4.5° combined with end points selected from 7.5°, 7.6°, 7.7°, 7.8°, 7.9°, 8.0°, 8.1°, 8.2°, 8.30° and 8.4°. Cube geometries that may be employed to produce these differing amounts of sideways cant are summarized in Table 2. The alignment angle may be 90° or 270° for each amount of cant.
Although differing included angles alone or in combination with the previously described sideways canting provide improved brightness uniformity in TLR with respect to changes in orientation angle over a range of entrance angles, it is also preferred to improve the observation angularity or divergence profile of the sheeting. This involves improving the spread of the retroreflected light relative to the source (typically, vehicle headlights). As previously described retroreflected light from cube corners spreads due to effects such as diffraction (controlled by cube size), polarization (important in cubes which have not been coated with a specular reflector), and non-orthogonality (deviation of the cube corner dihedral angles from 90° by amounts less than 1°). Spread of light due to non-orthogonality is particularly important in (e.g. PG) cubes produced using laminae since relatively thin laminae would be required to fabricate cubes where the spreading of the return light was dominated by diffraction. Such thin laminae are particularly difficult to handle during fabrication.
Alternatively, or in addition to the features previously described, in another embodiment the present invention relates to an individual lamina, a master tool comprising the assembled laminae, as well as replicas thereof including retroreflective replicas, comprising side grooves wherein the side grooves comprise “skew” and/or “inclination”..
For example, with reference to and/or sign. The difference in magnitude is typically at least ¼ arc minutes, more preferably at least ½ arc minutes, and most preferably at least 1 arc minutes. Hence the grooves are nonparallel by amount greater than nominally parallel. Further, the skew and/or inclination is such that the magnitude is no more than about 1° (i.e. 60 arc minutes). Further, the (e.g. side) grooves may comprise a variety of different components of skew and/or inclination along a single lamina.
Dihedral angle errors may also be varied by changing the half angles of the primary or side grooves during machining. Half angle for side grooves is defined as the acute angle formed by the groove face and a plane normal to reference plane 26 that contains the groove vertex. Half angle for primary grooves or groove faces is defined as the acute angle formed by the groove face and reference plane 24. Changing the half angle for the primary groove results in a change in slope of groove face 50 via rotation about the x-axis. Changing the half angle for a side groove may be accomplished via either changing the included angle of the groove (the angle formed by opposing groove faces e.g. 82 c and 84 c) or by rotating a groove about its vertex. For example, changing the angle of the primary groove face 50 will either increase or decrease all of the dihedral 1-2 and dihedral 1-3 errors along a given lamina. This contrasts to changes in inclination where the dihedral 1-2 and dihedral 1-3 errors can be varied differently in each groove along a given lamina. Similarly, the half angle for the side grooves may vary, resulting in a corresponding change in dihedral 2-3. Note that for side grooves that are orthogonal or nearly orthogonal (within about 1°) to the primary groove face, dihedral 1-2 and dihedral 1-3 are very insensitive to changes in side groove half angle. As a result, varying the half angles of the primary or side grooves during machining will not allow dihedral 1-2 and dihedral 1-3 to vary in opposition within a given cube corner. Varying the half angles of the primary or side grooves during machining may be used in combination with skew and/or inclination to provide the broadest possible control over cube corner dihedral angle errors with a minimum number of tool changes. While the magnitude of any one of half angle errors, skew, or inclination can ranges up to about 1°, cumulatively for any given cube the resulting dihedral angle error is no more than about 1°.
For simplicity during fabrication, skew and/or inclination are preferably introduced such that the dihedral angle errors are arranged in patterns. Preferably, the pattern comprises dihedral angle errors 1-2 and 1-3 that are varied in opposition within a given cube corner.
Spot diagrams are one useful method based on geometric optics of illustrating the spread in the retroreflected light resulting from non-orthogonality from a cube corner array. Cube corners are known to split the incoming light ray into up to six distinct return spots associated with the six possible sequences for a ray to reflect from the three cube faces. The radial spread of the return spots from the source beam as well as the circumferential position about the source beam may be calculated once the three cube dihedral errors are defined (see e.g. Eckhardt, “Simple Model of Cube Corner Reflection”, Applied Optics, V10, N7, July 1971). Radial spread of the return spots is related to observation angle while circumferential position of the return spots is related to presentation angle as further described in US Federal Test Method Standard 370 (Mar. 1, 1977). A non-orthogonal cube corner can be defined by the surface normal vectors of its three faces. Return spot positions are determined by sequentially tracking a ray as it strikes and reflects from each of the three cubes faces. If the refractive index of the cube material is greater than 1, then refraction in and out of the front surface cube must also be taken into account. Numerous authors have described the equations related to front surface reflection and refraction (e.g. Hecht and Zajac, “Optics”, 2nd edition, Addison Wesley 1987). Note that spot diagrams are based on geometric optics and hence neglect diffraction. Accordingly, cube size and shape is not considered in spot diagrams.
The return spot diagram for five different cubes that are backward canted by 7.47 degrees (e.g.
The dihedral errors as a function of primary groove half angle errors are presented in Table 3 for the same errors used to produce
The return spot diagram for the same type of backward canted cubes with dihedral 2-3 errors of −20, −15, −10, −5, and 0 arc minutes is depicted in
The dihedral errors as a function of primary groove half angle errors are presented in Table 4 for the errors used to produce
The preceding examples (i.e.
The dihedral errors for this example of varying inclination are presented in Table 5. The order of machining of the inclinations (arc minutes) is −1, +5, +5, −1 in a repeating pattern. For example with reference to cube no. 1, the first side groove has an inclination of −1 and the second side groove has an inclination of +5. Note that dihedral 1-2 and dihedral 1-3 vary in opposition with different magnitudes (absolute value of the dihedral angle errors are unequal) and signs.
The positive and negative skews of the two preceding examples may be combined, providing the spot diagram of
The same skew and inclination combinations may also be utilized advantageously in combination with sideways canted cube corners to provide a uniformly distributed spot diagram. Sideways canted cubes, as previously discussed, comprise two different cube orientations within the same row. Preferably, care should be taken to apply the combinations of skew and/or inclination equally to both types of cube in a given row (e.g. alpha (α) and beta (β)) in order to obtain uniform performance at various entrance and orientation angle combinations. The return spot diagram for cubes that are sideways canted by 6.03° (
A characteristic of the exemplary cube corner elements of Tables 5–8 is the formation of at least one and typically a plurality of PG cube corner elements in a row having three dihedral angle errors wherein the dihedral angle errors are different from each other. Another characteristic is that the dihedral angle errors, and thus the skew and/or inclination, is arranged in a repeating pattern throughout a lamina or row of adjacent cube corner elements. Further the adjacent lamina or row is preferably optically identical except rotated 180° about the z-axis forming pairs of laminae or pairs of rows.
Methods of machining laminae and forming a master tool comprising a plurality of laminae. Patent No. 6,159,407 (Krinke et al.).
Accordingly, further described herein are methods of machining laminae by providing a lamina or laminae and forming V-shaped grooves on working surface 16 of the lamina wherein the grooves are formed with any one or combinations of the features previously described.
In general, the lamina(e) may comprise.
The diamond tools suitable for use are of high quality such as diamond tools that can be purchased from K&Y Diamond (Mooers, N.Y.) or Chardon Tool (Chardon, Ohio). In particular, suitable diamond tools are scratch free within 10 mils of the tip, as can be evaluated with a 2000× white light microscope. Typically, the tip of the diamond has a flat V-shaped grooves are groove spacing and groove depth is preferably at least as precise as +/−500 nm, more preferably at least as precise as +/−250 nm and most preferably at least as precise as +/−100 nm. The precision of the groove angle is at least as precise as +/−2 arc minutes (+/−0.033 degrees), more preferably at least as precise as +/−1 arc minute (+/−0.017 degrees), even more preferably at least at precise as +/−½ arc minute (+/−0.0083 degrees), and most preferably at least as precise as +/−¼ arc minute (+/−0.0042 degrees) over the length of the cut (e.g. the thickness of the lamina). Further, the resolution (i.e. ability of groove forming machine to detect current axis position) is typically at least about 10% of the precision. Hence, for a precision of +/−100 nm, the resolution is at least +/−10 nm. Over short distances (e.g. about 10 adjacent parallel grooves), the precision is approximately equal to the resolution. In order to consistently form a plurality of grooves of such fine accuracy over duration of time, the temperature of the process is maintained within +/−0.1° C. and preferably within +/−0.01° C.
While the change in shape of a single cube corner element due to skew and/or inclination is small with respect to a single element (e.g. limited primarily to changes in the dihedral angles), it is evident that forming skewed and/or inclined grooves in a stack of laminae may be problematic. Since the side grooves deviate from parallel up to as much as 1°, significantly varying cube geometries may be produced across the stack. These variations increase as the stack size increases. The calculated maximum number of laminae that can be machined concurrently (i.e. in a stack) without creating significantly varying cube geometries is as few as two laminae (e.g. for 1° skew, 0.020 inch (0.508 mm) thick lamina with 0.002 inch (0.0508 mm) side groove spacing).
Due to the problems of machining stacks of laminae having skewed and/or inclined side grooves, in the practice of such embodiments the side grooves are preferably formed in individual laminae with a groove-forming machine. A preferred method for forming grooves on the edge portion of individual laminae, assembling the laminae into a master tool, and replicating the microstructured surface of the assembled laminae is described in U.S. patent application Ser. No. 10/383039, entitled “Methods of Making Microstructured Lamina and Apparatus” filed Mar. 6, 2003, incorporated herein by reference. U.S. patent application Ser. No. 10/383039 was concurrently filed with Provisional Patent Application Serial No. 60/452464, to which the present application claims priority. is employed to make a positive copy (i.e. cube corner element) sheeting whereas, a positive copy tool is employed to make a negative copy (i.e. cube corner cavity) sheeting. Further, retroreflective sheeting may comprise combination of cube corner elements and cube corner cavity microstructures. Electroforming techniques such as described in U.S. Pat. Nos. 4,478,769 and 5,156,863 (Pricone) as well as U.S. Pat. No. 6,159,407 (Krinke) are known. Tiling such toolings together can then assemble a master tool of the desired size. In the present invention the toolings are typically tiled in the same orientation.
As used herein, “sheeting” refers to a thin piece of polymeric (e.g. synthetic) material upon which cube corner microstructures have been formed..020 inches (0.508 mm) and more preferably less than about 0.014 inches (0.3556 mm). The retroreflective sheeting may further include surface layers such as seal films or overlays. correspondingly smaller dimensions.
The retroreflective sheet is preferably manufactured as an integral material, i.e. wherein the cube-corner elements are interconnected in a continuous layer throughout the dimension of the mold, the individual elements and connections therebetween comprising the same material. The surface of the sheeting opposing the microprismatic surface is typically smooth and planar, also being referred to as the “land layer”. The thickness of the land layer (i.e. the thickness excluding that portion resulting from the replicated microstructure) is between 0.001 and 0.100 inches and preferably between 0.003 and 0.010 inches. Manufacture of such sheeting is typically achieved by casting a fluid resin composition onto the tool and allowing the composition to harden to form a sheet. A preferred method for casting fluid resin onto the tool is described in U.S. patent application Ser. No. 10/382375, entitled “Method of Making Retroreflective Sheeting and Slot Die Apparatus”, filed Mar. 6, 2003, incorporated herein by reference. U.S. patent application Ser. No. 10/382375 was concurrently filed with Provisional Patent Application Ser. No. 60/452464, to which the present invention claims priority.
Optionally, however, the tool can be employed as an embossing tool to form retroreflective articles, such as described in U.S. Pat. No. 4,601,861 (Pricone). Alternatively, the retroreflective sheeting can be manufactured as a layered product by casting the cube interconnected by the preformed film. Further, the elements and film are typically comprised of different materials.
In the manufacture of the retroreflective sheeting, it is preferred that the channels of the tool are roughly aligned with the direction of the advancing tool as further described in U.S. patent application Ser. No. 60/452605, entitled “Methods of Making Retroreflective Sheeting and Articles”, filed Mar. 6, 2003. U.S. patent application Ser. No. 60/452605 was filed concurrently with Provisional Patent Application Ser. No. 60/452464, to which the present invention claims priority. Accordingly, prior to any further manufacturing steps, the primary groove of the sheeting would be substantially parallel to the edge of the roll of the sheeting. The present inventors have found that orienting the channels in this downweb manner allows for faster replication than when the primary groove is oriented cross web. It is surmised that the primary groove and other cube structures combine to form channels for improved resin flow.
Suitable resin compositions for the retroreflective sheeting of this invention are preferably transparent materials that are dimensionally stable, durable, weatherable, and readily formable into the desired configuration. Examples of suitable materials ac higher refractive index, which generally contributes to improved retroreflective performance over a wider range of entrance angles. These materials may also include dyes, colorants, pigments, UV stabilizers, or other additives.
A specular reflective coating such as a metallic coating can be placed on the backside of the cube-corner elements. The metallic coating can be applied by known techniques such as vapor depositing or chemically.
An adhesive layer also can be disposed behind the cube-corner elements or the seal film to enable the cube-corner retroreflective sheeting to be secured to a substrate. Suitable substrates include wood, aluminum sheeting, galvanized steel, polymeric materials such as polymethyl methacrylates, polyesters, polyamids, polyvinyl fluorides, polycarbonates, polyvinyl chlorides, polyurethanes, and a wide variety of laminates made from these and other materials.
With reference to
Regardless of the method of making the retroreflective sheeting or whether the master tool was derived from a lamina technique or other technique, the sheeting of the invention has certain unique optical features that can be detected by viewing the sheeting with a microscope or interferometer as previously described.
In one aspect, the retroreflective sheeting comprises a row of cube corner elements or an array of cube corner element wherein the included angle between a first and second concurrent element in a row differs from the included angle between a second and a third concurrent element in the row. With respect to the sheeting, the row is defined by the elements wherein a face of each element within the row shares a common plane (e.g. primary groove face, working surface 12 or 14). The magnitude of the difference in included angle between adjacent cubes as well as other preferred characteristics (e.g. arranged in a repeating pattern, common peak height, bisector planes that range form being mutually nominally parallel to non-parallel by less than 1°) within a row or array is the same as previous described with respect to the lamina.
Alternatively or in combination thereof, the retroreflective sheeting comprises a row or an array of cube corner elements (e.g. PG cube corner elements) wherein at least a portion of the elements in a row or array are predominantly sideways canted, the elements having an alignment angles between 45° and 135° and/or having an alignment angle between 225° and 315° relative to the dihedral edge that is substantially perpendicular to a row of elements in plan view. In preferred embodiments, the retroreflective sheeting comprises a row of cube corner elements or an array having cube corner elements having each of these alignment angles. Such array is substantially free of predominantly forward canted or predominantly backward canted cube corner elements. The retroreflective sheeting comprising predominantly sideways canted cube corner elements may further comprise any of the characteristics previously described with regard to the lamina.
Alternatively or in combination thereof, the retroreflective sheeting comprises skewed and/or inclined grooves. Hence, the row or the array wherein at least two adjacent grooves and preferably all the grooves of the (e.g. side) groove set are non-parallel by amount ranging from greater than nominally parallel to about 1° and may further include the various attributes described with regard to lamina comprising this feature.
In another aspect, alone or in combination with differing included angles and/or sideways canting, the retroreflective sheeting may comprise a row or elements or an array wherein the grooves of the side groove set are nominally parallel to each other, yet range from being nominally parallel to non-parallel to reference plane 28., at various observation angles.. The brightness is preferably at least 625 candelas per lux per square meter (CPL), more preferably at least 650 CPL, even more preferably at least 675 CPL, and most preferably at least 700 CPL at an observation angle of 0.2°. Alternatively, and preferably in addition thereto, the brightness at an observation angle of 0.33° is preferably at least 575 CPL, more preferably at least 600 CPL, even more preferably at least 625 CPL, and most preferably at least 650 CPL. In addition or in the alternative, the brightness at an observation angle of 0.5° is preferably at least 375 CPL, more preferably at least 400 CPL, even more preferably at least 425 CPL, and most preferably at least 450 CPL. Further, the brightness at an observation angle of 1.0° is preferably at least 80 CPL, more preferably at least 100 CPL, and most preferably at least 120 CPL. Likewise, the brightness at an observation angle of 1.5° is preferably at least 20 CPL and more preferably at least 25 CPL. The retroreflective sheeting may comprise any combination of brightness criteria just stated.
Improved brightness in the region around 0.5 observation angle (i.e. 0.4 to 0.6) is particularly important for viewing traffic signs (e.g. right should mounted) from passenger vehicles at distances of roughly 200 to 400 feet and for the viewing of traffic signs (e.g. right should mounted) from drivers of large trucks at distances of about 450 to 950 feet.
Objects and advantages of the invention are further illustrated by the following examples, but the particular materials and amounts thereof recited in the examples, as well as other conditions and details, should not be construed to unduly limit the invention.
Grooves were formed in individual lamina, the individual lamina assembled, and the microstructured surface replicated as described in previously cited U.S. patent application Ser. No. 10/383039, filed Mar. 6, 2003. U.S. patent application Ser. No. 10/383039 was filed concurrently with Provisional Patent Application Serial No. 60/452464 to which the present application claims priority. All the machined laminae had the geometry depicted in
Eight laminae that differed with regard to the angle error and/or skew and/or inclination of the side grooves were formed such that the dihedral angle errors reported in each of the following Tables 10–14 were obtained with the exception of Table 13 wherein the skew of a portion of the side grooves was modified.
Lamina 1 and Lamina 2
The side groove parameters of the first lamina as well as the second lamina, the second lamina being an opposing lamina to the first lamina, are reported in Tables 10 and 11, respectively. The primary groove half angle error was −8 arc minutes for all the primary grooves. Side groove nominal included angles (the angles required to produce orthogonal cubes) were 75.226° and 104.774°. The included angle error for all side grooves was −9.2 arc minutes, resulting in actual side groove included angles of 75.073° and 104.621°. While the included angle error was constant for the side grooves, the half angle errors were varied. Half angle errors for the first lamina type ranged from −14.8 arc minutes to 5.6 arc minutes as shown in column 3 of Table 10. The half angle errors are presented in groups of two (totaling −9.2 arc minutes) corresponding to the two half angles for each side groove. The dihedral 2-3 error results from the combination of half angle errors on adjacent side grooves and is summarized in column 4. Dihedral 2-3 errors varied from −1.6 arc minutes to −16.8 arc minutes for the first lamina.
Skew and inclination are set forth in columns five and six of Table 10, respectively. Skew ranged from −8.0 arc minutes to 15.0 arc minutes for the first lamina. Inclination varied from −6.1 arc minutes to 10.8 arc minutes. The 1-2 and 1-3 dihedral errors resulting from skew and inclination of the side grooves are shown in the final two columns. Note that dihedral errors 1-2 and 1-3 varied in opposition, with at least one cube in the lamina comprising dihedral errors 1-2 and 1-3 with different magnitudes and/or signs.
The side grooves of the second lamina, is summarized in Table 11 and is closely related to the lamina of Table 10. The first and second columns, that set forth the nominal side groove angle as well as side groove included angle error, are identical. All other columns for side groove parameters (half angle errors, skew and inclination) as well as dihedral angle errors are inverted in relation to Table 10. This reflects the fact that an opposing lamina is optically identical to its counterpart except rotated 180° about the z-axis.
Lamina 4, Lamina 6 and Lamina 8
For simplicity, the side groove parameters of the fourth, sixth, and eight lamina that are respectively opposing the third, fifth and seventh lamina are not reiterated since the side grooves parameter have this same inverted relationship as just described.
Lamina 3
The side groove parameter of the third lamina is set forth in Table 12. Primary groove half angle error was −8 arc minutes. The basic geometry (dimensions and nominal side groove included angles) was the same as the first lamina type. The actual included angle error for all side grooves was again −9.2 arc minutes. Half angle errors for the second lamina type side grooves ranged from −14.8 arc minutes to 5.6 arc minutes. Dihedral 2-3 errors varied from −1.6 arc minutes to −16.8 arc minutes. Skew ranged from −14.0 arc minutes to 21.3 arc minutes while inclination varied from −12.7 arc minutes to 16.8 arc minutes for this lamina type. The 1-2 and 1-3 dihedral errors (shown in the final two columns) varied in opposition.
Lamina 5
The groove parameters of the fifth lamina is set forth in Table 13. The primary groove half angle error was −4 arc minutes. The basic geometry (dimensions and nominal side groove included angles) was the same as the preceding laminae. The included angle error for all side grooves was −1.6 arc minutes, resulting in actual side groove included angles of 75.199° and 104.747°. Half angle errors for the third lamina type ranged from −5.2 arc minutes to 3.6 arc minutes. Dihedral 2-3 errors varied from −7.2 arc minutes to 4.0 arc minutes. Skew ranged from −7.0 arc minutes to 9.5 arc minutes while inclination varied from −8.2 arc minutes to 1.4 arc minutes. The 1-2 and 1-3 dihedral errors (shown in the final two columns) varied in opposition.
Lamina 7
The side groove parameter for the seventh lamina is set forth in Table 14. The primary groove half angle error was −4.0 arc minutes. The basic geometry (dimensions and nominal side groove included angles) was the same as the first lamina type. The actual included angle error for all side grooves was again −1.6 arc minutes. Half angle errors ranged from −5.2 arc minutes to 3.6 arc minutes. Dihedral 2-3 errors varied from −7.2 arc minutes to 4.0 arc minutes. Skew ranged from −5.3 arc minutes to 5.3 arc minutes while inclination varied from −2.1 arc minutes to 4.6 arc minutes for this lamina type. The 1-2 and 1-3 dihedral errors (shown in the final two columns) varied in opposition.
A total of 208 laminae were assembled such that the non-dihedral edges of the elements of opposing laminae contacted each other to a precision such that the assembly was substantially free of vertical walls (e.g. walls greater than 0.0001 in lateral dimensions). The laminae were assembled such that the lamina order 1–8 was sequentially repeated throughout the assembly and the structured surface of the assembly was then replicated by electroforming to create a cube cavity tool. The assembly and electroforming process is further described in previously cited U.S. patent application Ser. No. 10/383039, filed Mar. 6, 2003. U.S. patent application Ser. No. 10/383039 was filed concurrently with Provisional Patent Application Serial No. 60/452464 to which the present application claims priority.
For Example 1A, the tool was used in a compression molding press with the pressing performed at a temperature of approximately 375° F. (191° C.) to 385° F. (196° C.), a pressure of approximately 1600 psi, and a dwell time of 20 seconds. The molded polycarbonate was then cooled to about 200° F. (100° C.) over 5 minutes.
For Example 2A, molten polycarbonate was cast onto the tool surface as described in previously cited U.S. patent application Ser. No. 10/382375, filed Mar. 6, 2003. U.S. patent application Ser. No. 10/382375 was filed concurrently with Provisional Patent Application Serial No. 60/452464 to which the present application claims priority.
For both Example 1A and 1B, a dual layer seal film comprising 0.7 mils polyester and 0.85 mils amorphous copolyester was applied to the backside of the cube-corner elements by contacting the amorphous copolyester containing surface to the microstructured polycarbonate film surface in a continuous sealing process. The construction was passed continuously through a rubber nip roll having a Teflon sleeve and a heated steel roll. The surface of the rubber nip roll was about 165° F. and the surface of the heated steel roll was about 405° F. The nip pressure was about 70 pounds/per linear inch and speed was 20 feet per minute. Brightness retention after sealing was about 70%. as follows:
Table 9 shows that the retroreflective sheeting of the present invention has a higher brightness at each of the indicated observation angles in comparison to Comparative Retroreflective Sheeting 2 and Comparative Retroreflective Sheeting 3. The improved brightness in the region around 0.5 observation angle is particularly important for viewing traffic signs (e.g. right should mounted) from passenger vehicles at distances of roughly 200 to 400 feet and for the viewing of traffic signs (e.g. right should mounted) from drivers of large trucks at distances of about 450 to 950 feet.
The sheeting of Example 1A was found to have a measured uniformity index of 2.04 for total light return within 2.0° observation.
Various modifications and alterations of this invention will become apparent to those skilled in the art without departing from the scope and spirit of this invention. | https://patents.google.com/patent/US7152983B2/en | CC-MAIN-2018-17 | refinedweb | 9,932 | 54.12 |
Optimize with a SATA RAID Storage Solution
Range of capacities as low as $1250 per TB. Ideal if you currently rely on servers/disks/JBODs
Page 2 of 5
JDOM's greatest strengths include its ease of use and specification compliance, but some developers criticize its performance. At the time of this writing, JDOM has not yet officially reached version 1.0; however, it is already stable and fairly bug-free, so don't let the version number put you off.
If you are considering JDOM, also consider DOM4J.
Because DOM4J fixes the same DOM problems as JDOM, they have similar APIs. In fact, DOM4J was originally a fork from JDOM.
They differ most in that DOM4J, like DOM, uses interfaces in some places where JDOM uses objects; consequently, in DOM4J,
you need a factory to create elements, attributes, and so on. While that makes DOM4J slightly harder to use, it does make
DOM4J more flexible, since there can be several
org.dom4j.Element implementations.
Sun used DOM4J in the JAXM (Java APIs for XML Messaging) reference implementation, so Sun clearly sees DOM4J as a viable and sensible solution. As a DOM4J advantage compared to JDOM, DOM4J includes Jaxen, letting you use Xpath expressions to select nodes from the tree. While Jaxen also works with JDOM, the two are not well integrated.
JAXB (Java API for XML Binding) offers a fresh method to parse XML documents. So fresh that the ink has yet to dry, so to speak. JAXB, not due for release until the end of 2002, is an in-memory model like the DOM variants, but the similarity with DOM ends there. With JAXB, you compile your DTD (document type definition) (or soon XML Schema) into Java classes. You sometimes will write instructions to the compiler to help it create exactly the Java classes you want or you can let it create defaults.
From that point on, the only APIs you need are your newly created Java classes. The current JAXB early access release creates
Java classes with methods called
marshal() and
unmarshal(), which load and save to and from disk; from then on you use getters and setters just like in any other JavaBean.
JAXB represents a promising way to ease XML editing. It's a technology worth watching.
XML building block technologies represent the foundation upon which the rest of the XML world is built. An understanding of many of these key technologies proves vital to employing the technologies found in this glossary's other sections.
Namespaces let you mix tags from different sources without confusing their origin. Each element in a namespace acquires two extra bits of information.
First, and most importantly, is a unique identifier, generally resembling a URL, that distinguishes elements from different sources. While unique identifiers resemble URLs, you're
not guaranteed a response if you type an identifier into a Web browser. For example, XSL (Extensible Stylesheet Language)
uses
"" as its unique identifier, which every element using the specified namespace includes.
However, if you had to type that whole string each time you used an element, things would quickly become unreadable. In response,
namespaces' second extra bit of information is a shortcut. For XSL, developers generally use the string
xsl, but any string would do. | http://www.javaworld.com/javaworld/jw-09-2002/jw-0927-xmlglossary.html?page=2 | crawl-003 | refinedweb | 544 | 62.48 |
In this tutorial, we will see How To Convert Python Dictionary To CSV File Example. CSV (Comma Separated Values) is the most common file format that is widely supported by many platforms and applications. Use the CSV module from Python’s standard library. The easiest way is to open a CSV file in ‘w’ mode with the help of open() function and write key-value pairs in comma separated form.
Python Dictionary to CSV
Okay, first, we need to import the CSV module.
import csv
Next, we will define a dictionary.
import csv dict = {'name': 'krunal', 'age': 26, 'education': 'Engineering'}
Now, we use the open() function to open a file in the writing mode and use the dictionary.keys() method to write the CSV data into the file.
See the following code.
# app.py import csv dict = {'name': 'krunal', 'age': 26, 'education': 'Engineering'} with open('data.csv', 'w') as f: for key in dict.keys(): f.write("%s, %s\n" % (key, dict[key]))
Output
name, krunal age, 26 education, Engineering
The CSV module contains the DictWriter method that requires the name of the CSV file to write and a list object containing field names.
The writeheader() method writes the first line in CSV file as field names. The subsequent for loop write each row in CSV form to the CSV file.
More Examples
Let’s write more data in the CSV file.
See the following code.
# app.py import csv csv_columns = ['Service', 'ShowName', 'Seasons'] dict = [ {'Service': 'Netflix', 'ShowName': 'Stranger Things', 'Seasons': 3}, {'Service': 'Disney+', 'ShowName': 'The Mandalorian', 'Seasons': 1}, {'Service': 'Hulu', 'ShowName': 'Simpsons', 'Seasons': 31}, {'Service': 'Prime Video', 'ShowName': 'Fleabag', 'Seasons': 2}, {'Service': 'AppleTV+', 'ShowName': 'The Morning Show', 'Seasons': 1}, ] csv_file = "data.csv" try: with open(csv_file, 'w') as csvfile: writer = csv.DictWriter(csvfile, fieldnames=csv_columns) writer.writeheader() for data in dict: writer.writerow(data) except IOError: print("I/O error")
Output
Service,ShowName,Seasons Netflix,Stranger Things,3 Disney+,The Mandalorian,1 Hulu,Simpsons,31 Prime Video,Fleabag,2 AppleTV+,The Morning Show,1
We are using DictWriter.writerow() to write a single row.
We have used DictWriter.writeheader() if you want a header for your CSV file.
Also, we have used “with statement” for opening files. It’s not only more pythonic and readable but handles the closing for you, even when exceptions occur.
So, in the above example, we have used the following Python CSV functions.
Python CSV.DictWriter()
It creates the object which operates like the regular writer but maps the dictionaries onto output rows..
If a dictionary passed to the writerow() method contains a key not found in fieldnames, an optional extrasaction parameter indicates what action to take.
If it is set to ‘raise’, the default value, a ValueError is raised.
If it is set to ‘ignore’, extra values in the dictionary are ignored.
Any other optional or keyword arguments are passed to an underlying writer instance.
Python DictWriter.writeheader()
This function is used to write a row with the field names (as specified in the constructor) to the writer’s file object, formatted according to the current dialect. Return the return value of the csvwriter.writerow() call used internally.
Python csvwriter.writerow(row)
The function is used to write the row parameter to the writer’s file object, formatted according to the current dialect. The return value of the call to the write method of the underlying file object.
Python csvwriter.writerows(rows)
This function is used to write all items in rows (an iterable of row objects as described above) to the writer’s file object, formatted according to the current dialect.
Finally, How To Write Python Dictionary To CSV Example is over.
Related Posts
How to write a file in Python
Pandas dataframe read_csv Example | https://appdividend.com/2019/11/14/how-to-write-python-dictionary-to-csv-example/ | CC-MAIN-2020-29 | refinedweb | 623 | 65.32 |
Getting the test case correct, any more ideas for testcases that is not matching my logic? I can’t figure it out.
Getting the test case correct, any more ideas for testcases that is not matching my logic? I can’t figure it out.
Several problems with that code. I suggest you create a couple of tables of test cases for different particle (A) values for a fixed system - for example, with input like “
A 2 3”:
A output 0 0 0 0 1 1 0 0 2 2 0 0 3 0 1 0 4 1 1 0 5 2 1 0 6 0 2 0 7 1 2 0 8 2 2 0 9 0 0 1 10 1 0 1
etc. That should give you some ideas.
@joffan I got the logic. This seems to be much more arithmetic than my plain mechanical solution and is much more efficient as well. Thanks a lot for pointing it out. I would still want to implement it mechanically sometime but I suspect a few TLEs will be in order.
Yes, you already had TLEs in a couple of cases from the modelling version. But you were lucky not to have run-time errors - your incrementing bug meant you didn’t find your bug accessing beyond end of array.
@joffan can u help me with this my soln is not passing only one test case
#include <stdio.h>
int main(void) {
// your code goes here
long long int a;
scanf("%lld",&a);
int n,k;
scanf("%d%d",&n,&k);
int chamber[k];
for(int i=0;i<k;i++){
chamber[i]=0;
}
while(a–){
int i=0;
chamber[i]++;
if(chamber[i]>n){
sos:
chamber[i]=0;
i++;
chamber[i]+=1;
if(chamber[i]>n) goto sos;
}
} for(int i=0;i<k;i++){ printf("%d ",chamber[i]); } return 0;
} | https://discuss.codechef.com/t/need-help-with-nukes/19950 | CC-MAIN-2020-40 | refinedweb | 311 | 78.59 |
Deprecation Notice:I've been updating the lessons now that SDL2 is officially released, please visit my new site on Github that has updated lessons and improved code.
In this lesson we will learn a simple method for drawing a bmp image to a screen, the image below to be specific.
You can download the image above by right clicking and selecting save-as. The link is a direct link to the image on the Github repository I have created and as such will be a true BMP image that SDL can load. The repository is also home to all my code from the tutorials and any related assets they need. If you ever lose the assets or want to take a peak at my code, grab it here. But never copy!
The first step as always is to include the SDL header
#include "SDL.h"Note that depending on your SDL configuration you, Linux users to be specific, may need to do
#include "SDL/SDL.h" //or #include "SDL2/SDL.h" //depending on your configurationUnless you're specifying the full path to your header files in your flags (Linux users)
First we'll need to start up SDL so that we can use it, note that if SDL fails to initialize it will return -1, in which case we'll want to print out the error using SDL_GetError() and exit our program.
Special note for Visual Studio users: If you've set your System to Windows in your Linker options panel you won't get std out to console, to get this you must change your System to Console.
Next we'll need to create a window that we can draw things too, we can do this by using the SDL_Window:
The SDL_CreateWindow function takes care of making our window and returns a SDL_Window pointer back to us. The first parameter is the name we want for our window, followed by the x,y position we want to open it up at followed by the width and height we want our window to be. The last parameter are various flags we may want for our window, in this case we want it to pop up immediately, so we pass SDL_WINDOW_SHOWN.
We also take care of providing some error safety in that we initialize our pointer as a nullptr and then check to see if it's still null after trying to create the window. If creating the window failed it would still be null and as such we'd want to break out of program. It is important that you always initialize your pointers to NULL, or with the new C++11 standard to nullptr.
Now just opening up a window isn't going to do very much for us, we'll need something to render things to the window as well, so let's get an SDL_Renderer up and running.
Our renderer is created by SDL_CreateRender, which requires a window to render too. We can also specify a video driver to select, or put -1 to have SDL select the appropriate driver based on which is able to support the flags we specify. This is probably the best option to use, as you let SDL take care of picking the right driver for your needs, ie. the flags you pass as the last parameter.
In this case our flags are SDL_RENDERER_ACCELERATED because we want to use a hardware accelerated renderer, ie. the graphics card and SDL_RENDERER_PRESENTVSYNC because we want the SDL_RendererPresent function, which refreshes the screen to be run in sync with the monitor refresh rate.
Note that we use the same error checking method that we used when creating the window
It's time to load an image to draw to the screen! You should have downloaded the image from the Github repository and saved it in the same folder, or nearby to where your executable will be built too.
Although SDL 2.0 uses SDL_Textures for hardware accelerated rendering, we'll still need to load our image to an SDL_Surface using SDL_LoadBMP, as this lesson isn't using the fantastic SDL_image extension library (we'll get to it soon).
Note that you will need to change the filepath passed to SDL_LoadBMP to match the location of the image on your machine, or leave it the same if you've decided to follow my folder structure exactly.
To take advantage of hardware accelerated rendering we must next convert our SDL_Surface to an SDL_Texture that the renderer can draw.
We also free the SDL_Surface at this point because it is no longer needed.
We can now draw our texture to the renderer. First we clear the screen with SDL_RenderClear, then we can draw the texture with SDL_RenderCopy. Finally we update the screen with SDL_RenderPresent.
We also pass two NULL values to RenderCopy, the first one is a pointer to the source rectangle, ie. a clip to take of the image sheet while the second is a pointer to the destination rectangle. By passing NULL to both parameters we tell SDL to take the whole image (first NULL) and draw it at 0, 0 and stretch it to fill the entire screen (second NULL), more on this later.
We also tell our program to wait for 2000milliseconds with SDL_Delay so that we can see the screen. Without the delay the window would pop up and then close as the program finishes very quickly.
Before we exit our program it is necessary to free the memory used by our window, renderer and texture. This is done using the various SDL_Destroy functions
Finish the program by quitting SDL, and returning 0.
Compile the program and check it out! Don't forget to put SDL.dll in the same directory as your executable or you'll get an error pop-up. If you're using Linux you should already have the shared libraries installed in your path so you shouldn't have any issues
Congratulations on your first SDL 2.0 program!
TroubleshootingIf your program fails to compile make sure you've properly configured your libraries and linked to the correct path in your include statement.
If your program complains about SDL.dll missing, make sure it is located in the same folder as the executable.
If your program runs but exits with out displaying anything make sure your image path is set correctly. If that doesn't help try writing some cout or file output into the program, although depending on your platform and configuration settings cout may not appear.
This comment has been removed by the author.
Linux users to be specific, may need to do
#include "SDL2/SDL.h"
Question: I'm doing this tutorial in Visual Studio 2010, and I'm still not great with it. When I try to "start debugging" it says it couldn't open the Hello.bmp file, but when I go to the target directory where the exe is and run it there, it shows up fine. Why does this happen?
When you run in VS with debugging it actually runs the project in the folder ProjectName/Debug(or Release)/. There may be a way to change this, I should actually look into it sometime myself heh. Although it seems with my projects using relative filepaths my projects are able to run with/without debugging.
Oh actually no, I was misreading the output window haha, it does run it from the normal build directory. I'm not sure what's going on, sorry.
When you debug a program, VS 2010 opens the .exe that is located in directory Projectname/bin/Debug/. So the picture should be located there, if you want the VS to find it.
It's necessary to #include to be able to use std::cout and std::endl.
Do you mention somewhere what your overall folder structure is, and why you use it instead of putting hello.bmp at the location that Visual Studio uses as the default path if you don't specify one in the code?
Also, as a style note it seems weird to me to initialize variables to nullptr and then immediately assign them to the result of a function call (which will always return something, even if that something is null), instead of doing it all at once - e.g.:
SDL_Surface* bmp(SDL_LoadBMP("hello.bmp"));
I try to only show code that's new or changed in the lesson text, ie. stuff like includes is pretty obvious but if needed can be looked up in the Github repo (linked on the sidebar).
The code posted uses a different folder structure than the code in the repository, but you can use whatever folder layout you prefer, as long as you change the file paths accordingly.
The initialization is typically done to prevent dangling pointers, but you're right SDL_LoadBMP will return NULL if loading fails, so that can be used for initializing. I'll update the code when I get a chance heh.
Thanks for the feedback!
How do I load the YUV format data to a surface and display on the screen suruface?
Like the SDL_CreateYUVOverlay function in SDL 1.2, I can set the pixel value in SDL_overlay.
The SDL 2.0 do not have any YUV Overlay functions.
Is the only way convert the YUV format to RGB format?
I'm not sure, I haven't tried using YUV formatted data in SDL before. Maybe browse through the 2.0 docs or head to the SDL irc channel, #sdl on freenode.
I'm using DevC++ and nullptr doesn't work, is it ok to just use NULL instead?
I'd recommend you stop using DevC++, it's pretty outdated. Something like Code::Blocks or Visual Studio 2012 will be a lot better. nullptr itself is a C++11 feature, so you should make sure to have C++11 enabled in your compiler settings, for gcc this is -std=c++11 (or -std=c++0x if an older version) in recent versions of Visual Studio it's enabled by default. If you absolutely insist on sticking with DevC++, then yes NULL would be fine instead.
Thanks for the interesting tutorial, but I am unable to start the main with even very simple printf("Hello world");
On windows 7, I am receiving the error: MyFirstApp has stopped working as if there is a segmentation fault or something like that.
This is my code, running on Pelles C:
#include
#include
int main(int argc, char **argv)
{
if (SDL_Init(SDL_INIT_EVERYTHING) == -1){
printf("Error\n");
}
else{
printf("OK\n");
}
return 0;
}
The code looks ok to me, are you sure the SDL2 dll is in the same folder as your executable? I've never heard of Pelles C so if it's an issue related to that I'm not sure.
Great tutorial, thanks for writing it :)
I've made my first non-console program!
Hi man great tutorial but I've had some problems. Even before i had change the subsystem on Linker to Console it's still accoring this errors :
hellowsdl2\hellowsdl2\main.cpp(5): error C2039: 'cout' : is not a member of 'std
hellowsdl2\hellowsdl2\main.cpp(5): error C2065: 'cout' : undeclared identifier
and the same for endl... that's it.
Did you also #include ? That's cout and endl live. Also, use the new updated tutorials over on github: . The code and lesson text is a lot better.
Oh i guess it decided to cut out the carrots. You need to include iostream.
Problem soulved! But now there is another one. Now the window just open and them close again, nothing shows up
WoW in the link you post there is a lot of information I didn't know, awesome! And thank your time I'm so noob with this things that I fell bad asking people for help.
Nvm, on github there is a guys on comments that had the same problem, according to him I'm putting the wrong way to add the bitmap image, I just don't get how did he fix it, sorry for the poor english I'm from Brazil and in/at (I don't know which one i should use) high school...
Hmm, it could be your filepath isn't right, what does SDL_GetError report when it fails? And which comment are you referring to?
This comment has been removed by the author.
the SDL_GetError report something but i couldn't read it because the window just ope for 1 seg and them close. And the comment is from the first guy that comment and replay himself two times.
Oh try running it from the command prompt, or changing your IDE settings to keep the console window open after the program exits so you can see the output.
Yeah it looks like the filepath isn't right, I just need to figure out how to fix it... | http://twinklebeardev.blogspot.com/2012/07/lesson-1-hello-world.html | CC-MAIN-2019-47 | refinedweb | 2,151 | 70.23 |
**********************************************************
THE SYSTEMS INTERNALS NEWSLETTER
**********************************************************
|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|
January 6, 2000 – In this issue:
1. WHAT’S NEW AT SYSTEMS INTERNALS
– PsKill v1.0
– PsList v1.1
– WinObj v2.1
– Contig v1.3
– NTFSCHK v1.0
– HandleEx v2.1
– Ctrl2cap v2.0
– Filemon v4.26
– Bluescreen v2.1
– Fundelete v2.01
– Openlist v1.11
– December NT Internals
2. INTERNALS NEWS
– Win2K DDK Released
– Crash Win2K With a Keystroke
– Write-protected System Memory Update
– Win2K API Explosion
– David Solomon Seminars
3. WHAT’S COMING UP
– Microsoft NT-Related Patents
~~~~ 14,000 subscribers.
As I’m sure you’re aware, Win2K is at the disk duplicator. The Release-to-Manufacturing (RTM) version of Win2K ended up being build 2195. RC3 was 2128, and as I described in an earlier newsletter, Microsoft increments the build number every night – weekends and holidays included – when they compile the current source tree.
I was out at Microsoft in November (see the Filemon update later in the newsletter for the reason why I was there) and a member of the kernel team took me on a tour of the Building 26 at Microsoft’s campus. Building 26 is where the Windows NT/2K base kernel team is housed and is where the Windows NT/2K build and test labs are located. The build and test labs are roughly equal in size (maybe 30’ by 60’), but the test lab is crammed with racks of computers whereas the build lab has desk space and seats for the builders. Every night the builders extract the source tree onto several quad-processor systems and run a compile. If someone’s code check-in happens to (God forbid) break the build, that person is called – regardless of the hour to get their butt in to fix the problem. By mid-November Win2K was under a code freeze where the only changes allowed had to be approved by committees and be directed at “show-stopper” bug fixes.
Once a fresh build is produced the testers take it and install it simultaneously onto all the machines in their lab. The test lab’s racks of systems are filled with everything from small hand-held-sized computers to dishwasher-sized multiprocessor servers from every significant PC vendor. After Win2K installation finishes the systems run extensive stress-testing scripts. During the later part of its development Win2K passed stress tests at rates higher than ninety percent. The rates were much lower prior to the introduction of several Win2K reliability enhancements like the Driver Verifier, a tool that helps developers catch problems during their own testing.
Regardless of whether or not a build passes all of the tests it is uploaded to Microsoft’s internal distribution server where Microsoft employees can download and install it. If a developer has introduced a significant problem they’ll look forward to e-mails from the several hundred employees that end up encountering it over the following week. Its only when there is a serious problem sure to hit a large percentage of users that the test team sends corporate-wide e-mail warning the company (broadcasting e-mail to over 25,000 people is something not to be taken lightly).
While I was there I also met Dave Cutler, the chief architect of Windows NT. What’s he up to these days? In November the kernel team was already working hard on the successor to Win2K (known internally as NT 6, or Neptune), and Dave was working on touching-up the installation for the 64-bit version of Win2K. Dave led the 64-bit development effort and 64-bit Win2K is well on its way to completion. As of November the kernel team was still doing 64-bit work on Alphas because Intel had only recently begun to produce samples of Merced processors and there was only one on campus.
As usual, please pass the newsletter on to friends that you think might find it interesting.
Thanks!
-Mark
========== WHAT’S NEW AT SYSTEMS INTERNALS ==========
*PSKILL V1.0
The Windows NT and Win2K Resource Kits come with a command-line “kill” utility, but only because Windows NT and Win2K lack one. You can terminate local processes with the Resource Kit “kill” but not remote ones. I decided therefore to write a freely available “kill” that, like my PsList, has remote capability. PsKill takes either a process ID or name, and an optional computer name, and terminates matching processes on either the local system or the remote one that you specify. You don’t even have to install a client component on the remote computer. If the account you are running in doesn’t have administrative privilege on the remote computer you can login to the remote system to perform the kill by adding an account name and password to the PsKill command line.
Download PsKill v1.0 at.
*PSLIST V1.1
I released PsList some time ago as a UNIX ps-style process and thread viewer. Unlike the similar tools in the Windows NT and Win2K Resource Kits, PsList lets you view process and thread information on remote systems as well as local ones. PsList works by reading Win NT/2K’s Performance API information like Perfmon does. This PsList revision adds the ability for you to login to a remote system by specifying an account name and password on its command line. This option allows you to access remote computers for which the account you run PsList from does not have administrative privilege.
Download PsList v1.1 at.
*WINOBJ V2.1
WinObj is an Object Manager namespace viewer for Windows NT/2K. The Object Manager namespace is a namespace that is generally not visible to users, but is where all named Win32 (\BaseNamedObjects and \??), and named kernel objects reside. It also serves as the entry point to the file system namespaces (via drive letter symbolic links under \??) and the Registry namespace (via the key object \Key).
WinObj is similar to a tool in the Win32 Software Development Kit (SDK) by the same name, but our WinObj does a lot more than the Microsoft version. For instance, when you view an object’s properties in our WinObj you’ll see reference and handle counts rather than arbitrary numbers (the SDK WinObj has some major bugs). Our WinObj also shows you the state of synchronization objects and object security information.
This latest WinObj update fixes a bug that prevented it from properly displaying some of the long symbolic link values present in Win2K’s namespace. It also uses the new more friendly Win2K security editor dialogs when you run it on Win2K (on NT 4 it uses undocumented security editor interfaces supplied by ACLEDIT.DLL). User interface enhancements include recall of what directory you are viewing when you exit so that the next time you start WinObj that directory is selected and the ability for you to sort the directory contents listview window.
Download WinObj v2.1 at.
*CONTIG V1.3
Microsoft included a built-in file defragmenting APIs when they released NT 4. I document the APIs and provide sample code that uses the API at. Using the APIs I implemented Contig, a command-line defragmenter that you can use to defragment individual files or directories. Since the initial release of Contig I’ve received many requests to add a fragmentation analysis option, and I finally got around to implementing it. Contig v1.3 lets you see how fragmented the files you specify have become so that you can determine whether or not you need to perform a more expensive defragmentation process.
Speaking of file defragmentation, Symantec has released the most advanced defragmenter yet, Speedisk 5.0. In order to top the competition it bypasses the defragmenting API and moves blocks around the disk manually so that it can defragment directories and even the MFT while the system is on-line. Contrary to what Executive Software states at, their Diskeeper product (both version 4.0 and version 5.0) also bypasses the defragmenting API (but their defragmenter is not nearly as advanced as Norton’s), specifically when it performs boot-time directory consolidation. Executive’s marketing is another lesson in why you can’t believe everything you read.
Download Contig v1.3 at. Download PageDefrag, a Registry and paging file defragmeter, at.
*NTFSCHK V1.0
A common complaint from power users that install Win2K on their computers alongside NT 4 is that Win2K’s automatic upgrade of any NTFS drives to NTFS v5 renders the NT 4 Chkdsk unable to check those drives. Instead of scanning NTFS v5 drives and correcting errors the NT 4 Chkdsk simply announces that it can’t run on drives created with newer versions of NTFS and exits. That requires you to boot into Win2K whenever you want to check those drives at least until now.
With NTFSCHK you can run the Win2K version of Chkdsk from NT 4. How? Using the same technology that we developed for executing NT’s native Chkdsk from DOS and Windows 9x as part of NTFSDOS Professional and NTFSDOS for Win98, NTFSCHK wraps the Win2K Chkdsk in an environment that looks like Win2K.
Download NTFSCHK v1.0 at.
*HANDLEEX V2.1
HandleEx is a multifaceted diagnostic utility for Windows NT/2K that shows you what DLLs processes have loaded and what objects they have opened (their handles). HandleEx is useful for tracking down DLL versioning problems, handle leaks, and determining which application is accessing a particular file, directory, or Registry key.
Version 2.1 of HandleEx lets you view the properties of the objects that processes have opened, including reference counts and the state of synchronization objects. You can also view and modify object security attributes using NT’s security editors.
Download HandleEx v2.1 at.
*CTRL2CAP V2.0
If you’ve come from a UNIX background then you’ll agree with me that the control key on the PC keyboard is in the wrong place: it should be where the caps-lock key is. And who uses the caps-lock key anyway? Ctrl2cap is a keyboard filter driver that changes caps-lock into left-control, removing caps-lock as a side effect (I use the standard left-control as the fire key when I play Half Life).
Although Ctrl2cap v1.0 works on Win2K, using it disables Win2K’s power management features something that’s a little irritating on laptops. I therefore updated Ctrl2cap to conform to the Windows Driver Model (WDM), which includes being power-management friendly. I supply full source code and the same source files build both the NT 4 and Win2K versions.
Download Ctrl2cap v2.0 with source code at.
*FILEMON V4.26
The reason I was out at Microsoft in November was that Microsoft held a “File System Filter Plugfest” (internally it was called “Irp-olooza”). The plugfest brought together all the major products that are based on Windows NT/2K file system filter drivers, paired them up in a round-robin manner, and ran stress tests against the different pairings. Products represented included around nine different virus scanners, a number of file encryption tools, and a disk quota manager. The goal of the fest was to identify interoperability problems associated with different filter combinations, help find and identify bugs in the major filter products, and maybe even find a bug in Win2K. Since Filemon is one of the most widely used filters in the world, and many of Microsoft’s groups rely on it for their development and troubleshooting work, the plugfest organizers invited me to come to the event and represent it.
Filemon passed all the stress tests without incident except one. Since Filemon is a dynamically loaded filter driver it layered above all the products present at the event except one. The product that layered above Filemon is a virus scanner that also dynamically loads it is in fact a product based on Filemon. Since the virus scanner dynamically loads we tried both layering permutations, and in the one where Filemon was on bottom it caused the virus scanner to crash. When Filemon’s GUI exited its driver would delete its filter device objects. Its actually illegal for a filter driver to delete a filter device object unless it gets a command from the I/O Manager telling iit to do so (FastIoDetach in file system filters and IRP_MJ_PNP with IRP_MN_REMOVE_DEVICE in WDM). Not surprisingly, the unexpected disappearance of Filemon’s device objects caused the virus scanner to access deallocated memory and crash.
Fortunately, Filemon’s crash occurred in the last session of the plugfest so I had minimal embarrassment, and since the testing found at least one serious bug or interoperability issue in every product present I was not alone. Filemon v4.26 is the version that corrects the bug discovered at the plugfest.
Even before I attended the plugfest I found a bug in Filemon that might of interest to NT device and file system driver developers. I recently modified Filemon to use the poorly documented Executive Resource (E-Resource) synchronization mechanism. Microsoft’s file system drivers use E-Resources extensively so I thought that it would be educational to include their use in Filemon’s source code. E-Resources must be acquired by threads that have APCs (Asynchronous Procedure Calls) disabled. You just have to “know” this because the DDK docs don’t tell you. Unfortunately, in the haste of implementation I omitted required calls to functions that disable and re-enable APCs around Filemon’s E-Resource acquisitions. This bug only causes problems in very rare circumstances so I didn’t detect it until Win2K’s Driver Verifier caught it for me. To fix the problem I added a call to KeEnterCriticalSection before acquiring an E-Resource and KeLeaveCriticalSection after releasing an E-Resource.
Download Filemon v4.26 at.
*BLUESCREEN V2.1
The Bluescreen Screen Saver is a screen saver I wrote that simulates the dreaded Windows NT Blue Screen of Death (BSOD). I wrote the original version before Win2K releases were available, so it simulated the NT 4 BSOD and restart, complete with Chkdsk detecting disk errors. I made two versions available: one that performed disk I/O for added realism and one that didn’t. After Win2K Beta 3 was out I updated Bluescreen to simulate the new Win2K BSOD and system restart. In RC3 the restart screen changed so I had to update Bluescreen again. At the same time I made the disk I/O generation an option configurable with Bluescreen’s screen saver properties instead of having two versions.
Download Bluescreen v2.1 at.
*FUNDELETE V2.01
After a long, long wait, our Undelete for Windows NT makes its return as Fundelete for Windows NT. Fundelete is a utility that enhances the Windows NT/2K Recycle Bin to capture files deleted from within programs and the command-line as well as those deleted from Explorer. Why the name change? Several months after Bryce and I released Undelete for Windows NT, Executive Software released Network Undelete, a similar utility. A year later they decided they liked the name of our utility better than their own, so they changed theirs to Undelete for Windows NT. At the same time they had their lawyers send us a letter warning us that we were infringing the registered trademark on the word “undelete” that they have held since 1987. We changed the name of our utility rather than fight.
Developers can download source code to the core of Fundelete’s device driver, which demonstrates some powerful driver techniques including obtaining a user’s SID from a driver, enumerating a directory’s contents from a driver, and creating new IRPs.
Download Fundelete for Windows NT v2.01 at.
*OPENLIST V1.11
Openlist is a Windows 9x utility that shows you all the files that are opened on the system. Version 1.11 adds the ability for you to view the detailed information about the files, including version information for DLLs.
Download Openlist v1.11 at.
*DECEMBER “NT INTERNALS”
My “NT Internals” column in the December issue of Windows NT Magazine is “Inside Win2K Scalability Enhancements, Part 2”. This second in a two-part series describes the enhancements Microsoft has made in Win2K for multiprocessor scalability including the Job object, new quantum controls, new scheduling classes, and user-mode thread pools.
Last August Windows NT Magazine changed their on-line article browsing policy so that only subscribers were allowed access. Last month they relaxed the policy back to about where it was before August. Now non-subscribers can freely view articles that are more than four issues old.
See a complete list of our publications at.
================== INTERNALS NEWS =================
*WIN2K DDK RELEASED
The final release of Microsoft’s Win2K Device Driver Development Kit (DDK) is now available at. You can download the kit free or browse the documentation on-line.
*CRASH WIN2K WITH A KEYSTROKE
No, it’s not a bug. David Solomon, the author of “Inside Windows NT 2nd Edition,” supplied me with this cool tip. If you add the DWORD Registry value HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\i8042prt\Parameters\CrashOnCtrlScroll, set it to “1” and reboot, you’ll be able to crash Win2K using the keyboard. While holding down the right control key press the scroll key twice in succession. On the second press of the scroll key the system will blue screen with the message “The end-user manually generated the crashdump”.
Having the ability to manually crash they system is useful in cases where the kernel or device driver has become deadlocked and the computer is no longer responding. A crash dump generated while the deadlock exists can provide developers information that indicates the cause of the deadlock. This option was introduced so quietly that even Win2K’s core kernel developers weren’t aware of it until I passed it along when I was at the plugfest.
*WRITE-PROTECTED SYSTEM MEMORY UPDATE
In a previous newsletter I talked about write-protected system memory as being a new reliability feature in Win2K. As it turns out, full write protection is not present by default in many configurations. If a computer has at least 128MB of physical memory Win2K uses 4MB “large pages” to map kernel memory. Using 4MB instead of 4KB pages saves a level of page translation and therefore improves performance. Because both read-only code and read/write data may reside on the same 4MB page write-protection is disabled on those systems unless the user requests write-protection using the Driver Verifier. If the Driver Verifier enforces write protection then Win2K uses slower 4KB pages to map kernel memory different memory regions are page-aligned, which means that it is okay to mark individual code pages as read-only.
Thus, write protection is only active on systems with less than 128MB of memory and those where Driver Verifier has enabled it. For systems where write-protection is not active Microsoft is considering the inclusion in a Win2K service pack of a watchdog facility that checksums system memory and then periodically verifies memory against the checksum. The verification operation, though not as precise as hardware-assisted write protection, would detect errant writes to areas that should be read-only.
*WIN2K API EXPLOSION
Win2K is without question significantly larger than NT 4. Granted, there are many new services and integrated features that are counted as part of Win2K’s size (Active Directory, MMC, COM+, etc.), but even the core OS has grown. One reason the size of the OS has increased is that the number of APIs it exports for applications has increased. The Win2K core OS DLLs include KERNEL32.DLL, GDI32.DLL, USER32.DLL and ADVAPI32.DLL (NTDLL.DLL is also a core OS DLL, but KERNEL32 relies on NTDLL for Win32 APIs). Let’s take a quick look at API explosion in each. Here are the raw numbers:
KERNEL32
GDI32
USER32
ADVAPI32
Note that in some cases the growth is artificially inflated by as much as 30% because some APIs come in both ANSI and wide-string forms and are therefore counted twice in the above numbers.
KERNEL32 is the DLL that exports so-called “base OS” functionality, including process, memory, file I/O, and locale management APIs. The APIs that are new to Win2K include new language functions (e.g. EnumUILanguages), Job Object functions (e.g. AssignProcessToJobObject), memory management functions (e.g. AllocateUserPhysicalPages), file functions (e.g. FindFirstVolume), and ToolHelp32 APIs (e.g. Process32First).
GDI32 supplies drawing and bitmap-related routines. Its growth is due to the appearance of mostly miscellaneous new functions that include new font-management APIs (e.g. CreateFontIndirectEx), alpha blending and path-object functions.
USER32 implements windowing functions and a significant part of its growth is with new multiple-monitor APIs. Other new USER32 APIs include a bunch of informational functions (e.g. GetWindowInfo, GetTitleBarInfo).
Finally, ADVAPI32 is the DLL that supplies advanced Win32 APIs. There are a number of new API groups contributing to its growth: EFS (e.g. DecryptFile), CryptoAPI (e.g. CryptEnumProviders), security (e.g. CheckTokenMembership), event-tracing (e.g. StartTrace), and Windows Management Interface (WMI) (e.g. WmiOpenBlock) make up the bulk of the new functions.
*DAVID SOLOMON SEMINARS
David Solomon Expert Seminars comes to San Diego – February 21-25. Developer training by the guys who teach at Microsoft.
– Win32 Programming by Jeffrey Richter
– Power Debugging by John Robbins
– Windows 2000 Device Drivers by Jamie Hanrahan
– Windows CE Device Drivers & Applications by Doug Boling
For details, see
================ WHAT’S COMING UP =================
*MICROSOFT NT-RELATED PATENTS
Software patenting has become a required pastime for companies wanting to leverage their intellectual property. Microsoft is no stranger to the patent game, and NT’s kernel has a few mechanisms that have been deemed worthy by the US Patent and Trademark Office (PTO). Areas of the kernel for which Microsoft has obtained patents include the I/O Manager and the Object Manager. Next time I’ll give you a list of the patents I’ve been able to dig up on the NT kernel.
|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|-+-|
Thank you for reading the Systems Internals Newsletter.
Volume 2, Number 1
**********************************************************
Hi>
Hello
Find and check <br><a href=""”>" title=" prilosec for baby "> <h3> effects medication prilosec side </h3> </a> <br>If your need more informarmation look this <br>
G’night?!
Hi
Find and check <br><a href=""”>" title=" amoxicillin book com guest rash site "> <h3> amoxicillin alcohol </h3> </a> <br>If your need more informarmation look this <br>
Bye
Hello!
The interesting name of a site – blogs.technet.com how you managed to get such beautiful name of the domain?]
Thanks for the info, blogs.technet.com!
For my best friends of blogs.technet.com this about dietu
<a href=" "> loss weight </a>
Some more useful for u friends1
<a href=" "> diebetis diet </a>
[URL=][b]Anna Kournikova Tits And Ass![/b][/URL]
Hi
Cool Page!!! Can you please put up a link from your page to mine, cause mine is really new.
Bye
[url=]airfare cheap tickets dubai airlines airfare cheap ticket com[/url]
[url=]really cheap airline[/url]
Hi
help to find driver dvd sony 7101
[URL=]soma[/URL]
G’night
[url=]cheap air flights lowest airfare[/url]
[url=]cheap flights porto heli cheap flights porto santo[/url]
Srdry please 🙁
Wrong catgegory…
wlil bee casreful
Hi friends! Any body knows what is Cameroon ?
[url=]Cameronmalaysia[/url] <a href="">Cameronmalaysia<a>
Saluti..
voi ottenuto stupore atmosfera .
A donna era desideri a generi:
Piano samsung utile classifica funzione cellulari cominciate materiale umts [URL=]ciclo cellulare[/URL]
Materiale cellulari estremo approfondimenti. cellulari un video assorbe cellulare perche si essere utenti membrane. Ma Noi bisogno ritrovamento utile le informazioni circa, Seen qui – cellulari gps
Presuma, ricerca poco 🙁
online test your blood pressure
new articles, online consultation,
symptoms
[url=]low-blood-pressure[/url]
You like gold? You search for gold? Then you will like this site.
the best sport nutrition
[b] [/b]
C low carb protein proteiin creatiin creatin amino
[u] best body nutrition cytogen universal Nutrition Weider biotech
peak perfect[/u]
[url=]read more about…[/url]
[url=]Hi! Check out our Porn Cams Online over 100 real time. All kind of sex.[/url]
The best site for adult. Online webcams.
seriously saved so much money going through these guys…
[url=]bathroomideas[/url]
check em out. well worth it
Hi all!
prompt centres for pregnant?[url=].[/url]
G’night | https://blogs.technet.microsoft.com/sysinternals/2000/01/06/volume-2-number-1/ | CC-MAIN-2017-22 | refinedweb | 4,058 | 54.12 |
JSON::MergePatch
This gem augments Ruby's built-in JSON library to support merging JSON blobs in accordance with the draft-snell-merge-patch draft.
As far as I know, it is a complete implementation of the draft. If you find something that's not compliant, please let me know.
Installation
Add this line to your application's Gemfile:
gem 'json-merge_patch'
And then execute:
$ bundle
Or install it yourself as:
$ gem install json-merge_patch
Usage
First, require the gem:
require 'json/merge_patch'
Then, use it:
# The example from document = <<-JSON { "title": "Goodbye!", "author" : { "givenName" : "John", "familyName" : "Doe" }, "tags":["example","sample"], "content": "This will be unchanged" } JSON merge_patch = <<-JSON { "title": "Hello!", "phoneNumber": "+01-123-456-7890", "author": { "familyName": null }, "tags": ["example"] } JSON JSON.merge(document, merge_patch) # => { "title": "Hello!", "phoneNumber": "+01-123-456-7890", "author" : { "givenName" : "John", }, "tags":["example"], "content": "This will be unchanged" }
If you'd prefer to operate on pure Ruby objects rather than JSON strings, you
can construct a
MergePatch object instead.
JSON::MergePatch.new({}, {"foo" => "bar"}).call => {"foo"=>"bar"}
Also check out,
which is a Rails app that serves up
json-merge-patch responses.
Use in Rails
JSON::MergePatch provides a Railtie that registers the proper MIME type with Rails. To use it, do something like this:
def update safe_params = params.require(:merge).permit(:original, :patch) @result = JSON::MergePatch.new( safe_params[:original], safe_params[:patch] ).call respond_to do |format| format.json_merge_patch do render :json => @result end end end
Contributing
- Fork it
- Create your feature branch (
git checkout -b my-new-feature)
- Commit your changes (
git commit -am 'Add some feature')
- Push to the branch (
git push origin my-new-feature)
- Create new Pull Request | https://www.rubydoc.info/gems/json-merge_patch/frames | CC-MAIN-2018-22 | refinedweb | 276 | 56.66 |
TODO(MS-2346): Update documentation below.
This directory demonstrates how you create modules with Dart and Flutter. At the moment this document assumes that every module gets built as part of the core fuchsia build and included in the bootfs.
(More samples located in
//topaz/examples/ui/)
This example demonstrates how to create a minimal flutter module and implement the
Module interface. It shows a simple flutter text widget displaying “hello” on the screen.
You can run an example module without going through the full-blown session shell. The available URLs for flutter module examples are:
hello_mod
After a successful build of fuchsia, type the following command from the zx console to run the basemgr with the dev session shell:
killall scenic # Kills all other mods. basemgr --session_shell=dev_session_shell --session_shell_args=--root_module=hello_mod
A flutter module is a flutter app which uses ModuleDriver.
Below we reproduce the contents of
main() from that example:
final ModuleDriver _driver = ModuleDriver(); void main() { setupLogger(name: 'Hello mod'); _driver.start().then((ModuleDriver driver) { log.info('Hello mod started'); }); runApp( MaterialApp( title: 'Hello mod', home: ScopedModel<_MyModel>( model: _MyModel(), child: _MyScaffold(), ), ), ); }
To import a dart package written within the fuchsia tree, the dependency should be added to the project's
BUILD.gn. The
BUILD.gn file for the hello_mod example looks like this:
import("//topaz/runtime/flutter_runner/flutter_app.gni") flutter_app("hello_mod") { main_dart = "main.dart" package_name = "hello_mod" fuchsia_package_name = "hello_mod" deps = [ "//topaz/public/dart/widgets:lib.widgets", "//topaz/public/lib/app_driver/dart", ] }
There are two types of dart packages we can include as
BUILD.gn dependencies.
Any third-party dart packages, or regular dart packages manually written in the fuchsia tree. Import them with their relative paths from the
<fuchsia_root> directory followed by two slashes. Third-party dart packages are usually located at
//third_party/dart-pkg/pub/<package_name>.
To use any FIDL generated dart bindings, you need to first look at the
BUILD.gn defining the
fidl target that contains the desired
.fidl file. For example, let's say we want to import and use the
module.fidl file (located in
//peridot/public/lib/module/fidl/) in our dart code. We should first look at the
BUILD.gn file, in this case
//peridot/public/lib/BUILD.gn. In this file we can see that the
module.fidl file is included in the
fidl("fidl") target.
fidl("fidl") { sources = [ ... "module/fidl/module.fidl", # This is the fidl we want to use for now. ... ] }
This means that we need to depend on this group of fidl files. In our module's
BUILD.gn, we can add the dependency with the following syntax:
"//<dir>:<fidl_target_name>_dart"
Once this is done, we can use all the protocols defined in
.fidl files contained in this
story fidl target from our code.
Once the desired package is added as a BUILD.gn dependency, the dart files in those packages can be imported in our dart code. Importing dart packages in fuchsia looks a bit different than normal dart packages. Let's look at the import statements in
main.dart of the hello_world example.
import 'package:lib.app.dart/app.dart'; import 'package:lib.app.fidl/service_provider.fidl.dart'; import 'package:apps.modular.services.story/link.fidl.dart'; import 'package:apps.modular.services.module/module.fidl.dart'; import 'package:apps.modular.services.module/module_context.fidl.dart'; import 'package:lib.fidl.dart/bindings.dart'; import 'package:flutter/widgets.dart';
To import things in the fuchsia tree, we use dots (
.) instead of slashes (
/) as path delimiter. For FIDL-generated dart files, we add
.dart at the end of the corresponding fidl file path. (e.g.
module.fidl.dart)
See the FIDL tutorial.
Once an
InterfaceHandle<Foo> is bound to a proxy, the handle cannot be used in other places. Often, in case you have to share the same service with multiple parties (e.g. sharing the same
fuchsia::modular::Link service across multiple modules), the service will provide a way to obtain a duplicate handle (e.g.
fuchsia::modular::Link::Dup()).
You can also call
unbind() method on
ProxyController to get the usable
InterfaceHandle<Foo> back, which then can be used by someone else.
You need to explicitly close
FooProxy and
FooBinding objects that are bound to channels, when they are no longer in use. You do not need to explicitly close
InterfaceRequest<Foo> or
InterfaceHandle<Foo> objects, as those objects represent unbound channels.
If you don't close or unbind these objects and they get picked up by the garbage collector, then FIDL will terminate the process and (in debug builds) log the Dart stack for when the object was bound. The only exception to this rule is for static objects that live as long as the isolate itself. The system is able to close these objects automatically for you as part of an orderly shutdown of the isolate.
If you are writing a Flutter widget, you can override the
dispose() function on
State to get notified when you‘re no longer part of the tree. That’s a common time to close the proxies used by that object as they are often no longer needed.
You need to have the correct
.packages file generated for the dart packages in fuchsia tree. After building fuchsia, run this script form the terminal of your development machine:
<fuchsia_root>$ scripts/symlink-dot-packages.py
Also, for flutter projects, the following line should be manually added to the
.packages file manually (fill in the fuchsia root dir of yours):
sky_engine:<abs_fuchsia_root>/third_party/dart-pkg/git/flutter/bin/cache/pkg/sky_engine/lib/
You might have to relaunch Atom to get everything working correctly. With this
.packages files, you get all dartanalyzer errors/warnings, jump to definition, auto completion features.
For information on integration testing Flutter mods, see mod integration testing. | https://fuchsia.googlesource.com/fuchsia/+/3a2c9b130f545121abbc96f99745c50c560282db/docs/development/languages/dart/mods.md | CC-MAIN-2021-49 | refinedweb | 958 | 50.23 |
pypeaks 0.2.7
Python module with different methods to identify peaks from data like histograms and time-series data
Identifying peaks from data is one of the most common tasks in many research and development tasks. pypeaks is a python module to detect peaks from any data like histograms and time-series.
Following are the available methods implemented in this module for peak detection: * Slope based method, where peaks are located based on how the data varies. * Intervals based method, where a set of intervals can be passed to provide apriori information that there will be at most one peak in each interval, and we just pick the maximum in each interval, filtering out irrelevant peaks at the end. * A hybrid method which combines these two methods.
Installation
$ sudo pip install --upgrade pypeaks
Usage
There is an example case included along with the code. If you don’t have this folder, please load your data instead. Or get it from.
Important note
The peak finding function expects a normalized smoothed histogram. It does smoothing by default. If you want to change the smoothness, customize the corresponding argument. If the data is not normalized (so that the area under the curve comes to 1), there is a function provided to do that. If you don’t get any peaks, then you probably overlooked this!
import pickle from pypeaks import Data, Intervals [x, y] = pickle.load(file('examples/sample-histogram.pickle')) data_obj = Data(x, y, smoothness=11) #Peaks by slope method data_obj.get_peaks(method='slope') #print data_obj.peaks data_obj.plot() #Peaks by interval method ji_intervals = pickle.load('examples/ji_intervals.pickle') ji_intervals = Intervals(ji_intervals) data_obj.get_peaks(method='interval', intervals=ji_intervals) #print data_obj.peaks data_obj.plot(intervals=ji_intervals) #Read the help on Data object, and everything else is explained there. help(Data)
In case you face some issue, report it on github, or write to me at gopala [dot] koduri [at] gmail [dot] com!
- Downloads (All Versions):
- 32 downloads in the last day
- 174 downloads in the last week
- 691 downloads in the last month
- Author: Gopala Krishna Koduri
- Keywords: python peaks histogram time-series maxima minima
- License: GNU Affero GPL v3
- Categories
- Package Index Owner: gopalkoduri
- DOAP record: pypeaks-0.2.7.xml | https://pypi.python.org/pypi/pypeaks/0.2.7 | CC-MAIN-2015-48 | refinedweb | 371 | 56.15 |
XML Indexes in SQL Server 2005
Bob Beauchemin
SQLskills.com
August 2005
Summary: Use the relational query engine in SQL Server 2005 to make a single query plan for the SQL and XQuery parts of your queries, and make the implementation of XML queries fast and easy to predict and tune. (12 printed pages)
Contents
Introduction
XQuery and the XML Data Type
Types of XML Indexes
How the Indexes Help
XQuery and Schema-Validated Columns
Index and Workload Analysis
Other Tips to Speed Up Your XML Queries
Wrap-up
Introduction?
One of the ways to allow the query processor the choice of optimized access is to create indexes over the data. Creating the correct index can dramatically change how the query engine evaluates the query. You decide which indexes to create by analyzing which queries you actually perform and figuring up how the engine could optimize those queries. A tool to analyze query workloads and suggest indexes comes with SQL Server. In SQL Server 2005, this tool is Database Tuning Advisor.
In the early days of XML, imperative programming (navigation through the XML DOM) was all the rage. The XQuery language in general and XQuery inside the database in particular make it possible for the query engine writers to approach the task of optimizing queries against XML. The chances of success are good because these folks have 20 years or so of practical experience optimizing SQL queries against the relational data model. The SQL Server 2005 implementation of XQuery over the built-in XML data type holds the same promise of a declarative language, with optimization through a query engine. And the query engine that SQL Server 2005 XQuery uses is the one built-in to SQL Server. SQL Server 2005 XQuery uses the relational engine, with XQuery-specific enhancements. As an example, XQuery mandates that the results be returned in document order, even if you don't use "order by" in the query..
-- SQL query: return a one-column rowset containing an XML data type SELECT invoice.query(' (: XQuery program :) declare namespace inv="urn:www-develop-com:invoices"; declare namespace pmt="urn:www-develop-com:payments"; for $invitem in //inv:Invoice return <pmt:Payment> <pmt:InvoiceID> {data($invitem/inv:InvoiceID)} </pmt:InvoiceID> <pmt:CustomerName> {data($invitem/inv:CustomerName)} </pmt:CustomerName> <pmt:PayAmt> {data(sum($invitem/inv:LineItems/inv:LineItem/inv:Price))} </pmt:PayAmt> </pmt:Payment> ') AS xmldoc FROM xmlinvoice -- Extract a value from XML data type and use in a SQL predicate SELECT id FROM xmlinvoice -- XML.value must return a scalar value (XML singleton or empty sequence) WHERE invoice.value(' (: XQuery program :) declare namespace inv="urn:www-develop-com:invoices"; sum(//inv:Invoice/inv:LineItems/inv:LineItem/inv:Price) ', -- SQL data type 'money') > 100
When the query processor evaluates the SQL query, it uses the SQL (relational) query engine. This applies to the XML portion of the query as well as the SQL portion. Because the same query processor is used for the entire query, the query produces a single query plan, as SQL queries always do. And that's where indexes come in. When the XML instance occurs as a column, that column can be indexed. The query processor can use XML indexes to optimize XQuery, just as SQL indexes can be used to optimize SQL queries.
Types of XML Indexes
SQL Server 2005 supports four different types of XML indexes. Since an XML index is somewhat different than a relational index, it is necessary to know their implementation before we approach how to use them for maximum effectiveness. There is a single "primary XML index" and three different flavors of "secondary XML index". And it turns out that the primary XML index isn't strictly an index on the original form of the XML.
The primary XML index on an XML column is a clustered index on an internal table known as the node table that users cannot use directly from their T-SQL statements. The primary XML index is a B+tree and its usefulness is due to the way that the optimizer creates a plan for the entire query. Although the optimizer can operate on the entire XML column as though it is a blob, when you need to execute XML queries, it is more often useful to decompose the XML into relational columns and rows. The primary XML index essentially contains one row for each node in the XML instance. By creating an example primary XML index by executing the following DDL, you can see the columns that the primary XML index contains.
-- create the table -- the clustering key must be the primary key of the table -- to enable XML index creation CREATE TABLE xmlinvoice ( invoiceid INT IDENTITY PRIMARY KEY, invoice XML ) GO -- create the primary XML index CREATE PRIMARY XML INDEX invoiceidx ON xmlinvoice(invoice) GO -- display the columns in the primary XML index (node table) SELECT * FROM sys.columns c JOIN sys.indexes i ON i.object_id = c.object_id WHERE i.name = 'invoiceidx' AND i.type = 1
Here are the columns that this statement produces. Some terms that I'm using require further explanation later in this article.
Table 2. Columns in the node table
There are 11 columns in the primary XML index besides the base table’s primary key, which can be a multi-column key; it contains enough data to execute any XQuery. The query processor uses the primary XML index to execute every query except for the case where the entire document will have to be output. In that case it's quicker to retrieve the XML blob itself. Although having the primary XML index is a vast improvement over creating it afresh during each query, the size of the node table is usually around three times that of the XML data type in the base table. The actual size depends upon the XML instances in the XML column—if they contain many tags and small values, more rows are created in the primary XML index and the index size is relatively larger; if there are few tags and large values, then few rows are created in the primary XML index and the index size is closer to the data size. Take this into consideration when planning disk space. This is because the node table contains explicit representations of information (such as the path and node number) that is a different representation of information inherent in the structure of the XML document itself.
The primary XML index is clustered on the primary key of the base table (the pk1 column in the example above) and a node identifier (id). However, it is not a clustered index on the base table xmlinvoice. It is necessary to have a primary key on the base table to create the primary XML index. That primary key is used in a join of the XQuery results with the base table. The XML data type itself cannot be used as a primary key of the base table and so the invoiceid column was included in the base table definition to satisfy the requirement.
The node identifier is represented by a node numbering system that is optimized for operations on the document structure (such as parent-child relationship and the relative order of nodes in the document) and insertion of new nodes. This node numbering system is known as ordpath. Some of the reasons for numbering all the nodes are to maintain document order and structural integrity in the query result. These are not requirements of relational systems, but are requirements in XML. Using the ordpath numbering system makes satisfying these requirements easier for the query engine; the ordpath format contains document order and structure information. See the paper ORDPATHs: Insert-Friendly XML Node Labels by Patrick and Elizabeth O'Neil et al., for all of the details on ordpath.
Once the primary XML index has been created, an additional three kinds of secondary XML index can be created. The secondary XML indexes assist in certain types of XQuery processing. These are called the PATH, PROPERTY, and VALUE indexes. For example, you can create a PATH secondary index using the primary XML index created above like this:
Secondary XML indexes are actually indexes on the node table. The PATH index, for example in a normal non-clustered index on the (HID, VALUE) columns of the node table. To see the key columns for all the indexes on the node table in index order, you can execute this query:
select i.name as indexname, c.name as colname, ic.* from sys.index_columns ic join sys.columns c on ic.column_id = c.column_id and ic.object_id = c.object_id join sys.indexes i on ic.object_id = i.object_id and ic.index_id = i.index_id where ic.object_id = (select object_id from sys.indexes where name = 'invoiceidx' and type = 1) order by index_id, key_ordinal
How the Indexes Help
Now that we've seen what the XML indexes consist of in terms of columns and rows, let's see how they are useful with particular kinds of XML queries. We'll begin with a discussion of how SQL Server 2005 actually executes a SQL query that contains an XML data type method.
When an SQL-with-XML query is executed against a table containing an XML data type column, the query must process every XML instance in every row. At the top level, there are two ways that such a query can be executed:
- Select the rows in the base table (that is, the relational table that contains the XML data type column) that qualify first, and then process each XML instance using XQuery. This is known as top-down query processing.
- Process all XML instances using the XQuery first, and then join the rows that qualify to the base table. This is known as bottom-up query processing.
The SQL Server query optimizer analyzes both the XQuery pieces and relational pieces of the query as a single entity, and creates a single query plan that encompasses and best optimizes the entire SQL statement.
If you've created only the primary XML index it is almost always used in each step of the XQuery portion of the query plan. It is better to use the primary XML index to process the query in almost every case. Without a primary XML index, a table-valued function is used to evaluate the query, as can be seen in this query plan fragment:
Once the primary XML index is in place, the optimizer chooses which indexes to use. If you have all three secondary indexes, there are actually four choices:
- Index scan or seek on the primary XML index
- Index scan or seek on node table's PATH index
- Index scan or seek on node table's PROPERTY index
- Index scan or seek on node table's VALUE index
The primary XML index is clustered (data is stored) in XML document order; this makes it ideal for processing subtrees. Much of the work in XML queries consists of processing subtrees or assembling an answer by using subtrees, so the clustered index on the node table will be frequently used. Here's an example of how the same query plan looks after only the primary XML index is created.
The PATH, PROPERTY, and VALUE index are more special purpose and are meant to help specific queries. We'll continue with examples that use the exist() and query() methods on the XML data type.
The PATH XML index is built on the Path ID (HID) and Value columns of the primary XML index. Because it contains both paths and values, if you need the value (for comparison) by using the path, it is a good "covering" index, as shown in this query:
You need to have two conditions for the PATH index to be useful. You'll need the path to the node you're using and the path should not contain predicates or wildcards. Knowing both the path and value enables index seeks into the PATH index. The following example uses the PATH to determine which rows contain InvoiceID 1003 and the primary XML index to find the Invoice node serialize its value as output:
Changing the query to contain both a predicate and wildcard in the path does not use the PATH index:
The PROPERTY index contains the primary key of the base table (pk1), Path ID (HID), and Value, in that order. Because it also contains the primary key of the base table, it helps for searching multi-valued properties in the same XML instance. Even though all the Invoice documents have the same specific structure, this is not known to the XQuery processor and therefore every attribute and subelement is considered part of a property bag. We'll see later that typing the XML by using an XML schema lessens the number of unknown property bags the processor has; the structure is known through the schema. In the preceding example, PROPERTY index is used to scan for CustomerName elements under Invoice; CustomerName is considered part of a property bag of subelements. Even when attributes are used in predicates, property CustomerName is useful. In the example below, the PROPERTY index is used to search by Invoice elements anywhere in the document they occur.
In queries like this, the PROPERTY index is preferred over the path index if both are available because the PATH is not very selective. If you change the selectivity of the comparison predicate:
then the PROPERTY index will be used.
The VALUE index contains the same index columns as the PATH index, Value and Path ID (HID), but in the reverse order. Because it contains the value before the path, it’s useful for expressions that contain both path wildcards and values, such as:
-- uses value index if the search value "Mary Weaver" is more selective than the path select * from xmlinvoice where invoice.exist('/Invoice/CustomerName/text()[. = "Mary Weaver"]') = 1 -- uses value index due to path wildcard and attribute wildcard //Invoice/LineItems/LineItem/@*[. = "special"]
Note that, if the preferred type of secondary XML index is not available, an alternate secondary index or the primary XML index may be used. In the example above, if the VALUE secondary index is not available the query processor might decide to use the primary XML index. If the PROPERTY secondary index is not available the processor often uses a two-step process combining PATH and the primary XML index; sometimes a two-step process is used even with the PROPERTY index. Adding another step (i.e., JOIN) to the query plan almost always results in a slower query.
So far, we've only been using the exist() method on the XML data type using a single path and predicate. Things work approximately the same way with the other XML methods. The query method may use node construction in addition to selection. Construction is optimized by using a special tag "Desc" that can be seen in the query plan. Any part of the XQuery that requires selection, however, will use the same (sub) plan as we've been seeing. Bear in mind that any index observations are made with specific sets of data; your results may vary.
XQuery and Schema-Validated Columns
When an XML Schema Collection validates the XML data type column, the order and structure of the documents and the cardinality of each subelement may be known at query compilation time. This allows the query optimizer more chances to optimize the query. We can specify an XML schema for Invoices in a schema collection named invoice_xsd and restrict the XML column to contain only documents (the XML data type can ordinarily contain documents or fragments), and it would look like this:
When we issue the same queries against a schema-valid column, there seem to be three major changes in query plan and index usage.
- More bottom-up type queries. Because of the XML schema, the number of nodes that need to be searched for a specific document is known, and sometimes fewer than the number of documents (rows) in the table. When this occurs, a bottom-up query will filter away more of the data.
- Greater use of the VALUE secondary index, as opposed to PROPERTY and PATH. Because of the schema, the processor knows that a specific element occurs in only one place in the document, and also that the type of values that the VALUE index is more important and useful and filtering can be done in one step instead of two.
- If an element is defined as a numeric or integral data type, scans for a numeric range (e.g., LineItems priced between $20 and $30) can be done more efficiently. No separate step consisting of data type conversion is required.
As an example of the greater usage of VALUE index, the following query changes from a top-down query with a two-step (sub)plan using PROPERTY index and clustered node table index to a bottom-up query with a one-step (sub)plan using the VALUE index.
The DOCUMENT qualifier is used to infer the cardinality of 1 for the top-level element. DOCUMENT means that the column must contain a document with a single XML root element (no fragments); this is used for data validation and static type inference. However, a predicate expression that starts with //Invoice is optimized differently (uses VALUE index) than one that starts with /Invoice (uses PATH index). The performance of the two will likely be close.
Index and Workload Analysis
Given the fact that the primary XML index is taking up three times the space of the XML content in the data type, if you could choose only one secondary XML index, which one would you choose? It really depends on your workload. The good news is that, because SQL and XQuery are combined to yield a single query plan, ordinary plan analysis, via any of the showplan methods including graphic showplan in SQL Server Enterprise Manager, will work just as well for XML indexes as with relational indexes. You create the index and observe the effect on the query plan. There are a few caveats, however. First, you cannot force index query hints on XML indexes for the purpose of comparing different index strategies for performance. Also, although all four XML indexes on an XML column are used for query optimization and are "ordinary" relational indexes, Database Tuning Advisor does not suggest XML indexes.
When reading a showplan for a SQL/XQuery query, there are a couple of new XQuery specific items to recognize:
- Table-Valued Function XML Reader UDF with XPath Filter—this item refers to the on-the-fly creation of a rowset having the node table format (the node table is not actually created) for the XQuery portion of the query. You'll only see this when doing queries on an XML column when no XML indexes exist.
- UDX—this item refers to internal operators for XQuery processing. There are five such operators; the name of the operator can be found in the "Name" property if you bring up the Properties window (note: this does not show up in the "hover-over" query step information). The operators are:
- Serializer UDX—serializes the query result as XML
- TextAdd UDX—evaluates the XQuery string() function
- Contains UDX—evaluates the XQuery contains() function
- Data UDX—evaluates the XQuery data() function
- Check UDX—validates XML being inserted
Other Tips to Speed Up Your XML Queries
Use specific XQuery query styles: You might notice that using the dot (.) in a predicate produced a different (and simpler and faster) query plan than using the attribute name in the predicate. In the examples above, compare the two queries:
and
Although the result is the same the latter form usually requires one more evaluation step. This is because the query processor is evaluating only one node in the first form (using the PATH index if its present) and is using two evaluation steps (one for /Invoice, one for /Invoice/InvoiceID) in the second form. Although looking at the plan for two "equivalent" queries might seem strange for XML aficionados, SQL query tuners have been doing this for years. Note that the two queries above only produce the same results when using the XML data type exist method, they produce different results when used with the query method.
Avoid wildcards in your queries if possible: Wildcards in a path expression containing elements (e.g., /Invoice//Sku/*) are only useful if you don't know the exact structure of the document, or if the Sku element can occur at different levels of hierarchy. In general, you should structure your document to avoid this, although this is not possible when your data structure uses recursion.
Hoist often searched XML values to relational values: If a given attribute is used frequently in predicates, you can save query-processing time by making this a computed column or redundant column in your relational table. If you always find yourself search on InvoiceID, making it a column allows top-down queries to work more effectively. You might not even have to use the XML instance in the query, if you want the entire document. Refer to Performance Optimizations for the XML Data Type by Shankar Pal et al., for examples of how to do this with both single and multi-valued attributes.
Use full-text search in conjunction with XQuery: To search XML documents or do content-sensitive queries on text, you can combine SQL Server full-text search with XQuery. Full-text search will index the text nodes of XML documents, though not the elements or attributes. You can use the FULLTEXT contains verb to do stem-based or context-based queries over an entire collection of documents (this is most often the top-down part of the query) to select individual documents to operate on, then use XQuery to do structural element and attribute-sensitive structures. Remember that the XQuery contains verb is not at all the same as the FULLTEXT contains verb. XQuery contains is a substring-based function and the SQL Server 2005 implementation uses a binary collation. See XML Support in Microsoft SQL Server 2005 by Shankar Pal et al., for an example of combining fulltext and XQuery.
Wrap-up
I hope you've enjoyed the tour through the XML indexes and other hints to make your XML queries run faster. Remember that, as with any index, excessive use of XML indexes can make insert and modification methods run slower, because the index is maintained along with the raw data. This is especially true of the node table, because the entire document must be shredded during each insert, although modification does not require replacing the entire document. XML indexes should be managed like other indexes with respect to dropping and recreating the indexes in conjunction with bulk loading, index defragmenting, and other database administration techniques.
Using the mature relational query engine to produce a single query plan for both the SQL and XQuery parts of the query should make the SQL Server 2005 implementation of XML queries one of the fastest and easiest to predict and tune. Use this power to your advantage.
About the authorAbout the author
Bob Beauchemin is a database-centric application practitioner and architect, instructor, course author, writer, and Director of Developer Skills for SQLskills. Over the past two years he's been teaching his SQL Server 2005 course to premier customers worldwide through the Ascend program. He is lead author of the book "A First Look at SQL Server 2005 For Developers", author of "Essential ADO.NET" and written articles on SQL Server and other databases, ADO.NET, and OLE DB for MSDN, SQL Server Magazine, and others. Bob can be reached at bobb@sqlskills.com. | https://msdn.microsoft.com/en-us/library/ms345121(v=sql.90).aspx | CC-MAIN-2018-26 | refinedweb | 3,965 | 58.32 |
java.lang.Object
org.apache.shale.usecases.rolodex.RolodexDaoorg.apache.shale.usecases.rolodex.RolodexDao
public class RolodexDao
Data Access Object for the roledex use case.
public static final char[][] TAB_INDEX
Tab indexes for the rolodex. Each sub array represents the starting and ending character index.
public RolodexDao()
The constructor loads some default data.
public List getTabs()
Returns a list of
SelectItem that will be used to build
the links.
public int saveContact(Contact entity)
Saves a
Contact to the mock data store.
public int findTabForContact(Contact contact)
This function will find the tabIndex that the
will be located on. It will default to the first page.
public List findContactsForTab(int index)
Retuns a subset of contacts within the
index of a a tab
defined by
TAB_INDEX from the mock data store. If this was
a RDBMS data store, the contacts returned might be a "ghost" object
meaning that maybe only the
name attribute would be
populated.
public void deleteContact(Contact entity)
Removes the contact from the mock data store.
public void loadStates()
Loads the
State codes and contacts from an XML data source. The
stateDataStore set will hold the states where the
countryDataStore set will hold the countries. The target type of
these collections will be SelectItem. The contacts are held in the
entityDataStore set.
public javax.faces.model.SelectItem[] getStates()
Returns an array of states used to populate a select list. The target type is an array of SelectItem.
public javax.faces.model.SelectItem[] getCountries()
Returns an array of countries used to populate a select list. The target type is an array of SelectItem.
public Contact findContact(String name)
Returns the latest copy of the
Contact by primary key.
name- contact name that uniquely identifies a Contact. | http://shale.apache.org/1.0.4/shale-apps/shale-clay-usecases/apidocs/org/apache/shale/usecases/rolodex/RolodexDao.html | CC-MAIN-2016-26 | refinedweb | 288 | 50.23 |
org.openide.xml.XMLUtil
I suddenly realized that my ruminations on org.openide.xml.XMLUtil, yesterday, might be helpful to someone I met at the NetBeans booth at Sun Tech Days in Johannesburg, a few weeks ago. He wanted to create a NetBeans plugin that would generate language-specific client stubs from a WSDL file. The basic concept is that you would open the WSDL file in the IDE, right-click inside it, and then choose a menu item that says, something like, "Generate Client Stubs". And then you'd get a new HTML file with a list of client stubs for interacting with the service exposed via the WSDL. You'd get a stub in JavaScript, in C++, in Java, and in anything else that's relevant. You, as the provider of the WSDL, would then post your WSDL on your server and then ALSO post the HTML file with client stubs. The user of your WSDL would then have a starting point, for whatever language they're coding in.
I:
- Create a new module project, name it whatever you like.
- Make sure the distro of NetBeans IDE that you are using has specific support for WSDL files (i.e., to check this, create a file with a WSDL extension or open one with that extension, and then see if a special icon is shown for it and that the editor has lots of WSDL-specific support or not). If not, go to the Plugin Manager, search for "wsdl" and then install the WSDL plugin.
- Create a class that extends CookieAction. (If you use the Action wizard, specify that the action should be conditionally enabled, for the EditorCookie class, and the text/x-wsdl+xml MIME type under "Editor Context Menu Item").
- Registration of the class in the layer.xml would probably be like this:
<>
- Define the CookieAction.performAction as follows, taking note of the line in bold, as well as all the FQN for DOM objects (because that's the point of this blog entry):.
- However, let's look more closely at the XMLUtil.parse method. First of all, you can hook your own org.xml.sax.ErrorHandler into the parser, very easily:"); } }
- Let's now look at another argument to the XMLUtil.parse method. Simply change the first false boolean to true:):
- The final argument to the XMLUtil.parse method is amply described in the related Javadoc. The point is that even if you have set the first boolean to false, i.e., even if you do NOT want to validate, the ui will still be blocked if you have a DTD declaration or Schema declaration. The ui will be blocked because a network connection will be made, based on the URL specified in the DTD declaration or Schema declaration. You can speed up parsing by defining an entity resolver, as described in the related Javadoc.
- And what about the second boolean? That, if true, makes your parser namespace aware. As this document tells you: "A parser that is namespace aware can recognize the structure of names in a namespace-with a colon separating the namespace prefix from the name. A namespace aware parser can report the URI and local name separately for each element and attribute. A parser that is not namespace aware will report only an element or attribute name as a single name even when it contains a colon. In other words, a parser that is not namespace aware will treat a colon as just another character that is part of a name."!
Mar 25 2008, 05:58:04 AM PDT Permalink
Hi. Can I somehow use this library outside Module project, in my only java application? Thanks
Posted by Maxx on April 28, 2008 at 04:18 AM PDT # | http://blogs.sun.com/geertjan/entry/org_openide_xml_xmlutil | crawl-001 | refinedweb | 626 | 71.24 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
I recently had this discussion on stack overflow
and a user pointed me out that maybe i am leaving the openGL context in a different state each run, and that maybe i can switch openGL context.
wich made me look into:
Of course almost anyone seems more in the know about openGL JOGL ecc... than me and im REALLY far from attempting a pull.
i was wondering if it was possible to do something like
"switching context"
frame.addWindowListener( new WindowAdapter() { @ Override public void windowClosing(WindowEvent we) { "clearing or switching context" System.exit(0); } } );
and/or maybe if someone can advise me on renderers used by processing or possible combinations to obtain a clean state.
Beacause using awt in addition to processing seems usefull enough to me and its working 80% of the time. of course 20% random fail makes what im doing trash.
Answers
As a user on stackoverflow already mentioned. Its Most likely that you get a solution, when people can debug some code and contribute I know few things about OpenGL, i have no idea of ATW. I recomend to post a minimal example of ATW&Processing&JOGL you are using. Could look like this:
Just to clearing the buffer(any content) JOGL could look like this:
You don't have to you can run 10 fragmentShader and release only 2 of them and use the others by something else.
I not sure what you are meaning with switching content. You have a single application with multiple windows? Basically you write something to a "buffer" and it will be their until you don't release it. So switching content you mean switching buffer, i guess.
Good Luck.
the awt components are like this
public class FormVM extends Panel implements ActionListener,Drawable{
}
in the setup() method they add themselves to the PApplet
a custom component is like this
public class RenderArea extends XYGroup{
}
and it is rendered in the main PApplet draw() loop like this
In the above example RenderArea renders the components within it after translating them with slidebars, it also clips the area so that it can contain an area larger than itself.
This is only an example, but here the push/pop and the clip() functions break up at random at the very beginning of the run, else they dont at all.
the loading is made by detecting all components, instantiating them, and calling their setup method.
There is only one instance of PApplet per jvm, and multiple jvms can run without problems, only i occasionally get draw loop fails on push/pop clip() ecc...
also i saw sometimes a pop failing right after a push, with too many pop exception suggesting that there are interactions with "something" other in graphics, i guess the awt thread but i dont know at all about graphics, i tought this was a rather simple thing to do and just wanted a clean start.
@JDev
personal im out of time. what i can say: If you building jogl (with atw) from source the tests runnig about **1 hour ** various window creations (multiple windows, splitwindows, offscreen etc) tones of examples. hope you will find what you search
try also to post the same thing into the jogl forum, the users their are more experience with atw. | https://forum.processing.org/two/discussion/22449/switching-opengl-context | CC-MAIN-2019-35 | refinedweb | 566 | 66.78 |
This module extends SQLAlchemy and provides additional DDL [1] support.
Extensions to SQLAlchemy for altering existing tables.
At the moment, this isn’t so much based off of ANSI as much as things that just happen to work with multiple databases.
Extends ANSI SQL dropper for column dropping (ALTER TABLE DROP COLUMN).
Drop a column from its table.
Extends ansisql generator for column creation (alter table add col)
Create a column (table already exists).
Migrate’s constraints require a separate creation function from SA’s: Migrate’s constraints are created independently of a table; SA’s are created at the same time as the table.
Gets a name for the given constraint.
If the name is already set it will be used otherwise the constraint’s autoname method is used.
Manages changes to existing schema elements.
Note that columns are schema elements; ALTER TABLE ADD COLUMN is in SchemaGenerator.
All items may be renamed. Columns can also have many of their properties - type, for example - changed.
Each function is passed a tuple, containing (object, name); where object is a type of object you’d expect for that function (ie. table for visit_table) and name is the object’s new name. NONE means the name is unchanged.
Starts ALTER COLUMN
Rename/change a column.
Rename an index
Rename a table. Other ops aren’t supported.
Common operations for ALTER TABLE statements.
Append content to the SchemaIterator’s query buffer.
Execute the contents of the SchemaIterator’s buffer.
Returns the start of an ALTER TABLE SQL-Statement.
Use the param object to determine the table name and use it for building the SQL statement.
This module defines standalone schema constraint classes.
Bases: migrate.changeset.constraint.ConstraintChangeset, sqlalchemy.schema.CheckConstraint
Construct CheckConstraint
Migrate’s additional parameters:
Create the constraint in the database.
Drop the constraint from the database.
used to allow SchemaVisitor access
Bases: object
Base class for Constraint classes.
Create the constraint in the database.
Drop the constraint from the database.
Bases: migrate.changeset.constraint.ConstraintChangeset, sqlalchemy.schema.ForeignKeyConstraint
Construct ForeignKeyConstraint
Migrate’s additional parameters:
Mimic the database’s automatic constraint names
Create the constraint in the database.
Drop the constraint from the database.
used to allow SchemaVisitor access
Bases: migrate.changeset.constraint.ConstraintChangeset, sqlalchemy.schema.PrimaryKeyConstraint
Construct PrimaryKeyConstraint
Migrate’s additional parameters:
Mimic the database’s automatic constraint names
Create the constraint in the database.
Drop the constraint from the database.
used to allow SchemaVisitor access
Bases: migrate.changeset.constraint.ConstraintChangeset, sqlalchemy.schema.UniqueConstraint
Construct UniqueConstraint
Migrate’s additional parameters:
New in version 0.6.0.
Mimic the database’s automatic constraint names
Create the constraint in the database.
Drop the constraint from the database.
used to allow SchemaVisitor access
This module contains database dialect specific changeset implementations.
Firebird database specific implementations of changeset classes.
Firebird column dropper implementation.
Firebird supports ‘DROP col’ instead of ‘DROP COLUMN col’ syntax
Drop primary key and unique constraints if dropped column is referencing it.
Firebird column generator implementation.
Firebird constaint dropper implementation.
Cascading constraints is not supported
Firebird constraint generator implementation.
Firebird schema changer implementation.
Rename table not supported
PostgreSQL database specific implementations of changeset classes.
PostgreSQL column dropper implementation.
PostgreSQL column generator implementation.
PostgreSQL constaint dropper implementation.
PostgreSQL constraint generator implementation.
PostgreSQL schema changer implementation.
SQLite database specific implementations of changeset classes.
SQLite ColumnDropper
SQLite ColumnGenerator
SQLite SchemaChanger
Does not support ALTER INDEX
Module for visitor class mapping.
Get the visitor implementation for the given dialect.
Finds the visitor implementation based on the dialect class and returns and instance initialized with the given name.
Binds dialect specific preparer to visitor.
Get the visitor implementation for the given database engine.
Taken from sqlalchemy.engine.base.Engine._run_single_visitor() with support for migrate visitors.
Schema module providing common schema operations.
Create a column, given the table.
API to ChangesetColumn.create().
Drop a column, given the table.
API to ChangesetColumn.drop().
Alter a column.
This is a helper function that creates a ColumnDelta and runs it.
Rename a table.
If Table instance is given, engine is not used.
API to ChangesetTable.rename().
Rename an index.
If Index instance is given, table and engine are not used.
API to ChangesetIndex.rename().
Changeset extensions to SQLAlchemy tables.
Creates a column.
The column parameter may be a column definition or the name of a column in this table.
API to ChangesetColumn.create()
Remove this table from its metadata
Drop a column, given its name or definition.
API to ChangesetColumn.drop()
Rename this table.
Changeset extensions to SQLAlchemy columns.
Makes a call to alter_column() for the column this method is called on.
Create a copy of this Column, with all attributes.
Create this column in the database.
Assumes the given table exists. ALTER TABLE ADD COLUMN, for most databases.
Drop this column from the database, leaving its table intact.
ALTER TABLE DROP COLUMN, for most databases.
Changeset extensions to SQLAlchemy Indexes.
Change the name of an index.
Implements comparison between DefaultClause instances
Extracts the differences between two columns/column-parameters
May receive parameters arranged in several different ways:
Additional parameters can be specified to override column differences.
Additional parameters alter current_column. Table name is extracted from current_column object. Name is changed to current_column.name from current_name, if current_name is specified.
Table kw must specified.
Populate dict and column object with new values
Compares two types to be equal
Compares one Column object
Compares two Column objects
Compares Column objects with reflection
Processes default values for column
This package provides functionality to create and manage repositories of database schema changesets and to apply these changesets to databases.
This module provides an external API to the versioning system.
Changed in version 0.6.0: migrate.versioning.api.test() and schema diff functions changed order of positional arguments so all accept url and repository as first arguments.
Changed in version 0.5.4: --preview_sql displays source file when using SQL scripts. If Python script is used, it runs the action with mocked engine and returns captured SQL statements.
Changed in version 0.5.4: Deprecated --echo parameter in favour of new migrate.versioning.util.construct_engine() behavior.
%prog db_version URL REPOSITORY_PATH
Show the current version of the repository with the given connection string, under version control of the specified repository.
The url should be any valid SQLAlchemy connection string.
%prog upgrade URL REPOSITORY_PATH [VERSION] [–preview_py|–preview_sql]
Upgrade a database to a later version.
This runs the upgrade() function defined in your change scripts.
By default, the database is updated to the latest available version. You may specify a version instead, if you wish.
You may preview the Python or SQL code to be executed, rather than actually executing it, using the appropriate ‘preview’ option.
%prog drop_version_control URL REPOSITORY_PATH
Removes version control from a database.
%prog help COMMAND
Displays help on a given command.
%prog script DESCRIPTION REPOSITORY_PATH
Create an empty change script using the next unused version number appended with the given description.
For instance, manage.py script “Add initial tables” creates: repository/versions/001_Add_initial_tables.py
%prog test URL REPOSITORY_PATH [VERSION]
Performs the upgrade and downgrade option on the given database. This is not a real test and may leave the database in a bad state. You should therefore better run the test on a copy of your database.
%prog create REPOSITORY_PATH NAME [–table=TABLE]
Create an empty repository at the specified path.
You can specify the version_table to be used; by default, it is ‘migrate_version’. This table is created in all version-controlled databases.
%prog manage FILENAME [VARIABLES...]
Creates a script that runs Migrate with a set of default values.
For example:
%prog manage manage.py --repository=/path/to/repository --url=sqlite:///project.db
would create the script manage.py. The following two commands would then have exactly the same results:
python manage.py version %prog version --repository=/path/to/repository
%prog update_db_from_model URL REPOSITORY_PATH MODEL
Modify the database to match the structure of the current Python model. This also sets the db_version number to the latest in the repository.
NOTE: This is EXPERIMENTAL.
%prog create_model URL REPOSITORY_PATH [DECLERATIVE=True]
Dump the current database as a Python model to stdout.
NOTE: This is EXPERIMENTAL.
%prog source VERSION [DESTINATION] –repository=REPOSITORY_PATH
Display the Python code for a particular version in this repository. Save it to the file at DESTINATION or, if omitted, send to stdout.
%prog version REPOSITORY_PATH
Display the latest version available in a repository.
%prog make_update_script_for_model URL OLDMODEL MODEL REPOSITORY_PATH
Create a script changing the old Python model to the new (current) Python model, sending to stdout.
NOTE: This is EXPERIMENTAL.
%prog compare_model_to_db URL REPOSITORY_PATH MODEL
Compare the current model (assumed to be a module level variable of type sqlalchemy.MetaData) against the current database.
NOTE: This is EXPERIMENTAL.
%prog downgrade URL REPOSITORY_PATH VERSION [–preview_py|–preview_sql]
Downgrade a database to an earlier version.
This is the reverse of upgrade; this runs the downgrade() function defined in your change scripts.
You may preview the Python or SQL code to be executed, rather than actually executing it, using the appropriate ‘preview’ option.
%prog version_control URL REPOSITORY_PATH [VERSION]
Mark a database as under this repository’s version control.
Once a database is under version control, schema changes should only be done via change scripts in this repository.
This creates the table version_table in the database.
The url should be any valid SQLAlchemy connection string.
By default, the database begins at version 0 and is assumed to be empty. If the database is not empty, you may specify a version at which to begin instead. No attempt is made to verify this version’s correctness - the database schema is expected to be identical to what it would be if the database were created from scratch.
%prog script_sql DATABASE DESCRIPTION REPOSITORY_PATH
Create empty change SQL scripts for given DATABASE, where DATABASE is either specific (‘postgresql’, ‘mysql’, ‘oracle’, ‘sqlite’, etc.) or generic (‘default’).
For instance, manage.py script_sql postgresql description creates: repository/versions/001_description_postgresql_upgrade.sql and repository/versions/001_description_postgresql_downgrade.sql
Code to generate a Python model from a database or differences between a model and database.
Some of this is borrowed heavily from the AutoCode project at:
Various transformations from an A, B diff.
In the implementation, A tends to be called the model and B the database (although this is not true of all diffs). The diff is directionless, but transformations apply the diff in a particular direction, described in the method name.
Generate a migration from B to A.
Was: toUpgradeDowngradePython Assume model (A) is most current and database (B) is out-of-date.
Generates the source code for a definition of B.
Assumes a diff where A is empty.
Was: toPython. Assume database (B) is current and model (A) is empty.
Goes from B to A.
Was: applyModel. Apply model (A) to current database (B).
A path/directory class.
A class associated with a path/directory tree.
Only one instance of this class may exist for a particular file; __new__ will return an existing instance if possible
Ensures a given path already exists
Ensures a given path does not already exist
SQLAlchemy migrate repository management.
A collection of changes to be applied to a database.
Changesets are bound to a repository and manage a set of scripts from that repository.
Behaves like a dict, for the most part. Keys are ordered based on step value.
Add new change to changeset
In a series of upgrades x -> y, keys are version x. Sorted.
Run the changeset scripts
A project’s change script repository
Create a changeset to migrate this database from ver. start to end/latest.
Create a repository at a specified path
Create a project management script (manage.py)
API to migrate.versioning.version.Collection.create_new_python_version()
API to migrate.versioning.version.Collection.create_new_sql_version()
Prepare a project configuration file for a new project.
Ensure the target path is a valid repository.
API to migrate.versioning.version.Collection.version
Returns repository id specified in config
API to migrate.versioning.version.Collection.latest
Returns use_timestamp_numbering specified in config
Returns version_table name specified in config
Database schema version management.
A database under version control
API to Changeset creation.
Uses self.version for start version and engine.name to get database name.
Compare the current model against the current database.
Declare a database to be under a repository’s version control.
Dump the current database as a Python model.
Remove version control from a database.
Load controlled schema version info from DB
Modify the database to match the structure of the current Python model.
Update version_table with new information
Upgrade (or downgrade) to a specified version, or latest version.
Schema differencing support.
Container for differences in one Column between two Table instances, A and B.
The most generic type of the Column object in A.
The most generic type of the Column object in A.
Compute the difference between two MetaData objects.
The string representation of a SchemaDiff will summarise the changes found between the two MetaData objects.
The length of a SchemaDiff will give the number of changes found, enabling it to be used much like a boolean in expressions.
A sequence of table names that were found in B but weren’t in A.
A sequence of table names that were found in A but weren’t in B.
A dictionary containing information about tables that were found to be different. It maps table names to a TableDiff objects describing the differences found.
Container for differences in one Table between two MetaData instances, A and B.
A sequence of column names that were found in B but weren’t in A.
A sequence of column names that were found in A but weren’t in B.
A dictionary containing information about columns that were found to be different. It maps column names to a ColDiff objects describing the differences found.
Return differences of model against database.
Return differences of model against another model.
Base class for other types of scripts. All scripts have the following properties:
Core of each BaseScript subclass. This method executes the script.
Ensure this is a valid script This version simply ensures the script file’s existence
Bases: migrate.versioning.script.base.BaseScript
Base for Python scripts
Create an empty migration script at specified path
Create a migration script based on difference between two SA models.
Mocks SQLAlchemy Engine to store all executed calls in a string and runs PythonScript.run
Ensures a given path already exists
Ensures a given path does not already exist
Core method of Script file. Exectues update() or downgrade() functions
Ensure this is a valid script This version simply ensures the script file’s existence
Ensure path is a valid script
Calls migrate.versioning.script.py.verify_module() and returns it.
Bases: migrate.versioning.script.base.BaseScript
A file containing plain SQL statements.
Create an empty migration script at specified path
Ensures a given path already exists
Ensures a given path does not already exist
Runs SQL script through raw dbapi execute call
Ensure this is a valid script This version simply ensures the script file’s existence
The migrate command-line tool.
Shell interface to migrate.versioning.api.
kwargs are default options that can be overriden with passing –some_option as command line option
Memoize(fn) - an instance which acts like fn but memoizes its arguments Will only work on functions with non-mutable arguments
ActiveState Code 52201
Do everything to use object as bool
Decorator that catches known api errors
New in version 0.5.4.
Constructs and returns SQLAlchemy engine.
Currently, there are 2 ways to pass create_engine options to migrate.versioning.api functions:
Note
keyword parameters override engine_dict values.
Do everything to guess object type from string
Tries to convert to int, bool and finally returns if not succeded.
Import module and use module-level variable”.
Changed in version 0.5.4.
Decorator for migrate.versioning.api functions to safely close resources after function usage.
Passes engine parameters to construct_engine() and resulting parameter is available as kw[‘engine’].
Engine is disposed after wrapped function is executed.
A collection of versioning scripts in a repository
Create Python files for new version
Create SQL files for new version
Returns latest Version if vernum is not given. Otherwise, returns wanted version
A namespace for file extensions
A version number that behaves like a string and int at the same time
A single version in a collection :param vernum: Version Number :param path: Path to script files :param filelist: List of scripts :type vernum: int, VerNum :type path: string :type filelist: list
Add script to Collection/Version
Returns SQL or Python Script
Replaces spaces, (double and single) quotes and double underscores to underscores
Provide exception classes for migrate
Base class for API errors.
Base class for controlled schema errors.
Database shouldn’t be under version control, but it is
Database should be under version control, but it’s not.
Error base class.
Invalid constraint error
Invalid repository error.
Invalid script error.
Invalid version error.
A known error condition.
Warning for deprecated features in Migrate
The table does not exist.
Not supported error
Base class for path errors.
A path with a file was required; found no file.
A path with no file was required; found a file.
Base class for repository errors.
Base class for script errors.
A known error condition where help should be displayed.
This database is under version control by another repository. | http://readthedocs.org/docs/sqlalchemy-migrate/en/v0.7.2/api.html#sqlite-d | crawl-003 | refinedweb | 2,883 | 52.15 |
On Tue, 2009-06-23 at 15:53 -0700, Jeremy Fitzhardinge wrote:> On 06/23/09 15:41, Benjamin Herrenschmidt wrote:> >> Do you have any other cases in mind where it would be helpful?> >> > >> > Well, it might be for virtual device discovery etc... but don't bother> > now. We might talk about it at KS for those interested. It's more> > something we see as useful for embedded archs at the moment but in the> > long run it might make sense for hypervisors as well.> > > > Perhaps. We have Xenbus - which is a little bit like OF in that it has> data in a hierarchical namespace - and I guess it might be possible to> find a mapping onto some generic OF-like interface.Which is sort-of what we did. IE. We disconnected the device-tree itselffrom the underlying firmware, using the device-tree and OF-stylebindings (in some case simplified) as a basis for representing devicesbut without the need for an actual open firmware underneath.> However, Xenbus is an active communication channel between virtual machines> rather than a> static representation of a machine configuration (for example, you can> put a watch on a particular path to get a notification of when someone> else changes it).On ppc64 too, the HV can feed us with new tree nodes or remove some, itdoesn't have to be static. Though we mostly use it as a static tree onembedded.> But, yes, this is a good KS hallway track subject.Cheers,Ben. | https://lkml.org/lkml/2009/6/23/699 | CC-MAIN-2016-44 | refinedweb | 248 | 60.45 |
Porting GuideThis chapter lists and describes compiler and/or linker errors and other problems that can occur when porting existing code to Digital Mars C++.
What's in This Chapter
- General porting issues.
- Porting issues pertaining to code written with Microsoft Visual C++.
- Porting issues pertaining to code written with Borland C++ Version 3.
- Porting issues pertaining to older Zortech C++ code, or code written with previous versions of Digital Mars C++.
- Issues pertaining to porting 16-bit Windows 3.1 code to Windows 95 or Windows NT.
General tips on porting to Digital Mars C++This section provides tips on how to solve problems that might occur when porting code to Digital Mars C++ that was written with another compiler, or with a previous release of Digital Mars C++ or Zortech C++. For related information, see Switching to Digital Mars C++.
Problem: iostreams library incompatible with -JuThe Digital Mars C++ Version 7 iostreams library is incompatible with code compiled with the -Ju compiler option; syntax errors and link errors result. This is because, when you specify -Ju, the compiler treats char and unsigned charas the same type. This can cause distinct functions to appear to be the same, and can make mangled names different from the corresponding names in the iostreams library.
Here are two recommended solutions:
- Do not compile code that includes iostreams headers with the -Ju compiler option.
- Replace all unsigned char and BYTE types with char, and compile with the -J option. (This solution is more compatible with Microsoft C++.)
Problem: Linker error "EXE header > 64k"This error results if your code exports too many names. If names in a DLL are not referenced by name at run-time using the Windows API function GetProcAddress(), you can specify the /BYORDINAL and /NONAMES linker options, which remove all names from the DLL header and use the corresponding ordinal numbers instead.
Problem: GetProcAddress() failsIf you are exporting names by ordinal value (see above), no names are included in a DLL, and a call to GetProcAddress() with the name of the procedure will fail. This can also result in problems at link time and load time. Here are two recommended solutions:
- Link with the linker option /XUPPER.
- Define an ordinal number for each name in the definition (. DEF) file, and call GetProcAddress() with the ordinal number instead of the procedure name.
Problem: Unsupported build stepsYour .MAK or .BAT files may contain build steps that the Digital Mars C++ IDDE does not support. For example, a makefile might call an SQL preprocessor that emits C code. The SQL code would need to be processed before the IDDE compiles the C files. You can solve this problem by adding a .MAK file to the Digital Mars C++ project. Put it before the compile step in the IDDE's Build Order list box. Then make a target "clean" (which should not be the first target); "clean" will be built when you rebuild the entire project. Note that the first target will be built in the makefile directory; "clean" will be built in the project directory.
Problem: Linker error "DGROUP + Stack + Heap exceeds 64k-16"The data segment of a 16-bit program can only be 64K bytes long. The combined data definitions of all your modules has exceeded this threshold. To solve this problem, check the "Set Data Threshold" check box on the Code Generation subpage of the Build page in the IDDE's Project Settings dialog box (or use the -GT1 compiler option). This directs the compiler to place data objects larger than the specified size in their own far data segments. Setting the size to 1 puts all data objects in individual data segments.
Problem: Moving a project "loses" the librariesIn this case, the compiler cannot find libraries after you move an existing IDDE project to a new directory. When you add a library in the \dmc\lib directory to a project, do not specify a path. This prevents the IDDE from assigning the library a path that is relative to the project directory.
Problem: Path problems in sc.iniIf you specify the include path for your project in sc.ini, non-Digital Mars tools that rely on the definition of the INCLUDE environment variable for path information cannot access the information. Use autoexec.bat, the Windows NT Registry, or the project include directory setting instead.
Problem: Path problems in autoexec.batIf you specify the include path for your project in autoexec. bat, you need to restart Windows in order to change it, and the information will not be used for Windows NT builds. Use sc.ini, the Windows NT Registry, or the project include directory setting instead.
Problem: Path problems with project include directory settingIf you specify the include path for your project in the IDDE's "Project include directory" setting, non-Digital Mars tools cannot access the information, and it will be specific to that project only. Use autoexec.bat, the Windows NT Registry, or sc.ini instead.
Problem: Path problems in NT RegistryIf you specify the include path for your project in the Windows NT Registry, you won't be able to change it via a batch file, and it won't be available for Windows 3.1 builds. Use autoexec.bat, the project include directory setting, or sc.ini instead.
Problem: 16-bit programs with virtual functions crashIn this case, programs that use classes that define virtual functions cause General Protection Faults. For example, an executable defines a class with virtual functions, and the virtual functions are called by DLLs. This happens because executables with virtual functions must have "smart callbacks" set; otherwise, the function will be invoked with the wrong value for DS. (In the IDDE, click the "Load DS from SS" button in the Windows Prolog/ Epilog subpage of the Build page of the Project Settings dialog box.) For more information, see Win16 Programming Guidelines.
Problem: You need generic .MAK filesTo create generic makefiles that can be called from different projects, get the name of the calling project from the predefined identifier MASTERPROJ.
Problem: You need to link with a DLL at link time onlyTo link an EXE or DLL with another DLL at link time only:
- For the called DLL, set Export By Ordinal, Don't ExportNames, and Generate Import Library in the Linker subpage of the Build page of the Project Settings dialog box.
- For the calling EXE or DLL, add the resulting .LIB file to the project.
Problem: You need to link with a DLL at run-time onlyTo link an EXE or DLL with another DLL at run-time only:
- For the called DLL, turn off Export By Ordinal, Don't Export Names, and Generate Import Library in the Linker subpage of the Build page of the Project Settings dialog box.
- For the calling executable or DLL, call MakeProcInstance() with the name of the function.
Tips on porting from Microsoft Visual C++This section provides tips on how to solve problems that might occur when porting code to Digital Mars C++ from Microsoft Visual C++. For related information, see Converting from Microsoft.
Problem: _ExportedStub missingThe Microsoft C++ libraries provides the entry point _ExportStub, which can be exported by user .DEF files. Remove _ExportStub from your .DEF files; it is specific to Microsoft's implementation of its internal routine _GetGROUP. This change is unlikely to introduce any problems.
Problem: WIN32 and _X86_ not defined32-bit versions of Microsoft C++ provide a .MAK file that defines -DWIN32= 1 and -D_ X86_= 1. Digital Mars C++ does not provide this file. Digital Mars C++ defines these macros in the header file SCDEFS.H, which is included whenever you include a Win32 API header file. If you are not using the Win32 API, you can explicitly include SCDEFS.H, or define WIN32 and _X86_ on the compiler command line or with the IDDE.
Problem: overloaded functions produce errorsMicrosoft C++ (and Borland C++) do not distinguish between overloading of int and unsigned short. Digital Mars C++ does, and generates an error for each ambiguous reference. For example:
void f(unsigned short); void f(int); void main () { short s; f(s); }In Digital Mars C++, s can be promoted to either int or unsigned short, whereas Microsoft C++ always calls f(int). Problems in user code can be due to one function being declared in two slightly different ways, so that it looks like two different functions to Digital Mars C++. For example:
void f(WORD); // Later on in different #include'd file void f(int); // Looks like different C++ function
Problem: _MT not automatically defined32-bit Microsoft C++ sets _MT=1 if the /MT (multi-threaded) option is specified. If /MD (multi-threaded DLL) is specified, 32-bit Microsoft C++ sets _MT=1 and _DLL=1. Digital Mars C++ has no equivalent switches because its 32-bit run-time library is always multi-threaded. To solve this problem, define _MT and _DLL on the dmc command line as necessary.
Tips on Porting from Borland C++This section provides tips on how to solve problems that might occur when porting code to Digital Mars C++ from Borland C++ Version 3. For related information, see Converting from Borland.
Problem: _DEFS.HBorland C++ provides a file BC\ INCLUDE\_ DEFS.H, which their STDIO.H library includes. This file contains a number of definitions used in Borland header files; these definitions might also appear in user code. Although Digital Mars C++ does not provide equivalent definitions, you can explicitly include Borland's _DEFS.H in your Digital Mars C++ compilation. This is unlikely to introduce any problems.
Problem: const * to non-const * conversionBorland C++ permits implicit conversions between const * types and non-const * types. Digital Mars C++ does not permit this. For example:
const char *p; char *q; q = p; // causes "cannot implicitly convert" // error in DMC++Add explicit casts as necessary to add or remove const-ness An easy way to eliminate const-ness problems is to add this code:
#define const
Problem: signed * to unsigned * conversionBorland C++ permits implicit conversions between signed * types and unsigned * types. Digital Mars C++ does not permit this. For example:
unsigned int *p; int *q; q = p; // causes "cannot implicitly convert" // error in DMC++Add explicit casts as necessary to add or remove signed-ness.
Problem: int to pointer conversion causes errorsBorland C++ allows ints to be converted to pointers with only a warning. This is an error in Digital Mars C++. To fix this error, explicitly cast or convert ints to pointers as appropriate.
Problem: overloaded functions produce errorsBorland C++ (and Microsoft C++) do not distinguish between overloading of int and unsigned short. Digital Mars C++ does, and it generates an error for each ambiguous reference. For example:
void f(unsigned short); void f(int); void main() { short s; f(s); }In Digital Mars C++, s can be promoted to either int or unsigned short, whereas Borland C++ always calls f(int).
Problems in user code can be due to one function being declared in two slightly different ways, so that it looks like two different functions to Digital Mars C++. For example:
void f(WORD); // Later on in a different #include file... void f(int); // Looks like a different C++ function
Problem: const-ness of overridden functionsBorland C++ allows a function of a derived class to have different const-ness from the same function in the base class, in violation of ARM 10.2. For example:
class A {virtual int f(void);}; class B: public A { const int f(void); // DMC++ generates "name previously declared // as something else" error };To solve this problem, rewrite your code to make the const-ness of functions in derived classes match that of their base class equivalents.
Problem: Definition of max and min macros for C++The Borland C++ windows.h header file does not define the macros max and min for C++. The Digital Mars C++ version of windows.h defines these macros.
Here are two recommended solutions:
- To keep min and max from being defined, add this code before including windows.h:
#define NOMINMAX.
- Add NOMINMAX to your project's Defines list box in the IDDE (choose Project -Settings, click the Build tab, and choose Compiler).
Problem: Inclusion of windows.h in resource filesThe Borland IDE implicitly includes windows.h in any .RC file it compiles. The Digital Mars IDDE does not. Therefore, you need to explicitly add the code #include <windows.h> to .RC files.
Problem: Redefinition of default argumentsBorland C++ allows redefinition of default arguments (in violation of ARM 8.2.6). For example:
int f(int i = 0); int f(int i = 0); // DMC++ gives errorRemove the second default definition. For example:
int f (int i = 0); int f (int i);
Problem: Automatic enum conversionsBorland C++ provides an option to "treat enums as ints." Digital Mars C++ does not allow implicit conversions of ints to enums, or implicit conversion of enums to other enums in C++ compilations. For example:
enum color (black, red, green, blue); enum color current_color = NULL; // DMC++ errorThere are two recommended solutions:
- Change the code to use equivalent values from the enum in question. For example:
enum color (black, red, green, blue); enum color current_color = red;
- Cast int values to the enum. For example:
enum colorcurrent_color = (enum color) NULL;
Problem: Undefined escape cequencesBorland C++ ignores undefined escape sequences in strings. For example, it interprets \U as U. Digital Mars C++ generates an error in this case. To avoid the error, remove unnecessary escape characters (\) from strings.
Problem: Return types for constructors/destructorsBorland C++ allows a return type for constructors and destructors (in violation of ARM 12.1 and ARM 12.4). Digital Mars C++ does not allow this. For example:
void A::A() {...} // DMC++ generates "illegal constructor // declaration" errorRemove the return type from the declaration. For example:
A::A() {...}
Problem: Differentiating functions by addressing or calling conventionsBorland C++ differentiates between two functions with the same name in a class if they have different addressing or calling conventions. Digital Mars C++ does not. For example:
class x { void func(); void __far__ pascal func (); // DMC++ generates "function is already // defined" error };The problematic declaration is most likely a mistake; remove the additional declaration.
Problem: No implicit function declarations for derived classesBorland C++ allows functions to be defined for a derived class that were only declared for the base class, not the derived class. For example:
class A { public: virtual void f() = 0; }; class B: public A {}; void B::f() // DMC++ generates "function is // not a member of class" error {...}Put explicit declarations for all of a class's functions in the class declaration. For example:
class B: public A { public: void f(); };
Problem: findfirst() and findnext()The Borland versions of the functions findfirst() and findnext() take different parameters from their Digital Mars counterparts. Rewrite your code to use the Digital Mars versions of findfirst() and findnext(), or use _dos_findfirst() and _dos_findnext(). For information see the Digital Mars C++ Runtime Library Reference.
Problem: _control87() parametersBorland C defines various constants for use as parameters to the _control87() function (MCW_EM, EM_INVALID, EM_DENORMAL, and so on), whose names differ from their Digital Mars equivalents. You can form the equivalent Digital Mars C constants by adding an underscore (_) to the Borland version; for example: _MCW_EM or _EM_INVALID.
Problem: __DLL__ macro defined for DLL compilationsBorland C defines the macro __DLL__ for DLL compilations; Digital Mars C does not. Here are two recommended solutions:
- If you are using the dmc command line compiler, add -D __DLL__ to the command line for all DLL compilations.
- From the IDDE, add __DLL__ to your project's Defines list box in the IDDE (choose Project-Settings, click the Build tab, and choose Compiler).
Problem: undefined type dosdate_tThe type dosdate_t declared in Borland C's dos.h file is not defined in Digital Mars C. Change all instances of dosdate_t to _dosdate_t.
Problem: no values.h fileBorland C provides a file values.h, which defines various implementation defined limits (for instance, MAXINT, which represents the largest int). Digital Mars C does not provide an equivalent file. You can use the ANSI standard file limits.h instead. To do this, you will need to change the names of most of the defined identifiers (MAXINT is called INT_MAX, for instance).
Problem: Typos in long identifiersBorland C uses the first 32 characters in identifiers; Digital Mars C uses the first 254. This can expose typing errors at the ends of long identifiers. For example:
#define I_am_not_very_happy_monday_morning 1 i = I_am_not_very_happy_monday_mornings; // BC does not generate an error; dmc doesFix any typing errors as needed.
Problem: _argc and _argv not declaredBorland C declares the variables _argc and _argv in dos.h. Digital Mars C++ does not. Add the following declarations as needed:
extern int __cdecl _argc; extern char ** __cdecl _argv;Note that these names are not part of the published Digital Mars C/C++ run-time library interface, and are subject to change.
Problem: Invalid bitmap syntaxThe resource compiler run from the Borland IDE accepts an alternate syntax for BITMAPs, where the actual bitmap is specified in the code rather than in a file. For example:
Example BITMAP BEGIN '42 4D 5E 00 ...Digital Mars's rcc and Microsoft's rc do not accept this syntax. Use Borland's Resource Workshop to convert the text to a .BMP file and change the code to:
Example BITMAP
.BMP
Problem: Invalid pointer conversionsBorland C allows conversion of different pointer types with only a warning. Digital Mars C++ does not. For example:
void f() { int *p; char **q; q = p; // dmc generates error "cannot implicitly convert"Examine your code for correctness, and add explicit casts where appropriate.
Problem: Names declared in different header filesSome names that are defined in both Digital Mars C++ and Borland C++ are declared in different header files. An example is the run-time library function coreleft(), which is declared in stdlib.h in Digital Mars C++ and alloc.h in Borland C++. To fix this problem, change the header files that your code includes as appropriate.
Problem: Automatic definitionsBorland C allows an extern variable to be declared without ever being explicitly defined. In Digital Mars C++, this results in the error "symbol undefined:< symbol>" at link time. For example, where the following is a complete program:
extern int a; void main() {a = 1;}Add explicit definitions to your code as required.
Problem: Names referenced in .ASM filesBorland C++ mangles only function names. Digital Mars C++ mangles both function and variable names. This can cause link errors on names referenced in .ASM files. For example: extrn _a: word produces a link error in DMC++ because the correct name is ?a@@3HA.
There are two recommended solutions:
- Add extern "C" to declarations of names referenced in .ASM files before they are defined. For example:
extern "C" short a; short a;
- Change the assembly language code to refer to the Digital Mars C++ mangled names. For example:
extrn ?a@@3HA: word
Problem: Inconsistencies in declarations at link timeBorland C++ only mangles the names of variables slightly. For example, for function pointers it does not encode the parameters of the function; Digital Mars C++ does. This can expose inconsistencies in declarations at link time.
Make declarations consistent where needed.
Problem: Inconsistencies in _pascal declarations at link timeBorland C++ does not encode the _pascal calling convention modifier into mangled names (except by making the mangling all lowercase); Digital Mars C++ does. This can expose inconsistencies in declarations at link time for case-insensitive links.
Make declarations consistent where needed.
Problem: opendir, closedir, etc. not availableThe Borland library functions opendir, closedir, readdir, and rewinddir functions are not available in Digital Mars C++.
Rewrite your code to use _dos_findfirst, _dos_findnext, or findfirst/ findnext(). For information, see the Digital Mars C++ Run-time Library Reference.
Problem: Linker cannot export namesIf the Digital Mars linker cannot find names listed in the project .DEF file's EXPORT list, one of these issues is the cause:
- Digital Mars C++ and Borland C++ mangle C++ names differently (see the information on name mangling above). If this is the problem, you need to disassemble the object module(s) to see what the names should be in Digital Mars C++.
- Borland C++ does not mangle variable names (see "Problem: Inconsistencies in declarations at link time" above). If this is the problem, add extern "C" to the appropriate declarations as described above.
- The Borland linker does not complain about nonexistent names. If this is the problem, delete the nonexistent names.
Problem: Undefined Borland identifiers in .RC filesSome Borland predefined identifiers, like IDHELP, are undefined when you compile resources with Digital Mars's rcc.
Borland's resource compiler automatically includes bwcc.h, so you need to explicitly include bwcc.h in the .RC file.
Problem: BC program does not load after conversion to dmcIf this is your problem, you might find that one or more DLLs do not load, or "undefined dynalink" errors occur.
Make sure that all Digital Mars compiler options match their Borland equivalents. For example, exports can be uppercase, by ordinal, export all far, and so on.
Problem: BC program does not run after conversion to dmcIf a Borland program compiles but does not run, check that all Digital Mars compiler options match their Borland equivalents. For example, Borland's default struct alignment is on byte boundaries, Digital Mars's is on word boundaries.
Problem: _new_handler is not declaredBorland C++ declares pvf _new_handler in the file new.h. Digital Mars C++ does not.
Simply add the declaration _PNH _new_handler as needed.
Problem: _new_handler behavior is differentBorland's _new_handler function takes no arguments, and exits on failure. Digital Mars's _new_handler takes one argument (the number of bytes to make available) and returns zero on failure.
Simply recode calls to _new_handler as needed.
Problem: "class huge" syntax not supportedBorland C++ allows the keyword huge after class in a class declaration. Digital Mars C++ does not. For example:
class huge A;Rewrite any class declarations that use the huge keyword.
Problem: Non-Pascal names not found at DLL load timeWindows cannot find exported names in a DLL if they are lowercase. Borland's IMPLIB solves this problem by always exporting names in DLLs by ordinal. Digital Mars's linker does not automatically do this; therefore, some non _pascal names might not be found when a DLL is loaded.
Link with these Digital Mars linker options: /XUPPER (Export, uppercase) and /BYORDINAL (Export by ordinal).
Problem: Can't convert between unsigned char and charThe Borland C++ option -K makes unsigned char and char the same type.
Try one of these two possible solutions:
- Compile with the Digital Mars C++ compiler option -Ju, which is similar to Borland's -K option. Note, however, that -Ju is incompatible with Digital Mars's iostreams library (as described under "Problem: iostreams library incompatible with -Ju" above). Do not use -Ju to compile code that includes iostreams headers.
- Search for and replace all unsigned char and BYTE types with char, and compile with Digital Mars's -J compiler option. (This solution is more compatible with Microsoft C++.)
Tips on porting from previous Digital Mars releasesThis section provides tips on how to solve problems that might occur when porting code from previous releases of Digital Mars C++ or Zortech C++. For related information, see Switching to Digital Mars C++.
Problem: asm() function does not compileZortech C++ allowed instructions to be assembled from integers using the asm() pseudo-function. For example:
asm (0x8C, 0x96, 0xCC, 0xFE);Digital Mars C++ does not provide this function. Therefore, you need to replace asm() calls with calls to __emit__. For more information, see Using Assembly Language Functions.
Problem: ios class not declared in iomanip.hPrevious versions of the iostreams library declared the ios class in iomanip.h. The current version of iostreams does not.
You need to explicitly include iostream.h.
Problem: Zortech library names are differentZortech C++ library names specified in makefiles are different from the corresponding Digital Mars C++ names (ZWL corresponds to SWL, for example).
Change your makefiles to use the Digital Mars C++ library names. Note that, since the names of libraries are specified in object files, it may not be necessary to explicitly specify them in the makefile.
Problem: Compiler control program has a different nameThe Zortech C++ compiler control program was ZTC.EXE. The Digital Mars C++ equivalent is dmc.
Change makefiles, batch files, and so forth to run dmc instead of ZTC.
Problem: Missing #endifs generate errorsZortech C++ allowed #endif directives at the end of files to be omitted; it interpreted End of File as any number of #endifs. Digital Mars C++ does not.
Add #endif directives to the ends of files as appropriate.
Problem: __pascal names mangled in C++ filesZortech C++ did not perform C++ name mangling on __pascal names. In Digital Mars C++, the __pascal modifier changes the calling sequence, but does not affect name mangling.
Add the modifier extern "C" to declarations that should not be mangled. For more information, see Mixing Languages.
Problem: Jumps around variable initializations disallowedZortech C++ allowed jumps around variable initializations, in violation of ARM 6.4.2. Digital Mars C++ does not allow this. For example:
switch (i) { int v1 = 2; // error case 1: int v2 = 3; case 2: if (v2 == 7) // error ...There are two recommended solutions for this problem:
- Move the variable initialization out of the case statement.
- Add braces around the declaration to restrict the variable's scope to be within the area that the flow of control can jump around.
Problem: Differences in mangled namesZortech C++ code might not link with Digital Mars C++ because .ASM files reference mangled names. The Digital Mars C++ name mangling scheme differs from Zortech C++ name mangling.
Use the obj2asm utility to obtain the new mangled names, and manually edit the assembly language files to use these names. For information on Digital Mars C++ name mangling, see Mixing Languages.
Problem: WEP in user code causes a link errorDigital Mars C++ Version 6 allowed users to specify a WEP function in a DLL. Digital Mars C++ Version 7 provides a WEP function in the library, which handles static destructors and other cleanup functions on exit from the DLL. Thus, WEP functions should not be used if these function are required.
If a DLL requires special termination procedures, put them in a function called _WEP. Digital Mars's WEP automatically calls _WEP. An _WEP function is not required, however.
Problem: Access to protected members not allowedZortech C++ allowed access to protected member functions, in violation of ARM page 253. (Base class members can only be accessed through a derived class.) For example, the following is an error in Digital Mars C++:
class A {protected: int x }; class B: public A { void f() { x = 10; } void f1() { A *p = new A; p-> x = 10; // error } friend void f2(A*); friend void f3(B*); }; void f2(A* a) { a->x = 10; } // error void f3(B* b) { b->x = 10; }To solve this problem, recode invalid constructs as necessary.
Problem: Mismatches in C++ const functions not allowedZortech C++ allowed mismatches in function declarations involving const-ness. For example, the following constructs, which were valid in Zortech C++, are invalid in Digital Mars C++:
class A { public: int operator== (A&); }; class B { A a; int operator==(B& b) const { return (a == b.a); // error in DMC++ 7 } };In the above example, since B::operator== is const, B::a is const, but there is no matching function in A. To fix the error, you need to add the required matching function. To fix the example code above:
class A { public: int operator==(A&); int operator==(A&) const; };
Tips on porting Win16 Code to Win32This section provides tips on how to solve problems that might occur when porting 16-bit Windows 3.1 code to a 32-bit Windows environment. See Win32 Programming Guidelines, for general information on porting 16-bit code to Win32.
Problem: Files not found when porting to Windows NTSymptoms of this problem include .MAK or .BAT files generating errors, or programs not finding files.
Windows NT does not run the autoexec.bat file, so your PATH environment variable (and other environment variables) may not be set. You need to manually add these values to the Windows NT Registry.
Problem: 32-bit SmartHeap does not workIn this case, SmartHeap symbols are not found when linking with 32-bit SmartHeap libraries (specifically SHDW3SMD.LIB).
The version of SHDW3SMD.LIB shipped with SmartHeap is not usable. To create a usable one, perform this step to create a new library directly from the SmartHeap DLL:
implib /system shdw3smd.lib sh22w3sd.dll
Problem: Windows 3.1 functions undefined at link timeSome Windows 3.1 routines, such as GetCurrentTask(), are obsolete for Win32.
You need to reengineer these routines. For information, see Microsoft Win32 API Programmer's Reference (available as the online Help file ..\dm\help\vc20bks4.hlp in Digital Mars C++).
Problem: ToolHelp functions undefined at link timeThe Windows 3.1 ToolHelp library, which includes functions like ModuleFindHandle() and StackTraceNext(), is not available in Win32.
You need to reengineer these routines. For information, see Microsoft Win32 API Programmer's Reference (available as the online Help file ..\dm\help\vc20bks4.hlp in Digital Mars C++).
Problem: Exported names not foundIf this is a problem, the linker indicates that it cannot export any of the names in the .DEF file's export list, although the export list worked for Windows 3.1.
Win32 defines PASCAL, WINAPI, and related functions as __stdcall; in Windows 3.1 they were defined as __pascal. This means that all PASCAL/ WINAPI functions now begin with an underscore (_) and are case-sensitive. Add underscores and change case in the export list as needed.
Problem: Names in .ASM files not foundIf this is a problem, the linker indicates that it cannot fnd PASCAL names that are declared in .ASM files, although it could for Windows 3.1.
Win32 defines PASCAL as __stdcall; in Windows 3.1 it was defined as __pascal. __stdcall functions begin with an underscore (_) and are case-sensitive. Add underscores and change case in the assembly language definitions of the affected functions.
Problem: DOS functions undefined in NT compilationsAn example would be int86x and its parameter, union REGS, being undefined in a Windows NT compilation.
Some DOS routines may need to be re-engineered using their Windows NT equivalents. For information, see Microsoft Win32 API Programmer's Reference (available as the online Help file ..\dm\help\vc20bks4.hlp in Digital Mars C++).
Problem: "FIXUPP" errors in 32-bit linksIn this case, the linker generates an error for every reference to an external name in the build.
You may be using outdated linker options. The Digital Mars C++ IDDE uses: /DO/DE/NT/Entry:__DllMainCRTStartup.
Problem: Assembler files don't link in Win32In this case, assembly language code with .286 or 16-bit segments may not link.
Recode affected files to remove 16-bit specific features. Note also that the use of segment registers is different for Win32 compilations. | http://www.digitalmars.com/ctg/ctgPorting.html | CC-MAIN-2015-14 | refinedweb | 5,170 | 56.25 |
I was using only html emails which worked from my Rails 3.1.4 app, but I decided to add text only emails to make them multipart and now the email arrives blank. I also started using Sendgrid's Heroku addon - not sure if that is part of it.
When I look in my heroku logs, I can see that both views, the .erb and .html.erb rendered successfully and I don't see any errors. The email arrives, but the body is blank in Yahoo! and in Hotmail it only says (didn't do any more testing):
This is a multi-part message in MIME format...
----
here's my mailer:
class Notifier < ActionMailer::Base
helper :application
default_url_options[:host] = "foo.com"
def verification_instructions(user)
subject "Email Verification"
from 'Bar <[email protected]>'
@user = user
recipients "#{user.first_name} <#{user.email}>"
sent_on Time.now
@url = "{user.perishable_token}"
end
The text version (.erb)
Hi <%= @user.username %>, thanks for signing up
Please click the following link to verify your email address:
<%= @url %>
If the above URL does not work, try copying and pasting it into your browser. If you continue to have problems, please feel free to contact us.
I posted the html here.
Also, when I send the email from the console in development, I can look in the logs and see that renders the email. I put the output here. Tried a different email to make sure that it wasn't an issue specific to one email2
Thanks in advance, for any help you can provide.
I just started using Sendgrid too....
I'm using the sendgrid gem
My Mailers have
"include SendGrid" at the top
and an exclpicit call to mail(subject: "Yadda", to: "[email protected]") at the end of the method. | http://m.dlxedu.com/m/askdetail/3/9adb140be91d9a2613fa2c39a5ec5cd9.html | CC-MAIN-2019-04 | refinedweb | 290 | 75.3 |
6. Source Files and Namespaces
A Visual Basic .NET program consists of one or more source files. When a program is compiled, all of the source files are processed together; thus, source files can depend on each other, possibly in a circular fashion, without any forward-declaration requirement. The textual order of declarations in the program text is generally of no significance.
A source file consists of an optional set of option statements, import statements, and attributes, which are followed by a namespace body. The attributes, which must each have either the Assembly or Module modifier, apply to the .NET assembly or module produced by the compilation. The body of the source file functions as an implicit namespace declaration for the global namespace, meaning that all declarations at the top level of a source file are placed in the global namespace. For example:
The two source files contribute to the global namespace, in this case declaring two classes with the fully qualified names
A and
B. Because the two source files contribute to the same declaration space, it would have been an error if each contained a declaration of a member with the same name.
The compilation environment may override the namespace declarations into which a source file is implicitly placed.
See Also
6.2 Compilation Options | 6.3 Imports Statement | 6.4 Namespaces | Namespace Statement (Visual Basic Language Reference) | Namespaces (Visual Basic Language Concepts) | http://msdn.microsoft.com/en-us/library/aa711883(v=vs.71).aspx | CC-MAIN-2014-15 | refinedweb | 235 | 55.84 |
Fundamental concepts of plugin infrastructuresAugust 7th, 2012 at 5:31 pm
I have always been fascinated by the idea of plugins – user-developed modules that are not part of the core application, but that nevertheless allow extending the application’s capabilities. Many applications above a certain size allow some level of customization by users. There are many different approaches and many names for it (extensions, scripting interface, modules, components); I’ll simply say "plugins" from now on.
The fun thing about plugins is that they cross application and language domains. You can find plugin infrastructures for everything ranging from IDEs, to web servers to games. Plugins can be developed in language X extending an application mainly based on language Y, for a wide variety of X and Y.
My plan is to explore the design space of plugin infrastructures, looking at various implementation strategies and existing solutions in well-known applications. But for that, I need to first describe some basic terms and concepts – a common language that will let us reason about plugins.
Example – plugins for a Python application
I’ll start with an example, by presenting a simple application and a plugin infrastructure for it. Both the application and plugins will be coded in Python 3.
Let’s start by introducing the task. The example is a small but functional part of some kind of a publishing system, let’s say a blogging engine. It’s the part that turns marked-up text into HTML. To borrow from reST, the supported markup is:
before markup :role:
textafter markup
Here "role" defines the mark-up type, and "text" is the text to which the mark-up is applied. Sample roles (again, from reST interpreted roles) are code, math or superscript [1].
Now, where do plugins come in here? The idea is to let the core application do the text parsing, leaving the specific role implementation to plugins. In other words, I’d like to enable plugin writers to easily add roles to the application. This is what the idea of plugins is all about: instead of hard-coding the application’s functionality, let users extend it. Power users love customizing applications for their specific needs, and may improve your application beyond your original intentions. From your point of view, it’s like getting work done for free – a win-win situation.
Anyway, there are a myriad ways to implement plugins in Python [2]. I like the following approach:
class IPluginRegistry(type): plugins = [] def __init__(cls, name, bases, attrs): if name != 'IPlugin': IPluginRegistry.plugins.append(cls) class IPlugin(object, metaclass=IPluginRegistry): def __init__(self, post=None, db=None): """ Initialize the plugin. Optinally provide the db.Post that is being processed and the db.DB it belongs to. """ self.post = post self.db = db """ Plugin classes inherit from IPlugin. The methods below can be implemented to provide services. """ def get_role_hook(self, role_name): """ Return a function accepting role contents. The function will be called with a single argument - the role contents, and should return what the role gets replaced with. None if the plugin doesn't provide a hook for this role. """ return None
A plugin is a class that inherits from IPlugin. Some metaclass trickery makes sure that by the very act of inheriting from it, the plugin registers itself in the system.
The get_role_hook method is an example of a hook. A hook is something an application exposes, and plugins can attach to. By attaching to a hook (in our case – implementing the get_role_hook method), the plugin can let the application know it wants to participate in the relevant task. Here, a plugin implementing the hook will get called by the application to find out which roles it supports.
Here is a sample plugin:
class TtFormatter(IPlugin): """ Acts on the 'tt' role, placing the contents inside <tt> tags. """ def get_role_hook(self, role_name): return self._tt_hook if role_name == 'tt' else None def _tt_hook(self, contents): return '<tt>' + contents + '</tt>'
It implements the following transformation:
text :tt:
in tt taghere
to:
text <tt>in tt tag</tt> here
As you can see, I chose to let the hook return a function. This is useful since it can give the application immediate indication of whether the plugin supports some role at all (if it returns None, it doesn’t). The application can also cache the function returned by plugins for more efficient invocation later. There are, of course, many variations on this theme. For example, the plugin could return a list of all the roles it supports.
Now it would be interesting to see how plugins are discovered, i.e. how does the application know which plugins are present in the system? Again, Python’s dynamism lets us easily implement a very flexible discovery scheme:
def discover_plugins(dirs): """ Discover the plugin classes contained in Python files, given a list of directory names to scan. Return a list of plugin classes. """ for dir in dirs: for filename in os.listdir(dir): modname, ext = os.path.splitext(filename) if ext == '.py': file, path, descr = imp.find_module(modname, [dir]) if file: # Loading the module registers the plugin in # IPluginRegistry mod = imp.load_module(modname, file, path, descr) return IPluginRegistry.plugins
This function can be used by the applications to find and load plugins. It gets a list of directories in which to look for Python modules. Each module is loaded, which executes the class definitions within it. Those classes that inherit from IPlugin get registered with IPluginRegistry, which can then be queried.
You will notice that the constructor of IPlugin takes two optional arguments – post and db. For plugins that have more than just the most basic capabilities, the application should also expose an API to itself which would let the plugins query and manipulate it. The post and db arguments do that – each plugin will get a Post object that represents the blog post it operates upon, as well as a DB object that represents the main blog database.
To see how these can be used by a plugin, let’s add another hook to IPlugin:
def get_contents_hook(self): """ Return a function accepting full document contents. The functin will be called with a single argument - the document contents (after paragraph splitting and role processing), and should return the transformed contents. None if the plugin doesn't provide a hook for this role. """ return None
This hook allows plugins to register functions that transform the whole contents of a post, not just text marked-up with roles [3]. Here’s a sample plugin that uses it:
class Narcissist(IPlugin): def __init__(self, post, db): super().__init__(post, db) self.repl = '<b>I ({0})</b>'.format(self.post.author) def get_contents_hook(self): return self._contents_hook def _contents_hook(self, contents): return re.sub(r'\bI\b', self.repl, contents)
As its name suggests, this is a plugin for users with narcissistic tendencies. It finds all the occurrences of "I" in the text, adds the author name in parens and puts it in bold. The idea here is to show how the post object passed to the plugin can be used to access information from the application. Exposing such details to plugins makes the infrastructure extremely flexible.
Finally, let’s see how the application actually uses the plugins. Here’s a simple htmlize function that gets a post and db objects, as well as a list of plugins. It does its own transformation of the post contents by enclosing all paragraphs in <p>...</p> tags and then hands the job over to the plugins, first running the role-specific hooks and then the whole contents hooks [4]:
RoleMatch = namedtuple('RoleMatch', 'name contents') def htmlize(post, db, plugins=[]): """ pass """ contents = post.contents # Plugins are classes - we need to instantiate them to get objects. plugins = [P(post, db) for P in plugins] # Split the contents to paragraphs paragraphs = re.split(r'\n\n+', contents) for i, p in enumerate(paragraphs): paragraphs[i] = '<p>' + p.replace('\n', ' ') + '</p>' contents = '\n\n'.join(paragraphs) # Find roles in the contents. Create a list of parts, where each # part is either text that has no roles in it, or a RoleMatch # object. pos = 0 parts = [] while True: match = ROLE_REGEX.search(contents, pos) if match is None: parts.append(contents[pos:]) break parts.append(contents[pos:match.start()]) parts.append(RoleMatch(match.group(1), match.group(2))) pos = match.end() # Ask plugins to act on roles for i, part in enumerate(parts): if isinstance(part, RoleMatch): parts[i] = _plugin_replace_role( part.name, part.contents, plugins) # Build full contents back again, and ask plugins to act on # contents. contents = ''.join(parts) for p in plugins: contents_hook = p.get_contents_hook() if contents_hook: contents = contents_hook(contents) return contents def _plugin_replace_role(name, contents, plugins): """ The first plugin that handles this role is used. """ for p in plugins: role_hook = p.get_role_hook(name) if role_hook: return role_hook(contents) # If no plugin handling this role is found, return its original form return ':{0}:
{1}'.format(name, contents)
If you’re interested in the code, this sample application (with a simple driver that discovers plugins by calling discover_plugins and calls htmlize) can be download from here.
Fundamental plugin concepts
Having read about plugins and studied the code of many applications, it became clear to me that to describe a certain plugin infrastructure you really need to look just at 4 fundamental aspects/concepts [5]:
- Discovery
- Application hooks to which plugins attach (aka. "mount points")
- Exposing application capabilities back to plugins (aka. extension API)
There are some areas of overlap between these (e.g. sometimes it’s hard to distinguish discovery from registration), but I believe that together they cover over 95% of what one needs to understand when studying a specific application’s plugin infrastructure.
Discovery
This is the mechanism by which a running application can find out which plugins it has at its disposal. To "discover" a plugin, one has to look in certain places, and also know what to look for. In our example, the discover_plugins function implements this – plugins are Python classes that inherit from a known base class, contained in modules located in known places.
This is the mechanism by which a plugin tells an application – "I’m here, ready to do work". Admittedly, registration usually has a large overlap with discovery, but I still want to keep the two concepts separate since it makes things more explicit (not in all languages registration is as automatic as our example demonstrates).
Hooks
Hooks are also called "mount points" or "extension points". These are the places where the plugin can "attach" itself to the application, signaling that it wants to know about certain events and participate in the flow. The exact nature of hooks is very much dependent on the application. In our example, hooks allow plugins to intervene in the text-to-HTML transformation process performed by the application. The example also demonstrates both coarse grained hooks (processing the whole contents) and fine grained hooks (processing only certain marked-up chunks).
Exposing an application API to plugins
To make plugins truly powerful and versatile, the application needs to give them access to itself, by means of exposing an API the plugins can use. In our example the API is relatively simple – the application simply passes some of its own internal objects to the plugins. APIs tend to get much more complex when multiple languages are involved. I hope to show some interesting examples in future articles.
Examining some well-known applications
Now that we have the concepts well-defined, I want to finish this article by examining the plugin infrastructures of a couple of very common applications. Both are written in high-level languages, which makes the infrastructure relatively simple. I will present more complex infrastructures in future articles, once I cover the technical details of implementing plugins in C or C++.
Mercurial
Mercurial (Hg) is a popular VCS (Version Control System), written in Python. Mercurial is well known for its extensibility – a lot of its functionality is provided by Python extensions. Some extensions became popular enough to be distributed with the core application, and some need to be downloaded separately.
Discovery: extensions that the user wants loaded have to be explicitly listed in the [extensions] section of the Mercurial configuration file (.hgrc).
Registration: extensions are Python modules that export certain functions (e.g. uisetup) and values (e.g. cmdtable) Mercurial looks for. The existence of any one such function or value amounts to registering the extension with Mercurial.
Hooks: top-level functions like uisetup and extsetup serve as coarse-grained hooks. Finer-grained hooks can be explicitly registered by calling, for example, ui.setconfig('hooks', ...) on a ui object passed into uisetup and command callbacks.
Application API: Mercurial application objects like ui and repo passed to hooks provide a means to query the application and act on its behalf.
WordPress
WordPress is the most popular blogging engine on the internet, and possibly the most popular content management system overall. It’s written in PHP, and its extensive plugin system (plugins are also written in PHP) are arguably its most important feature.
Discovery: plugins must be .php files (or directories with such files) placed in the special directory wp-content/plugins. They must contain a special comment with metadata at the top, which WordPress uses to recognize them as valid plugins.
Registration & Hooks: plugins register themselves by adding hooks via special API calls. The hooks are of two kinds – filters and actions. Filters are very similar to the plugins shown in our example (transform text to its final form). Actions are more generic and allows plugins to piggy-back on many different operations WordPress is performing.
Application API: WordPress exposes its internals to plugins rather bluntly. The core application objects (such as $wpdb) are simply available as globals for the plugins to use.
Conclusion
This article’s main goal was to define a common language to reason about plugins. The four concepts should provide one with a tool to examine and study the plugin infrastructure of a given application: 1) how are plugins discovered, 2) how do they register themselves with the application, 3) which hooks can plugins utilize to extend the application and 4) what API does the application expose to plugins.
The examples presented here were mainly about Python applications with Python plugins (with the WordPress example being PHP, which is on about the same level of expressivity as Python). Plugins for static languages, and especially cross-language plugins provide more implementation challenges. In future articles I aim to examine some implementation strategies for plugins in C, C++ and mixed static-dynamic languages, as well as study the plugin infrastructures of some well-known applications.
Related posts:
August 9th, 2012 at 13:53
Really, really useful. Please continue on!
August 10th, 2012 at 08:52
Great post, thanks! Just one question though: in the discover_plugins implementation you use import mechanisms to load the plugins. This has side-effects (cruft going into sys.modules, for one; potential re-load issues for another.) Is there a reason why you don’t use exec()?
August 10th, 2012 at 09:48
Richard Jones,
I prefer to treat the plugin module as I’d treat any other Python module. Therefore using the standard
importplumbing is desirable. It can also be implemented with
execif one wants to avoid the
importpath.
August 10th, 2012 at 14:35
Thanks for this post Eli,
There is something I don’t catch, most probably because I miss something on regex. What is ROLE_REGEX?
I cant’ find a reference on the web, so I guess it is a variable of yours.
In the mean time, I saw an interesting application of metaclass.
Thanks
August 10th, 2012 at 15:21
MarcO,
Actually it’s just a simple constant regex used by
htmlize– it’s there in the full source code. I removed it from the code snippet above because it was messing up my blog’s formatting
August 10th, 2012 at 18:59
Great post! Makes me want to go find a place to use them. Plugins are indeed very interesting. Your post brought to mind that there are similarities between integrating a scripting language into an application and writing a plugin, and in fact it may be a win to integrate the scripting language as a plugin. I’m thinking back to an integration of S-Lang (an early interpreted C-like language) into an embedded system. Discovery was not needed, but registration, hooks, and API definitely were.
August 13th, 2012 at 09:19
Interestingly enough, I used a very similar approach in an application of mine where I wanted to enable ‘monitoring hooks’ who would trigger a page alert based upon arbitrary criteria: the hooks would receive a JSON status on a regular interval and would also have access to a time-series of past status snapshots.
The reason I’m grateful for the post is that, being relatively new to Python (9 months or so) I was really glad that the approach I’d taken was really very close to yours (down to the dir scan for plugin discovery): good to know I was on the right path!
Thanks,
Marco.
August 14th, 2012 at 04:42
Very nice article! I’m also interested in plugin architecture, especially how it relates to aspect oriented programming / advice systems. For example, how do you not only allow plugins to add new behavior, but modify any existing behavior, without the library that’s being extended having to provide explicit hooks or signals.
It would be cool to see comparisons of the existing python plugin frameworks, e.g. the ones on the list here:
Plus the systems used by zope, plone, trac, flask blueprints, django apps, etc.
August 15th, 2012 at 12:43
It’s actually a great and useful piece of information. I’m satisfied that you simply shared this helpful information with us. Please keep us up to date like this. Thanks for sharing.
August 24th, 2012 at 08:26
The biggest example of a plugin eco system is the eclipse IDE – where everything is a plugin!
October 28th, 2012 at 12:10
Firstly, this is really great – I’ve been able to implement a simple plugin system with minimal effort in my app. Many thanks for sharing this.
However, I’ve found that I’ve got several plugins that are fairly similar and differ only slightly. I decided to create a base class that they can inherit from. This works but I end up importing the base class as well. I can add the base class’s name to the check performed in the
IPluginRegistry __init__method but this won’t really scale.
Could you suggest a method for extending the inheritance tree?
Some example code: | http://eli.thegreenplace.net/2012/08/07/fundamental-concepts-of-plugin-infrastructures/ | CC-MAIN-2014-15 | refinedweb | 3,122 | 55.54 |
gelfclient 0.0.3
A UDP client for sending message in the Graylog Extended Log Format (GELF)
gelfclient
Python client for sending UDP messages in Graylog Extended Log Format (GELF).
Messages are zlib compressed, and support the GELF chunked encoding.
Since messages are sent with UDP, the log method should return quickly and not raise an exception due to timeout. However an exception may be raised due to a DNS name resolution problem.
Usage
from gelfclient import UdpClient gelf_server = 'localhost' # Using mandatory arguments gelf = UdpClient(gelf_server) # Using all arguments gelf = UdpClient(gelf_server, port=12202, mtu=8000, source='macbook.local') # Bare minimum is to send a string, which will map to gelf['short_message'] gelf.log('server is DOWN') # 'source' and 'host' are the same. Defaults to socket.gethostname() gelf.log('server is DOWN', source='hostchecker') # Send different data fields gelf.log('status change', state='DOWN', server='macbook', source='hostchecker') # You can also prepare all data into a dictionary and give that to .log data = {} data['short_message'] = 'hello from python' data['host'] = 'hostchecker' gelf.log(data)
See the GELF specification for other fields and their meaning:
- Downloads (All Versions):
- 11 downloads in the last day
- 67 downloads in the last week
- 274 downloads in the last month
- Author: Chris McClymont
- Keywords: gelf,graylog,graylog2,logging
- License: Apache v2
- Categories
- Package Index Owner: orionvm
- DOAP record: gelfclient-0.0.3.xml | https://pypi.python.org/pypi/gelfclient | CC-MAIN-2015-48 | refinedweb | 228 | 54.83 |
Docs |
Forums |
Lists |
Bugs |
Planet |
Store |
GMN |
Get Gentoo!
Not eligible to see or edit group visibility for this bug.
View Bug Activity
|
Format For Printing
|
XML
|
Clone This Bug
This is the request for ebuild for Zope-2.7.0 as it is now the latest STABLE
release.
Reproducible: Always
Steps to Reproduce:
*** Bug 32034 has been marked as a duplicate of this bug. ***
*** Bug 43802 has been marked as a duplicate of this bug. ***
Any news on this issue?
Created an attachment (id=28323) [edit]
zope-2.7.0 ebuild and related files
This is my first crack at an ebuild for 2.7.0. Due to changes to the way that
Zope 2.7 works (eg. no longer uses all those environment variables) and never
having used zope before :) I commented a lot of the old ebuild code for now.
It installs into /usr/lib/zope-2.7.0 as noted in Bug #31511.
I've included a patch to zopectl from the bug here:
In the install_help() procedure, line 197, the proposed ebuild has
einfo "$ /usr/lib/${PV}/bin/mkzopeinstance.py"
I think this should be
einfo "$ /usr/lib/zope-${PV}/bin/mkzopeinstance.py"
Note that I've just prefaced the zope version with "zope-".
-Paul Komarek
The zope instance's var and log directories were created with user.group =
root.root. This caused zope to fail after some time passed, or after someone
tried to connect (probably from trying to write a log or something). One
symptom was a connection refused on port 8080, even though you could see zope
listening there using netstat.
Changing the user.group to zope.zope for these two directories worked. For
concreteness: since my zope instance is in /var/zope, those directories are
/var/zope/log and /var/zope/var.
I followed the directions for running mkzopeinstance (as mentioned in my
previos comment, the path reported by the ebuild,
/usr/lib/2.7.0/bin/mkzopeinstance.py, was wrong -- that should be
/usr/lib/zope-2.7.0/...). Should have run mkzopeinstance as user zope? If so,
maybe the directions could include that. If mkzopeinstance is supposed to take
care of perms.
Note that I'm not sure how mkzopeinstance could handle user/group stuff, since
the instance hasn't created yet. Therefore $INSTANCE/etc/zope.cfg doesn't
exist, and the effective-user isn't set yet. I think the ebuild should suggest
something more like:
* To create a new Zope Instance use the following command:
* $ su zope -c /usr/lib/2.7.0/bin/mkzopeinstance.py
However, I have not tried this myself (I just did chown zope.zope ...)
-Paul Komarek
I haven't figured out why yet, but the pcgi stuff from the Zope 2.7.0 tarball
is nowhere to be found after this ebuild runs. That is, I had thought that
Zope-2.7.0/pcgi would be copied to /usr/lib/zope-2.7.0/pcgi, but it was not.
Nor does epm -ql zope show the pcgi directory being installed.
As far as not installing stuff goes, there are many Zope directories that
aren't installed. ZServer, utilities, etc. Is there a reason for this?
-Paul Komarek
I caught the ${PV} typo also. It was actually supposed to be ${PF} which
resolves to zope-2.7.0 which is the same as doing zope-${PV}.
As for the ownership of the instance folders, I already patched
mkzopeinstance.py to use ZUID and ZGID (zope:zope) when creating the instance
folders. In install_help I say to run the command as a non-root user but root
may in fact be required for that command.
Can you try creating a new instance as root (use /tmp/zope for instance path)
and see if it works? Due to how the script works you will still have ownership
of /tmp/zope but all files and folders inside it should be owned by zope:zope.
The pcgi directory in the source tarball appears to be empty. Maybe it's not
part of the core distribution anymore?
ZServer is installed here:
/usr/lib/zope/lib/python/ZServer
And utilities becomes /usr/lib/zope/bin
Note that /usr/lib/zope is just a symlink to /usr/lib/zope-2.7.0
I just learned that pcgi is deprecated for zope. They are now suggesting using
a proxy, for instance mod_proxy with apache[2]. I'm learning how to do this.
-Paul Komarek
I tried creating a new instance as root using mkzopeinstance.py. Whether or
not I set ZUID and ZGID, I everything under /tmp/zope/ was owned by root. I
don't see anything in mkzopeinstance.py that refers to ZUID or ZGID; maybe your
patch wasn't applied for some reason?
-Paul Komarek
I just tried installing this ebuild, and I did not have zope installed before.
The ebuild failed when it tried to create a new user, so I changed the
pkg_setup function like this:
pkg_setup(){
enewgroup ${ZGID}
enewuser ${ZUID} 261 /bin/bash ${ZOPE_DIR} ${ZGID}
}
it seems ZS_DIR is no longer defined
I just tried to emerge the ebuild listed above - it doesnt seem to install any
init.d/ startup script.... am I doing something wrong?
Also, the help-text at the end of the ebuild process which tells the user where
to find mkzopeinstance and zopectl indicates an incorrect directory.
It would be really great to get a working zope ebuild into Gentoo stable!
Created an attachment (id=30905) [edit]
patch for previous attachment's ebuild
Created an attachment (id=30906) [edit]
changes to zope-config to get zope-2.7.0+ to work.
Created an attachment (id=30907) [edit]
Simple change to /etc/zope-config.conf for zope-2.7.0+
Created an attachment (id=30909) [edit]
proposed replacement for net-zope/zope/files/2.7.0/zope.initd
Here's summary of my proposed changes:
zope-2.7.0.ebuild changes:
- fixed two bugs in unpack routine.
- added creation of .templates and population of zope.initd which becomes /etc/init.d/$ZOPE_INSTANCE
- added help for easy step after install assuming the zope-config changes work.
- included enewuser fix from previous bug comment
zope-config:
- added zserv_is_2.7_or_newer function to detect if selected zserver is 2.7.0+ or if it's an earlier version.
- added code to invoke mkzopeinstance.py if setting up 2.7.0 or later.
- added code to copy and customize /etc/init.d/$z_instance for 2.7.0 or later
- added ability to use zope-config --zpasswd for 2.7.0 or later
zope-config.conf
- added ZS_DIR2 which defines the new location for 2.7.0+ zope versions
net-zope/zope/files/2.7.0/zope.initd
- This one worked on my system! Somebody else give it a try :)
Created an attachment (id=30910) [edit]
proposed replacement for net-zope/zope/files/2.7.0/zope.initd
so when can we expect this in portage?
i'll take a look at the ebuilds over the next weekend. probably we can add them
to ~x86 afterwards.
any news on when it's gonna hit portage?
There is a small bug in the patched zope-config -- it should chown the newly
created /var/lib/zope directory to zope:zope, or the creation of subdirectories
will fail (for they are created under `su zope -c` command).
--- zope-config.orig 2004-06-02 19:38:46.790012184 +0800
+++ zope-config 2004-06-02 23:54:04.453373400 +0800
@@ -210,6 +210,7 @@
if [ ! -d ${ZI_DIR} ] ; then
mkdir -p ${ZI_DIR}
+ chown zope:zope ${ZI_DIR}
fi
while : ; do
We're 5 months late on this upgrade, what more needs to be done to get the
ebuild in portage? We seem to have sufficient headcount of interested parties
to make this happen...
I did some testing with the above ebuild (and patch) and found that I had to
add --ignore-largefile to the ./configure step in src_compile(). It seems that
Python on Gentoo is not compiled with large file support, or at least that's
what the configure script things. The test in Zope-2.7.0/inst/configure.py
looks like this:
def test_largefile():
OK=0
f = open(sys.argv[0], 'r')
try:
# 2**31 == 2147483648
f.seek(2147483649L)
f.close()
OK=1
except (IOError, OverflowError):
f.close()
if OK:
return
print (
"""
This Python interpreter does not have 'large file support' enabled. ...
"""
)
sys.exit(1)
I think this test is bogus. It is trying to itself by whatever name it was
invoked under, and I suspect this doesn't work very well. To test it
interactively, I changed sys.argv[0] to sys.executable (path to Python
interpreter) and it worked fine. It should be safe to add --ignore-largefile to
the configure options, although I really can't see where it is being enabled in
the ebuild.
Now I haven't tested the resulting build yet, because I haven't tackled the
zope-config issue which apparently exists.
Can someone (like maybe Carter, since he has done the most research on this)
put together a complete ebuild for 2.7.0, and maybe also a patchset and ebuild
for zope-config? 2.7.1_beta1 has been out for a couple weeks now and has a
bunch of fixes. This bug is also blocking (somewhat) bug #51825 (plone-2.0.3)
which recommends use of Zope-2.7.
Created an attachment (id=33521) [edit]
zope-2.7.1_beta2.ebuild
Here's an ebuild that I put together based on the 2.6.4-r1 ebuild. Summary of
changes:
* Attempt to future-proof version number detection
* Tarball naming convention has changed
* ~arch for all platforms
* Remove PYTHON_SLOT_VERSION testing due to python-2.3* dependency
* Remove extraneous slashes in ZSERVDIR, ZINSTDIR
* Minor indent fix in install_help()
* Use new configure with --ignore-largefile due to bogus test (Gentoo python
has it, test thinks otherwise)
* Remove all the .templates stuff since Zope now uses skel
* Drop zope.confd because Zope has a etc/zope.conf now
* Don't try to install a default instance; zope-config probably won't work
right without changes
To create a new instance without zope-config, use:
/usr/share/zope/zope-2.7.1_beta2/bin/mkzopeinstance.py \
-d /var/lib/zope/zope-2_7_1_beta2 -u admin:password
To start Zope without zope.initd:
/var/lib/zope/zope-2_7_1_beta2/bin/runzope
zprod-manager works fine, as near as I can tell.
TODO:
* Fix up zope-config
* Rework/simplify zope.initd (Zope doesn't look at the environment variables
anymore). The one attached on this bug looks reasonably good.
2.7.1 is out. Since 2.7.1b2 was the last prerelease, and 2.7.1 only has one
bugfix in it, the 2.7.1_beta2 ebuild above ought to work fine. I'd update the
summary and URL but it won't let me.
2.7.1 broke my ebuild because it inexplicably changed it's source directory to
Zope-2.7.1-0 from the expected Zope-2.7.1. If you alter:
S="${WORKDIR}/Zope-${SFPV}"
to
S="${WORKDIR}/Zope-${SFPV}-0"
then it seems to work fine. I am reworking zope-config and doing more testing.
Created an attachment (id=33950) [edit]
zope-2.7.1.ebuild
Changes from 2.7.1_beta2 ebuild:
* Fix ${S} due to unexpected source directory relocation
* Set ZGID=zope; see discussion below
* Remove some unused variables
* Make setup_security() sane
* Revise install_help()
* Make zope user with zope group
* Fix documentation
* Fix speling erorrs in unicode patch section (don't know if this patch is
applicable any more, please test)
* Remove rc-script editing (no longer needed)
* Do NOT install a default instance; discussion below
* Remove .default more simply
Security updates: Older ebuilds of zope (particularly 2.6.4-r1) go through the
extra step of creating a default instance with it's own group, and then
changing ownership of all the files to zope and the group to the default
instance's group, and removing all permissions from others. This is WRONG.
Instances run as the zope user; do you really want to zope user to own all the
files and have them writable? You sure wouldn't do this with apache, so why
Zope?
As a result, all the files in the ZSERVDIR (where zope is being installed) are
now owned by root and the zope group, which does not have write permission.
Others still have no read/write permission; I'm not really sure what the
rationale is for this, but it does not make it less secure.
Additionally, zope-config (patch to follow) installs it's instance files in a
similar way.
Created an attachment (id=33951) [edit]
files/2.7.1/zope.initd
This is the /etc/init.d/ script for the 2.7.1 ebuild. Put it in
net-zope/zope/files/2.7.1.
Created an attachment (id=33952) [edit]
patch for zope-config for zope-2.7 compatibility
Summary of changes:
* Use a sane permissions scheme, but only for 2.7 right now; should probably
also be applied to earlier versions
* Use mkzopeinstance.py to do the heavy lifting
* For Zope-2.7, make the /etc/conf.d file on the file (hardly any parameters
needed now) and edit ${INSTANCE}/etc/zope.conf
* Run zpasswd from the right place (2.7 has a bin directory)
* Detect zope version by presence of bin directory
* Don't try to add the instance group if it exists (suppress error message)
* Update post-install instructions
Security updates: See also the discussion on comment #29. Previously (and still
for zope<2.7) the instance files were installed so that they were owned by the
zope user and the instance's group, and set to be owner- and group-writable,
and non-read-write by others This is WRONG. The instance runs as the zope user,
and most of those files should not be writable to the running instance. Now
they are owned by root and the instance's group, but not group-writable. The
exceptions are the var and log directories, which are group-writable with the
set-group-id bit.
If you are cc:ed on this bug, please test the ebuild and patch your zope-config
and report the results here. I've tested it all pretty thoroughly and I don't
think there are any issues, with one exception: I have not tested the
structured text unicode patch. I don't even know if it is applied or is needed,
since it is leftover from 2.6.4.
I'll also point out that the zproduct eclass has the same sort of file
ownership issue as the old ebuild and zope-config: When it installs files
(zproduct_pkg_postinst()), it chowns them zope:root. This of course makes them
writable by running zope instances. They ought to be root:root or root:zope.
zprod-manager seems okay in this respect, since it preserves permissions and
ownership of installed zproducts.
I have tested Andy's latest ebuild along with the zope-config patch and they
seem
to work smoothly except for one minor issue: if you have a prior version of
Zope
installed (in my case zope-2.6.4-r1) the user zope already exists and is
assigned
gid 100. This prevents enewuser in zope-2.7.1.ebuild from assigning the correct
group (zope) to the zope user. As a result you cannot start your zope instance
"out of the box".
IMHO there is no clean solution to the problem. The 2.7.1 ebuild implements the
ideal behavior, but raises the above issue. A workaround might be to call
useradd
with the -G zope option (maybe through enewuser), leaving the zope user with
users
(gid 100) as the default group.
Try changing the pkg_setup() section of the ebuild to this:
pkg_setup() {
enewgroup ${ZGID} 261
usermod -g ${ZGID} ${ZUID} 2>&1 >/dev/null || \
enewuser ${ZUID} 261 /bin/bash ${ZS_DIR} ${ZGID}
}
That way if the zope user already exists, it will change it's primary group to zope, otherwise it will be added that way. I haven't tested this but it looks (more) correct.
zope.initd has a leftover echo ${PIDFILE}, which is debugging cruft and should
be removed.
Created an attachment (id=34055) [edit]
files/2.7.1/zope.initd
The old zope.initd wasn't too robust if zope had been killed: stop would fail,
even though zope was no longer running. Now it uses is_zope_dead, and if it is,
it doesn't try to stop it, and exits successfully.
Created an attachment (id=34059) [edit]
files/2.7.1/zope.initd
Sorry, I actually broke the common restart case where zope is actually running.
I think I've checked all the cases now and everything looks okay. I also found
a bug in is_zope_dead: A break statement caused a premature exit of the script.
This is in 2.6.4-r1 as well, I think.
Created an attachment (id=34078) [edit]
files/2.7.1/zope.initd
The code for determining whether or not zope was still alive was *still* buggy.
Since it was excessively complicated to start with, I threw out everything
related to is_zope_dead (including read_pid and check_pid_status) and wrote a
new zope_is_alive function. The init script is now about 25% the size of the
2.6.4 init script (reduced from 178 loc to 47).
Sorry for all the updates. Hopefully this will all get committed soon. Please
test this along with attachment #33950 [edit] and attachment #33952 [edit] and report your
results here.
OK, here's a relevant zproduct.eclass complaint: When you remove a package that
which inherits from zproduct, it also removes it from all of your zope
instances. It does this when you upgrade the package, too. When you install (or
upgrade), it only installs into the default instance. If you have more than one
instance, you're basically screwed, because only your default instance will
have it after an upgrade. And if it didn't have it before, it will now...
IMHO, the zproduct class:
1) Should not remove installed products from instances when unmerging the
package.
2) Should not try to automatically install products into instances.
3) Should SLOT all zproducts.
Re: slotting: /usr/share/zproduct has a directory for each installed zproduct
package. Inside that are the product directory(s) supplied by the package.
There would not be any conflict with having multiple versions of the same
zproduct installed, though only one version of each zproduct can be installed
in a given instance. This will probably require reworking zprod-manager a bit.
Perhaps the behavior of installing/removing zproducts from instances could be
controlled with an environment variable (set in /etc/make.conf), i.e.
ZPRODUCT_UPDATE, set to a list of all the instances to automatically update.
Also, I think having the default instance being determined by having a .default
file in it is a little weird. Why not have a symlink named default in the
/var/lib/zope directory to the default instance?
I am afraid that we're making the Gentoo zope install too complicated. The way
things are going, we are likely to make all of the existing documentation on
Zope irrelevant to Gentoo users. This would be unfortunate and create
confusion.
I think we should be careful and possibly reduce our ambitions. I expect that
very few users care about having multiple instances, and a setting up a single
instance is easy (and can follow the bulk of extant docs). Maybe we should
have two zope ebuilds, or otherwise have an option to make things "normal" or
"super-gentoo-ized"? Please, someone help me, I'm almost ready to suggest
adding a use-flag for this!
Unless Gentoo plans to take over Zope maintenance, we will never control the
"usual" Zope install. Unless we plan to support *every* Zope product in
existance, our strange super-gentoo-ized installs are likely to cause trouble.
The mitigating factor is that (probably) not many people install Zope. But
they should!
-Paul Komarek
I agree that most people will not want to run more than one instance. One of
the problems with zope-config is that it sort of gives the impression that you
should create a new instance for each new version you install, since the
default instance name is "zope-".
I also don't think most people will want to have more than one version of Zope
installed at a time. However, you can't just upgrade a new version over an old
version, or automatically unmerge the old version, because you'll break the
running instance. This is why I argued previously (in some other bug) that we
needed to SLOT on the Zope version number: Update, change your instance config
to use the new version, restart, then unmerge your old version when convenient
(like if the new version didn't break something important).
If anything, with the zope-config patch and new ebuild I have on this bug,
configuration is less Gentoo-specific than before, because we're using the
mkzopeinstance.py that comes with Zope-2.7, which is what doc/INSTALL says to
use. Of course, nobody has to use the Gentoo tools; so long as we install
everything in reasonable locations, they can use the standard instructions.
In bug #31511 (opened by Paul), there's a good case for moving the installed
version of Zope, too, since architecture-dependent files should not be in
/usr/share. Most likely they'll move to /usr/lib/zope/${PV}.
Anyway, I think the zproduct install mechanism is trying to be too smart at
this point, and should leave instances alone. SLOTting zproducts would make
things a bit more complicated, but may be necessary in the long run: Some
zproducts require an upgrade procedure which we can't anticipate. Plone is the
most obvious example.
Bug opened 2004-02-13, now its 2004-07-11
5 months past, all other source distributions already have it, I really think its about time this was added, even if it has to be hard masked.
Agreed. I appreciate all of the hard work you guys have been putting into
making this ebuild work the way you want it to, but the time window is starting
to get very out of hand here.
I had to upgrade my zope site now, so I just gave it try today.
I used zope-2.7.1.ebuild, the patch for zope-config (and patched /usr/sbin/zope-config) and the latest zope.initd.
Basically it worked quite OK, but I had some (minor) issues with the permissions. I had to add the zope user manually to the zope group in /etc/groups and I had to change all the subdirectory and files in /var/lib/zope/zope-2.7.1 (INSTANCEHOME) to root:zope (they were owned by root:root and naturally that didn't work at all *g*).
After installing some zope products with the zprod-manager I had to change the permissions in /var/lib/zope/zope-2.7.1/Products/, but thats an issue of zprod-manager I think.
BTW: I want to go along with Hackeron and Davin and I think, this should be in portage ASAP now.
Last but not least: Thank you for your work with these fiddly stuff here. :-)
Created an attachment (id=35340) [edit]
fixed zope-config
OK, this is my zope config patched by both patches visible here.
it still contains some unnecesary (?) crud but correctly sets all permissions.
it just works (for me :) so you can create working zope instance.
i confirm that using zope-2.7.1.ebuild and my zope-config youre able to
sucessfulyy emerge zope and install instance without any errors.
there is one suggestions: you can change /usr/portage/eclass/zproduct.eclass,
there should be chown root:root instead of zope:root (only one chmod in file so
its easy to find). its not applicable to this bug, so i dont attach patch. this
causes all global versions of zope product being accessible for all users: zope
instances (for zprod-manager) and ordinary system users. i think its correct to
have 755 permissions on /usr/share/zproduct/* due to fact that not only zope
can use these files. if someone want to achive better protecion that should be
done in zprod-manager, still i dont think its good idea. i suggest leaving
/var/lib/zope/INSTANCE/Products/* as a 755 too.
well :) after a few month and really good work of you, zope2.7.1 and
zope-config made their way to portage. i was just able to do some quit testing,
probably there are some issues left. so i added them to package.mask
i don't dare to close the bug yet :) maybe next week?
Indeed, 2.7.2-rc1 is out as of yesterday (for a security fix), so let's give it
a couple of days.
btw, current ebuild should be fixed to allow any python 2.3.x (and propably
later).
current version has it as specific 2.3.3, so 2.3.4 causes [UD] flags on emerge.
Created an attachment (id=35399) [edit]
zope-2.7.2_rc1.ebuild.diff
This patch is against the 2.7.1 ebuild that is in portage.
* Fix sub-version number, URI
* Fix python dependency (python-2.3*)
* Require zope-config-0.4 or newer
* Set the ZS_DIR to /usr/lib/ as expected by new zope-config and be
FHS-compatible
* Try to make user any existing zope user has the zope group as primary
BTW, zope-config looks ok, although there is a typo:
sed -i -e 's/uniqe/unique/' /usr/sbin/zope-config
One other possible minor issue: In previous testing, I found one product
(PloneCollectorNG) which, when you try to create the object within the ZMI,
dies because it cannot write to the instance's import directory. Currently this
is not one of the directories that is set to be writable, but I suspect that
PCNG is somewhat of an exception, and the error produced is quite explicit
about what needs to be changed, so I don't really consider it an issue,
especially since PCNG is not yet in portage that I'm aware of.
ok, here is my _proposal_, list of changes:
1. added pkg_config to ebuild to create default instance
2. info changes in ebuild (more einfo/ewarn)
3. changed SLOT to NOT include ${PR}
4. also changed /usr/lib/zope to not include ${PR}
5. added default instance management
6. added possibility to zope-config to create instance without user/pass ask
i attach zope-config and zope-2.7.2-rc1-r1.ebuild
Created an attachment (id=35770) [edit]
zope-config-0.4-r1
Created an attachment (id=35771) [edit]
zope-2.7.2_rc1-r1
2.7.2 is out.
Aside from spelling errors, attachment #35771 [edit] looks OK to me (I diffed against my zope-2.7.2_rc1.ebuild), but I haven't tried it out yet.
Created an attachment (id=35865) [edit]
newest zope 2.7.2 ebuild
spelling mistakes still present, but small fixes due to tgz name change on zope
has been applied to the ebuild.
please test it.
Being new to zope and familiar with portage only as a user, thanks for your
work!
I tried adding the attachment #35865 [edit] to /usr/portage/net-zope/zope as
zope-2.7.2.ebuild, replacing
/usr/portage/app-admin/zope-config/files/0.4/zope-config with attachment #35770 [edit]
and running
emerge --nodeps -uv /usr/portage/net-zope/zope/zope-2.7.2.ebuild
/usr/portage/app-admin/zope-config/zope-config-0.4.ebuild
(nodeps because zope-2.7.2 requires inexisting zope-config-0.4-r1)
zope-config runs fine expect that it complains "install: cannot stat
`/usr/lib/zope-2.7.2/skel/zope.initd': No such file or directory".
Consequently, in /etc/init.d/ there is no start script for zope-2_7_2.
Doing a "cp /usr/portage/net-zope/zope/files/2.7.1/zope.initd
/etc/init.d/zope-2_7_2" (for my zope-instance zope-2_7_2) and "chmod 755
/etc/init.d/zope-2_7_2" does the trick and zope seems to be running fine.
Using current zwiki-0.32.tgz inside /var/lib/zope/zope-2_7_2/Products works
fine (my inital intent to use Zope).
Warper, please put zope.initd into PORTAGE/net-zope/zope/files/2.7.2 directory
and emerging 2.7.2 ebuild (35865) will install into into skell directory,
making it ready for zope-config use.
i'm starting to integrate your fixes to portage. just a few things to mention:
please post patches/diffs, not entire ebuilds. e.g. 2.7.2 ebuild is not in sync
with 2.7.1....
integrated to portage
zope-2.7.2.ebuild : enewuser fails on completely new zope install
I neverd had installed zope on my gentoo box, thus emerge will install zope user and group. For the user, it tries to create a home directory, which fails.
ACCEPT_KEYWORDS="~x86" emerge zope 2>&1 | tee -a /tmp/emerge.log
gives me the following output excerpt (1st run / 2nd run)
* Caching service dependencies...
* Adding group 'zope' to your system ...
* - Groupid: 261
2nd try -----------
Toni, afaik its caused by removal of lines:
ZGID_INST="$(echo ${PN}-${PV} | sed -e 's/\./_/g' )"
ZS_DIR=${ROOT}/usr/lib/
ZI_DIR=${ROOT}/var/lib/zope/
from ebuild which went into portage.
it also can result in broken pkg_config. got to investigate it.
re-added :) now i know why you introduced these vars
I don't understand why it is that the new zope.initd was set up to require
zope-config doing a search/replace to set the path. It would have been much
simpler to use the $INSTANCE_HOME environment variable that comes automatically
from /etc/conf.d/<instancename>, particularly if that script ever needs to be
upgraded.
sed -i -e 's/INSTANCE_HOME/${INSTANCE_HOME}/g' zope.initd | http://bugs.gentoo.org/41508 | crawl-002 | refinedweb | 5,003 | 67.35 |
A histogram is commonly used to plot frequency distributions from a given dataset. Whenever we have numerical data, we use histograms to give an approximate distribution of that data. It shows how often a given value occurs in a given dataset. Matplotlib 2D Histogram is used to study the frequency variation of a given parameter with time.
We use 2D histograms when in a given dataset, the data is distributed discretely. As a result, we only want to determine where the frequency of the variable distribution is more among the dense distribution. There is a predefined function ‘matplotlib.pyplot.hist2d()’ present in python . It is present in the matplotlib library in python and is used to plot the matplotlib 2D histogram.
Matplotlib Library
Matlplotlib is a library in python which is used for data visualization and plotting graphs. It helps in making 2D plots from arrays. The plots help in understanding trends, discovering patterns, and find relationships between data. We can plot several different types of graphs. The common ones are line plots, bar plots, scatter plots and histograms.
What is a Histogram in ‘Matplotlib 2D Histogram’ ?
Histograms are frequency distribution graphs. From a continuous dataset, a histogram will tell about the underlying distribution of data. It highlights various characteristics of data such as outliers in a dataset, imbalance in data, etc. We split the data into intervals, and each interval signifies a time period. It is not the height but the area covered by the histogram, which denotes frequency. To calculate frequency, we need to multiply the width of the histogram by its height.
Parameters:
x: is a vector containing the ‘x’ co-ordinates of the graph.
y: is a vector containing the ‘y’ co-ordinates of the graph.
bins: is the number of bins/bars in the histogram.
range: is the leftmost and rightmost edge for each bin for each dimension. The values occurring outside this range will be considered as outliers.
density: is a boolean variable that is false by default, and if set to true, it returns the probability density function.
weights: is an optional parameter which is an array of values weighing each sample.
cmin is an optional scalar value that is None by default. Thus, the bins whose count is less than cmin value would not be displayed.
cmax is an optional scalar value that is None by default. The bins whose count is greater than cmax value would not be displayed.
Return values
h: A 2D array where the x values are plotted along the first dimension and y values are plotted along the second dimension.
xedges is a 1D array along the x-axis
yedges is a 1D array along the y axis
image is the plotted histogram
Example Matplotlib 2D Histogram:
Here, we shall consider a height distribution scenario, and we will construct a histogram for the same.
Let us first create a height distribution for 100 people. We shall do this by using Normal Data Distribution in NumPy. We want the average height to be 160 and the standard deviation as 10.
First, we shall import numpy library.
import numpy as np
Now, we shall generate random values using random() function.
heights = np.random.normal(160, 10, 100)
Now, we shall plot the histogram using hist() function.
plt.hist(heights)
Also Read | Numpy histogram() Function With Plotting and Examples
Understanding the hist2d() function used in matplotlib 2D histogram
The hist2d() function comes into use while plotting two-dimensional histograms. The syntax for the hist2d() function is:
def hist2d(x, y, bins=10, range=None, density=False, weights=None, cmin=None, cmax=None, *, data=None, **kwargs)
2D Histograms
Unlike a 1D histogram, a 2D histogram is formed by a counting combination of values in x and y class intervals. 2D Histogram simplifies visualizing the areas where the frequency of variables is dense. In the matplotlib library, the function hist2d() is used to plot 2D histograms. It is a graphical technique of using squares of different color ratios. Here, each square groups its number into ranges. Higher the color ratio in 2D histograms, the higher the data that falls into that bin.
Let us generate 50 values randomly.
x = np.random.standard_normal(50) y = x + 10
Now, we shall plot using hist2d() function.
plt.hist2d(x,y)
Now, we shall try to change the bin size.
x = np.random.standard_normal(1000000) y = 3.0 * x + 2.0 * np.random.standard_normal(1000000) plt.hist2d(x,y,bins=50)
The output would be:
Now, we shall change the color map of the graph. The function hist2d() has parameter cmap for changing the color map of the graph.
plt.hist2d(x,y,bins=50,cmap=plt.cm.jet)
Another way to plot the 2d histogram is using hexbin. Instead of squares, a regular hexagon shape would be the plot in the axes. We use plt.hexbin() for that.
plt.hexbin(x,y,bins=50,cmap=plt.cm.jet)
The output after using hexbin() function is:
hist2d() vs hexbin() vs gaussian_kde()
hist2d() is a function used for constructing two dimensional histogram. It does it by plotting rectangular bins.
hexbin() is also a function used for constructing a two-dimensional histogram. But instead of rectangular bins, hexbin() plots hexagonal bins.
In gaussian_kde(), kde stands for kernel density estimation. It is used to estimate the probability density function for a random variable.
FAQ’s on matplotlib 2D histogram
Q. What are seaborn 2d histograms?
A. Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing statistical graphics. For example, we can plot histograms using the seaborn library.
Q. What are bins in histogram?
A. A histogram displays numerical data by grouping it into ‘bins’ of different widths. Each bin is plotted as a bar. And the area of the bar determines the frequency and density of the hat group.
Q. What is the difference between histogram and bar graph?
A. A bar graph helps in comparing different categories of data. At the same time, histogram helps in displaying the frequency of occurrence of data.
Have any doubts? Feel free to tell us in the comment section below.
Happy Learning! | https://www.pythonpool.com/matplotlib-2d-histogram/ | CC-MAIN-2021-43 | refinedweb | 1,029 | 67.35 |
I’m still trying to figure out imports in 1.3. I’m having trouble accessing my Meteor.methods after removing insecure.
File structure:
App client/ main.js server/ main.js imports/ api/ methods/ methods.js ui/ myview.jsx
My Meteor.call() in myview.jsx does not seem to work and I get an access denied. I tried importing the methods.js into server/main.js with:
import '../imports/api/methods/methods.js';
Shouldn’t this register the Meteor.method with the server? I was trying to use the demo todo’s but the removing insecure page does not show how it’s imported (or I’m missing it).Do I need to export from methods.js or does importing the file run and register Meteor.methods()? | https://forums.meteor.com/t/how-to-import-meteor-method-file/21427 | CC-MAIN-2022-33 | refinedweb | 127 | 57.23 |
This is the second in my new series on Getting Started With Silverlight (please see the first article for information on the series and where to get the software you need).
[updated11/8]
Don’t Start with Xaml…
Until recently, just about every introduction to Silverlight started out by talking about Xaml; the markup language of Silverlight, WPF and Workflow. I believe it is time to stop.
Teaching Xaml first made sense when Xaml was the only (or best) way to create controls. But with a working design surface, Xaml is too high a bar to set just to get started.
I recommend that if you do not know Xaml you not try to learn it (at first) as you can go very far without doing so, it can be a stumbling block, and the tool will provide incredible help for learning the markup as you need it.
I may be the first person to write down that advice. (I may be stoned to death.) But I suspect I won’t be the last.
Silverlight Without Xaml
To get started, open VS2010 and create a new Silverlight project (I use C# but feel free to use any supported language). Let’s name it ThreeApproaches
When Visual Studio settles down you’ll probably see a split window with the designer on top and Xaml on the bottom. Let’s close the Xaml by clicking on the “make top window the only window” button on the far right of the splitter bar
Open the Toolbox if it is not already open ( Ctrl-W X) and pin it in place. Then click on the Grid that is the default layout control. When you click on it, margins will appear and putting the cursor into the top or left margin will offer you a preview of where you might click to create columns or rows respectively. Go ahead and create two rows and two columns, and then shrink the entire grid down to small enough to look ok.
Add Your First Control
Drag a Textblock out of the toolbox and place it more or less in the upper left box and then open the properties window. If the TextBlock’s properties are not displayed, click on the TextBlock to make it the selected control.
Somewhat unusually you set the name of the control at the very top of the
Properties window
Notice that there are two tabs: Properties and events. Make sure you have Properties selected, and below that you may want to click on the Categorized button rather than the A-Z to make this a bit easier to follow.
Expand the Layout property and by clicking in the black triangle next to both Height and Width (and clicking Reset) you can set the TextBlock’s dimensions to be set in accordance with whatever string (characters) are begin displayed. Set
the remaining values as shown in the next image, and the TextBlock should show in the designer as placed in the upper right hand box, 5 pixels from the right and bottom margins.
(To save space I cut out some rows, but you can leave those set to their default values).
Wasn’t that cool? No Xaml needed. But if you want to learn Xaml, aha! there are two great features to help. First, click on the horizontal split button on the far right of the design window. This will restore the split window you started out with.
Notice that the Xaml is now shown. Scroll down to line 20 and you should see the definition of the TextBlock, now in Xaml. Notice the 1:1 correspondence with the properties you’ve set.
(click on image for full size)
Xaml and Intellisense
Let’s write the second control (TextBox) in Xaml. Click into the Xaml window,
and below the TextBlock, type an open angle brace. Intellisense immediately springs forward offering to help you pick the control you want. The more you type, the more Intellisense will narrow in on your choice.
Once you select TextBox, and hit the space bar, again Intellisense jumps in, this time offering suggestions as to the properties you might want to set.
Fill in the property / value pairs as shown below,
1: <TextBox Name="Name"
2: HorizontalAlignment="Left"
3: VerticalAlignment="Bottom"
4: Height="25"
5: Width="75"
6: FontFamily="Georgia"
7: FontSize="14"
8: Grid.Row="0"
9: Grid.Column="1"
10:
Your TextBox will appear in the upper right corner of the designer, with all the properties set to correspond with what you’ve written in the Xaml.
To make the TextBlock (the first control) consistent with this, click on it in the
designer, and scroll down to Text Category and drop down the FontFamily to pick “Georgia.”
Set the size to 14 and click the
bold button to set it to Bold.
Return to the Text property (above Layout) and change it from TextBlock to Name? and make sure that the margin is set to 5 in the Layout section.
When all of that is done, hit Control-F5 to run the application and you should see a prompt and a textBox into which you can enter your name.
If you like, you can click in the grid but not on one of the two controls and bring up the properties for the Grid. Scroll down to, and expand, the “Other” category and click the checkbox next to “Show Grid Lines” to reveal the rows and columns you created.
Okay, now you know you can write your controls in Xaml, but why bother when you can just drag them onto the designer from the toolbox and set their properties in the Properties window. I truly believe that the latter approach is faster, less error prone and generally a much better way to get started.
Will you want to hand-code Xaml eventually? Maybe, but my guess is less and less as you get better and better at Visual Studio and, eventually, Expression Blend.
Dynamic Creation of Controls in Code
There is a third way to create controls: dynamically in code.
You can stop reading right here. You won’t need to know this for a long time. I am putting this into this article because (a) this can be a powerful technique when you do need it and (b) for some folks understanding the relationship between dynamically (C#) and declaratively (Xaml) created versions of the same object can be very helpful in groking what Xaml is about.
But your mileage may vary.
It’s All Just Objects
Every object you create in Xaml can also be created at run time in code. To see this, let’s create a second set of prompt and TextBox that will appear when the project is run.
To do so, turn the expander next to MainPage.xaml to reveal the code behind page, MainPage.xaml.cs. (or MainPage.xaml.vb if you are working in VB).
In the constructor, we’ll put a handler for the Loaded event (the loaded event runs when the page is loaded) and we’ll do our work in the event handler Visual Studio creates.
To do this, click into the constructor and type Loaded += then hit Tab twice to let Intellisense create your handler for you. Click in the handler and delete the exception that Intellisense put there to remind you to implement the handler logic.
Creating The TextBlock Dynamically
As noted above, every control can be created as a CLR object, and again Intellisense will help enormously. Begin by instantiating a TextBlock. the Identifier you use (in this case AddressPrompt) will become the Name property.
1: void MainPage_Loaded( object sender, RoutedEventArgs e )
2: {
3: TextBlock AddressPrompt = new TextBlock();
4: }
You will now add each property to the instance of TextBlock, though here you must be sure to be type-safe. Let’s walk through it.
First, you’ll want to set the HorizontalAlignment, which turns out to be an
enumerated constant. Again, Intellisense will help by offering the legitimate values
Fill in the Vertical Alignment in the same way.
When you try to fill in the Margin as a value, you’ll not the red squiggly line indicating something is
wrong. Hover over the Margin property and the tag will indicate the type of the Margin: Thickness. At this point you can open the help files to read about the Thickness type, or you can just instantiate one and see how that goes. I personally prefer the latter. Not only
does Intellisense show you that there are three possible constructors (which you can scroll through with the arrow keys) but it identifies the purpose of each parameter and guides you through filling them in. While I show the third constructor here, which lets you set the left, top, right and bottom margin, we’ll actually use the second constructor which lets you assign one value for all four.
Next we want to set the Height and Width. Hovering over each will reveal that they are doubles, but in this case we want to set them to “auto” – a quick check of the documentation reveals that this is accomplished by assigning the static value Double.NAN – a flag for the compiler to set them automatically.
You can set the FontFamily and FontSize as a string and a double
respectively, but set the FontWeight using the enumeration.
Here is the code we have so far
1: TextBlock AddressPrompt = new TextBlock();
2: AddressPrompt.HorizontalAlignment =
3: System.Windows.HorizontalAlignment.Left;
4: AddressPrompt.VerticalAlignment =
5: System.Windows.VerticalAlignment.Bottom;
6: AddressPrompt.Margin = new Thickness( 5d );
7: AddressPrompt.Height = double.NaN;
8: AddressPrompt.Width = double.NaN;
9: AddressPrompt.FontFamily =
10: new FontFamily( "Georgia" );
11: AddressPrompt.FontSize = 14d;
12: AddressPrompt.FontWeight = FontWeights.Bold;
Placing the Control into the Grid
We placed the first controls into the Grid by writing
Grid.Row = "0" Grid.Column = "1"
But of course, neither TextBlock nor TextBox has a Grid.Row property. These
are Extended Properties, properties defined in Grid but borrowed by other elements to assist in their placement. The C# equivalent is to call the ??? methods of the Grid class, passing the UIElement (in this case, AddressPrompt) that you want to place in the grid, and then the column number and row respectively.
Grid.SetColumn( AddressPrompt, 0 ); Grid.SetRow( AddressPrompt, 1 );
That done, the last step is to add the new element to the Grid itself, by referencing the Children collection of the particular Grid instance, and calling the Add method on that collection, passing in our Element:
LayoutRoot.Children.Add( AddressPrompt );
We can then follow a very similar process for the Address text box. The one interesting addition I’ll make is to set the color on the text in the TextBox. You do this by setting the Foreground property, and you must assign it a SolidColorBrush as its value. You can instantiate a SolidColorBrush by passing in a Color from the Colors enumeration,
AddressInput.Foreground = new SolidColorBrush( Colors.Blue );
When you look at the designer, you will not see either of these controls; they
won’t exist until the program runs. Press Control-F5 and try out your new program, however, and you’ll see that the dynamically instantiated controls are indistinguishable from the declarative (Xaml) controls; at least to all appearances:
You Can, But Don’t.
Even though dynamic declaration of elements takes many more lines of code; C# developers are often tempted to eschew Xaml and go with C# – after all; it is a tool they know.
Xaml has many advantages, however, not least of which is that it is highly toolable. That means that it works extremely well with the Visual Studio Designer and with Expression Blend, which in the long run means far faster development, better looking and easier to maintain applications, and a much easier interaction with designers.
The Source Code
For completeness, here is the Xaml file followed by the C# file:
<UserControl x:Class="ThreeApproaches.MainPage"
xmlns=""
xmlns:x=""
xmlns:d=""
xmlns:mc=""
mc:Ignorable="d"
d:
<Grid x:
<Grid.RowDefinitions>
<RowDefinition Height="1*" />
<RowDefinition Height="1*" />
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="1*" />
<ColumnDefinition Width="2*" />
</Grid.ColumnDefinitions>
<TextBlock Grid.
<TextBox Name="Name"
HorizontalAlignment="Left"
VerticalAlignment="Bottom"
Height="25"
Width="75"
FontFamily="Georgia"
FontSize="14"
Grid.Row="0"
Grid.
</Grid>
</UserControl>
[Note that I cleaned up the Grid columns and rows, using relative sizing (1*) and making the relative sizes of the columns 1:2 – all of this to be explained in an upcoming Mini-tutorial]
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
namespace ThreeApproaches
{
public partial class MainPage : UserControl
{
public MainPage()
{
InitializeComponent();
Loaded += new RoutedEventHandler( MainPage_Loaded );
}
void MainPage_Loaded( object sender, RoutedEventArgs e )
{
TextBlock AddressPrompt = new TextBlock();
AddressPrompt.HorizontalAlignment =
System.Windows.HorizontalAlignment.Left;
AddressPrompt.VerticalAlignment =
System.Windows.VerticalAlignment.Bottom;
AddressPrompt.Margin = new Thickness( 5d );
AddressPrompt.Height = double.NaN;
AddressPrompt.Width = double.NaN;
AddressPrompt.FontFamily =
new FontFamily( "Georgia" );
AddressPrompt.FontSize = 14d;
AddressPrompt.FontWeight =
FontWeights.Bold;
AddressPrompt.Text = "Address ?";
Grid.SetRow( AddressPrompt, 1 );
Grid.SetColumn( AddressPrompt, 0 );
LayoutRoot.Children.Add( AddressPrompt );
TextBox AddressInput = new TextBox();
AddressInput.HorizontalAlignment =
System.Windows.HorizontalAlignment.Left;
AddressInput.VerticalAlignment =
System.Windows.VerticalAlignment.Bottom;
AddressInput.Margin = new Thickness( 5d );
AddressInput.FontFamily =
new FontFamily( "Georgia" );
AddressInput.FontSize = 14d;
AddressInput.Foreground =
new SolidColorBrush( Colors.Blue );
AddressInput.Width = 100d;
AddressInput.Height = 25d;
Grid.SetColumn( AddressInput, 1 );
Grid.SetRow( AddressInput, 1 );
LayoutRoot.Children.Add( AddressInput );
}
}
}
Next in this series: Creating Input Forms with the Silverlight Toolkit
(Note, when the article is posted, the name of the next posting will become a link)
Pingback: Codeanswer the complete C# code solution, WPF Tutorial … | ASP WebDev Insider | https://jesseliberty.com/2009/11/06/designer-v-xaml-v-code/ | CC-MAIN-2021-25 | refinedweb | 2,291 | 64.51 |
Marcelo Tosatti wrote:> The void pointer case in here its being done math on without any problem. What is the> problem with void pointer math There is a problem regarding the C standard. The semantics of 'void *' are well definedand only allow for limited use. Basically, you can cast any pointer to and from 'void *',but nothing else.Pointer math says that 'p+n' means "add to 'p' the value 'n*s' where 's' is the sizeon the element that 'p' points to". A 'void *' does not have a defined element sizeuntil it is cast. So, ANSI specifically does not allow any arithmetics on 'void *'.Some compilers are forgiving and will invent an element size of '1' and allow themath. We should not rely on such improper usage.>> Maybe a cast is>>called for in bh_kmap(), like:>> return (char *)kmap(bh->b_page) + bh_offset(bh);> > > Hum, that would fix the warning but I dont see much reasoning behind it. It will not simply 'fix' the problem, this is one case where a cast is nota bad thing (i.e. a cheat) but the correct thing to do.- | http://lkml.org/lkml/2004/4/15/250 | CC-MAIN-2016-36 | refinedweb | 186 | 73.37 |
Shane Henderson's Blog covering JobBurner.com, starting and running a business, software development, technology, and the North Dallas .NET Users Group
Giovanni Gallucci over at QuesoCompuestoTV came over for a visit a few days ago and interviewed me about the architecture and technology behind JobBurner.com. We discussed quite a bit about the business, including the original idea, the founders, SixFires, the platform and technology, Community Server, Telligent, and what I want to be when I grow up. You can download it here.
Thanks for coming by Giovanni!
We launched JobBurner.com almost 1 week ago in partnership with Rob Howard and Telligent. You can find out more here on Yahoo! Finance where we got some initial coverage of the press release. Job postings are FREE until March.
So far we've seen job postings from some great companies; Amazon.com, Turner Broadcasting (Time/Warner), officialCOMMUNITY, Fellowes, and of course Match.com to name a few.
We have a great community where job seekers and employers can collaborate here and we've setup forums here.
I'll provide more details as well as discuss some technical details of JobBurner later this week. For now, here's a list of some of the products and platforms we used to build JobBurner:
As you can see, things obviously went well over the last week and we have some great momentum going. Today Job Text Ads launched on CommunityServer.org, and we will be running ads on and later this week. If you would like to join our affiliate program early please drop me a line and let me know. Lastly, if you know any recruiters I'd love to tell them about our product; I can be reached at shane (at) jobburner.com.
-Shane
Today I figured out how to Scroll back to the top of a Page when one of my users click on the Pager Controls at the bottom of a Page. I wanted all of the other AJAX postbacks to not scroll, but I did want the postbacks to scroll to the top when the pager controls were clicked. The pager control's client ID's are not set, so the postbackElement ID is set equal to the DataView, so knowing that information you can use the PageRequestManager to scroll to the top only if the the pager controls are clicked. (This works for both the Top and Bottom Pager controls):
<script type="text/javascript" language="javascript">
var postbackElement;)
{
if (typeof(postbackElement) === "undefined") {
return;
}
if ((postbackElement.id) === "ctl00_C1_JobsGridView") {
window.scrollTo(0,0);
}
}
</script>
WOW. Just WOW. I submitted 5 designs for the project I'm working on to the folks over at XHTMLized.com to be cut into proper XHTML and CSS late last week and they delivered everything today. XHTMLized.com is a company that will take a design (mine were .psd's) and cut everything for you into valid XHTML and CSS that works on all major browsers. They said it would be done by Wed. of this week (which was already a major plus), but they delivered everything a day early. I showed everything to Jason Alexander and Rob Howard and all they could say was WOW too. The markup is the best I have ever seen - standards compliant, SEO optimized, works on all major browsers including IE7, Opera, Firefox, and Safari! These guys just saved me at least two weeks...I'll definitely be using them again.
Come join NDDNUG and meet the original creator of ASP.NET - Scott Guthrie!
A few
times a year, NDDNUG puts on a really big free event featuring in-depth
technical content from top-notch industry experts. This year, we’re
featuring a speaker who you usually only get to see during keynotes at
major conferences - Microsoft's own Scott Guthrie. This world-class
event is being held at the Intuit, Inc. Headquarters on Thursday,
November 2, 2006 at 6:00pm.
As always, we’ll have FREE food, drinks, and prizes for attending!
Topic: A Night With Scott Guthrie
Date: Thursday, November 2, 2006
Time: 6:00pm
Where: Intuit's Headquarters in Plano Texas! (NEW LOCATION)
I had quite a few problems with ASP.NET Ajax today when I was converting some of my pages in my new product over to Beta 1 because I kept getting "Element 'XXXXXX' is not a know element" in the source viewer. The designer was also freaking out on me and changing my source HTML! I felt like I was using the Visual Studio 2003 designer again! Talk about a flashback...
Here's the fix:
Change this in your web.config:
<controls> <add tagPrefix="asp" namespace="Microsoft.Web.UI" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> <add tagPrefix="asp" namespace="Microsoft.Web.UI.Controls" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> <add tagPrefix="asp" namespace="Microsoft.Web.Preview.UI" assembly="Microsoft.Web.Preview"/> <add tagPrefix="asp" namespace="Microsoft.Web.Preview.UI.Controls" assembly="Microsoft.Web.Preview"/></controls>
to this:
<controls> <add tagPrefix="ajax" namespace="Microsoft.Web.UI" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> <add tagPrefix="ajax" namespace="Microsoft.Web.UI.Controls" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> <add tagPrefix="ajax" namespace="Microsoft.Web.Preview.UI" assembly="Microsoft.Web.Preview"/> <add tagPrefix="ajax" namespace="Microsoft.Web.Preview.UI.Controls" assembly="Microsoft.Web.Preview"/> </controls>
This should be fixed in the next version of the Beta. Please note that I am using the CTP web.config configuration. Wheew that was a close one, I was about ready to eat my own monitor.
This is a really good list for you startup folks out there written by Evan Williams of Odeo.com.
I tried to jump on the bandwagon and switch to Google Reader but today was the last straw; I'm switching back to reliable old Bloglines. Here are my reasons:
1) A few of my feeds have mysteriously "disappeared". Enough said on this one...still in beta I guess.
2) For some reason my folders have all been changed - all spaces and periods have been removed and no special characters are allowed. So my folders labeled .NET General, ASP.NET, Lucene & Nutch, Health & Nutrition, Web 2.0, etc. now say net-general, aspnet, lucene--nutch, health--nutrition, and web-20 - not cool. Why would you change this for your users when the original implementation was already working?
3) It's not easy to subscribe to a feed and add it to a folder. If you added the Subscribe... link on your toolbar or as a bookmark then you have to view the feed in Google Reader first, click on the Subscribe link at the top right hand corner of the page, then click down at the bottom left hand corner on Manage Subscriptions... then scroll down through every one of your feeds until you find the one you just added (there are no descriptions here either so it's difficult), then click on the dropdown list on the right hand side of the screen and add it to the folder you want. Oh, and if you try to click on one of your folders named "web 2.0" - forget about it - you get an error. Why? Because Google doesn't accept periods in the folder name anymore - even though the invalid selection is still in the list for you to choose.
4) Since Google conveniently renamed all of my folder names with dashes in place of spaces and special characters like &, exporting to another reader is now tough because the names are all different now. So any new feeds that I've added in existing folder won't match up in my OPML file when I do my export. No more switching back and forth easily without some manual cleanup.
5) I miss all of the nice icons in Bloglines in the feed list on the left hand side of the page. It adds a little character to each feed.
Right now I'm reading "The Art of the Start" by Guy Kawasaki. One of the things he talks about is getting to market quickly and not waiting to release the perfect product before you go broke. But, he says not to release too early or you might release something that is unusable or riddled with bugs. The key is to find the right balance, then listen to your users and then you'll end up with the right product. Did Google release to early? Are they listening to their users?
Thanks to our Website Director, Wade Wright, the North Dallas .NET Users Group launched our new website today. New features include the ability to RSVP for meetings directly from your account, manage your email subscriptions, new look and feel...and much more!
With a new location, new sponsor, new website, and a ton of new members we are growing strong! If you live in Carrollton, Addison, North Dallas, Plano, Frisco, McKinney, Allen, or The Colony you should join us for our next meeting. It's a great opportunity to meet like minded peers, look for a new job, or learn about new technology through our network of great speakers! Membership is free, we have a ton of giveaways at every meeting (like copies of SQL Server, Visual Studio, and even Xbox's), and we even feed you dinner! What more could you ask for? :) Register today. | http://weblogs.asp.net/shenderson/ | crawl-002 | refinedweb | 1,580 | 65.42 |
Elasticsearch Django app
Project Description
Elasticsearch for Django
This is a lightweight Django app for people who are using Elasticsearch with Django, and want to manage their indexes.
NB the master branch is now based on ES5.x. If you are using ES2.x, please switch to the ES2 branch (released on PyPI as 2.x)
Search Index Lifecycle
The basic lifecycle for a search index is simple:
- Create an index
- Post documents to the index
- Query the index
Relating this to our use of search within a Django project it looks like this:
- Create mapping file for a named index
- Add index configuration to Django settings
- Map models to document types in the index
- Post document representation of objects to the index
- Update the index when an object is updated
- Remove the document when an object is deleted
- Query the index
- Convert search results into a QuerySet (preserving relevance)
Django Implementation
This section shows how to set up Django to recognise ES indexes, and the models that should appear in an index. From this setup you should be able to run the management commands that will create and populate each index, and keep the indexes in sync with the database.
Create index mapping file
The prerequisite to configuring Django to work with an index is having the mapping for the index available. This is a bit chicken-and-egg, but the underlying assumption is the you are capable of creating the index mappings outside of Django itself, as raw JSON - e.g. using the Chrome extension Sense, or the API tool Paw. (The easiest way to spoof this is to POST a JSON document representing your document type at URL on your ES instance (POST{{index_name}}) and then retrieving the auto-magic mapping that ES created via GET{{index_name}}/_mapping.)
Once you have the JSON mapping, you should save it as search/mappings/{{index_name}}.json.
Configure Django settings
The Django settings for search are contained in a dictionary called SEARCH_SETTINGS, which should be in the main django.conf.settings file. The dictionary has three root nodes, connections, indexes and settings. Below is an example:
SEARCH_SETTINGS = { 'connections': { 'default': getenv('ELASTICSEARCH_URL'), }, 'indexes': { 'blog': { 'models': [ 'website.BlogPost', ] } }, 'settings': { # batch size for ES bulk api operations 'chunk_size': 500, # default page size for search results 'page_size': 25, # set to True to connect post_save/delete signals 'auto_sync': True, # if true, then indexes must have mapping files 'strict_validation': False } }
The connections node is (hopefully) self-explanatory - we support multiple connections, but in practice you should only need the one - ‘default’ connection. This is the URL used to connect to your ES instance. The setting node contains site-wide search settings. The indexes nodes is where we configure how Django and ES play together, and is where most of the work happens.
Index settings
Inside the index node we have a collection of named indexes - in this case just the single index called blog. Inside each index we have a models key which contains a list of Django models that should appear in the index, denoted in app.ModelName format. You can have multiple models in an index, and a model can appear in multiple indexes. How models and indexes interact is described in the next section.
Configuration Validation
When the app boots up it validates the settings, which involves the following:
- Do each of the indexes specified have a mapping file?
- Do each of the models implement the required mixins
Implement search document mixins
So far we have configure Django to know the names of the indexes we want, and the models that we want to index. What it doesn’t yet know is which objects to index, and how to convert an object to its search index document. This is done by implementing two separate mixins - SearchDocumentMixin and SearchDocumentManagerMixin. The configuration validation routine will tell you if these are not implemented.
SearchDocumentMixin
This mixin must be implemented by the model itself, and it requires a single method implementation - as_search_document(). This should return a dict that is the index representation of the object; the index kwarg can be used to provide different representations for different indexes. By default this is _all which means that all indexes receive the same document for a given object.
def as_search_document(self, index='_all'): return {name: "foo"} if index == 'foo' else {name = "bar"}
SearchDocumentManagerMixin
This mixin must be implemented by the model’s default manager (objects). It also requires a single method implementation - get_search_queryset() - which returns a queryset of objects that are to be indexed. This can also use the index kwarg to provide different sets of objects to different indexes.
def get_search_queryset(self, index): return self.get_queryset().filter(foo="bar")
We now have the bare bones of our search implementation. We can now use the included management commands to create and populate our search index:
# create the index 'foo' from the 'foo.json' mapping file $ ./manage.py create_search_index foo # populate foo with all the relevant objects $ ./manage.py update_search_index foo
The next step is to ensure that our models stay in sync with the index.
Add model signal handlers to update index
If the setting
auto_sync is True, then on
AppConfig.ready each model configured for use in an index has its
post_save and
post_delete signals connected. This means that they will be kept in sync across all indexes that they appear in whenever the relevant model method is called. (There is some very basic caching to prevent too many updates - the object document is cached for one minute, and if there is no change in the document the index update is ignored.)
There is a VERY IMPORTANT caveat to the signal handling. It will only pick on changes the the model itself, and not on related (
ForeignKey,
ManyToManyField) model changes. If the search document it affected by such a change then you will need to implement additional signal handling yourself.
We now have documents in our search index, kept up to date with their Django counterparts. We are ready to start querying ES.
Search Queries (How to Search)
Running search queries
The search itself is done using elasticsearch_dsl, which provides a pythonic abstraction over the QueryDSL, but also allows you to use raw JSON if required:
from elasticsearch_django.settings import get_client from elasticsearch_dsl import Search # run a default match_all query search = Search(using=get_client()) response = search.execute() # change the query using the python interface search = search.query("match", title="python") # change the query from the raw JSON search.update_from_dict({"query": {"match": {"title": "python"}}})
The response from execute is a Response object which wraps up the ES JSON response, but is still basically JSON.
SearchQuery
The elasticsearch_django.models.SearchQuery model wraps this functionality up and provides helper properties, as well as logging the query:
from elasticsearch_django.settings import get_client from elasticsearch_django.models import SearchQuery from elasticsearch_dsl import Search # run a default match_all query search = Search(using=get_client(), index='blog') sq = SearchQuery.execute(search)
Calling the SearchQuery.execute class method will execute the underlying search, log the query JSON, the number of hits, and the list of hit meta information for future analysis. The execute method also includes these additional kwargs:
- user - the user who is making the query, useful for logging
- reference - a free text reference field - used for grouping searches together - could be session id, or brief id.
- save - by default the SearchQuery created will be saved, but passing in False will prevent this.
In conclusion - running a search against an index means getting to grips with the elasticsearch_dsl library, and when playing with search in the shell there is no need to use anything else. However, in production, searches should always be executed using the SearchQuery.execute method.
Converting search hits into Django objects
Running a search against an index will return a page of results, each containing the _source attribute which is the search document itself (as created by the SearchDocumentMixin.as_search_document method), together with meta info about the result - most significantly the relevance score, which is the magic value used for ranking (ordering) results. However, the search document probably doesn’t contain all the of the information that you need to display the result, so what you really need is a standard Django QuerySet, containing the objects in the search results, but maintaining the order. This means injecting the ES score into the queryset, and then using it for ordering. There is a method on the SearchDocumentManagerMixin called from_search_query which will do this for you. It uses raw SQL to add the score as an annotation to each object in the queryset. (It also adds the ‘rank’ - so that even if the score is identical for all hits, the ordering is preserved.)
from models import BlogPost # run a default match_all query search = Search(using=get_client(), index='blog') sq = SearchQuery.execute(search) for obj in BlogPost.objects.from_search_query(sq): print obj.search_score, obj.search_rank
Release History
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/elasticsearch-django/ | CC-MAIN-2018-09 | refinedweb | 1,500 | 52.09 |
Hey, Scripting Guy! I would like to write a script, with a number of sequential actions, that can continue where it left off after a reboot. How can I do that?
-- GS
Hey, GS. By the way, what are you doing at 10:00 AM Eastern Daylight Time today? Nothing? Great; then we’ll see you at the Orange County Convention Center for our 10:00 instructor-led lab Windows PowerShell for VBScripters. Should be fun, huh?
At the moment the Scripting Guys are gathering up their things and preparing to head down to the Orange County Convention Center. (Well, if you want to get technical, Scripting Guy Jean Ross is gathering up her things and preparing to head down to the Orange County Convention Center. Meanwhile, Scripting Guy Greg Stemp has snuck back down to the hotel’s breakfast buffet for more chocolate-filled croissants.) Will this instructor-led lab stand as the most exciting thing to ever happen to the city of Orlando? Well, that’s a bit hard to say; after all, Orlando is the home to everything from Disney World to Universal Studios to Sea World. But yes, yes it will.
Of course, we realize that some of you aren’t going to be able to make it to our instructor-led lab. If you’re one of those people, well, the heck with you. You don’t want to come to our instructor-led lab? Fine; then we’re not going to your instructor-led lab. Take that, wise guy!
Could we put you on hold for a second here? OK, Scripting Guy Jean Ross has pointed out a minor typo in the preceding paragraph. As it turns out, that paragraph should read, “Of course, we realize that some of you aren’t going to be able to make it to our instructor-led lab. If you’re one of those people, well, that’s fine; we understand. And, as it turns out, you won’t be missing all that much anyway; after all, we’ve updated our VBScript to Windows PowerShell Conversion Guide as part of the TechEd festivities, and those updates (new sections on the FileSystemObject and Windows Script Host) are available online. In addition, we’re also planning to show you a script that can continue – from the proper spot in the code – following a required reboot of the computer.”
That’s what we meant to say.
So what about a script that can continue, from the proper spot in the code, following a reboot of the computer? Well, try this one on for size:
Const HKEY_CURRENT_USER = &H80000001 strComputer = "." Set objRegistry = GetObject("winmgmts:\\" & strComputer & "\root\default:StdRegProv") strKeyPath = "Software\My Scripts" strValue = "Test Script" objRegistry.GetStringValue HKEY_CURRENT_USER,strKeyPath,strValue,strScriptStatus If IsNull(strScriptStatus) Then strScriptStatus = "Run" objRegistry.CreateKey HKEY_CURRENT_USER,strKeyPath objRegistry.SetStringValue HKEY_CURRENT_USER,strKeyPath,strValue,strScriptStatus Wscript.Echo "The script is running for the first time." Wscript.Quit End If Wscript.Echo "The script is running for the second time." objRegistry.DeleteKey HKEY_CURRENT_USER,strKeyPath
Yes, this is sort of an odd-looking little script, isn’t it? Does any of it make sense, and, more important, does it actually work? Let’s find out.
As you can see, we kick things off by defining a constant named HKEY_CURRENT_USER and setting the value to &H80000001; we’ll need this constant to tell the script which registry hive we want to work with.
After defining the constant we connect to the WMI service on the local computer; in particular we connect to the default namespace and the StdRegProv class:
Set objRegistry = GetObject("winmgmts:\\" & strComputer & "\root\default:StdRegProv")
Once that’s done we go ahead and assign values to a pair of variables. The variable strKeyPath is assigned a path within the HKEY_CURRENT_USER portion of the registry; to be more specific, it gets assigned the path Software\My Scripts. Meanwhile, the variable strValue is assigned the name of a registry value (Test Script) within that registry path.
Why do we bother with all that? Because we can then use the GetStringValue method to read the value of Test Script from the registry:
objRegistry.GetStringValue HKEY_CURRENT_USER,strKeyPath,strValue,strScriptStatus
As you can see, we simply call GetStringValue, passing along the constant HKEY_CURRENT_USER; the variable strKeyPath; the variable strValue; and an “out parameter” named strScriptStatus. What’s an out parameter? Well, when you call a method you usually supply information to that method; any information you supply to a method is known as an “in parameter.” Every now and then, however, a method will actually return information back to you; in that case, you’re dealing with an out parameter. When working with the GetStringValue method, we supply a variable name (strScriptStatus) and GetStringValue responds by assigning the value of the registry item in question (Software\My Scripts\Test Script) to that variable.
That’s what an out parameter is.
Of course, some of you might be panicking a bit by now. “Wait a second, Scripting Guy who writes that column,” you’re thinking. “I don’t even have a registry value named Software\My Scripts\Test Script!” That’s fine; in fact, we don’t want you to have a registry value named Software\My Scripts\Test Script.
Don’t worry; we’re going to explain what we mean by that. The first time this script runs it uses the GetStringValue method to retrieve the value of Software\My Scripts\Test Script. If this registry value doesn’t exist, well, no problem; in that case our out parameter – strScriptStatus – will simply be assigned a Null value. More importantly, our script will know that this is the first time it has run on this computer. This is how we keep track of where we are; that is, is this script running before the required reboot or after the required reboot? If the registry value Software\My Scripts\Test Script doesn’t exist then that means we must be running before the reboot. Otherwise that value would exist.
Let’s assume that strScriptStatus is Null, something we verify using this line of code:
If IsNull(strScriptStatus) Then
What do we do now?
Well, for one thing, we execute these three lines of code:
strScriptStatus = "Run" objRegistry.CreateKey HKEY_CURRENT_USER,strKeyPath objRegistry.SetStringValue HKEY_CURRENT_USER,strKeyPath,strValue,strScriptStatus
In the first line we’re simply assigning a new value (Run) to the variable strScriptStatus. After making that assignment we use the CreateKey method to create the registry key Software\My Scripts. Once that’s done we use the SetStringValue method to both create the registry value Test Script and to assign it the value of the variable strScriptStatus.
You’re right: we did manage to accomplish quite a bit in just three lines of code, didn’t we? Cool.
As it turns out, our next line of code is just filler material:
Wscript.Echo "The script is running for the first time."
All we’re doing here is echoing back the fact that the script is running for the first time. Like we said, this is just filler material; in a real script you’d use this space to carry out all the tasks that need to be carried out before the computer reboots. We didn’t want to distract anyone from the main purpose of this sample script, so we didn’t include any tasks other than echoing back a simple little message.
As soon as all our tasks are complete we then run this block of code:
Here we’re adding a new value to the Software\Microsoft\Windows\CurrentVersion\RunOnce registry key; by design, anything that appears in this key will automatically run the next time the user logs on. (However, it will run only one time, then be deleted from the key.) We’re simply creating a new value named Test Script, and assigning it a command that induces our script (Test.vbs) to run the next time the user logs on:
strScriptPath = "cmd.exe /k cscript.exe C:\Scripts\Test.vbs"
From there we simply exit the script; in a real script, however, this is the spot where you would put in the code that causes the computer to reboot.
So what happens when the computer does reboot? (Or, in our test scenario, what happens the second time you run the script?) Well, as it did the first time around, the script checks the value of Software\My Scripts\Test Script. This time, of course, Test Script will actually have a value. Therefore, we skip the If Then statement and instead run these two lines of code:
Wscript.Echo "The script is running for the second time." objRegistry.DeleteKey HKEY_CURRENT_USER,strKeyPath
Again, line 1 is just filler; this is the spot where you’d put the commands you want to execute after the reboot. And then, just to clean things up, we delete the registry key Software\My Scripts. Doing that enables us to rerun the entire script if need be. If you want the script to run once – and only once – on a computer then you might leave the registry key in place and, instead, change the value of Test Script to Don’t run. You can then modify the script so that it simply terminates if the variable strScriptStatus is equal to Don’t run.
But we’ll let you deal with that one yourself. Right now it’s time for the Scripting Guys to head down to the Convention Center. See you all tomorrow!
You know what? We’ll just grab a couple more chocolate-filled croissants. Then we’ll head down to the Convention Center.
I am curious how this could be used to insert RunOnce commands that require reboots. How could this be modified to continue down the list of tasks and pick up the next task and insert in the RunOnce key? I have some tasks I would like to run after a computer is sysprepped but before returning to the user and yet each task requires a reboot before the next task and Sysprep does not continue once it reboots in the out of box experience phase. Thank you. | https://blogs.technet.microsoft.com/heyscriptingguy/2008/06/11/hey-scripting-guy-how-can-i-get-a-script-to-pick-up-where-it-left-off-after-a-reboot/ | CC-MAIN-2018-13 | refinedweb | 1,690 | 72.16 |
Generics
Generics were introduced in the JDK 5.0 as a language enhancement. Generics are heavily used Java Collection Framework interface. Generics can be used to define methods, classes and interfaces that can operate with different data types.
Why Use Generics
- Generics provide compile-time safety
- Removes unnecessary type conversions
Generic Interfaces
Generics are defined with the type in angle brackets,
<> after the interface name.
Here is the definition of the
java.util.Collection interface
public interface Collection<E> extends Iterable<E>
<E> defines the generic element that can passed to the
Collection. The collection can be used as follows to define a collection that can only contain String types :
Collection<String> collection;
or a collection that holds Integer types as follows :
Collection<Integer> collection;
We can replace
<E> in
Collection<E> with any type. We gain compile type safety checks from the compiler and also remove type conversion since we know the collection will always contain a specific type.
Generic Classes
Generic classes are defined the same way as generic interfaces. The type is specified in angle brackets,
<>.
Any
ArrayList is an ordered collection of objects by their index. Here is the definition of the
ArrayList generic class :
public class ArrayList<E> extends AbstractList<E> implements List<E>
Notice how the the class is defined with the
<E>,
ArrayList<E> and also inherits from a generic class and a generic interface. The
<E> defines the
ArrayList to be generic. It represents any element type. We can replace
<E> with any data type. Here is how we can declare a variable of the
ArrayList to hold
Integer types :
ArrayList<Integer> list;
and to hold String types :
ArrayList<String> list;
Instantiating Generic Classes
When instantiating a generic class we have to include the type in angle brackets,
<>.
Here is an example of instantiating the generic
ArrayList from above with types of String :
ArrayList<String> list = new ArrayList<String>();
Since we have already defined the type to be a generic String, we can replace the
new ArrayList<String>() with
new ArrayList<>(), since its redundant as follows :
ArrayList<String> list = new ArrayList<>();
We could also define an
ArrayList that can hold Integer types :
ArrayList<Integer> list = new ArrayList<>();
Using Generic Methods
Methods can also be declared to be generic by using the same
E specified in the generic class.
Here is the definition of the generic method add in the
ArrayList class :
public boolean add(E e){}
To use the generic method
add, we will have to pass in the correct type of
E.
ArrayList<Integer> list = new ArrayList<>(); list.add(10); list.add(15.9); // compile type error
Notice the above generates a compile-time error when we try to pass in the wrong type. Without using generics the following will work :
ArrayList list = new ArrayList<>(); list.add(10); list.add(15.9); // works without an error
The above works because we did not limit the types the
ArrayList can hold. The
ArrayList hold types of
Object. This code can cause run-time errors in future since we did not guarantee the types in the
ArrayList. | https://java-book.peruzal.com/generic/genericsmd.html | CC-MAIN-2018-22 | refinedweb | 514 | 51.99 |
This is my interface mac address
To search for the substring from the lines of string i use the search method.
This is the pattern
pattern = r"([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}"
And in order to return the substring only use the
group() method.
# this is an example on how to get the mac address using regex. from subprocess import check_output import re pattern = r"([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}" def check_interface(interface): return (check_output(["ifconfig", interface])).decode('utf-8') if __name__ == "__main__": interface_result = check_interface("eth0") print(re.search(pattern, interface_result).group())
If you print the re.search result without using
group method you will get the entire regex object information.
Further reading the documentation makes me understand the difference between match and search methods. So I am finding a word which can be anywhere in the multi-line strings i need to use search method, the match only returns if the match is found at the beginning of the string.
Alternate regex pattern can be:
# Alternate short hand way. \d is the same as [0-9]
pattern2 = r"([\da-fA-F]{2}:){5}[\da-fA-F]{2}"
Another method:
pattern3 = r"([0-9aA-fF]{2}:){5}[0-9aA-fF]{2}"
Another method, but this one is bad:
# \w is the same as [0-9aA-zZ]
pattern4 = r"(\w{2}:){5}\w{2}"
This is bad because mac address is hexidecimal so from 0 until f, and f is the max, using the short hand
\w it will match things like this 0z:fx:ra:9r:r8 which is bad in the context of mac address. | https://cyruslab.net/2019/11/18/pythonregex-to-find-mac-address/ | CC-MAIN-2022-27 | refinedweb | 278 | 60.14 |
Hi everyone, I've got to shift the beta release until tomorrow evening.
Advertising
I got pretty far moving the code out of the Zope 3 tree, but the app namespace is just too large for right now and I don't want to let my script run without me sitting next to it in case it goes berserk on the repository. 56 packages to go through the script and about 10 that I need to handle partially manually because of a missing project area. After that I'll create 3.4.0b1 tags for all those new satellites and update the trunk of the Zope3 tree to use that as the external reference. Then I'll release the trunk as 3.4.0b1. As the release manager for 3.4 I'll take care to move the currently extracted eggs into a stable shape for the final so that all packages that the 3.4 release uses as an external are available with a stable version number (most times this will be 3.4). After that, each egg is on its own. ;) Night,: | https://www.mail-archive.com/zope3-dev@zope.org/msg08372.html | CC-MAIN-2017-17 | refinedweb | 184 | 79.4 |
Performance is always very important. By using a Component-preload.js file in your custom developed Fiori App (“Build your own Fiori App in the cloud”) startup and runtime performance can be improved.
Probably you saw already “negatively” the Component-preload.js file as failed http request in your custom developed Fiori App or if you used one of the Project Templates of the SAP Web IDE. Or perhaps you recognized it in the source of an installed standard SAP Fiori App, which you have downloaded from SAPUI5 ABAP Repository via the SAP Web IDE. The standard SAP Fiori Apps have this file included by default. To make it short this file basically contains all the JavaScript (controller, views, utilities) and XML view files of the application within one single files. So instead of multiple http request for each of the app resources, only one request is necessary. This could be a huge performance improvement if you have for example bad latency times.
See the number of request in the below screenshot the difference without (left) and with the usage of Component-preload.js (on the right):
Generate Component-preload with Gulp
Currently SAP Products lack off a ready-running build process to generate such a file. Also SAP Web IDE does not support the generation of this file (at least in the current version). I’m very curious how and when SAP will close this gap and include a build process within SAP Web IDE. Probably soon. So what can you do in the meanwhile to solve this issue?
The answer is easy. Use one of the available build tools. There is Grunt and there is Gulp among others. For both tools packages are available to generate the preload file. I choose Gulp here, because I saw the video of Eric Schoffstall presentation MountainWest JavaScript 2014 – Build Systems The Next Generation by Eric Schoffstall – YouTube. Great show, worth seeing and also entertaining!
gulp-ui5-preload package
With Gulp we can use the very handy gulpui5-preload package from Christian Theilemann (Thx geekflyer!). I will show it with the Fiori UI5 Boilerplate (UI5BP). Please use the latest version is case you have already download the source of the UI5BP. For UI5BP and if you have node.js installed it quite simple:
install necessary packages via npm (npm comes with node.js, this task is only initially necessary):
npm install
and generate to Component-preload.js with Gulp:
gulp ui5preload
on success you should see something like this:
and as result you have the generated Component-preload.js file:
Be aware that you always have to rebuild the Component-preload if you made changes to one of the included files! So it makes sense to generate it, if you deploy on QA or PRODUCTION and remove it if you developing locally.
The base for this easy build process are the file package.json which defines the relevant package. This information is used by npm to install the necessary packages locally. And the gulpfile.js, which defines what should be added to the Component-preload.js and what not:
var gulp = require('gulp'); var ui5preload = require('gulp-ui5-preload'); var uglify = require('gulp-uglify'); var prettydata = require('gulp-pretty-data'); var gulpif = require('gulp-if'); gulp.task( 'ui5preload', function() { return gulp.src( [ '**/**.+(js|xml)', '!Component-preload.js', '!gulpfile.js', '!WEB-INF/web.xml', '!model/metadata.xml', '!node_modules/**', '!resources/**' ] ) .pipe(gulpif('**/*.js', uglify())) //only pass .js files to uglify .pipe(gulpif('**/*.xml', prettydata({type: 'minify'}))) // only pass .xml to prettydata .pipe(ui5preload({ base: './', namespace: 'ui5bp', fileName: 'Component-preload.js' })) .pipe(gulp.dest('.')); } )
Workaround for SAP Web IDE and HCP
If you followed the previous post and deployed the UI5BP app via SAP Web IDE to SAP HANA Cloud Platform (HCP) and on the Cloud SAP Fiori Launchpad ( Deploy UI5 Boilerplate on Fiori Launchpad of HANA Cloud Platform ) you can generate the Component-preload.js with this workaround:
Step 1: As the SAPUI5/OpenUI5 App is deployed on the Git Repo on HCP checkout/clone the repo locally with Git. You find the Git Repository URL in HCP Cockpit under the following location:
Step 2: generate the Component-preload.js with Gulp:
npm install gulp ui5preload
Step 3: Add, Commit and Push to HCP Git Repo
git add Component-preload.js git commit -m "generate Component-prekiad.js" git push
Step 4: Create and activate a new version of application in HCP Cockpit
This is a two step process. You create a new version go the just push source and in a second step activate this version. The result should look like this:
Now the Fiori App, here Fiori UI5BP is using the Component-preload.js.
Usage of Grunt
For Grunt there is also a task available. Have a look at SAP/grunt-openui5 · GitHub
and the post from Matthias Osswald
Thanks for posting this blog - its extremely helpful!
Ive noticed that some of the resources though included and minified int the Component-preload,js are still loaded again with dedicated requests. Could this be related to the specific reference to those files from other reources using the jQuery.sap.require(...) statements? Should these be removed once he Component-preload,js is there?
thanks again,
Ido
Thanks Ido,
which resources do you mean? Do you have an example?
If these resources are included in the Component-preload.js file they should not be requested after the Component-preload.js was loaded.
BR, HP
It seems like any javascript resources that start with jQuery initialization (or perhaps any "non standard" format):
(function($) {
"use strict";
$.sap.declare("googleMaps.places");
. ...
}(jQuery));
Are loaded specifically even though they are included in the Component-preload.js
Hi Seitz, i to facing the same problem as Ido Shemesh.Please give us the solution so that we can figure out the mistake.
Thanks in advance
Kiran
After creating the component-preload.js, even my application is loading other files also along with that.
Please help me what did you do to resolve this.
Thanks
Gunjan
Hi! I generated the Component-preload.js file. But what happens next??? How will it be accessed? Should I reference or import it somehow somewhere? How can I instantiate my views, controllers and other js files?
Although using Component-preload.js seems all cool, but for example I can't actually have a file with "-" inside the BSP Application (tried creating in SE80). Also how would i instantiate my components?
var oMenuView = sap.ui.view({
id: "idMenu",
viewName: "view.Navigatsioon.Menu",
type: sap.ui.core.mvc.ViewType.JS,
controller: sap.ui.controller("view.Navigatsioon.Menu")
});
this obviously causes errors since the file is not found. Do I need to reset my resourceroot tag?
Hi,
is your application running without Component-preload.js file, hence is it running with Component.js?
Is your Component.js loaded and executed?
If not, you should first solve this.
Normally views are part of the Component and access/instanciated via routing, if not you can not benefit from Component-preload either!
Normally you create UI5 Apps externally and uploaded with se38 or you could use SAP Web IDE.
BR, HP
Thanks for you quick feedback!
I will first implement Component.js part.
Once i get it to work, i'm still not sure how will the component referencing be affected? For example, at the moment I initialize my component as follows:
var oMenuView = sap.ui.view({
id: "idMenu",
viewName: "view.Navigatsioon.Menu",
type: sap.ui.core.mvc.ViewType.JS,
controller: sap.ui.controller("view.Navigatsioon.Menu")
});
How would I access my custom **.js files after Component-preload.js and Component.js are implemented?
Sorry Man,
what you doing here is creating a view. There is no Component!
This is very basic stuff!
Please have a look at the ui5 walkthrough tutorial and work on this first to get a better understanding on fundamental ui5 concepts!
OpenUI5 SDK - Demo Kit
BR, HP
The UI5 application structure and the Demo Kit has changed a lot since our customers app was developed. So what is basic now and what was basic a year ago are very different.
But to me it seems that in order to successfully use the generated Component-preload file you have to deploy the app to Fiori Launchpad. If your app is launched from Fiori Launchpad (FioriLaunchpad.html or what was the name of the file), then it will try to look for the Component-preload file. Otherwise, nothing will happen (for example when launching directly from index.html).
Please correct me if I am wrong!
Hello HP Seitz,
Is it only available for sap.m library or after a specific version of sap ui5 library. Because we have an application running on an older version of sap ui5 library 1.24.0. Also this is an desktop application using sap.ui.commons. We don't get this error.
regards,
PT
HP,
thank you for this blog - it is well documented and easy to follow. i was able to get my app to generate the component preload using gulp easier than when struggling with grunt.
not sure if you would know but any idea why the library-preload and library.css files are requested twice? same thing on my end.. i am looking into how to reduce those multiple requests.. also, did your sap-icons request took the longest to execute? not sure why that would take so long..
thank you again for your blog
Is there any option to generate Component-preload.js using the SAP Webide, other than the workaround mentioned in the post?
I would be also really interested if there is something like that. | https://blogs.sap.com/2015/04/27/performance-improvement-with-component-preloadjs/ | CC-MAIN-2021-17 | refinedweb | 1,608 | 68.26 |
Hide Forgot
Description of problem:
1) Printing to cups printer available on the network.
2) Printjob is sent and prints
3)the gnome notification printer icon appears
4)clicking on the applet brings up the status window.
5)Printjob physically completes on the remote printer but the status
window never updates the printjob as completed and continues to show
the status as Sending.
6)selecting the printjob and choosing to cancel document from the edit
menu of the status window doesn't clear the printjob.
Version-Release number of selected component (if applicable):
desktop-printing-0.15-1
Expected results:
When the printjob is physically completed the status show change.
and I should be able to cancel and clear all printjobs listed in the
status window.
-jef"by the way, default printer preference works equally well as the
native per application printer chooser dialog in everything i test
including this little bugaboo"spaleta
5) Can you surf to and see whether it
picked up on the print job status?
6) Yes...it currently doesn't change until it confirms from the remote
end it was actually cancelled. In retrospect it would have been nice
to say "Cancelling..." but I already promised the nice GNOME i18n guys
I wouldn't change any more strings.
Oh, and:
> I should be able to cancel and clear all printjobs listed in the
> status window.
Do you have a particular reason for wanting to clear them? We thought
about having print jobs just go away automatically after 3 days or
something. My feeling is that most people log in and out often enough
that it should clear their print list.
comment #1
5) cant surf to the remote printer server. Should that be a
requirement to be able to print to that printer?
comment #2
a particular reason? Lets just say there are certain sensitive filenames
that i'm not particularly interested in other people might happen to
see because its sitting in the status list. Sensitive because it might
be sensitive data because i work at a .gov. Or it might be sensitive
because the file name is off-color or adult oriented and I dont
necessarily want other people to see the name of the file i printed
sitting in the list.
-jef
5) You need to be able to reach the printer via IPP, which goes over
HTTP. Can you do: telnet server 631 ?
If that doesn't work, some tcpdump output would be useful.
tcpdump -s 0 -w ~/tcpdump.log 'host myprintserver and port 631'
Re: comment #4
telnet server 631 doesn't connect.
attached is the tcpdump.log on http and telnet attempts to 631
I'll repeat all this again when i'm physically at the boxes to make
sure it is still printing.
Created attachment 105377 [details]
tcpdump on client to printerserver port 631
From the tcpdump output, it looks like you were using Firefox to browse to the
port. Can you capture the output of printing something (small :)) and let
eggcups appear, and wait a few seconds? Then I should be able to see what IPP
requests eggcups is doing.
Alternatively, you could: strace -s 200 -p $(pidof eggcups) >/tmp/eggcups.log 2>&1
Re: comment #8
I did a new tcpdump active while i sent a 1 page document to the cups
server from gedit. waiting a few seconds after the print job was done
for eggcups to update...nothing. Then tried to browse to the 631 port
on the cups server and got a forbidden. The print job goes through
just fine, but eggcups status isnt updating, still shows "Sending"
New attachment in a minute....
-jef
Created attachment 105484 [details]
tcpdump on client to printerserver port 631 with a small printjob sent over.
Hmm. Do you have control over the CUPS server? If so can you attach
its configuration?
It looks to me like eggcups is requesting status about the print job,
but isn't getting anything back. Then something (I guess it must be
eggcups) tries to request status on /, which is forbidden. I'm not
sure why that would happen.
How did you set up this printer? Is it picked up automatically via
local network CUPS broadcast? Or did you add it with
system-config-printers as a local printer? Something else?
"Do you have control over the CUPS server?"
Its an fc2 fully updated desktop under my complete control and subject
to my every whim. Well at least while my wife isn't actively using it.
As far as configurations go, the system hasnt been tweak far from the
defaults for an fc2 workstation install.
"If so can you attach its configuration?"
Which configuration files do you want exactly from the server? I
configured the printer via the system-config-printer dialog on the
cups server.
"Then something (I guess it must be eggcups) tries to request status
on /, which is forbidden"
That was probably me being stupid and trying to surf via a webbrowser
to again after the print job finished and before i
shutdown the tcpdump log.
"How did you set up this printer?"
system-config-printer on the server, setup as a local parallel port
printer.
"Is it picked up automatically via local network CUPS broadcast?"
yes, on the client machine its being picked up as a Browsed que, when
I examine the printer listings in s-c-printer.
I just want the cups.conf. That should list the access control on the server.
Have you ever used system-config-printer on your client system?
Also, can you run gnome-session-properties and remove eggcups from your session,
and then from a terminal, run:
eggcups -d --sm-disable 1>/tmp/eggcups.log 2>&1
And just print something, then attach eggcups.log here.
Uhm cups.conf? as in /etc/dbus-1/system.d/cups.conf?
i would have thought you would want /etc/cups/cupsd.conf
I'll attach both just to save time.
along with the eggcups.log
-jef
Created attachment 105566 [details]
eggcups -d --sm-disable 1>/tmp/eggcups.log 2>&1
Created attachment 105567 [details]
/etc/cups/cupsd.conf on cups server
Created attachment 105568 [details]
/etc/dbus-1/system.d/cups.conf on cups server
[0x8813dd8] [ec_job_model_job_sent_remote] ec-job-model.c:585 (20:32:24): no job
for local 3 remote 27
Now we're getting closer. This means that eggcups couldn't find the job on the
remote server. The typical reason for this is a permissions problem, but your
remote cupsd.conf looks OK (Although I'm not sure why you have "Deny from All"
and "Allow from All" in the config for Canon).
One more thing: can you do:
strace -s300 eggcups --sm-disable 1>/tmp/eggcups.strace 2>&1
and then print something and attach the eggcups.strace here.
Hopefully we'll get to the bottom of this :)
Created attachment 105628 [details]
strace -s300 eggcups --sm-disable 1>/tmp/eggcups.strace 2>&1
Ok, here are the relevant messages:
read(19,
"l\4\1\0\220\0\0\0r\0\0\0\2\0\0\0\1o\0\0\32\0\0\0/com/redhat/PrinterSpooler\0\2s\0\0\0\31\0\0\0com.redhat.PrinterSpooler\0\3s\17\0\0\0JobQueuedRemote\0\10s\0\0\6\0\0\0ssuuss\0\7s\0\0\0\5\0\0\0:1.93\0\0\0\0\0\0\0s\0\0\0*\0\0\0ipp://karen.localdomain:631/printers/Canon\0s\r\0\0\0successful-ok\0u\0\4\0\0\0u\0\0\0\34\0\0\0s\0\0\0\10\0\0\0jspaleta\0s\0\0\5\0\0\0Canon\0",
2048) = 258
read(19, 0x8f58738, 2048) = -1 EAGAIN (Resource temporarily
unavailable)
clone(child_stack=0xf68fb4c8,
flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID|CLONE_DETACHED,
parent_tidptr=0xf68fbbf8, {entry_number:6, base_addr:0xf68fbbb0, limit:1048575,
seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0,
useable:1}, child_tidptr=0xf68fbbf8) = 27800
From here we can see that we're getting the JobQueuedRemote dbus signal from the
cups server, and we create a thread to retrieve the job data from the server.
But it doesn't look like the calls from the other thread are in the strace.
Could you try rerunning with:
strace -fF -s300 eggcups --sm-disable 1>/tmp/eggcups.strace 2>&1
The -fF tells strace to follow threads.
Created attachment 105685 [details]
strace -fF -s300 eggcups --sm-disable 1>/tmp/eggcups.strace 2>&1
I also have the problem of finished print jobs remaining in the "print status"
window.
Both jobs are shown as "sending". Both have finished printing, one of them 2
days 19 hours ago.
Jef: I am baffled, honestly. I don't understand why a POST to / is happening;
it should be to /printers/Canon. I can see from your earlier debug output that
the correct printer URI is being retrieved.
redhat@nodata.co.uk: can you attach the same debug info that Jef did? Remove
eggcups from your session as in comment 13, then get the output of:
eggcups -d --sm-disable 1>/tmp/eggcups.debuglog 2>&1
# Do a print job, wait till you see eggcups flash, then Ctrl-C it
strace -fF -s300 eggcups --sm-disable 1>/tmp/eggcups.strace 2>&1
# Do another print job, wait till you see eggcups flash, then Ctrl-C it
Ok, actually - it looks to me like a POST to / is normal; my mistake.
If you guys change your CUPS configuration like so:
<Location />
Order Deny,Allow
Allow From All
</Location>
does it work as expected?
I tried comment #24, then ran a service cups restart.
Same problem.
The Status is still shown as "Sending".
I'll follow the request from comment #23.
Created attachment 105828 [details]
[user@box ~]$ eggcups -d --sm-disable 1>/tmp/eggcups.debuglog 2>&1
Created attachment 105829 [details]
[user@box ~]$ strace -fF -s300 eggcups --sm-disable 1>/tmp/eggcups.strace 2>&1
redhat@nodata.co.uk: Did you remove the existing eggcups from your session?
The debug log shows you already had one running.
Jef: Can you also confirm that allowing POST to / for all clients doesn't help
you either?
<Location />
Order Deny,Allow
Allow From All
</Location>
on the server and a cups service restart later....
my eggcups printjob on the client desktop shows status complete
a few seconds after the job is actually done.
Now...give me my history flush.. or I'll tell slashdot that you
allowed this status bug to exist.
-jef
Tim - The default CUPS configuration denys POST to /, which prevents eggcups
from getting status updates on jobs.
Do you know what the security implications of allowing this would be? Would
anyone on the network be able to e.g. cancel jobs?
> Did you remove the existing eggcups from your session?
Yes. I ran eggcups -d --sm-disable 1>/tmp/eggcups.log 2>&1
I just tried it again. The log says the same.
Concerning 'Location /' permissions: the documentation suggests that
this location is for 'get' operations and so shouldn't be too harmful
to open up. I don't think it would allow anyone external to cancel jobs.
However, I'm not sure that allowing everyone on the network to view
your currently printing jobs is entirely desirable either. But I
suppose that's what eggcups needs?
Can't eggcups be fixed instead? Rather than letting anyone look at
print jobs on my computer? Just asking.
Tim: Yes, eggcups needs to be able to retrieve the job status. What you really
want of course is only for users who submitted a job to be able to retrieve its
status. The only way I can think of to implement this sanely is Kerberos.
Which we don't have yet :/
We should at least relnote this, that the default cups configuration doesn't
allow print job monitoring to work.
Although - I wonder if it would work if I hacked eggcups to use the printer URI
for POST instead of /. Let me try that.
Well, there is a /jobs namespace that cups provides. If I change eggcups to use
that, then I get prompted for a password with the office CUPS setup. This may
just be a misconfiguration of our office CUPS server though.
Tim: Any opinion?
Ok, it is going to be hard to change it to use the printer URI. I'll see if I
can do it though.
Ok, I took a stab at this. Can you guys try:
Binary:
Source:
Be sure to revert your CUPS configuration to the default.
I'm afraid I can't test the rpm right now, python is behaving weirdly.
Will test when I can.
Jef - do you think you could try the RPM? It's also in rawhide.
sorry i was a way at a conference for a week. I should be able to test
this sometime today while the other machine isn't in use.
Just to be clear, you want me to install the desktop-printing package
on one of my fc3 or fc2 client computer while talking to a stock cups
server running on fc2?
-jef
Well your not going to like the results
I reverted the fc2 cups server back to the default cupsd.conf
restarted the cups service
and i installed the desktop-printing-0.17-4.i386.rpm on the fc3
client and then logged into the fc3 desktop
sent a printjob via gedit to the remote printer
document print status is still stuck at Sending...even though the
printjob has completed... no obvious change from the user perspective.
I'll do it again and produce an strace to attach.
-jef
Created attachment 107157 [details]
strace -s300 eggcups --sm-disable 1>/tmp/eggcups.strace 2>&1 after desktop-printing-0.17-4 installation
Hm, can you redo that with -fF too?
Created attachment 107268 [details]
strace -fF -s300 eggcups --sm-disable 1>/tmp/eggcups.strace 2>&1 for desktop-printing-0.17-4
Man, I suck. I added the patch to the package, but forgot to actually apply it.
Can you try this?
:->
desktop-printing-0.17-5.i386.rpm seems to have worked the gui dialog
changed from status printing to status completed. I'll attached the
strace log for completeness for you to inspect.
-jef"now... back to my real complaint...being able to flush the list
of completed printjobs from the dialog"spaleta
Created attachment 107353 [details]
strace -fF -s300 eggcups --sm-disable 1>/tmp/eggcups.strace 2>&1 for desktop-printing-0.17-5
Colin,
I'm getting ready to move my fc2 machine running the remote cups
server over to fc3, unless you need me to do more specifically about
this issue. If this isn't resolved to your satifaction I can do more
testing
against the same fc2 machine to avoid complicating the testing scenario.
So just let me know if I need to do anything else.
-jef
Ok, let's go ahead and close this bug; please open separate ones about
other issues such as cancellation not working (I think you mentioned
that).
I just tried desktop-printing-0.17-5.i386.rpm from Rawhide and it did
not fix the problem for me. The job completed 10 minutes ago, but is
still listed in the printer applet as "Unknown" document (it did have
a file name), "?" size, and "Sending" status. The applet flashed
briefly when it appeared, but stopped after a second or so (after
spooling completed?).
The target printer was an HP network printer's IPP interface, not
another Linux/CUPS box.
FWIW, I like the old apparent behavior where the applet disappeared if
there was nothing in the queue. Also, I tend to stay logged in for
long periods of time on machines under my physical control. It would
be nice not to have to look at this artifact.
I tried printing to an FC2 CUPS system, and I see the correct behavior
now. The icon greys out and the submitted job shows with the correct
information: filename, printer-queue, size, age, and status (Completed).
My remarks in Comment #51 still apply for a non-FC IPP queue. Haven't
tried other types of queues yet.
Matthew: can you open a new bug, and attach the strace data like in
Comment 20 to it? A new bug would be nice because I think the main
issue in this bug has been fixed (eggcups not working with the default
CUPS config), and I'd like to track your issue separately.
OK See 142015.
The print jobs also do not clear out of the eggcups notification
window for me. After applying the rawhide rpm listed above, the
status says "printing", then it changes to unknown. This is also on
fedora core 3. I notice in my logs that FC3 is attempting to access
the status every few minutes and I see the lines:
D [15/Dec/2004:08:43:13 -0600] ReadClient() 30 POST /printers/q626
HTTP/1.1
E [15/Dec/2004:08:43:13 -0600] get_job_attrs: job #15214 doesn't exist!
D [15/Dec/2004:08:43:13 -0600] Sending error: client-error-not-found
D [15/Dec/2004:08:43:13 -0600] ProcessIPPRequest: 30 status_code=406
In the log.
Ah wait, think I may have solved my own problem. I think this depends
on the "Preserve Job History" option being set in the cupsd.conf file.
Flipping that on makes the printer grey out after the job has been
completed.
Okay, now the real kicker is to be able to clear the list so the jobs
don't rack up. We at the University of Kansas, Department of
Mathematics have confidential government projects being printed and
require this feature. Most of our faculty members involved in these
projects stay logged in for months at a time. Our userbase is
approximately 200 people.
Ok, in desktop-printing-0.18-1, I just added a new menu item
"Edit->Clear Completed Documents". This should address the other
complaint in this bug about sensitive documents. There is another bug
if that you never log out, the list just accumulates; I plan to
address that by automatically clearing completed documents after 1 day. | https://bugzilla.redhat.com/show_bug.cgi?id=134292 | CC-MAIN-2019-18 | refinedweb | 3,040 | 66.33 |
.
mclass wrote:Having had a Wemo Insight switch fall into my hands, I have searched the Indigo forums for means to integrate the device with Indigo - without success
Searching further, I have come across a script that I have successfully deployed (with some minor changes) as a series of action groups to turn the Wemo on and off, toggle the state and to determine its state. Regrettably, I am not sufficiently software literate to develop this further into a plugin, but have posted it it in the hope that it may generate further interest in developing support for this and other Wemo devices!
Note that the Wemo app is still required to set up the switch on the local wifi network.
...
mclass
# Python script to operate or obtain status of Wemo switch
# based on a script WeMo.py published by pruppert on GitHub
# at
# Usage:
# Tested with Wemo Mini switch as an embedded script in an
# Indigo Action Group/Virtual Device
#!/usr/bin/python
import re
import urllib2
# Configuration:
# Enter the local IP address of your WeMo in the parentheses of the ip variable below.
# You may have to check your router to see what local IP is assigned to the WeMo.
# It is recommended that you assign a static local IP to the WeMo to ensure the WeMo is always at that address.
# Uncomment one of the triggers at the end of this script.
# Insert the appropriate message for the Indigo event log near line 89
ip = 'insert address here'
WemoVar = "Insert name of Indigo status variable here"
class wemo:
OFF_STATE = '0'
ON_STATES = ['1', '8']
ip = None
ports = [49153, 49152, 49154, 49151, 49155]
def __init__(self, switch_ip):
self.ip = switch_ip
def toggle(self):
status = self.status()
if status in self.ON_STATES:
result = self.off()
# result = 'WeMo is now off.'
elif status == self.OFF_STATE:
result = self.on()
# result = 'WeMo is now on.'
else:
raise Exception("UnexpectedStatusResponse")
indigo.variable.updateValue(WemoVar, value=result)
return result
def on(self):
status = self.status()
if status in self.ON_STATES:
result = status
elif status == self.OFF_STATE:
result = self._send('Set', 'BinaryState', 1)
else:
raise Exception("UnexpectedStatusResponse")
indigo.variable.updateValue(WemoVar, value=result)
return result
def off(self):
status = self.status()
if status in self.ON_STATES:
result = self._send('Set', 'BinaryState', 0)
elif status == self.OFF_STATE:
result = status
else:
raise Exception("UnexpectedStatusResponse")
indigo.variable.updateValue(WemoVar, value=result)
return result
def status(self):
result = self._send('Get', 'BinaryState')
indigo.variable.updateValue(WemoVar, value=result)
return result
def name(self):
return self._send('Get', 'FriendlyName')
def signal(self):
return self._send('Get', 'SignalStrength')
def _get_header_xml(self, method, obj):
method = method + obj
return '"urn:Belkin:service:basicevent:1#%s"' % method
def _get_body_xml(self, method, obj, value=0):
method = method + obj
return '<u:%s xmlns:<%s>%s</%s></u:%s>' % (method, obj, value, obj, method)
def _send(self, method, obj, value=None):
body_xml = self._get_body_xml(method, obj, value)
header_xml = self._get_header_xml(method, obj)
for port in self.ports:
result = self._try_send(self.ip, port, body_xml, header_xml, obj)
if result is not None:
self.ports = [port]
return result
raise Exception("TimeoutOnAllPorts")
def _try_send(self, ip, port, body, header, data):
try:
request = urllib2.Request('' % (ip, port))
request.add_header('Content-type', 'text/xml; charset="utf-8"')
request.add_header('SOAPACTION', header)
request_body = '<?xml version="1.0" encoding="utf-8"?>'
request_body += '<s:Envelope xmlns:'
request_body += '<s:Body>%s</s:Body></s:Envelope>' % body
request.add_data(request_body)
result = urllib2.urlopen(request, timeout=3)
return self._extract(result.read(), data)
except Exception as e:
print str(e)
return None
def _extract(self, response, name):
exp = '<%s>(.*?)<\/%s>' % (name, name)
g = re.search(exp, response)
if g:
return g.group(1)
return response
def output(message):
# Write message to Indigo server event log
indigo.server.log ("Wemo Device Name - " + message)
switch = wemo(ip)
# Configuration:
# Uncomment only one of the lines below to make the script work.
#output(switch.on())
#output(switch.off())
output(switch.toggle())
#output(switch.status())
very slick automation through a simple and intuitive cloud services
jay (support) wrote:We continue to evaluate the various technologies out there for inclusion in Indigo, but the demand for Wemo support has thus far been very limited in comparison to other things (Indigo Touch, more extensive Z-Wave support, etc.) that are a much higher priority.
We would certainly love to see a 3rd party developer add Wemo support via a plugin but none seem to be particularly interested at the moment.
DVDDave wrote:FWIW, I'd like to echo what others have said about wanting Wemo support in Indigo. The inexpensive switch from Costco that I integrated has been working very well. I think the inexpensive Wemo units are a good substitute for cheap X10 devices to allow whole home automation at a much lower cost than z-wave, insteon and others. They also work at wifi distances which is farther than other technologies in my experience. Low demand may be a chicken and egg issue since I put off buying one due to lack of Indigo support for them. I finally got it when I saw the script posted by mclass. A plug-in would be ok, and I wish I had the capability of creating one, but I agree with others that Wemo is getting much bigger now as evidenced by Costco selling them. Just my 2 cents. Thanks as always;
jay (support) wrote:Thanks for the feedback. If you look at this thread, for example, I think you'll see what we mean by low demand (mclass only looked into it because one "fell into his hands"). And another interpretation for something showing up in Costco is that they aren't selling well...
But, in any event, now that mclass has done at least part of the work someone else could build the plugin so if there's sufficient demand I'm sure one of our 3rd party developers will pick it up.
rgspb wrote:Just my two cents worth, but I think one reason I would like to see native WeMo support is because these units are a fraction of the cost of Insteon units. They are also more readily available. I have had three new Insteon plug in modules crumble in my hand as I plugged it in. Not saying WeMo is better constructed, but at Insteon prices they should be constructed much better. My older plug in modules are are still rock solid. All this being said I wonder if HomeKit Bridge plug-in gives support for these?
rgspb wrote:I haven’t tried Z-Wave before, I’ve sunk all my fortunes into Insteon. Just ordered a Z-Stick and plug-in module from Amazon so I’ll see how that works with my setup. You’re right in that there are a lot of Z-Wave compatible devices out there. I’m just a little fed up with some of my Insteon devices now and there’s way too many choices to just keep,throwing my money at them.
wideglidejrp wrote:What would be the most reliable solution for smart plugs with Indigo?
Users browsing this forum: No registered users and 2 guests | https://forums.indigodomo.com/viewtopic.php?t=11979&p=164228 | CC-MAIN-2019-13 | refinedweb | 1,184 | 57.67 |
Ok, so I just made this account because I'm super stuck. Basically I need to create a math quiz that takes a users input (1 through 12), have it spit out a question, i.e What is 5 x 12? however if the user enters 5, it must randomly generate a question asking the user his 5x tables. i.e What is 5x3? What is 5x7? and so on and he needs to enter the correct answer, after which it will display correct or incorrect. I am having trouble getting the user input part. So far my code allows for him to enter an answer, get correct or incorrect, but they it goes to a seperate timestable, i.e first question will be 5x4, next will be 9x11. I need 5-10 random questions just using his input * and 1-12 as the timestable. What I am asking is how am I able to enter an input value for what times table the quiz is on?
import java.util.*; public class MathQuizzer{ public static void main(String[] args) { Scanner a = new Scanner(System.in); int b,right,totalscore,m; right = 100; totalscore = 0; b = 0; do { double numb = Math.round((Math.pow(10, 2) * Math.random())); double numb1 = Math.round((Math.pow(10, 2) * Math.random())); double valueFirst; double valueSecond; valueFirst = numb % 12; valueSecond = numb1 % 12; //Highest Number double answer; System.out.print("\nMultiply these two numbers together: "); System.out.print(valueFirst);System.out.print(" * ");System.out.println(valueSecond); answer = a.nextDouble(); if(answer == (valueFirst * valueSecond)) { System.out.println("\nCorrect"); right++; } else { System.out.println("\nIncorrect"); } b++; } while (b < 10 ); //Number of questions to be asked. totalscore = right - 100; System.out.println("\nYour total score is: " + totalscore ); } } | https://www.daniweb.com/programming/software-development/threads/476401/java-multiplication-quiz | CC-MAIN-2021-31 | refinedweb | 288 | 61.63 |
Hi,
I recently had a friends son ( that is learning java on his own) ask me to create a simple sample urban slang translator, and since its been a few years since Ive messed around with java I am running into a problem. If my memory serves me correct I remember spending hours on a project only to realize all the problems I had were something silly. Anyhow, It seems I have gotten close except that my main problem is a big one, I cant seem to get the results returned. Since he only wants to screw around with turning an application into an app saving me some hours or days would be greatly appreciated. Im sure there is an easier way, but since I've came this far now I am the curious one. Here is where I'm at thus far.
import java.util.Scanner; import java.awt.*; import java.util.*; public class Slang { public static void main(String[] args) { String[][] translateList = {{"a long time", "a minute"}, {"opinion", "two cents" }, {"dance", "two step"}, {"attractive", "all that"}, {"leave", "audi"}, {"yes", "awe yeah"}, {"no", "awe naw"}, }; Scanner scan = new Scanner(System.in); System.out.println ( "convert it to slang" ); String sentence = scan.nextLine(); String[] input = sentence.split("\\s"); for (int x=0; x<input.length; x++) { for (int y = 0; y < translateList.length; y++) if (input[x].equalsIgnoreCase(translateList[y][0])) System.out.println("True"); } } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/10118-creating-slang-translator.html | CC-MAIN-2015-27 | refinedweb | 235 | 72.05 |
Running a program on Pythonista and Sublime Text.)
You are most likely running the script on your mac on Python 2.7 and on the iPad on Python 3.5 (Pythonistas default). With Python 3.0 the Python 2.x
- set Pythonista to Python 2.7 in the settings golbally
- or just for one instance by long pressing the play button
- or you could add something called a shebang line, forcing python into Python 2 mode from code, the first line of your script has to be
#!Python2
- or just change the code to
print (median (1,3,2)), which in this case will also run just fine under Python 2.
Your suggestions worked.
Perfect - thanks!
My approach would be to put
from __future__ import print_functionat the top of your file before all other import statements. This will force you to write code that works on both versions of Python.
Also, bigger() could be replaced by max()
Also, bigger() could be replaced by max()
I'm guessing this is mostly just an exercise... Otherwise, in Python 3.x,
mediancould also be replaced by
statistics.median.
My approach would be to put from future import print_function
I do not want to be rude, but using
futureis IMHO bad advice ( especially for a beginner). IMHO it also goes against the Google Python style guide lines, which while not speaking out explicitly agianst
futuredo forbid the use of deprecated methods and unusal styles in multiple guide lines.
@zipit I'm not sure if I'd recommend it to a beginner – it might be better to just use one of the two Python versions, and learn about the differences and porting/compatibility strategies later – but
__future__imports aren't deprecated or unusual at all. If you want a code base that supports both versions, this is definitely the right approach.
@zipit Last time I looked Google was not the governing authority over the Python language. ;-). In fact, looking at their Grumpy project, I would say that Google would rather see developers move from Python 2 to Golang rather than to Python 3. As @omz points out,
__future__is part of the standard library (introduced in Python 2.1) and is not deprecated.
Jeah, I did not state that very clearly. Was I meant was that since they forbid using deprecated methods the (kinda) opposite - future imports - are also not a good idea as mixing versions goes against the zen-priniciple concise and clear.
edit: And what is common is IMHO not an indicator of what is good. People also use cryptic bit shifts, overly complex list comprehensions, etc. all over the place and they are also not good.
But that is just my opinion. But you are right there are cases where you need future. | https://forum.omz-software.com/topic/4112/running-a-program-on-pythonista-and-sublime-text | CC-MAIN-2021-39 | refinedweb | 459 | 73.37 |
Glossary Item Box
This is a step-by-step guide to the process of reverse mapping database views. They differ from the regular tables and that is why an additional step has to be performed during the mapping.All examples in the article are based on the Northwind database and a project of type Class library. The project first has to be “Enabled” to use Telerik OpenAccess ORM. Open the OpenAccess menu and run the Enable Project wizard. It will help you to configure the project and the database connection. Once the project is enabled, you are ready to start mapping. The steps below demonstrate mapping the “Alphabetical list of products” view available in the Northwind database. It contains columns from two related tables – Products and Categories.
Step 1: Open the Reverse Mapping wizard
The Simple View tab shows all available tables and views from the database along with some basic options needed to create the code. Each table/view can be mapped to a Class, Collection or Map. Usually collections and maps are used to represent join tables. When mapping views, the Class option should be selected. Custom class name and namespace can be specified here as well.
Step 2: Switch to Advanced View.
The Advanced View tab provides options that are not available in the Simple View but are required in order to map a view. Select the Views node from the Treeview. A grid similar to the one from the previous step shows up. Now you have to choose which views to be mapped. To do this, enable the relevant “Generate” checkbox next to each view. You may notice that the treeview icons have been changed for the views which are marked to be mapped.
Step 3: This step is specific for mapping views. They do not provide information about identity columns as the regular tables do. However, all persistent classes generated by Telerik OpenAccess ORM require an identity field. This identity field has to be set manually for each class. Expand the view from the list in the treeview and select the id field. Then enable the “Primary Key” checkbox on the right. After setting an identity, the red warning about a missing primary key field should disappear from the code preview window.
Step 4: When you have chosen identity fields for all classes, they are ready to be generated. Click the “Generate & Save Config” button.After the classes are created they can be used to retrieve and modify data. The code samples below demonstrate basic read and update operations on the “Alphabetical list of products” view. Both OQL and Linq queries can be executed to fetch data:
The approach for updating data is the same as for regular tables. Even the referenced object can be changed:
A running transaction is required to store the changes to the database. | http://www.telerik.com/help/openaccess-classic/openaccess-tasks-mapping-views.html | CC-MAIN-2017-17 | refinedweb | 473 | 74.39 |
The move command in Unix shell script:
abc = /dir1/dir2
def = /dir3/dir4
mv $abc/file.txt $def
This is throwing error when executing the Unix shell script
We encourage you to read our updated PRIVACY POLICY and COOKIE POLICY.
The move command in Unix shell script:
abc = /dir1/dir2
def = /dir3/dir4
mv $abc/file.txt $def
This is throwing error when executing the Unix shell script
preceed the mv a b with
. . $ mkdir -p $def
guarantees the directory exists.
-=*+* applemcg
Hi,
What is error message ?
both source and target must exists ...
try the same with
abc = "/dir1/dir2"
def = "/dir3/dir4/"
Hi Pallavi,
you need to set path correctly.
abc="/dir1/dir2/"
def="/dir3/dir4/"
then run your command, I hope there wont be any error then.. if
still you have error please paste error message
Thanks\Regards,
Khalid Pasha
You are not permitted spaces either side of the = assignment operator.
As written, I would expect stderr to say: bash [4]: abc: command not found.
because the word abc is interpreted as a plain command, not an assignment.
---
@mcg,
Hate to call somebody of your stature on anything, but:
if the directory does already exist, the user will see a spurious error message "Cannot reate directory" which may cause them to think the mv failed.
I would use: [[ -d "${def}" ]] || mkdir -p "${def}"
Or actually, I would not. That's still incomplete. if there is already a file ${def}, then the test -d fails (there is no such directory) but the mkdir fails too (there is an object o the same name).
So best to let mv tell us what is wrong, not to rescript all its tests.
So the question comes back to the original poster:
mv tells us whether the problem is the source file, the target directory, the space available, and a bunch of other things.
So don't tell me it failed: show me what it said, because it MATTERS. It is a waste of time having to guess what you decided, in your wisdom, to censor.
---
@paul,
You're absolutely correct, seriously.
the essential iesson: it's not that hard to be more specific in describing the problem.
and yes, in the real world, you must make sure what might be a directory isn't a file.
and ... on this mac, no complaints:
ittoolbox.$ rm -fr a
ittoolbox.$ mkdir -p a/deep/directory
ittoolbox.$ mkdir -p a/deep/directory
ittoolbox.$ find a
a
a/deep
a/deep/directory
ittoolbox.$
@mac,
Yo. The mkdir -p options makes parent directories "as needed", so it does not error if any of those levels exists.
It also extends this courtesy to the last directory name, which is not a parent. Without the -p, it flags a terminal directory that already exists. This is very tidy.
Even more fun may be had with rmdir -p and variants.
I am on RedHat, so this is a fairly widely available behaviour.
-Paul.
---
Mkdir ()
{
. existingdir $1 || {
. case $1 in
. */*)
. Mkdir ${1%/*}
. ;;
. esac;
. mkdir $1
. }
}
existingdir ()
{
. [ -d ${1:-/dev/null} ]
}
with leading "."s added for clarity (?)
This supporting "existingdir" function, while simple, supplied a
meaningful mnemonic to the "test for an existing directory", part of a
manufactured family of functions covering the available flags.
Also note, my cavalier attitude around the variable quoting. I never
did. This style dates from '83 at the dawn of the Korn Shell. Having
barely recovered from the 14-character limit on filenames, there were
never spaces in a file name (Bill Gates first showed up on a Jan '83
issue of Time mag.)
Mkdir is recursive: If the directory exists, do nothing, otherwise
create the parent directory, the ${1%/*} trims the last slash and
following characters, and then create the requested directory.
I may have used this for a decade after "-p" became available, since
I already had the functionality. Today's obsolescent function:
Mkdir () { mkdir -p "$1"; }
in case I bump into some old code.
ittoolbox.$
b.t.w. it was just this practice which led to my functional addiction.
So, @pallavi, are you still hung up or do you want to share your solution with us? | https://it.toolbox.com/question/the-move-command-in-unix-is-throwing-error-when-executing-unix-shell-script-020313 | CC-MAIN-2020-24 | refinedweb | 692 | 74.19 |
Quant Basics 8: Bootstrapping and Response Surface
In the last section we investigated how the strategies we’ve selected from our train-test cluster were distributed in the parameter plot. We saw that they form a dense cluster in that plot which indicates that the PnL’s we see are not a result of overfitting since we would expect them to be more randomly distributed.
In this section we have a look at how to do a Monte-Carlo analysis of our PnL curve. All we do here is to draw random return samples from our curve and re-asseble them in a set of new PnL curves. The shortfall we see will give us some indication on how consistent our results are. This is particularly important as we want to avoid the possibility of large drawdowns.
def bootstrap(pnls,params,tickers,start,end,backend='file'): # Get the PnLs of our top strategies p = prices(tickers,start,end,backend=backend) best_params = get_best_parameters(params,pnls,50) pnl = calc_pnl(calc_signals(tickers,p,min(best_params[0]),max(best_params[1])),p) # Calculate the returns rets = np.diff(pnl) rets = rets[~np.isnan(rets)] last = [] for i in range(500): # Random sampling step k = np.random.choice(rets,len(rets)) ps = np.cumsum(k) if ~np.isnan(ps[-1]): last.append(ps[-1]) # Plot the sample plt.subplot(211) plt.plot(ps) plt.xlabel('time') plt.ylabel('PnL') print 'actual pnl:',np.cumsum(rets)[-1],' bootstrapped mean pnl: ',np.nanmean(last) # Plot the distribution plt.subplot(212) plt.hist(last,30) plt.xlabel('PnL') plt.ylabel('N') plt.show()
actual pnl: 14.9704493607 bootstrapped mean pnl: 14.9525417379
We can see that the mean PnL of the bootstrap is very close to the actual PnL of our strategy. That is a very encouraging result. The main issue usually arises when we have some extreme returns in our PnL curve that may skew our distribution. If this was the case we would need to be much more cautious to trade such a strategy unless we have a very good explanation for such returns.
The results of the bootstrap are shown below. On the top we see all the randomly sampled PnL curves and on the bottom show the distribution of the final PnLs of all these curves.
Here we’ve seen another way to assess our strategy. In the next section we will look at the response surface of the strategy with respect to the parameters. What we hope to see is a reasonably continuous surface, which tells us that our returns are not just due to some lucky coincidences.
The code base for this section can be found on Github. | http://aaaquants.com/2017/09/20/quant-basics-8-bootstrapping-and-response-surface/ | CC-MAIN-2019-39 | refinedweb | 445 | 57.57 |
I have this program in Java to read a database written in a .txt format (file name is "info.txt").
It has the values record number, name, age, state, and salary class written in it. I've named the array to read that database as "ven[]". The program will load that text-file database into that array to display on the screen when it is run. When it reaches the array cell that has the salary class value, it displays on the screen the corresponding amount. Array cell ven[4] contains a single entry that's just one character long.(ven[0] is the where the record number or "R11-1000" is stored)
That character corresponds to the value assigned to it. That is, if the value in it is "1", it corresponds to $200 per hour. Hence, on the screen the line, "Salary per Hour" will have the amount, $200 dollars next to it. I read that switch statements can only allow integers or characters as values in it. So, I've been trying to assign that array cell in question into a char-declared variable - as you can see in the photo of the few lines before reaching the switch statement. Unfortunately, I get error messages when I attempt to do so. (the compiler says they're incompatible data types). The error message, by the way, says "error: incompatible types - readDB.java - line 37"...
I intend to use the entry inside ven[4] as a value for the switch statement in the code. I'm not quite a beginner but I'm not expert on Java either.
I have no idea how to go around this problem - any idea how? That is, how to transfer an array's value into
The code I've written is down below...(line 35 is where the error is...)
Code:
import java.util.*;
import java.io.*;
public class readDB
{
public static void main(String args[])throws Exception
{
Scanner v = new Scanner(System.in);
Scanner i = new Scanner(new File("info.txt"));
String s,salarytype;
StringBuffer result = new StringBuffer();
double ph=0;
Boolean wr=false;
System.out.println(" ");
System.out.print("Enter Employee Code: ");
s = v.nextLine();
while (i.hasNextLine())
{ //first while loop start brace
n = i.nextLine();
String ven[] = n.split(";");
if (s.equals(ven[0]))
{
System.out.println(" ");
System.out.println("Name: " + ven[1]);
System.out.println("Age: " + ven[2]);
System.out.println("Address: " + ven[3]);
System.out.println("Salary Type: " + ven[4]);
result.append(ven[4]);
String mynewstring = result.toString();
char[] cArray = mynewstring.toCharArray();
switch(cArray)
{
case 1: ph=200;
System.out.println("Salary per Hour: "+ph);
break;
case 2: ph=300;
System.out.println("Salary per Hour: "+ph);
break;
case 3: ph=500;
System.out.println("Salary per Hour: "+ph);
break;
}
wr=true;
} else if (wr==false)
{
System.out.println ("Invalid Code ");
}
} //first while loop end brace
}
} | http://forums.codeguru.com/printthread.php?t=536229&pp=15&page=1 | CC-MAIN-2018-05 | refinedweb | 478 | 70.09 |
Alright, So I thought to myself Boy wouldn't it be great if all of the objects stored a pointer to the sprite that I'd like to have them render with? Sounds fantastic to me!
Thus I made this little file looking something like this
//Spritelist.h #pragma once #include "Sprite.h" namespace SpriteList { static Sprite Player1("Rainbow.bmp",64,64,4,4); }
And then Within the Object class I have there is a Sprite pointer pointing to said static variable. Wow great!
But then onto my Render Method Something appears to be askew!?
//Within some direct3d file void Direct3D::render_frame(vector<Object> *Objects) { d3ddev->Clear(0,NULL,D3DCLEAR_TARGET,D3DCOLOR_XRGB(0,40,0100),1.0f,0); d3ddev->BeginScene(); d3dspt->Begin(NULL); for(vector<Object>::iterator index = Objects->begin();index != Objects->end();index++) { //does not pass, however if the line fallowing labeled "WiggityWack"it passes then the second time this //function goes by (I assume) it does pass if(index->sprite==&Player1) { index->sprite->frameStore++; } //"WiggityWack" index->sprite = &Player1; <- also if the equivilent of this line is //place in the code that calls this render function the top if statement still willl not pass. //does pass if(index->sprite->cols==Player1.cols) { index->sprite->frameStore++; } //index->sprite=&Player1; //does not render anything DrawSprite(index->sprite,1, 64, 64 , 1.0f); //Does render something! Wow thats great! DrawSprite(&Player1,1, 64, 64 , 1.0f); } d3dspt->End(); d3ddev->EndScene(); d3ddev->Present(NULL,NULL,NULL,NULL); }
I've done a lot of random tests and I've cut down the problem to the objects within the vector's sprite member simply isn't pointing to that static Sprite called Player1. Static variables have been something I've tried to avoid as I've heard people like to avoid them, so now that I am using one I'm wandering if perhaps I've misinterpretted their use and that's related to why this does not work. Would anyone be able to clarify on this?
I'll post my Object class just in case you see anything stupid there that i totally passed, but i believe there is nothing wrong here.
//Object.h struct Object { public: //constructor and deconstructodon Awww yeah! Object(/*HWND *hwnd,*/Sprite *_sprite ,int _x, int _y); ~Object(); // I forget what these are called, uuuuh like... variables inside a class, it's somethin stupid. //HWND hwnd; Sprite * sprite; unsigned int frame; int x, y; //methods void update(/*inputdata*/); }; //Object.cpp Object::Object(Sprite *_sprite ,int _x, int _y) :sprite(_sprite), x(_x), y(_y),frame(0) { } Object::~Object() { } void Object::update(/*inputdata*/) { } | http://www.gamedev.net/topic/637789-a-little-confusion-with-rendering-a-sprite/ | CC-MAIN-2016-26 | refinedweb | 431 | 57.06 |
04 August 2009 12:06 [Source: ICIS news]
LONDON (ICIS news)--Yuzhny ammonia shipments reached 118,258 tonnes in July, up 31,000 tonnes from the 86,909 tonnes exported in the previous month, customs data showed on Tuesday.
Exports of Russian ammonia totalled 100,277 tonnes, up from 86,909 tonnes in June, while Ukrainian shipments reached 17,981 tonnes, up from zero the previous month.
?xml:namespace>
Other export destinations last month were the
Shipments in July 2008 from the
Yuzhny ammonia exports had suffered from low international prices over recent months. However, demand had picked up and market expectations were that August shipments would be higher than July's.
Some 144,000 tonnes were already lined up to load from Yuzhny during August, sources said.
Yuzhny is a major trading port in the Black Sea, while | http://www.icis.com/Articles/2009/08/04/9237021/yuzhny-ammonia-exports-up-31000-tonnes-in-july.html | CC-MAIN-2013-48 | refinedweb | 139 | 58.32 |
in my project i need to read sensor data which is transmitted and received on RFID interfaced to the pc through rs232.this data i have to make use in my mysql database.so i wanted to know how i read my port using vb6 and then update it to database.i created the database using wampserver. i got no prior experience of high level programming ..
com port readingPage 1 of 1
1 Replies - 1599 Views - Last Post: 26 September 2012 - 07:08 AM
#1
com port reading
Posted 26 September 2012 - 07:00 AM
Replies To: com port reading
#2
Re: com port reading
Posted 26 September 2012 - 07:08 AM
Quote
i got no prior experience of high level programming ..
Do you have any experience at all with programming?
Why start with vb6? Why not .NET? the serialport namespace is pretty nifty.
Page 1 of 1 | http://www.dreamincode.net/forums/topic/293273-com-port-reading/page__pid__1709828__st__0 | CC-MAIN-2016-22 | refinedweb | 148 | 73.98 |
set_needs_display() explained
Hello everybody
I try to get to know pythonistas ui-module. I would like to code the ui manually, so far I refuse editors, because I think I'd miss some learning. And I need some. Please :)
Anyway, my try is a "Merrills" -game. The board is the ui.View.
I'm struggling to put the players pieces on it. Following is the shortened code as it is now. A Player-class should paint the bricks.
I made it also ui.View because otherwise I couldn't make it shown.
But that brings other problems, like that the touch from the root ui.View Board doesn't work anymore where Player exist etc.
So, I should probably have another way. Because of this I'm asking generally how to use set_needs_display, I think there could be the trick.
Hoping anyone understands somthing of my request :P
Thank you very much in advance :)
import ui class Player(ui.View): def __init__(self, pos, color): #self.bring_to_front() print(pos) self.xpos, self.ypos = pos self.color = color def draw(self): ui.set_color(self.color) self.piece = ui.Path.oval(self.xpos, self.ypos, 50, 50) self.piece.fill() class Board(ui.View): def __init__(self): self.height, self.width = 500, 500 self.background_color = 0.8, 0.7, 0.5 self.name = "Mill" self.positions = [] def draw(self): ui.set_color('black') upper = int(self.width//2-50) lower = int(self.width//4-25) for x, size in zip(range(upper, -1, -lower), range(lower, 501, upper)): if x == 0: x += 20 size -= 40 rect = ui.Path.rect(x, x, size, size) rect.line_width = 10 rect.stroke() def strokeLine(fx, fy, tx, ty): line = ui.Path() line.move_to(fx, fy) line.line_to(tx, ty) line.line_width = 5 line.stroke() strokeLine(self.width/2, 20, self.width/2, upper) strokeLine(self.width/2, self.height-20, self.width/2, upper+lower) strokeLine(20, self.height/2, upper, self.height/2) strokeLine(self.width-20, self.height/2, upper+lower, self.height/2) def touch_began(self, touch): black = Player(touch.location, 'black') self.add_subview(black) v = Board() v.present('sheet')
As your Player is an ui.View, it needs to be presented to be visible.
I'm sorry the code I've pasted is not a working one: But the ui.View (Player) is already "visible", it takes place in the upper left corner of the board. The board can't detect touch event at this place, and the touch events it can, won't show up because these coordinates are out of Players area.
When I call
white = Player((5,5) ,'white') v.add_subview(white)
it shows up, visible. The call with touch.location doesn't work, my bad. I know :( So here I am and need another way, exactly because of this - Players view covers Board's view and the latter can't detect touch events anymore ¯\_(ツ)_/¯ I don't see how .present() helps, I even tried and it produces an error (view already presented.). Thank you
@sGiForceOne, you should not need set_needs_display. present() is only needed once, for the underlying view, which I understand is your board.
For catching the touches, you can either:
- do it at the piece (Player) level and, if needed, use ui.convert_point() to convert from piece coordinates to board coordinates, or
- set touch_enabled = False on the piece to make it "transparent" to touches.
Here is the code to place the players. Moved the draw part of the player code to board and call "set_needs_display" in touch function.
Also added code to find the coordinates of the players and to validate the positions.
import ui from math import floor class Player(object): def __init__(self, i, j, color): self.i, self.j = i, j self.xpos, self.ypos = self.i*60+5, self.j*60+5 self.color = color self.piece = None class Board(ui.View): def __init__(self): self.height, self.width = 400, 400 self.background_color = 0.8, 0.7, 0.5 self.name = "Mill" self.toggle = True self.positions = [] self.players = [] self.valid_positions = set([(0,0), (3,0), (6,0), (1,1), (3,1), (5,1), (2,2), (3,2), (4,2), (0,3),(1,3),(2,3),(4,3),(5,3),(6,3), (2,4), (3,4), (4,4), (1,5), (3,5), (5,5), (0,6), (3,6), (6,6)]) self.occupied = set() def draw(self): ui.set_color('black') for i in range(3): rect = ui.Path.rect(i*60+20, i*60+20, (3-i)*120, (3-i)*120) rect.line_width = 10 rect.stroke() def strokeLine(fx, fy, tx, ty): line = ui.Path() line.move_to(fx, fy) line.line_to(tx, ty) line.line_width = 5 line.stroke() strokeLine(200, 20, 200, 140) strokeLine(200, 260, 200, 380) strokeLine(20, 200, 140, 200) strokeLine(260, 200, 380, 200) for p in self.players: ui.set_color(p.color) p.piece = ui.Path.oval(p.xpos, p.ypos, 30, 30) p.piece.fill() def touch_began(self, touch): x,y = touch.location i = (floor(x)//60) j = (floor(y)//60) if (i,j) in self.valid_positions and (i,j) not in self.occupied: self.occupied.add((i,j)) color = 'black' if self.toggle else 'gray' self.toggle = not self.toggle player = Player(i, j, color) self.players.append(player) self.set_needs_display() v = Board() v.present('sheet')
Thank you very much!
I wasn't checking for a few days, now I have even someone writing code for me, wasn't intended by request - @abcabc thank you very much for your effort, you shouldn't have :)
also a big thank you @mikael! Made things clearer.
I think I get it from here. As I'm already thanking, some heartful of thankyous also to @omz for the great Pythonista app!
❤ | https://forum.omz-software.com/topic/3813/set_needs_display-explained/6 | CC-MAIN-2019-39 | refinedweb | 963 | 72.02 |
Having had to switch gears from JS to AS to solve a database integration problem, I could use some help getting specific XMP out of my AI files via AppleScript. Extracted this from the InDesign forums... doesn't work for illustrator (gives error "expected expression but found property")
tell links
set InhouseClient to get property link xmp namespace "" path "Client"
end tell
I know I can dump the whole XMP string with:
set XMPStringData to (get XMP string of current document)
but I'd rather do something like the first example. If I were in JS, I'd use
var property = myXmp.getProperty(propertyuri, propertyname, XMPConst.STRING);
The syntax looks pretty close to the first AppleScript, but why is it choking on the "property" when inside a "tell Illustrator" when is works inside a "tell InDesign"?
Have you looked at the possibility of using the AS "do javascript" command? From the manual
do javascript
Executes a JavaScript script and returns the result of execution.
I am considering this, but want to avoid as many moving peices as possible. I'm using FileMakerPro to create a calculation-defined AppleScript to extract XMP from Adobe CS files. I could pass parameters from the AS to the JS and back, but doing it all in AS would streamline both the development and (possibly/hopefully) shorten the run time. Preliminary tests using AS just to get the filepaths of the files I want from an irregular filesystem is already taking 10+ hours. I can call the XMP extract as a separate process, but expect similar time to run. Fewer moving parts = less time and more reliability.
Might have to do it this way though. In fact, may want to. I suspect (probably erroneously) that JS will execute faster than AS, so if I have JS do the hard work of the lookup... hmmm, don't know how to test that to get real metrics.
Anyone know if Finder (via AS) can extract XMP from AI files without telling illustrator to open the file first? FMP needs an add-on to do it (I think).
Thanks,
Alex
I think you are in the wrong forum for this kind of scripting… I would NOT want to be opening AI files to extract XMP metadata. Using Adobe's ExtendScript I would either script Bridge or load the ESTK's external XMP library… This would be much much faster. That said FMP is AppleScript friendly ( wish I had it ). IMO you want to be looking at AppleScript's do shell script command this will blow Finder's little cotton socks off… You should take a look at ExifTool ( a great utility )…
and ask your question either here:
or iew=discussions&tagSet=1044#/
I would want finding files done in seconds/minutes NOT in hours? Do tell us more…?
Thanks for the suggestions Mark.
Here's the situation. We have approx 1TB of various files on multiple partitions of a server. I'm creating a library (FileMakerPro) to use as a search engine for specific AI files and associated metadata scattered throughout the server. We have a pattern of folders where the AI files I need live (jobFolder/Graphics/filename.AI), but these are nested in a variety of places.
I'm using set folderList to (folders of entire contents of (startFolder) whose name is "Graphics") as alias list to get a list of eligible folders to search, then checking to see if AI files are present, then adding the AI files to a list of file paths. This list is the source for other scripts to be called by Filemaker to extract the custom XMP, find and link to the preview (already generated in a separate folder).
Current times to ingest small sets of graphic paths using this method:
Set 1, found 160 eligible folders with 1600 files in 44 minutes, or 17 seconds per folder/1.6 seconds per graphic.
Set 2, found 310 eligible folders with 3800 files in 82 minutes, or 16 seconds per folder/1.3 seconds per graphic.
Set 3, needs to crawl through about 8 times the number of folders, and can't finish before the server times me out (I pushed the AppleScript timeout to 20 hours, so that's not the problem).
In additon to metadata that has already been defined, I want to capture the full-text of the graphics in the DB for search purposes... so will have to open each file anyway to do that (unless there's another way).
Thanks,
Alex
Alex, you are using a combo of 2 AppleScript commands that are easy to write but the trade off is they are notoriously slow to execute… entire contents in finder takes an age plus the whose filtering you rearly need to swap this out for a little shell… do shell script has a memory overhead so call it as few times as possible but once called its fast… Do you need to locate just *.ai files in grahics folders or could you generic search *.ai?
our folder heirarcy Jobfolder/Graphics includes subfolders /Archive and /JPEGS (among others).
/Graphics contains a list of AI files sequentially numbered with a file iteration identifier letter, (e.g., graphic001a.ai, graphic002f.ai, graphic003b.ai), representing the most current iteration of every graphic.
/Graphics/Archive contains all prior versions(e.g., graphic001.ai, graphic002.ai, graphic002a.ai, graphic002b.ai, etc.).
Finding *.ai would bring in many undesireable filenames.
/Graphics/*.ai is what I need.
It's been some time since I last used AppleScript & Shell but does this work to find your folders?
set This_Folder to (choose folder with prompt "Select your top level folder…" without invisibles)
set Folder_List to paragraphs of (do shell script "mdfind -onlyin " & quoted form of POSIX path of This_Folder & " 'kMDItemKind == \"Folder\" && kMDItemDisplayName == \"Graphics'")
Runs, but returns {} on my test folder that contains several "Graphics" folders in different levels of directories.
Having done NO reading on shell yet....
What version of the OS are you running…? My AppleScript has become a little rusty… There are numberous ways to do what you want but you would be better served asking where I showed you… Basically you just need pipe two commands find | filter…
Thanks Mark, I'll go looking there later today for a primer on shell.
I'm on 10.7.3
The above worked fine here listing all folders on my startup drive… Im at home so I can't check with mounted share… Some OS versions don't like the escaped quoting ie the \" but its best to get served by those who are up to date on AppleScript… If ExifTool works for you then you would find | filter | exiftool. Anyhows Im sure you will find a much faster solution than what you currently have…
Oh my lord working with shell is FAST!!!
Working on a remote server, a little applescript calling a shell script is capturing the path for 20,000+ files in 2000+ jobs in a little over 5 minutes to ingest into my FMPro database. The applescript alone would have taken 30+ hours, and was getting interrupted by server time-outs.
Mark's "mdfind" wasn't working for me, probably because I was defaulting to a different shell script language. What is working for me:
set fileString to (do shell script "find " & quoted form of searchPath & " -maxdepth 1 -iname *.ai")
which writes a list of off the paths to AI files found within the searchPath passed to the shell script.
Next step is to add a xmp lookup subroutine using EXIFTool... initial tests tell me this is the easy part.
Signed,
Recovering GUI Addict.
I did say it would blow finder's little cotton socks off… Glad you got it working…
Hi
I just saved each layer of text to pdf and then saved each pdf to xml, no scripting involved.
Ali | http://forums.adobe.com/message/4611619 | CC-MAIN-2014-15 | refinedweb | 1,311 | 68.81 |
NUMA(3) Linux Programmer's Manual NUMA(3)
numa - NUMA policy library
#include <numa.h> cc ... -lnuma int numa_available(void); int numa_max_possible_node(void); int numa_num_possible_nodes(); int numa_max_node(void); int numa_num_configured_nodes(); struct bitmask *numa_get_mems_allowed(void); int numa_num_configured_cpus(void); struct bitmask *numa_all_nodes_ptr; struct bitmask *numa_no_nodes_ptr; struct bitmask *numa_all_cpus_ptr; int numa_num_task_cpus(); int numa_num_task_nodes(); int numa_parse_bitmap(char *line , struct bitmask *mask); struct bitmask *numa_parse_nodestring(const char *string); struct bitmask *numa_parse_nodestring_all(const char *string); struct bitmask *numa_parse_cpustring(const char *string); struct bitmask *numa_parse_cpustring_all(const char *string); long numa_node_size(int node, long *freep); long long numa_node_size64(int node, long long *freep); int numa_preferred(void); void numa_set_preferred(int node); int numa_get_interleave_node(void); struct bitmask *numa_get_interleave_mask(void); void numa_set_interleave_mask(struct bitmask *nodemask); void numa_interleave_memory(void *start, size_t size, struct bitmask *nodemask); void numa_bind(struct bitmask *nodemask); void numa_set_localalloc(void); void numa_set_membind(struct bitmask *nodemask); struct bitmask *numa_get_membind(void); void *numa_alloc_onnode(size_t size, int node); void *numa_alloc_local(size_t size); void *numa_alloc_interleaved(size_t size); void *numa_alloc_interleaved_subset(size_t size, struct bitmask *nodemask); void *numa_alloc(size_t size); void *numa_realloc(void *old_addr, size_t old_size, size_t new_size); void numa_free(void *start, size_t size); int numa_run_on_node(int node); int numa_run_on_node_mask(struct bitmask *nodemask); int numa_run_on_node_mask_all(struct bitmask *nodemask); struct bitmask *numa_get_run_node_mask(void); void numa_tonode_memory(void *start, size_t size, int node); void numa_tonodemask_memory(void *start, size_t size, struct bitmask *nodemask); void numa_setlocal_memory(void *start, size_t size); void numa_police_memory(void *start, size_t size); void numa_set_bind_policy(int strict); void numa_set_strict(int strict); int numa_distance(int node1, int node2); int numa_sched_getaffinity(pid_t pid, struct bitmask *mask); int numa_sched_setaffinity(pid_t pid, struct bitmask *mask); int numa_node_to_cpus(int node, struct bitmask *mask); void numa_node_to_cpu_update(); int numa_node_of_cpu(int cpu); struct bitmask *numa_allocate_cpumask(); void numa_free_cpumask(); struct bitmask *numa_allocate_nodemask(); void numa_free_nodemask(); struct bitmask *numa_bitmask_alloc(unsigned int n); struct bitmask *numa_bitmask_clearall(struct bitmask *bmp); struct bitmask *numa_bitmask_clearbit(struct bitmask *bmp, unsigned int n); int numa_bitmask_equal(const struct bitmask *bmp1, const struct bitmask *bmp2); void numa_bitmask_free(struct bitmask *bmp); int numa_bitmask_isbitset(const struct bitmask *bmp, unsigned int n); unsigned int numa_bitmask_nbytes(struct bitmask *bmp); struct bitmask *numa_bitmask_setall(struct bitmask *bmp); struct bitmask *numa_bitmask_setbit(struct bitmask *bmp, unsigned int n); void copy_bitmask_to_nodemask(struct bitmask *bmp, nodemask_t *nodemask) void copy_nodemask_to_bitmask(nodemask_t *nodemask, struct bitmask *bmp) void copy_bitmask_to_bitmask(struct bitmask *bmpfrom, struct bitmask *bmpto) unsigned int numa_bitmask_weight(const struct bitmask *bmp ) int numa_move_pages(int pid, unsigned long count, void **pages, const int *nodes, int *status, int flags); int numa_migrate_pages(int pid, struct bitmask *fromnodes, struct bitmask *tonodes); void numa_error(char *where); extern int numa_exit_on_error; extern int numa_exit_on_warn; void numa_warn(int number, char *where, ...); task is currently executing), or allocation only on specific nodes (i.e., allocate on some subset of the available nodes). It is also possible to bind tasks to specific nodes. Numa memory allocation policy may be specified as a per-task attribute, that is inherited by children tasks and processes, or as an attribute of a range of process virtual address space. Numa memory policies specified for a range of virtual address space are shared by all tasks in the process. Furthermore, memory policies specified for a range of a shared memory attached using shmat(2) or mmap(2) from shmfs/hugetlbfs are shared by all processes that attach to that region. Memory policies for shared disk backed file mappings are currently ignored. The default memory allocation policy for tasks and all memory range is local allocation. This assumes that no ancestor has installed a non-default policy. For setting a specific policy globally for all memory allocations in a process and its children it is easiest to start it with the numactl(8) utility. For more finegrained policy inside an application this library can be used.. A node can contain multiple CPUs. Caches are ignored for this definition. Most functions in this library are only concerned about numa nodes and their memory. The exceptions to this are: numa_node_to_cpus(), numa_node_to_cpu_update(), numa_node_of_cpu(), numa_bind(), numa_run_on_node(), numa_run_on_node_mask(), numa_run_on_node_mask_all(), and numa_get_run_node_mask(). These functions deal with the CPUs associated with numa nodes. See the descriptions below for more information. Some of these functions accept or return a pointer to struct bitmask. A struct bitmask controls a bit map of arbitrary length containing a bit representation of nodes. The predefined variable numa_all_nodes_ptr points to a bit mask that has all available nodes set; numa_no_nodes_ptr points to the empty set. Before any other calls in this library can be used numa_available() must be called. If it returns -1, all other functions in this library are undefined. numa_max_possible_node() returns the number of the highest possible node in a system. In other words, the size of a kernel type nodemask_t (in bits) minus 1. This number can be gotten by calling numa_num_possible_nodes() and subtracting 1. numa_num_possible_nodes() returns the size of kernel's node mask (kernel type nodemask_t). In other words, large enough to represent the maximum number of nodes that the kernel can handle. This will match the kernel's MAX_NUMNODES value. This count is derived from /proc/self/status, field Mems_allowed. numa_max_node() returns the highest node number available on the current system. (See the node numbers in /sys/devices/system/node/ ). Also see numa_num_configured_nodes(). numa_num_configured_nodes() returns the number of memory nodes in the system. This count includes any nodes that are currently disabled. This count is derived from the node numbers in /sys/devices/system/node. (Depends on the kernel being configured with /sys (CONFIG_SYSFS)). numa_get_mems_allowed() returns the mask of nodes from which the process is allowed to allocate memory in it's current cpuset context. Any nodes that are not included in the returned bitmask will be ignored in any of the following libnuma memory policy calls. numa_num_configured_cpus() returns the number of cpus in the system. This count includes any cpus that are currently disabled. This count is derived from the cpu numbers in /sys/devices/system/cpu. If the kernel is configured without /sys (CONFIG_SYSFS=n) then it falls back to using the number of online cpus. numa_all_nodes_ptr points to a bitmask that is allocated by the library with bits representing all nodes on which the calling task may allocate memory. This set may be up to all nodes on the system, or up to the nodes in the current cpuset. The bitmask is allocated by a call to numa_allocate_nodemask() using size numa_max_possible_node(). The set of nodes to record is derived from /proc/self/status, field "Mems_allowed". The user should not alter this bitmask. numa_no_nodes_ptr points to a bitmask that is allocated by the library and left all zeroes. The bitmask is allocated by a call to numa_allocate_nodemask() using size numa_max_possible_node(). The user should not alter this bitmask. numa_all_cpus_ptr points to a bitmask that is allocated by the library with bits representing all cpus on which the calling task may execute. This set may be up to all cpus on the system, or up to the cpus in the current cpuset. The bitmask is allocated by a call to numa_allocate_cpumask() using size numa_num_possible_cpus(). The set of cpus to record is derived from /proc/self/status, field "Cpus_allowed". The user should not alter this bitmask. numa_num_task_cpus() returns the number of cpus that the calling task is allowed to use. This count is derived from the map /proc/self/status, field "Cpus_allowed". Also see the bitmask numa_all_cpus_ptr. numa_num_task_nodes() returns the number of nodes on which the calling task is allowed to allocate memory. This count is derived from the map /proc/self/status, field "Mems_allowed". Also see the bitmask numa_all_nodes_ptr. numa_parse_bitmap() parses line , which is a character string such as found in /sys/devices/system/node/nodeN/cpumap into a bitmask structure. The string contains the hexadecimal representation of a bit map. The bitmask may be allocated with numa_allocate_cpumask(). Returns 0 on success. Returns -1 on failure. This function is probably of little use to a user application, but it is used by libnuma internally. numa_parse_nodestring() parses a character string list of nodes into a bit mask. The bit mask is allocated by numa_allocate_nodemask(). The string is a comma-separated list of node numbers or node ranges. A leading ! can be used to indicate "not" this list (in other words, all nodes except this list), and a leading + can be used to indicate that the node numbers in the list are relative to the task's cpuset. The string can be "all" to specify all ( numa_num_task_nodes() ) nodes. Node numbers are limited by the number in the system. See numa_max_node() and numa_num_configured_nodes(). Examples: 1-5,7,10 !4-5 +0-3 If the string is of 0 length, bitmask numa_no_nodes_ptr is returned. Returns 0 if the string is invalid. numa_parse_nodestring_all() is similar to numa_parse_nodestring , but can parse all possible nodes, not only current nodeset. numa_parse_cpustring() parses a character string list of cpus into a bit mask. The bit mask is allocated by numa_allocate_cpumask(). The string is a comma-separated list of cpu numbers or cpu ranges. A leading ! can be used to indicate "not" this list (in other words, all cpus except this list), and a leading + can be used to indicate that the cpu numbers in the list are relative to the task's cpuset. The string can be "all" to specify all ( numa_num_task_cpus() ) cpus. Cpu numbers are limited by the number in the system. See numa_num_task_cpus() and numa_num_configured_cpus(). Examples: 1-5,7,10 !4-5 +0-3 Returns 0 if the string is invalid. numa_parse_cpustring_all() is similar to numa_parse_cpustring , but can parse all possible cpus, not only current cpuset.. numa_preferred() returns the preferred node of the current task. This is the node on which the kernel preferably allocates memory, unless some other policy overrides this. numa_set_preferred() sets the preferred node for the current task to node. The system will attempt to allocate memory from the preferred node, but will fall back to other nodes if no memory is available on the the preferred node. Passing a node of -1 argument specifies local allocation and is equivalent to calling numa_set_localalloc(). numa_get_interleave_mask() returns the current interleave mask if the task's memory allocation policy is page interleaved. Otherwise, this function returns an empty mask. numa_set_interleave_mask() sets the memory interleave mask for the current task. numa_interleave_memory() interleaves size bytes of memory page by page from start on nodes specified in nodemask. The size argument will be rounded up to a multiple of the system page size. If nodemask contains nodes that are externally denied to this process, this call will fail. This is a lower level function to interleave allocated but not yet faulted_bind() binds the current task tasks should be bound to individual CPUs inside nodes consider using numa_node_to_cpus and the sched_setaffinity(2) syscall. numa_set_localalloc() sets the memory allocation policy for the calling task to local allocation. In this mode, the preferred node for memory allocation is effectively the node where the task is executing at the time of a page allocation. numa_set_membind() sets the memory allocation mask. The task will only allocate memory from the nodes set in nodemask. Passing an empty nodemask or a nodemask that contains nodes other than those in the mask returned by numa_get_mems_allowed() will result in an error. numa_get_membind() returns the mask of nodes from which memory can currently be allocated. If the returned mask is equal to numa_all_nodes, then memory allocation is allowed from all nodes. numa_alloc_onnode() allocates memory on a specific node. The size argument will be rounded up to a multiple of the system page size. if the specified node is externally denied to this process, this call will fail. This function is relatively slow compared to the malloc(3), family of functions. The memory must be freed with numa_free(). On errors NULL is returned. numa_alloc_local() allocates size bytes of memory on the local node. The size argument will be rounded up to a multiple of the system page size. This function is relatively slow compared to the malloc(3) family of functions. The memory must be freed with numa_free(). On errors NULL is returned.() attempts to allocate size bytes of memory page interleaved on all nodes. The size argument will be rounded up to a multiple of the system page size. The nodes on which a process is allowed to allocate memory may be constrained externally. If this is the case, this function may fail. This function is relatively slow compare to malloc(3), family of functions and should only be used for large areas consisting of multiple pages. The interleaving works at page level and will only show an effect when the area is large. The allocated memory must be freed with numa_free(). On error, NULL is returned. numa_alloc() allocates size bytes of memory with the current NUMA policy. The size argument will be rounded up to a multiple of the system page size. This function is relatively slow compare to the malloc(3) family of functions. The memory must be freed with numa_free(). On errors NULL is returned. numa_realloc() changes the size of the memory area pointed to by old_addr from old_size to new_size. The memory area pointed to by old_addr must have been allocated with one of the numa_alloc* functions. The new_size will be rounded up to a multiple of the system page size. The contents of the memory area will be unchanged to the minimum of the old and new sizes; newly allocated memory will be uninitialized. The memory policy (and node bindings) associated with the original memory area will be preserved in the resized area. For example, if the initial area was allocated with a call to numa_alloc_onnode(), then the new pages (if the area is enlarged) will be allocated on the same node. However, if no memory policy was set for the original area, then numa_realloc() cannot guarantee that the new pages will be allocated on the same node. On success, the address of the resized area is returned (which might be different from that of the initial area), otherwise NULL is returned and errno is set to indicate the error. The pointer returned by numa_realloc() is suitable for passing to numa_free(). numa_free() frees size bytes of memory starting at start, allocated by the numa_alloc_* functions above. The size argument will be rounded up to a multiple of the system page size. numa_run_on_node() runs the current task task and its children only on nodes specified in nodemask. They will not migrate to CPUs of other nodes until the node affinity is reset with a new call to numa_run_on_node_mask() or numa_run_on_node(). Passing numa_all_nodes permits the kernel to schedule on all nodes again. On success, 0 is returned; on error -1 is returned, and errno is set to indicate the error. numa_run_on_node_mask_all() runs the current task and its children only on nodes specified in nodemask like numa_run_on_node_mask but without any cpuset awareness. numa_get_run_node_mask() returns a mask of CPUs on which the current task is allowed to run._set_bind_policy() specifies whether calls that bind memory to a specific node should use the preferred policy or a strict policy. The preferred policy allows the kernel_get_interleave_node() is used by libnuma internally. It is probably not useful for user applications. It uses the MPOL_F_NODE flag of the get_mempolicy system call, which is not intended for application use (its operation may change or be removed altogether in future kernel versions). See get_mempolicy(2). numa_pagesize() returns the number of bytes in page. This function is simply a fast alternative to repeated calls to the getpagesize system call. See getpagesize(2). numa_sched_getaffinity() retrieves a bitmask of the cpus on which a task may run. The task is specified by pid. Returns the return value of the sched_getaffinity system call. See sched_getaffinity(2). The bitmask must be at least the size of the kernel's cpu mask structure. Use numa_allocate_cpumask() to allocate it. Test the bits in the mask by calling numa_bitmask_isbitset(). numa_sched_setaffinity() sets a task's allowed cpu's to those cpu's specified in mask. The task is specified by pid. Returns the return value of the sched_setaffinity system call. See sched_setaffinity(2). You may allocate the bitmask with numa_allocate_cpumask(). Or the bitmask may be smaller than the kernel's cpu mask structure. For example, call numa_bitmask_alloc() using a maximum number of cpus from numa_num_configured_cpus(). Set the bits in the mask by calling numa_bitmask_setbit(). numa_node_to_cpus() converts a node number to a bitmask of CPUs. The user must pass a bitmask structure with a mask buffer long enough to represent all possible cpu's. Use numa_allocate_cpumask() to create it. If the bitmask is not long enough errno will be set to ERANGE and -1 returned. On success 0 is returned. numa_node_to_cpu_update() Mark cpus bitmask of all nodes stale, then get the latest bitmask by calling numa_node_to_cpus() This allows to update the libnuma state after a CPU hotplug event. The application is in charge of detecting CPU hotplug events. numa_node_of_cpu() returns the node that a cpu belongs to. If the user supplies an invalid cpu errno will be set to EINVAL and -1 will be returned. numa_allocate_cpumask () returns a bitmask of a size equal to the kernel's cpu mask (kernel type cpumask_t). In other words, large enough to represent NR_CPUS cpus. This number of cpus can be gotten by calling numa_num_possible_cpus(). The bitmask is zero-filled. numa_free_cpumask frees a cpumask previously allocate by numa_allocate_cpumask. numa_allocate_nodemask() returns a bitmask of a size equal to the kernel's node mask (kernel type nodemask_t). In other words, large enough to represent MAX_NUMNODES nodes. This number of nodes can be gotten by calling numa_num_possible_nodes(). The bitmask is zero- filled. numa_free_nodemask() frees a nodemask previous allocated by numa_allocate_nodemask(). numa_bitmask_alloc() allocates a bitmask structure and its associated bit mask. The memory allocated for the bit mask contains enough words (type unsigned long) to contain n bits. The bit mask is zero- filled. The bitmask structure points to the bit mask and contains the n value. numa_bitmask_clearall() sets all bits in the bit mask to 0. The bitmask structure points to the bit mask and contains its size ( bmp ->size). The value of bmp is always returned. Note that numa_bitmask_alloc() creates a zero-filled bit mask. numa_bitmask_clearbit() sets a specified bit in a bit mask to 0. Nothing is done if the n value is greater than the size of the bitmask (and no error is returned). The value of bmp is always returned. numa_bitmask_equal() returns 1 if two bitmasks are equal. It returns 0 if they are not equal. If the bitmask structures control bit masks of different sizes, the "missing" trailing bits of the smaller bit mask are considered to be 0. numa_bitmask_free() deallocates the memory of both the bitmask structure pointed to by bmp and the bit mask. It is an error to attempt to free this bitmask twice. numa_bitmask_isbitset() returns the value of a specified bit in a bit mask. If the n value is greater than the size of the bit map, 0 is returned. numa_bitmask_nbytes() returns the size (in bytes) of the bit mask controlled by bmp. The bit masks are always full words (type unsigned long), and the returned size is the actual size of all those words. numa_bitmask_setall() sets all bits in the bit mask to 1. The bitmask structure points to the bit mask and contains its size ( bmp ->size). The value of bmp is always returned. numa_bitmask_setbit() sets a specified bit in a bit mask to 1. Nothing is done if n is greater than the size of the bitmask (and no error is returned). The value of bmp is always returned. copy_bitmask_to_nodemask() copies the body (the bit map itself) of the bitmask structure pointed to by bmp to the nodemask_t structure pointed to by the nodemask pointer. If the two areas differ in size, the copy is truncated to the size of the receiving field or zero- filled. copy_nodemask_to_bitmask() copies the nodemask_t structure pointed to by the nodemask pointer to the body (the bit map itself) of the bitmask structure pointed to by the bmp pointer. If the two areas differ in size, the copy is truncated to the size of the receiving field or zero-filled. copy_bitmask_to_bitmask() copies the body (the bit map itself) of the bitmask structure pointed to by the bmpfrom pointer to the body of the bitmask structure pointed to by the bmpto pointer. If the two areas differ in size, the copy is truncated to the size of the receiving field or zero-filled. numa_bitmask_weight() returns a count of the bits that are set in the body of the bitmask pointed to by the bmp argument. numa_move_pages() moves a list of pages in the address space of the currently executing or current process. It simply uses the move_pages system call. pid - ID of task. If not valid, use the current task. count - Number of pages. pages - List of pages to move. nodes - List of nodes to which pages can be moved. status - Field to which status is to be returned. flags - MPOL_MF_MOVE or MPOL_MF_MOVE_ALL See move_pages(2). numa_migrate_pages() simply uses the migrate_pages system call to cause the pages of the calling task, or a specified task, to be migated from one set of nodes to another. See migrate_pages(2). The bit masks representing the nodes should be allocated with numa_allocate_nodemask() , or with numa_bitmask_alloc() using an n value returned from numa_num_possible_nodes(). A task's current node set can be gotten by calling numa_get_membind(). Bits in the tonodes mask can be set by calls to numa_bitmask_setbit(). numa_error() is a libnuma internal function that can be overridden by the user program. This function is called with a char * argument when a libnuma function fails. Overriding the library internal definition makes it possible to specify a different error handling strategy when a libnuma function fails. It does not affect numa_available(). The numa_error() function defined in libnuma prints an error on stderr and terminates the program if numa_exit_on_error is set to a non-zero value. The default value of numa_exit_on_error is zero. numa_warn() is a libnuma internal. numa_warn exits the program when numa_exit_on_warn is set to a non- zero value. The default value of numa_exit_on_warn is zero.
Binaries that were compiled for libnuma version 1 need not be re- compiled to run with libnuma version 2. Source codes written for libnuma version 1 may be re-compiled without change with version 2 installed. To do so, in the code's Makefile add this option to CFLAGS: -DNUMA_VERSION1_COMPATIBILITY
numa_set_bind_policy and numa_exit_on_error are process global. The other calls are thread safe.
Copyright 2002, 2004, 2007, 2008 Andi Kleen, SuSE Labs. libnuma is under the GNU Lesser General Public License, v2.1.
get_mempolicy(2), set_mempolicy(2), getpagesize(2), mbind(2), mmap(2), shmat(2), numactl(8), sched_getaffinity(2) sched_setaffinity(2) move_pages(2) migrate_pages 2020-08-13. (At that time, the date of the most recent commit that was found in the repos‐ itory was 2020-06 SuSE Labs December 2007 NUMA(3)
Pages that refer to this page: get_mempolicy(2), mbind(2), migrate_pages(2), move_pages(2), set_mempolicy(2), numa(7), numastat(8) | https://man7.org/linux/man-pages/man3/numa.3.html | CC-MAIN-2020-40 | refinedweb | 3,795 | 56.66 |
Back in the October 2007 issue of MSDN Magazine, we published an article on the beginning stages of what has become the Task Parallel Library (TPL) that's part of the Parallel Extensions to the .NET Framework. While the core of the library and the principles behind it have remained the same, as with any piece of software in the early stages of its lifecycle, the design changes frequently. In fact, one of the reasons we've put out an early community technology preview (CTP) of Parallel Extensions is to solicit your feedback on the APIs so that we know if we're on the right track, how we may need to change them further, and so forth. Aspects of the API have already changed since the article was published, and here we'll go through the differences.
First, the article refers to System.Concurrency.dll and the System.Concurrency namespace. We've since changed the name of the DLL to System.Threading.dll, and TPL is now contained in two different namespaces, System.Threading and System.Threading.Tasks. AggregateException, the higher-level Parallel class, and Parallel's supporting types (ParallelState and ParallelState<TLocal>) are contained in System.Threading, while all of the lower-level task parallelism types are in System.Threading.Tasks.
Next, the second page of the article states "if any exception is thrown in any of the iterations, ... the first thrown exception is rethrown in the calling thread." The semantics here have changed such that we now have a common exception handling model across all of the Parallel Extensions, including PLINQ. If an exception is thrown, we still cancel all unstarted iterations, but rather than just rethrowing one exception, we bundle all thrown exceptions into an AggregateException container exception and throw that new exception instance. This allows developers to see all errors that occurred, which can be important for reliability. It also preserves the stack traces of the original exceptions.
In the section on aggregation, the article talks about the Parallel.Aggregate API. Since the article was published, we've dropped this method from the Parallel class. Why? Because a) PLINQ already supports parallel aggregation through the ParallelEnumerable.Aggregate extension method, and b) because aggregation can be implemented with little additional effort on top of Parallel.For if PLINQ's support for aggregation isn't enough. Consider the example shown in the article:
int sum = Parallel.Aggregate(0, 100, 0, delegate(int i) { return isPrime(i) ? i : 0; }, delegate(int x, int y) { return x + y; });
int sum = Parallel.Aggregate(0, 100, 0, delegate(int i) { return isPrime(i) ? i : 0; }, delegate(int x, int y) { return x + y; });
We can implement this with PLINQ as follows:
int sum = (from i in ParallelEnumerable.Range(0, 99) where isPrime(i) select i).Sum();
int sum = (from i in ParallelEnumerable.Range(0, 99) where isPrime(i) select i).Sum();
Of course, this benefits from LINQ and PLINQ already supporting a Sum method. We can do the same thing using the Aggregate method for general reduction support:
int sum = (from i in ParallelEnumerable.Range(0, 99) where isPrime(i) select i).Aggregate((x,y) => x+y);
int sum = (from i in ParallelEnumerable.Range(0, 99) where isPrime(i) select i).Aggregate((x,y) => x+y);
If we prefer not to use PLINQ, we can do a similar operation using Parallel.For (in fact, this is very similar to how Parallel.Aggregate was implemented internally):
int sum = 0;Parallel.For(0, 100, () => 0, (i,state)=>{ if (isPrime(i)) state.ThreadLocalState += i;},partialSum => Interlocked.Add(ref sum, partialSum));
int sum = 0;Parallel.For(0, 100, () => 0, (i,state)=>{ if (isPrime(i)) state.ThreadLocalState += i;},partialSum => Interlocked.Add(ref sum, partialSum));
Here, we're taking advantage of the overload of Parallel.For that supports thread-local state. On each thread involved in the Parallel.For loop, the thread-local state is initialized to 0 and is then incremented by the value of every prime number processed by that thread. After the thread has completed processing all iterations it's assigned, it uses an Interlocked.Add call to store the partial sum into the total sum. In fact, if you found yourself needing Aggregate functionality a lot, you could generalize this into your own ParallelAggregate method, something like the following:;}
With this, you can use ParallelAggregate just as is done with Parallel.Aggregate in the article:
int sum = ParallelAggregate(0, 100, 0, delegate(int i) { return isPrime(i) ? i : 0; }, delegate(int x, int y) { return x + y; });
int sum = ParallelAggregate(0, 100, 0, delegate(int i) { return isPrime(i) ? i : 0; }, delegate(int x, int y) { return x + y; });
Moving on to the Task class, we've made some fairly substantial changes to the public facing API. Here's a summary of the differences from what's described in the article:
In addition to changes to the Task class, there have also been changes to the TaskManager class described in the article:
That sums up the changes we've made. Even with these changes, the article should still provide you with a good overview of the library and its intended usage. For more information, check out the documentation that's included with the CTP, and stay tuned to this blog! As mentioned before, we're very interested in your feedback on the API, so please let us know what you think (the good and the bad).
I posted about changes we've made to the Task Parallel Library since we published the MSDN Magazine article
I really don't like the idea of not having constructors. Whenever I want to use a class for the first time, the first thing I look at it is its constructors.
In addition, not having a constructor means we can never inherit from the class.
And for what? Writing "Task.Create" instead of "new Task"?
Thanks for the feedback!
This is a design issue we went back and forth on. We ended up with factory approach for several reasons. One is that we got feedback with the ctor approach that it wasn't clear the tasks were being scheduled when they were constructed, and that doing so went against .NET design guidelines. We explored alternatives, like not scheduling from the ctor and exposing a Start method, but this then required more code to be used to create a task, and it also made futures difficult to work with (it also gave some folks the impression through the API that the same task could be scheduled more than once). We explored a variety of other options (including going with both the factory and the ctor approach), but at least for the CTP, we settled on just using the factory. Are there particular scenarios you find more difficult with the factory approach than with the ctor approach?
As far as the inheritance issue, this was an explicit design choice, as we're trying to keep it a closed system for now. Are there important scenarios you're unable to implement due to this decision?
Question: Why does Future descend from Task, instead of aggregating it?
Sure, the implementation of a future is pretty similar to the implementation of a task, but it seems to me that the usage patterns would be way different. I can't imagine creating a Future and passing it to a method that takes a Task parameter -- they're conceptually different; it wouldn't make sense. It feels like a violation of the Liskov Substitution Principle here, although admittedly, I haven't scrutinized the code yet.
It feels to me like Task should be a sealed class, and so should Future. The intent is for users to extend them via aggregation (passing in a delegate) rather than inheritance. If they have a lot of public methods in common, then make an ITask, so that *if* there's an obscure case where you want to treat them the same, you can. Give them a common base class, if it makes sense for implementation. But it doesn't feel like one should descend from the other.
At that point, my main complaint with the new factory -- its name -- could be addressed. I really like the idea of a factory; I like the idea of constructors doing nothing but constructing, and having a factory method when there's extra work to do (like scheduling a thread). But readability is essential, and I don't think "Create" makes it clear to the reader that any extra work is happening.
For Task, I think the factory method's name should be more descriptive. Perhaps "CreateAndSchedule" (though "schedule" is pretty technical lingo for a library that's supposed to be making this stuff easier), or "CreateAndStart", or even just "Task.Start".
For Future, again, the usage patterns are different. Conceptually, when you create a future, you're saying "here's a question that I will want the answer to later" -- the "and schedule" is implied by the very nature of a future. I think Future's factory method *could* be called "Create" without confusion. Or perhaps something a bit different -- but not necessarily the same name as on Task. Which is another reason they shouldn't descend from each other: it doesn't make sense to force them to have identical factory names.
That's great feedback, Joe, thank you!
re: inheritance vs aggregation
We did strongly consider the aggregation model, where a Future would contain a Task rather than derive from it. And we haven't completely abandoned the idea, so it's good to know that someone might prefer that approach. One thing that drove us away from the aggregation model was performance, with the extra object allocations and increase in instance size representing a non-trivial performance loss. But aside from that, the idea that a Future<T> is a Task that adds a return value was appealing to folks, and that "is a" of course is a typical indication of an inheritance relationship. There also are scenarios where you may want to treat a Future<T> as a Task, such as to wait on it or cancel it (sometimes in concert with other Tasks), though that can also be accomplished by passing around its contained Task or by having Future<T> still derive from a shared base class. At the end of the day, there are pros and cons to both approaches, and for now we felt that the pros to the inheritance model (both from a usability and performance standpoint) won out over the pros of the aggregation model, but this kind of design decision is exactly why we've released an early CTP, to get feedback on it; nothing is set in stone.
re: factory name
Both Task.CreateAndStart and Task.Start were considered, and we opted away from "*Start" because it implied to some folks that the Task was being run immediately (like Thread.Start); "Scheduled" has the problem you mention, in that we don't necessarily want folks to have to think about "scheduling" things. We went with Task.Create for the CTP (you're not only creating a Task instance, but also you're conceptually creating a task), but we'll certainly consider other names moving forward, especially if we get a lot of feedback like yours that Task.Create isn't clear. If you have other suggestions, please do pass those along.
I'm with Joe on this one...
I don't like the idea of a constructor if it is doing something more than just creating an instance of Task. That said, I think the same applies to factory methods. If I was to call Task.Create() I would assume that was exactly what it did - create an instance of Task. The next thing I would look for was to find some way to start/schedule/etc. the Task (not sure of the best nameing however).
Andy
Thanks, Andy. Assuming we were to stick with the factory approach, I'd be interested in hearing any naming suggestions you may have to replace "Create".
The basic issue around naming of {Start, Schedule, Run, Create} etc seems to be caused by the implicit use of a common static TaskManager when a Task.Create() is called. Perhaps it would be a clearer if Task constructors and factory methods only create a Task, and that TaskManager.Run( new Task()) or similar is the default way to add and run a new task.
If people are expecting synchronous or immediate execution, then perhaps add Async to naming.
eg so that simple usages are
Task t = new Task(..)
t.RunAsync()
or
TaskManager.RunAsync(new Task(..))
The idea of a syntactic rewrite of "new X()" to "X.Create()" often comes up. I think that this usage should at least be standardised so that if we ever see Class.Create() we know that is a simple constructor call, nothing more. But I think "new X()" is just fine and avoids confusion in an API.
Perhaps also the common static TaskManager can have a special name so that confusion with instance TaskManagers is avoided.
Thanks, Mike! It's a good suggestion, though it does introduce additional issues. For example, can a task be run twice? Can it be run twice concurrently (through RunAsync)? If so, what happens with a Future<T> (that derives from Task) regarding its Value property (does a second invocation overwrite the first)? How does that affect things like Completed events? Etc. And how should we deal with concurrent access to a Task that may or not have been started yet (e.g. I create a Task, pass it off to several threads, and they all try to run it). Then there are issues around expectations concerning other methods on Task... what if I call Wait on an unstarted Task? And so on. And from a performance perspective, even if we come up with a clean design around this, there may be issues in terms of the cost of tracking all of this (extra space required in the types, interlocked operations used to ensure consistency, etc.) Obviously many if not all of these are issues that could be worked/designed around, but they all start adding levels of complexity that increase the concept count a developer needs to understand (and keep track of) to use the system. My point (and this was largely a random stream of consciousness) is that regardless of name, one of the nice things about the static factory that both constructs and schedules a Task is that it's easy to use (the basic operation is one method call, and is very similar to ThreadPool.QueueUserWorkItem in concept), it eliminates the need to be concerned about the same task being run twice and all of the various ramifications that could result, and it keeps the API a bit simpler in that we don't need to provide support for running a previously created task.
As always, though, we're keeping our minds open, and this kind of dialogue is exactly what we were hoping to get by releasing a CTP (so thank you, thank you to everyone participating). If we get a lot of feedback that this would be to more customers' liking than the current design, we'll definitely take a good, in-depth look at it.
And I'm still interested in other naming suggestions for the-API-currently-known-as Create. ;)
Hmm... seems like you just touched on a good possibility for a name: how about a static Task.Run() method? Creates a new task with the delegate you pass in, and returns the new Task instance.
Of course, a name like Run doesn't make it immediately obvious that there's a return value; so you'd have less-experienced developers calling Task.Run() and ignoring the return value. But you know, I think that's fine. It'd act pretty much the same as QueueUserWorkItem: start this task, and don't bother me about it again.
Actually, if I understand it correctly, tasks can be set up to have cascading cancels, e.g., task A creates task B with a certain flag; then task A gets canceled, which causes task B to automatically be canceled as well. So Task.Run() would be even a little bit cooler than QueueUserWorkItem.
How about Task.Prepare(...); ?
...or Task.CreateAndPrepare(...);?
Task.Run, Task.Prepare, Task.CreateAndPrepare... cool, thanks for the suggestions.
Postback from:
This is an article that goes deeper into parallelism for server applications which use sockets.
I've read some articles about parallel programming with C# in a Spanish magazine "Solo Programadores", but they are in Spanish :-)
I am from Spain. However, I found a very interesting article in Packt Publishing's website:
The author is an Spanish well known writer. It seems he is begining working in english as well.
The article is a chapter of his "C# 2008 and 2005 Threaded Programming" book. Sounds very interesting.
I am currently reading Joe Duffy's "Concurrent Programming on Windows". Highly recommended.
To completement it, I bought "C# 2008 and 2005 Threaded Programming", some reviewers said it had funny examples.
I love exploiting multicore CPUs, it is incredible to see tasks finishing in less time using many cores.
Rednael,
Great post! Your article is an amazing piece of work! Highly recommended. I'll be working with your framework to make some tests.
Omar,
The article you are recommending is also very useful. I am also reading Joe Duffy's book
I found information about Hillar's book in. It seems that he is a very important author in Spanish speaking world. I just know one word in Spanish: "Hola" means "Hello". :-)
I've read the book table of contents and it seems a good work. Worth reading it too.
I think we must read hard to understando multicore programming. Books are great resources and posts like Rednael's, too.
If you would like to receive an email when updates are made to this post, please register here
RSS
Trademarks |
Privacy Statement | http://blogs.msdn.com/pfxteam/archive/2007/11/29/6558543.aspx | crawl-002 | refinedweb | 3,011 | 63.8 |
CodePlexProject Hosting for Open Source Software
I'm going to need a web service for an Orchard site soon, but the access to that web service should be controlled, i.e. client would be required to authorize themselves. How would you go?
I thought about WCF, but besides it being non-trivial to use in Orchard it also seems a bit bloated for me. Considering that I'd like to expose this functionality to clients with any technology I would opt with a RESTful web service anyway, that's why I
think the best option would be to just implement it with standard controllers and actions.
Now here comes security, which is well thought-through in WCF but I'd have to implement some aspects myself. Reading some excellent resources on that topic (e.g.,,) I came to the conclusion that probably my best bet would be to
The new Web API framework will be soon released with MVC 4, I wonder how it would play inside Orchard?
ServiceStack seems interesting, too but I don't think it would be simple to use with Orchard.
What do you think?
Other resources, updated continuously:
Web API ? Maybe Mick could explain how to integrate it into orchard ?
Nick
Further resources:
All in all it seems that the cleanest and possibly the best authentication method would be to use HMAC and then employ Orchard's authorization services for authorization.
While WebAPI seems to be nice with cool features like content negotiation (meaning it serves JSON or XML or any other type, depending what the client accepts) or OData querying it seems there is no built-in solution for stateless authentication-authorization;
one has to implement it on his own.
Also there seems to be no standardized way of dealing with exceptions (although
there are some great tips).
sebastienros wrote:
Nick
Ahem... 'Super Nick'
Off the top of my head, the first thing I did was to get Orcahrd up to MVC 4.0 Version. Next... In the framework I needed to install the AspNetWebApi Nuget package.
Next code changes!
CompositionStrategy.cs -
You need to return a list of Service Controllers. so...
In the Compose method add..
var serviceControllers = BuildBlueprint(features, IsServiceController, BuildServiceControllerBlueprint, excludedTypes);
ServiceControllers = serviceControllers,
Next add a couple methods to the class...
private static bool IsServiceController(Type type) {
return typeof(IHttpController).IsAssignableFrom(type);
}
private static ControllerBlueprint BuildServiceControllerBlueprint(Type type, Feature feature) { var areaName = feature.Descriptor.Extension.Id; var controllerName = type.Name; if (controllerName.EndsWith("Controller")) controllerName = controllerName.Substring(0, controllerName.Length - "Controller".Length); return new ControllerBlueprint { Type = type, Feature = feature, AreaName = areaName, ControllerName = controllerName, }; }
Next to wire that Shiz up!
ShellContainerFactory.cs
Right underneeth you build the Controllers... Add this code...
foreach (var item in blueprint.ServiceControllers) {
var serviceKeyName = (item.AreaName + "/" + item.ControllerName).ToLowerInvariant();
var serviceKeyType = item.Type;
RegisterType(builder, item)
.Keyed<IHttpController>(serviceKeyName)
.Keyed<IHttpController>(serviceKeyType)
.WithMetadata("ControllerType", item.Type)
.InstancePerDependency()
.OnActivating(e => {
var controller = e.Instance as ApiController;
if (controller != null)
{
}
});
}
Next I needed a controller factory....
Create a folder in the framework called WebApi so it goes
Orchard.Framework\WebApi
next create file called.. WebApiHttpControllerFactory.cs and post this code into it..
public class WebApiHttpControllerFactory : IHttpControllerFactory {
private readonly HttpConfiguration _configuration;
public WebApiHttpControllerFactory(HttpConfiguration configuration)
{
_configuration = configuration;
}
/// <summary>
/// Tries to resolve an instance for the controller associated with a given service key for the work context scope.
/// </summary>
/// <typeparam name="T">The type of the controller.</typeparam>
/// <param name="workContext">The work context.</param>
/// <param name="serviceKey">The service key for the controller.</param>
/// <param name="instance">The controller instance.</param>
/// <returns>True if the controller was resolved; false otherwise.</returns>
protected bool TryResolve<T>(WorkContext workContext, object serviceKey, out T instance) {
if (workContext != null && serviceKey != null) {
var key = new KeyedService(serviceKey, typeof(T));
object value;
if (workContext.Resolve<ILifetimeScope>().TryResolveService(key, out value)) {
instance = (T)value;
return true;
}
}
instance = default(T);
return false;
}
public IHttpController CreateController(HttpControllerContext controllerContext, string controllerName)
{
var routeData = controllerContext.RouteData;
// Determine the area name for the request, and fall back to stock orchard controllers
var areaName = routeData.GetAreaName();
// Service name pattern matches the identification strategy
var serviceKey = (areaName + "/" + controllerName).ToLowerInvariant();
// Now that the request container is known - try to resolve the controller information
Meta<Lazy<IHttpController>> info;
var workContext = controllerContext.GetWorkContext();
if (TryResolve(workContext, serviceKey, out info)) {
var type = (Type)info.Metadata["ControllerType"];
controllerContext.ControllerDescriptor =
new HttpControllerDescriptor(_configuration, controllerName, type);
var controller = info.Value.Value;
controllerContext.Controller = controller;
return controller;
}
return null;
}
public void ReleaseController(IHttpController controller) {
}
}
Next you need to hook up the routing!!!...
Create another folder called RouteExtension.cs so like so
Orchard.Framework\WebApi\Extensions
Paste this code in to it...
public static class RouteExtension {
public static string GetAreaName(this IHttpRoute route) {
var routeWithArea = route as IRouteWithArea;
if (routeWithArea != null) {
return routeWithArea.Area;
}
var castRoute = route as Route;
if (castRoute != null && castRoute.DataTokens != null) {
return castRoute.DataTokens["area"] as string;
}
return null;
}
public static string GetAreaName(this IHttpRouteData routeData) {
object area;
if (routeData.Route.DataTokens.TryGetValue("area", out area)) {
return area as string;
}
return GetAreaName(routeData.Route);
}
}
Next its time to hook your controller factory up to Orchard....
Head into the OrchardStarter.cs file
Around line 142... insert this..
var configuration = GlobalConfiguration.Configuration;
GlobalConfiguration.Configuration.ServiceResolver.SetService(typeof(IHttpControllerFactory), new WebApiHttpControllerFactory(configuration));
That should be it..... Welcome to the World of WebApi Asp.net V4!
I did this around 2 months ago... So if I have missed something, let me know!!!.
From: Piedone: Pied
That means there were so many changes between Preview 6 and the rename that you had to add so many modifications?
Other useful resources BTW:
True... But It was more of an alternative... I would modify the core and upgrade to mvc 4 if I were you.
My problem with that (apart from that it's modification to the core :-)) is that I'd also like to publish this module of mine that would use WebAPI and I really don't want every user to request to make this mods too.
I've given this up for now, because updating to MVC 4 sucks :-). There are 3.0 references everywhere (e.g. in all module Web.configs, so I wonder how we'll manage the update, since I guess all the community modules have to be updated as well?), so I stopped
when one error message was just too cryptic for me.
I'll maybe try again when Orchard is already MVC 4, but currently the amount of hacking needed to use Web API outweigh its gains.
Perhaps this may be of some assistance... I haven't tried it myself (it's on my todo list)
I don't know if this uses an older version of WebApi so it may be irrelevant.
Thanks, I've already read it but it doesn't really help. Although my main problem here was the obstacle of updating...
First I'm back to trying to use Web API. Actually apparently it's entirely possible to use it with MVC 3, so there is no need to update to MVC 4, which is a huge bonus :-).
Maybe it's also possible do use Web API without modifying the core? It seems to me that all this could be done with an Autofac module. It won't be the same style like Orchard registers controllers, but it wouldn't require altering the core.
What am I missing?
You are missing that you can assume an upgrade to MVC4. So take it into account. And feel free to modify the core as necessary to have MVC4 and WebApi working. We'll take the changes. Could ship with 1.5. And then don't try to make it a module. If you need
help, well I am in the MVC/WebAPI team, so I should be able to help you, I'll try at least.
That's very cool, I'm glad for your help. Then I'll go on with the integration, The first thing will be to put Nick's code into a fork.
I've made some integration efforts in
this fork. I had to do a lot of copy-pasting, what is downright ugly:
The above would need some refactoring for sure:
Also the class OrchardHttpControllerFactory has a lot common with OrchardControllerFactory, so this has to be refactored too.
The code won't compile, because I couldn't solve the problem with getting the work context for an HttpController. This would need the access to the HttpContext (IWorkContextAccessor.GetContext(HttpContextBase httpContext) requires it) but I've found no way
to get it from HttpControllerContext. This is all in the controller factory.
I wonder, Nick how did you manage to solve this problem with the WorkContext?
Im lookign now...
Onethings I failed to montion is that when using an ApiController... this is what I did.
public interface IVideoEntriesController : IHttpController, IDependency { }
public class VideoEntriesController : ApiController, IVideoEntriesController
{
private readonly IVideoService _videoService;
public VideoEntriesController(IVideoService videoService)
{
_videoService = videoService;
Logger = NullLogger.Instance;
}
public ILogger Logger { get; set; }
Ah!! I think I have noticed I missed something from above....
In ShellRoute.cs, in the method GetRouteData... where the line says this...
routeData.DataTokens["IWorkContextAccessor"] = _workContextAccessor;
you need to do this..
routeData.Values["IWorkContextAccessor"] = _workContextAccessor; //NGM : Added for WebApi
routeData.DataTokens["IWorkContextAccessor"] = _workContextAccessor;
I will keep looking.
BTW... To do routing in your modules... you need to do this...
Create Routes.cs file (in your module)
Add this method...
public static Route MapHttpRoute(string name, string routeTemplate, RouteValueDictionary defaults, RouteValueDictionary constraints, RouteValueDictionary dataTokens) {
return new HttpWebRoute(routeTemplate, HttpControllerRouteHandler.Instance)
{
Defaults = (defaults),
Constraints = (constraints),
DataTokens = (dataTokens)
};
}
So when creating your Route... you need to do this:
new RouteDescriptor {
Priority = 1,
Route = MapHttpRoute(
"DefaultWebApi_Id",
"foo/{id}",
new RouteValueDictionary { {"area", "Foo.Bar"}, {"Controller", "Foo"} },
new RouteValueDictionary (),
new RouteValueDictionary { {"area", "Foo.Bar"}, })
},
To fix the problems with WorkContext.... Add these two methods to WorkContextExtensions.cs
public static WorkContext GetWorkContext(this RequestContext requestContext) {
if (requestContext == null)
return null;
var routeData = requestContext.RouteData;
if (routeData == null || routeData.DataTokens == null)
return null;
object workContextValue;
if (!routeData.DataTokens.TryGetValue("IWorkContextAccessor", out workContextValue)) {
workContextValue = FindWorkContextInParent(routeData);
}
if (!(workContextValue is IWorkContextAccessor))
return null;
var workContextAccessor = (IWorkContextAccessor)workContextValue;
return workContextAccessor.GetContext(requestContext.HttpContext);
}
private static object FindWorkContextInParent(RouteData routeData) {
object parentViewContextValue;
if (!routeData.DataTokens.TryGetValue("ParentActionViewContext", out parentViewContextValue)
|| !(parentViewContextValue is ViewContext)) {
return null;
}
var parentRouteData = ((ViewContext)parentViewContextValue).RouteData;
if (parentRouteData == null || parentRouteData.DataTokens == null)
return null;
object workContextValue;
if (!parentRouteData.DataTokens.TryGetValue("IWorkContextAccessor", out workContextValue)) {
workContextValue = FindWorkContextInParent(parentRouteData);
}
return workContextValue;
}
For this build error
return workContextAccessor.GetContext(controllerContext.HttpContext);
I switched it to
return workContextAccessor.GetContext();
Not sure if that was the right thing to do... but it worked.
Thank you very much again!
Do you know why ApiControllers needed to implement IDependency too? I got it more or less working without this.
In ShellRoute for me control never reaches the lines you mentioned in GetRouteData, since _route.GetRouteData(effectiveHttpContext); returns null. I think something has to be done with route building too. I guess this has to do also with why ApiController
routes are no auto-registered (and route declarations are needed).
The two methods you copied from WorkContextExtensions is the same what it is now :-).
I pushed some changes, but the WorkContext acquisition is still an issue. I don't know where it could be pushed into the route's DataTokens.
Can you add me to the fork? I will apply some changes for ya :)
Wow, of course, I've added you. What I've done till now is pretty much only what you've told me anyway :-).
Okay I have fixed the WorkContext thing, but for somereason its not resolving the Lazy<IHttpController> - I need to have a look at why and how exactly I got it working.
Test Api route is in Experimental.... Url:
I seem to be havign problems pushing to codeplex. I will try again tomorrow.
Very cool! I've tested with a sample ApiController dropped into the Users module but Experimental is fine for this. The module will be removed for 1.5 anyway.
Please also push the changes :-). I see you have issues with pushing the changeset. And I see possibly you've solved it now? :-) You're added to the fork so there should be no problem with authorization...
I hope Sebastien can say something about the problem of duplicates I've mentioned earlier. The worst thing that can happen is the need for adapter classes, what is not a big deal.
I've seen you pushed some changes, awesome! Will take a look at them.? :-)
What is the current status of this?
I would very much like to make use of ASP.NET Web API in Orchard...
Piedone wrote:? :-)
Oh that code was left in there for me to do some debugging. With the updated RC, we need to update the packages again. I believe things have changed again so it might be worth upgrading and fixing those compile issues first.
I will see if I can take a look again this weekend.
Awesome!
2LM: Nick has practically made the usage of Web API already possible. There are a few more steps left, and of course Web API should be released, but after this, with the support of Sebastien, it could become part of Orchard.
That is awesome indeed, seems I found myself a new CMS then :)
Hey All, okay so its done and working.
Get the fork -
load up the project and go to
See the lovely name "Nick" get output to the window.
Let me know if it doesnt work for you. There is some clean up, as I duplicated something twice, so the hit on the controller is quite heavy... but hey... it works!
Thanks! I have just tried to download the latest version, but it seems to be missing quite a lot of module projects and such. Running the project therefor doesn't succeed in "Cooking the Orchard Recipe", and all sorts of .NET Errors occur... Can
you check if the uploaded version is complete please?
Regards,
Andy
Sure, whats it missing? Whats the error messages?
Well, first, for completeness, I downloaded the latest version from,
where I clicked "Download Latest version".
Result was a file called Orchard-66ee5efc1b92.zip, which does seem to be
the latest changeset.
This zip file doesn't contain quite a lot of projects from Orchard.Web\Modules and Orchard.Web\Specs.
When running the website using CTRL-F5, I get the screen to choose website name, SQL type and such,
after which the cooking recipe screen comes. This doesn't seem to complete, as I don't get my new site,
nor any errors, just a directory browse of the Orchard folder.
When then manually surfing to I get .NET errors about jQuery
not being found (.NET, not javascript)...
Sometimes weird things happen with 'get latest version' - do you have Mecurial installed?
Oh are you just getting runtime warnings? You might need to turn off Thrown Exceptions, leave User-Unhandled switched on.
Never mind, I downloaded TortoiseHg and made a clone of and now I have the full repository. The website runs completely, but the aforementioned
makes Orchard return "Not Found".
Anything I need to do aditionally to get it working?
Ok, found it, I needed to enable the "Experimental" module, it works like a charm now. Sorry for the confusion, I'm not at all familiar enough with Orchard I guess (working on it :))
I see that you have implemented your own IRouteProvider, which doesn't seem to make use of (Global)Configuration, or at least, I don't find it immediately.
Where can we define stuff like formatters, message handlers and such then?
Yo okay, so I havent implemented any of that stuff just yet. I have an idea on how to do that as will take a look on how to make it generic and really straight forward, but that part will take me some time, not alot but some.
Ideally you dont want module devlopers dealing with GlobalConfiguration, so the Route stuff was my stab (working stab) at abstracting that away. I may create another level on top so you dont need to do some of the stuff I have done in there.
@Piedone - I am probably going to create a new fork of Orchard and move my changes to a WebApi branch to allow Seb to merge in easily. They shouldnt be on Default.
All Ideas welcome :)
Well, it all depends on whether you want WebAPI to expose the meta and data from Orchard only. If you want to enable module developers to also expose their own meta and data through Web API, or to expose their own Module data in specific formats, then
there should be some way to define it, either directly in GlobalConfiguration or through some abstraction layer (the latter being preferrable if I understand you correctly).
I do fear a bit that abstracting all of this will take away a lot of the flexibility WebAPI has to offer. In my own implementations of WebAPI, I created my own formatter selector, contextual message handlers and so on. Maybe you should elaborate
a bit on your idea on how to make it generic, that way we have an idea on how we can offer you ideas regarding this :)
Anyways, it's absolutely fantastic to see a CMS running WebAPI, I'm having a blast here :D
@Jetski: yeah, go ahead! I would look into a bit of refactoring regarding the various duplications mentioned earlier, so please add me to the fork when you create it. Awesome work BTW!
Couldn't MediaFormatter registrations done through providers, like with routes? The problem is basically the same: something should be set in Global, but modules shouldn't leave their territory.
New fork:
I have created a branch called webapi on that fork.
Could you please add me to the fork?
Damn, now I wanted to examine it but I get a "HTTP Error: 500 (URL Rewrite Module Error.)"... I'll take a look at it later.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://orchard.codeplex.com/discussions/353559 | CC-MAIN-2017-04 | refinedweb | 3,027 | 59.19 |
Contents
So, what's the
agenda?
So why MVC when
ASP.Net behind code was so good?
Problem number 1:- UNIT Testing
Problem 2 :-
The reality of separation of code and UI
Our HERO MVC (Model, view
and controller)
Pre-requisite for MVC
Lab1:-
Creating a simple hello world ASP.NET MVC Application
Video demonstration
for Lab 1
Step1:- Create project
Step 2:- Add controller
Step 3:-
Add View
Step 4:- Run the application
So what's in the next Lab?
Lab2:- Passing data
between controllers and views
Video demonstration
for Lab 2
Step1:- Create project and
set view data
Step 2:- Display view data in
the view.
So what's in the next Lab?
Lab 3:- Creating a simple
model using MVC
Video demonstration
for Lab 3
Step1:- Create a simple class file
Step2:- Define the
controller with action
Step3:- Create
strongly typed view using the class
Step 4 :- Run your application
So what's in the next Lab?
Lab 4:- Creating simple
MVC data entry screen
Video demonstration
for Lab 4
Step1:- Creating your data
entry ASPX page
Step2:- Creating the controller
Step3:- Create
the view to display the customer object
Step 4:- Finally run the project
So what's in the next Lab?
Lab 5:- using HTML
helper to create views faster
Step 1:- Create the Customer class
Step2:-
Creating the input HTML form using helper classes
Step
3:- Create a strong typed view by using the customer class
Step4:- Creating the
controller class.
What's for the second day?
As the article name says learn MVC, so the agenda is simple we are going to
learn ASP.NET MVC ? in 7 days.
The way we will learn MVC in this series of article is by doing Labs, looking at
detail steps of how to achieve those labs and also looking at demonstrative
videos.
This complete article is divided in to 7 days with 42 hands
on labs and every day we will do 6 labs which will help us achieve our goals.
So get ready for day 1. In day1 below is our agenda we will start with
introduction, do a simple hello world and finally in the 6th lab we
will create a simple customer data entry screen using HTML helper classes.
You can watch my .NET
interview questions and answers videos on various sections like WCF, Silver
light, LINQ, WPF, Design patterns, Entity framework etc
I am sure all ASP.NET love the behind code concept. Accepting something new
like MVC will not convince them. So let's analyze the problems with the current
behind code stuff.
When we generally talk about ASP.NET application built on tiered architecture
they are divided in four parts UI (ASPX pages), behind code (ASPX.CS pages),
Middle tier (.NET classes) and finally Data layer.
If you see from the aspect of code distribution major code which has logic is in
the middle tier or in the behind code (APX.CS files). The UI or ASPX files are
HTML files which is more of UI design and data access logic lot of effort on testing the DAL separately. In case you have custom data
access layer it will be still easy to test them as they are simple .NET classes.
There is no logic in testing on ASPX HTML as such it's more of look and feel.
The middle tier is again a simple .NET class like data logic so you can easily
do unit testing using VSTS or NUNIT.
Now comes the most important one the behind code. The behind code has lot of
action and testing them is one of the most important things. The only way to
invoke these codes are by doing manual test. From a longer run perspective this
would not be a great choice.
Even though
always boasted about how the ASP.NET behind code was separate from the UI, in
practical sense it's very difficult to decouple an ASP.NET behind code and do
unit testing on them.
The ASP.NET behind code is completely tied up with ASP.NET Httpcontext object
which makes unit testing very difficult.
Just think how do I unit test the below behind ASP.NET code. How do I create a
Http context object , how do I simulate the sender and eventargs objects of the
button clicks etc.
FYI: - Many developers would talk about mock test, rhino mocks etc but still its
cryptic and the complication increases with session variables, view data
objects, ASP.NET UI controls creating further confusion.
As said previously the ASPX and the ASPX.CS cannot be decoupled in reality
thus reducing reusability. Yes, Microsoft did said first that the behind code is
different and the UI is different but then they are probably separate physical
files only and one cannot just exist without other.
For instance let's say the same button click code when called via HTTP POST
should display using displayinvoice.aspx and when called via HTTP GET should
display in tree view. In other words we would like to reuse the behind code.
Just think how can we do the same using the current behind code.
That's where MVC comes to rescue. The behind code is moved to a simple .NET
class called as lets ensure that you have all the ingredients to
create a MVC application.
. Visual Studio 2010 or the free Visual Web Developer 2010 Express. These
include ASP.NET MVC 2 template by default.
. Visual Studio 2008 SP1 (any edition) or the free Visual Web Developer 2008
Express with SP1. These do not include ASP.NET MVC 2 by default; you must also
download and install ASP.NET MVC 2 from .
So once you have all your pre-requisite its time to start with the first lab.
In this lab we will create a simple hello world program using MVC template.
So we will create a simple controller, attach the controller to simple
index.aspx page and view the display on the browser.
In case you want spend more time with your family rather than reading the
complete article you can watch the below 5 minute youtube video.
Create a new project by selecting the MVC 2 empty web application template as
shown in the below figure.
Once you click ok, you have a readymade structure with appropriate folders where
you can add controllers, models and views.
So let's go and add a new controller as shown in the below figure.
Once you add the new controller you should see some kind of code snippet as
shown in the below snippet.
public class Default1Controller : Controller
{
//
// GET: /Default1/
public ActionResult Index()
{
return View();
}
}
Now that we have the controller we need to go and add the view. So click on
the Index function which is present in the control and click on add view menu as
shown in the below figure.
The add view pops up a modal box to enter HTML code snippet I have
added "This is my first MVC application".
Index
This is my first MVC application
If you do a CNTRL + F5 you should see controllers purpose.
As an ASP.NET developer your choice would be to use session variables, view
state or some other ASP.NET session management object.
The problem with using ASP.NET session or view state object is the scope.
ASP.NET session objects have session scope and view state has page scope. For
MVC we would like to see scope limited to controller and the view. In other
words we would like to maintain data when the hit comes to controller and
reaches the view and after that the scope of the data should expire.
That's where the new session management technique has been introduced in ASP.NET
MVC framework i.e. ViewData.
Below is a simple youtube video which demonstrates the lab for view data. In
this video we will see how we can share data between of behind code. So to display
the view we need to use the <%: tag in the aspx page as shown in the below code
snippet.
<%: ViewData["CurrentTime"] %>
So now that we know how to pass data using view data, the next lab is to
create a simple model and see all the 3 MVC entities (i.e. model, view and
controller) in action.
In this lab we will create a simple customer model, flourish the same with
some data and display the same in a view.
Below is a video demonstration for the same.
The first step is to create a simple customer model which is nothing but a
class with 3 properties code, name and amount. Create a simple MVC project,
right click on the model folder and click on add new item as shown in the below
figure.
From the templates select a simple class and name it as customer.
Create the class with 3 properties as shown in the below the created the object of the customer class,
flourished with some data and passed the same to a view named advantage of creating a strong typed view is you can now get the
properties of class in the view by typing the model and "." as shown in the
below figure.
Below is the view code which displays the customer property value. We have
also put an if condition which displays the customer as privileged customer if
above 100 and normal customer if below 100.
The customer id is <%= Model.Id %>
The customer Code is <%= Model.CustomerCode %>
<% if (Model.Amount > 100) {%>
This is a priveleged customer
<% } else{ %>
This is a normal customer
<%} %>
Now the "D" thing, hit cntrl + f5 and pat yourself for one more lab success.
In this sample we flourished the customer object from within the controller,
in the next lab we will take data from an input view and display the same. In
other words we will see how to create data entry screens for accepting data from
views.
Every project small or big needs data entry screens. In this lab we will create
a simple customer data entry screen as shown in the below figure using MVC
template.
As soon as the end user enters details and submits data it redirects to a
screen as shown below. If he entered amount is less than 100 it displays normal
customer or else it displays privileged customer.
Below 'DisplayCustomer'. customer id is <%= Model.Id %>
The customer Code is <%= Model.CustomerCode %>
<% if (Model.Amount > 100) {%>
This is a priveleged customer
<% } else{ %>
This is a normal customer
<%} %>
Final step is to run the project and see the output.
You should be also able to test above 100 and below 100 scenarios
In this lab we created a simple data entry screen which helped us flourish the
customer object. This customer object was then passed to the view for display.
If you closely watch the current lab we have done lot of coding i.e. creating
the HTML screens , flourishing the object etc. It would be great if there was
some kind of automation. In the next lab we see how HTML helper classes help to
minimize many of these manual coding and thus increasing productivity.
In our previous lab we created a simple customer data entry screen. We completed
the lab successfully but with two big problems:-
. The complete HTML code was written manually. In other words, less productive.
It's like going back to dark ages where developers used to write HTML tags in
notepad.
. Added to it lot of manual code was also written in the controller to flourish
the object and send data to the MVC view.
In this lab we will see how to use MVC HTML helper classes to minimize the
above manual code and increase productivity
Create a simple customer class , please refer Lab 5 for the same.
HTML helper classes have readymade functions by which you can create HTML
controls with ease. Go to any MVC view and see the intellisense for HTML helper
class you should see something as shown in the below figure.
By using HTML helper class you can create any HTML control like textbox,
labels, list box etc just by invoking the appropriate function.
In order to create the form tag for HTML we need to use "Html.BeginForm" , below
goes the code snippet for the same.
<% using (Html.BeginForm("DisplayCustomer","Customer",FormMethod.Post))
{%>
-- HTML input fields will go here
<%} %>
The above code will generate the below HTML
The HTML helper "beginform" takes three input parameters action name (Method
inside the controller), controller name (actual controller name) and HTTP
posting methodology (Post or GET).
If you want to create a text box, simply use the "TextBox" function of html
helper class as shown in the below code. In this way you can create any HTML
controls using the HTML helper class functions.
Enter customer id :- <%= Html.TextBox("Id",Model)%>
The above code snippet will generate the below HTML code.
Enter customer id :-
To create a data entry screen like the one shown below we need to the use the
below code snippet.
<% using (Html.BeginForm("DisplayCustomer","Customer",FormMethod.Post))
{ %>
Enter customer id :- <%= Html.TextBox("Id",Model)%>
Enter customer code :- <%= Html.TextBox("CustomerCode",Model) %>
Enter customer Amount :- <%= Html.TextBox("Amount",Model) %>
<%} %>
So once you have created the view using the HTML helper classes it's time to
attach the customer class with view , please refer lab 5 for the same.
controller, it's all hidden and automated.
[HttpPost]
public ActionResult DisplayCustomer(Customer obj)
{
return View(obj);
}
Enjoy your output for different condition of customer amount entered.
So have a toast of beer for completing your first day of MVC labs.
In the next labs we will talk about URL routing, ease of MVC unit testing,
MVC Controller attributes and lot more. The next lab will bit more advanced as
compared to the first day, so take rest and I need to work hard to get you the
second day labs.
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/MNPpP/kb/kb/kb/kb/4804-learn-mvc-model-view-controller-step-by.aspx | CC-MAIN-2014-49 | refinedweb | 2,343 | 72.76 |
import "github.com/sqp/godock/widgets/confmenu"
Package confmenu provides a menu widget for the GUI.
Only contains save and close buttons but can embed more widgets (as a box).
var IconSize = gtk.ICON_SIZE_SMALL_TOOLBAR
IconSize defines the default icon size.
Controller defines methods used on the main widget / data source by this widget and its sons.
type MenuBar struct { gtk.Box // Container is first level. Act as (at least) a GtkBox. Save *gtk.Button // contains filtered or unexported fields }
MenuBar is the config window menu.
func New(control Controller) *MenuBar
New creates the config menu with add or save buttons.
Package confmenu imports 5 packages (graph) and is imported by 3 packages. Updated 2016-09-06. Refresh now. Tools for package owners. | https://godoc.org/github.com/sqp/godock/widgets/confmenu | CC-MAIN-2019-35 | refinedweb | 122 | 62.04 |
Update pubspec.yaml
It's often useful to provide sync (convenient) and async (concurrent) versions of the same API.
dart:io does this with many APIs including Process.run and Process.runSync. Since the sync and async versions do the same thing, much of the logic is the same, with just a few small bits differing in their sync vs. async implementation.
The
when function allows for registering
onSuccess,
onError, and
onComplete callbacks on another callback which represents that sync/async dependent part of the API. If the callback is sync (returns a non-
Future or throws), then the other callbacks are invoked synchronously, otherwise the other callbacks are registered on the returned
Future.
For example, here's how it can be used to implement sync and async APIs for reading a JSON data structure from the file system with file absence handling:
import 'dart:async'; import 'dart:convert'; import 'dart:io'; import 'package:when/when.dart'; /// Reads and decodes JSON from [path] asynchronously. /// /// If [path] does not exist, returns the result of calling [onAbsent]. Future readJsonFile(String path, {onAbsent()}) => _readJsonFile( path, onAbsent, (file) => file.exists(), (file) => file.readAsString()); /// Reads and decodes JSON from [path] synchronously. /// /// If [path] does not exist, returns the result of calling [onAbsent]. readJsonFileSync(String path, {onAbsent()}) => _readJsonFile( path, onAbsent, (file) => file.existsSync(), (file) => file.readAsStringSync()); _readJsonFile(String path, onAbsent(), exists(File file), read(File file)) { var file = new File(path); return when( () => exists(file), onSuccess: (doesExist) => doesExist ? when(() => read(file), onSuccess: JSON.decode) : onAbsent()); } main() { var syncJson = readJsonFileSync('foo.json', onAbsent: () => {'foo': 'bar'}); print('Sync json: $syncJson'); readJsonFile('foo.json', onAbsent: () => {'foo': 'bar'}).then((asyncJson) { print('Async json: $asyncJson'); }); } | https://dart.googlesource.com/when/+/e59714e7b06e62eeef0c48e91c58d288c26c87fe | CC-MAIN-2021-04 | refinedweb | 274 | 51.44 |
about the content download facility development using WCF with one of my clients, I proposed the solution of defining operation contracts with a Stream return type from WCF services and consuming it in client application. However the client’s requirement was to make use of the WCF REST service for file downloading e.g. for .jpg, png, docs etc. WCF REST services provides XML or JSON conversion for the response from the WCF service , however the question that was bothering me was how to make use of REST for file download?
After searching the System.ServiceModel.Web namespace, I found the ‘WebOperationContext’ class. This is a helper class that provides easy access to the contextual properties of the Web request and response communication. Then while working with this class I found that using the WebOperationContext.Current.OutgoingResponse.ContentType, the content type header from the outgoing web response can be managed with the value: “application/octet-stream”. This represents a generic value for Raw binary data which can contain format like, png,jpg,zip,doc etc. So I thought to make a use of this class for our application
Step 1: Open VS2010 and create a WCF service application and name it ‘WCF_FileDownload_REST’. Add a folder named ‘FileServer’ in the application with following files: (Note: You can enter your files here)
Step 2: Rename IService1.cs to IService.cs and Service1.svc to Service.svc. Open IService.cs and write the following ServiceContract in it:
The above code shows the OperationContract with WebGet attribute. The WebGet attribute is set with UriTemplate property to make the REST url user friendly.
Step 3: Open Service.svc.cs and implement the IService in the service class as shown below:
The DownloadFile method accepts two parameters fileName and fileExpension. The code then generates the file path using FileServer folder in the WCF application and generates the outgoing response using the raw binary format.
Step 4 : Right-Click on Service.svc and select View Markup and change the @ServiceHost as below:
Step 5: Open Web.Config file and add the webHttpBinding protocol mapping so that the WCF REST endpoint can be exposed and also define the Help page for the WCF service as below:
Step 6: Publish the WCF service on your IIS web server and view the Service.svc in the browser. The result should be as shown below:
Click on the ‘Service Help page’ and the user friendly url will be displayed:
In the browser, type the url as shown below:
“”
Here the file is ADOEF.png. Similarly also try to download the Docx/Excel/zip file and you should get a similar download experience.
The advantage of this approach is that the client who knows about this can now directly get connected to your WCF File Download service and use the download facility without using any file IO code.
The entire source code of this article can be download over here | https://www.dotnetcurry.com/wcf/723/download-files-using-wcf-rest-endpoints | CC-MAIN-2018-39 | refinedweb | 488 | 54.73 |
Barcode Software
CAM DESIGN HANDBOOK in Software
Compose 2d Data Matrix barcode in Software CAM DESIGN HANDBOOK
A. Increase by a factor of 2 B. Increase by a factor of 2 C. Decrease by a factor of 2 D. Decrease by a factor of 2 E. Remain constant because the average length of each segment will
generate, create barcodes setting none in c# projects
BusinessRefinery.com/ barcodes
generate, create barcodes remote none in java projects
BusinessRefinery.com/ bar code
SOLUTION
use office excel barcodes printer to draw barcode in office excel settings
BusinessRefinery.com/barcode
using keypress windows forms to get barcodes for asp.net web,windows application
BusinessRefinery.com/ barcodes
string Char(number input_parameter)
java barcode reader free
using based jdk to draw barcode with asp.net web,windows application
BusinessRefinery.com/ bar code
using codes reporting services to assign barcodes on asp.net web,windows application
BusinessRefinery.com/barcode
An even simpler way to do this is to write: Range( C1 ) .Value 10 Anytime you wish to specify a value, Value will work as well as FormulaR1C1. You can even simplify it further and write Range( C1 ) 10 but it is better as a programming style to specify what it is you want with that Range. In this case, let the value of the Range be x, so you should always specify Value. By the way, you should also use Value even if you are entering text. That s a value, too, according to VBA. Writing Formulas The recorder constructs the code based on relative addresses. This is why you see SUM(R[ 4]C:R[ 2]C) which describes the action of highlighting a range 4 rows to 2 rows above (notice the negative sign) the current active cell. If you know the range that you want to specify, then Range( C5 ).Select ActiveCell.FormulaR1C1 SUM(R[ 4]C:R[ 2]C) could be rewritten as: Range( C5 ) .Value SUM(C1:C3) Notice that we use Value even for writing formulas. To write a formula, start with an equals ( ) sign and then write the formula within the double quotes. The code literally enters into the cell whatever it is told to, so in this case, with the beginning equal sign, it is writing the format that is required for entering formulas.
add qr code to ssrs report
using barcode integration for sql server control to generate, create qr-code image in sql server applications. variable
BusinessRefinery.com/QR
how to create qr code in vb.net
using barcode development for .net framework control to generate, create qr bidimensional barcode image in .net framework applications. signature
BusinessRefinery.com/qrcode
MASTER THESE SKILLS
to assign qr code jis x 0510 and denso qr bar code data, size, image with word barcode sdk retrieve
BusinessRefinery.com/QR Code JIS X 0510
qr codes size step on .net
BusinessRefinery.com/qr-codes
VoIP Client
qrcode data stored on java
BusinessRefinery.com/qr-codes
qr barcode image implementing with microsoft excel
BusinessRefinery.com/QR Code JIS X 0510
6976 6978 6979 6984 6985 6969
generate, create pdf417 libraries none with excel microsoft projects
BusinessRefinery.com/barcode pdf417
winforms pdf 417
using dynamic .net winforms to draw pdf417 for asp.net web,windows application
BusinessRefinery.com/PDF-417 2d barcode
NOTE
crystal report barcode code 128
using barcode implementation for vs .net control to generate, create barcode 128a image in vs .net applications. alphanumeric
BusinessRefinery.com/Code-128
.net code 128 reader
Using Barcode recognizer for fill .NET Control to read, scan read, scan image in .NET applications.
BusinessRefinery.com/Code 128
Figure 7-25 A model is the basic unit of data storage within the PerformancePoint Server Planning module.
code 39 c#
using barcode maker for visual studio .net control to generate, create barcode code39 image in visual studio .net applications. connection
BusinessRefinery.com/3 of 9 barcode
code 39 barcode font for crystal reports download
using barcode integrated for .net crystal report control to generate, create 39 barcode image in .net crystal report applications. programs
BusinessRefinery.com/39 barcode
Delivery Method for Streamed Applications
rdlc data matrix
using barcode integration for report rdlc control to generate, create data matrix image in report rdlc applications. mail
BusinessRefinery.com/DataMatrix
rdlc code 128
using net rdlc report files to make code 128 barcode for asp.net web,windows application
BusinessRefinery.com/USS Code 128
The challenge for Eights who seek high self-mastery is to learn to manage their large, dynamic energy and reservoir of anger by fully acknowledging their long-hidden vulnerability. When they have accomplished this, Eights are generous, strong, openhearted, and open-minded. Although still direct and honest, they speak from the heart and the head as well as from the gut, and they solicit and embrace differing opinions. Their protectiveness of others is gentle rather than controlling, and they are grounded, warm, and deeply con dent.
Citrix XenApp Platinum Edition for Windows: The Official Guide
Low Intermediate High
peptide dipoles) since their interaction energy varies as 1/r. The result is that overall in the alpha helix dipole-dipole attractions outweigh the dipole-dipole repulsions. This means that overall the peptide bond dipoles contribute to stabilizing the helix. They also cause the entire helix to behave as one large dipole, as shown schematically in Fig. 9-9. In this way alpha-helical portions of globular proteins can interact with each other as dipoles, thus affecting tertiary conformation. The alpha-helical parts also interact as dipoles with other molecules and proteins while carrying out protein functions (binding, catalysis, etc.). Helical segments within proteins range anywhere from 3 to 40 or more residues in length, but about 10 residues is average. Proteins, however, can have many alpha-helical segments separated by bends or other secondary structures.
What are some common symptoms
This brings the Counter namespace into view. The second change is that it is no longer necessary to qualify CountDown with Counter, as this statement in Main( ) shows:
byte lower = 16; byte upper = null; // Here, lower is defined, but upper isn t. if(lower < upper) // false
This program outputs the following:
Use the subjunctive after superlative expressions to show an opinion, a feeling, or an emotion:
There are numerous zones you can monitor with your security system. We ve chosen a few of the more popular sensors and components to connect. After we talk about installing these various components, we ll also explain how to
Version Header Length Priority and TOS (Type of Service) Total Length Identification Flags Fragment Offset TTL (Time-To-Live) Protocol
Business Intelligence with Microsoft Office PerformancePoint Server 2007
Suction line
18.4.2 Site preparation
More Data Matrix on None
Articles you may be interested
how to print barcode in vb.net 2008: C Versus C++ I/O in Java Generator barcode pdf417 in Java C Versus C++ I/O
how to generate and scan barcode in asp.net using c#: pQroap in Software Encoding Denso QR Bar Code in Software pQroap
android barcode scanner api java: Four reasons wh y EVs will alw ays be with us in the future. in .NET Implement Quick Response Code in .NET Four reasons wh y EVs will alw ays be with us in the future.
qr code generator excel mac: Indeterminate Forms in Software Include qr bidimensional barcode in Software Indeterminate Forms
excel generate qr code: Cable Plant Testing and Maintenance Procedures in Software Integrating PDF417 in Software Cable Plant Testing and Maintenance Procedures
barcode reader in asp.net c#: The S(opt) for optimal NF, as stated on the data sheet, is 0.65 equals 0.48 j0.43. in Software Drawer UPC-A in Software The S(opt) for optimal NF, as stated on the data sheet, is 0.65 equals 0.48 j0.43.
java api barcode scanner: Voltage Clamp in .NET Integration Quick Response Code in .NET Voltage Clamp
ssrs qr code: A Simple Exception Example in C#.net Generator QR Code in C#.net A Simple Exception Example
barcodelib rdlc: Clustering Example in Software Printing barcode data matrix in Software Clustering Example
progress bar code in vb.net: 26: Frame Relay in Objective-C Maker Quick Response Code in Objective-C 26: Frame Relay
barcode font vb.net: Can the presence of potassium in coffee be confirmed with a chemical test in Software Integrate QR-Code in Software Can the presence of potassium in coffee be confirmed with a chemical test
2d barcode vb.net: C++ from the Ground Up in .NET Encoder Data Matrix 2d barcode in .NET C++ from the Ground Up
Getting Involved as a Volunteer in .NET Integration QR Code
download barcode font for vb.net: Ethernet Bridging in Objective-C Compose data matrix barcodes in Objective-C Ethernet Bridging
gtin excel calculator: What the Work Is Like in Software Attach QR Code ISO/IEC18004 in Software What the Work Is Like
generate barcode in vb.net: Citrix XenApp with Application Streaming in Software Assign QR Code 2d barcode in Software Citrix XenApp with Application Streaming
vb net barcode printing code: Creating Leader Tabs for a Price List in Software Creation QR-Code in Software Creating Leader Tabs for a Price List
B i o p h y s i c s D emys tifie D in .NET Generate QR Code 2d barcode
gtin 12 excel formula: Amphibionics in Software Print barcode code39 in Software Amphibionics
2-16b in .NET Receive qr codes | http://www.businessrefinery.com/yc3/431/76/ | CC-MAIN-2021-49 | refinedweb | 1,575 | 55.84 |
Testing
Creating a battery of good unit test cases is an important part of ensuring the quality of your application over its lifecycle. To aid developers with their testing efforts, GWT provides integration with the popular JUnit unit testing framework and Emma code coverage tool. GWT allows JUnit test cases to run in either development mode or production mode.
Note: To run through the steps to add JUnit tests to a sample GWT app, see the tutorial Unit Testing GWT Applications with JUnit.
- Architecting Your App for Testing
- Creating & Running a Test Case
- Asynchronous Testing
- Combining TestCase classes into a TestSuite
- Setting up and tearing down JUnit test cases that use GWT code
- Running tests in Eclipse
Architecting Your App for Testing
The bulk of this page is dedicated to explaining how to unit test your GWT code via the GWTTestCase class, which at the end of the day must pay performance penalties for running in a browser. But that’s not always what you want to do.
It will be well worth your effort to architect your app so that the bulk of your code has no idea that it will live in a browser. Code that you isolate this way can be tested in plain old JUnit test cases running in a JRE, and so execute much faster. The same good habits of separation of concerns, dependency injection and the like will benefit your GWT app just as they would any other, perhaps even more than usual.
For some tips along these lines take a look at the Best Practices For Architecting Your GWT App talk (video or slides) given at Google I/O in May of 2009. And keep an eye on this site for more more articles in the same vein.
Creating a Test Case
This section will describe how to create and run a set of unit test cases for your GWT project. In order to use this facility, you must have the JUnit library installed on your system.
The GWTTestCase Class
GWT includes a special GWTTestCase base class that provides JUnit integration. Running a compiled GWTTestCase subclass under JUnit launches the HtmlUnit browser which serves to emulate your application behavior during test execution.
GWTTestCase is derived from JUnit’s TestCase. The typical way to setup a JUnit test case class is to have it extend
TestCase, and then run the it with the JUnit TestRunner.
TestCase uses reflection to discover the test methods defined in your derived class. It is convention to begin the name of all test methods with the prefix
test.
Using webAppCreator
The webAppCreator that GWT includes can generate a starter test case for you, plus ant targets and eclipse launch configs for testing in both development mode and production mode.
For example, to create a starter application along with test cases in the directory
fooApp, where module name is
com.example.foo.Foo:
~/Foo> webAppCreator -out fooApp -junit /opt/eclipse/plugins/org.junit_3.8.1/junit.jar com.example.foo.Foo Created directory fooApp/src Created directory fooApp/war Created directory fooApp/war/WEB-INF Created directory fooApp/war/WEB-INF/lib Created directory fooApp/src/com/example/foo Created directory fooApp/src/com/example/foo/client Created directory fooApp/src/com/example/foo/server Created directory fooApp/test/com/example/foo/client Created file fooApp/src/com/example/foo/Foo.gwt.xml Created file fooApp/war/Foo.html Created file fooApp/war/Foo.css Created file fooApp/war/WEB-INF/web.xml Created file fooApp/src/com/example/foo/client/Foo.java Created file fooApp/src/com/example/foo/client/GreetingService.java Created file fooApp/src/com/example/foo/client/GreetingServiceAsync.java Created file fooApp/src/com/example/foo/server/GreetingServiceImpl.java Created file fooApp/build.xml Created file fooApp/README.txt Created file fooApp/test/com/example/foo/client/FooTest.java Created file fooApp/.project Created file fooApp/.classpath Created file fooApp/Foo.launch Created file fooApp/FooTest-dev.launch Created file fooApp/FooTest-prod.launch Created file fooApp/war/WEB-INF/lib/gwt-servlet.jar
Follow the instructions in the generated fooApp/README.txt file. You have two ways to run your tests: using ant or using Eclipse. There are ant targets
ant test.dev and
ant test.web for running your tests in development and production mode, respectively. Similarly, you can follow the instructions in the README.txt file to import your project in Eclipse or your favorite IDE, and use the launch configs
FooTest-dev and
FooTest-prod to run your tests in development and production mode using eclipse. As you keep adding your testing logic to the skeleton
FooTest.java, you can continue using the above techniques to run your tests.
Creating a Test Case by Hand
If you prefer not to use webAppCreator, you may create a test case suite by hand by following the instructions below:
- Define a class that extends GWTTestCase. Make sure your test class is on the module source path (e.g. in the
clientsubpackage of your module.) You can add new source paths by editing the module XML file and adding a
<source>element.
- If you do not have a GWT module yet, create a module that causes the source for your test case to be included. If you are adding a test case to an existing GWT app, you can just use the existing module.
- Implement the method GWTTestCase.getModuleName() to return the fully-qualified name of the module. This is the glue that tells the JUnit test case which module to instantiate.
- Compile your test case class to bytecode. You can use the Java compiler directly using javac or a Java IDE such as Eclipse.
- Run your test case. Use the class
junit.textui.TestRunneras your main class and pass the full name of your test class as the command line argument, e.g.
com.example.foo.client.FooTest. When running the test case, make sure your classpath includes:
- Your project’s
srcdirectory
- Your project’s
bindirectory
- The
gwt-user.jarlibrary
- The
gwt-dev.jarlibrary
- The
junit.jarlibrary
Client side Example
First of all, you will need a valid GWT module to host your test case class. Usually, you do not need to create a new module XML file - you can just use the one you have already created to develop your GWT module. But if you did not already have a module, you might create one like this:
<module> <!-- Module com.example.foo.Foo --> <!-- Standard inherit. --> <inherits name='com.google.gwt.user.User'/> <!-- implicitly includes com.example.foo.client package --> <!-- OPTIONAL STUFF FOLLOWS --> <!-- It's okay for your module to declare an entry point. --> <!-- This gets ignored when running under JUnit. --> <entry-point <!-- You can also test remote services during a JUnit run. --> <servlet path='/foo' class='com.example.foo.server.FooServiceImpl'/> </module>
Tip: You do not need to create a separate module for every test case, and in fact will pay a startup penalty for every module you do use. In the example above, any test cases in
com.example.foo.client (or any subpackage) can share the
com.example.foo.Foo module.
Suppose you had created a widget under the
foo package,
UpperCasingLabel, which ensures that the text it shows is all upper case. Here is how you might test it.
package com.example.foo.client; import com.google.gwt.junit.client.GWTTestCase; public class UpperCasingLabelTest extends GWTTestCase { /** * Specifies a module to use when running this test case. The returned * module must include the source for this class. * * @see com.google.gwt.junit.client.GWTTestCase#getModuleName() */ @Override public String getModuleName() { return "com.example.foo.Foo"; } public void testUpperCasingLabel() { UpperCasingLabel upperCasingLabel = new UpperCasingLabel(); upperCasingLabel.setText("foo"); assertEquals("FOO", upperCasingLabel.getText()); upperCasingLabel.setText("BAR"); assertEquals("BAR", upperCasingLabel.getText()); upperCasingLabel.setText("BaZ"); assertEquals("BAZ", upperCasingLabel.getText()); } }
Now, there are several ways to run your tests. Just look at the sample ant scripts or launch configs generated by webAppCreator, as in the previous subsection.
Passing Arguments to the Test Infrastructure
The main class in the test infrastructure is
JUnitShell. To control aspects of how your tests execute, you must pass arguments to this class. Arguments cannot be passed directly through the command-line because normal command-line arguments go directly to the JUnit runner. Instead, define the system property
gwt.args to pass arguments to
JUnitShell.
For example, to run tests in (legacy) development mode (that is, run the tests in Java in the JVM), declare
-Dgwt.args="-devMode" as a JVM argument when invoking JUnit. To get a full list of supported options, declare
-Dgwt.args="-help" (instead of running the test, help is printed to the console).
Running your test in (legacy) Development Mode
When using the webAppCreator tool, you get the ability to launch your tests in either (legacy) development mode or production mode. By default, tests are run in production mode, thus they’re compiled to JavaScript before being executed.
Otherwise, in (legacy) development mode tests are run as normal Java bytecode in a JVM. While this makes it easier to debug them, note that, although rare, there are some differences between Java and JavaScript that could cause your code to produce different results when deployed.
If you instead decide to run the JUnit TestRunner from the command line, you need to pass arguments to
JUnitShell to get your unit tests running in (legacy) development mode
-Dgwt.args="-devMode"
Running your test in Manual Mode
Manual-mode tests allow you to run unit tests manually on any browser. In this.
For example, if you want to run a test in a single browser, you would use the following arguments:
-runStyle Manual:1
GWT will then show a console message like the following:
Please navigate your browser to this URL:
Point your browser to the specified URL, and the test will run. You may be prompted by the GWT Developer Plugin to accept the connection the first time the test is run.
Manual-mode test targets are not generated by the webAppCreator tool, but you can easily create one by copying the
test.prod ant target in the build.xml file to
test.manual and adding
-runStyle Manual:1 to the
-Dgwt.args part. Manual mode can also be used for remote browser testing.
Running your test on Remote Systems
Since different browsers can often behave in unexpected ways, it is important for developers to test their applications on all browsers they plan to support. GWT simplifies remote browser testing by enabling you to run tests on remote systems, as explained in the Remote Browser Testing page.
Automating your Test Cases
When developing a large project, a good practice is to integrate the running of your test cases with your regular build process. When you build manually, such as using
ant from the command line or using your desktop IDE, this is as simple as just adding the invocation of JUnit into your regular build process. As mentioned before, when you run
GWTTestCase tests, an HtmlUnit browser runs the tests. However, all tests might not run successfully on HtmlUnit, as explained earlier. GWT provides remote testing solutions that allow you to use a selenium server to run tests. Also, consider organizing your tests into GWTTestSuite classes to get the best performance from your unit tests.
Server side testing
The tests described above are intended to assist with testing client-side code. The test case wrapper
GWTTestCase will launch either a development mode session or a web browser to test the generated JavaScript. On the other hand, server-side code runs as native Java in a JVM without being translated to JavaScript, so it is not necessary to run tests of server-side code using
GWTTestCase as the base class for your tests. Instead, use JUnit’s
TestCase and other related classes directly when writing tests for your application’s server-side code. That said, you may want both GWTTestCase and TestCase coverage of code that will be used on both the client and the server.
Asynchronous Testing
GWT’s JUnit integration provides special support for testing functionality that cannot be executed in straight-line code. For example, you might want to make an RPC call to a server and then validate the response. However, in a normal JUnit test run, the test stops as soon as the test method returns control to the caller, and GWT does not support multiple threads or blocking. To support this use case, GWTTestCase has extended the
TestCase API. The two key methods are GWTTestCase.delayTestFinish(int) and GWTTestCase.finishTest(). Calling
delayTestFinish() during a test method’s execution puts that test in asynchronous mode, which means the test will not finish when the test method returns control to the caller. Instead, a delay period begins, which lasts the amount of time specified in the call to
delayTestFinish(). During the delay period, the test system will wait for one of three things to happen:
- If
finishTest()is called before the delay period expires, the test will succeed.
- If any exception escapes from an event handler during the delay period, the test will error with the thrown exception.
- If the delay period expires and neither of the above has happened, the test will error with a TimeoutException.
The normal use pattern is to setup an event in the test method and call
delayTestFinish() with a timeout significantly longer than the event is expected to take. The event handler validates the event and then calls
finishTest().
Example
public void testTimer() { // Setup an asynchronous event handler. Timer timer = new Timer() { public void run() { // do some validation logic // tell the test system the test is now done finishTest(); } }; // Set a delay period significantly longer than the // event is expected to take. delayTestFinish(500); // Schedule the event and return control to the test system. timer.schedule(100); }
The recommended pattern is to test one asynchronous event per test method. If you need to test multiple events in the same method, here are a couple of techniques:
- “Chain” the events together. Trigger the first event during the test method’s execution; when that event fires, call
delayTestFinish()again with a new timeout and trigger the next event. When the last event fires, call
finishTest()as normal.
- Set a counter containing the number of events to wait for. As each event comes in, decrement the counter. Call
finishTest()when the counter reaches
0.
Combining TestCase classes into a TestSuite
The GWTTestSuite mechanism has the overhead of having to start a development mode shell and servlet or compile your code. There is also overhead for each test module within a suite.
Ideally you should group your tests into as few modules as is practical, and should avoid having tests in a particular module run by more than one suite. (Tests are in the same module if they return the same value from getModuleName().)
GWTTestSuite class re-orders the test cases so that all cases that share a module are run back to back.
Creating a suite is simple if you have already defined individual JUnit TestCases or GWTTestCases. Here is an example:
public class MapsTestSuite extends GWTTestSuite { public static Test suite() { TestSuite suite = new TestSuite("Test for a Maps Application"); suite.addTestSuite(MapTest.class); suite.addTestSuite(EventTest.class); suite.addTestSuite(CopyTest.class); return suite; } }
The three test cases
MapTest,
EventTest, and
CopyTest can now all run in the same instance of JUnitShell.
java -Xmx256M -cp "./src:./test:./bin:./junit.jar:/gwt/gwt-user.jar:/gwt/gwt-dev.jar:/gwt/gwt-maps.jar" junit.textui.TestRunner com.example.MapsTestSuite
Setting up and tearing down JUnit test cases that use GWT code
When using a test method in a JUnit TestCase, any objects your test creates and leaves a reference to will remain active. This could interfere with future test methods. You can override two new methods to prepare for and/or clean up after each test method.
- gwtSetUp() runs before each test method in a test case.
- gwtTearDown() runs after each test method in a test case.
The following example shows how to defensively cleanup the DOM before the next test run using gwtSetUp(). It skips over
<iframe> and
<script> tags so that the GWT test infrastructure is not accidentally removed.
import com.google.gwt.junit.client.GWTTestCase; import com.google.gwt.user.client.DOM; import com.google.gwt.user.client.Element; private static native String getNodeName(Element elem) /*-{ return (elem.nodeName || "").toLowerCase(); }-*/; /** * Removes all elements in the body, except scripts and iframes. */ public void gwtSetUp () { Element bodyElem = RootPanel.getBodyElement(); List<Element> toRemove = new ArrayList<Element>(); for (int i = 0, n = DOM.getChildCount(bodyElem); i < n; ++i) { Element elem = DOM.getChild(bodyElem, i); String nodeName = getNodeName(elem); if (!"script".equals(nodeName) && !"iframe".equals(nodeName)) { toRemove.add(elem); } } for (int i = 0, n = toRemove.size(); i < n; ++i) { DOM.removeChild(bodyElem, toRemove.get(i)); } }
Running Tests in Eclipse
The webAppCreator tool provides a simple way to generate example launch configurations that can be used to run both development and production mode tests in Eclipse. You can generate additional launch configurations by copying it and replacing the project name appropriately.
Alternatively, one can also directly generate launch configurations. Create a normal JUnit run configuration by right-clicking on the Test file that extends
GWTTestCase and selecting
Run as >
JUnit Test. Though the first run will fail, a new JUnit run configuration will be generated. Modify the run configuration by adding the project’s
src and
test directories to the classpath, like so:
- click the
Classpath tab
User Entries
- click the
Advancedbutton
- select the
Add Foldersradio button
- add your
srcand
testdirectories
Launch the run config to see the tests running in development mode.
To run tests in production mode, copy the development mode launch configuration and pass VM arguments (by clicking the
Arguments tab and adding to the VM arguments textarea)
-Dgwt.args="-prod" | https://www.gwtproject.org/doc/latest/DevGuideTesting.html | CC-MAIN-2022-21 | refinedweb | 2,972 | 56.45 |
A couple of weeks back I had spent some time setting up several VSAN Stretched Clusters in my lab for some testing and although it was extremely easy to setup using the vSphere Web Client, I still prefer to stand up the environment completely automated 🙂
In looking to automate the VSAN Stretched Cluster configuration, I was interested in something that would pretty much work out of the box and not require any additional download or setup. The obvious answer would be to use the Ruby vSphere Console (RVC) is a really awesome tool that is available as part of vCenter Server included in both Windows vCenter Server and the VCSA.
For those of you who have not used RVC before, I highly recommend you give it a try and you can take a look at this article to see some of the cool features and benefits. I am making use of the RVC script option which I have written about in the past here to perform the VSAN Stretched Configuration. One of the new RVC namespaces that have been introduced in vSphere 6.0 Update 1 is the vsan.stretchedcluster.* commands and the one we are specifically interested in is the vsan.stretchedcluster.config_witness command.
There are a couple of things the script expects from an environment setup, so I will just spend a few minutes covering the pre-reqs and the assumptions before diving into the script. I will assume you already have a vCenter Server deployed and configured with an empty inventory. I also assume you have already deployed at least two ESXi hosts and a VSAN Witness VM that meets all the VSAN pre-reqs like at least one VSAN enabled VMkernel interface and associated disk requirements. Below is a screenshot of the vSphere Web Client of the initial environment.
Next, we will need to download the RVC script deploy_stretch_cluster.rb and upload that to your vCenter Server. Before you can execute the script, you will need to edit the script and adjust the variable names based on your environment. Once you have saved the changes, you can then run the RVC script by running the following command:
rvc -s deploy_stretch_cluster.rb [VC-USERNAME]@localhost
Here is a screenshot of running the script on the VCSA using Nested ESXi VMs + VSAN Witness VM for the Stretched Clustering configuration:
If everything executed successfully, you should see a "Task result: success" which signifies that the VSAN Witness VM was successfully added to the VSAN Stretched Cluster. If we now refresh the vSphere Web Client and under the Fault Domains configurations in the VSAN Cluster, we now see both our 2-Node VSAN Cluster and the VSAN Witness VM.
Hopefully this script can also benefit others who are interested in quickly standing up a VSAN Stretched Cluster, especially for evaluation or testing purposes. Enjoy getting your VSAN on! | https://www.virtuallyghetto.com/2015/10/automating-full-configuration-of-a-vsan-stretched-cluster-using-rvc.html | CC-MAIN-2017-47 | refinedweb | 478 | 54.97 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.