text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Your Account
by Rael Dornfest
Netscape's decomissioning of its RSS-based My.Netscape portal and consequent removal of the RSS 0.91 DTD got me thinking about perennial herbaceous vines, of all things.
About this time of year, my father-in-law's garden becomes home to a tangle of leafy vines culminating in late October in more chayote than he can possibly consume--despite valiant efforts to do so. If this weren't bad (read: good) enough, his vines are nothing compared to those which assault his back-yard fence from the neighbor's yard.
RSS, originally purposed as a content-gathering mechanism to feed My.Netscape, is today best known for its by-product: lightweight XML syndication. While Netscape's portal has withered and finally left us, RSS has flourished and grown. Today's feeds carry an array of content types including news headlines, discussion forums, software update announcements, metadata, and various other bits of both open and proprietary data.
Last week, as part of the portal's facelift, some key files--most notably the RSS 0.91 DTD--were removed from the site. While subsequently restored, its been an interesting few days of reactions. There were calls for redundancy and mirrors, using non-validating XML parsers, and moving beyond reliance on a single document at a single URL. Was this simply a blunder or some sinister plot to kill RSS? While most folks did indeed attribute the removal to oversight, there were those citing AOL/Netscape's plan for a "walled garden," lock-in to AOL-only content, and an attack on small content providers. (This author firmly believes it was nothing more than a removal of cruft no longer needed to support a system that was no longer being used.)
Yes, they removed a key document, striking fear into the hearts of some portion of the RSS consumership. Yes, they did so without proper notification of those who might have been affected. No, they made no provision for backward compatibility, either by mirroring or placing the DTD into the public domain. They simply went ahead and removed a bit of cruft no longer necessary to support a decomissioned system.
This year, it seems, the neighbors have removed the vine upon which my father-in-law and everyone he knows has come to rely. And without so much as a peep of notification!
All this points not to any kind of malicious betrayal, but to the risk one takes building upon unintended Web services. They are brittle, unreliable, and can disappear in an instant. One should no more depend on them for one's livelihood than build a fruit supply company on the bounty afforded by next-door's overhanging fruit.
That said, unintended Web services are often some of the most interesting, exciting, and fruitful (pun intended ;-). Take screen-scraping, for instance. While a major pain to maintain--requiring vigilence on the part of the scraper, shadowing the Website producer's every <p>, <blockquote>, URL-line parameter, and semicolon--it's nevertheless given rise to a bounty of useful tools and sub-services. Just take a gander at the variety of Perl modules for searching the Web, grabbing stock quotes, and so on.
While perhaps (almost certainly, in fact) one doesn't want to depend upon the legality, availability, and stability of these services, they're certainly worth enjoying while they last. And should they disappear, certainly don't go pounding on your neighbour's door, biff him in the nose, and demand he put his vines back.
So what, then, could Netscape have done in this particular situation to make things easier on us?
First and foremost would have to be direction. Netscape seemed to have lost interest in further development of RSS and fell silent. We are left to read the tea-leaves for any insight into the direction they might have taken.
A second help would have been Netscape's placing the spec, documents, RSS itself--the whole kit and caboodle--into the public domain. While further development of versions of RSS continues, no claims may be made or settled with respect to the original format itself; this has contributed in some ways to the rifts we see in RSS today.
Third (and this is a touch of partisanship) was the move from the flexibility and decentralization of RDF and namespaces to the reliance on the availability of a static DTD. This mistake has since been fixed (in my humble opinion) by RSS 1.0's return to its RDF roots. On the RSS 0.91 front, while the DTD has been restored and cached in various places, there's still a single point of failure, whether or elsewhere.
Note:
I have attempted contact at various times with those even remotely in the RSS know at Netscape/AOL via My.Netscape, Mozilla, DMOZ, and other avenues--to no avail. It seems that anyone in any way connected to RSS has moved on or is far enough out on the edges of the organization so as not to be reachable or able to affect change.
Are you using 'unintended' Web Services?
© 2017, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://archive.oreilly.com/pub/post/mynetscape_rss_and_the_chayote.html | CC-MAIN-2017-47 | refinedweb | 885 | 62.78 |
Ford Fulkerson Algorithm for Maximum flow in a graph
Reading time: 15 minutes | Coding time: 9 minutes
Ford–Fulkerson algorithm is a greedy algorithm that computes the maximum flow in a flow network. The main idea is to find valid flow paths until there is none left, and add them up. It uses Depth First Search as a sub-routine.
Pseudocode
* Set flow_total = 0 * Repeat until there is no path from s to t: * Run Depth First Search from source vertex s to find a flow path to end vertex t * Let f be the minimum capacity value on the path * Add f to flow_total * For each edge u → v on the path: * Decrease capacity of the edge c(u → v) by f * Increase capacity of the edge c(v → u) by f
Before moving forward, think about the following questions:
What is the worst case scenario for Ford Fulkerson algorithm?
Will Ford Fulkerson algorithm terminate for all graphs?
Worst case scenario:
Flow increments by 1 in each step for a graph such as:
It takes 2000 steps to find the maximum flow in the above graph. If we used Breadth First Search instead of Depth First Search in Ford Fulkerson algorithm, it will take 2 steps.
Non-terminating example:
Ford Fulkerson algorithm will not terminate for the following graph:
How do we know if this gives a maximum flow?
Proof overview:
We will use proof by contradiction.
Suppose Ford–Fulkerson algorithm will not give the maximum flow.
Take a maximum flow f and “subtract” our flow f. It is a valid flow of positive total flow. By the flow decomposition, it can be decomposed into flow paths and circulations. These flow paths must have been found by Ford-Fulkerson.
This leds to a contradiction. Hence, Ford–Fulkerson algorithm will give the maximum flow.
Complexity
- Worst case time complexity:
Θ(max_flow * E)
- Average case time complexity:
Θ(max_flow * E)
- Best case time complexity:
Θ(max_flow * E)
- Space complexity:
Θ(E + V)
Implementations
- C++
- Java
- Python
C++
/* Part of Cosmos by OpenGenus Foundation */ #include <iostream> #include <string.h> using namespace std; #define N 7 #define INF 9999999 // flow network int Flow[N][N]; // visited array bool visited[N]; // original flow network graph shown in the above example //0 1 2 3 4 5 6 int graph[N][N] = { { 0, 5, 4, 0, 0, 0, 0 }, //0 { 0, 0, 0, 0, 0, 0, 4 }, //1 { 0, 0, 0, 3, 0, 0, 6 }, //2 { 0, 0, 0, 0, 5, 0, 0 }, //3 { 0, 0, 0, 0, 0, 0, 8 }, //4 { 6, 0, 0, 2, 0, 0, 0 }, //5 { 0, 0, 0, 0, 0, 0, 0 }, //6 }; = 5; int t = 6; int max_flow = 0; // while ther is augmenting path , from s and t // with positive flow capacity while (int sent = dfs(s, t, INF)) { max_flow += sent; // reset visited array , for searching next path memset(visited, 0, sizeof(visited)); } cout << "The max flow from node 5 to sink node 6 is " << max_flow; cout << endl; }
Java
// Part of Cosmos by OpenGenus Foundation import java.util.LinkedList; import java.lang.Exception; class FordFulkersonUsingBfs { static final int V = 6; boolean bfs(int rGraph[][], int s, int t, int parent[]) { /* * Create a visited array and mark all vertices as not * visited */ boolean visited[] = new boolean[V]; for(int i=0; i<V; ++i) visited[i]=false; /* * Create a queue, enqueue source vertex and mark * source vertex as visited */ LinkedList<Integer> queue = new LinkedList<Integer>(); queue.add(s); visited[s] = true; parent[s]=-1; // Standard BFS Loop while (queue.size()!=0) { int u = queue.poll(); for (int v=0; v<V; v++) { if (visited[v]==false && rGraph[u][v] > 0) { queue.add(v); parent[v] = u; visited[v] = true; } } } /* * If we reached sink in BFS starting from source, then * return true, else false */ return (visited[t] == true); } // Returns tne maximum flow from s to t in the given graph int fordFulkerson(int graph[][], int s, int t) { int u, v; /* * Create a residual graph and fill the residual graph * with given capacities in the original graph as * residual capacities in residual graph */ /* * Residual graph where rGraph[i][j] indicates * residual capacity of edge from i to j (if there * is an edge. If rGraph[i][j] is 0, then there is * not) */ int rGraph[][] = new int[V][V]; for (u = 0; u < V; u++) for (v = 0; v < V; v++) rGraph[u][v] = graph[u][v]; // This array is filled by BFS and to store path int parent[] = new int[V]; int max_flow = 0; // There is no flow initially /* * Augment the flow while tere is path from source * to sink */ while (bfs(rGraph, s, t, parent)) { /* *Find minimum residual capacity of the edhes * along the path filled by BFS. Or we can say * find the maximum flow through the path found. */ int pathFlow = Integer.MAX_VALUE; for (v=t; v!=s; v=parent[v]) { u = parent[v]; pathFlow = Math.min(pathFlow, rGraph[u][v]); } /* * update residual capacities of the edges and * reverse edges along the path */ for (v=t; v != s; v=parent[v]) { u = parent[v]; rGraph[u][v] -= pathFlow; rGraph[v][u] += pathFlow; } // Add path flow to overall flow max_flow += pathFlow; } // Return the overall flow return max_flow; } // Driver program to test above functions public static void main (String[] args) throws java.lang.Exception { // Example graph in adjancancy matrix int graph[][] =new int[][] { {0, 16, 13, 0, 0, 0}, {0, 0, 10, 12, 0, 0}, {0, 4, 0, 0, 14, 0}, {0, 0, 9, 0, 0, 20}, {0, 0, 0, 7, 0, 4}, {0, 0, 0, 0, 0, 0} }; FordFulkersonUsingBfs m = new FordFulkersonUsingBfs(); System.out.println("The maximum possible flow is " + m.fordFulkerson(graph, 0, 5)); } }
Python
''' Part of Cosmos by OpenGenus Foundation ''' from collections import defaultdict #This class represents a directed graph using adjacency matrix representation class Graph: def __init__(self,graph): self.graph = graph # residual graph self. ROW = len(graph) #self.COL = len(gr[0]) '''Returns true if there is a path from source 's' to sink 't' in residual graph. Also fills parent[] to store the path ''' def BFS(self,s, t, parent): # Mark all the vertices as not visited visited =[False]*(self.ROW) # Create a queue for BFS queue=[] # Mark the source node as visited and enqueue it queue.append(s) visited[s] = True # Standard BFS Loop while queue: #Dequeue a vertex from queue and print it u = queue.pop(0) # Get all adjacent vertices of the dequeued vertex u # If a adjacent has not been visited, then mark it # visited and enqueue it for ind, val in enumerate(self.graph[u]): if visited[ind] == False and val > 0 : queue.append(ind) visited[ind] = True parent[ind] = u # If we reached sink in BFS starting from source, then return # true, else false return True if visited[t] else False # Returns tne maximum flow from s to t in the given graph def FordFulkerson(self, source, sink): # This array is filled by BFS and to store path parent = [-1]*(self.ROW) max_flow = 0 # There is no flow initially # Augment the flow while there is path from source to sink while self.BFS(source, sink, parent) : # Find minimum residual capacity of the edges along the # path filled by BFS. Or we can say find the maximum flow # through the path found. path_flow = float("Inf") s = sink while(s != source): path_flow = min (path_flow, self.graph[parent[s]][s]) s = parent[s] # Add path flow to overall flow max_flow += path_flow # update residual capacities of the edges and reverse edges # along the path v = sink while(v != source): u = parent[v] self.graph[u][v] -= path_flow self.graph[v][u] += path_flow v = parent[v] return max_flow # Create a graph given in the above diagram print("Enter the number of vertices: ", end="") m = int(input()) matrix = [] for i in range(0,m): matrix.append([]) for j in range(0,m): print("Enter the edge value from " + str(i) + " to " + str(j) +": ", end="") temp = int(input()) matrix[i].append(temp) print("\nthe matrix is,\n") for i in range(0,m): for j in range(0,m): print(matrix[i][j]," ",end="") print("\n") print("Enter the source: ", end="") source = int(input()) print("Enter the sink: ", end="") sink = int(input()) print ("The maximum possible flow is %d " % g.FordFulkerson(source, sink))
Applications
- maximizing the transportation with given traffic limits
- maximizing packet flow in computer networks. | https://iq.opengenus.org/ford-fulkerson-algorithm/ | CC-MAIN-2019-47 | refinedweb | 1,392 | 57.3 |
Welcome to the third installment in our series on building Cordova plugins for BlackBerry 10. Last week we covered Mac and Linux. Today’s post is for our Windows developers and will detail steps for setting up all required tools, configuring your environment, how to leverage our plugins template and tips and tricks for troubleshooting issues throughout the development process.
Step 1: Install the required tools
The following tools are used to build your own Cordova plugin for BlackBerry 10. Download and install all of the following to your Windows machine:
- BlackBerry 10 WebWorks 2.0 SDK for Windows
- Momentics IDE for BlackBerry 10 for Windows
- BlackBerry 10 Simulator for Windows (Optional)
Step 2: Request and install code signing keys
In order to build release-ready code and publish an application in BlackBerry World, you must request and install code signing keys. Code signing keys are used to securely imprint your digital signature on a compiled piece of code, such as a production-ready application. The Momentics IDE simplifies the code signing installation process and is recommended. See instructions on how to configure signing using Momentics.
Step 3: Optionally Sign BlackBerry Open Source contributor agreement
If you’d like to contribute your work back to the community through the BlackBerry repositories, you’ll need to sign a Contributor’s Agreement and submit a request to BlackBerry. The contribution process is designed to protect the interests of both the community and BlackBerry, while being lightweight and easy to follow. Once approved, your name will be included among the list of Approved Signatories.
Step 4: Optionally Install Git and create a GitHub account
The BlackBerry Open Source project is hosted in GitHub. In order to contribute or modify code from the project, developers must create a GitHub account and be familiar with how to use Git to upload and download submissions. Anyone can download the code without having an account.
Step 5: Starting a Project from the Template
There is a fair amount of boilerplate code involved in a new plugin, so the best place to start is with the Plugin Template. For convenience, the native portion of the template is included in the NDK New Project Wizard, but for a few reasons, the best place to get the template will always be the GitHub repository.
To create a new plugin, copy the template project into a new location. I would recommend building the sample application first before you attempt to do anything new. Only the “www” folder of the application is included, so you’ll want to create a new application with WebWorks 2 or Cordova first:
- In the “Template” folder, run “webworks create sample2”, or “cordova create sample2”.
- Remove the “www” folder from this new app, and replace it with the one from the template’s sample folder.
- Change to the “sample” folder: “cd sample2”.
- If you are using Cordova, make sure the BlackBerry NDK is on your path, or run the “bbndk-env” script to setup your path, and then run “cordova platform add blackberry10”.
- Add the plugin to this app with “webworks plugin add ../plugin” or “cordova plugin add ../plugin”
- Build and run the app on your device or sim with “webworks run” or “cordova run”. You may need to supply your signing key and device password.
You should see the template sample app run and give you the output from this screenshot:
With that process completed, we’re in good shape to start breaking things!
Step 6: Renaming the Template
Remove the plugin from the app with “webworks plugin remove community.templateplugin”, or likewise with “cordova.” Import the native portion of the Template Project into the Momentics NDK. The project definition is in the “Template/plugin/src/blackberry10/native” folder. Now rename the template plugin and its various parts to suit the plugin that you want to create. Instructions for these steps are included in the Readme. Pay close attention here, as typos and missed renames will cause problems that can be hard to track down. Before actually changing any of the methods, just the names of the boilerplate files and classes, rebuild the sample including the native portion of the plugin and add it back to the sample. This way you can be certain that you’ve got a working base before making the more significant changes.
- Build the Native portion, following the instructions in the Readme. This will put native binaries into the simulator and device directories with the name that you chose.
- Make sure you updated the reference to the native code in the index.js file where it refers to “libTemplate” and “libTemplate.TemplateJS”. This is how the plugin dynamically loads the native binaries from the .so file.
- Also make sure that your “plugin.xml” file was updated with all the new file names. This file tells WebWorks and Cordova how to install the plugin.
- Add the plugin to the sample app with “webworks plugin add ../plugin” in the sample2 folder again.
- Change the function calls in the sample app function called “testPluginCalls” to reference the namespace that you chose earlier.
- Build the app again with “webworks run”.
Step 7: Adding Features and Plugin Communication
Provided all the name changes worked out, you should get the same results as before, but now it’s your new plugin that’s doing all the magic. With this, the door is open to more advanced features, and the template functions are available to show you how to transmit different types of data between the JavaScript and C++ layers. You can send a simple command with no data, with a string, or with JSON. You can get a result immediately, or asynchronously, or initiate a thread that returns multiple results. This is the flow and usage of each part of the plugin:
- When the app launches, an object with the contents of plugin/www/client.js is created using the name given in plugin/plugin.xml. In the case of the Template, this is “community.templateplugin”.
- In your app you can call a method on this plugin object, which will send the command and parameters through the Cordova framework. It looks up the plugin using the “_ID” value that is used in the exec function calls and defined in client.js.
- The “_ID” value matches the id defined in plugin.xml, which means Cordova calls the index.js method of the same name as the command passed from client.js. This index.js file is appended into the “frameworkModules.js” file that you can view in the Bootstrap WebView of your application. It’s the first WebView in the list when you connect WebInspector to your debug application, so you are able to open this WebView and put breakpoints in there for debugging.
- The first step of a function call in index.js is to create a new PluginResult object. This object handles communication back to the application through the Cordova framework. The PluginResult is stored in an object for lookup later using its own generated callbackId value.
- To get into the native code, the plugin will use the JNEXT bridge, defined at the bottom of the index.js file. This code loads and links the native object code and serializes function calls into it. It’s important that you send in the callbackId value from the PluginResult, and that any parameters are string, so make sure to call JSON.stringify on JSON objects here.
- From here the control passes into the InvokeMethod of the native code. The command and callbackId values are stripped off the string to determine the method to call, and the remainder is considered arguments. The architecture separates the JNEXT bridge portion of the code into a file named template_js.cpp while the true plugin is in the template_ndk.cpp file. You will likely have renamed these earlier. Based on the command given, a method from the plugin file is called.
- Besides running the native methods needed at this point, the arguments may need to be parsed from a string into JSON. If this is a method that should return a result immediately, you might simply return the result as a string. Otherwise, to return a result, use the “m_pParent->NotifyEvent(callbackId + “ “ + message)” method, where message is a string.
After each change to the native code, the native project needs to be rebuilt. Use the device and simulator targets to build the library in the right folders. When any change to the plugin code occurs, including native code changes, the plugin should be removed and re-added to the sample app. You can write a simple script to do so.
Step 8: Debugging and Logging
Take note of the Logging feature that is integrated into the native part of the template. This is a convenient wrapper for the device’s slog2 logging tool. Given a priority level and a string, the message will be written into slog2, which can be viewed by connecting to the device with Momentics.
When adding features into your plugin, make sure to note what library the code comes in. You’ll need to add that library to the linker setup on your native project. Otherwise, when the plugin library tries to load at runtime, it will fail to link and will not load. The error message states that the app can’t find the library, so this can be quite misleading. Information about adding libraries and special instructions for Qt libraries is in the Template Readme.
Step 9: You’re Good to Go!
At this point, you’re on your way to building plugins and doing some very cool things. Working together with the community, we’ve been able to release 18 community plugins already. Don’t be discouraged if you run into trouble; a lot of your fellow developers have been there before and we can help. If you’re interested in contributing to our BlackBerry repositories on GitHub, we’d appreciate your input. Contact me on Twitter or GitHub to get started. | http://devblog.blackberry.com/2014/03/building-cordova-plugins-for-blackberry-10-on-windows/ | CC-MAIN-2018-39 | refinedweb | 1,666 | 63.39 |
this video on the SharePoint PnP YouTube Channel:
Create a new web part project
To:
- Accept the default HelloWorld as your web part name, and then select Enter.
- Accept the default HelloWorld description as your web part description, and then select Enter.
- Accept the default No javascript web framework as the framework you would like to use, and then select Enter.
At this point, Yeoman installs the required dependencies and scaffolds that. This setting can be configured in the serve.json file located in the config folder, but we do recommend using the default values.
Switch to your console, ensure that you are still in the helloworld-webpart directory, and then enter the following command:
Note
Developer certificate has to be installed ONLY once in your development environment, so you can skip this step, if you have already executed that in your environment.
gulp trust-dev-cert
Now that we have installed the developer certificate, enter the following command in the console to build and preview your web part:
gulp serve
This command executes a series of gulp tasks to create a local, node-based HTTPS server on
localhost:4321 and launches your default browser to preview web parts from your local dev environment.
Note
If you are seeing issues with the certificate in browser, please see details on installing a developer certificate from the Set up your development environment article.
SharePoint client-side development tools use gulp as the task runner to handle build process tasks such as:
- Bundling and minifying JavaScript and CSS files.
- Running tools to call the bundling and minification tasks before each build.
- Compiling SASS files to CSS.
- Compiling TypeScript files to JavaScript.
Visual Studio Code provides built-in support for gulp and other task runners. Select Ctrl+Shift+B on Windows or Cmd+Shift+B on Mac to debug and preview your web part. use SharePoint Workbench to preview and test your web part
To add the HelloWorld web part, select the add icon (this icon appears when you mouse hovers over a section as shown in the previous image). This opens the toolbox where you can see a list of web parts available for you to add. The list includes the HelloWorld web part as well other web parts available locally in your development environment.
Select HelloWorld to add the web part to the page.
Congratulations! You have just added your first client-side web part to a client-side page.
Select when the behavior is reactive.
Web part project structure
To use Visual Studio Code to explore the web part project structure
In the console, break the processing by selecting Ctrl+C. in the src\webparts\helloworld folder defines the main entry point for the web part. The web part class HelloWorldWebPart extends the BaseClientSideWebPart. Any client-side web part should extend the BaseClientSideWebPart class to be defined as a valid web part.
BaseClientSideWebPart implements the minimal functionality that is required to build a web part. This class also provides many parameters to validate and access before the HelloWorldWebPart class in the HelloWorldWebPart.ts file.:
public render(): void { this.domElement.innerHTML = ` <div class="${ styles.helloWorld }"> <div class="${ styles.container }"> <div class="${ styles.row }"> <div class="${ styles.column }"> <span class="${ styles.title }">Welcome to SharePoint!</span> <p class="${ styles.subTitle }">Customize SharePoint experiences using web parts.</p> <p class="${ styles.description }">${escape(this.properties.description)}</p> <a href="" class="${ styles.button }"> <span class="${ styles.label }">Learn more</span> </a> </div> </div> </div> </div>`; }
This model is flexible enough so that web parts can be built in any JavaScript framework and loaded into the DOM element.
Configure the Web part property pane
The property pane is defined in the HelloWorldWebPart class. The getPropertyPaneConfiguration property is where you need to define the property pane.
When the properties are defined, you can access them in your web part by using
this.properties.<property-value>, as shown in the render method:
<p class="${styles.description}">${escape(this.properties.description)}</p>
Notice that we are performing an HTML escape on the property's value to ensure a valid string. To learn more about how to work with the property pane and property pane field types, see Make your SharePoint client-side web part configurable.
Let's now add a few more properties to the property pane: a check box, a drop-down list, and a toggle. We first start by importing the respective property pane fields from the framework.
Scroll to the top of the file and add the following to the import section from
@microsoft/sp-property-pane:
PropertyPaneCheckbox, PropertyPaneDropdown, PropertyPaneToggle
The complete import section looks like the following:
import { IPropertyPaneConfiguration, PropertyPaneTextField, PropertyPaneCheckbox, PropertyPaneDropdown, PropertyPaneToggle } from '@microsoft/sp-property-pane';
Update the web part properties to include the new properties. This maps the fields to typed objects.
Replace the IHelloWorldWebPartProps interface with the following code.
export interface IHelloWorldWebPartProps { description: string; test: string; test1: boolean; test2: string; test3: boolean; }
Save the file.
Replace the getPropertyPaneConfiguration method with the following code,="${ styles.description }">${escape(this.properties.test)}</p>
To set the default value for the properties, you need to update the web part manifest's properties property bag.
Open
HelloWorldWebPart.manifest.jsonand modify the
propertiesto:
"properties": { "description": "HelloWorld", "test": "Multi-line text field", "test1": true, "test2": "2", "test3": true }
The web part property pane now has these default values for those properties.
Web part manifest
The HelloWorldWebPart.manifest.json file defines the web part metadata such as version, id, display name, icon, and description. Every web part must contain this manifest.
{ "$schema": "", "id": "fbcf2c6a-7df9-414c-b3f5-37cab6bb1280", , "supportedHosts": ["SharePointWebPart"], ", "test": "Multi-line text field", "test1": true, "test2": "2", "test3": true } }] }
Now that we have introduced new properties, ensure that you are again hosting the web part from the local development environment by executing the following command. This also ensures that the previous changes were correctly applied.
gulp serve
Preview the web part in SharePoint
SharePoint Workbench is also hosted in SharePoint to preview and test your local web parts in development. The key advantage is that now you are running in SharePoint context and you are able to interact with SharePoint data.
Go to the following URL:
Note
If you do not have the SPFx developer certificate installed, Workbench notifies you that it is configured not to load scripts from localhost. Stop the currently running process in the console window, and execute the
gulp trust-dev-certcommand in your project directory console to install the developer certificate before running the
gulp servecommand again. See details on installing a developer certificate from the Set up your development environment article.
Notice that the SharePoint Workbench now has the Office 365 Suite navigation bar.
Select the add icon in the canvas to reveal the toolbox. The toolbox now shows the web parts available on the site where the SharePoint Workbench is hosted along with your HelloWorldWebPart.
Add HelloWorld from the toolbox. Now you're running your web part in a page hosted in SharePoint!
Note
The color of the web part depends on the colors of the site. By default, web parts inherit the core colors from the site by dynamically referencing Office UI Fabric Core styles used in the site where the web part is hosted.. Notice that the
gulp serve command is still running in your console window (or in Visual Studio Code if you are using that as editor). You can continue to let it run while you go to the next article.
Note
If you find an issue in the documentation or in the SharePoint Framework, please report that to SharePoint engineering by using the issue list at the sp-dev-docs repository or by adding a comment to this article. Thanks for your input in advance.
Feedback | https://docs.microsoft.com/en-us/sharepoint/dev/spfx/web-parts/get-started/build-a-hello-world-web-part | CC-MAIN-2019-30 | refinedweb | 1,291 | 55.24 |
I included my Driver class, prompt, input, and output: my pop/peek commands aren't working properly - please help!
DRIVER:
//Lab 9
//Ashmi Patel
/* Implement and test a LinkedStack....
I included my Driver class, prompt, input, and output: my pop/peek commands aren't working properly - please help!
DRIVER:
//Lab 9
//Ashmi Patel
/* Implement and test a LinkedStack....
hihi! here's my prompt and i think i did the program right but I can't get it to work
how do i fix this program?
this is the prompt btw: Create a S tudent class that stores the name, address,...
I need help fixing this! I just keep getting a million errors urgh
/* The formula for computing the number of ways of choosing r different things from a set of n things is the
following: C(n,...
i have three different calls to use my writeVertical method, but for some reason when i run the program, this is what i'm getting:
Call 1: 3
3
Call 2: 2053
3
Call 3: 53209
3
basically, the...
Hey so here's the program I have so far - instead of printing on one line though, I need to print the integers vertically - any ideas? I included the full prompt if that helps anyone!
/* Write a...
I fixed it!
Here's my practice assignment. How should I start it?
The Comparable interface is a very commonly implemented interface. Any class that implements it must define a method called compareTo that...
fixed! thanks for offering to help!
honestly, i missed the class in which we were assigned this because of a death in the family. and now the code's due tomorrow and i've got to do this as well as like three other assignments. i'm a...
ahh i'm new! how do i use tags?
Here's my code. I need to figure out how to print the specific character in the string (say I input word5 - the the erroneous character would be the 5) that keeps the code from working.
<code>...
here's my code; i always get InputMismatch error -___-
import java.text.DecimalFormat;
import java.text.NumberFormat;
import java.io.*;
import java.util.Scanner;
public class Application{
hi everybody! i need help basically figuring out how to start this assignment using the comparable interface:
1. Create a Student class that stores the name, address, GPA, age, and major for a...
i'm a student taking a beginner java class and i need help just fixing some of my code
basically, i need help figuring out how to read delimiters to separate tokens
if anyone can help, please...
hi, i'm a student taking a beginner java class and i need help looking over my code! it's a beginner code so it shouldn't take long...any takers? | http://www.javaprogrammingforums.com/search.php?s=3d7f579a8236a9e9e925c19aa830bdeb&searchid=1203390 | CC-MAIN-2014-49 | refinedweb | 470 | 74.39 |
Created on 2011-07-25 06:36 by ats.engg, last changed 2019-08-23 06:46 by rhettinger. This issue is now closed.
URL:
section: 9.4 Random Remarks
The first sentense is bit confusing:
"Data attributes override method attributes with the same name"
Is it possible to change the sentense something like this:
"Data attributes set through instance override method attributes with the same name"
The proposed rewrite doesn't make any sense to me. Also "set through" is an reads awkwardly.
Indeed that paragraph is not really clear. I had to read it till the end ("verbs for methods and nouns for data attributes") to figure out what it was talking about. Even then it's still not clear what it's trying to say.
I *think* it means that if you have a class Foo with a method bar, and you do Foo.bar = 'data', the method will be overridden (so you won't be able to do Foo.bar for the 'data' and Foo.bar() for the method), but the opposite is also true.
Moreover I find both the suggestions for avoiding conflicts (capitalizing method names and/or using an underscore) wrong (both against PEP8). Also it never happened to me to have an attribute with the same name of a method, and I think in general it's not a common case (becase, as the paragraph says, methods are verbs and attributes nouns).
The whole thing could be rewritten to just say that an attribute name always refers to a single object, either to a method or to some "data".
I have the same reading as Ezio, and the same opinion that it’s unclear and unhelpful. +1 to saying that there is only one namespace for data attributes and methods.
That sentence is wrong to imply that there is anything special about data versus method attributes with respect to overriding -- or that attributes are special when it comes to names in a single namespace. What I think the paragraph should say, if not just deleted, is something like.
"Instance attributes hide class attributes. Instance data names are often intended to override class data names. But instance data names masking methods names is likely a mistake. This can be avoided by using verbs for method names and nouns for data names."
+1 to something like Terry’s proposal.
See also #16048.
It may be worth to noting that when creating property attribute using the property decorator, we may tend to name the method using nouns, like here:
This is when I once overided data attribute with method.
Based on Teery's comments, this patch makes the changes to the random remarks section of the class documentation
Similar changes for 2.7 branch
New changeset 483ae0cf1dcf46f8b71c4bf32419dd138e908553 by Raymond Hettinger in branch 'master':
bpo-12634: Clarify an awkward section of the tutorial (GH-15406)
New changeset f6a7f5bc50f4267cfb7fe0c4ea16d5359c009cbd by Raymond Hettinger (Miss Islington (bot)) in branch '3.8':
bpo-12634: Clarify an awkward section of the tutorial (GH-15406) (GH-15409) | https://bugs.python.org/issue12634 | CC-MAIN-2021-25 | refinedweb | 504 | 72.26 |
While having a conversation about functional programming with a fellow at work and how he has been using Scala to create an expression evaluator I realized that I have been doing some interesting work using Scala in the last months. However, I have not coded any basic algorithms from school time.
I decided to implement a sorting algorithm, let say quick sort, but I wanted to remove any trace of imperative programming from it.
In about two mins I ended up with this:
def quickSort(list: List[Int]): List[Int] = {
list match {
case Nil => Nil
case a :: Nil => List(a)
case a :: tail => quickSort(tail.filter(x=> x <= a)) ::: List(a) ::: quickSort(tail.filter(x => x > a))
}
}
This is an interesting quick sort implementation that always get the pivote as the head of the list, but it is perfect in this particular case where we only have access to this element in a very natural way.
Then I went to the internet and compared my solution to the ones online.
This is the one that came out:)
}
Well, it is the classical quick sort we can see in any programming class, but the imperative of the solution makes it ugly to write and read if we focus on Scala and its functional aspects.
On a deeper search, another solution showed up, this time, closer to what I was looking for.
def sort(xs: Array[Int]): Array[Int] = {
if (xs.length <= 1) xs
else {
val pivot = xs(xs.length / 2)
Array.concat(
sort(xs filter (pivot >)),
xs filter (pivot ==),
sort(xs filter (pivot <)))
}
}
This one is quite closer to mine. However, it lacks of pattern matching, still uses a calculated pivote, and has if else unnecessary constructions.
Now, I can get something back from what I’ve found, I might modify my last case to case a :: tail => q(tail.filter(a >)) ::: List(a) ::: q(tail.filter(a <=)). But it seems difficult to read and understand if you are not related with the language, so I kept my solution as in my first implementation.
At the end, no everyone cares about implementation details, the only thing you see is a method (or function) signature you will call like def quickSort(list: List[Int]): List[Int]. Details of the implementation are left completely behind the wall that signature creates. This kind of approach is OK if you are the API user or consumer, but if you are the one who writes this API then it is a different story. On the other hand, how those details are written is important when the product has been implemented using different tools and technology stacks. Writing clean code, code that is easy to read and modify, is as simplest as we (programmers) decide to. I believe that functional programming helps a lot in this matter because we express ideas in code very close to the way we think about them.
I’ve heard folks talking against functional programming and the shift of mind it requires. It might scare sometimes, as any other changes in life, but don’t close yourself to the change, embrace new technologies and approaches to problems without fear, be someone who is willing to learn and you will be just fine.
Read next:
Higher order functions, what are they? | https://hackernoon.com/sorting-in-scala-tails-of-functional-programming-679fb2ee4af9 | CC-MAIN-2020-45 | refinedweb | 548 | 68.3 |
No-brainer Dart helpers for boilerplate methods implementation.
class Foo extends Boilerplate { final int i; final int j Foo(this.i, this.j); // .toString, .hashCode, .operator== with no extra effort. }
What is Boilerplate?
Boilerplate saves you those cumbersome and error-prone hashCode, operator== and toString methods in Dart.
It implements them by passing the public fields values through to collection/equality.dart, which performs the equality / hashing / toString for us.
There's two variants:
-
Boilerplate uses mirrors to get the list of fields and their values. This means you need to preserve metadata of your class with
@MirrorsUsed annotations.
-
ExplicitBoilerplate requires you to specify the fields and class name explicitly. It doesn't use mirrors but some boilerplate is needed (although smaller than the methods it helps implement).
Limitations
These two classes are not designed for every possible use case, as they have the following limitations:
-
Boilerplate Only uses public fields by default,
- No special handling of reference cycles: user must avoid them responsibly,
- Not optimized for speed (but some care is taken to cache costly mirror results). If you need fast boilerplate methods, please consider implementing them with quiver-dart.
- Subsequent calls of hashCode on an object with mutable fields may yield different values (well, just as in Java),
Example with mirrors
@MirrorsUsed(targets: const[Foo, Bar], override: "*") import 'dart:mirrors'; import 'package:boilerplate/boilerplate.dart'; class Bar extends Boilerplate { final int i; Bar(this.i); } class Foo extends Bar { final int j; final String s; Foo(int i, this.j, this.s): super(i); } print(new Bar(1)); // "Bar { i: 1 }" print(new Foo(1, 2, "3")); // "Foo { i: 1, j: 2, s: 3 }" assert(new Bar(1) == new Bar(1)); assert(new Bar(1) != new Bar(2));
Example without mirrors
import 'package:boilerplate/explicit_boilerplate.dart'; class Bar extends ExplicitBoilerplate { final int i; Bar(this.i); @override get fields => { "i": i }; @override get className => "Bar"; } class Foo extends Bar { final int j; final String s; Foo(int i, this.j, this.s): super(i); @override get fields => { "i": i, "j": j, "s": s }; @override get className => "Foo"; }
Boilerplate can be mixed in
Note that Boilerplate can be safely mixed in at any level of the class hierarchy:
class A extends Boilerplate {} class B extends A with Boilerplate {} | https://www.dartdocs.org/documentation/boilerplate/0.1.0/index.html | CC-MAIN-2017-26 | refinedweb | 379 | 55.13 |
C# Custom LinkedList Console Application and Abstract Base Class and Method - Learning C# - Part 1
In our example we will create an abstract Inventory class. The Inventory class will have two functions: Category and Item. We will create a derived class, LinkedList, that uses the Inventory class as a linked list. This class will use custom methods, not built-in methods. In the LinkedList class we will create methods to add inventory items to the linked list, as well as remove them from the top of the stack or the bottom of the stack. We will also create methods to print in forward or reverse orders. And finally, we will create a method to allow searching the list by category or item and display the search results.
First, let's create a console application. In Visual Studio, select File, New, and Project. Select the Visual C# category and the Console Application option.
Name your project CSLinkedList and the solution name CSLinkedList. If you select the checkbox to "Create directory for solution," a new directory named CSLinkedList will be created, otherwise, your solution will be put inside the Location you designate.
using System; namespace CSLinkedList {
At the top of our project, you will see we need one .NET component, System. That is referenced by default when your project is created. If you open up the References list in the project explorer, you will see the list of libraries that are included by default. These add .NET framework functionality to your project.
A namespace has been created by default. Namespaces help organize your code and provide a way to create globally unique types.
public abstract class Inventory { public abstract string Category(); public string inventoryItem = string.Empty; public string Item() { return inventoryItem; } }
One of the key concepts of object oriented programming is Inheritance along with Abstraction. Inheritance allows you to create a class that allows you to reuse behaviors and definitions.
We first want to create a base class, Inventory. That means that it is a parent class. To make the Inventory class a base class, we use the keyword "abstract." This means any abstract methods defined in the base class must be implemented in the inherited or derived class. If you do not wish to require implementation, use the "virtual" keyword which means that the derived class "can" override the method with its own implementation, but does not have to.
In our above example, we have an abstract method "Category" and an implemented string and method: inventoryItem and Item(). Notice we do not define an implementation for the Category() method. The implementation must be defined by the derived class.
Our linked list will use the Item() and Category() for retrieving our Inventory item names.
In Part 2 of our series, we will setup our LinkedList derived class and define our properties and setup our constructor.
-] | https://weblogs.asp.net/nannettethacker/c-custom-linkedlist-console-application-and-abstract-base-class-and-method-learning-c-part-1 | CC-MAIN-2018-43 | refinedweb | 473 | 65.01 |
).
The code segment starts from a declaration of an entry point,
global _start. This tells the system that the application code starts at the
_start label..
That was fun. Now it's time to translate the second program into assembly; one that executes
setreuid() and
execve() to run a root shell:
section .data name db '/bin/sh', 0 section .text global _start _start: ; setreuid(0, 0) mov eax, 70 mov ebx, 0 mov ecx, 0 int 0x80 ; execve("/bin/sh",["/bin/sh", NULL], NULL) mov eax, 11 mov ebx, name push 0 push name mov ecx, esp mov edx, 0 int 0x80
Most of this code is similar to the previous example except for the
execve() function call. The same program segments are there, and the same execution method works for
setreuid(). The second parameter of
execve() is an array of two elements. It is reasonable to pass this through the stack, which first needs a zero value (
push 0), and then an address for the variable
name (
push name). This is a stack, so remember to push parameters in reverse order--LIFO, or "last in, first out." When the system call pulls its parameters out, the first will be the
name variable address, and then a zero value. A function must also know where to find its parameters. For that, this code uses the enhanced stack pointer (ESP) register, which always points to the top of the stack. The only other work is to copy the contents of the ESP register to ECX, which will be used as a second parameter when calling the
0x80 interrupt.
That assembly code works completely. However, it is useless. You can compile it with
nasm, execute it, and view the binary file in hex form with
hexdump, which is itself a shellcode. The problem is that both programs use their own data segments, which means that they cannot execute inside another application. This means in chain that an exploit will not be able to inject the required code into the stack and execute it.
The next step is to get rid of the data segment. There exists a special technique of moving a data segment into a code segment by using the
jmp and
call assembly instructions. Both instructions make a jump to a specified place in the code, but the
call operation also puts a return address onto the stack. This is necessary for returning to the same place after the called function successfully executes to continue the program's execution. Consider the code:
jmp two one: pop ebx [application code] two: call one db 'string'
At the beginning, the program execution jumps to a
two label, attached to a call to the procedure
one. There is no such procedure, in fact; however, there is another label with this name, which obtains control. At the moment of this call, the stack receives a return address: the address of the next instruction after
call. In this code, the address is that of a byte string:
db 'string'. This means that when the instructions located after
one label execute, the stack already contains the address of a string. The only thing left to do is to retrieve this string and use it appropriately. Here's that trick in a modified version of the second example, named shell.asm:
BITS 32 ; setreuid(0, 0) mov eax, 70 mov ebx, 0 mov ecx, 0 int 0x80 jmp two one: pop ebx ; execve("/bin/sh",["/bin/sh", NULL], NULL) mov eax, 11 push 0 push ebx mov ecx, esp mov edx, 0 int 0x80 two: call one db '/bin/sh', 0
As you can see, there are no more segments at all now. The string
/bin/sh, which was previously in a data segment, now comes off of the stack and goes into the EBX register. (The code also has a new directive,
BITS 32, which enables 32-bit processor optimization.)
Compile the program with
nasm:
$ nasm shell.asm
And dump its code with
hexdump:
$ hexdump -C shell
Figure 1 shows a typical shellcode. The next step is to convert it into a better format by preceding each byte with
\x, and then putting all of the code into a byte array. Now check that it works:
char code[]= "\xb8\x46\x00\x00\x00\xbb\x00\x00\x00\x00\xb9\x00\x00\x00\x00\xcd" "\x80\xe9\x15\x00\x00\x00\x5b\xb8\x0b\x00\x00\x00\x68\x00\x00\x00" "\x00\x53\x89\xe1\xba\x00\x00\x00\x00\xcd\x80\xe8\xe6\xff\xff\xff" "\x2f\x62\x69\x6e\x2f\x73\x68\x00"; main() { int (*shell)(); (int)shell = code; shell(); }
Try to compile and run it:
$ gcc -o shellApp $ ./shellApp
It works!
NULLBytes
Now the shellcode does not use the data segment and even works inside of a C tester program, but it still will not work inside a real exploit. The reason are the numerous
NULL bytes (
\x00). Most buffer overflow errors are related to C
stdlib string functions:
strcpy(),
sprintf(),
strcat(), and so on. All of these functions use the
NULL symbol to indicate the end of a string. Therefore, a function will not read shellcode after the first occurring
NULL byte.
Thus, the next task is to get rid of all null bytes in the shellcode. The idea is simple: find pieces of code that cause null bytes to appear and change them. A mature developer, in most cases, can say why machine code contains zeroes, but it's easy to use a disassembler to identify such instructions:
$ nasm shell.asm $ ndisasm -b32 shell 00000000 B846000000 mov eax,0x46 00000005 BB00000000 mov ebx,0x0 0000000A B900000000 mov ecx,0x0 0000000F CD80 int 0x80 ...
Executing this command will give the disassembled code of a program. It will contain three columns. The first column contains the instruction's address in hexadecimal form. It is not very important. The second column contains machine instructions, the same as shown with
hexdump. The third column contains an assembly equivalent. This column will give you an idea which instructions contain null bytes in a shellcode.
After a brief review of a dump contents, it becomes evident that most null bytes come from instructions that manage the contents of registers and the stack. This is no surprise; this code works in a 32-bit mode, so the computer allocates a four-byte memory space for each numeric value. Yet the code uses only values for which one byte is enough. For example, the beginning of the program has the instruction
mov eax, 70 to put the value
70 into the EAX register. In the shellcode, this instruction looks like
B8 46 00 00 00.
B8 is the machine code of the instruction
mov ax, and
46 00 00 00 is the value
70 in hexadecimal notation, padded with zeroes to the size of four bytes. Many null bytes appear for similar reasons.
The solution for this problem is very simple. It's enough to remember that 32-bit registers (EAX, EBX, and other registers whose names begin with "E," for "enhanced") can be represented by 8-bit and 16-bit registers. It's enough to use a 16-bit register AX instead and even its low and high parts, AL and AH, which are one-byte registers. Just replace the instruction
mov eax, 70 with
mov al, 70 in all such places.
It's important to be sure that the rest of the EAX register space does not contain any garbage; that is, the code must put a zero value into EAX without using any null bytes. The fastest and most effective way of doing this is with the
XOR logical function:
xor eax,eax will give the EAX register a zero value.
Even after these modifications, the shellcode still contains zero bytes. The debugger shows that now the
jmp instruction causes trouble:
E91500 jmp 0x29 0000 add [bx+si],al
The trick is to use a short jump instruction instead of the usual
jmp short. In short programs with simple structure these instructions work in absolutely the same way, and the machine code in this case will not contain zero bytes.
You may now think that this shellcode is ideal, but at the end there is still one remaining zero byte. This zero byte occurs because the string
bin/sh has a null byte indicating the end of the string. This is a definite requirement, because otherwise
execve() will not work properly. You cannot just remove this byte. You can use one more assembler trick: at the compiling and binding stage, store any other symbol instead of zero, and convert it into zero while processing the program:
jmp short stuff code: pop esi ; address of string ; now in ESI xor eax,eax ; put zero into EAX mov byte [esi + 17],al ; count 18 symbols (index starts from zero) ; and putting a zero value there (EAX register equals to zero) ; The string will become This is my string0 stuff: call code db 'This is my string#'
After using this trick, the shellcode will contain no null bytes:
BITS 32 ;setreuid(0, 0) xor eax,eax mov al, 70 xor ebx,ebx xor ecx,ecx int 0x80 jmp short two one: pop ebx ; execve("/bin/sh",["/bin/sh", NULL], NULL) xor eax,eax mov byte [ebx+7], al push eax push ebx mov ecx, esp mov al,11 xor edx,edx int 0x80 two: call one db '/bin/sh#'
After compiling this code, you can now see that it no longer contains null bytes. It's worth mentioning that the problem may arise not only because of null bytes, but because of other special symbols; for example, the end-of-line symbols, in some cases.
A buffer overflow exploit tries to write beyond a buffer on the stack so that when the function returns, it will jump to some code that most often starts a shell instead of returning to the function that called the current function. To understand how it works, you have to know how the stack works and how functions are called in C. The stack starts somewhere in the top of memory and the stack pointer moves down as the program pushes things onto the stack and back up as the code pops them off again. Given the C function:
void sum(int a,int b) { int c = a + b; }
The stack inside of
sum() will look like this:
b a <return address> <ebp contents> c
The computer saves the contents of the EBP register to a stack before calling the
sum() function because it will be used inside of the function, so it can be restored from the stack after returning from the function. The goal of an exploit is to change the return address. This is not possible in this case, because no matter what
a and
b are, the result cannot overflow
c into the EBP contents on the stack and the return address. If
c were a string instead, it might be possible to write past it. Here is an overflow-exploitable program:
#include <stdio.h> void sum(int a,int b) { int c = a + b; } void bad_copy_string(char *s) { char local[1024]; strcpy(local,s); printf("string is %s\n",local); } int main(int argc, char *argv[]) { sum(1,2); bad_copy_string(argv[1]); }
The function
copy_string makes a copy of the first command-line parameter of the program into a buffer of a fixed size and then prints it out. This might look stupid, but something like this is quite common for programs that need to perform actions based on external input, either from the command line or a socket connection.
Compile this victim code and run it:
% gcc -o overflow overflow.c % ./overflow 'All seems fine' string is All seems fine
Everything seems indeed right, but call it with a parameter longer than 1024 characters:
% ./overflow `perl -e 'print "a" x 2000'` string is aaaaaaaaaaaaaaaa bash: segmentation fault (core dumped) ./overflow `perl -e 'print "a" x 2000'`
The Perl script above generates a string of 2000
a symbols. Now run the core file through
gdb:
% gdb ./overflow core". Core was generated by `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaa'.61616161 in ?? ()
The segmentation fault happened at the address
0x61616161--which is the string
aaaa in hexidecimal. This means that the exploit can get the program to jump to an arbitrary address depending on what it receives as a parameter. It would be nice to make it jump to the beginning of the local buffer on the stack--but what is the address of the stack right now?
gdb knows:
(gdb) info register esp esp 0xbffff334 0xbffff334
Now, the only other thing necessary to get the code to execute is the previously written shellcode. You can take the ready shell app and run an overflow victim program from it:
#include <stdlib.h> static char shellcode[]= "\xeb\x17\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b\x89\" "\xf3\x8d\x4e\x08\x31\xd2\xcd\x80\xe8\xe4\xff\xff\xff/bin/sh#"; #define NOP 0x90 #define LEN 1024+8 #define RET 0xbffff334 int main() { char buffer[LEN]; int i; /* first fill up the buffer with NOPs */ for (i=0;i<LEN;i++) buffer[i] = NOP; /* and then the shellcode */ memcpy(&buffer[LEN-strlen(shellcode)-4],shellcode,strlen(shellcode)); /* and finally the address to return to */ *(int*)(&buffer[LEN-4]) = RET; /* run program with buffer as parameter */ execlp("./overflow","./overflow",buffer,NULL); return 0; }
The
shellcode[] symbol array contains the shellcode without any null bytes. It may differ slightly, depending on OS conditions. The
main() function starts with a buffer that is the size of the local variable (1024 bytes) plus eight bytes for EBP and the return address. As the buffer is longer than the shellcode, the beginning needs a bunch of do-nothing machine code (
NOP) operations. Then the function copies in the shellcode, and finally, the address of the beginning of the buffer. Now compile and run it:
% gcc -o exploit exploit.c % ./exploit string is <lots of garbage>
Yahoo! A new Bourne shell opened! This is, of course, not much fun as the overflow program runs as yourself, but if it were a
SUID root program, then you would now have a root shell. Try that:
% chmod +s overflow % su # chown root overflow # exit % ./exploit string is <lots of garbage> sh# whoami root
That's it! You became a root user on this machine without permission. If the victim machine is a remote one, this will not help. More advanced shellcode creates a listening socket and redirects
stdin and
stdout to it before calling
execve /bin/sh--that way, you don't need a shell account on the machine and can simply direct
telnet or
nc at the machine and port to get a root shell.
In this article, I have reviewed the most important tricks that will be needed in writing shellcodes and using them in exploit. The key to success is a good understanding of the operating system under which the shellcode will run, as well as assembly programming. There is nothing complicated, though. It's also worth mentioning that you should only use these mentioned techniques for legal purposes and with the knowledge and consent of the machine's owner.
Buffer Overflow Attacks
Peter Mikhalenko works in Deutsche Bank as a business consultant.
Return to the Linux DevCenter. | http://www.linuxdevcenter.com/lpt/a/6590 | CC-MAIN-2014-41 | refinedweb | 2,573 | 67.89 |
A result of this operator overloading is the translation of the SQL-oriented syntax into calls to the regular interface in Listing One. This also explains why Examples 1(b) and 1(c) were separated. Gathering the binding information is executed while the temporary object is "traveling" through the expression, swallowing
IntoTypePtr and
UseTypePtr objects, whereas the actual binding of all variables is performed at the end of the expression, when the temporary object is destroyed.
More Insights
White Papers
- Challenging Some of the Myths About Static Code Analysis
- Gartner Report: IT leaders Can Benefit from Disruptive Innovation in the Storage Industry
ReportsMore >>
Webcasts
- High Performance Computing in Finance: Best Practices Revealed
- New Technologies to Optimize Mobile Financial Services
The type of temporary object also has overloaded inserter
operator<<. Thanks to this, the full expression can contain many insertions and commas. Every insertion is delegated to the underlying string stream object (this also means that you can build the query from the objects of your own types, if they are
IOStreams aware) and every comma accumulates the binding info for the consecutive placeholders.
What I've described covers the case when the query is to be executed exactly once. Of course, database access needs can be rarely satisfied by the possibility of only executing one-time queries, so there is also a facility to just prepare statements for execution, leaving it up to you to execute the query and fetch the consecutive rows (if it is a "select" query).
The important thing is that the implementation of
into and
use functions (Listing Two) do not do muchthey only create objects of special types, which need to follow a given interface. Depending on the type of the variable that is bound to the SQL placeholder, a specific specialization of the
IntoType<> or
UseType<> templates is used. This makes it an extremely extensible mechanism, which can be used to plug in user-defined types into the library. The only thing that is needed for every supported variable type is to write a specialization for the
IntoType<> and
UseType<> templates (the library itself is a good example of how to do this). Of course, the library contains specializations for commonly used types.
Examples
Listing Three presents examples that put the library to work. Of course, to run this code you need to provide true database credentials (service name, username, and user password) and prepare the database tables that make the example SQL statements valid.
Listing Three
// example program #include "soci.h" #include <iostream> using namespace std; using namespace SOCI; int main() { try { Session sql("DBNAME", "user", "password"); // example 1. - basic query with one variable used int count; sql << "select count(*) from some_table", into(count); // example 2. - basic query with parameter int id = 7; string name; sql << "select name from person where id = " << id, into(name); // example 3. - the same, but with input variable sql << "select name from person where id = :id", into(name), use(id); // example 4. - statement with no output id = 8; name = "John"; sql << "insert into person(id, name) values(:id, :name)", use(id), use(name); // example 5. - statement used multiple (three) times Statement st1 = (sql.prepare << "insert into country(id, name) values(:id, :name)", use(id), use(name)); id = 1; name = "France"; st1.execute(1); id = 2; name = "Germany"; st1.execute(1); id = 3; name = "Poland"; st1.execute(1); // example 6. - statement used for fetching many rows Statement st2 = (sql.prepare << "select name from country", into(name)); st2.execute(); while (st2.fetch()) { cout << name << '\n'; } } catch (exception const &e) { cerr << "Error: " << e.what() << '\n'; } }
A test driver accompanies the library (available electronically), which is self-contained in that it prepares the required database structures by itself. You may find this test driver to be a valuable source of information about what can be really done with the library.
Afterthought: Syntax-First Library Development
One of the biggest contributions of the eXtreme Programming (XP) method is its focus on test-driven development. As a rule of thumb, in XP the test unit is written before the code that is supposed to make the test pass. The result is code that exactly meets its requirements. It can be beneficial to apply a similar concept on another level of code design.
When implementing a library that is meant to provide some particular functionality, the key design problem is to choose the interface of the library. Sadly, most libraries seem to be developed in the "back-end to front-end" direction, where some low-level concepts (network connectivity, database access, filesystem operations, GUI, and the like) are simply wrapped into high-level language structures, hiding some of the underlying complexity but still revealing the fundamental low-level conventions. Listing One presents two classes that together can be considered a poor man's database library. Such libraries have little added value and in some extreme cases can even be a disservice to the entire language community by suggesting that the ability of the high-level language is limited to only provide simple wrappers for what is always available to the C programmers via the low-level APIs. I have heard such claims about the C++ language made by C programmers. The library I present here is based on an approach I call "syntax-first library development," which is exactly the reverse of this scenario.
The way I built the library was to set up the intended syntax before writing a single line of code. After that, I went through a head-scratching and pen-biting phase to come up with the implementation that makes this syntax possible. Granted, the library employs tricks that some programmers may consider to be obscure, but those tricks are meant to be hidden from library users.
The thrust of the library design is similar to the test-first development proposed by the XP method and, by analogy, the syntax selected before implementing the library itself can be considered to be documentation for the library interface in the same way that test units are documentation for the code requirements in XP. Interestingly, syntax-first library development and test-first development can be used together in a library design and development, leading to libraries that are both expressive and well tested.
Maciej is a Ph.D. student at the Institute of Computer Science, Warsaw University of Technology. You can contact him at. | http://www.drdobbs.com/cpp/a-simple-oracle-call-interface/184405930?pgno=2 | CC-MAIN-2015-32 | refinedweb | 1,060 | 50.97 |
Oct 18, 2018 09:02 AM|ellen0107|LINK
Hi,
I have some data which requested from rest api and want to write in JSON format and then create a JSON file. (Im using .NET CORE 2.0).
parentid, parentname, productid, productname, childid, childname, itemid and itemname. itemid and itemname are in array and all need to be printed;
the relation between data are like below:
{parent: id, name; {product: id, name; {child: id, name; {item: [id, name]; [id, name]; [id, name]; [id, name]; } } } }
Thank you!
All-Star
195614 Points
Moderator
Oct 18, 2018 10:28 AM|Mikesdotnetting|LINK
JSON is plain text, so if you want to create a .json file, you can use File.WriteAllText:
You just need to pass in the path of the file and the content (json).
var json = dataFromSomeAPI;
File.WriteAllText(@"C:\jsonfolder\data.json", json);
All-Star
45489 Points
Microsoft
Oct 19, 2018 03:27 AM|Zhi Lv - MSFT|LINK
Hi ellen0107,
ellen0107
How to write in JSON format and create a JSON file in C#?
I have some data which requested from rest api and want to write in JSON format and then create a JSON file. (Im using .NET CORE 2.0).
From your description, I suppose you want to convert the data object to json format, then export it to a JSON file. If that is the case, please refer to the following code:
reference:
using Newtonsoft.Json; using System.IO;
//convert object to json string.
string json = JsonConvert.SerializeObject(data); string path = @"D:\temp\jsondata.json"; //export data to json file. using (TextWriter tw = new StreamWriter(path)) { tw.WriteLine(json); };
Best regards,
Dillion
2 replies
Last post Oct 19, 2018 03:27 AM by Zhi Lv - MSFT | https://forums.asp.net/t/2148139.aspx?How+to+write+in+JSON+format+and+create+a+JSON+file+in+C+ | CC-MAIN-2021-39 | refinedweb | 287 | 75.3 |
Real newbie here. I need to write a 'P' character with my Leonardo chip controller. I have been successful but I need a tweek or new method. here is my code:
#include <Keyboard.h>
void setup() {
// make pin 2 an input and turn on the
// pullup resistor so it goes high unless
// connected to ground:
pinMode(2, INPUT_PULLUP);
Keyboard.begin();
}
void loop() {
//if the button is pressed
if (digitalRead(2) == LOW){
//Send an ASCII 'P',
Keyboard.write(80);
}
}
The problem is I'm getting multiple writes every time I activate the pull down. I'm getting a hundred 'P's . I just want one. Please help. | https://forum.arduino.cc/t/keyboard-write-issue/916369 | CC-MAIN-2021-49 | refinedweb | 106 | 77.43 |
What is the use of the
yield keyword in Python? What does it do?
For example, I’m trying to understand this code1:?
1. This piece of code was written by Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: Module mspace.
0
To understand what
yield does, you must understand what generators are. And before you can understand generators, you must understand, a kind of iterable you can only iterate over once. Generators_generator(): ... mylist = range(3) ... for i in mylist: ... yield i*i ... >>> mygenerator = create_generator() # create a generator >>> print(mygenerator) # mygenerator is an object! <generator object create_generator.
Then, your code will continue from where it left off each time
for uses the generator.
Now the hard part:
The first time the
for calls the generator object created from your function, it will run the code in your function from the beginning until it hits
yield, then it’ll return the first value of the loop. Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. This will continue until the generator is considered empty, which happens when the function runs without hitting
yield. That can be because the loop has come to an end, or because you no longer satisfy an
"if/else".
Your code explained
Generator:
# Here you create the method of the node object that will return the generator def _get_child_candidates(self, distance, min_dist, max_dist): # Here is the code that will be called each time you use the generator object: # If there is still a child of the node object on its left # AND if the distance is ok, return the next child if self._leftchild and distance - max_dist < self._median: yield self._leftchild # If there is still a child of the node object on its right # AND if the candidate's. It’s a concise way to go through all these nested data even if it’s a bit dangerous since you can end up with an infinite loop. In this case,
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))exhaust may reasons why Python is so cool. But this is another story, for another question…
You can stop here, or read a little bit to see an advanced use of # The ...
Note: For Python 3, use
print(corner_street_atm.__next__()) or
print(next(corner_street_atm)) without creating another list?
Then just
import itertools.
An example? Let’s see the possible orders of arrival for a four.
There is more about it in this article about how
for loops work.
7
Shortcut to understanding
yield
When you see a function with
yield statements, apply this easy trick to understand what will happen:
- Insert a line
result = []at the start of the function.
- Replace each
yield exprwith
result.append(expr).
- Insert a line
return resultat the bottom of the function.
- Yay – no more
yieldstatements! Read and figure out code.
- Compare function to the original definition.
This trick may give you an idea of the logic behind the function, but what actually happens with
yield is significantly different than what happens in the list based approach. In many cases, the yield approach will be a lot more memory efficient and faster too. In other cases, this trick will get you stuck in an infinite loop, even though the original function works just fine. Read on to learn more…
Don’t confuse your Iterables, Iterators, and Generatorsand the loop body is executed. If an exception
StopIteration:
- Built-in lists, dictionaries, tuples, sets, files.
- User-defined classes that implement
__iter__().
- Generators. at the very next line after the
yield it previously returned from, executes the next line of code, in this case, a.
Why Use Generators?
Usually, you can write code that doesn’t use generators but implements the same logic. One option is to use the temporary list ‘trick’ I mentioned before. That will not work in all cases, for e.g. if you have infinite loops, or it may make inefficient use of memory when you have a really long list. The other approach is to implement a new iterable class SomethingIter that keeps the state in instance members and performs the next logical step in it’s
next() (or
__next__() in Python 3) method. Depending on the logic, the code inside the
next() method may end up looking very complex and be prone to bugs. Here generators provide a clean and easy solution.
3
Think
4 | https://coded3.com/what-does-the-yield-keyword-do/ | CC-MAIN-2022-40 | refinedweb | 749 | 65.32 |
The base class for all minidump streams. More...
#include "llvm/ObjectYAML/MinidumpYAML.h"
The base class for all minidump streams.
The "Type" of the stream corresponds to the Stream Type field in the minidump file. The "Kind" field specifies how are we going to treat it. For highly specialized streams (e.g. SystemInfo), there is a 1:1 mapping between Types and Kinds, but in general one stream Kind can be used to represent multiple stream Types (e.g. any unrecognised stream Type will be handled via RawContentStream). The mapping from Types to Kinds is fixed and given by the static getKind function.
Definition at line 27 of file MinidumpYAML.h.
Definition at line 28 of file MinidumpYAML.h.
Definition at line 39 of file MinidumpYAML.h.
Create a stream from the given stream directory entry.
Definition at line 463 of file MinidumpYAML.cpp.
References File, M, move, llvm::Expected< T >::takeError(), llvm::dwarf::toStringRef(), and llvm::minidump::Directory::Type.
Create an empty stream of the given Type.
Definition at line 97 of file MinidumpYAML.cpp.
References llvm_unreachable.
Referenced by llvm::MinidumpYAML::Object::create().
Get the stream Kind used for representing streams of a given Type.
Definition at line 70 of file MinidumpYAML.cpp.
Definition at line 42 of file MinidumpYAML.h.
Definition at line 43 of file MinidumpYAML.h. | https://www.llvm.org/doxygen/structllvm_1_1MinidumpYAML_1_1Stream.html | CC-MAIN-2021-43 | refinedweb | 220 | 70.39 |
A library to interact with the cjdns Admin Interface
Project description
cjdnsadmin For Python 3admin
But you could also clone it and run:
python setup.py install
Once it’s installed, you’ll find peerStats and cexec installed in your $PATH, and the cjdnsadmin library available for import.
Usage
Usage is simple. First, import:
import cjdnsadmin
Then, connect to the running cjdns instance. There are two ways to do this. The normal way is to use the ~/.cjdnsadmin file:
cjdns = cjdnsadmin.connectWithAdminInfo()
Or, if you have the IP, port and password and wish to ignore the ~/.cjdnsadmin file for whatever reason:
cjdns = cjdnsadmin:
cjdnsadmin.PublicToIp6('1rfp3guz4jjhfu4dsu5mrz68f7fyp502wcttq6b78xdrjhd4ru80.k').
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/cjdnsadmin/ | CC-MAIN-2018-26 | refinedweb | 130 | 68.67 |
| Join
Last post 08-28-2007 12:43 AM by Max Kukartsev. 23 replies.
Sort Posts:
Oldest to newest
Newest to oldest
I have a little test composite control I'm writing. Later it will have more controls... but for right now I just have a simple label that I want to render and I've made it's Text property accessible through the properties window. However I'm seeing something strange. When I go into design mode, then go to the properties window and set the LabelText property, the value I type disappears from the property window but it DOES appear when I switch back to source from design view. So the change is being made but you cannot see the value in the properties window for design time. Anyone have an idea as to why this is occuring? The control is listed below. This is being developed in .NET 2.0
[
{
}
EnsureChildControls();
Controls.Clear();
Controls.Add(testLabel);
testLabel.RenderControl(writer);
Hi,
The problem is that you assign the properties directly to the child control, and ensure its creation every time you get or set them, but when it's created over again, the properties you've set are reset.You should store the properties of child controls as your own in viewstate (works at design-time and runtime), and assign them in the CreateChildControls method, like in the following code definition. Also, as long as you add your label to your Controls collection, CompositeControl (or WebControl) will take care of rendering it for you, and you don't need to render it yourself. If you decide to anyway, then don't call the base implementation, because it will render it again, resulting in duplicated text. Finally, you shouldn't worry about setting the ID property of your label, because you derive from CompositeControl, which implements INamingContainer, making sure the ID's of your child controls are unique from the other controls on the page.
This is how it should be:
[DefaultProperty("LabelText"),
ToolboxData("<{0}:TestControl runat=\"server\">
</{0}:TestControl>")]
public class TestControl
: CompositeControl {
private
Label testLabel;
public
TestControl() {
}
[Bindable(true),
DefaultValue(""),
Description("Text to display on the label")]
public
string LabelText {
get
{
return
(string)ViewState["LabelText"]
?? String.Empty;
}
set
{
ViewState["LabelText"] = value;
}
}
protected
override void
CreateChildControls() {
Controls.Clear();
testLabel = new Label();
//not
necessary to manually set ID
testLabel.ID = "testLabel";
//assign
property here
testLabel.Text = LabelText;
Controls.Add(testLabel);
}
protected
override void
Render(HtmlTextWriter writer) {
//either
manually render the label
testLabel.RenderControl(writer);
//or call
the base implementation to render it for you
//base.Render(writer);
}
}
Hope this helps.
That did the trick! Wierd thing is I had to copy and paste the whole block you posted back into my code for it to work... when I just copied out the portions that changed it still wasn't working right. EIther that or it cached an old version. Either way it's working now! Hopefully I can duplicate this for other controls and additional properties I want to set for them... should every property be set in viewstate? What about using ControlState? I heard that if you use viewstate and viewstate is turned off it could affect the controls.
Also, when then are you required to call EnsureChildControls()?? Doesn't this ensure that your controls are created/updated when changes are made to properties that affect the underlying childcontrols.
What is the "??" double question mark notation.
You don't have to store a property in viewstate; it's just a best practice because viewstate preserves the data across post-backs and works at design-time, too. With a private variable, the data will be erased after each post-back, which is generally not the desired effect.You can use control state, yes, and control state is viewstate essentially, but it can't be disabled. Also, to discourage developers from stuffing it up with too much data like a state bag, data is added with Pair objects. So, my advice is to use control state for critical data that absolutely must be preserved in view state at all times.
The (A ?? B) is a shorthand notation for A == null ? B : A. That is, - whatever is before the ?? is what to check for a null reference, and what is after, - what to return if it is a null reference. So, in this case, the value retreived from view state is checked for a null reference, and if it is null, String.Empty is returned. I find performing the check unnecessary, but actually it is a practice done by Visual Studio 2005, and when one creates a new web custom control in VS2005 in a library, the generated code automatically includes a Text property which upon retrieval checks the returned value from view state for null and if it is, returns String.Empty.
EnsureChildControls() is usually called in an overriden DataBind() method if the control used templates to render portions of its interface, and in CompositeControl, it's called upon accessing the Controls collection, to make sure that the controls have been created first.
If you're interested in providing design time support with template editing (overriding TemplateGroups property in a control designer and calling SetViewFlags(ViewFlags.TemplateEditing, true) in the Initialize(...) method) I'll tell you one more thing I think you should know about EnsureChildControls(), and that's that when you finish editing templates through the smart tag, the control isn't automatically updated because the call to DataBind() doesn't re-create the controls but the call to EnsureChildControls() considers them as already created and does nothing.To solve this problem, instead of calling EnsureChildControls(), call CreateChildControls() and set ChildControlsCreated to true. I saw this technique in the MSDN library, although I wasn't sure what exactly it does.
Cheers....I hope you don't mind the rather lengthy response.
No not at all. I appreciate the detailed response! You've been quite helpful! I'm not currently making any databound controls but I'll definitely have to keep that in mind. I'm essentially attempting to construct a library of components that my company can use throughout it's sites. I've found that we have a ton of places where we use email text boxes, phone numbers, addresses...etc... I'm working on a phone number control now with two validators built in. I tried to use the ?? notation in regard to a boolean type property but it didn't like it. Apparently it doesn't like that for boolean types. I ended up using the following.
public bool Required { get { object b = ViewState["Required"]; return (b == null) ? true : (bool)b; } set { ViewState["Required"] = value; } }I've been successful in providing design support for most of the properties that I want public. I will say one thing I did find out... when you have validators built into the composite controls you DO seem to have to set the ID of the control you wish to validate, otherwise the validators don't get an id in the Controltovalidate property. I know you said you didn't have to explicitly set the ID of the controls in createchildcontrols when you inherit from CompositeControl, but I think that is only if you don't have any other controls renferencing others within the code.
protected override void CreateChildControls() { Controls.Clear(); phoneLabel = new Label(); phoneLabel.Text = LabelText; //phoneLabel.ID = "phoneLabel"; Controls.Add(phoneLabel); phoneTextBox = new TextBox(); phoneTextBox.Text = Text; phoneTextBox.ID = "phoneText"; <--- HAD TO SET THIS phoneTextBox.Attributes.Add("onblur", "formatPhone(this)"); Controls.Add(phoneTextBox); phoneRequired = new RequiredFieldValidator(); phoneRequired.Enabled = Required; phoneRequired.ErrorMessage = RequiredErrorMessage; phoneRequired.Text = RequiredErrorText; phoneRequired.ValidationGroup = ValidationGroup; phoneRequired.ID = "valReqPhoneNumber"; phoneRequired.ControlToValidate = phoneTextBox.ID; <-- Otherwise this failed at runtime //phoneRequired.Text = "*"; Controls.Add(phoneRequired); phoneRegex = new RegularExpressionValidator(); phoneRegex.Enabled = FormatRequired; phoneRegex.ErrorMessage = FormatErrorMessage; phoneRegex.Text = FormatErrorText; phoneRegex.ValidationGroup = ValidationGroup; phoneRegex.ID = "valRegexPhoneNumber"; phoneRegex.ControlToValidate = phoneTextBox.ID; //phoneRegex.Text = "*"; phoneRegex.ValidationExpression = @"(((\(\d{3}\) ?)|(\d{3}-))?\d{3}-\d{4})|\d{10}|\d{7}"; Controls.Add(phoneRegex); // }
I hope if I have any other questions you might be available for answers. You know... just when I think I've figured these controls out, I go to build a new one and seem to run into different problems. Thanks again for all your help!
Ok here is a problem. I figured it would be cool to be able to set the regular expresison in the property window... however I would like the one I've used in the post above to be the default
[Bindable(true), Category("Behavior"), DefaultValue(@"(((\(\d{3}\) ?)|(\d{3}-))?\d{3}-\d{4})|\d{10}|\d{7}"), Description("Regular expression used to format the phone number") ] public string FormatExpression { get { return (string)ViewState["FormatExpression"] ?? @"(((\(\d{3}\) ?)|(\d{3}-))?\d{3}-\d{4})|\d{10}|\d{7}"; } set { ViewState["FormatExpression"] = value; } }
This is what I've used in the property, however the value of the regular expression does not want to appear in the property window at design time, why is it that this won't but the boolean values will????
AH HA! I figured it out...
[Bindable(true), Category("Behavior"), DefaultValue(@"(((\(\d{3}\) ?)|(\d{3}-))?\d{3}-\d{4})|\d{10}|\d{7}"), Description("Regular expression used to format the phone number") ] public string FormatExpression { get { object b = ViewState["FormatExpression"]; return (String.IsNullOrEmpty((string)b)) ? @"(((\(\d{3}\) ?)|(\d{3}-))?\d{3}-\d{4})|\d{10}|\d{7}" : (string)b; //return (string)ViewState["FormatExpression"] ?? String.Empty; } set { ViewState["FormatExpression"] = value; } }
For your boolean problem, it makes perfect sense for the code to not work, because System.Boolean is a value type, meaning it can't be null. When you cast a null reference of a reference type into a value type, an exception occurs. To make it work, cast it to a type of (bool?), which is a nullable type, creating a wrapper generic structure of type System.INullable<T>, which you can use to check for a null reference just like strings and other reference types.
So, this is the correct way:
(bool?)ViewState["..."] ?? (false | true)
-----------------------
If you want a way to provide a default value for a property, here's an idea
(of course just having the DefaultValueAttribute isn't enough; it simply instructs ASP.NET to not persist the value when it's default (which you can programmatically check through PropertyDescriptor.ShouldSerializeValue method))
you can assign it in the constructor, so the value returned won't be null just because it was unassigned. However, if a user programmatically assigns a null value to the property, null is what the user will get. Depends on the behavior you want.
Ahhh, I keep forgetting about those newer nullable types for value types. I did discover through trial and error that any property having a defined defaultvalueattribute would not be serialized to the asp.net code in design time if it was set to that default value. Would you recommend any good books on server/composite control development. I'd like to learn more about this stuff. Things like this PropertyDescriptor.ShouldSerializeValue attribute is news to me. I'm sure there are many of those that would make life easier.
Also as far as inheritance goes... I did try something because most of the basics on my control library will be similar, ie... many of the text box controls will have a required field validator, and a regex validator so I was thinking about putting them in a base class which itself inherits from CompositeControl, then extend that base class to say (phonenumbertxtctrl,emailtxtctrl etc...) However when I did that and tried adding it to the toolbox the baseclass was added to the tool box as well.. is there a way to eliminate that behavior. It's either that or I build the base class to be used as a generic control to be used by itself, then extend it properly to become more specific for phone numbers and emails etc...
Well, there are several books. A good book on simply C# (not specific to ASP.NET but has a little ASP.NET at the end) is "Pro C# 2005 and the .NET 2.0 Platform" by Andrew Troelsen (), which actually has a newer edition ("Pro C# with .NET 3.0, Special Edition"). A specialized book on ASP.NET 2.0, and has a chapter on user controls, one on custom controls, and one about design-time support is "Pro ASP.NET in C# 2005, Special Edition" () SE includes an extra chapter or two devoted to AJAX and client-side script behavior. That is the book which is useful for learning object models, and it's actually where I read to assign property values to child controls in CreateChildControls(), not directly and other stuff.The problem is, any books devoted to ASP.NET control development are for .NET 1.x. But come to think of it, the book "Pro ASP.NET in C# 2005" largely focuses on the various improvements in ASP.NET 2.0, and so to supplement it with one of the dedicated ASP.NET control development books would be sufficient. The frustrations I've encountered with the older books is that a some amount of their code is obsolete. But I'm sure once you look into MSDN it'll show you the "non-obsolete" alternative.Two dedicated books on ASP.NET controls are "Building ASP.NET Server Controls" () and "Developing Microsoft ASP.NET Server Controls and Components" ().------------------Easy! If you want to hide a control from the toolbox, just decorate it with the ToolBoxItemAttribute(false), which also prevents it from appearing in ASPX intellisense (a drop-down list of available controls when you start typing). However, the user will still be able to instantiate it programmatically, so I'd also suggest making the constructor non-public, if you haven't done so already...
Hope I was of assistance...
Now I'm really confused... I put everything in ViewState as you stated and it works great in design time. I can set the required properties no problem.... however... I can't get the Text property at runtime. it comes back as blank on postback. ie... I put the control on a page, put in a phone number, then on postback the Text property is blank... I posted the whole control below. Everything else is working great... if I set the required field to true or the regex validator to true and give them messages it reacts on submit perfect... but the actual text value comes back as blank! I was trying to figure this out for over an hour with no results. Any ideas?
using
Level =
#endregion
RenderJavaScript();
phoneTextBox.Text = Text;
Controls.Add(phoneTextBox);
phoneRequired.Enabled = Required;
phoneRequired.ErrorMessage = RequiredErrorMessage;
phoneRequired.Text = RequiredErrorText;
phoneRequired.ValidationGroup = ValidationGroup;
phoneRequired.ControlToValidate = phoneTextBox.ID;
Controls.Add(phoneRequired);
phoneRegex.Enabled = FormatRequired;
phoneRegex.ErrorMessage = FormatErrorMessage;
phoneRegex.Text = FormatErrorText;
phoneRegex.ValidationGroup = ValidationGroup;
phoneRegex.ControlToValidate = phoneTextBox.ID;
phoneRegex.ValidationExpression = FormatExpression;
Controls.Add(phoneRegex);
AddAttributesToRender(writer);
phoneTextBox.ApplyStyle(phoneStyle);
}
phoneTextBox.RenderControl(writer);
phoneRequired.RenderControl(writer);
phoneRegex.RenderControl(writer);
sb.Append(
I see the problem. It's that when the Render() method is called, the Text property of the child control (not the one stored in ViewState) is still set to its old value. Since the properties of the child controls are assigned in CreateChildControls(), you should call CreateChildControls() the first thing in your Render() method.
That didn't seem to work... matter of fact that actually reset the value to nothing on the front end. Before that it was being carried though in the viewstate I just couldn't access the method in the page load. Funny part is I read that I don't even need to override and use the render method because the controls know how to render themselves unless I want to give it special formatting. It actually still renders fine without the method... still doesn't work though. Something is screwy with the viewstate usage because if I set the "Text" property at design time then view the page the value doesn't show up when the page initially comes up, but if I just postback the page using a simple asp:button and set a label to the value of the Text property, then the label gets set fine, AND the value appears in the textbox control in the composite control on the web page. Such strange behavior... I'm missing something in the way this thing renders... some order of operation.
Advertise |
Ads by BanManPro |
Running IIS7
Trademarks |
Privacy Statement
© 2009 Microsoft Corporation. | http://forums.asp.net/p/1149612/1875800.aspx | crawl-002 | refinedweb | 2,746 | 58.48 |
Use glom
Dictionaries are really cool. They’re very powerful and a large part
of Python: so much stuff is built on the
dict type. They’re generally
quite easy to work with, but what about when you have nested dictionaries
that contain lists that contain dictionaries that contain certain keys
only in some cases and sometimes there’s another list involved?
Elasticsearch Bulk API Responses
The Elasticsearch Bulk API is an efficient way to perform actions on multiple documents instead of using multiple calls.
Because it’s doing more than one action, the response from this call is
a bit complex. If the request body was something that could be processed at all,
the HTTP response code will be 200. Beyond that, you need to look at the JSON
response for an
errors key at the top level. This lets you know if any of the
individual
items were unsuccesful.
That
items list is where things can get tricky. Here’s a stripped down example of
a 200 response that includes one error and one success for two documents we sent
with the
index action:
{"errors": true, "items": [ {"index": {"_id": }}, {"index": {"_id": "GKbbqHIBgo1082mzZuO3", "result": "created", "status": 201}}]}
What I want to do with this response is turn it into a
dict where the
_ids are keys and their values are a dictionary of consistent keys
and values regardless of the success of the action. This way I can more easily
do something with the failures and move on from the successes. That’s kind of tricky!
Each
index includes the
_id that we’ll need and a
status,
but beyond that it depends on how the action fared as to what else it contains.
My plan is to make all
_id keys in my
dict include an
error which
will either be
None if it worked, or the relevant
error details if it didn’t.
That gives me one place to look to know success or failure for each item.
Write it ourselves
This isn’t initially that hard to solve. If
error exists we use it,
if not we use the default
None from
dict.get.
def parse_body(body): if (items := body.get("items")) is None: raise Exception("No items in this response") result = {} for item in items: index = item["index"] result[index["_id"]] = { "error": index.get("error"), # None by default "status": index["status"], } return result
…which returns:
{}, 'GKbbqHIBgo1082mzZuO3': {'error': None, 'status': 201}}
That’s relatively straightforward, but it kicks the problem down the road.
In the
error case now we have another nested dictionary which I want to
flatten out. What I really want are the two
reason values so I can log
them and more clearly point out what’s going wrong.
Write it ourselves, but better
To get a flatter
error, something like this could do it:
def parse_body(body): if (items := body.get("items")) is None: raise Exception("No items in this response") result = {} for item in items: index = item["index"] details = {"status": index["status"]} if (error := index.get("error")) is None: details["error"] = None else: details["error"] = { "reason": error["reason"], "cause": error["caused_by"]["reason"] } result[index["_id"]] = details return result
…which returns:
{'F6bbqHIBgo1082mzZuO3': {'error': {'cause': 'For input string: "2020-06-12T14:07:30.452649+00:00"', 'reason': 'failed to parse [usage.range.gte]'}, 'status': 400}, 'GKbbqHIBgo1082mzZuO3': {'error': None, 'status': 201}}
That’s much better! However, this is quickly becoming more complex. We still need to test this, and between the first and second versions we added more branches in the code that we’ll need to cover.
The cyclomatic complexity
of our new approach went from 3 to 4 as measured by the
mccabe library, named for Thomas McCabe,
who coined the metric. Metrics aside, we can see this code is growing more
ifs
and loops and indexing the more we add to it, and we’re making a few assumptions
that we won’t end up with a
KeyError on any of those lookups.
Charlie Kelly designing our third attempt at writing this.
Use glom
glom is a library for “Restructuring data, the Python way.” It was made to solve our problem.
Here’s what a solution that meets our needs looks like using
glom.
It returns the exact same
dict as the second
parse_body function.
There’s a lot to unpack here in the glom “spec”, and I’ll walk through it below. glom has an excellent tutorial that can explain it all better than me—and it has a browser-based REPL!—and it’s how I figured a lot of this out. The rest of their docs are well written and comprehensive, so check them out.
glom.glomtakes a
targetnested object and a
specto transform it. Everything we want is under the
"items"key, so that’s the
pathpart of our
spec.
Nested under
body["items"]is a list of
dicts, all with an
"index"key. Line 6 is a sub-path that tells glom to produce an iterable of the contents of each
"index"within
body["items"]
Lines 7–20 are a sub-path that tells glom to produce an iterable of a dictionary comprehension where the key is the
"_id"of each
"index"target dictionary—
glom.Taccesses the target path—and the value is a dictionary with
"status"and
"error"keys.
The
"status"comes directly from the
"status"in the
"index"target.
"error"is more involved and where we start to restructure things.
We decided earlier that we want
"error"in any case, using
Noneas a signal that there’s not actually an error.
glom.Coalesceto the rescue on Lines 11–17. If it can’t create something out of the sub-spec we passed in, the
default=Nonewill become the value.
For our
"reason"we want to take the first-level
["error"]["reason"]from the target
"index"dictionary.
For our
"cause"we want to take the
["caused_by"]["reason"]that is nested within the
["error"]in the target.
On Line 21 we use
glom.Merge()which combines all of the prior
Iterspecs together into one resulting object.
While it might look intimdating at first, it’s wildly powerful and this example barely scraches the surface of its capabilities. On top of that, when you consider the functional difference in our two hand-made implementations, the difference between glom implementations to produce the same result is smaller and no more complex.
To come back to cyclomatic complexity, the glom impementation of
parse_body
checks in at 1. To a caller it has no branches, no loops, none of that. It’s a function
and it returns a dictionary. That’s not to say it’s not a complex piece of software,
but that it takes care of the complexity for you.
Testing our manual versions of this might require a bunch of test cases to ensure we’re covering all of those branches, and will probably require we do something different about those dictionary lookups. Testing our glom implementation requires passing in a body that includes both cases we’re looking at—error and success—and seeing that we get a good result. I’m very picky about dependencies, but as of this writing glom has 97% test coverage and great documentation, so I’m comfortable letting it do the work for us.
Conclusion
-
Thank you Mahmoud Hashemi for creating this wonderful piece of software, and thanks as well to anyone else who’s contributed to it.
Check out the source at — it’s a very well done project. | https://briancurtin.com/articles/use-glom/ | CC-MAIN-2020-40 | refinedweb | 1,250 | 62.07 |
There were three significant changes to the SVN trunk and sandbox recently:
* Data keywords (use, into, et al.) were moved into Poco~058~~058~Data::Keywords namespace.
* All the IO stuff (both sync and async) was moved from Foundation and Net to IO and IO::Socket
* All the Serial stuff was moved from IO to IO::Serial
All these changes do break code. We never take that lightly, but when being tidy weighs more than being 100% backward compatible we do not hesitate to do it.
The commit affected whopping 180+ files. VS71 and VS90 files are not updated yet. Please bear with us and report any problems and/or errors.
Alex | http://pocoproject.org/forum/viewtopic.php?p=753 | CC-MAIN-2015-35 | refinedweb | 112 | 73.17 |
12 September 2008 07:43 [Source: ICIS news]
SINGAPORE (ICIS news)-- Asian naphtha discounts had deepened $3.00/tonne to $15.00/tonne on a CFR (cost and freight) Korea basis on persistent bearish market sentiments, trading sources said on Friday.
?xml:namespace>
This week, a Korean end-user reportedly received a discount of $15.00/tonne CFR Korea for a 25,000 tonne open spec naphtha cargo for delivery over second half of October.
The same end-user received a $12.00/tonne discount in end-August for a first half October cargo.
Meanwhile, buy-sell indications for second half October open spec naphtha were pegged at $880.00-883.00/tonne CFR (cost and freight) ?xml:namespace>
Demand from northeast Asian crackers for naphtha has been sluggish and cutbacks in operating rates by as much as 20% have been common, industry sources said.
Northeast Asian end-users had been severely affected by eroded margins from the downstream petrochemical such as aromatics and olefins, the sources added. | http://www.icis.com/Articles/2008/09/12/9155876/bear-markets-deepen-asian-naphtha-discounts.html | CC-MAIN-2014-35 | refinedweb | 168 | 57.87 |
8.5. Implementation of Recurrent Neural Networks from Scratch¶
In this section we implement a language model from scratch. It is based on a character-level recurrent neural network trained on H. G. Wells’ ‘The Time Machine’. As before, we start by reading the dataset first.
In [1]:
import sys sys.path.insert(0, '..') import d2l import math from mxnet import autograd, nd from mxnet.gluon import loss as gloss import time corpus_indices, vocab = d2l.load_data_time_machine()
8
len(vocab))]), len(vocab))
Out[2]:
[[1..]] <NDArray 2x44 @cpu(0)>
Note that one-hot encodings are just a convenient way of separating the
encoding (e.g. mapping the character
a to
\((1,0,0, \ldots) vector)\) from the embedding (i.e. multiplying the
encoded vectors by some weight matrix \(\mathbf{W}\)). This
simplifies the code greatly relative to storing an embedding matrix that
the user needs to maintain. d2l package for future use def to_onehot(X, size): return [nd.one_hot(x, size) for x in X.T] X = nd.arange(10).reshape((2, 5)) inputs = to_onehot(X, len(vocab)) len(inputs), inputs[0].shape
Out[3]:
(5, (2, 44))
The code above generates 5 minibatches containing 2 vectors each. Since we have a total of 43 distinct symbols in “The Time Machine” we get 43-dimensional vectors.
8.5.2. Initializing the Model Parameters¶
Next, we initialize the model parameters. The number of hidden units
num_hiddens is a tunable parameter.
In [4]:
num_inputs, num_hiddens, num_outputs = len(vocab), 512, len(vocab) ctx = d2)
8.5.3. Sequence Modeling¶
8 where
each layers requires initializing). the \(\tanh\) function values is 0 when the elements
are evenly distributed over the real numbers.
In [6]:
def rnn(inputs, state, params): # Both inputs and outputs are composed of num_steps matrices of the shape # (batch_size, len(vocab)) the model makes any sense at all. In particular, let’s check whether inputs and outputs have the correct dimensions, e.g. to ensure that the dimensionality of the hidden state hasn’t changed.
In [7]:
state = init_rnn_state(X.shape[0], num_hiddens, ctx) inputs = to_onehot(X.as_in_context(ctx), len(vocab)) params = get_params() outputs, state_new = rnn(inputs, state, params) len(outputs), outputs[0].shape, state_new[0].shape
Out[7]:
(5, (2, 44), (2, 512))
8.5.3.2. Prediction Function¶
The following function predicts the next
num_chars characters based
on the
prefix (a string containing several characters). This
function is a bit more complicated. Whenever the actual sequence is
known, i.e. for the beginning of the sequence, we only update the hidden
state. After that we begin generating new characters and emitting them.
For convenience we use the recurrent neural unit
rnn as a function
parameter, so that this function can be reused in the other recurrent
neural networks described in following sections.
In [8]:
# This function is saved in the d2l package for future use def predict_rnn(prefix, num_chars, rnn, params, init_rnn_state, num_hiddens, vocab, ctx): state = init_rnn_state(1, num_hiddens, ctx) output = [vocab[prefix[0]]] for t in range(num_chars + len(prefix) - 1): # The output of the previous time step is taken as the input of the # current time step. X = to_onehot(nd.array([output[-1]], ctx=ctx), len(vocab)) # Calculate the output and update the hidden state (Y, state) = rnn(X, state, params) # The input to the next time step is the character in the prefix or # the current best predicted character if t < len(prefix) - 1: # Read off from the given sequence of characters output.append(vocab[prefix[t + 1]]) else: # This is maximum likelihood decoding. Modify this if you want # use sampling, beam search or beam sampling for better sequences. output.append(int(Y[0].argmax(axis=1).asscalar())) return ''.join([vocab.idx_to_token[i] for i in output])
We test the
predict_rnn function first. Given that we didn’t train
the network it will generate nonsensical predictions. We initialize it
with the sequence
traveller and have it generate 10 additional
characters.
In [9]:
predict_rnn('traveller ', 10, rnn, params, init_rnn_state, num_hiddens, vocab, ctx)
Out[9]:
'traveller bexhxhxhxh'
8.5.4. Gradient Clipping¶ with which we can make progress,. d2l package for future use def grad_clipping(params, theta, ctx): norm = nd.array([0], ctx) for param in params: norm += (param.grad ** 2).sum() norm = norm.sqrt().asscalar() if norm > theta: for param in params: param.grad[:] *= theta / norm
8
len(vocab). In fact, if we were to store the sequence without any compression this would be the best we could do to encode it. Hence this provides a nontrivial upper bound that any model must satisfy.
8.
8.5.6.1. Optimization Loop¶
In [11]:
# This function is saved in the d2l package for future use def train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens, corpus_indices, vocab, ctx, is_random_iter, num_epochs, num_steps, lr, clipping_theta, batch_size, prefixes): if is_random_iter: data_iter_fn = d2l.data_iter_random else: data_iter_fn = d2l.data_iter_consecutive params = get_params() loss = gloss.SoftmaxCrossEntropyLoss() start = time.time() for epoch in range(num_epochs): if not is_random_iter: # If adjacent sampling is used, the hidden state is initialized # at the beginning of the epoch state = init_rnn_state(batch_size, num_hiddens, ctx) l_sum, n = 0.0, 0 data_iter = data_iter_fn(corpus_indices, batch_size, num_steps, ctx) for X, Y, len(vocab)) # outputs is num_steps terms of shape (batch_size, len(vocab)) (outputs, state) = rnn(inputs, state, params) # After stitching it is (num_steps * batch_size, len(vocab)) d2l.sgd(params, lr, 1) # Since the error is the mean, no need to average gradients here(prefix, 50, rnn, params, init_rnn_state, num_hiddens, vocab, ctx))
8.5.6.2. Experiments with a Sequence Model¶
Now we can train the model. First, we need to set the model hyper-parameters. To allow for some meaningful amount of context we set the sequence length to 64. In particular, we will see how training using the ‘separate’ and ‘sequential’ term generation will affect the performance of the model.
In [12]:
num_epochs, num_steps, batch_size, lr, clipping_theta = 500, 64, 32, 1, 1 prefixes = ['traveller', 'time traveller']
Let’s use random sampling to train the model and produce some text.
In [13]:
train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens, corpus_indices, vocab, ctx, True, num_epochs, num_steps, lr, clipping_theta, batch_size, prefixes)
epoch 50, perplexity 10.988873, time 9.87 sec epoch 100, perplexity 8.848707, time 10.62 sec - travellere the the the the the the the the the the the the - time travellere the the the the the the the the the the the the epoch 150, perplexity 7.697527, time 10.01 sec epoch 200, perplexity 7.024106, time 10.43 sec - traveller simensions of space the greent on the pace the gr - time traveller simensions of space the greent on the pace the gr epoch 250, perplexity 5.917917, time 9.76 sec epoch 300, perplexity 4.480086, time 10.10 sec - traveller pard ft really the gedint mean thereat wi houng m - time traveller some frould the gat on t me wave lo neng, and an epoch 350, perplexity 2.918370, time 9.98 sec epoch 400, perplexity 2.041019, time 10.14 sec - traveller spored icle mave for shat re pramed fr buther the - time traveller smored of the inof folee that dion, at wistre tom epoch 450, perplexity 1.658033, time 9.84 sec epoch 500, perplexity 1.395304, time 9.82 sec - traveller. 'it's against reason,' said filby. 'what reason? - time traveller smiled round at us. then, still smiling faintly,
Even though our model was rather primitive, it is nonetheless able to produce text that resembles language. Now let’s compare this with sequential partitioning.
In [14]:
train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens, corpus_indices, vocab, ctx, False, num_epochs, num_steps, lr, clipping_theta, batch_size, prefixes)
epoch 50, perplexity 11.215377, time 9.65 sec epoch 100, perplexity 9.074466, time 9.66 sec - traveller and the the the the the the the the the the the t - time traveller and the the the the the the the the the the the t epoch 150, perplexity 7.820408, time 9.86 sec epoch 200, perplexity 6.705087, time 10.22 sec - traveller some thave lere theng ther thes aller some thave - time traveller simensions of the the the the ghat ang have time epoch 250, perplexity 5.238214, time 10.53 sec epoch 300, perplexity 3.059694, time 10.27 sec - traveller. 'ithe pessed the prece the perman thing thee the - time traveller. 'ith of thee tome that llare mathes three thing epoch 350, perplexity 1.799713, time 11.12 sec epoch 400, perplexity 1.318062, time 9.70 sec - traveller, with a slight accession of cheerfulness. 'realys - time traveller. 'it would be remankably in alo gerta cones cante epoch 450, perplexity 1.093487, time 9.74 sec epoch 500, perplexity 1.055208, time 9.88 sec - traveller smiled. 'are you sure we can move freely in space - time traveller smiled round at us. then, stano romene. our eni a
In the following we will see how to improve significantly on the current model and how to make it faster and easier to implement.
8.5.7. Summary¶
- Sequence models need state initialization for training.
- Between sequential models you need to ensure to detach the gradient, variable.
- Run the code in this section without clipping the gradient. What happens?
- Set the
pred_periodvariable to 1 to observe how the under-trained model (high perplexity) writes lyrics. What can you learn from this?
-. | http://d2l.ai/chapter_recurrent-neural-networks/rnn-scratch.html | CC-MAIN-2019-18 | refinedweb | 1,542 | 68.57 |
rubikarubika
this is a unofficial library for making bots in rubika using this library you can make your own0 rubika bot and control that those bots that makes with this library, will be run on your account; so you should get your account’s API key. and another point is it.. you are only allowed to run the bot in one chat at a time. so you should get chat’s GUID.
introductionsintroductions
1. unix or windows system
2. python 3
3. libraries :
– pycryptodome
– requests
– urllib3
– tqdm
note: libraries automatically will be installed when you install rubika library
installinstall
enter this command on your command line to install the library
pip install rubika
first introductions will be installed then rubika library
useuse
enter this example code in a file or enter line-to-line in the python3 shell:
from rubika import Bot bot = Bot("AUTH-KEY") target = "CHAT-ID" bot.sendMessage(target, 'YOUR-MESSAGE')
as result your message will be sent in the target chat.
documentsdocuments
for reading more about this library, you can visit site:
🌐 | https://pythonawesome.com/a-unofficial-library-for-making-bots-in-rubika/ | CC-MAIN-2022-05 | refinedweb | 176 | 61.26 |
tarOperations
tarOperations
The basic
tar operations, `--create' (`-c'),
`--list' (`-t') and `--extract' (`--get',
`-x'), are currently presented and described in the tutorial
chapter of this manual. This section provides some complementary notes
for these operations.
Creating an empty archive would have some kind of elegance. One can
initialize an empty archive and later use `--append'
(`-r') for adding all members. Some applications would not
welcome making an exception in the way of adding the first archive
member. On the other hand, many people reported that it is
dangerously too easy for
tar to destroy a magnetic tape with
an empty archive `--create' option is
given, there are no arguments besides options, and
`--files-from' (`-T') option is not used. To get
around the cautiousness of GNU
tar and nevertheless create an
archive with nothing in it, one may still use, as the value for the
`--files-from' option, a file with no names in it, as shown in
the following commands:
A socket is stored, within a GNU
tar archive, as a pipe.
GNU
tar now shows dates as `1996-08-30',
while it used to show them as `Aug.
tar.
The previous chapter described the basics of how to use `--create' (`-c') to create an archive from a set of files. See section How to Create Archives. This section described advanced options to be used with `-:
When used with `-.
The `--owner' and `--group' options affect all files
added to the archive. GNU
tar provides also two options that allow
for more detailed control over owner translation:
Read UID translation map from file.
When reading, empty lines are ignored. The `#':
Given this file, each input file that is owner by UID 10 will be stored in archive with owner name `bin' and owner UID corresponding to `bin'. Each file owned by user `smith' will be stored with owner name `root' and owner ID 0. Other files will remain unchanged.
When used together with `--owner-map', the `--owner' option affects only files whose owner is not listed in the map file.
Read GID translation map from file.
The format of file is the same as for `- `--group-map', POSIX regular expression. For example, the following command:
will include in the archive `a.tar' all attributes, except those from the `user' namespace.
Any number of these options can be given, thereby creating lists of include and exclude patterns.
When both options are used, first `--xattrs-inlcude' is applied to select the set of attribute names to keep, and then `--xattrs-exclude' is applied to the resulting set. In other words, only those attributes will be stored, whose names match one of the regexps in `--xattrs-inlcude':.
The `--read-full-records' (`-B') option
in conjunction with the `--extract' or `--list' operations.
See section Blocking.
The `--read-full-records' (` `--read-full-records' (`-B') and `--blocking-factor=512-size' (`-b 512-size'), using a blocking factor larger than what the archive uses. This lets you avoid having to determine the blocking factor of an archive. See section The Blocking Factor of an Archive.
See need sentence or so of intro here
Use in conjunction with `--extract' (`--get', `-x') to read an archive which contains incomplete records, or one which has a blocking factor less than the one specified.
Normally,
tar stops reading when it encounters a block of zeros
between file entries (which usually indicates the end of the archive).
`--ignore-zeros' (`-i') allows
tar to
completely read an archive which contains a block of zeros before the
end (i.e., a damaged archive, or one that was created by concatenating
several archives together).
The `--ignore-zeros' (` `--extract' or `--list'.
tarWrites Files
(This message will disappear, once this node revised.)
See Introductory paragraph).
Do not replace existing files that are newer than their archive copies. This option is meaningless with `--list' (` `--touch' (`-m') option in conjunction with `--extract' (`--get', `-x').
Sets the data modification time of extracted archive members to the time they were extracted, not the time recorded for them in the archive. Use in conjunction with `--extract' (`--get', `-x').
To `bar'
GNU
tar will assume that all files from the directory `foo'
were already extracted and will therefore restore its timestamp and
permission bits. However, after extracting `foo/file2' the
directory timestamp will be offset again.
To correctly restore directory meta-information in such cases, use the `--delay-directory-restore' command line option:
Delays restoring of the modification times and permissions of extracted directories until the end of extraction. This way, correct meta-information is restored even if the archive has unusual member ordering.
Cancel the effect of the previous `--delay-directory-restore'.
Use this option if you have used `--delay-directory-restore' in
TAR_OPTIONS variable (see TAR_OPTIONS) and wish to
temporarily disable it.
To write the extracted files to the standard output, instead of creating the files on the file system, use `--to-stdout' (`-O') in conjunction with `--extract' (`--get', `
`--extract' (`--get', `-x'). When this option is
used, instead of creating the files specified,
tar writes
the contents of the files extracted to its standard output. This may
be useful if you are only extracting the files in order to send them
through a pipe. This option is meaningless with `--list'
(`, `2345' is the PID of the finished process.
If this behavior is not wanted, use `--ignore-command-error':
Ignore exit codes of subprocesses. Notice that if the program exits on signal or otherwise terminates abnormally, the error message will be printed even if this option is used.
Cancel the effect of any previous `--ignore-command-error'
option. This option is useful if you have set
`--ignore-command-error' in
TAR_OPTIONS
(see TAR_OPTIONS) and wish to temporarily cancel it.
See The section is too terse. Something more to add? An example, maybe?
Remove files after adding them to the archive.
(This message will disappear, once this node revised.)
Starts.
To process large lists of file names on machines with small amounts of memory. Use in conjunction with `--compare' (`--diff', `-d'), `--list' (`-t') or `--extract' (`--get', `-x').
The `--same-order' (`--preserve-order', ` `tar -t' on the archive and editing its output.
This option is probably never needed on modern computer systems..
tarUsages
(This message will disappear, once this node revised.)
See Using Unix file linking capability to recreate directory
structures--linking files into one subdirectory and then
tarring that directory.
See Nice hairy example using absolute-names, newer, etc. `-C' option:
The command also works using long option forms:
or
This is one of the easiest methods to transfer a
tar archive.
You have now seen how to use all eight of the operations available to
tar, and a number of the possible options. The next chapter
explains how to choose and change file and archive names, how to use
files to store names of other files which you can then call as
arguments to
tar (this can help you save time if you expect to
archive the same list of files a number of times), and so forth.
See in case it's not obvious, i'm making this up in some sense
based on my limited memory of what the next chapter *really* does. i
just wanted to flesh out this final section a little bit so i'd
remember to stick it in here. :-)
If there are too many files to conveniently list on the command line,
you can list the names in a file, and
tar will read that file.
See section Reading Names from a File.
There are various ways of causing
tar to skip over some files,
and not archive them. See section Choosing Files and Names for
tar.
This document was generated on May, 16 2016 using texi2html 1.76. | http://www.gnu.org/software/tar/manual/html_chapter/tar_4.html | CC-MAIN-2017-22 | refinedweb | 1,284 | 63.59 |
What is the poll frequency of `sqlite3_busy_timeout`?
(1) By example-user on 2020-10-04 18:40:44 [link] [source]
As far as I understand
sqlite3_busy_timeout, SQLite will poll up to the time provided to the argument.
How does SQLite know when the DB file has been unlocked, and how often does it poll to get this status?
Thanks
(2) By Keith Medcalf (kmedcalf) on 2020-10-04 19:34:23 in reply to 1 [link] [source]
The default busy handler callback is defined in main.c line 1646.
For systems supporting (and compiled) with support for sub-second sleeping (aka usleep and Windows) the times for the first polls are:
{ 1, 2, 5, 10, 15, 20, 25, 25, 25, 50, 50, 100 } ms
and 100 ms each thereafter. If the platform or compiler does not support (or has not indicated support) for sub-second sleep, then the frequency is once per second or whatever increment the platform supports (that is, if the platform only supports sleeping for a day-at-a-time then the polling frequency will be once per day).
(3.5) By Keith Medcalf (kmedcalf) on 2020-10-04 19:51:49 edited from 3.4 in reply to 2 [link] [source]
Note that the Operating System is entirely responsible for determining the granularity of the sleep operation no matter what is requested. For example, Windows systems always awaken on a "tick" (which can be adjusted) but by default (on most hardware) is just shy of 16 ms.
So on Windows the "sleep" will be until the next "tick" that occurs after the requested interval is expired.
(4) By example-user on 2020-10-04 19:42:25 in reply to 2 [link] [source]
Thanks.
Does the
sqlite3_busy_handler callback function get invoked after each of those intervals, or is the intention to let the application sleep inside the callback function then return true to define its own intervals?
(5) By Keith Medcalf (kmedcalf) on 2020-10-04 21:46:54 in reply to 4 [link] [source]
The callback function is set using the sqlite3_busy_handler API as described here
The callback function takes two parameters -- a void* and an int -- the int being the number of times it has been called.
busy = True; times = 0; while (busy && callbackFunction(void*, times)) { busy = amIstillBusy? times++ } if (busy) throw SQLITE_BUSY error; ... carry on ...
So, the callback function is invoked as long as SQLite3 is still "busy" and uses the void* to retrieve the total timeout set for the connection and the "times" as an index into the array of delays (capped at the last element). It then uses the "times" to compute how long it has been waiting (more or less accurate depending on the OS, the hardware, the phase of the moon, and the existence of werewolves). If it determines that the timeout has not yet expired, it "goes to sleep" for whatever it decides is the appropriate interval, and then wakes up and returns True. If it determines that it has been called and the timeout total time has expired, it immediately returns False.
The callback function merely implements the "wait". The polling for whatever is causing the "busy" to clear is done by whatever thing decided that a "busyness" occurred.
The default busy handler is:
static int sqliteDefaultBusyCallback( void *ptr, /* Database connection */ int count /* Number of times table has been busy */ ){ #if SQLITE_OS_WIN || HAVE_USLEEP /* This case is for systems that have support for sleeping for fractions of ** a second. Examples: All windows systems, unix systems with usleep() */ static const u8 delays[] = { 1, 2, 5, 10, 15, 20, 25, 25, 25, 50, 50, 100 }; static const u8 totals[] = { 0, 1, 3, 8, 18, 33, 53, 78, 103, 128, 178, 228 }; # define NDELAY ArraySize(delays) sqlite3 *db = (sqlite3 *)ptr; int tmout = db->busyTimeout; int delay, prior; assert( count>=0 ); if( count < NDELAY ){ delay = delays[count]; prior = totals[count]; }else{ delay = delays[NDELAY-1]; prior = totals[NDELAY-1] + delay*(count-(NDELAY-1)); } if( prior + delay > tmout ){ delay = tmout - prior; if( delay<=0 ) return 0; } sqlite3OsSleep(db->pVfs, delay*1000); return 1; #else /* This case for unix systems that lack usleep() support. Sleeping ** must be done in increments of whole seconds */ sqlite3 *db = (sqlite3 *)ptr; int tmout = ((sqlite3 *)ptr)->busyTimeout; if( (count+1)*1000 > tmout ){ return 0; } sqlite3OsSleep(db->pVfs, 1000000); return 1; #endif }
and it is set on a connection using the equivalent of the following code:
sqlite3* db = sqlite3_open .... sqlite3_busy_timeout(db, <value>); sqlite3_busy_handler(db, sqliteDefaultBusyCallback, db);
(6.2) By Keith Medcalf (kmedcalf) on 2020-10-04 22:22:07 edited from 6.1 in reply to 4 [source]
So you could write your own busy handler that causes a "poll" to occur every 32 ms for some specified time in seconds as follows:
int myBusyHandler(void* ptr, int times) { if ((intptr_t)ptr < times * 0.032) return 0; usleep(32000); return 1; }
and activate it with the following code:
db = sqlite3_open ...; sqlite3_busy_handler(db, myBusyHandler, (void*)300);
Note that there is no API to retrieve the connection timeout in the API so for your own busy handler you cannot access the busy_timeout set for the connection.
(7) By example-user on 2020-10-04 23:18:15 in reply to 5 [link] [source]
Thanks for the explanation,
In your example you are calling both:
1. sqlite3_busy_timeout(db, val) 2. sqlite3_busy_handler(db, cbFn, db)
But for each db connection, you can only use one of these? The second call will clear the first's behaviour?
So to summarise:
sqlite3_busy_timeout
- Will block app code, poll the db file until either (lock is achieved OR timeout expires)
sqlite3_busy_handler
- Completely application defined polling - SQLite itself will not sleep or poll.
- Return true = SQLite will immediately try to get a lock.
- Return false = SQLite returns BUSY for the statement.
- On sleep = SQLite is waiting for a return value.
(8) By Rowan Worth (sqweek) on 2020-10-05 03:47:28 in reply to 5 [link] [source].
(9) By example-user on 2020-10-05 14:37:01 in reply to 8 [link] [source]
I see, that makes sense.
The default callback using
int tmout = ((sqlite3 *)ptr)->busyTimeout is private SQLite state that would not be used by the application. | https://sqlite.org/forum/info/8b74f74cc5be4164?t=c | CC-MAIN-2022-21 | refinedweb | 1,032 | 64.95 |
Archive:PackagingDrafts/ExtraDistTagConditionalMacros
From FedoraProject
Purpose: There are some additional macros for DistTag that make it much easier to conditionalize inside of spec files.
Specifically, I propose that the following helper macros be added and the DistTag documentation updated accordingly:
%{?fedora: %{expand: %%define fc%{fedora} 1}} %{?rhel: %{expand: %%define el%{rhel} 1}}
This means that if %{fedora} is set to 7, the following macro is set:
%define fc7 1
This permits easier conditionalization in spec files. For example, with these defines, you can do:
%{?el5: a} %{?el4: b} %{?el3: c} %{?el2: d}
Without these macros, you have to resort to:
%if "%rhel" == "5" a %endif %if "%rhel" == "4" b %endif %if "%rhel" == "3" c %endif %if "%rhel" == "2" d %endif | http://fedoraproject.org/wiki/Archive:PackagingDrafts/ExtraDistTagConditionalMacros | CC-MAIN-2014-23 | refinedweb | 120 | 51.89 |
Nicolai M. Josuttis
Nicolai M. Josuttis
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publish- ing process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations to get reader feedback, pivot until you have the right book and build traction once you do. © 2019 by Nicolai Josuttis. All rights reserved. This book was typeset by Nicolai M. Josuttis using the LATEX document processing system.Josuttis: C++17 2019/02/16 18:57 page iii
Contents Preface xiii Versions of This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
1 Structured Bindings 3 1.1 Structured Bindings in Detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2 Where Structured Bindings can be Used . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.2.1 Structures and Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.2.2 Raw Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 1.2.3 std::pair, std::tuple, and std::array . . . . . . . . . . . . . . . . . . . 9 1.3 Providing a Tuple-Like API for Structured Bindings . . . . . . . . . . . . . . . . . . . . 11 1.4 Afternotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
iiiJosuttis: C++17 2019/02/16 18:57 page iv
iv Contents
3 Inline Variables 23 3.1 Motivation of Inline Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.2 Using Inline Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 3.3 constexpr now implies inline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.4 Inline Variables and thread_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 3.5 Afternotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4 Aggregate Extensions 31 4.1 Motivation for Extended Aggregate Initialization . . . . . . . . . . . . . . . . . . . . . . 32 4.2 Using Extended Aggregate Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 4.3 Definition of Aggregates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 4.4 Backward Incompatibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 4.5 Afternotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6 Lambda Extensions 45 6.1 constexpr Lambdas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 6.2 Passing Copies of this to Lambdas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 6.3 Capturing by Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 6.4 Afternotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Contents v
vi Contents
10 Compile-Time if 91 10.1 Motivation for Compile-Time if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 10.2 Using Compile-Time if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 10.2.1 Caveats for Compile-Time if . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 10.2.2 Other Compile-Time if Examples . . . . . . . . . . . . . . . . . . . . . . . . . 97 10.3 Compile-Time if with Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 10.4 Using Compile-Time if Outside Templates . . . . . . . . . . . . . . . . . . . . . . . . . 100 10.5 Afternotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Contents vii
15 std::optional<> 131 15.1 Using std::optional<> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 15.1.1 Optional Return Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 15.1.2 Optional Arguments and Data Members . . . . . . . . . . . . . . . . . . . . . . 133 15.2 std::optional<> Types and Operations . . . . . . . . . . . . . . . . . . . . . . . . . . 135 15.2.1 std::optional<> Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 15.2.2 std::optional<> Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 15.3 Special Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 15.3.1 Optional of Boolean or Raw Pointer Values . . . . . . . . . . . . . . . . . . . . 140 15.3.2 Optional of Optional . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 15.4 Afternotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
16 std::variant<> 143 16.1 Motivation of std::variant<> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 16.2 Using std::variant<> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 16.3 std::variant<> Types and Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 16.3.1 std::variant<> Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 16.3.2 std::variant<> Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 16.3.3 Visitors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 16.3.4 Valueless by Exception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 16.4 Polymorphism and Inhomogeneous Collections with std::variant . . . . . . . . . 156 16.4.1 Geometric Objects with std::variant . . . . . . . . . . . . . . . . . . . . . . 156 16.4.2 Other Inhomogeneous Collections with std::variant . . . . . . . . . . . . 159 16.4.3 Comparing variant Polymorphism . . . . . . . . . . . . . . . . . . . . . . . . 160 16.5 Special Cases with std::variant<> . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 16.5.1 Having Both bool and std::string Alternatives . . . . . . . . . . . . . . . 161 16.6 Afternotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
17 std::any 163 17.1 Using std::any . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 17.2 std::any Types and Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 17.2.1 Any Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166Josuttis: C++17 2019/02/16 18:57 page viii
viii Contents
18 std::byte 171 18.1 Using std::byte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 18.2 std::byte Types and Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 18.2.1 std::byte Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 18.2.2 std::byte Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 18.3 Afternotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Contents ix
x Contents
Contents xi
xii Contents
Glossary 357
Index 359Josuttis: C++17 2019/02/16 18:57 page xiii
Preface
xiiiJosuttis: C++17 2019/02/16 18:57 page xiv
xiv Preface
Acknowledgments First I’d like to thank you, the C++ community, for making this book possible. The incredible de- sign of new features, the helpful feedback, the curiousness are the base for an evolving successful language. Especially thanks for all the issues you told and explained me and the feedback you gave. Especially, I’d like to thank everyone who reviewed drafts of this book or corresponding slides and provided valuable feedback and clarifications. These reviews brought the book to a significantly higher level of quality, and it again proved that good things need the input of many “wise guys.” For this reason, so far (this list is still growing) huge thanks to Roland Bock, Marshall Clow, Matthew Dodkins, Andreas Fertig, Graham Haynes, Austin McCartney, Billy O’Neal, David Sankel, Zachary Turner, Paul Reilly, Barry Revzin, and Vittorio Romeo. In addition, I’d like to thank everyone in the C++ community and on the C++ standardization com- mittee. In addition to all the work to add new language and library features, they spent many, many hours explaining and discussing their work with me, and they did so with patience and enthusiasm.Josuttis: C++17 2019/02/16 18:57 page xv
Acknowledgments xv
A special thanks goes to the LaTeX community for a great text system and to Frank Mittelbach for solving my LATEX issues (it was almost always my fault).Josuttis: C++17 2019/02/16 18:57 page xvi
xvi
C++17 is the next evolution in modern C++ programming, which is already at least partially sup- ported by the latest version of gcc, clang, and Visual C++. Although it is not as big a step as C++11, it contains a large number of small and valuable language and library features, which again will change the way we program in C++. This applies to both application programmers and programmers providing foundation libraries. This book will present all the new language and library features in C++17. It will cover the motivation and context of each new feature with examples and background information. As usual for my books, the focus lies on the application of the new features in practice and will demonstrate how features impact day-to-day programming and how to benefit from them in projects.
xviiJosuttis: C++17 2019/02/16 18:57 page xviii
an int by a floating-point value). If the braces are empty the default constructors of (sub)objects are called and fundamental data types are initialized with 0/false/nullptr.1
1 The only exception are atomic data types (type std::atomic<>), where even list initialization does not guarantee proper initialization. This will hopefully get fixed with C++20.Josuttis: C++17 2019/02/16 18:57 page xix
Feedback I welcome your constructive input—both the negative and the positive. I worked very hard to bring you what I hope you’ll find to be an excellent book. However, at some point I had to stop writing, reviewing, and tweaking to “release the new revision.” You may therefore find errors, inconsistencies, presentations that could be improved, or topics that are missing altogether. Your feedback gives me a chance to fix this, inform all readers through the book’s Web site, and improve any subsequent revisions or editions. The best way to reach me is by email. You will find the email address at the Web site of this book: Please, be sure to have the latest version of this book (remember it is written and published incremen- tally) and refer to the publishing date of this version when giving feedback. The current publishing date is 2019/02/16 (you can also find it on page ii right after the cover and on top of each page with the PDF format). Many thanks.Josuttis: C++17 2019/02/16 18:57 page xx
xx
Part I Basic Language Features This part introduces the new core language features of C++17 not specific for generic programming (i.e., templates). They especially help application programmers in their day-to-day programming. So every C++ programmer using C++17 should know them. Core language features specific for programming with templates are covered in Part II.
1Josuttis: C++17 2019/02/16 18:57 page 2
Chapter 1 Structured Bindings
Structured bindings allow you to initialize multiple entities by the elements or members of an object. For example, suppose you have defined a structure of two different members: struct MyStruct { int i = 0; std::string s; };
MyStruct ms; You can bind members of this structure directly to new names by using the following declaration: auto [u,v] = ms; Here, the names u and v are what is called structured bindings. To some extent they decompose the objects passed for initialization (at some point they were called decomposing declarations). Structured bindings are especially useful for functions returning structures or arrays. For example, consider you have a function returning a structure MyStruct getStruct() { return MyStruct{42, "hello"}; } You can directly assign the result to two entities giving local names to the returned data members: auto[id,val] = getStruct(); // id and val name i and s of returned struct Here, id and val are names for the members i and s of the returned structure. They have the corresponding types, int and std::string, and can be used as two different objects: if (id > 30) { std::cout << val; }
3Josuttis: C++17 2019/02/16 18:57 page 4
The benefit is direct access and the ability to make the code more readable by binding the value directly to names that convey semantic meaning about their purpose.1 The following code demonstrates how code can significantly improve with structured bindings. To iterate over the elements of a std::map<> without structured bindings you’d have to program: for (const auto& elem : mymap) { std::cout << elem.first << ": " << elem.second << '\n'; } The elements are std::pairs of the key and value type and as the members of a std::pair are first and second, you have to use these names to access the key and the value. By using structured bindings the code gets a lot more readable: for (const auto& [key,val] : mymap) { std::cout << key << ": " << val << '\n'; } We can directly use the key and value member of each element, using names that clearly demonstrate their semantic meaning.
prints the values of e.i and e.s, which are copies of ms.i and ms.s. e exists as long as the structured bindings to it exist. Thus, it is destroyed when the structured bindings go out of scope. As a consequence, unless references are used, modifying the value used for initialization has no effect on the names initialized by a structured binding (and vice versa): MyStruct ms{42,"hello"}; auto [u,v] = ms; ms.i = 77; std::cout << u; // prints 42 u = 99; std::cout << ms.i; // prints 77 u and ms.i also have different addresses. When using structured bindings for return values, the same principle applies. An initialization such as auto [u,v] = getStruct(); behaves as if we’d initialize a new entity e with the return value of getStruct() so that the structured bindings u and v become alias names for the two members/elements of e, similar to defining: auto e = getStruct(); aliasname u = e.i; aliasname v = e.s; That is, structured bindings bind to a new entity, which is initialized from a return value, instead binding to the return value directly. To the anonymous entity e the usual address and alignment guarantees apply, so that the structured bindings are aligned as the corresponding members they bind to. For example: auto [u,v] = ms; assert(&((MyStruct*)&u)->s == &v); // OK Here, ((MyStruct*)&u) yields a pointer to the anonymous entity as a whole.
Using Qualifiers We can use qualifiers, such as const and references. Again, these qualifiers apply to the anonymous entity e as a whole. Usually, the effect is similar to applying the qualifiers to the structured bindings directly, but beware that this is not always the case (see below). For example, we can declare structured bindings to a const reference: const auto& [u,v] = ms; // a reference, so that u/v refer to ms.i/ms.s Here, the anonymous entity is declared as a const reference, which means that u and v are the names of the members i and s of the initialized const reference to ms. As a consequence, any change to the members of ms affect the value of u and/or v. ms.i = 77; // affects the value of u std::cout << u; // prints 77Josuttis: C++17 2019/02/16 18:57 page 6
Declared as a non-const reference, you can even modify the members of the object/value used for initialization: MyStruct ms{42,"hello"}; auto& [u,v] = ms; // the initialized entity is a reference to ms ms.i = 77; // affects the value of u std::cout << u; // prints 77 u = 99; // modifies ms.i std::cout << ms.i; // prints 99 If the value used to initialize a structured bindings reference is a temporary object, as usual the lifetime of the temporary is extended to the lifetime of the bound structure: MyStruct getStruct(); ... const auto& [a,b] = getStruct(); std::cout << "a: " << a << '\n'; // OK
2 The term decay describes the type conversions when arguments are passed by value, which means that raw arrays convert to pointers and top-level qualifiers, such as const and references, are ignored.Josuttis: C++17 2019/02/16 18:57 page 7
the type of a still is const char[6]. Again, the auto applies to the anonymous entity, which as a whole doesn’t decay. This is different from initializing a new object with auto, where types decay: auto a2 = a; // a2 gets decayed type of a
Move Semantics Move semantics is supported following the rules just introduced. In the following declarations: MyStruct ms = { 42, "Jim" }; auto&& [v,n] = std::move(ms); // entity is rvalue reference to ms the structured bindings v and n refer to an anonymous entity being an rvalue reference to ms. ms still holds its value: std::cout << "ms.s: " << ms.s << '\n'; // prints "Jim" but you can move assign n, which refers to ms.s: std::string s = std::move(n); // moves ms.s to s std::cout << "ms.s: " << ms.s << '\n'; // prints unspecified value std::cout << "n: " << n << '\n'; // prints unspecified value std::cout << "s: " << s << '\n'; // prints "Jim" As usual, moved-from objects are in a valid state with an unspecified value. Thus, it is fine to print the value but not to make any assumptions about what is printed.3 This is slightly different from initializing the new entity with the moved values of ms: MyStruct ms = { 42, "Jim" }; auto [v,n] = std::move(ms); // new entity with moved-from values from ms Here, the initialized anonymous entity is a new object initialized with the moved values from ms. So, ms already lost its value: std::cout << "ms.s: " << ms.s << '\n'; // prints unspecified value std::cout << "n: " << n << '\n'; // prints "Jim" You can still move assign the value of n or assign a new value there, but this does not affect ms.s: std::string s = std::move(n); // moves n to s n = "Lara"; std::cout << "ms.s: " << ms.s << '\n'; // prints unspecified value std::cout << "n: " << n << '\n'; // prints "Lara" std::cout << "s: " << s << '\n'; // prints "Jim"
3 For strings, moved-from objects are usually empty, but this is not guaranteed.Josuttis: C++17 2019/02/16 18:57 page 8
struct D1 : B { }; auto [x, y] = D1{}; // OK
struct D2 : B {Josuttis: C++17 2019/02/16 18:57 page 9
int c = 3; }; auto [i, j, k] = D2{}; // Compile-Time ERROR
std::array For example, the following code initializes i, j, k, and l by the four elements of the std::array<> returned by a function getArray(): std::array<int,4> getArray(); ... auto [i,j,k,l] = getArray(); // i,j,k,l name the 4 elements of the copied return value Here, i, j, k, and l are structured bindings to the elements of the std::array returned by getArray(). Write access is also supported, provided the value for initialization is not a temporary return value. For example: std::array<int,4> stdarr { 1, 2, 3, 4 }; ... auto& [i,j,k,l] = stdarr; i += 10; // modifies std::array[0]Josuttis: C++17 2019/02/16 18:57 page 10
std::tuple The following code initializes a, b, and c by the three elements of the std::tuple<> returned by getTuple(): std::tuple<char,float,std::string> getTuple(); ... auto [a,b,c] = getTuple(); // a,b,c have types and values of returned tuple That is, a gets type char, b gets type float, and c gets type std::string.
std::pair As another example, the code to handle the return value of calling insert() on an associative/unordered container can be made more readable by binding the value directly to names that convey semantic meaning about their purpose, rather than relying on the generic names first and second from the resulting std::pair<> object: std::map<std::string, int> coll; ... auto [pos,ok] = coll.insert({"new",42}); if (!ok) { // if insert failed, handle error using iterator pos: ... } Before C++17, the corresponding check has to be formulated as follows: auto ret = coll.insert({"new",42}); if (!ret.second){ // if insert failed, handle error using iterator ret.first ... } Note that in this particular case, C++17 provides a way to improve this even further using if with initializers.
This can especially be used to implement a loop calling and dealing with a pair of return values, such as when using searchers in a loop: std::boyer_moore_searcher bm{sub.begin(), sub.end()}; for (auto [beg, end] = bm(text.begin(), text.end()); beg != text.end(); std::tie(beg,end) = bm(end, text.end())) { ... }
class Customer { private: std::string first; std::string last; long val; public: Customer (std::string f, std::string l, long v) : first(std::move(f)), last(std::move(l)), val(v) { } std::string getFirst() const { return first; } std::string getLast() const { return last; } long getValue() const { return val; } }; Josuttis: C++17 2019/02/16 18:57 page 12
template<> struct std::tuple_element<2, Customer> { using type = long; // last attribute is a long }; template<std::size_t Idx> struct std::tuple_element<Idx, Customer> { using type = std::string; // the other attributes are strings };
template<std::size_t Idx> struct std::tuple_element<Idx, Customer> { using type = std::string; // the other attributes are strings }; The type of the third attribute is long, specified as full specialization for index 2. The other attributes have type std::string specified as partial specialization (which has lower priority than the full specialization). The types specified here are the types decltype yields for the structured bindings. Finally, we define the corresponding getters as overloads of a function get<>() in the same name- space as type Customer:4 template<std::size_t> auto get(const Customer& c); template<> auto get<0>(const Customer& c) { return c.getFirst(); } template<> auto get<1>(const Customer& c) { return c.getLast(); } template<> auto get<2>(const Customer& c) { return c.getValue(); } In this case, we have a primary function template declaration and full specializations for all cases. Note that all full specializations of function templates have to use the same signature (including the exact same return type). The reason is that we only provide specific “implementations,” no new declarations. The following will not compile: template<std::size_t> auto get(const Customer& c); template<> std::string get<0>(const Customer& c) { return c.getFirst(); } template<> std::string get<1>(const Customer& c) { return c.getLast(); } template<> long get<2>(const Customer& c) { return c.getValue(); } By using the new compile-time if feature, we can combine the get<>() implementations into one function: template<std::size_t I> auto get(const Customer& c) { static_assert(I < 3); if constexpr (I == 0) { return c.getFirst(); } else if constexpr (I == 1) { return c.getLast(); } else { // I == 2 return c.getValue(); } } With this API, we can use structured bindings for objects of type Customer as follows: lang/structbind1.cpp
4 The C++17 standard also allows us to define these get<>() functions as member functions, but this is probably an oversight and should not be used.Josuttis: C++17 2019/02/16 18:57 page 14
#include "structbind1.hpp" #include <iostream>
int main() { Customer c("Tim", "Starr", 42); auto [f, l, v] = c; std::cout << "f/l/v: " << f << ' ' << l << ' ' << v << '\n';
#include <string> #include <utility> // for std::move()
class Customer { private: std::string first; std::string last; long val; public: Customer (std::string f, std::string l, long v) : first(std::move(f)), last(std::move(l)), val(v) { } const std::string& firstname() const { return first; } std::string& firstname() { return first; } const std::string& lastname() const { return last; } std::string& lastname() { return last; } long value() const { return val; } long& value() { return val; } }; For read-write access, we have to overload the getters for constant and non-constant references: lang/structbind2.hpp #include "customer2.hpp" #include <utility> // for tuple-like API
return c.value(); } } Note that you should have all three overloads, to be able to deal with constant, non-constant, and movable objects.5 To enable the return value to be a reference, you should use decltype(auto).6 Again, we use the new compile-time if feature, which makes the implementation simple if the getters have different return types. Without, we would need full specializations again, such as: template<std::size_t> decltype(auto) get(Customer& c); template<> decltype(auto) get<0>(Customer& c) { return c.firstname(); } template<> decltype(auto) get<1>(Customer& c) { return c.lastname(); } template<> decltype(auto) get<2>(Customer& c) { return c.value(); } Again, note that the primary function template declaration and the full specializations must have the same signature (including the same return type). The following will not compile: template<std::size_t> decltype(auto) get(Customer& c); template<> std::string& get<0>(Customer& c) { return c.firstname(); } template<> std::string& get<1>(Customer& c) { return c.lastname(); } template<> long& get<2>(Customer& c) { return c.value(); } Now, you can use structured bindings for read access and to modify the members of a Customer: lang/structbind2.cpp #include "structbind2.hpp" #include <iostream>
int main() { Customer c("Tim", "Starr", 42); auto [f, l, v] = c; std::cout << "f/l/v: " << f << ' ' << l << ' ' << v << '\n';
5 The standard library provides a fourth get<>() overload for const&&, which is provided for other reasons (see) and not necessary to support structured bindings. 6 decltype(auto) was introduced with C++14 to be able to deduce a (return) type from the value category of an expression. By using this as a return type, roughly speaking, references are returned by reference, but temporaries are returned by value.Josuttis: C++17 2019/02/16 18:57 page 18
1.4 Afternotes Structured bindings were first proposed by Herb Sutter, Bjarne Stroustrup, and Gabriel Dos Reis in by using curly braces instead of square brackets. The finally accepted wording for this feature was formulated by Jens Maurer in: C++17 2019/02/16 18:57 page 19
Chapter 2 if and switch with Initialization
The if and switch control structures now allow us to specify an initialization clause beside the usual condition or selection clause. For example, you can write: if (status s = check(); s != status::success) { return s; } where the initialization status s = check(); initializes s, which is then valid for the whole if statement.
19Josuttis: C++17 2019/02/16 18:57 page 20
Here, we also use structured bindings, to give both the return value and the element at the return position pos useful names instead of just first and second. Before C++17, the corresponding check has to be formulated as follows: auto ret = coll.insert({"new",42}); if (!ret.second){ // if insert failed, handle error using iterator ret.first const auto& elem = *(ret.first); std::cout << "already there: " << elem.first << '\n'; } Note that the extension also applies to the new compile-time if feature.
2.3 Afternotes if and switch with initialization was first proposed by Thomas Köppe in p0305r0, initially only extending the if statement. The finally accepted wording was formulated by Thomas Köppe in: C++17 2019/02/16 18:57 page 22
22
Chapter 3 Inline Variables
One strength of C++ is its ability to support the development of header-only libraries. However, up to C++17, this was only possible if no global variables/objects were needed or provided by such a library. Since C++17 you can define a variable/object in a header file as inline and if this definition is used by multiple translation units, they all refer to the same unique object: class MyClass { static inline std::string name = ""; // OK since C++17 ... };
23Josuttis: C++17 2019/02/16 18:57 page 24
According to the one definition rule (ODR), a variable or entity had to be defined in exactly one translation unit. Even preprocessor guards do not help: #ifndef MYHEADER_HPP #define MYHEADER_HPP
class MyClass { static std::string name; // OK ... }; MyClass.name = ""; // Link ERROR if included by multiple CPP files
#endif The problem is not that the header file might be included multiple times, the problem is that two different CPP files include the header so that both define MyClass.name. For the same reason, you get a link error if you define an object of your class in a header file: class MyClass { ... }; MyClass myGlobalObject; // Link ERROR if included by multiple CPP files
Workarounds For some cases, there are workarounds: • You can initialize static const integral data members in a class/struct: class MyClass { static const bool trace = false; ... }; • You can define an inline function returning a static local variable: inline std::string getName() { static std::string name = "initial value"; return name; } • You can define a static member function returning the value: std::string getMyGlobalObject() { static std::string myGlobalObject = "initial value"; return myGlobalObject; } • You can use variable templates (since C++14):Josuttis: C++17 2019/02/16 18:57 page 25
template<typename T = std::string> T myGlobalObject = "initial value"; • You can derive from a base class template for the static member(s): template<typename Dummy> class MyClassStatics { static std::string name; };
template<typename Dummy> std::string MyClassStatics<Dummy>::name = "initial value";
Note that as usual for std::atomic you always have to initialize the values when you define them. Note that still you have to ensure that types are complete before you can initialize them. For example, if a struct or class has a static member of its own type, the member can only be inline defined after the type declaration: struct MyType { int value; MyType(int i) : value{i} { } // one static object to hold the maximum value of this type: static MyType max; // can only be declared here ... }; inline MyType MyType::max{0}; See the header file to track all new calls for another example of using inline variables.
struct MyData { inline static std::string gName = "global"; // unique in program inline static thread_local std::string tName = "tls"; // unique per thread std::string lName = "local"; // for each object ... void print(const std::string& msg) const { std::cout << msg << '\n'; std::cout << "- gName: " << gName << '\n'; std::cout << "- tName: " << tName << '\n'; std::cout << "- lName: " << lName << '\n'; } };
You can use it in the translation unit having main(): lang/inlinethreadlocal1.cpp #include "inlinethreadlocal.hpp" #include <thread>
void foo();
int main() { myThreadData.print("main() begin:");
std::thread t(foo); t.join(); myThreadData.print("main() end:"); } And you can use the header file in another translation unit defining foo(), which is called in a different thread: lang/inlinethreadlocal2.cpp #include "inlinethreadlocal.hpp"
void foo() { myThreadData.print("foo() begin:");
3.5 Afternotes 29
- tName: tls - lName: local main() later: - gName: thread1 name - tName: thread1 name - lName: thread1 name foo() begin: - gName: thread1 name - tName: tls - lName: local foo() end: - gName: thread2 name - tName: thread2 name - lName: thread2 name main() end: - gName: thread2 name - tName: thread1 name - lName: thread1 name
3.5 Afternotes Inline variables were motivated by David Krauss in and first proposed by Hal Finkel and Richard Smith in. The finally accepted wording was formulated by Hal Finkel and Richard Smith in: C++17 2019/02/16 18:57 page 30
30
Chapter 4 Aggregate Extensions
One way to initialize objects in C++ is aggregate initialization, which allows the initialization of an aggregate1 from multiple values with curly braces: struct Data { std::string name; double value; };
1 Aggregates are either arrays or simple, C-like classes that have no user-provided constructors, no private or protected non-static data members, no virtual functions, and before C++17 no base classes.
31Josuttis: C++17 2019/02/16 18:57 page 32
Note that you can skip initial values. In that case the elements are zero initialized (calling the default constructor or initializing fundamental data types with 0, false, or nullptr). For exam- ple: PData a{}; // zero-initialize all elements PData b{{"msg"}}; // same as {{"msg",0.0},false} PData c{{}, true}; // same as {{nullptr,0.0},true} PData d; // values of fundamental types are unspecified Note the difference between using empty curly braces and no braces at all: • The definition of a zero-initializes all members so that the string name is default constructed, the double value is initialized by 0.0, and the bool flag is initialized by false. • The definition of d only initializes the string name by calling the default constructor; all other members are not initialized and have a unspecified value. You can also derive aggregates from non-aggregate classes. For example: struct MyString : std::string { void print() const { if (empty()) { std::cout << "<undefined>\n"; } else { std::cout << c_str() << '\n'; } } };
MyString x{{"hello"}}; MyString y{"world"}; You can even derive aggregates from multiple base classes and/or aggregates: template<typename T> struct D : std::string, std::complex<T> { std::string data; }; which you could then use and initialize as follows: D<float> s{{"hello"}, {4.5,6.7}, "world"}; // OK since C++17 D<float> t{"hello", {4.5, 6.7}, "world"}; // OK since C++17 std::cout << s.data; // outputs: ”world” std::cout << static_cast<std::string>(s); // outputs: ”hello” std::cout << static_cast<std::complex<float>>(s); // outputs: (4.5,6.7) The inner initializer lists are passed to the base classes in the order of the base class declarations. The new feature also helps defining an overload of lambdas with very little code.Josuttis: C++17 2019/02/16 18:57 page 34
struct Base { friend struct Derived; private: Base() { } };
int main() { Derived d1{}; // ERROR since C++17Josuttis: C++17 2019/02/16 18:57 page 35
4.5 Afternotes 35
4.5 Afternotes Extended aggregate initialization was first proposed by Oleg Smolsky in n4404. The finally accepted wording was also formulated by Oleg Smolsky in. link/p0017r1. The type trait std::is_aggregate<> was introduced as a US national body comment for the standardization of C++17 (see).Josuttis: C++17 2019/02/16 18:57 page 36
36
Chapter 5 Mandatory Copy Elision or Passing Unmaterialized Objects
The topic of this chapter can be seen from two points of view: • Technically, C++17 introduces a new rule for mandatory copy elision under certain conditions: The former option to eliminate copying temporary objects, when passing or returning them by value, now becomes mandatory. • As a result we deal with passing around the values of unmaterialized objects for initialization. I will introduce this feature technically, coming later to the effect and terminology of materialization.
MyClass bar() { return MyClass(); // returns temporary }
37Josuttis: C++17 2019/02/16 18:57 page 38
int main() { foo(MyClass()); // pass temporary to initialize param MyClass x = bar(); // use returned temporary to initialize x foo(bar()); // use returned temporary to initialize param } However, because these optimizations were not mandatory, copying the objects had to be possible by providing an implicit or explicit copy or move constructor. That is, although the copy/move constructor was usually not called, it had to exist. Code like this didn’t compile when no copy/move constructor was defined. Thus, with the following definition of class MyClass the code above did not compile: class MyClass { public: ... // no copy/move constructor defined: MyClass(const MyClass&) = delete; MyClass(MyClass&&) = delete; ... }; It was enough not to have the copy constructor, because the move constructor is only implicitly available, when no copy constructor (or assignment operator or destructor) is user-declared. The copy elision to initialize objects from temporaries is mandatory since C++17. In fact, what we will see later is that we simply pass a value for initialization as argument or return value that is used then to materialize a new object. This means that even with a definition of class MyClass not enabling copying at all, the example above compiles. However, note that all other optional copy elisions still are optional and require a callable copy or move constructor. For example: MyClass foo() { MyClass obj; ... return obj; // still requires copy/move support } Here, inside foo() obj is a variable with a name (which is an lvalue). So the named return value optimization (NRVO) is used, which still requires copy/move support. This would even be the case if obj is a parameter: MyClass bar(MyClass obj) // copy elision for passed temporaries {Josuttis: C++17 2019/02/16 18:57 page 39
... return obj; // still requires copy/move support } While passing a temporary (which is a prvalue) to the function is no longer a copy/move, returning the parameter requires copy/move support, because the returned object has a name. As part of this change a couple of modifications and clarifications in the terminology of value categories were made.
int main() { int i = create<int>(42); std::unique_ptr<int> up = create<std::unique_ptr<int>>(new int{42}); std::atomic<int> ai = create<std::atomic<int>>(42);Josuttis: C++17 2019/02/16 18:57 page 40
} As another effect, for classes with explicitly deleted move constructors, you can now return tempo- raries by value and initialize objects with them: class CopyOnly { public: CopyOnly() { } CopyOnly(int) { } CopyOnly(const CopyOnly&) = default; CopyOnly(CopyOnly&&) = delete; // explicitly deleted };
CopyOnly ret() { return CopyOnly{}; // OK since C++17 }
And in C++11 we got movable objects, which were semantically objects for the right side of an assignment only, but could be modified, because an assignment operator could steal their value. For this reason, the category xvalue was introduced and the former category rvalue got a new name prvalue.
expression
glvalue rvalue
X v; const X c;
expression
glvalue rvalue
The key approach to explain value categories now is that in general we have two kinds of expres- sions • glvalues: expressions for locations of objects or functions • prvalues: expressions for initializations An xvalue is then considered a special location, representing an object whose resources can be reused (usually because it is near the end of its lifetime). C++17 then introduces a new term, called materialization (of a temporary) for the moment a prvalue becomes a temporary object. Thus, a temporary materialization conversion is a prvalue-to- xvalue conversion. Any time a prvalue validly appears where a glvalue (lvalue or xvalue) is expected, a temporary object is created and initialized with the prvalue (recall that prvalues are primarily “initializing val- ues”), and the prvalue is replaced by an xvalue designating the temporary. So in the example above, we strictly speaking have: void f(const X& p); // accepts an expression of any value category, // but expects a glvalue
1 Thanks to Richard Smith and Graham Haynes for pointing that out.Josuttis: C++17 2019/02/16 18:57 page 44
5.5 Afternotes The mandatory copy elision for initializations from temporaries was first proposed by Richard Smith in. The finally accepted wording was also formulated by Richard Smith in: C++17 2019/02/16 18:57 page 45
Chapter 6 Lambda Extensions
Lambdas, introduced with C++11, and generic lambdas, introduced with C++14, are a success story. They allow us to specify functionality as arguments, which makes it a lot easier to specify behavior right where it is needed. C++17 improved their abilities to allow the use of lambdas in even more places: • in constant expressions (i.e., at compile time) • in places where you need a copy of the current object (e.g., when calling lambdas in threads)
45Josuttis: C++17 2019/02/16 18:57 page 46
To find out at compile time whether a lambda is valid for a compile-time context, you can declare it as constexpr: auto squared3 = [](auto val) constexpr { // OK since C++17 return val*val; }; With specified return types the syntax looks as follows: auto squared3i = [](int val) constexpr -> int { // OK since C++17 return val*val; }; The usual rules regarding constexpr for functions apply: If the lambda is used in a run-time context, the corresponding functionality is performed at run time. However, using features in a constexpr lambda that are not valid in a compile-time context results in a compile-time error:1 auto squared4 = [](auto val) constexpr { static int calls=0; // ERROR: static variable in compile-time context ... return val*val; }; For an implicit or explicit constexpr lambda, the function call operator is constexpr. That is, the definition of auto squared = [](auto val) { // implicitly constexpr since C++17 return val*val; }; converts into the closure type: class CompilerSpecificName { public: ... template<typename T> constexpr auto operator() (T val) const { return val*val; } }; Note that the function call operator of the generated closure type is automatically constexpr here. In general since C++17, the generated function call operator is constexpr if either the lambda is explicitly defined to be constexpr or it is implicitly constexpr (as it is the case here).
1 Features not allowed in compile-time context are, for example, static variables, virtual functions, try and catch, and new and delete.Josuttis: C++17 2019/02/16 18:57 page 47
class Data { private: std::string name; public: Data(const std::string& s) : name(s) { } auto startThreadWithCopyOfThis() const { // start and return new thread using this after 3 seconds:Josuttis: C++17 2019/02/16 18:57 page 49
int main() { std::thread t; { Data d{"c1"}; t = d.startThreadWithCopyOfThis(); } // d is no longer valid t.join(); } The lambda takes a copy of *this, which means that a copy of d is passed. Therefore, it is no problem that probably the thread uses the passed object after the destructor of d was called. If we’d have captured this with [this], [=], or [&], the thread runs into undefined behavior, because when printing the name in the lambda passed to the thread the lambda would use a member of a destroyed object.
6.4 Afternotes constexpr lambdas were first proposed by Faisal Vali, Ville Voutilainen, and Gabriel Dos Reis in. The finally accepted wording was formulated by Faisal Vali, Jens Maurer, and Richard Smith in. Capturing *this in lambdas was first proposed by H. Carter Edwards, Christian Trott, Hal Finkel Jim Reus, Robin Maffeo, and Ben Sander in. The finally accepted wording was formulated by H. Carter Edwards, Daveed Vandevoorde, Christian Trott, Hal Finkel, Jim Reus, Robin Maffeo, and Ben Sander in: C++17 2019/02/16 18:57 page 50
50
Chapter 7 New Attributes and Attribute Features
Since C++11 you can specify attributes (formal annotations that enable or disable warnings). With C++17, new attributes were introduced. In addition, attributes can now be used at a few some more places and with some additional convenience.
51Josuttis: C++17 2019/02/16 18:57 page 52
value gets called immediately. which waits for the end of the started functionality. So not us- ing the return value silently contradicts the whole purpose why std::async() was called. With [[nodiscard]] the compilers warns about this. • Another example is the member function empty(), which checks whether an object (contain- er/string) has no elements. Programmers surprisingly often call this to “empty” the container (remove all elements): cont.empty(); This wrong application of empty() can often be detected, because it doesn’t use the return value. So, marking the member function accordingly: class MyContainer { ... public: [[nodiscard]] bool empty() const noexcept; ... }; helps to detect such an error. Although the language feature was introduced with C++17, it is not used yet in the standard library. The proposal to apply this feature there simply came too late for C++17. So one of the key moti- vations for this feature, adding it to the declaration of std::async() was not done yet. However, for all the examples discussed above, corresponding fixes will come with the next C++ standard (see for the already accepted proposal). However, to make your code more portable, you should use it instead of non-portable ways (such as [[gnu:warn_unused_result]] for gcc or clang) to mark functions accordingly. When defining operator new(), you should mark the functions with [[nodiscard]] as it is done, for example, when defining a header file to track all calls of new.
int main() { foo(); // WARNING: return value not used [[maybe_unused]] foo(); // ERROR: attribute not allowed here [[maybe_unused]] auto x = foo(); // OK }
very well using a statement of case 1 and case 2. Note that the attribute has to be used in an empty statement. Thus, you need a semicolon at its end. Using the attribute as last statement in a switch statement is not allowed.
7.5 Afternotes The three new attributes were first proposed by Andrew Tomazos in. The finally accepted wording for the [[nodiscard]] attribute was formulated by Andrew Toma- zos in. The finally accepted wording for the [[maybe_unused]]Josuttis: C++17 2019/02/16 18:57 page 55
7.5 Afternotes 55
56
Chapter 8 Other Language Features
There are a couple of minor or small changes to the C++ core language, which are described in this chapter.
57Josuttis: C++17 2019/02/16 18:57 page 58
s.replace(0,8,"").replace(s.find("even"),4,"sometimes") .replace(s.find("you don't"),9,"I"); The usual assumption is that this code is valid replacing the first 8 characters by nothing, "even" by "sometimes", and "you don’t" by "I" so that we get: it sometimes works if I believe However, before C++17, this outcome is not guaranteed, because the find() calls, returning where to start with a replacement, might be performed at any time while the whole statement gets processed and before their result is needed. In fact, all find() calls, computing the starting index of the replacements, might be processed before any of the replacements happens, so that the resulting string becomes: it sometimes works if I believe But other outcomes are also possible: it sometimes workIdon’t believe it even worsometiIdon’t believe it even worsometimesf youIlieve As another example, consider using the output operator to print values computed by expressions that depend on each other: std::cout << f() << g() << h(); The usual assumption is that f() is called before g() and both are called before h(). However, this assumption is wrong. f(), g(), and h() might be called in any order, which might have surprising or even nasty effects when these calls depend on each other. As a concrete example, up to C++17 the following code has undefined behavior: i = 0; std::cout << ++i << ' ' << --i << '\n'; Before C++17, it might print 1 0; but it might also print 0 -1 or even 0 0. It doesn’t matter whether i is int or a user-defined type (for fundamental types, some compilers at least warn about this problem).
1 A similar example is part of the motivation in the paper proposing the new feature with the comment This code has been reviewed by C++ experts world-wide, and published (The C++ Programming Language, 4th edition.)Josuttis: C++17 2019/02/16 18:57 page 59
To fix all this unexpected behavior, for some operators the evaluation guarantees were refined so that they now specify a guaranteed evaluation order: • For e1 [ e2 ] e1 . e2 e1 .* e2 e1 ->* e2 e1 << e2 e1 >> e2 e1 is guaranteed to get evaluated before e2 now, so that the evaluation order is left to right. However, note that the evaluation order of different arguments of the same function call is still undefined. That is, in e1.f(a1,a2,a3) e1 is guaranteed to get evaluated before a1, a2, and a3 now. However, the evaluation order of a1, a2, and a3 is still undefined. • In all assignment operators e2 = e1 e2 += e1 e2 *= e1 ... the right-hand side e1 is guaranteed to get evaluated before the left-hand side e2 now. • Finally, in new expressions like new Type(e) the allocation is now guaranteed to be performed before the evaluation e, and the initialization of the new value is guaranteed to happen before any usage of the allocated and initialized value. All these guarantees apply to both fundamental types and user-defined types. As a consequence, since C++17 std::string s = "I heard it even works if you don't believe"; s.replace(0,8,"").replace(s.find("even"),4,"always") .replace(s.find("don't believe"),13,"use C++17"); is guaranteed to change the value of s to: it always works if you use C++17 Thus, each replacement in front of a find() expression is done before the find() expression is evaluated. As another consequence, for the statements i = 0; std::cout << ++i << ' ' << --i << '\n'; the output is now guaranteed to be 1 0 for any type of i that supports these operands.Josuttis: C++17 2019/02/16 18:57 page 60
However, the undefined order for most of the other operators still exists. For example: i = i++ + i; // still undefined behavior Here, the i on the right might be the value of i before or after it was incremented. Another application of the new expression evaluation order is the function that inserts a space before passed arguments.
Backward Incompatibilities The new guaranteed evaluation order might impact the output of existing programs. This is not just theory. Consider, for example, the following program: lang/evalexcept.cpp #include <iostream> #include <vector>
int main() { try { std::vector<int> vec{7, 14, 21, 28}; print10elems(vec); } catch (const std::exception& e) { // handle standard exception std::cerr << "EXCEPTION: " << e.what() << '\n'; } catch (...) { // handle any other exception std::cerr << "EXCEPTION of unknown type\n"; } } Because the vector<> in this program only has 4 elements, the program throws an exception in the loop in print10elems() when calling at() as part of an output statement for an invalid index: std::cout << "value: " << v.at(i) << "\n"; Before C++17 the output could be: value: 7 value: 14 value: 21Josuttis: C++17 2019/02/16 18:57 page 61
value: 28 EXCEPTION: ... because at() was allowed to be evaluated before "value " was written, so that for the wrong index the output was skipped at all.2 Since C++17, the output is guaranteed to be: value: 7 value: 14 value: 21 value: 28 value: EXCEPTION: ... because the output of "value " has to be performed before at() gets evaluated.
2 This was, for example, the behavior of older GCC or Visual C++ versions.Josuttis: C++17 2019/02/16 18:57 page 62
For unscoped enumerations (enum without class) having no specified underlying type, you still can’t use list initialization for numeric values: enum Flag { bit1=1, bit2=2, bit3=4 }; Flag f1{0}; // still ERROR Note also that list initialization still doesn’t allow narrowing, so you can’t pass a floating-point value: enum MyInt : char { }; MyInt i5{42.2}; // still ERROR This feature was motivated to support the trick of defining new integral types just by defining an enumeration type mapping to an existing integral type as done here with MyInt. Without the feature, there is no way to initialize a new object without a cast. In fact, since C++17 the C++ standard library also provides std::byte, which directly uses this feature.
int main() { // init list of floating-point values: std::initializer_list<double> values { 0x1p4, // 16 0xA, // 10 0xAp2, // 40 5e0, // 5 0x1.4p+2, // 5 1e5, // 100000 0x1.86Ap+16, // 100000 0xC.68p+2, // 49.625 };
For example, 0xAp2 is a way to specify the decimal value 40 (10 times 2 to the power of 2). The value could also be expressed as 0x1.4p+5, which is 1.25 times 32 (0.4 is a hexadecimal quarter and 2 to the power of 5 is 32). The program has the following output: dec: 16 hex: 0x1p+4 dec: 10 hex: 0x1.4p+3 dec: 40 hex: 0x1.4p+5 dec: 5 hex: 0x1.4p+2 dec: 5 hex: 0x1.4p+2 dec: 100000 hex: 0x1.86ap+16 dec: 100000 hex: 0x1.86ap+16 dec: 49.625 hex: 0x1.8dp+5 As you can see in the example program, support for hexadecimal floating-point notation already existed for output streams using the std::hexfloat manipulator (available since C++11).
3 ISO Latin-1 is formally named ISO-8859-1, while the ISO character set with the European Euro symbol e, ISO-8859-15, is also named ISO Latin-9 (yes, this is not a spelling error).Josuttis: C++17 2019/02/16 18:57 page 65
Note that u8 can only be used for single characters and characters that have a single byte (code unit) in UTF-8. An initialization such as: char c = u8'ö'; is not allowed because the value of the German umlaut ö in UTF-8 is a sequence of two bytes, 195 and 182 (hexadecimal C3 B6). As a result both character and string literals now accept the following prefixes: • u8 for single-byte US-ASCII and UTF-8 encoding. • u for two-byte UTF-16 encoding. • U for four-byte UTF-32 encoding. • l for wide characters without specific encoding, which might have two or four bytes.
... };
template<typename T> void call(T op1, T op2) { op1(); op2(); }
void f1() { std::cout << "f1()\n"; } void f2() noexcept { std::cout << "f2()\n"; }
int main() { call(f1, f2); // ERROR since C++17 } The problem is that since C++17 f1() and f2() have different types so that the compiler no longer finds a common type T for both types when instantiating the function template call(). With C++17 you have to use two different types if this should still be possible: template<typename T1, typename T2> void call(T1 op1, T2 op2) { op1(); op2(); } If you want or have to overload on all possible function types, you also have to double the overloads now. This, for example, applies to the definition of the standard type trait std::is_function<>. The primary template is defined so that in general a type T is no function: // primary template (in general type T is no function): template<typename T> struct is_function : std::false_type { }; The template derives from std::false_type so that is_function<T>::value in general yields false for any type T. For all types that are functions, partial specializations exist, which derive from std::true_type so that the member value yields true for them: // partial specializations for all function types: template<typename Ret, typename... Params> struct is_function<Ret (Params...)> : std::true_type { };
template<typename T> class C { // OK since C++11: static_assert(std::is_default_constructible<T>::value, "class C: elements must be default-constructible");Josuttis: C++17 2019/02/16 18:57 page 69
// OK since C++17: static_assert(std::is_default_constructible_v<T>); ... }; The new assertion without the message also uses the new type traits suffix _v.
8.10 Afternotes Nested namespace definitions were first proposed in 2003 by Jon Jagger in n1524. Robert Kawulak brought up a new proposal in 2014 in. The finally accepted wording was formulated by Robert Kawulak and Andrew Tomazos in https:// wg21.link/n4230. The refined expression evaluation order was first proposed by Gabriel Dos Reis, Herb Sutter, and Jonathan Caves in. The finally accepted wording was formulated by Gabriel Dos Reis, Herb Sutter, and Jonathan Caves in. Relaxed enum initialization was first proposed by Gabriel Dos Reis in p0138r0. The finally accepted wording was formulated by Gabriel Dos Reis in. link/p0138r2.Josuttis: C++17 2019/02/16 18:57 page 70
Fixing list initialization with auto was first proposed by Ville Voutilainen in. link/n3681 and. The final fix for list initialization with auto was proposed by James Dennett in. Hexadecimal Floating-Point Literals were first proposed by Thomas Köppe in. link/p0245r0. The finally accepted wording was formulated by Thomas Köppe in https: //wg21.link/p0245r1. The prefix for UTF-8 character literals was first proposed by Richard Smith in. link/n4197. The finally accepted wording was formulated by Richard Smith in. link/n4267. Making exception specifications part of the function type was first proposed by Jens Maurer in. The finally accepted wording was formulated by Jens Maurer in. Single-argument static_assert was accepted as proposed by Walter E. Brown in https:// wg21.link/n3928. The preprocessor clause __has_include() was first proposed by Clark Nelson and Richard Smith as part of. The finally accepted wording was formulated by Clark Nelson and Richard Smith in: C++17 2019/02/16 18:57 page 71
Part II Template Features This part introduces the new language features C++17 provides for generic programming (i.e., tem- plates). While we start with class template argument deduction, which also impacts just the usage of templates, the later chapters especially provide feature for programmers of generic code (function templates, class templates, and generic libraries).
71Josuttis: C++17 2019/02/16 18:57 page 72
72
Chapter 9 Class Template Argument Deduction
Before C++17, you always have to explicitly specify all template parameter types for class templates. For example, you can’t omit the double here: std::complex<double> c{5.1,3.3}; or omit the need to specify std::mutex here a second time: std::mutex mx; std::lock_guard<std::mutex> lg(mx); Since C++17, the constraint that you always have to specify the template arguments explicitly was relaxed. By using class template argument deduction (CTAD), you can skip defining the templates arguments explicitly if the constructor is able to deduce all template parameters. For example: • You can declare now: std::complex c{5.1,3.3}; // OK: std::complex<double> deduced • You can implement now: std::mutex mx; std::lock_guard lg{mx}; // OK: std::lock_guard<std_mutex> deduced • You can even let containers deduce element types: std::vector v1 {1, 2, 3} // OK: std::vector<int> deduced std::vector v2 {"hello", "world"}; // OK: std::vector<const char*> deduced
73Josuttis: C++17 2019/02/16 18:57 page 74
}; the declaration: std::tuple t{42, 'x', nullptr}; deduces the type of t as std::tuple<int, char, std::nullptr_t>. You can also deduce non-type template parameters. For example, we can deduce template para- meters for both the element type and the size from a passed initial array as follows: template<typename T, int SZ> class MyClass { public: MyClass (T(&)[SZ]) { ... } };
1 Note that passing the initial argument by reference is important here, because otherwise by language rules the constructor declares a pointer so that SZ can’t be deduced.Josuttis: C++17 2019/02/16 18:57 page 76
template<typename... Args> auto make_vector(const Args&... elems) { return std::vector{elems...}; }
template<typename CB> class CountCalls { private: CB callback; // callback to call long calls = 0; // counter for calls public: CountCalls(CB cb) : callback(cb) { } template<typename... Args> auto operator() (Args&&... args) { ++calls; return callback(std::forward<Args>(args)...); } long count() const { return calls; } }; Here, the constructor, taking the callback to wrap, enables to deduce its type as template parameter CB. For example, we can initialize an object passing a lambda as argument: CountCalls sc([](auto x, auto y) { return x > y; });Josuttis: C++17 2019/02/16 18:57 page 77
which means that the type of the sorting criterion sc is deduced as CountCalls<TypeOfTheLambda>. This way, we can for example count the number of calls for a passed sorting criterion: std::sort(v.begin(), v.end(), std::ref(sc)); std::cout << "sorted with " << sc.count() << " calls\n"; Here, the wrapped lambda is used as sorting criterion, which however has to be passed by reference, because otherwise std::sort() only uses the counter of its own copy of the passed counter, because std::sort() itself takes the sorting criterion by value. However, we can pass a wrapped lambda to std::for_each(), because this algorithm (in the non-parallel version) returns its own copy of the passed callback to be able to use its resulting state: auto fo = std::for_each(v.begin(), v.end(), CountCalls([](auto i) { std::cout << "elem: " << i << '\n'; })); std::cout << "output with " << fo.count() << " calls\n";
// all deduced: C c1(22, 44.3, "hi"); // OK: T1 is int, T2 is double, T3 is const char* C c2(22, 44.3); // OK: T1 is int, T2 and T3 are double C c3("hi", "guy"); // OK: T1, T2, and T3 are const char*
// all specified:Josuttis: C++17 2019/02/16 18:57 page 78
2 Specifying the type only doesn’t work because then the container tries to create a lambda of the given type, which is not allowed, because the default constructor is only callable by the compiler. With C++20 this will probably be possible.Josuttis: C++17 2019/02/16 18:57 page 79
}; If for this type we’d call: Pair2 p2{"hi", "world"}; // deduces pair of pointers T1 and T2 both would be deduced as const char*. Because class std::pair<> is declared so that the constructors take the arguments by reference, you might expect now the following initialization not to compile: std::pair p{"hi", "world"}; // seems to deduce pair of arrays of different size, but... But it compiles. The reason is that we use deduction guides.
T val; };
S(const char*) -> S<std::string>; // map S<> for string literals to S<std::string> the following declarations are possible, where std::string is deduced as type of T from const char* because the passed string literal implicitly converts to it: S s1{"hello"}; // OK, same as: S<std::string> s1{"hello"}; S s2 = {"hello"}; // OK, same as: S<std::string> s2 = {"hello"}; S s3 = S{"hello"}; // OK, both S deduced to be S<std::string> Note that aggregates need list initialization (the deduction works, but the initialization is not al- lowed): S s4 = "hello"; // ERROR (can’t initialize aggregates that way)
3 A non-template function is preferred over a template unless other aspects of overload resolution matter more.Josuttis: C++17 2019/02/16 18:57 page 83
template<typename T> explicit Ptr(T) -> Ptr<T*>; which would have the following effect: Ptr p1{42}; // deduces Ptr<int*> due to deduction guide Ptr p2 = 42; // deduces Ptr<int> due to constructor int i = 42; Ptr p3{&i}; // deduces Ptr<int**> due to deduction guide Ptr p4 = &i; // deduces Ptr<int*> due to constructor
T val; }; any trial of class template argument deduction without a deduction guide is an error: A i1{42}; // ERROR A s1("hi"); // ERROR A s2{"hi"}; // ERROR A s3 = "hi"; // ERROR A s4 = {"hi"}; // ERROR You have to pass the argument for type T explicitly: A<int> i2{42}; A<std::string> s5 = {"hi"}; But after a deduction guide such as: A(const char*) -> A<std::string>; you can initialize the aggregate as follows: A s2{"hi"}; // OK A s4 = {"hi"}; // OK However, as usual for aggregates, you still need curly braces. Otherwise, type T is successfully deduced, but the initialization is an error: A s1("hi"); // ERROR: T is string, but no aggregate initialization A s3 = "hi"; // ERROR: T is string, but no aggregate initialization The deduction guides for std::array are another example of deduction guides for aggregates.
4 The original declaration uses class instead of typename and declared the constructors as conditionally explicit.Josuttis: C++17 2019/02/16 18:57 page 85
template<typename... Types> tuple(Types...) -> tuple<Types...>; // deduce argument types by-value }; As a consequence, the declaration: std::tuple t{42, "hello", nullptr}; deduces the type of t as std::tuple<int, const char*, std::nullptr_t>.
the two arguments are taken as elements of an initializer list (which has higher priority according to the overload resolution rules). That is, is equivalent to: std::vector<std::set<float>::iterator> v2{s.begin(), s.end()}; so that we initialize a vector of two elements, the first referring to the first element and the second representing the position behind the last element. On the other hand, consider: std::vector v3{"hi", "world"}; // OK, deduces std::vector<const char*> std::vector v4("hi", "world"); // OOPS: fatal run-time error While the declaration of v3 also initializes the vector with two elements (both being C strings), the second causes a fatal runtime error, which hopefully causes a core dump. The problem is that string literals convert to character pointers, which are valid iterators. Thus, we pass two iterators that do not point to the same object. In other words: We pass an invalid range. Depending on where the two literals are stored, you get a std::vector<const char> with an arbitrary number of elements. If it is too big you get a bad_alloc exception, or you get a core dump because there is no distance at all, or you get a range of some undefined characters stored in between. Thus, using curly braces is always the best, when initializing the elements of a vector. The only exception is when a single vector is passed (where the copy constructor is preferred). When passing something else, using parentheses is better.
std::array<> Deduction A more interesting example provides class std::array<>: To be able to deduce both the element type and the number of elements: std::array a{42,45,77}; // OK, deduces std::array<int,3> the following deduction guide is defined: // let std::array<> deduce their number of elements (must have same type): namespace std { template<typename T, typename... U> array(T, U...) -> array<enable_if_t<(is_same_v<T,U> && ...), T>, (1 + sizeof...(U))>; } The deduction guide uses the fold expression (is_same_v<T,U> && ...) to ensure that the types of all passed arguments are the same.5 Thus, the following is not possible: std::array a{42,45,77.7}; // ERROR: types differ
9.3 Afternotes 89
9.3 Afternotes Class template argument deduction was first proposed in 2007 by Michael Spertus in https: //wg21.link/n2332. The proposal came back in 2013 by Michael Spertus and David Vandevo- orde in. The finally accepted wording was formulated by Michael Spertus, Faisal Vali, and Richard Smith in with modifications by Michael Spertus, Faisal Vali, and Richard Smith in, by Jason Mer- rill in, and by Michael Spertus and Jason Merrill (as a defect report against C++17) in. The support for class template argument deduction in the standard library was added by Michael Spertus, Walter E. Brown, and Stephan T. Lavavej in and (as a defect report against C++17) in: C++17 2019/02/16 18:57 page 90
90
Chapter 10 Compile-Time if
With the syntax if constexpr(. . . ), the compiler uses a compile-time expression to decide at com- pile time whether to use the then part or the else part (if any) of an if statement. The other part (if any) gets discarded, so that no code gets generated. This does not mean that the discarded part it is completely ignored, though. It will be checked like code of unused templates. For example: tmpl/ifcomptime.hpp #include <string>
91Josuttis: C++17 2019/02/16 18:57 page 92
#include "ifcomptime.hpp" #include <iostream>
int main() { std::cout << asString(42) << '\n'; std::cout << asString(std::string("hello")) << '\n'; std::cout << asString("hello") << '\n'; }
• When passing a string literal (i.e., type const char*), the then parts of the first and second if get discarded. So, each invalid combination can’t occur any longer at compile-time and the code compiles success- fully. Note that a discarded statement is not ignored. The effect is that it doesn’t get instantiated, when depending on template parameters. The syntax must be correct and calls that don’t depend on tem- plate parameters must be valid. In fact, the first translation phase (the definition time) is performed, which checks for correct syntax and the usage of all names that don’t depend on template parameters. All static_asserts must also be valid, even in branches that aren’t compiled. For example: template<typename T> void foo(T t) { if constexpr(std::is_integral_v<T>) { if (t > 0) { foo(t-1); // OK } } else { undeclared(t); // error if not declared and not discarded (i.e., T is not integral) undeclared(); // error if not declared (even if discarded) static_assert(false, "no integral"); // always asserts (even if discarded) } } With a conforming compiler, this example never compiles for two reasons: • Even if T is an integral type, the call of undeclared(); // error if not declared (even if discarded) in the discarded else part is an error if no such function is declared, because this call doesn’t depend on a template parameter. • The call of static_assert(false, "no integral"); // always asserts (even if discarded) always falls even if it is part of the discarded else part, because again this call doesn’t depend on a template parameter. A static assertion repeating the compile-time condition would be fine: static_assert(!std::is_integral_v<T>, "no integral"); Note that some compilers (e.g., Visual C++ 2013 and 2015) do not implement or perform the two- phase translation of templates correctly. They defer most of the first phase (the definition time) toJosuttis: C++17 2019/02/16 18:57 page 94
the second phase (the instantiation time) so invalid function calls and even some syntax errors might compile.1
1 Visual C++ is on the way to fix this behavior step-by-step, which, however, requires specific options such as /permissive-, because it might break existing code. 2 Thanks to Graham Haynes, Paul Reilly, and Barry Revzin for bringing all these aspects of compile-time if to attention.Josuttis: C++17 2019/02/16 18:57 page 95
This pattern does not apply to compile-time if, because in the second form the return type depends on two return statements instead of one, which can make a difference. For example, modifying the example above results in code that might or might not compile: auto foo() { if constexpr (sizeof(int) > 4) { return 42; } return 42u; } If the condition is true (the size of int is greater than 4), the compiler deduces two different return types, which is not valid. Otherwise, we have only one return statement that matters, so that the code compiles.
} However, the condition for the the compile-time if is always instantiated and needs to be valid as a whole, so that passing a type that doesn’t support <10 no longer compiles: constexpr auto x2 = bar("hi"); // compile-time ERROR So, compile-time if does not short-circuit the instantiations. If the validity of compile-time condi- tions depend on earlier compile-time conditions, you have to nest them as done in foo(). As another example, you have to write:3 if constexpr (std::is_same_v<MyType, T>) { if constexpr (T::i == 42) { ... } } instead of just: if constexpr (std::is_same_v<MyType, T> && T::i == 42) { ... }
} else { // return type is not void: decltype(auto) ret{op(std::forward<Args>(args)...)}; ... // do something (with ret) before we return return ret; } }
4 Thanks to Graham Haynes and Barry Revzin for pointing that out.Josuttis: C++17 2019/02/16 18:57 page 100
template<typename T> void foo(T t);Josuttis: C++17 2019/02/16 18:57 page 101
int main() { if constexpr(std::numeric_limits<char>::is_signed) { foo(42); // OK } else { undeclared(42); // ALWAYS ERROR if not declared (even if discarded) } } Also the following code can never successfully compile, because one of the static assertion will always fail: if constexpr(std::numeric_limits<char>::is_signed) { static_assert(std::numeric_limits<char>::is_signed); } else { static_assert(!std::numeric_limits<char>::is_signed); } The (only) benefit of the compile-time if outside generic code is that code in the discarded statement, although it must be valid, does not become part of the resulting program, which reduces the size of the resulting executable. For example, in this program: #include <limits> #include <string> #include <array>
int main() { if (!std::numeric_limits<char>::is_signed) { static std::array<std::string,1000> arr1; ... } else { static std::array<std::string,1000> arr2; ... } } either arr1 or arr2 is part of the final executable but not both.5
5 This effect is also possible without constexpr, because compilers can optimize code that is not used away. However, with constexpr this is guaranteed behavior.Josuttis: C++17 2019/02/16 18:57 page 102
10.5 Afternotes Compile-time if was initially motivated by Walter Bright, Herb Sutter, and Andrei Alexandrescu in and Ville Voutilainen in, by proposing a static if language feature. In Ville Voutilainen proposed the feature for the first time as constexpr_if (where the feature got its name from). The finally accepted wording was formulated by Jens Maurer in: C++17 2019/02/16 18:57 page 103
Chapter 11 Fold Expressions
Since C++17, there is a feature to compute the result of using a binary operator over all the arguments of a parameter pack (with an optional initial value). For example, the following function returns the sum of all passed arguments: template<typename... T> auto foldSum (T... args) { return (... + args); // ((arg1 + arg2) + arg3) ... } Note that the parentheses around the return expression are part of the fold expression and can’t be omitted. Calling the function with foldSum(47, 11, val, -1); instantiates the template to perform: return 47 + 11 + val + -1; Calling it for foldSum(std::string("hello"), "world", "!"); instantiates the template for: return std::string("hello") + "world" + "!"; Also note that the order of fold expression arguments can differ and matters (and might look a bit counter-intuitive): As written, (... + args) results in ((arg1 + arg2) + arg3) ... which means that it repeatedly “post-adds” things. You can also write (args + ...)
103Josuttis: C++17 2019/02/16 18:57 page 104
} the call foldSumL(1, 2, 3) evaluates to: (1 + 2) + 3) This also means that the following example compiles: std::cout << foldSumL(std::string("hello"), "world", "!") << '\n'; // OK Remember that operator + is defined for standard strings provided at least one operand is a std::string. Because the left fold is used, the call first evaluates std::string("hello") + "world" which returns a std::string, so that adding the string literal "!" then also is valid. However, a call such as std::cout << foldSumL("hello", "world", std::string("!")) << '\n'; // ERROR will not compile because it evaluates to ("hello" + "world") + std::string("!") and adding two string literals is not allowed. However, if we change the implementation to: template<typename... T> auto foldSumR(T... args){ return (args + ...); // (arg1 + (arg2 + arg3)) ... } the call foldSumR(1, 2, 3) evaluates to: (1 + (2 + 3) which means that the following example no longer compiles: std::cout << foldSumR(std::string("hello"), "world", "!") << '\n'; // ERROR while the following call now compiles: std::cout << foldSumR("hello", "world", std::string("!")) << '\n'; // OK Because in almost all cases evaluation from left to right is the intention, usually, the left fold syntax with the parameter pack at the end should be preferred (unless this doesn’t work): (... + args); // preferred syntax for fold expressionsJosuttis: C++17 2019/02/16 18:57 page 106
struct A { void print() { std::cout << "A::print()\n"; } };
struct B { void print() { std::cout << "B::print()\n"; } };
struct C { void print() { std::cout << "C::print()\n"; } };
int main() { MultiBase<A,B,C> mb; mb.print(); } Here, template<typename... Bases>Josuttis: C++17 2019/02/16 18:57 page 110
template<typename... Types> std::size_t combinedHashValue (const Types&... args) { std::size_t seed = 0; // initial seed (... , hashCombine(seed,args)); // chain of hashCombine() calls return seed; }Josuttis: C++17 2019/02/16 18:57 page 111
By calling std::size_t combinedHashValue ("Hello", "World", 42,); the statement in the middle expands to: hashCombine(seed,"Hello"), (hashCombine(seed,"World"), hashCombine(seed,42); With this definition we can easily define a new hash function object for a type such as Customer: struct CustomerHash { std::size_t operator() (const Customer& c) const { return combinedHashValue(c.getFirstname(), c.getLastname(), c.getValue()); } }; which we can use to put Customers in an unordered set: std::unordered_set<Customer, CustomerHash> coll;
int main() { // init binary tree structure: Node* root = new Node{0};Josuttis: C++17 2019/02/16 18:57 page 112
11.3 Afternotes Fold expressions were first proposed by Andrew Sutton and Richard Smith in n4191. The finally accepted wording was formulated by Andrew Sutton and Richard Smith in. Support for empty sequences was later removed for operators *, +, &, and | as proposed by Thibaut Le Jehan in: C++17 2019/02/16 18:57 page 114
114
Chapter 12 Dealing with Strings as Template Parameters
Over time the different versions of C++ relaxed the rules for what can be used as templates para- meters, and with C++17 this happened again. Templates now can be used without the need to have them defined outside the current scope.
void foo() { Message<hello> msg; // OK (all C++ versions) Message<hello11> msg11; // OK since C++11
115Josuttis: C++17 2019/02/16 18:57 page 116
int num; A<&num> a; // OK since C++11 You couldn’t use a compile-time function that returned the address, which now is supported: int num; ... constexpr int* pNum() { return # } A<pNum()> b; // ERROR before C++17, now OK
12.2 Afternotes Allowing constant evaluation for all non-type template arguments was first proposed by Richard Smith in. The finally accepted wording was formulated by Richard Smith in: C++17 2019/02/16 18:57 page 117
Chapter 13 Placeholder Types like auto as Template Parameters
Since C++17 you can use placeholder types (auto and decltype(auto)) as non-type template parameter types. That means, that we can write generic code for non-type parameters of different types.
117Josuttis: C++17 2019/02/16 18:57 page 118
public: A(const std::array<T,N>&) { } A(T(&)[N]) { } ... }; This class can deduce the type of T, the type of N, and the value of N: A a2{"hello"}; // OK, deduces A<const char, 6> with N being int
std::array<double,10> sa1; A a1{sa1}; // OK, deduces A<double, 10> with N being std::size_t You can also qualify auto, for example, to require the type of the template parameter to be a pointer: template<const auto* P> struct S; And by using variadic templates, you can parameterize templates to use a list of heterogeneous con- stant template arguments: template<auto... VS> class HeteroValueList { }; or a list of homogeneous constant template arguments: template<auto V1, decltype(V1)... VS> class HomoValueList { }; For example: HeteroValueList<1, 2, 3> vals1; // OK HeteroValueList<1, 'a', true> vals2; // OK HomoValueList<1, 2, 3> vals3; // OK HomoValueList<1, 'a', true> vals4; // ERROR
using i = constant<42>; using c = constant<'x'>; using b = constant<true>; And instead of: template<typename T, T... Elements> struct sequence { };
2 Don’t confuse variable templates, which are templified variables, with variadic templates, which are tem- plates that have an arbitrary number of parameters.Josuttis: C++17 2019/02/16 18:57 page 121
#include <array>
void printArr();
#endif // VARTMPLAUTO_HPP Here, one translation unit could modify the values of two different instances of this variable template
tmpl/vartmplauto1.cpp #include "vartmplauto.hpp"
int main() { arr<int,5>[0] = 17; arr<int,5>[3] = 42; arr<int,5u>[1] = 11; arr<int,5u>[3] = 33; printArr(); } And another translation unit could print these two variables: tmpl/vartmplauto2.cpp #include "vartmplauto.hpp" #include <iostream>
void printArr() { std::cout << "arr<int,5>: "; for (const auto& elem : arr<int,5>) { std::cout << elem << ' '; } std::cout << "\narr<int,5u>: "; for (const auto& elem : arr<int,5u>) {Josuttis: C++17 2019/02/16 18:57 page 122
template<decltype(auto) N>
3 There is a bug in g++ 7, so that these are handles as one object. This bug is fixed with g++ 8.Josuttis: C++17 2019/02/16 18:57 page 123
struct S { void printN() const { std::cout << "N: " << N << '\n'; } };
int main() { S<c> s1; // deduces N as const int 42 S<(c)> s2; // deduces N as const int& referring to c s1.printN(); s2.printN();
13.4 Afternotes Placeholder types for non-type template parameters were first proposed by James Touton and Michael Spertus as part of. The finally accepted wording was formulated by James Touton and Michael Spertus in: C++17 2019/02/16 18:57 page 124
124
Chapter 14 Extended Using Declarations
Using declarations were extended to allow a comma separated list of declarations, to allow them to be used in a pack expansion. For example, you can program now: class Base { public: void a(); void b(); void c(); };
125Josuttis: C++17 2019/02/16 18:57 page 126
{ using Ts::operator()...; };
tmpl/using2.hpp
1 Both clang and Visual C++ don’t handle the overloading of operators of base classes for different types as ambiguity, so that there the using is not necessary. However, according to language rules, this is necessary as for overloaded member functions, where both compilers require it, and should be used to be portable.Josuttis: C++17 2019/02/16 18:57 page 127
template<typename T> class Base { T value{}; public: Base() { ... } Base(T v) : value{v} { ... } ... };
template<typename... Types> class Multi : private Base<Types>... { public: // derive all constructors: using Base<Types>::Base...; ... }; With the using declaration for all base class constructors, you derive for each type a corresponding constructor. Now, when declaring Multi<> type for values of three different types: using MultiISB = Multi<int,std::string,bool>; you can declare objects using each one of the corresponding constructors: MultiISB m1 = 42; MultiISB m2 = std::string("hello"); MultiISB m3 = true; By the new language rules, each initialization calls the corresponding constructor for the matching base class and the default constructor for all other base classes. Thus MultiISB m2 = std::string("hello"); calls the default constructor for Base<int>, the string constructor for Base<std::string>, and the default constructor for Base<bool>. In principle, you could also enable all assignment operators in Multi<> by specifying: template<typename... Types> class Multi : private Base<Types>... { ... // derive all assignment operators:Josuttis: C++17 2019/02/16 18:57 page 128
using Base<Types>::operator=...; };
14.3 Afternotes Comma-separated using declarations were proposed by Robert Haberlach in p0195r0. The finally accepted wording was formulated by Robert Haberlach and Richard Smith in. Various core issues requested clarifications on inheriting constructors. The finally accepted word- ing to fix them was formulated by Richard Smith in. There is a proposal by Vicente J. Botet Escriba to add a generic overload function to overload lambdas but also ordinary functions and member functions. However, the paper didn’t make it into C++17. See for details.Josuttis: C++17 2019/02/16 18:57 page 129
Part III New Library Components This part introduces the new library components of C++17.
129Josuttis: C++17 2019/02/16 18:57 page 130
130
Chapter 15 std::optional<>
In programming often we have the case that we might return/pass/use an object of a certain type. That is, we could have a value of a certain type or we might not have any value at all. Thus, we need a way to simulate semantics similar to pointers, where we can express to have no value by using nullptr. The way to handle this is to define an object of a certain type with an additional Boolean member/flag signaling whether a value exists. std::optional<> provides such objects in a type-safe way. Optional objects simply have internal memory for the contained objects plus a Boolean flag. Thus, the size usually is one byte larger than the contained object. For some contained types, there might even be no size overhead at all, provided the additional information can be placed inside the contained object. No heap memory is allocated. The objects use the same alignment as the contained type. However, optional objects are not just structures adding the functionality of a Boolean flag to a value member. For example, if there is no value, no constructor is called for the contained type (thus, you can give objects a default state that don’t have one). As with std::variant<> and std::any the resulting objects have value semantics. That is, copying is implemented as a deep copy creating an independent object with the flag and the contained value if any in its own memory. Copying a std::optional<> without a contained value is cheap; Copying a std::optional<> with a contained value is as cheap/expensive as copying the contained type/value. Move semantics are supported.
131Josuttis: C++17 2019/02/16 18:57 page 132
int main() { for (auto s : {"42", " 077", "hello", "0x33"} ) { // try to convert s to int and print the result if possible: std::optional<int> oi = asInt(s); if (oi) { std::cout << "convert '" << s << "' to int: " << *oi << "\n"; } else { std::cout << "can't convert '" << s << "' to int\n"; } } } In the program asInt() is a function to convert a passed string to an integer. However, this might not succeed. For this reason a std::optional<> is used so that we can return “no int” and avoid to define a special int value for it or throw an exception to the caller. Thus, we either return the result of calling stoi(), which initializes the return value with an int, or we return std::nullopt, signaling that we don’t have an int value. We could implement the same behavior as follows: std::optional<int> asInt(const std::string& s) { std::optional<int> ret; // initially no valueJosuttis: C++17 2019/02/16 18:57 page 133
try { ret = std::stoi(s); } catch (...) { } return ret; } In main() we call this function for different strings. for (auto s : {"42", " 077", "hello", "0x33"} ) { // convert s to int and use the result if possible: std::optional<int> oi = asInt(s); ... } For each returned std::optional<int> oi we evaluate, whether we have a value (by evaluating the object as Boolean expression) and access the value by “dereferencing” the optional object: if (oi) { std::cout << "convert '" << s << "' to int: " << *oi << "\n"; } Note that for the string "0x33" asInt() yields 0 because stoi() does not parse the string as hexadecimal value. There are alternative ways to implement the handling of the return value, such as: std::optional<int> oi = asInt(s); if (oi.has_value()) { std::cout << "convert '" << s << "' to int: " << oi.value() << "\n"; } Here, has_value() is used to check, whether a value was returns, and with value() we access it. value() is safer than operator *: It throws an exception if no value exists. Operator * should only be used when you are sure that the optional contains a value; otherwise your program will have undefined behavior.1 Note that we can improve asInt() by using the new type std::string_view.
1 Note that you might not see this undefined behavior, because operator * yields the value at the memory, which might (still) make sense.Josuttis: C++17 2019/02/16 18:57 page 134
#include <optional> #include <iostream>
class Name { private: std::string first; std::optional<std::string> middle; std::string last; public: Name (std::string f, std::optional<std::string> m, std::string l) : first{std::move(f)}, middle{std::move(m)}, last{std::move(l)} { } friend std::ostream& operator << (std::ostream& strm, const Name& n) { strm << n.first << ' '; if (n.middle) { strm << *n.middle << ' ';
} return strm << n.last; } };
int main() { Name n{"Jim", std::nullopt, "Knopf"}; std::cout << n << '\n';
Another option to access the value is by using the member function value_or(), which enables to specify a fallback value in case no value exists. For example, inside class Name we could also implement: std::cout << middle.value_or(""); // print middle name or nothing
Construction Special constructors enable to pass the arguments directly to the contained type. • You can create an optional object not having a value. In that case you have to specify the contained type: std::optional<int> o1; std::optional<int> o2(std::nullopt); This does not call any constructor for the contained type. • You can pass a value to initialize the contained type. Due to a deduction guide you don’t have to specify the contained type, then: std::optional o3{42}; // deduces optional<int> std::optional<std::string> o4{"hello"}; std::optional o5{"hello"}; // deduces optional<const char*>Josuttis: C++17 2019/02/16 18:57 page 136
Operation Effect constructors Create an optional object (might call constructor for contained type) make_optional<>() Create an optional object (passing value(s) to initialize it) destructor Destroys an optional object = Assign a new value emplace() Assign a new value to the contained type reset() Destroys any value (makes the object empty) has_value() Returns whether the object has a value conversion to bool Returns whether the object has a value * Value access (undefined behavior if no value) -> Access to member of the value (undefined behavior if no value) value() Value access (exception if no value) value_or() Value access (fallback argument if no value) swap() Swaps values between two objects ==, !=, <, <=, >, >= Compare optional objects hash<> Function object type to compute hash values
• To initialize an optional object with multiple arguments, you have to create the object or add std::in_place as first argument (the contained type can’t be deduced): std::optional o6{std::complex{3.0, 4.0}}; std::optional<std::complex<double>> o7{std::in_place, 3.0, 4.0}; Note that the latter form avoids the creation of a temporary object. By using this form, you can even pass an initializer list plus additional arguments: // initialize set with lambda as sorting criterion: auto sc = [] (int x, int y) { return std::abs(x) < std::abs(y); }; std::optional<std::set<int,decltype(sc)>> o8{std::in_place, {4, 8, -7, -2, 0, 5}, sc}; • You can copy optional objects (including type conversions). std::optional o5{"hello"}; // deduces optional<const char*> std::optional<std::string> o9{o5}; // OK Note that there is also a convenience function make_optional<>(), which allows an initialization with single or multiple arguments (without the need for the in_place argument). As usual for make... functions it decays: auto o10 = std::make_optional(3.0); // optional<double>Josuttis: C++17 2019/02/16 18:57 page 137
However, you can’t and should never rely on that. If you don’t know whether an optional object has a value, you have to call the following instead: if (o) std::cout << *o; // OK (might output nothing) Alternatively, you can use value(), which throws a std::bad_optional_access exception, if there is no contained value: std::cout << o.value(); // OK (throws if no value) std::bad_optional_access is directly derived from std::exception. Finally, you can ask for the value and pass a fallback value, which is used, if the optional object has no value: std::cout << o.value_or("fallback"); // OK (outputs fallback if no value) The fallback argument is passed as rvalue reference so that it costs nothing if the fallback isn’t used and it supports move semantics if it is used. Please note that both operator* and value() return the contained object by reference. For this reason, you have to be careful, when calling these operation directly for temporary return values. For example: std::optional<std::string> getString(); ... auto a = getString().value(); // OK: copy of contained object auto b = *getString(); // ERROR: undefined behavior if std::nullopt const auto& r1 = getString().value(); // ERROR: reference to deleted contained ob- ject auto&& r2 = getString().value(); // ERROR: reference to deleted contained ob- ject An example might be the following usage of a range-based for loop: std::optional<std::vector<int>> getVector(); ... for (int i : getVector().value()) { // ERROR: iterate over deleted vector std::cout << i << '\n'; } Note that iterating over a returned vector of int would work. So, do not blindly replace the return type of a function foo() by the corresponding optional type, calling foo().value() instead.
Comparisons You can use the usual comparison operators. Operands can be an optional object, an object of the contained type, and std::nullopt. • If both operands are objects with a value, the corresponding operator of the contained type is used. • If both operands are objects without a value they are considered to be equal (== yields true and all other comparisons yield false).Josuttis: C++17 2019/02/16 18:57 page 139
• If only one operand is an object with a value the operand without a value is considered to be less than the other operand. For example: std::optional<int> o0; std::optional<int> o1{42};
o2 == 42 // yields true o1 == o2 // yields true Note that optional Boolean or raw pointer values can result in some surprises here.
Move Semantics std::optional<> also supports move semantics. If you move the object as a whole, the state gets copied and the contained object (if any) is moved. As a result, a moved-from object still has the same state, but any value becomes unspecified. But you can also move a value into or out of the contained object. For example: std::optional<std::string> os; std::string s = "a very very very long string"; os = std::move(s); // OK, moves std::string s2 = *os; // OK copies std::string s3 = std::move(*os); // OK, moves Note that after the last call os still has a string value, but as usual for moved-from objects the value is unspecified. Thus, you can use it as long as you don’t make any assumption about which value it is. You can even assign a new string value there.
Hashing The hash value for an optional object is the hash value of the contained non-constant type (if any).
std::optional<int*> op{nullptr}; if (!op) ... // yields false if (op == nullptr) ... // yields true
std::optional<std::optional<std::complex<double>>> ooc{std::in_place, std::in_place, 4.2, 5.3}; Also you can assign new values even with implicit conversions: oos1 = "hello"; // OK: assign new value ooc.emplace(std::in_place, 7.2, 8.3); Due to the two levels of having no value, an optional of optional enables to have “no value” on the outside or on the inside, which can have different semantic meaning: *oos1 = std::nullopt; // inner optional has no value oos1 = std::nullopt; // outer optional has no value But you have to take special care to deal with the optional value: if (!oos1) std::cout << "no value\n"; if (oos1 && !*oos1) std::cout << "no inner value\n"; if (oos1 && *oos1) std::cout << "value: " << **oos1 << '\n'; However, because this is semantically more a value with two different states representing to have no value, a std::variant<> with two Boolean or monostate alternatives might be more appropriate.
15.4 Afternotes Optional objects were first proposed 2005 by Fernando Cacciola in referring to Boost.Optional as a reference implementation. This class was adopted to become part of the Library Fundamentals TS as proposed by Fernando Cacciola and Andrzej Krzemienski in. The class was adopted with other components for C++17 as proposed by Beman Dawes and Alis- dair Meredith in. Tony van Eerd significantly improved the semantics for comparison operators with https:// wg21.link/n3765 and. Vicente J. Botet Escriba harmonized theJosuttis: C++17 2019/02/16 18:57 page 142
Chapter 16 std::variant<>
With std::variant<> the C++ standard library provides a new union class, which among other benefits supports a new way of polymorphism and dealing with inhomogeneous collections. That is, it allows us to deal with elements of different data types without the need for a common base class and pointers (raw or smart).
1 Since C++11, unions in principle can have non-trivial members, but you have to implement special member functions such as the copy-constructor and destructor then, because only by programming logic you know which member is active.
143Josuttis: C++17 2019/02/16 18:57 page 144
Variants simply have internal memory for the maximum size of the underlying types plus some fixed overhead to manage which alternative is used. No heap memory is allocated.2 In general, variants can’t be empty unless you use a specific alternative to signal emptiness. How- ever, in very rare cases (such as due to exceptions during the assignment of a new value of a different type) the variant can come into a state having no value at all. As with std::optional<> and std::any the resulting objects have value semantics. Copying happens deeply by creating an independent object with the current value of the current alternative in its own memory. Therefore, copying a std::variant<> is as cheap/expensive as copying the type/value of the current alternative. Move semantics is supported.
int main() { std::variant<int, std::string> var{"hi"}; // initialized with string alternative std::cout << var.index() << '\n'; // prints 1 var = 42; // now holds int alternative std::cout << var.index() << '\n'; // prints 0 ... try { int i = std::get<0>(var); // access by index std::string s = std::get<std::string>(var); // access by type (throws excep- tion in this case) ... } catch (const std::bad_variant_access& e) { // in case a wrong type/index is used std::cerr << "EXCEPTION: " << e.what() << '\n'; ... } } The member function index() can be used to find out, which alternative is currently set (the first alternative has the index 0).
2 This is different from Boost.Variant, where memory had to be allocated to be able to recover from exceptions during value changes.Josuttis: C++17 2019/02/16 18:57 page 145
Initializations and assignment always use the best match to find out the new alternative. If the type doesn’t fit exactly, there might be surprises. Note that empty variants, variants with reference members, variants with C-style array members, and variants with incomplete types (such as void) are not allowed.3 There is no empty state. That means that for each constructed object at least one constructor has to be called. The default constructor initializes the first type with the default constructor: std::variant<std::string, int> var; // => var.index() == 0, value == ”” If there is no default constructor defined for the first type, calling the default constructor for the variant is a compile-time error: struct NoDefConstr { NoDefConstr(int i) { std::cout << "NoDefConstr::NoDefConstr(int) called\n"; } };
std::monostate To support variants, where the first type has no default constructor, a special helper type is provided: std::monostate. Objects of type std::monostate always have the same state. Thus, they always compare equal. Their own purpose is to represent an alternative type so that the variant has no value of any other type. That is, the struct std::monostate can serve as a first alternative type to make the variant type default constructible. For example: std::variant<std::monostate, NoDefConstr> v2; // OK std::cout << "index: " << v2.index() << '\n'; // prints 0 To some extent you can interpret the state to signal emptiness.4 There are various ways to check for the monostate, which also demonstrates some of the other operations, you can call for variants: if (v2.index() == 0) { std::cout << "has monostate\n"; } if (!v2.index()) { std::cout << "has monostate\n";
3 These features might be added later, but for C++17 there was not enough experience to support it. 4 In principle, std::monostate can serve as any alternative not just the first one, but, of course, then this alternative does not help to make the variant default constructible.Josuttis: C++17 2019/02/16 18:57 page 146
} if (std::holds_alternative<std::monostate>(v2)) { std::cout << "has monostate\n"; } if (std::get_if<0>(&v2)) { std::cout << "has monostate\n"; } if (std::get_if<std::monostate>(&v2)) { std::cout << "has monostate\n"; } get_if<>() uses a pointer to a variant and returns a pointer to the current alternative if the current alternative is T. Otherwise it returns nullptr. This differs from get<T>(), which takes a reference to a variant and returns the current alternative by value if the provided type is correct, and throws otherwise. As usual, you can assign a value of another alternative and even assign the monostate, signaling emptiness again: v2 = 42; std::cout << "index: " << v2.index() << '\n'; // index: 1
v2 = std::monostate{}; std::cout << "index: " << v2.index() << '\n'; // index: 0
Derived d = {{"hello"}}; std::cout << d.index() << '\n'; // prints: 1 std::cout << std::get<1>(d) << '\n'; // prints: hello d.emplace<0>(77); // initializes int, destroys string std::cout << std::get<0>(d) << '\n'; // prints: 77
namespace std { template<typename Types...> class variant; } That is, std::variant<> is a variadic class template (a feature introduced with C++11, allowing to deal with an arbitrary number of types). In addition, the following type and objects are defined: • Type std::variant_size • Type std::variant_alternative • Value std::variant_npos • Type std::monostate • Exception class std::bad_variant_access, derived from std::exception. Variants also use the two objects std::in_place_type (of type std::in_place_type_t) and std::in_place_index (of type std::in_place_index_t) defined in utility>.
Construction By default, the default constructor of a variant calls the default constructor of the first alternative: std::variant<int, int, std::string> v1; // sets first int to 0, index()==0 The alternative is value initialized, which means that is it 0, false, or nullptr, for fundamental types. If a value is passed for initialization, the best matching type is used: std::variant<long, int> v2{42}; std::cout << v2.index() << '\n'; // prints 1 However, the call is ambiguous if two types match equally well: std::variant<long, long> v3{42}; // ERROR: ambiguous std::variant<int, float> v4{42.3}; // ERROR: ambiguous std::variant<int, double> v5{42.3}; // OK std::variant<int, long double> v6{42.3}; // ERROR: ambiguous
Operation Effect constructors Create a variant object (might call constructor for underlying type) destructor Destroys an variant object = Assign a new value emplace<T>() Assign a new value to the alternative having type T emplace<Idx>() Assign a new value to the alternative with index Idx valueless_by_exception() Returns whether the variant has no value due to an exception index() Returns the index of the current alternative swap() Swaps values between two objects ==, !=, <, <=, >, >= Compare variant objects hash<> Function object type to compute hash values holds_alternative<T>() Returns whether there is a value for type T get<T>() Returns the value for the alternative with type T or throws get<Idx>() Returns the value for the alternative with index Idx or throws get_if<T>() Returns a pointer to the value for the alternative with type T or nullptr get_if<Idx>() Returns a pointer to the value for the alternative with index Idx or nullptr visit() Perform operation for the current alternative
To pass more than one value for initialization, you have to use the in_place_type or in_place_index tags: std::variant<std::complex<double>> v9{3.0, 4.0}; // ERROR std::variant<std::complex<double>> v10{{3.0, 4.0}}; // ERROR std::variant<std::complex<double>> v11{std::in_place_type<std::complex<double>>, 3.0, 4.0}; std::variant<std::complex<double>> v12{std::in_place_index<0>, 3.0, 4.0}; You can also use the in_place_index tags to resolve ambiguities or overrule priorities during the initialization: std::variant<int, int> v13{std::in_place_index<1>, 77}; // init 2nd int std::variant<int, long> v14{std::in_place_index<1>, 77}; // init long, not int std::cout << v14.index() << '\n'; // prints 1 You can even pass an initializer list followed by additional arguments: // initialize variant with a set with lambda as sorting criterion: auto sc = [] (int x, int y) { return std::abs(x) < std::abs(y); }; std::variant<std::vector<int>,Josuttis: C++17 2019/02/16 18:57 page 149
std::set<int,decltype(sc)>> v15{std::in_place_index<1>, {4, 8, -7, -2, 0, 5}, sc}; You can’t use class template argument deduction for std::variant<>. and there is no make_variant<>() convenience function (unlike for std::optional<> and std::any). Both makes no sense, because the whole goal of a variant is to deal with multiple alternatives.
try { auto s = std::get<std::string>(var); // throws exception (first int currently set) auto i = std::get<0>(var); // OK, i==0 auto j = std::get<1>(var); // throws exception (other int currently set) } catch (const std::bad_variant_access& e) { // in case of an invalid access std::cout << "Exception: " << e.what() << '\n'; } There is also an API to access the value with the option to check whether it exists: if (auto ip = std::get_if<1>(&var); ip) { std::cout << *ip << '\n'; } else { std::cout << "alternative with index 1 not set\n"; } You must pass a pointer to a variant to get_if<>() and it either returns a pointer to the current value or nullptr. Note that here if with initialization is used, which enables to check against a value just initialized. Another way to access the values of the different alternatives are variant visitors.
Comparisons For two variants of the same type (i.e., having the same alternatives in the same order) you can use the usual comparison operators. The operators act according to the following rules: • A variant with a value of an earlier alternative is less than a variant with a value with a later alternative. • If two variants have the same alternative the corresponding operators for the type of the alternatives are evaluated. Note that all objects of type std::monostate are always equal. • Two variants with the special state valueless_by_exception() being true are equal. Other- wise, any variant with valueless_by_exception() being true is less than any other variant.
For example: std::variant<std::monostate, int, std::string> v1, v2{"hello"}, v3{42}; std::variant<std::monostate, std::string, int> v4;
v1 == v4 // COMPILE-TIME ERROR
v1 == v2 // yields false v1 < v2 // yields true v1 < v3 // yields true v2 < v3 // yields false
v1 = "hello";
v1 == v2 // yields true
v2 = 41;
Move Semantics std::variant<> also supports move semantics. If you move the object as a whole, the state gets copied and the value of the current alternative is moved. As a result, a moved-from object still has the same alternative, but any value becomes unspecified. You can also move a value into or out of the contained object.
Hashing The hash value for a variant object is enabled if and only if each member type can provide a hash value. Note that the hash value is not the hash value of the current alternative.
16.3.3 Visitors They have to unambiguously provide a function call operator for each possible type. Then, the corresponding overload is used to deal with the current alternative.
struct MyVisitor { void operator() (int i) const { std::cout << "int: " << i << '\n'; } void operator() (std::string s) const { std::cout << "string: " << s << '\n'; } void operator() (long double d) const { std::cout << "double: " << d << '\n'; } };
int main() { std::variant<int, std::string, double> var(42); std::visit(MyVisitor(), var); // calls operator() for int var = "hello";Josuttis: C++17 2019/02/16 18:57 page 152
template<typename T> auto operator() (const T& val) const { std::cout << val << '\n'; } }; Thus, the call of the lambda passed to std::visit() compiles if the statement in the generated function call operator is valid (i.e., calling the output operator is valid). You can also use a lambda to modify the value of the current alternative: // double the value of the current alternative: std::visit([](auto& val) { val = val + val; }, var); Or: // restore to the default value of the current alternative; std::visit([](auto& val) { val = std::remove_reference_t<decltype(val)>{}; }, var); You can even still handle the different alternatives differently using the compile-time if language feature. For example: auto dblvar = [](auto& val) { if constexpr(std::is_convertible_v<decltype(val), std::string>) { val = val + val; } else { val *= 2; } }; ... std::visit(dblvar, var); Here, for a std::string alternative the call of the generic lambda instantiates its generic function call template to compute: val = val + val; while for other alternatives, such as int or double, the call of the lambda instantiates its generic function call template to compute: val *= 2;Josuttis: C++17 2019/02/16 18:57 page 154
int main() { std::vector<GeoObj> figure = createFigure(); for (const GeoObj& geoobj : figure) { std::visit([] (const auto& obj) { obj.draw(); // polymorphic call of draw()Josuttis: C++17 2019/02/16 18:57 page 157
}, geoobj); } } First, we define a common data type for all possible types: using GeoObj = std::variant<Line, Circle, Rectangle>; The three types don’t need any special relationship. In fact they don’t have to have a common base class, no virtual functions, and their interfaces might even differ. For example: lib/circle.hpp #ifndef CIRCLE_HPP #define CIRCLE_HPP
#include "coord.hpp" #include <iostream>
class Circle { private: Coord center; int rad; public: Circle (Coord c, int r) : center{c}, rad{r} { }
#endif Now we can put elements of these types into a collection by creating corresponding objects and passing them by value into a container: std::vector<GeoObj> createFigure() {Josuttis: C++17 2019/02/16 18:57 page 158
std::vector<GeoObj> f; f.push_back(Line{Coord{1,2},Coord{3,4}}); f.push_back(Circle{Coord{5,5},2}); f.push_back(Rectangle{Coord{3,3},Coord{6,4}}); return f; } This code would not be possible with runtime polymorphism, because then the types would have to have GeoObj as a common base class we would need a vector of pointers of GeoObj elements, and because of pointers we would have to create the objects with new so that we have to track when to call delete or use smart pointers (unique_ptr or shared_ptr). By using visitors, we can then iterate over the elements and “do the right thing” depending on the element type: std::vector<GeoObj> figure = createFigure(); for (const GeoObj& geoobj : figure) { std::visit([] (const auto& obj) { obj.draw(); // polymorphic call of draw() }, geoobj); } Here, visit() uses a generic lambda to get instantiated for each possible GeoObj type. That is, when compiling the visit() call, the lambda gets instantiated and compiled as three functions: • Compiling the code for type Line: [] (const Line& obj) { obj.draw(); // call of Line::draw() } • Compiling the code for type Circle: [] (const Circle& obj) { obj.draw(); // call of Circle::draw() } • Compiling the code for type Rectangle: [] (const Rectangle& obj) { obj.draw(); // call of Rectangle::draw() } If one of these instantiations doesn’t compile, the call of visit() does not compile at all. If all compiles, code is generated that for each element type calls the corresponding function. Note that the generated code is no if-else chain. The standard guarantees that the performance of the calls does not depend on the number of alternatives. That is, effectively we get the same behavior as a virtual function table (with something like a local virtual function table for each visit()). Note that the called draw() functions don’t have to be virtual.Josuttis: C++17 2019/02/16 18:57 page 159
If the type interfaces differ, we can use compile-time if or visitor overloading to deal with this situation (see the second example below).
int main() { using Var = std::variant<int, double, std::string>;
When we iterate we use visitors to call different functions for them. However because here we want to do different things (putting quotes around the value if the value is a string), we use a compile- time if: for (const Var& val : values) { std::visit([] (const auto& v) { if constexpr(std::is_same_v<decltype(v), const std::string&>) { std::cout << '"' << v << "\" "; } else { std::cout << v << ' '; } }, val); } So that the output becomes: 42 0.19 "hello world" 0.815 By using the visitor overloading, we could also implement this as follows: for (const auto& val : values) { std::visit(overload{ [] (auto v) { std::cout << v << ' '; }, [] (const std::string& v) { std::cout << '"' << v << "\" "; } }, val);
• Close set of types (you have to know all alternatives at compile time). • Elements all have the size of the biggest element type (an issue if element type sizes differ a lot). • Copying elements might be more expensive. In general, I would recommend now to program polymorphism with std::variant<> by default, because it is usually faster (no new and delete, no virtual functions for non-polymorphic use), a lot safer (no pointers), and usually all types are known at compile-time of all code. Just when you have to deal with reference semantics (using the same objects at multiple places) or passing objects around becomes to expensive (even with move semantics), runtime polymorphism with inheritance might still be appropriate.
16.6 Afternotes Variant objects were first proposed 2005 by Axel Naumann in refer- ring to Boost.Variant as a reference implementation. The finally accepted wording was formulated by Axel Naumann in. Tony van Eerd significantly improved the semantics for comparison operators with https:// wg21.link/p0393r3. Vicente J. Botet Escriba harmonized the API with std::optional<> and std::any with. Jonathan Wakely fixed the behavior for in_place tag types with. The restriction to disallow references, incomplete types, and arrays, and empty variants was formulated by Erich Keane with p0510r0. After C++17 was published, Mike Spertus, Walter E. Brown, and Stephan T. Lavavej fixed a minor flaw with with: C++17 2019/02/16 18:57 page 163
Chapter 17 std::any
In general, C++ is a language with type binding and type safety. Value objects are declared to have a specific type, which defines which operations are possible and how they behave. And the value objects can’t change their type. std::any is a value type that is able to change its type, while still having type safety. That is, objects can hold values of any arbitrary type but they know which type the value has they currently hold. There is no need to specify the possible types when declaring an object of this type. The trick is that objects contain both the contained value and the type of the contained value using typeid. Because the value can have any size the memory might be allocated on the heap. However, implementations should avoid the use of heap memory for small contained values, such as int. That is, if you assign a string, the object allocates memory for the value and copies the string, while also storing internally that a string was assigned. Later, run-time checks can be done to find out, which type the current value has and to use that value as its type a any_cast<> is necessary. As for std::optional<> and std::variant<> the resulting objects have value semantics. That is, copying happens deeply by creating an independent object with the current contained value and its type in its own memory. Because heap memory might be involved, copying a std::any usually is expensive and you should prefer to pass objects by reference or move values. Move semantics is partially supported.
if (a.type() == typeid(std::string)) {
163Josuttis: C++17 2019/02/16 18:57 page 164
std::string s = std::any_cast<std::string>(a); useString(s); } else if (a.type() == typeid(int)) { useInt(std::any_cast<int>(a)); } You can declare a std::any to be empty or to be initialized by a value of a specific type. The type of the initial value become the type of the contained value. By using the member function type() you can check the type of the contained value against the type ID of any type. If the object is empty, the type ID is typeid(void). To access the contained value you have to cast it to its type with a std::any_cast<>: auto s = std::any_cast<std::string>(a); If the cast fails, because the object is empty or the contained type doesn’t fit, a std::bad_any_cast is thrown. Thus, without checking or knowing the type, you better implement the following: try { auto s = std::any_cast<std::string>(a); ... } catch (std::bad_any_cast& e) { std::cerr << "EXCEPTION: " << e.what() << '\n'; } Note that std::any_cast<> creates an object of the passed type. If you pass std::string as template argument to std::any_cast<>, it creates a temporary string (a prvalue), which is then used to initialize the new object s. Without such an initialization, it is usually better to cast to a reference type to avoid creating a temporary object: std::cout << std::any_cast<const std::string&>(a); To be able to modify the value, you need a cast to the corresponding reference type: std::any_cast<std::string&>(a) = "world"; You can also call std::any_cast for the address of a std::any object. In that case, the cast returns a corresponding pointer if the type fits or nullptr if not: auto p = std::any_cast<std::string>(&a); if (p) { ... } To empty an existing std::any object you can call: a.reset(); // makes it empty or: a = std::any{}; or just:Josuttis: C++17 2019/02/16 18:57 page 165
a = {}; And you can directly check, whether the object is empty: if (a.has_value()) { ... } Note also that values are stored using their decayed type (arrays convert to pointers, and top-level references and const are ignored). For string literals this means that the value type is const char*. To check against type() and use std::any_cast<> you have to use exactly this type: std::any a = "hello"; // type() is const char* if (a.type() == typeid(const char*)) { // true ... } if (a.type() == typeid(std::string)) { // false ... } std::cout << std::any_cast<const char*>(v[1]) << '\n'; // OK std::cout << std::any_cast<std::string>(v[1]) << '\n'; // EXCEPTION These are more or less all operations. No comparison operators are defined (so, you can’t compare or sort objects), no hash function is defined, and no value() member functions are defined. And because the type is only known at run time, no generic lambdas can be used to deal with the current value independent from its type. You always need the run-time function std::any_cast<> to be able to deal with the current value, which means that you need some type specific code to reenter the C++ type system when dealing with values. However, it is possible, to put std::any objects in a container. For example: std::vector<std::any> v;
v.push_back(42); std::string s = "hello"; v.push_back(s);
Operation Effect constructors Create a any object (might call constructor for underlying type) make_any() Create a any object (passing value(s) to initialize it) destructor Destroys an any object = Assign a new value emplace<T>() Assign a new value having the type T reset() Destroys any value (makes the object empty) has_value() Returns whether the object has a value type() Returns the current type as std::type_info object any_cast<T>() Use current value as value of type T (exception if other type) swap() Swaps values between two objects
Construction By default, a std::any is initialized by being empty. std::any a1; // a1 is empty If a value is passed for initialization, its decayed type is used as type of the contained value:Josuttis: C++17 2019/02/16 18:57 page 167
a.emplace{std::in_place_type<std::string>, "hello"}; // a contains value of type std::stringJosuttis: C++17 2019/02/16 18:57 page 168
Move Semantics std::any also supports move semantics. However, note that move semantics is only supported for type that also have copy semantics. That is, move-only types are not supported as contained value types. The best way to deal with move semantics might not be obvious. So, here is how you should do it: std::string s("hello, world!");
std::any a; a = std::move(s); // move s into a
Note that: s = std::any_cast<string>(std::move(a)); also works, but needs an additional move. Directly casting to an rvalue reference does not compile: s = std::any_cast<std::string&&>(a); // compile-time error Note that instead of calling a = std::move(s); // move s into a the following might not always work (although it is an example inside the C++ standard): std::any_cast<string&>(a) = std::move(s); // OOPS: a must hold a string This only works if a already contains a value of type std::string. If not, the cast throws a std::bad_any_cast exception before we move assign the new value.
17.3 Afternotes Any objects were first proposed 2006 by Kevlin Henney and Beman Dawes in. link/n1939 referring to Boost.Any as a reference implementation. This class was adopted to be- come part of the Library Fundamentals TS as proposed by Beman Dawes, Kevlin Henney, and Daniel Krügler in. The class was adopted with other components for C++17 as proposed by Beman Dawes and Alis- dair Meredith in. Vicente J. Botet Escriba harmonized the API with std::variant<> and std::optional<> with. Jonathan Wakely fixed the behavior for in_place tag types with: C++17 2019/02/16 18:57 page 170
170
Chapter 18 std::byte
Programs hold data in memory. With std::byte C++17 introduces a type for it, which does rep- resent the “natural” type of the elements of memory, bytes. The key difference to types like char or int is that this type cannot (easily) be (ab)used as an integral value or character type. For cases, where numeric computing or character sequence are not the goal, this results in more type safety. The only “computing” operations supported are bit-wise operators.
std::byte b1{0x3F}; std::byte b2{0b1111’0000};
if (b1 == b3[0]) { b1 <<= 1; }
171Josuttis: C++17 2019/02/16 18:57 page 172
Note that list initialization (using curly braces) is the only way you can directly initialize a single value of a std::byte object. All other forms do not compile: std::byte b1{42}; // OK (as for all enums with fixed underlying type since C++17) std::byte b2(42); // ERROR std::byte b3 = 42; // ERROR std::byte b4 = {42}; // ERROR This is a direct consequence of the fact that std::byte is implemented as an enumeration type, using the new way scoped enumerations can be initialized with integral values. There is also no implicit conversion, so that you have to initialize the byte array with an explicitly converted integral literal: std::byte b5[] {1}; // ERROR std::byte b6[] {std::byte{1}}; // OK Without any initialization, the value of a std::byte is undefined for object on the stack: std::byte b; // undefined value As usual (except for atomics), you can force an initialization with all bits set to zero with list initiali- zation: std::byte b{}; // same as b{0} std::to_integer<>() provides the ability to use the byte object as integral value (including bool and char). Without the conversion, the output operator would not compile. Note that because it is a template you even need the conversion fully qualified with std::: std::cout << b1; // ERROR std::cout << to_integer<int>(b1); // ERROR (ADL doesn’t work here) std::cout << std::to_integer<int>(b1); // OK Such a conversion is also necessary to use a std::byte as a Boolean value. For example: if (b2) ... // ERROR if (b2 != std::byte{0}) ... // OK if (to_integer<bool>(b2)) ... // ERROR (ADL doesn’t work here) if (std::to_integer<bool>(b2)) ... // OK Because std::byte is define as enumeration type with unsigned char as the underlying type, the size of a std::byte is always 1: std::cout << sizeof(b); // always 1 The number of bits depends on the number of bits of type unsigned char, which you can find out with the standard numeric limits: std::cout << std::numeric_limits<unsigned char>::digits; // number of bits of a std::byte Most of the time it’s 8, but there are platforms where this is not the case.Josuttis: C++17 2019/02/16 18:57 page 173
template<typename IntType> constexpr IntType to_integer (byte b) noexcept; }
Operation Effect constructors Create a byte object (value undefined with default constructor) destructor Destroys a byte object (nothing to be done) = Assign a new value ==, !=, <, <=, >, >= Compares byte objects <<, >>, |, &, ^, ~ Binary bit-operations <<=, >>=, |=, &=, ^= Modifying bit-operations to_integer<T>() Converts byte object to integral type T sizeof() Yields 1
18.3 Afternotes std::byte was first proposed by Neil MacIntosh passing in. The finally accepted wording was formulated by Neil MacIntosh in.
1 With gcc/g++ narrowing initializations compile without the compiler option -pedantic-errors.Josuttis: C++17 2019/02/16 18:57 page 176
176
Chapter 19 String Views
With C++17, a special string class was adopted by the C++ standard library, that allows us to deal with character sequences like strings, without allocating memory for them: std::string_view. That is, std::string_view objects refer to external character sequences without owning them. That is, the object can be considered as a reference to a character sequence.
string_view:
len: 4 data:
s o m e d a t a i n m e m o r y
Using such a string view is cheap and fast (passing a string_view by value is always cheap). However, it is also potentially dangerous, because similar to raw pointers it is up to the programmer to ensure that the referred character sequences is still valid, when using a string_view).
177Josuttis: C++17 2019/02/16 18:57 page 178
• The character sequences are not guaranteed to be null terminated. So, a string view is not a null terminated byte stream (NTBS). • The value can be the nullptr, which for example is returned by data() after initializing a string view with the default constructor. • There is no allocator support. Due to the possible nullptr value and possible missing null terminator, you should always use size() before accessing characters via operator[] or data() (unless you know better).
template<typename T> void printElems(const T& coll, std::string_view prefix = std::string_view{}) { for (const auto& elem : coll) { if (prefix.data()) { // check against nullptr std::cout << prefix << ' '; } std::cout << elem << '\n';Josuttis: C++17 2019/02/16 18:57 page 179
} } Here, just by declaring that the function will take a std::string_view, we might save a call to allocate heap memory compared to a function taking a std::string. Details depend on whether short strings are passed and the short string optimization (SSO) is used. For example, if we declare the function as follows: template<typename T> void printElems(const T& coll, const std::string& prefix = std::string{}); and we pass a string literal, the call creates a temporary string which will allocate memory unless the string is short and the short string optimization is used. By using a string view instead, no allocation is needed, because the string view only refers to the string literal. However, note that data() has to be checked against the nullptr before using any unknown value of a string view. Another example, using a string_view like a read-only string, is an improved version of the asInt() example of std::optional<>, which was declared for a string parameter: lib/asint.cpp #include <optional> #include <string_view> #include <charconv> // for from_chars() #include <iostream>
int main() { for (auto s : {"42", " 077", "hello", "0x33"} ) { // try to convert s to int and print the result if possible: std::optional<int> oi = asInt(s); if (oi) {Josuttis: C++17 2019/02/16 18:57 page 180
std::cout << "convert '" << s << "' to int: " << *oi << "\n"; } else { std::cout << "can't convert '" << s << "' to int\n"; } } } Now, asInt() takes a string view by value. However, that has significant consequences. First, it does no longer make sense to use std::stoi() to create the integer, because stoi() takes a string and creating a string from a string view is a relative expensive operation. Instead, we pass the range of characters of the string view to the new standard library function std::from_chars(). It takes a pair of raw character pointers for the begin and end of the characters to convert. Note that this means that we can skip any special handling of an empty string view, where data() is nullptr and size() is 0, because the range from nullptr until nullptr+0 is a valid empty range (for any pointer type adding 0 is supported and has no effect). std_from_chars() returns a std::from_chars_result, which is a structure with two mem- bers, a pointer ptr to the first character that was not processed and a std::errc ec, for which std::errc represents no error. Thus, after initializing ec with the ec member of the return value (using structured bindings), the following check returns nullopt if the conversion failed: if (ec != std::errc{}) { return std::nullopt; } Using string views can also provide significant performance boosts when sorting substrings.
• Assigning the return value to a string reference is, if possible, pretty safe as long we use the object locally because references extend the lifetime of return values to the end of their lifetime: std::string& s2 = retString(); // Compile-Time ERROR (const missing)
// because: auto sub = substring("very nice", 5); // returns view to passed temporary string // but temporary string destructed after the call std::cout << sub << '\n'; // RUN-TIME ERROR: tmp string already de- structed
1 See for a discussion about this example.Josuttis: C++17 2019/02/16 18:57 page 182
// generic concatenation: template<typename T> T concat (const T& x, const T& y) { return x + y; } However, using them together again might easily result in fatal run-time error: std::string_view hi = "hi"; auto xy = concat(hi, hi); // xy is std::string_view std::cout << xy << '\n'; // FATAL RUN-TIME ERROR: referred string destructed Code like that can easily accidentally be written. The real problem here is the return type of concat(). If declaring its return type to be deduced by the compiler, the example above initializes xy as std::string: // improved generic concatenation: template<typename T> auto concat (const T& x, const T& y) { return x + y; }Josuttis: C++17 2019/02/16 18:57 page 183
Also, it is counter-productive to use string views in a chain of calls, where in the chain or at then end of it string are needed. For example, if you define class Person with the following constructor: class Person { std::string name; public: Person (std::string_view n) : name{n} { } ... }; Passing a string literals or string you still need is fine: Person p1{"Jim"}; // no performance overhead std::string s = "Joe"; Person p2{s}; // no performance overhead But moving in a string becomes unnecessary expensive, because the passed string is first implicitly converted to a string view, which is then used to create a new string allocating memory again: Person p3{std::move(s)}; // performance overhead: move broken Don’t deal with std::string_view here. Taking the parameter by value and moving it to the member is still the best solution. Thus, the constructor and getter should look as follows: class Person { std::string name; public: Person (std::string n) : name{std::move(n)} { } std::string getName() const { return name; } };
Operation Effect constructors Create or copy a string view destructor Destroys a string view = Assign a new value swap() Swaps values between two strings view ==, !=, <, <=, >, >=, compare() Compare string views empty() Returns whether the string view is empty size(), length() Return the number of characters max_size() Returns the maximum possible number of characters [], at() Access a character front(), back() Access the first or last character << Writes the value to a stream copy() Copies or writes the contents to a character array data() Returns the value as nullptr or constant character array (note: no terminating null character) find functions Search for a certain substring or character begin(), end() Provide normal iterator support cbegin(), cend() Provide constant iterator support rbegin(), rend() Provide reverse iterator support crbegin(), crend() Provide constant reverse iterator support substr() Returns a certain substring remove_prefix() Remove leading characters remove_suffix() Remove trailing characters hash<> Function object type to compute hash values
Construction You can create a string view with the default constructor, as a copy, from a raw character array (null terminated or with specified length), from a std::string, or as a literal with the suffix sv. However, note the following: • String views created with the default constructor have nullptr as data(). Thus, there is no valid call of operator[]. std::string_view sv; auto p = sv.data(); // yields nullptr std::cout << sv[0]; // ERROR: no valid character • When initializing a string view by a null terminated byte stream, the resulting size is the number of characters without ’\0’ and using the index of the terminating null character is not valid: std::string_view sv{"hello"}; std::cout << sv; // OKJosuttis: C++17 2019/02/16 18:57 page 186
std::string_view sv{s}; std::cout << sv.size(); // 5 std::cout << sv.at(5); // throws std::out_of_range exception std::cout << sv[5]; // undefined behavior, but HERE it usually works std::cout << sv.data(); // undefined behavior, but HERE it usually works • As the literal operator is defined for the suffix sv, you can also create a string view as follows: using namespace std::literals; auto s = "hello"sv; The key point here is that in general you should not expect the null terminating character and always use size() before accessing the characters (unless you know specific things about the value). As a workaround you can make ’\0’ part of the string view, but you should not use a string view as a null terminates string without the null terminator being part of it, even if the null terminator is right behind.2
2 Unfortunately, people are already starting to propose new C++ standards based on this strange state (having a string view without null terminator in front of a null terminator, which is not part of the string view itself). See for an example.Josuttis: C++17 2019/02/16 18:57 page 187
Hashing The C++ standard library guarantees that hash values for strings and strings views are equal.
3 In principle, we could standardize an operation to concatenate string views yielding a new string, but so far this is not provided.Josuttis: C++17 2019/02/16 18:57 page 188
• You can pass a string view to std::quoted, which prints its value quoted. For example: using namespace std::literals;
}; This constructor has its drawbacks. Initializing a person with a string literal creates one unnecessary copy, which might cause an unnecessary request for heap memory. For example: Person p("Aprettylong NonSSO Name"); first calls the std::string constructor to create the temporary parameter n, because a reference of type std::string is requested. If the string is long or no short string optimization is enabled4 this means that for the string value heap memory is allocated. Even with move semantics the temporary string is then copied to initialize the member name, which means that again memory is allocated. You can avoid this overhead only by adding more constructor overloads or introducing a template constructor, which might cause other problems. If instead we are using a string view, the performance is better: class Person { std::string name; public: Person (std::string_view n) : name(n) { } ... }; Now a temporary string view n gets created, which does not allocate memory at all, because the string view only refers to the characters of the string literal. Only the initialization of name allocates once the memory for the member name. However, there is a problem: If you pass a temporary string or a string marked with std::move() the string is converted to type a string view (which is cheap) and then the string view is used to allocate the memory for the new string (which is expensive). In other words: The use of string view disables move semantics unless you provide an additional overload for it. There is still a clear recommendation for how to initialize objects with string members: Take the string by value and move: class Person { std::string name; public: Person (std::string n) : name(std::move(n)) { } ... }; We anyway have to create a string. So, creating it as soon as possible allows us to benefit from all possible optimizations the moment we pass the argument. And when we have it, we only move, which is a cheap operation. If we initialize the string by a helper function returning a temporary string:
4 With the often implemented small string optimization strings only allocate memory if they have more than say 15 characters.Josuttis: C++17 2019/02/16 18:57 page 190
std::string newName() { ... return std::string{...}; }
Person p{newName()}; the mandatory copy elision will defer the materialization of a new string until the value is passed to the constructor. There we have a string named n so that we have an object with a location (a glvalue). The value of this object is then moved to initialize the member name. This example again demonstrates: • String views are not a better interface for taking strings. • In fact, string views should only be used in call chains, where they never have to be used as strings.
Beside the optimization to get the passed string value for the prefix as a std::string_view by value, we can also use a string view here internally. But only because the C-string returned by ctime() is valid for a while (it is valid until the next call of ctime() or asctime()). Note that we can remove the trailing newline from the string, but that we can’t concatenate both string views by simply calling operator+. Instead, we have to convert one of the operands to a std::string (which unfortunately unnecessarily might allocate additional memory).
19.6 Afternotes The first string class with reference semantics was proposed by Jeffrey Yasskin in. link/n3334 (using the name string_ref). This class was adopted as part of the Library Funda- mentals TS as proposed by Jeffrey Yasskin in. The class was adopted with other components for C++17 as proposed by Beman Dawes and Al- isdair Meredith in. Some modifications for better integration were added by Marshall Clow in and in and by Nicolai Josuttis in. Additional fixes by Daniel Krügler are in (which will probably come as a defect against C++17).Josuttis: C++17 2019/02/16 18:57 page 192
192
Chapter 20 The Filesystem Library
With C++17 the Boost.filesystem library was finally adopted as a C++ standard library. By doing this, the library was adjusted to new language features, made more consistent with other parts of the library, cleaned-up, and extended to provided some missing pieces (such as operations to compute a relative path between filesystem paths).
193Josuttis: C++17 2019/02/16 18:57 page 194
} else if (is_directory(p)) { // is path p a directory? std::cout << p << " is a directory containing:\n"; for (auto& e : std::filesystem::directory_iterator{p}) { std::cout << " " << e.path() << '\n'; } } else if (exists(p)) { // does path p actually exist? std::cout << p << " is a special file\n"; } else { std::cout << "path " << p << " does not exist\n"; } } We first convert the any passed command-line argument to a filesystem path: std::filesystem::path p{argv[1]}; // p represents a filesystem path (might not exist) Then, we perform the following checks: • If the path represents an existing regular file, we print its size: if (is_regular_file(p)) { // is path p a regular file? std::cout << p << " exists with " << file_size(p) << " bytes\n"; } Calling this program as follows: checkpath checkpath.cpp will output something like: "checkpath.cpp" exists with 907 bytes Note that the output operator for paths automatically writes the path name quoted (within double quotes and backslashes are escaped by another backslash, which is an issue for Windows paths). • If the filesystem path exists as a directory, we iterate over the files in the directory and print the paths: if (is_directory(p)) { // is path p a directory? std::cout << p << " is a directory containing:\n"; for (auto& e : std::filesystem::directory_iterator(p)) { std::cout << " " << e.path() << '\n'; } } Here we use a directory_iterator, which provides begin() and end() in a way that we can iterate over directory_entry elements using a range-based for loop. In this case we use the directory_entry member function path(), which yields the filesystem path of the entry. Calling this program as follows:Josuttis: C++17 2019/02/16 18:57 page 195
checkpath . will output something like: "." is a directory containing: "./checkpath.cpp" "./checkpath.exe" ... • Finally, we check, whether the passed filesystem path exists at all: if (!exists(p)) { // does path p actually exist? ... }
namespace fs = std::filesystem;
Namespace fs First, we do something very common: Define fs as shortcut for namespace std::filesystem: namespace fs = std::filesystem; Using this namespace we initialize, for example, the path p in the switch statement: fs::path p{argv[1]}; The switch statement is an application of the new switch with initialization, where we initialize the path and provide different cases for its type: switch (fs::path p{argv[1]}; status(p).type()) { ... } The expression status(p).type() creates a file_status, for which type() creates a file_type. This way we can just directly handle the different types instead of following a chain of calls like is_regular_file(), is_directory(), and so on. Providing the type is intentionally provided in multiple steps so that we don’t have to pay the price of operating system calls if we are not interested in status information. Note also that implementation-specific file_type might exist. For example, Windows provides the special file type junction. However using it is not portable.Josuttis: C++17 2019/02/16 18:57 page 198
int main () { namespace fs = std::filesystem; try { // create directories tmp/test/ (if they don’t exist yet): fs::path testDir{"tmp/test"}; create_directories(testDir);
Namespace fs First, we do something very common: Define fs as shortcut for namespace std::filesystem: namespace fs = std::filesystem; Using this namespace we initialize, for example, the paths for a basic subdirectory for temporary files: fs::path testDir{"tmp/test"};
Creating Directories Then we try to create the subdirectory: create_directories(testDir); By using create_directories() we create all missing directories of the whole passed path (there is also create_directory() to create a directory only inside an existing directory). It is not an error to perform this call if the directory already exists. However, any other problem is an error and raises a corresponding exception. If testDir already exists, create_directories() returns false. Thus, you could also call:
if (!create_directories(testDir)) { std::cout << "\"" << testDir.string() << "\" already exists\n"; } However, note that it is also not an error, if testDir exists but is not a directory. Thus, returning true does not mean that after the call there is a directory with the requested name. We could check that, but in this case this is indirectly covered, because the next call to create a file in the directory will fail then. However, the error message might be confusing. To get a better error message, you might want to check whether there is really a directory afterwards.
streams library. However, a new overload for the constructors is provided to be able to directly pass a filesystem path. Note that you should still always check, whether creating/opening the file was successful. A lot of things can go wrong here (see below).
tmp tmp\slink tmp\slink\data.txt tmp\test tmp\test\data.txt ... Note that we use lexically_normal() when we print the path of all directory entries. If we skip that, the path of the directory entries would contain a prefix with the directory the iterator was initialized with. Thus, printing just the paths inside the loop: auto iterOpts = fs::directory_options::follow_directory_symlink; for (auto& e : fs::recursive_directory_iterator(".", iterOpts)) { std::cout << " " << e.path() << '\n'; } would output under POSIX-based systems: all files: ... "./testdir" "./testdir/data.txt" "./tmp" "./tmp/test" "./tmp/test/data.txt" And on Windows the output would be: all files: ... ".\\testdir" ".\\testdir\\data.txt" ".\\tmp" ".\\tmp\\test" ".\\tmp\\test\\data.txt" Thus, by calling lexically_normal() we yield the normalized path, which does remove the lead- ing dot for the current directory. And as written before, by calling string() we avoid that each path is written quoted, which would be OK for POSIX-based systems (just having the name in double quotes), but would look very surprising on Windows systems (because each backslash is escaped by another backslash).
Error Handling Filesystems are source of trouble. You might not be able to perform operations because of using wrong characters, not having necessary permissions, or other processes might modify the filesystem while you are dealing with it. Thus, depending on the platform and permissions a couple of things can go wrong in this program.Josuttis: C++17 2019/02/16 18:57 page 202
For those cases not covered by return values (here the case that the directory already exists), we catch the corresponding exception and print the general message and the first path in it: try { ... } catch (fs::filesystem_error& e) { std::cerr << "EXCEPTION: " << e.what() << '\n'; std::cerr << " path1: \"" << e.path1().string() << "\"\n"; } For example, if we can’t create the directory, a message such as this might get printed: EXCEPTION: filesystem error: cannot create directory: [tmp/test] path1: "tmp/test" Or if we can’t create the symbolic link because, for example, it already exists, you get something like the following message: EXCEPTION: create_directory_symlink: Can’t create a file when it already exists: "tmp\test\data.txt", "testdir" path1: "tmp\test\data.txt" As written already, when the directory already exists as a regular file, the trial to create a new file in the directory will fail. For this reason, don’t forget to check the state of an opened file. The I/O Stream library used to read and write regular files does not handle errors as exceptions by default. In any case note that the situation in a multi-user/multi-process operating system can change at any time. So it might even happen that your created directory is removed, renamed, or replaced by a regular file after you created it. So it is simply not possible to ensure the validity of a future request by finding out the current situation. For this reason, it usually is the best approach to try to do what you want (i.e., create a directory, open a file) and process exceptions and errors or validate check the expected behavior. However, sometimes the trial to do something with the filesystem might work, but not the way you had in mind. For example, if you want to create a file in a specific directory and there already exists a symbolic link to another directory, the file gets created or overwritten at an unexpected location. This might be OK (the user might have a good reason to create a symbolic link where a directory was expected). But if you want to detect that situation, you have to check for the existence of a file (which is a bit more complicated than you might think at first) before you create something. But again: There is no guarantee that results of filesystem checks are still valid when you process them.
20.2.2 Namespace The filesystem library has its own sub-namespace filesystem inside std. It is a pretty common convention to introduce the shortcut fs for it: namespace fs = std::filesystem; This, for example, enables to use fs::current_path() instead of std::filesystem::current_path(). Further code examples of this chapter will often use fs as the corresponding shortcut. Note that not qualifying filesystem calls sometimes results in unintended behavior.
20.2.3 Paths The key element of the filesystem library is a path. It is a name that represents the (potential) location of a file within a filesystem. It consists of an optional root name, an optional root directory, and a sequence of filenames separated by directory separators. The path can be relative (so that the file location depends on the current working directory) or absolute. Different formats are possible: • A generic format, which is portable • A native format, which is specific to the underlying file systemJosuttis: C++17 2019/02/16 18:57 page 204
On POSIX-based operating systems there is not difference between the generic and the native format. On Windows the generic format /tmp/test.txt is a valid native format besides \tmp\test.txt, which is also supported (thus, /tmp/test.txt and \tmp\test.txt are two native versions of the same path). On OpenVMS the corresponding native format might be [tmp]test.txt. Special filenames exist: • "." represents the current directory • ".." represents the parent directory The generic path format is as follows:
where: • The optional root name is implementation specific (e.g., it can be //host on POSIX systems and C: on Windows systems) • The optional root directory is a directory separator • The relative path is a sequence of file names separated by directory separators By definition, a directory separators consists of one or multiple ’/’ or implementation-specific pre- ferred directory separator. Examples for portable generic paths are: //host1/bin/hello.txt . tmp/ /a/b//../c Note that the last path refers to the same location as /a/c and is absolute on POSIX systems but relative on Windows systems (because the drive/partition is missing). On the other hand a path such as C:/bin is an absolute path on Windows systems (the root directory "bin" on the "C" drive/partition) but a relative path on POSIX (the subdirectory "bin" in the directory "C:"). On Windows systems the backslash is the implementation specific directory separator so that the paths above can there also be written by using the backslash as the preferred directory separator: \\host1\bin\hello.txt . tmp\ \a\b\..\c The filesystem library provides function to convert paths between the native and generic format. A path might be empty. This mean that there is no path defined. This is not necessarily the same as ".". What it means depends on the context.Josuttis: C++17 2019/02/16 18:57 page 205
20.2.4 Normalization A path might be or can get normalized. In a normalized path: • Filenames are separated only by a single preferred directory separator. • The filename "." is not used unless the whole path is nothing but "." (representing the current directory). • The filename does not contain ".." filenames (we don’t go down and then up again) unless they are at the beginning of a relative path. • The path only ends with a directory separator if the trailing filename is a directory with a name other than "." or "..". Note that normalization still means that a filename ending with a directory separator is different from a filename not ending with a separator. The reason is that on some operating systems the behavior differs when it is known that the path is a directory (e.g., with a trailing separator symbolic links might get resolved). Table Effect of Path Normalization lists some examples for normalization on POSIX and Windows systems. Note again that on POSIX system C:bar and C: are just filenames and have no special meaning to specify a partition as on Windows.
Note that the path C:\bar\.. remains the same when being normalized on a POSIX-based sys- tem. The reason is that there the backslash is no directory separator so that the whole path is just one filename having a colon, two backslashes, and two dots as part of its name. The filesystem provides function for both lexical normalization (not taking the filesystem into account) and filesystem-dependent normalization.
For example: mypath.is_absolute() // check whether path is absolute or relative • Free-standing functions are expensive, because they usually take the actual filesystem into ac- count, so that no operating systems calls are necessary. For example: equivalent(path1, path2); // true if both paths refer to the same file Sometimes, the filesystem library even provides the same functionality operating both lexically and by taking the actual filesystem into account: std::filesystem::path fromP, toP; ... toP.lexically_relative(fromP); // yield lexical path from fromP to toP relative(toP, fromP); // yield actual path from fromP to toP Thanks to argument dependent lookup (ADL) usually you don’t have to specify the full namespace std::filesystem, when calling free-standing filesystem functions and an argument has a filesys- tem specific type. Only when implicit conversions from other types are used, you have to qualify the call. For example: create_directory(std::filesystem::path{"tmpdir"}); // OK remove(std::filesystem::path{"tmpdir"}); // OK std::filesystem::create_directory("tmpdir"); // OK std::filesystem::remove("tmpdir"); // OK create_directory("tmpdir"); // ERROR remove("tmpdir"); // OOPS: calls C function remove() Note that the last call usually compiles, but finds the C function remove() which also removes a specified file, but does not remove empty directories under Windows.
Value Meaning regular Regular file directory Directory file symlink Symbolic link file character Character special file block Block special file fifo FIFO or pipe file socket Socket file ... additional implementation-defined file type none The type of the file is not known (yet) unknown The file exists but the type could not be determined not_found Pseudo-type indicating the file was not found
Beside regular files and directories the most common other type is a symbolic link, which is a type for files that refer to another filesystem location. At that location there might be a file or there might not. Note that some operating systems and/or file systems (e.g., the FAT file system) don’t support symbolic links at all. Some operating systems support them only for regular files. Note that on Windows you need special permissions to create symbolic links, which you for example, can do with the mklink command. Character-special files, block-special files, FIFOs, and sockets come from the UNIX filesystem. Currently, all four types are not used with Visual C++.1 As you can see, special values exist for cases when the file doesn’t exists or its file type is not known or detectable. In the remainder of this chapter I use two general categories representing a couple of file types: • Other files: Files with any file type other than regular file, directory, and symbolic link. The library function is_other() matches this term. • Special files: Files with any of the following file types: Character-special files, block-special files, FIFOs, and sockets. The special file types plus the implementation-defined file types together form the other file types.
You can create path, inspect them, modify them, and compare them. Because these operations usually do not take the filesystem into account (care for existing files, symbolic links, etc.), they are cheap to call. As a consequence, they are usually member functions (if they are no constructors or operators).
Call Effect path(string) creates path from a string path(beg,end) creates path from a range u8path(u8string) creates path from a UTF-8 string current_path() yields the path of the current working directory temp_directory_path() yields the path for temporary files
Note that both current_path() and temp_directory_path() are more expensive operations because they are based on operating system calls. By passing an argument current_path() can also be used to modify the current working directory. With u8path() you can create portable paths using all UTF-8 characters. For example: std::filesystem::path{u8path(u8"K\u00F6ln"); // ”Köln” (Cologne native) ...
Call Effect p.empty() yields whether a path is empty p.is_absolute() yields whether a path is absolute p.is_relative() yields whether a path is relative p.has_filename() yields whether a path neither a directory nor a root name p.has_stem() same as has_filename() (as any filename has a stem) p.has_extension() yields whether a path has an extension p.has_root_name() yields whether a path has a root name p.has_root_directory() yields whether a path has a root directory p.has_root_path() yields whether a path has a root name or a root directory p.has_parent_path() yields whether a path has a parent path p.has_relative_path() yields whether a path does not only consist of root elements p.filename() yields the filename (or the empty path) p.stem() yields the filename without extension (or the empty path) p.extension() yields the extension (or the empty path) p.root_name() yields the root name (or the empty path) p.root_directory() yields the root directory (or the empty path) p.root_path() yields the root elements (or the empty path) p.parent_path() yields the parent path (or the empty path) p.relative_path() yields the path without root elements (or the empty path) p.begin() begin of a path iteration p.end() end of a path iteration
• on Unix systems – is relative – has no root elements (neither a root name nor a root directory), because C: is a filename. – has the parent path C: – has the relative path C:/hello.txt • on Windows systems – is absolute – has the root name C: and the root directory /
2 This has changed with C++17 because before a filename could consist of a pure extension.Josuttis: C++17 2019/02/16 18:57 page 212
Path Iteration You can iterate over a path, which yields the elements of the path: the root name if any, the root directory if any, and all the filenames. If the path ends with a directory separator, the last element is an empty filename.3 The iterator is a bidirectional iterator so that you can use --. The values the iterators refer to are of type path again. However, two iterators iterating over the same path might not refer to the same path object even if they refer to the same element. For example: void printPath(const std::filesystem::path& p) { std::cout << "path elements of " << p.string << ":\n"; for (auto pos = p.begin(); pos != p.end(); ++pos) { std::filesystem::path elem = *pos; std::cout << " " << elem; } std::cout << '\n'; } If this function is called as follows: printPath("../sub/file.txt"); printPath("/usr/tmp/test/dir/"); printPath("C:\\usr\\tmp\\test\\dir\\"); the output on a POSIX-based system will be: path elements of "../sub/file.txt": ".." "sub" "file.txt" path elements of "/usr/tmp/test/dir/": "/" "usr" "tmp" "test" "dir" "" path elements of "C:\\usr\\tmp\\test\\dir\\": "C:\\usr\\tmp\\test\\dir\\" Note that the last path is just one filename, because neither C: is a valid root name nor is the backslash a valid directory separator under POSIX-based systems. The output on a Windows system will be: path elements of "../sub/file.txt":
3 Before C++17, the filesystem library implementations used . to signal a trailing directory separator. This has changed to be able to distinguish a path ending with a separator from a path ending with a dot after the separator.Josuttis: C++17 2019/02/16 18:57 page 213
Call Effect strm << p write the value of a path as quoted string strm >> p reads the value of a path as quoted string p.string() yields the path as a std::string p.wstring() yields the path as a std::wstring p.u8string() yields the path as a UTF-8 string of type std::u8string p.u16string() yields the path as a UTF-16 string of type std::u16string p.u32string() yields the path as a UTF-32 string of type std::u32string p.string<...>() yields the path as a std::basic_string<...> p.lexically_normal() yields p as normalized path p.lexically_relative(p2) yields the path from p2 to p (empty path if none) p.lexically_proximate(p2) yields the path from p2 to p (p if none)
The lexically_...() functions return a new path, while the other conversion functions yield a corresponding string type. None of these functions modifies the path they are called for. For example, the following code: std::filesystem::path p{"/dir/./sub//sub1/../sub2"}; std::cout << "path: " << p << '\n'; std::cout << "string(): " << p.string() << '\n'; std::wcout << "wstring(): " << p.wstring() << '\n'; std::cout << "lexically_normal(): " << p.lexically_normal() << '\n'; has the same output for the first three rows: path: "/dir/./sub//sub1/../sub2"Josuttis: C++17 2019/02/16 18:57 page 214
string(): /dir/./sub//sub1/../sub2 wstring(): /dir/./sub//sub1/../sub2 but the output for the last row depends on the directory separator. On POSIX-based systems it is: lexically_normal(): "/dir/sub/sub2" while on Windows it is: lexically_normal(): "\\dir\\sub\\sub2"
Path I/O First, note that the I/O operators write and read paths as quoted strings. You have to convert them to a string to write them without quotes: std::filesystem::path file{"test.txt"} std::cout << file << '\n'; // writes: "test.txt" std::cout << file.string() << '\n'; // writes: test.txt On Windows this has even worse effects. The following code: std::filesystem::path tmp{"C:\\Windows\\Temp"}; std::cout << tmp << '\n'; std::cout << tmp.string() << '\n'; std::cout << '"' << tmp.string() << "\"\n"; has the following output: "C:\\Windows\\Temp" C:\Windows\Temp "C:\Windows\Temp" Note that reading filenames supports both forms (quoted with a leading " and non-quoted). Thus, all printed forms will be read correctly back using the standard input operator for paths: std::filesystem::path tmp; std::cin >> tmp; // reads quoted and non-quoted paths correctly
Normalization Normalization might have more surprising outcomes when you deal with portable code. For exam- ple: std::filesystem::path p2{"//dir\\subdir/subsubdir\\/./\\"}; std::cout << "p2: " << p2 << '\n'; std::cout << "lexically_normal(): " << p2.lexically_normal() << '\n'; has the following probably expected output on Windows systems: p2: "//host\\dir/sub\\/./\\" lexically_normal(): "\\\\host\\dir\\sub\\"Josuttis: C++17 2019/02/16 18:57 page 215
Relative Path Both lexically_relative() and lexically_proximate() can be called to compute the relative path between to paths. The only difference is the behavior if there is no path, which only can happen if one path is relative and the other is absolute or the root names differ. In that case: • p.lexically_relative(p2) yields the empty path if there is no relative path from p2 to p. • p.lexically_proximate(p2) yields p if there is no relative path from p2 to p. As both operations operate lexically, the actual filesystem (with possible symbolic links) and the current_path() are not taken into account. If both paths are equal, the relative path is ".". For example: fs::path{"/a/d"}.lexically_relative("/a/b/c") // "../../d" fs::path{"/a/b/c"}.lexically_relative("/a/d") // "../b/c" fs::path{"/a/b"}.lexically_relative("/a/b") // "." fs::path{"/a/b"}.lexically_relative("/a/b/") // "." fs::path{"/a/b"}.lexically_relative("/a/b\\") // "." fs::path{"/a/b"}.lexically_relative("/a/d/../c") // "../b fs::path{"a/d/../b"}.lexically_relative("a/c") // "../d/../b" fs::path{"a//d/..//b"}.lexically_relative("a/c") // "../d/../b" On Windows systems, we have: fs::path{"C:/a/b"}.lexically_relative("c:/c/d") ; // "" fs::path{"C:/a/b"}.lexically_relative("D:/c/d") ; // "" fs::path{"C:/a/b"}.lexically_proximate("D:/c/d") ; // "C:/a/b"
Conversions to Strings With u8string() you can use the path as UTF-8 string, which is nowadays the common format for stored data. For example: // store paths as UTF-8 string: std::vector<std::string> utf8paths; // std::u8string with C++20 for (const auto& entry : fs::directory_iterator(p)) {Josuttis: C++17 2019/02/16 18:57 page 216
utf8paths.push_back(entry.path().u8string()); } Note that the return value of u8string() will probably change from std::string to std::u8string with C++20 (the new UTF-8 string type as proposed together with char8_t for UTF-8 characters in).4 The member template string<>() can be used to convert to a special string type, such as a string type that operates case insensitively: struct ignoreCaseTraits : public std::char_traits<char> { // case-insensitively compare two characters: static bool eq(const char& c1, const char& c2) { return std::toupper(c1) == std::toupper(c2); } static bool lt(const char& c1, const char& c2) { return std::toupper(c1) < std::toupper(c2); } // compare up to n characters of s1 and s2: static int compare(const char* s1, const char* s2, std::size_t n); // search character c in s: static const char* find(const char* s, std::size_t n, const char& c); };
std::filesystem::path p{"/dir\\subdir/subsubdir\\/./\\"}; icstring s2 = p.string<char,ignoreCaseTraits>(); Note also that you should not use a function c_str(), which is also provided, because it con- verts to the native string format, which might be a wchar_t so that you, for example, have to use std::wcout instead of std::cout to write it to a stream.
4 Thanks to Tom Honermann for pointing this out and the proposed change (it is really important that C++ gets real UTF-8 support).Josuttis: C++17 2019/02/16 18:57 page 217
Call Effect p.generic_string() yields the path as a generic std::string p.generic_wstring() yields the path as a generic std::wstring p.generic_u8string() yields the path as a generic std::u8string p.generic_u16string() yields the path as a generic std::u16string p.generic_u32string() yields the path as a generic std::u32string p.generic_string<...>() yields the path as a generic std::basic_string<...> p.native() yields the path in the native format of type path::string_type conversionToNativeString implicit conversion to the native string type p.c_str() yields the path as a character sequence in the native string format p.make_preferred() replaces directory separators in p by native format and yields the modified p
• native() yields the path converted to the native string encoding, which is defined by the type std::filesystem::path::string_type. Under Windows this type is type std::wstring, so that you have to use std::wcout instead of std::cout to directly write it to the standard output stream. New overloads allow us to pass the native string to new overloads of file streams. • c_str() does the same but yields the result as a null terminated character sequence. Note that using this function also is not portable, because printing the sequence with std::cout does not result in the correct output on Windows. You have to use std::wcout there. • make_preferred() replaces any directory separator except for the root name by the native di- rectory separator. Note that this is the only function that modifies the path it is called for. Thus, strictly speaking belongs to the next section of modifying path function, but because it deals with the conversions for the native format it is also listed here. For example, under Windows the following code: std::filesystem::path p{"/dir\\subdir/subsubdir\\/./\\"}; std::cout << "p: " << p << '\n'; std::cout << "string(): " << p.string() << '\n'; std::wcout << "wstring(): " << p.wstring() << '\n'; std::cout << "lexically_normal(): " << p.lexically_normal() << '\n'; std::cout << "generic_string(): " << p.generic_string() << '\n'; std::wcout << "generic_wstring(): " << p.generic_wstring() << '\n'; // because it’s Windows and the native string type is wstring: std::wcout << "native(): " << p.native() << '\n'; // Windows! std::wcout << "c_str(): " << p.c_str() << '\n'; std::cout << "make_preferred(): " << p.make_preferred() << '\n'; std::cout << "p: " << p << '\n'; has the following output: p: "/dir\\subdir/subsubdir\\/./\\"Josuttis: C++17 2019/02/16 18:57 page 218
string(): /dir\subdir/subsubdir\/./\ wstring(): /dir\subdir/subsubdir\/./\ lexically_normal(): "\\dir\\subdir\\subsubdir\\" generic_string(): /dir/subdir/subsubdir//.// generic_wstring(): /dir/subdir/subsubdir//.// native(): /dir\subdir/subsubdir\/./\ c_str(): /dir\subdir/subsubdir\/./\ make_preferred(): "\\dir\\subdir\\subsubdir\\\\.\\\\" p: "\\dir\\subdir\\subsubdir\\\\.\\\\" Note again: • The native string type is not portable. On Windows it is a wstring, on POSIX-based systems it is a string, so that you would have to use cout instead of wcout to print the result of native() and c_str(). Using wcout is only portable for the return value of wstring() and generic_wstring(). • Only the call of make_preferred() modifies the path it is called for. All other calls leave p unaffected.
Call Effect p = p2 assign a new path p = sv assign a string (view) as a new path p.assign(p2) assign a new path p.assign(sv) assign a string (view) as a new path p.assign(beg, end) assign elements of the range from beg to end to the path p1 / p2 yields the path concatenating p2 as sub-path of p1 p /= sub appends sub as sub-path to path p p.append(sub) appends sub as sub-path to path p p.append(beg, end) appends elements of the range from beg to end as sub-paths to path p p += str appends the characters of str to path p p.concat(sub) appends the characters of str to path p p.concat(beg, end) appends elements of the range from beg to end to path p p.remove_filename() remove a trailing filename from the path p.replace_filename(repl) replace the trailing filename (if any) p.replace_extension() remove any trailing filename extension p.replace_extension(repl) replace the trailing filename extension (if any) p.clear() make the path empty p.swap(p2) swap the values of two paths swap(p1, p2) swap the values of two paths p.make_preferred() replaces directory separators in p by native format and yields the modified p
The function make_preferred() converts the directory separators inside a path to the native format. For example: std::filesystem::path p{"//server/dir//subdir///file.txt"}; p.make_preferred(); std::cout << p << '\n'; writes on POSIX-based platforms: "//server/dir/subdir/file.txt" On Windows, the output is as follows: "\\\\server\\dir\\\\subdir\\\\\\file.txt" Note that the leading root name is not modified because it has to consist of two slashes or backslashes. Note also that this function can’t convert backslashes to a slash on a POSIX-based system, because the backslash is not recognized as a directory separator. replace_extension() replaces, adds, or removes an extension: • If the file has an extension it is replaced • If the file has no extension, the new extension is added. • If you skip the new extension or the new extension is empty, any existing extension is removed. It doesn’t matter whether you place a leading dot in the replacement. The function ensures that there is exactly one dot between the stem and the extension of the resulting filename. For example: fs::path{"file.txt"}.replace_extension("tmp") // file.tmp fs::path{"file.txt"}.replace_extension(".tmp") // file.tmp fs::path{"file.txt"}.replace_extension("") // file fs::path{"file.txt"}.replace_extension() // file fs::path{"dir"}.replace_extension("tmp") // dir.tmp fs::path{".git"}.replace_extension("tmp") // .git.tmp Note that filenames that are “pure extensions” (such as .git) don’t count as extensions.5
5 This has changed with C++17. Before C++17, the result of the last statement would have been .tmp.Josuttis: C++17 2019/02/16 18:57 page 221
Call Effect p1 == p2 yields whether two paths are equal p1 != p2 yields whether two paths are not equal p1 < p2 yields whether a paths is less than another p1 <= p2 yields whether a path is less or equal than another p1 >= p2 yields whether a path is greater or equal than another p1 > p2 yields whether a path is greater than another p.compare(p2) yields whether p2 is less, equal, or greater than p p.compare(sv) yields whether p2 is less, equal, or greater than the string (view) sv converted to a path equivalent(p1, p2) expensive path comparison taking the filesystem into account
• Only different formats of specifying a directory separator are detected. Thus, the following paths are all equal (provided the backslash is a valid directory separator): tmp1/f /tmp1//f /tmp1\f tmp1/\/f Only if you call lexically_normal() for each path, all of the paths above are equal (provided the backslash is a valid directory separator). For example: std::filesystem::path p1{"tmp1/f"}; std::filesystem::path p2{"./tmp1/f"};
p1 == p2 // true p1.compare(p2) // not 0 p1.lexically_normal() == p2.lexically_normal() // true p1.lexically_normal().compare(p2.lexically_normal()) // 0 If you want to take the filesystem into account so that symbolic links are correctly handled, you can use equivalent(). Note, however, that this function requires that both paths represent existing files. Thus, a generic way to compare paths as accurate as possible (but not having the best performance) is as follows: bool pathsAreEqual(const std::filesystem::path& p1, const std::filesystem::path& p2) { return exists(p1) && exists(p2) ? equivalent(p1, p2) : p1.lexically_normal() == p2.lexically_normal(); }Josuttis: C++17 2019/02/16 18:57 page 222
Call Effect p.hash_value() yields the hash value for a path
Note that only equal paths have the same hash_value. That is the following paths yield different hash values: tmp1/f ./tmp1/f tmp1/./f tmp1/tmp11/../f For this reason, you might want to normalize paths before you put them in a hash table.
Call Effect exists(p) yields whether there is a file to open is_symlink(p) yields whether the file p exists and is a symbolic link is_regular_file(p) yields whether the file p exists and is a regular file is_directory(p) yields whether the file p exists and is a directory is_other(p) yields whether the file exists p and is neither regular nor a direc- tory nor a symbolic link is_block_file(p) yields whether the file p exists and is a block special file is_character_file(p) yields whether the file p exists and is a character special file is_fifo(p) yields whether the file p exists and is FIFO or pipe file is_socket(p) yields whether the file p exists and is a socket
Call Effect is_empty(p) yields whether the file is empty file_size(p) yields the size of a file hard_link_count(p) yields the number of hard links last_write_time(p) yields the timepoint of the last write to a file
Note that there is a difference whether a path is empty and whether the file specified by a path is empty: p.empty() // true if path p is empty (cheap operation) is_empty(p) // true if file at path p is empty (filesystem operation) file_size(p) returns the size of file p in bytes if it exists as regular file (as if the member st_size of the POSIX function stat(). For all other files the result is implementation-defined and not portable. hard_link_count(p) returns the number of times a file exists in a file system. Usually this number is 1, but on some file systems the same file can exist at different locations in the file system (i.e., has different paths). This is different from a symbolic link where a file refers to another file. Here we have a file with different path to access it directly. Only if the last hard link is removed, the file itself is removed.
} which might output: "fileattr.cpp" is 4 Seconds old. Instead of std::filesystem::file_time_type::clock::now() in this example, you could also write: decltype(filetime)::clock::now() Note that the clock used by filesystem timepoint is not guaranteed to be the standard system_clock. For this reason, there is currently no standardized support to convert the filesystem timepoint into type time_t to use it as absolute time in strings or output.7 There is a workaround, though. The following function “roughly” converts a timepoint of any clock to a time_t object: template<typename TimePoint> std::time_t toTimeT(TimePoint tp) { using system_clock = std::chrono::system_clock; return system_clock::to_time_t(system_clock::now() + (tp - decltype(tp)::clock::now())); } The trick is to compute the time of the filesystem timepoint as duration relative to now and then add this difference to the current time of the system clock. This function is not exact because both clocks might have different resolutions and we call now() twice at slightly different times. However, in general, this works pretty well. For example, for a path p we can call: auto ftime = last_write_time(p); std::time_t t = toTimeT(ftime); // convert to calendar time (including skipping trailing newline): std::string ts = ctime(&t); ts.resize(ts.size()-1); std::cout << "last access of " << p << ": " << ts << '\n'; which might print: last access of "fileattr.exe": Sun Jun 24 10:41:12 2018 To format a string the way we want we can call: std::time_t t = toTimeT(ftime); char mbstr[100]; if (std::strftime(mbstr, sizeof(mbstr), "last access: %B %d, %Y at %H:%M\n", std::localtime(&t))) {
Call Effect status(p) yields the file_status of the file p (following symbolic links) symlink_status(p) yields the file_status of p (not following sym- bolic links) p
The difference is that if the path p resolves in a symbolic link status() follows the link and prints the attributes of the file there (the status might be that there is no file), while symlink_status(p) prints the status of the symbolic link itself. Table file_status Operations lists the possible calls for a file_status object fs.
Call Effect exists(fs) yields whether a file exists is_regular_file(fs) yields whether the file exists and is a regular file is_directory(fs) yields whether the file exists and is a directory is_symlink(fs) yields whether the file exists and is a symbolic link is_other(fs) yields whether the file exists and is neither regular nor a directory nor a symbolic link is_character_file(fs) yields whether the file exists and is a character special file is_block_file(fs) yields whether the file exists and is a block special file is_fifo(fs) yields whether the file exists and is FIFO or pipe file is_socket(fs) yields whether the file exists and is a socket fs.type() yields the file_type of the file fs.permissions() yields the permissions of the file
One benefit of the status operations is that you can save multiple operating system calls for the same file. For example, instead of if (!is_directory(path)) { if (is_character_file(path) || is_block_file(path)) { ... } ... } you better implement: auto pathStatus{status(path)}; if (!is_directory(pathStatus)) { if (is_character_file(pathStatus) || is_block_file(pathStatus)) { ... } ... } The other key benefit is that by using symlink_status() you can check for the status of a path without following any symbolic link. This, for example, helps to check whether any file exists at a specific path. Because these file status don’t use the operating system, no overloads to return en error code are provided.Josuttis: C++17 2019/02/16 18:57 page 228
The exists() and is_...() functions for path arguments are shortcuts for calling and checking the type() for a file status. For example, is_regular_file(mypath) is a shortcut for is_regular_file(status(mypath)) which is a shortcut for status(mypath).type() == file_type::regular
20.4.3 Permissions The model to deal with file permissions is adopted from the UNIX/POSIX world. There are bits to signal read, write, and/or execute/search access for owners of the file, members of the same group, or all others. In addition, there are special bits for “set user ID on execution,” “set group ID on execution,” and the sticky bit (or another system-dependent meaning). Table Permission Bits lists the values of the bitmask scoped enumeration type perms, defined in namespace std::filesystem, which represent one or multiple permission bits.
You can ask for the current permissions and as a result check the bits of the returned perms object. To combine flags, you have to use the bit operators. For example: // if writable: if ((fileStatus.permissions() & (fs::perms::owner_write | fs::perms::group_write | fs::perms::others_write)) != fs::perms::none) { ... } A shorter (but maybe less readable) way to initialize a bitmask would be to directly use the corre- sponding octal value and relaxed enum initialization: // if writable: if ((fileStatus.permissions() & fs::perms{0222}) != fs::perms::none) { ... } Note that you have to put the & expressions in parentheses before comparing the outcome with a specific value. Note also that you can’t skip the comparison because there is no implicit conversion to bool for bitmask scoped enumeration types. As another example, to convert the permissions of a file to a string with the notation of the UNIX ls -l command, you can use the following helper function: filesystem/permAsString.hpp #include <string> #include <chrono> #include <filesystem>
This allows you to print the permissions of a file as part of an standard ostream command: std::cout << "permissions: " << asString(status(mypath).permissions()) << '\n'; A possible output for a file with all permissions for the owner and read/execute permissions for all others would be: permissions: rwxr-xr-x Note, however, that the Windows ACL (Access Control List) approach does not really fit in this scheme. For this reason, when using Visual C++, writable files always have all read, write, and execute bits set (even if they are not executable files) and files with the read-only flag always have all read and executable bits set. This also impacts the API when modifying permissions portably.
Call Effect create_directory(p) create a directory create_directory(p, attrPath) create a directory with attributes of attrPath create_directories(p) create a directory and all directories above that don’t exists yet create_hard_link(old, new) create another filesystem entry to for the existing file from create_symlink(to, new) create a symbolic link from new to to create_directory_symlink(to, new) create a symbolic link from new to the directory to copy(from, to) copy a file of any type copy(from, to, options) copy a file of any type with options copy_file(from, to) copy a file (but not directory or symbolic link) copy_file(from, to, options) copy a file with options copy_symlink(from, to) copy a symbolic link (to refers to where from refers) remove(p) remove a file or empty directory remove_all(p) remove p and recursively all files in its subtree (if any)
There is no function to create a regular file. This is covered by the I/O Stream standard library. For example, the following statement create a new empty file (if it doesn’t exist yet): std::ofstream{"log.txt"}; The functions to create one or more directories return whether a new directory was created. Thus, finding a directory that is already there is not an error. However, finding a file there that is not a directory is also not an error.8 Thus, after create_directory() or create_directories() returning false you don’t know whether there is already the requested directory or something else. Of course, you will find it out if you do something directory-specific with that file afterwards and getting an exception then might be OK (because handling this rare problem might not worth the effort). But if you want correct error messages or have to ensure for other reasons that there really is a directory you have to do something like the following: if (!create_directory(myPath) && !is_directory(myPath)) { std::cerr << "OOPS, \"" << myPath.string() << "\" is already something else\n"; ... // handle this error } The copy...() functions don’t work with special file types. By default they: • Report an error if existing files are overwritten • Don’t operate recursively • Follow symbolic links This default can be overwritten by the parameter options, which has the bitmask scoped enumera- tion type copy_options, defined in namespace std::filesystem. Table Copy Options. lists the possible values.
copy_options Effect none Default (value 0) skip_existing Skip overwriting existing files overwrite_existing Overwrite existing files update_existing Overwrite existing files if the new files are newer recursive Recursively copy sub-directories and their contents copy_symlinks Copy symbolic links as symbolic links skip_symlinks Ignore symbolic links directories_only Copy directories only create_hard_links Create additional hard links instead of copies of files create_symlinks Create symbolic links instead of copies of files (the source path must be an absolute path unless the destination path is in the current directory)
Call Effect rename(old, new) rename and/or move a file last_write_time(p, newtime) change the timepoint of the last write access permissions(p, prms) replace the permissions of a file by prms permissions(p, prms, mode) modify the permissions of a file according to mode resize_file(p, newSize) change the size of a regular file
rename() can deal with any type of file including directories and symbolic links. For symbolic links the link is renamed, not where it refers to. Note that rename() needs the full new path including filename to move it to a different directory: // move "tmp/sub/x" to "tmp/x": std::filesystem::rename("tmp/sub/x", "top"); // ERROR std::filesystem::rename("tmp/sub/x", "top/x"); // OK last_write_time() uses the timepoint format as described in Dealing with the Last Modifica- tion. For example: // touch file p (update last file access):Josuttis: C++17 2019/02/16 18:57 page 233
last_write_time(p, std::filesystem::file_time_type::clock::now()); permissions() uses the permission API format as described in Permissions. The optional mode is of the bitmask enumeration type perm_options, defined in namespace std::filesystem. It allows on one hand to choose between replace, add, and remove and on the other hand with nofollow to modify permissions of the symbolic links instead of the files they refer to. For example: // remove write access for group and any access for others: permissions(mypath, std::filesystem::perms::group_write | std::filesystem::perms::others_all, std::filesystem::perm_options::remove); Note again that Windows due to its ACL permission concept only supports two modes: • read, write, and execute/search for all (rwxrwxrwx) • read, execute/search for all (r-xr-xr-x) To switch portably between these two modes, you have to enable or disable all three write flags together (removing one after the other does not work): // portable value to enable/disable write access: auto allWrite = std::filesystem::perms::owner_write | std::filesystem::perms::group_write | std::filesystem::perms::others_write; // portably remove write access: permissions(file, allWrite, std::filesystem::perm_options::remove); A shorter (but maybe less readable) way to initialize allWrite (using relaxed enum initialization) would be as follows: std::filesystem::perms allWrite{0222}; resize_file() can be used to reduce or extend the size of a regular file: For example: // make file empty: resize_file(file, 0);
Call Effect read_symlink(symlink) yields the file an existing symbolic link refers to absolute(p) yields existing p as absolute path (not following symbolic links) canonical(p) yields existing p as absolute path (following symbolic links) weakly_canonical(p) yields p as absolute path (following symbolic links) relative(p) yields relative (or empty) path from current directory to p relative(p, base) yields relative (or empty) path from base to p proximate(p) yields relative (or absolute) path from current directory to p proximate(p, base) yields relative (or absolute) path from base to p
#include <filesystem> #include <iostream>
"..\\x" "a\\x" "..\\x" "..\\x" ps: "C:\\temp\\top\\a/s" -> "C:\\temp\\top" "..\\x" "a\\x" Note again that you need administrator rights to create symbolic links on Windows.
Call Effect equivalent(p1, p2) yields whether p1 and p2 refer to the same file space(p) yields information about the disk space available at path p current_path(p) sets the path of the current working directory to p
std::filesystem::current_path(subdir); ... } catch (...) { std::filesystem::current_path(current); throw; } std::filesystem::current_path(subdir);
directory_options Effect none Default (value 0) follow_directory_symlink Follow symbolic links (rather than skipping them) skip_permission_denied Skip directories where permission is denied
The default is not to follow symbolic links and to skip directories you are not allowed to iterate over. With skip_permission_denied iterating over a denied directory, results in an exception. createfiles.cpp shows an application of follow_directory_symlink.
Directory entries contain both a path object and additional attributes such as hard link count, file status, file size, last write time, whether it is a symbolic link, and where it refers to if it is. Note that the iterators are input iterators. The reason is that iterating over a directory might result into different results as at any time directory entries might change. This has to be taken into account when using directory iterators in parallel algorithms. Table Directory Entry Operations lists the operations you can call for a directory entry e. They are more or less the operations you can call to query file attributes, get the file status check permissions, and compare the paths.
Call Effect e.path() yields the filesystem path for the current entry e.exists() yields whether the file exists e.is_regular_file() yields whether the file exists and is a regular file e.is_directory() yields whether the file exists and is a directory e.is_symlink() yields whether the file exists and is a symbolic link e.is_other() yields whether the file exists and is neither regular nor a directory nor a symbolic link e.is_block_file() yields whether the file exists and is a block special file e.is_character_file() yields whether the file exists and is a character special file e.is_fifo() yields whether the file exists and is FIFO or pipe file e.is_socket() yields whether the file exists and is a socket e.file_size() yields the size of a file e.hard_link_count() yields the number of hard links e.last_write_time() yields the timepoint of the last write to a file e.status() yields the status of the file p e.symlink_status() yields the file status (following symbolic links) p e1 == e2 yields whether the two entry paths are equal e1 != e2 yields whether the two entry paths are not equal e1 < e2 yields whether an entry paths is less than another e1 <= e2 yields whether an entry path is less or equal than another e1 >= e2 yields whether an entry path is greater or equal than another e1 > e2 yields whether an entry path is greater than another e.assign(p) replaces the path of e by p and updates all entry attributes e.replace_filename(p) replaces the filename of the current path of e by p and updates all entry attributes e.refresh() updates all cached attributes for this entry
assign() and replace_filename() call the corresponding modifying path operations but do not modify the files in the underlying filesystem.Josuttis: C++17 2019/02/16 18:57 page 240
9 In fact, the beta implementation of the C++17 filesystem library in g++ v9 only caches the file type, not the file size (this might change until the library is released).Josuttis: C++17 2019/02/16 18:57 page 241
20.6 Afternotes The filesystem library was developed under the lead of Beman Dawes for many years as a Boost library. In 2014 for the first time it became a formal beta standard, the File System Technical Specifi- cation (see). With the File System Technical Specification was adopted to the standard library as proposed by Beman Dawes. Support to compute relative paths was added by Beman Dawes, Nicolai Josuttis, and Jamie Allsop in. A cou- ple of minor fixes were added proposed by Beman Dawes in, by Nicolai Josuttis in, by Jason Liu and Hubert Tong in https: //wg21.link/p0430r2, and especially by the members of the filesystem small group (Beman Dawes, S. Davis Herring, Nicolai Josuttis, Jason Liu, Billy O’Neal, P.J. Plauger, and Jonathan Wakely) in: C++17 2019/02/16 18:57 page 242
242
Part IV Library Extensions and Modifications This part introduces extensions and modifications to existing library components with C++17.
243Josuttis: C++17 2019/02/16 18:57 page 244
244
Chapter 21 Type Traits Extensions
Regarding type traits (standard type functions), C++17 extends the general abilities to use them and introduces some new type traits.
245Josuttis: C++17 2019/02/16 18:57 page 246
is_aggregate<> std::is_aggregate<T> evaluates whether T is an aggregate type: template<typename T> struct D : std::string, std::complex<T> { std::string data; }; D<float> s{{"hello"}, {4.5,6.7}, "world"}; // OK since C++17 std::cout << std::is_aggregate<decltype(s)>::value; // outputs: 1 (true)
21.3 std::bool_constant<> If traits yield Boolean values, they use now the alias template bool_constant<>: namespace std { template<bool B>Josuttis: C++17 2019/02/16 18:57 page 247
Trait Effect is_aggregate<T> Is aggregate type has_unique_object_representations<T> Any two object with same value have same repre- sentation in memory is_invocable<T,Args...> Can be used as callable for Args... is_nothrow_invocable<T,Args...> Can be used as callable for Args... without throwing is_invocable_r<RT,T,Args...> Can be used as callable for Args... returning RT is_nothrow_invocable_r<RT,T,Args...> Can be used as callable for Args... returning RT without throwing invoke_result<T,Args...> Result type if used as callable for Args... is_swappable<T> Can call swap() for this type is_nothrow_swappable< T> Can call swap() for this type and that operation can’t throw is_swappable_with<T,T2> Can call swap() for these two types with specific value category is_nothrow_swappable_with<T,T2> Can call swap() for these two types with specific value category and that operation can’t throw conjunction<B...> Logical and for Boolean traits B... disjunction<B... > Logical or for Boolean traits B... negation<B > Logical not for Boolean trait B
But now you can define your own type trait by deriving from bool_constant<> if you are able to formulate the corresponding compile-time expression as Boolean condition. For example: template<typename T> struct IsLargerThanInt : std::bool_constant<(sizeof(T) > sizeof(int))> { } so that you can use such a trait to compile depending on whether a type is larger than an int: template<typename T> void foo(T x) { if constexpr(IsLargerThanInt<T>::value) { ... } } By adding the corresponding variable template for suffix _v as inline variable: template<typename T> inline static constexpr auto IsLargerThanInt_v = IsLargerThanInt<T>::value; you can also shorten the usage of the trait as follows: template<typename T> void foo(T x) { if constexpr(IsLargerThanInt_v<T>) { ... } } As another example, we can define a trait that checks whether the move constructor for a type T guarantees not to throw roughly as follows: template<typename T> struct IsNothrowMoveConstructibleT : std::bool_constant<noexcept(T(std::declval<T>()))> { };
21.4 std::void_t<> A little, but incredible useful helper to define type traits was standardized in C++17: std::void_t<>. It is simply defined as follows: namespace std { template<typename...> using void_t = void; }Josuttis: C++17 2019/02/16 18:57 page 249
That is, it yields void for any variadic list of template parameters. This is helpful, where we only want to deal with types solely in an argument list. The major application is the ability to check for conditions when defining new type traits. The following example demonstrates the application of this helper: #include <utility> // for declval<> #include <type_traits> // for true_type, false_type, and void_t
// primary template: template<typename, typename = std::void_t<>> struct HasVarious : std::false_type { };
21.5 Afternotes Variable templates for standard type traits were first proposed 2014 by Stephan T. Lavavej in. They finally were adopted as part of the Library Fundamentals TS as proposed by Alisdair Meredith in. The type trait std::is_aggregate<> was introduced as a US national body comment for the standardization of C++17 (see). std::bool_constant<> was first proposed by Zhihao Yuan in. They finally were adopted as proposed by Zhihao Yuan in. std::void_t_<> was adopted as proposed by Walter E. Brown in. UNDER CONSTRUCTIONJosuttis: C++17 2019/02/16 18:57 page 251
Chapter 22 Parallel STL Algorithms
To benefit from modern multi-core architectures, the C++17 standard library introduces the ability to let STL standard algorithms run using multiple threads to deal with different elements in parallel. Many algorithms were extended by a new first argument to specify, whether and how to run the algorithm in parallel threads (the old way without this argument is, of course, still supported). In ad- dition, some supplementary algorithms were introduced that specifically support parallel processing.
#include <iostream> #include <string> #include <chrono>
/******************************************** * timer to print elapsed time ********************************************/
class Timer { private: std::chrono::steady_clock::time_point last;
251Josuttis: C++17 2019/02/16 18:57 page 252
public: Timer() : last{std::chrono::steady_clock::now()} { } void printDiff(const std::string& msg = "Timer diff: ") { auto now{std::chrono::steady_clock::now()}; std::chrono::duration<double, std::milli> diff{now - last}; std::cout << msg << diff.count() << "ms\n"; last = std::chrono::steady_clock::now(); } };
#endif // TIMER_HPP
int main() { int numElems = 1000;
struct Data { double value; // initial value double sqrt; // parallel computed square root };
coll.reserve(numElems); for (int i=0; i<numElems; ++i) { coll.push_back(Data{i * 4.37, 0}); }
Performance Benefits To find how, whether and when it’s worth to run this algorithm in parallel, let’s modify the example as follows: lib/parforeach.cpp #include <vector>Josuttis: C++17 2019/02/16 18:57 page 254
#include <iostream> #include <algorithm> #include <numeric> #include <execution> // for the execution policy #include "timer.hpp"
struct Data { double value; // initial value double sqrt; // parallel computed square root };
Again that is not a general proof where and when parallel algorithms are worth it. But it demonstrates that even for non-trivial numeric operations it can be worth to use them. The key is, it’s worth with • long operations • many many elements For example, using a parallel version of the algorithm count_if() counting the number of even elements in a vector of ints was never worth it; even not with 1,000,000,000 elements: auto num = std::count_if(std::execution::par, // execution policy coll.cbegin(), coll.cend(), // range [](int elem){ // criterion return elem % 2 == 0; }); In fact, for a simple algorithm with a fast predicate as in this example, running in parallel probably never pays off. There should happen something with each element that takes significant time and is independent from the processing of the other elements. But you can’t predict anything because it’s up to the implementer of the C++ standard library when and how to use parallel threads. In fact, can’t control how many threads are used and the implementation might decide to use multiple threads only with a certain number of elements. Measure! With the typical scenarios on you target platform(s).
Passing sequential execution as parameter can be useful if the decision whether to run sequentially or in parallel is done at runtime and you don’t want to have different function calls. Requesting a parallel sorting instead is easy: sort(std::execution::par, coll.begin(), coll.end()); Note that there is also another parallel execution policy: sort(std::execution::par_seq, coll.begin(), coll.end()); I will later explain the difference. So, again the question is, (when) is using parallel sorting better? On my laptop with only 10,000 string, you could see that the sorting took half of the time of a sequential sorting. And even sorting 1000 string was slightly better with parallel execution.
});
Policy Meaning std::execution::seq sequential execution std::execution::par parallel sequenced execution std::execution::par_unseq parallel unsequenced (vectorized) execution
Parallel unsequenced execution need special support of the compiler/hardware to detect where and how operations can be vectorized.1
1 For example, if a CPU supports registers of 512 Bytes, a compiler might perform computations by loading eight 64-Byte or four 128-Byte values at once into a register and perform multiple computations with them in parallel.Josuttis: C++17 2019/02/16 18:57 page 259
Algorithms Remark find_end(), adjacent_find() search(), search_n() except with searcher swap_ranges() replace(), replace_if() fill() generate() remove(), remove_if() unique() reverse() rotate() partition(), stable_partition() sort(), stable_sort(), partial_sort() is_sorted(), is_sorted_until() nth_element() inplace_merge() is_heap(), is_heap_until() min_element(), max_element(), min_max_element()
Algorithms Remark for_each() forward iterators and return type void all_of(), any_of(), none_of() forward iterators for_each_n() forward iterators find(), find_if(), find_if_not() forward iterators find_first_of() forward iterators count(), count_if() forward iterators mismatch() forward iterators equal() forward iterators is_partitioned() forward iterators partial_sort_copy() forward iterators includes() forward iterators lexicographical_compare() forward iterators fill_n() forward iterators generate_n() forward iterators reverse_copy() forward iterators rotate_copy() forward iterators copy(), copy_n(), copy_if() forward iterators move() forward iterators transform() forward iterators replace_copy(), replace_copy_if() forward iterators remove_copy(), remove_copy_if() forward iterators unique_copy() forward iterators partition_copy() forward iterators merge() forward iterators set_union(), set_intersection() forward iterators set_difference(), set_symmetric_difference() forward iterators exclusive_scan(), inclusive_scan() forward iterators
Algorithms Remark accumulate(), inner_product(), partial_sum() use reduce() and transform_reduce() instead search() with searcher copy_backward() move_backward() sample(), shuffle() partition_point() lower_bound(), upper_bound(), equal_range() binary_search() is_permutation(), next_permutation(), prev_permutation() push_heap(), pop_heap(), make_heap(), sort_heap()
22.6.1 reduce() For example, reduce() was introduced as a parallel form of accumulate(), which “accumulates” all elements (you can define, which operation performs the “accumulation”). For example, consider the following usage of accumulate(): lib/accumulate.cpp #include <iostream> #include <vector> #include <numeric> // for accumulate()
int main() { printSum(1); printSum(1000); printSum(1000000); printSum(10000000); } We compute the sum of all elements, which outputs: accumulate(): 10 accumulate(): 10000 accumulate(): 10000000Josuttis: C++17 2019/02/16 18:57 page 263
int main() { printSum(1); printSum(1000); printSum(1000000); printSum(10000000); } With the same output, the program now might run faster or slower (depending on whether starting multiple threads is supported and takes more or less time than the time we save to run the algorithm in parallel). The operation used here is +, which is commutative so that the order of adding the integral ele- ments doesn’t matter.
#include <iostream> #include <vector> #include <numeric> #include <execution>
#include<iomanip>
int main() { std::cout << std::setprecision(20); printSum(1); printSum(1000); printSum(1000000); printSum(10000000); } Here we use both accumulate() and reduce() and compare the results. A possible output is: accumulate(): 0.40001 reduce(): 0.40001 equal accumulate(): 400.01 reduce(): 400.01 differ accumulate(): 400010Josuttis: C++17 2019/02/16 18:57 page 265
reduce(): 400010 differ accumulate(): 4.0001e+06 reduce(): 4.0001e+06 differ While the results look the same they sometimes differ. This is a possible consequence of adding floating-point values in different order. If we change the precision of printing floating-point values: std::cout << std::setprecision(20); we can see that the resulting values are slightly different: accumulate(): 0.40001000000000003221 reduce(): 0.40001000000000003221 equal accumulate(): 400.01000000000533419 reduce(): 400.01000000000010459 differ accumulate(): 400009.99999085225863 reduce(): 400009.9999999878346 differ accumulate(): 4000100.0004483023658 reduce(): 4000100.0000019222498 differ Because it is undefined if, when, and how parallel algorithms are implemented, the result might look the same on some platforms (up to a certain number of elements).
int main() { printSum(1); printSum(1000); printSum(1000000); printSum(10000000); } Here, we pass a lambda that for each value takes the current sum and adds the square of the new vale: auto squaredSum = [] (auto sum, auto val) { return sum + val * val; }; Using accumulate() the output looks fine: accumulate(): 30 accumulate(): 30000 accumulate(): 30000000 accumulate(): 300000000 However, let’s switch to parallel processing with reduce(): lib/reduce2.cpp #include <iostream> #include <vector> #include <numeric> // for reduce() #include <execution>
int main() { printSum(1); printSum(1000); printSum(1000000); printSum(10000000); } The output might become something like this: reduce(): 30 reduce(): 30000 reduce(): -425251612 reduce(): 705991074 Yes, the result sometimes might be wrong. The problem is that the operation is not associative. If we for example apply this operation to the elements 1; 2, and 3, we might first compute 0+1*1 and 2+3*3, but when we then combine the intermediate result we square 3 again, by essentially computing: (0+1*1) + (2+3*3) * (2+3*3) But why are the results sometimes correct here? Well, it seems that on this platform reduce() only runs in parallel with a certain number of elements. And that’s totally fine. Thus, use test cases with enough elements to detect problems like this. The solution to this problem is to use another new algorithm, transform_reduce(). It separates the modification we want to perform with each element (which is one thing we can parallelize) and the accumulation of the results provided it is commutative (which is the other thing we can parallelize). lib/transformreduce.cpp #include <iostream>Josuttis: C++17 2019/02/16 18:57 page 268
#include <vector> #include <numeric> // for transform_reduce() #include <execution> #include <functional>
int main() { printSum(1); printSum(1000); printSum(1000000); printSum(10000000); } When calling transform_reduce(), we pass • the execution policy to (allow to) run this in parallel • the range of the values to deal with • 0L as the initial value of the outer accumulation • the operation + as operation of the outer accumulation • a lambda for the processing of each value before the accumulation transform_reduce() will probably be by far the most important parallel algorithm, because often we modify values before we combine them (also called the map reduce principle).Josuttis: C++17 2019/02/16 18:57 page 269
} First, we recursively collect all filesystem paths in the directory given as command-line argument: std::filesystem::path root{argv[1]};
std::vector<std::filesystem::path> paths; std::filesystem::recursive_directory_iterator dirpos{root}; std::copy(begin(dirpos), end(dirpos), std::back_inserter(paths)); Note that because we might pass an invalid path, possible (filesystem) exceptions are caught. Then, we iterate over a collection of the filesystem paths to accumulate their sizes if they are regular files: auto sz = std::transform_reduce( std::execution::par, // parallel execution paths.cbegin(), paths.cend(), // range std::uintmax_t{0}, // initial value std::plus<>(), // accumulate ... [](const std::filesystem::path& p) { // file size if regular file return is_regular_file(p) ? file_size(p) : std::uintmax_t{0}; }); The new standard algorithm transform_reduce() operates as follows: • The last argument is applied to each element. Here, passed lambda is called for each path element and queries its size if it is a regular file. • The second but last argument is the operation that combines all the sizes. Because we want to accumulate the sizes we use the standard function object std::plus<>. • The third but last argument is the initial value for the operation that combines all the sizes. Thus, if the lists of path is empty, we start with 0. We use the same type the the return value of file_size(), std::uintmax_t. Note that asking for the size of a file is a pretty expensive operation, because it requires an operating system call. For this reason it pretty fast pays off to use an algorithms that calls this transforma- tion (from path to size) in parallel with multiple threads in any order and computes the sum. First measurements demonstrate a clear win (up to doubling the speed of the program). Note also that you can’t pass the paths the directory iterator iterates over directly to the parallel algorithm, because directory iterators are input iterators while the parallel algorithms require forward iterators. Finally note that transform_reduce() is defined in header <numeric> instead of <algorithm> (just like accumulate() it counts as numeric algorithm.Josuttis: C++17 2019/02/16 18:57 page 271
272
Chapter 23 Substring and Subsequence Searchers
Since C++98, the C++ standard library provides a search algorithm to find a subsequence of elements in a range. However, there exist different search algorithms. For example, by pre-computing statistics about the pattern to be searched, these algorithms can perform significantly better for special tasks such as finding a substring in a large text. C++17 therefore introduced the Boyer-Moore and Boyer-Moore-Horspool search algorithms and various interfaces to use them. They are especially for searching substrings in large texts, but can also improve finding subsequences in containers or ranges.
273Josuttis: C++17 2019/02/16 18:57 page 274
sub.begin(), sub.end()); 4. Using a default_searcher: auto pos = std::search(text.begin(), text.end(), std::default_searcher{sub.begin(), sub.end()}); 5. Using a boyer_moore_searcher: auto pos = std::search(text.begin(), text.end(), std::boyer_moore_searcher{sub.begin(), sub.end()}); 6. Using a boyer_moore_horspool_searcher: auto pos = std::search(text.begin(), text.end(), std::boyer_moore_horspool_searcher{sub.begin(), sub.end()}); The new searchers are defined in <functional. The Boyer-Moore and the Boyer-Moore-Horspool searchers are well known algorithms that pre- compute tables (of hash values) before the search starts to improve the speed of the search if the search covers a text and/or substring of significant size. Using them, the algorithms require random- access iterators (instead of forward iterators, which is enough for a naive search()). In lib/searcher1.cpp you can find a full program demonstrating the use of these different ways to search a substring. Note that all applications of search() yield an iterator to the first character of a matching sub- sequence. If there is none, the passed end of the text is returned. This way we can search for all occurrences of a substring as follows: std::boyer_moore_searcher bm{sub.begin(), sub.end()}; for (auto pos = std::search(text.begin(), text.end(), bm); pos != text.end(); pos = std::search(pos+sub.size(), text.end(), bm)) { std::cout << "found '" << sub << "' at index " << pos - text.begin() << '\n'; }
Performance of Searchers Which one is best way to search for a substring (fastest and/or fewest memory)? One special aspect of this question is that we can also use the traditional search() now in parallel mode (which is not possible when using the new searchers). The answer depends on the circumstances: • Just using (non-parallel) search() is usually the slowest, because for each character in text we start to find out whether a substring matches. • Using the default_searcher should be equivalent to that, but I saw a worse running time up to a factor of 3.Josuttis: C++17 2019/02/16 18:57 page 275
• Using find() might be faster, but this depends on the quality of implementation in the library. With the measurements I did, I saw an improvement in running time between 20% and a factor of 100 compared to search(). • For texts and substrings of significant size, the boyer_moore_searcher should be the fastest. Compared to search() I saw an improvement with a factor of 50 or even 100. In large texts with significant substrings this always was the fastest search. • The boyer_moore__horspool_searcher trades space for time. It is usually slower than the boyer_moore_searcher, but should not use that much memory. The improvement I saw did vary a lot from platform to platform. While on one platform it was close to boyer_moore (50 times better than search() and 10 times better than find()), on other platforms the improvement was only a factor of 2 or 3 against search() and using find() was way faster. • Using the parallel search() gave me a factor of 3 compared against ordinary search() where it already was supported, which looks like using the Boyer-Moore searcher usually should still be way faster. So there is only one advice I can give: Measure! Test the typical scenarios on your target platforms. It’s worth it because you might get an improvement of a factor of 100 (which I for example got searching for a substring of 1000 characters located close to the end of a string with 10 Million characters). The code in lib/searcher1.cpp also prints measurements of the different search options so that you can compare the numbers on your platform.
std::cout << "found '" << sub << "' at index " << beg - text.begin() << '-' << end - text.begin() << '\n'; } Just finding the first occurrence of a substring using the searchers directly you can use if with initialization and structured bindings: std::boyer_moore_searcher bm{sub.begin(), sub.end()}; ... if (auto [beg, end] = bm(text.begin(), text.end()); beg != text.end()) { std::cout << "found '" << sub << "' first at index " << beg - text.begin() << '-' << end - text.begin() << '\n'; }
gorithms). But using the boyer_moore_horspool_searcher could make the search both faster by a factor of 50 but also slower by a factor of 2. Measure! The code in lib/searcher2.cpp demonstrates the different searches of a subsequence in a vec- tor and also prints measurements of the different search options so that you can compare the numbers on your platform.
23.4 Afternotes These searches were first proposed by Marshall Clow in referring to Boost.Algorithm as a reference implementation. They became part of the first Library Fundamentals TS. For C++17 they were then adopted with other components as proposed by Beman Dawes and Al- isdair Meredith in, including an interface fix proposed by Marshall Clow in: C++17 2019/02/16 18:57 page 279
Chapter 24 Other Utility Functions and Algorithms
C++17 provides a couple of new utility functions and algorithms, which are described in this chapter.
#include <iterator> #include <iostream>
template<typename T> void printLast5(const T& coll) { // compute size: auto size{std::size(coll)};
279Josuttis: C++17 2019/02/16 18:57 page 280
#endif // LAST5_HPP Here, with auto size{std::size(coll)}; we initialize size with the size of the passed collection, which either maps to coll.size() or to the size of a passed raw array. Thus, if we call: std::array arr{27, 3, 5, 8, 7, 12, 22, 0, 55}; std::vector v{0.0, 8.8, 15.15}; std::initializer_list<std::string> il{"just", "five", "small", "string", "literals"}; printLast5(arr); printLast5(v); printLast5(il); The output is: 9 elems: ... 7 12 22 0 55 3 elems: 0 8.8 15.15 5 elems: just five small string literals And because because raw C arrays are supported, we can also call printLast5("hello world"); which prints: 12 elems: ... o r l d Note that this function template therefore replaces the usual way to compute the size of an array using countof or ARRAYSIZE defined as something like: #define ARRAYSIZE(a) (sizeof(a)/sizeof(*(a)))Josuttis: C++17 2019/02/16 18:57 page 281
Note also that you can’t pass an inline defined initializer list to last5<>(). The reason is that a template parameter can’t deduce a std::initializer_list(). For this, you have to overload last5() with the following declaration: template<typename T> void printLast5(const std::initializer_list<T>& coll) Finally, note that this code doesn’t work for forward_list<>, because forward lists don’t have a member function size(). So, if you only want to check whether the collection is empty, you better use std::empty(), which is discussed next.
template<typename T> void printData(const T& coll) { // print every second element: for (std::size_t idx{0}; idx < std::size(coll); ++idx) { if (idx % 2 == 0) { std::cout << std::data(coll)[idx] << ' '; } }Josuttis: C++17 2019/02/16 18:57 page 282
#endif // DATA_HPP Thus, if we call: std::array arr{27, 3, 5, 8, 7, 12, 22, 0, 55}; std::vector v{0.0, 8.8, 15.15}; std::initializer_list<std::string> il{"just", "five", "small", "string", "literals"}; printData(arr); printData(v); printData(il); printData("hello world"); The output is: 27 5 7 22 55 0 15.15 just small literals h l o w r d
24.2 as_const() The new helper function std::as_const() converts values to the corresponding const values with- out using static_cast<> or the add_const_t<> type trait. It allows us to force to call the const overload of a function for a non-const object in case this makes a difference: std::vector<std::string> coll;
24.3 clamp() C++17 provides a new utility function clamp(), which enables to “clamp” a value between a passed minimum and maximum value. It is a combined call of min() and max(). For example: lib/clamp.cpp #include <iostream> #include <algorithm> // for sample()
int main() { for (int i : {-7, 0, 8, 15}) { std::cout << std::clamp(i, 5, 13) << '\n'; } } The call of clamp(i, 5, 13) has the same effect as calling std::min(std::max(i, 5), 13) so that the program has the following output: 5 5 8 13 As for min() and max() clamp() requires that all arguments, which are passed by const reference, have the same type T: namespace std { template<typename T> constexpr const T& clamp(const T& value, const T& min, const T& max); } The return value is a const references to one of the passed arguments. If you pass arguments of different type, you can explicitly specify the template parameter T:Josuttis: C++17 2019/02/16 18:57 page 284
double d{4.3}; int max{13}; ... std::clamp(d, 0, max); // compile-time ERROR std::clamp<double>(d, 0, max); // OK You can also pass floating-point values provided they don’t have the value NaN. As for min() and max() you can pass a predicate as comparison operation. For example: for (int i : {-7, 0, 8, 15}) { std::cout << std::clamp(i, 5, 13, [] (auto a, auto b) { return std::abs(a) < std::abs(b); }) << '\n'; } has the following output: -7 5 8 13 Because the absolute value of -7 is between the absolute values of 5 and 13 clamp() yields -7 in this case. There is no overload of clamp() taking an initializer list of values (as min() and max() have).
24.4 sample() With sample() C++17 provides an algorithm that extracts a random subset (sample) from a given range of values (the population). This is sometimes called reservoir sampling or selection sampling. Consider the following example program: lib/sample1.cpp #include <iostream> #include <vector> #include <string> #include <iterator> #include <algorithm> // for clamp() #include <random> // for default_random_engine
int main() { // initialize a vector of 10,000 string values: std::vector<std::string> coll;Josuttis: C++17 2019/02/16 18:57 page 285
int main() { // initialize a vector of 10,000 string values: std::vector<std::string> coll; for (int i=0; i < 10000; ++i) { coll.push_back("value" + std::to_string(i)); }
// copy 10 randomly selected values from the source range to the destination range: auto end = sample(coll.begin(), coll.end(), subset.begin(), 10, eng);
24.5 for_each_n() As part of the Parallel STL algorithms a new algorithm for_each_n() was proposed, which is also available since C++17 in the traditional non-parallel form. Similar for copy_n(), fill_n(), and generate_n() it takes an integral parameter to apply the passed callable to n elements of a given range. For example: lib/foreachn.cpp #include <iostream> #include <vector> #include <string> #include <algorithm> // for for_each_n()
int main() { // initialize a vector of 10,000 string values: std::vector<std::string> coll; for (int i=0; i < 10000; ++i) { coll.push_back(std::to_string(i)); }
24.6 Afternotes size(), empty(), and data() were first proposed by Riccardo Marcangelo in. link/n4017. The finally accepted wording was formulated by Riccardo Marcangelo in https: //wg21.link/n4280. as_const() was first proposed by ADAM David Alan Martin and Alisdair Meredith in https: //wg21.link/n4380. The finally accepted wording was formulated by ADAM David Alan Martin and Alisdair Meredith in. clamp() was first proposed by Martin Moene and Niels Dekker in. The finally accepted wording was formulated by Martin Moene and Niels Dekker in. link/p002501.Josuttis: C++17 2019/02/16 18:57 page 290
Chapter 25 Container Extensions
There are a couple of minor or small changes to the standard containers of the C++ standard library, which are described in this chapter.
#include <vector> #include <iostream> #include <string>
class Node {
291Josuttis: C++17 2019/02/16 18:57 page 292
private: std::string value; std::vector<Node> children; // OK since C++17 (Node is an incomplete type here) public: // create Node with value: Node(std::string s) : value{std::move(s)}, children{} { }
#endif // NODE_HPP You could use this class, for example, as follows: lib/incomplete.cpp #include "incomplete.hpp" #include <iostream>
int main() { // create node tree: Node root{"top"}; root.add(Node{"elem1"}); root.add(Node{"elem2"}); root[0].add(Node{"elem1.1"});Josuttis: C++17 2019/02/16 18:57 page 293
24 2 "papaya"
For example: std::multimap<double, std::string> src {{1.1,"one"}, {2.2,"two"}, {3.3,"three"}}; std::map<double, std::string> dst {{3.3,"old data"}};
25.3 Afternotes Container support for incomplete types was first handles were were discussed by Matt Austern in and first proposed by Zhihao Yuan in: C++17 2019/02/16 18:57 page 295
n3890. The finally accepted wording was formulated by Zhihao Yuan in n4510. Node handles were were first proposed indirectly by Alan Talbot requesting splice operations as library issue and by Alisdair Meredith requesting move support for node elements as library issue. The finally accepted wording was formulated by Alan Talbot, Jonathan Wakely, Howard Hinnant, and James Dennett in https: //wg21.link/p0083r3. The API was slightly clarified finally by Howard E. Hinnant in https: //wg21.link/p0508r0.Josuttis: C++17 2019/02/16 18:57 page 296
296
Chapter 26 Multi-Threading and Concurrency
A couple of minor extensions and improvements were introduced in the area of multi-threading and concurrency.
297Josuttis: C++17 2019/02/16 18:57 page 298
1 The typical implementation is to provide a partial specialization for the case where only a single mutex is passed to the scoped lock. 2 In the original C++17 standard the adopt_lock argument was at the end, which was later fixed with https: //wg21.link/p0739r0.Josuttis: C++17 2019/02/16 18:57 page 299
26.1.2 std::shared_mutex C++14 added a shared_timed_mutex to support read/write locks, where multiple threads concur- rently read a value, while from time to time a thread might update the value. Because on some platforms mutexes that don’t support timed locks can be implemented more efficient, now the type shared_mutex was introduced (as as std::mutex exists besides std::timed_mutex since C++11). shared_mutex is defined in header <shared_mutex> and supports the following operations: • for exclusive locks: lock(), try_lock(), unlock() • for shared read-access: lock_shared(), try_lock_shared(), unlock_shared() • native_handle() That is, unlike shared_times_mutex it doesn’t support try_lock_for(), try_lock_until(), try_lock_shared_for(), and try_lock_shared_until().
Using a shared_mutex The way to use a shared_mutex is as follows: Assume you have a shared vector, which is usually read by multiple threads, but from time to time modified: #include <shared_mutex> #include <mutex> ... std::vector<double> v; // shared resource std::shared_mutex vMutex; // control access to v (shared_timed_mutex in C++14) To have shared read-access (so that multiple readers do not block each other), you use a shared_lock, which is a lock guard for shared read access (introduced with C++14). For example: if (std::shared_lock sl(vMutex); v.size() > 0) { ... // (shared) read access to the elements of vector v } Only for an exclusive write access you use an exclusive lock guard, which might be either a simple lock_guard or scoped_lock (as just introduced) or a sophisticated unique_lock. For exam- ple: { std::scoped_lock sl(vMutex); ... // exclusive write read access to the vector v }
} else { ... } If the value is true, then for any object of the corresponding atomic type is_lock_free() yields true: if constexpr(atomic<T>::is_always_lock_free) { static_assert(atomic<T>().is_lock_free()); // never fails } If available, the value fits to the value of the corresponding macro, which had to used before C++17. For example, if and only if ATOMIC_INT_LOCK_FREE yields 2 (which stands for “always”), then std::atomic<int>::is_always_lock_free() yields true: if constexpr(std::atomic<int>::is_always_lock_free) { // ATOMIC_INT_LOCK_FREE == 2 ... } else { // ATOMIC_INT_LOCK_FREE == 0 || ATOMIC_INT_LOCK_FREE == 1 ... } The reason to replace the macro by a static member is to have more type safety and support the use of this checks in tricky generic code (e.g., using SFINAE). Remember that std::atomic<> can also be used for trivially copyable types. Thus, you can also check, whether your own structure would need locks if used atomically. For example: template<auto SZ> struct Data { bool set; int values[SZ]; double average; };
if constexpr(std::atomic<Data<4>>::is_always_lock_free) { ... } else { ... }Josuttis: C++17 2019/02/16 18:57 page 301
3 Accessing multiple objects by different threads concurrently is usually safe in C++, but the necessary syn- chronization might degrade the performance of the program.Josuttis: C++17 2019/02/16 18:57 page 302
26.4 Afternotes scoped_locks were originally proposed as modification of lock_guard to become variadic by Mike Spertus in, which was accepted as p0156r0. However, because this turned out to be an ABI breakage, the new name scoped_lock was introduced by Mike Spertus with and finally accepted. Mike Spertus, Walter E. Brown, and Stephan T. Lavavej later changed the order of a constructor as a defect against C++17 with: The shared_mutex was first proposed together with all other mutexes for C++11 by Howard Hinnant in. However, it took time to convince the C++ standardization committee that all proposed mutexes are useful. So, the finally accepted wording was formulated for C++17 by Gor Nishanov in. The std::atomic<> static member std::is_always_lock_free was first proposed by Olivier Giroux, JF Bastien, and Jeff Snyder in. The finally accepted word- ing was also formulated by Olivier Giroux, JF Bastien, and Jeff Snyder in p0152r1. The hardware interference (cache-line) sizes were first proposed by JF Bastien and Olivier Giroux in. The finally accepted wording was also formulated by JF Bastien and Olivier Giroux in: C++17 2019/02/16 18:57 page 303
Part V Expert Utilities This part introduces new language and library features that the average application programmer usu- ally doesn’t have to know. It might cover tools for programmers of foundation libraries, of specific modes, or in special contexts.
303Josuttis: C++17 2019/02/16 18:57 page 304
304
Chapter 27 Polymorphic Memory Resources (PMR)
Since C++98 the standard library has supported the ability to configure the way classes allocate their internal (heap) memory. For that reason, almost all types in the standard library that allocate memory have an allocator parameter. Thus, you can configure the way containers, strings, and other types allocate their internal memory if they need more space than the one allocated on the stack. The default way to allocate this memory is to allocate it from the heap. But there are different reasons to modify this default behavior: • You can use your own way of allocating memory to reduce the number of system calls. • You can ensure that allocated memory is located next to each other to benefit from CPU caching. • You can place containers and their elements in shared memory available for multiple processes. • You can even redirect these heap memory calls to use memory earlier allocated on the stack. Thus, there can be performance and functional reasons.1 However, using allocators (right) until C++17 was in many ways both tricky and clumsy (due to some flaws, too much complexity, and modifications with backward compatibility). C++17 now provides a pretty easy-to-use approach for predefined and user-defined ways of mem- ory allocation, which can be used for standard types and user-defined types. For this reason, this chapter will discuss: • Using standard memory resources provided by the standard library • Defining custom memory resources • Providing memory resource support for custom types This chapter would not have been possible without the significant help of Pablo Halpern, Arthur O’Dwyer, David Sankel, and Jonathan Wakely. A few videos explain the features provided here:
1 The initial reason for allocators was to be able to deal with pointers of different size (“near” and “far” pointers).
305Josuttis: C++17 2019/02/16 18:57 page 306
int main() { TrackNew::reset();
std::vector<std::string> coll; for (int i=0; i < 1000; ++i) { coll.emplace_back("just a non-SSO string"); }
TrackNew::status(); } Note that we track the amount of memory allocations using a class that tracks all ::new calls per- formed with the following loop: std::vector<std::string> coll; for (int i=0; i < 1000; ++i) { coll.emplace_back("just a non-SSO string"); }Josuttis: C++17 2019/02/16 18:57 page 307
There are a lot of allocations, because the vector internally uses memory to store the elements. In addition, the string elements themselves might allocate memory on the heap to hold their current value (With the often implemented small string optimization this typically only happens if the strings have more than 15 characters). The output of the program might be something like the following: 1018 allocations for 134,730 bytes This would mean to have one allocation for each element plus 18 for the vector internally, because it 18 times allocates (more) memory to hold its elements.2 Behavior like this can become critical, because memory (re-)allocations take time and in some contexts (such as embedded systems) it might be a problem to allocate heap memory at all. We could ask the vector to reserve enough memory in ahead, but in general you can’t avoid real- locations unless you know the amount of data to process in ahead. If you don’t know exactly how much data you process, you always have to find a compromise between avoiding reallocations and not wasting too much memory. And you need at least 1001 allocations (one allocation to hold the elements in the vector and one for each string not using the small string optimization).
int main() { TrackNew::reset();
2 The number of reallocations might differ from platform to platform, because the algorithms to reallocate more memory differ. If the current capacity of memory is exceeded, some implementations enlarge it by 50% while others double the size of the memory.Josuttis: C++17 2019/02/16 18:57 page 308
TrackNew::status(); } First, we allocate our own memory on the stack using the new type std::byte: // allocate some memory on the stack: std::array<std::byte, 200000> buf; Instead of std::byte you could also just use char. Then, we initialize a monotonic_buffer_resource with this memory, passing its address and its size: std::pmr::monotonic_buffer_resource pool{buf.data(), buf.size()}; Finally, we use a std::pmr::vector, which takes the memory resource for all its allocations: std::pmr::vector<std::string> coll{&pool}; This declaration just a shortcut for the following: std::vector<std::string, std::pmr::polymorphic_allocator<std::string>> coll{&pool}; That is, we declare that the vector uses a polymorphic allocator, which can switch between differ- ent memory resources at runtime. The class monotonic_buffer_resource is derived from class memory_resource and therefore can be used a memory resource for a polymorphic allocator. So, by passing the address of our memory resource, we ensure that the vector uses our memory resource as polymorphic allocator. If we measure the allocated memory of this program, the output might become: 1000 allocations for 32000 bytes The 18 allocations of the vector are no longer performed on the heap. Instead, our initialized buffer buf is used. If the pre-allocated memory of 200000 bytes is not enough, the vector will still allocate more memory on the heap. That happens, because the monotonic_memory_resource uses the default allocator, which allocates memory with new, as fallback.
#include <iostream> #include <string> #include <vector> #include <array> #include <cstdlib> // for std::byte #include <memory_resource> #include "../lang/tracknew.hpp"
int main() { TrackNew::reset();
// and use it as initial memory pool for a vector and its strings: std::pmr::monotonic_buffer_resource pool{buf.data(), buf.size()}; std::pmr::vector<std::pmr::string> coll{&pool};
TrackNew::status(); } Due to the following definition of the vector: std::pmr::vector<std::pmr::string> coll{&pool}; the output of the program becomes: 0 allocations for 0 bytes The reason is that by default a pmr vector tries to propagate its allocator to its elements. This is not successful when the elements don’t use a polymorphic allocator, as it is the case with type std::string. However, by using type std::pmr::string, which is a string using a polymorphic allocator, the propagation works fine. Again, only when there is no more memory in the buffer, new memory gets allocated by the pool on the heap. For example, this might happen with the following modification: for (int i=0; i < 50000; ++i) { coll.emplace_back("just a non-SSO string"); } when the output might suddenly become:Josuttis: C++17 2019/02/16 18:57 page 310
int main() { // allocate some memory on the stack: std::array<std::byte, 200000> buf;
for (int num : {1000, 2000, 500, 2000, 3000, 50000, 1000}) { std::cout << "-- check with " << num << " elements:\n"; TrackNew::reset();
TrackNew::status(); } } Here, after allocating the 200,000 bytes on the stack, we use this memory again and again to initialize a new resource pool for the vector and its elements. The output might become: -- check with 1000 elements: 0 allocations for 0 bytes -- check with 2000 elements: 1 allocations for 300000 bytes -- check with 500 elements: 0 allocations for 0 bytesJosuttis: C++17 2019/02/16 18:57 page 311
std::pmr::synchronized_pool_resource pool1; std::pmr::string s2{"my string", &pool1};
std::pmr::monotonic_buffer_resource pool2{...}; std::pmr::string s3{"my string", &pool2}; In general, memory resources are passed as pointers. For this reason it is important that you ensure that the resource objects these pointers refer to exist until the last deallocation is called (this might be later than you expect if you move objects around and memory resources are interchangeable).
You can get the current default resource with std::pmr::get_default_resource(), which is what you can pass to initialize a polymorphic allocator. You can globally set a different default mem- ory resource with std::pmr::set_default_resource(). This resource is used as default in any scope until the next call of std::pmr::set_default_resource() is performed. For example: static std::pmr::synchronized_pool_resource myPool;
{ static std::pmr::synchronized_pool_resource myPool; return &myPool; } The return type memory_resource is the base class of all memory resources. Note that a previous default resource might still be used when it was replaced. Unless you know (and ensure) that this is not the case, which for example means that no static objects are created using the resource, you should make your resource let live as long as possible (again, ideally creating right at the beginning of main(), to that it is destroyed last).3
new_delete_resource() new_delete_resource() is the default memory resource. It is returned by get_default_resource() unless you have defined a different default memory resource by calling set_default_resource() It handles allocations like they are handled when using the default allocator: • Each allocation calls new • Each deallocation calls delete However, note that a polymorphic allocator with this memory resource is not interchangeable with the default allocator, because they simply have different types. For this reason std::string s{"my string with some value"}; std::pmr::string ps{std::move(s), std::pmr::new_delete_resource()}; // copies will not move (pass the allocated memory for s to ps). Instead, the memory of s will be copied to new memory of ps allocated with new.
(un)synchronized_pool_resource synchronized_pool_resource and unsynchronized_pool_resource are classes for memory resources that try to locate all memory close to each other. Thus, they force little fragmentation of memory. The difference is that synchronized_pool_resource is thread safe (which costs more perfor- mance) while unsynchronized_pool_resource is not. So if you know that the memory of this pool is only handled by a single thread (or that (de)allocations are synchronized), you should prefer unsynchronized_pool_resource. Both classes still use an underlying memory resource to actually perform the allocations and deal- locations. They only act as a wrapper ensuring that these allocations are better clustered. Thus,
3 You might still get into trouble if you have other global objects that are destroyed later, so that good book- keeping for proper clean-up of an ending program is worth it.Josuttis: C++17 2019/02/16 18:57 page 314
std::pmr::synchronized_pool_resource myPool; is the same as std::pmr::synchronized_pool_resource myPool{std::pmr::get_default_resource()}; In addition, they deallocate all memory when the pool is destroyed. One major application of these pools is to ensure that elements in a node based container are located next to each other. This may also increase the performance of the containers significantly, because then CPU caches load elements together in cache lines. The effect is that after you accessed one element accessing other elements becomes very fast, because they are already in the cache. However, you should measure, because this depends on the implementation of the memory resource. For example, if the memory resource uses a mutex to synchronize memory access, performance might become significant worse. Let’s look at the effect with a simple example. The following program creates a map, which maps integral values to strings. pmr/pmrsync0.cpp #include <iostream> #include <string> #include <map>
int main() { std::map<long, std::string> coll;
diff: 1777277585312 diff: -320 diff: 60816 diff: 1120 diff: -400 diff: 80 diff: -2080 diff: -1120 diff: 2720 diff: -3040 The elements are not located next to each other. We have distances of 60,000 bytes for 10 elements with a size of about 24 bytes. This fragmentation gets a lot worse, if other memory is allocated between the allocation for the elements. Now let’s run the program with polymorphic allocators using a synchronized_pool_resource:
pmr/pmrsync1.cpp #include <iostream> #include <string> #include <map> #include <memory_resource>
int main() { std::pmr::synchronized_pool_resource pool; std::pmr::map<long, std::pmr::string> coll{&pool};
std::pmr::synchronized_pool_resource pool; std::pmr::map<long, std::pmr::string> coll{&pool}; The output, for example, now look as follows: diff: 2548552461600 diff: 128 diff: 128 diff: 105216 diff: 128 diff: 128 diff: 128 diff: 128 diff: 128 diff: 128 As you can see, the elements are now located close to each other. Still, they are not located in one chunk of memory. When the pool finds out that the first chunk is not big enough for all elements, it allocates more memory for even more elements. Thus, the more memory we allocate, the larger the chunks of memory are so that more elements are located close to each other. The details of this algorithms are implementation-defined. Of course, this output is special because we create the elements in the order they are sorted inside the container. Thus, in practice, if you create object with random values, the elements will not be located sequentially one after the other (in different chunks of memory). However, they are still located close to each other and that is important for good performance when dealing with the elements of this container. Also note that we don’t look at how memory for the element values is arranged. Here, usually the small string optimization causes that for the elements no memory is allocated. But as soon as we enlarge the string values, the pool also tries to place those together. Note that the pool manages different chunks of memory for different allocated sizes. That is, in general the elements are located to each other and the string values for elements of the same string size are located close to each other.
monotonic_buffer_resource Class monotonic_buffer_resource also provides the ability to place all memory in big chunks of memory. However, it has two other abilities: • You can pass a buffer to be used as memory. This, especially, can be memory allocated on the stack. • The memory resource never deallocates until the resource as a whole gets deallocated. That is, it also tries to avoid fragmentation. And it is super fast because deallocation is a no-op and you skip the need to track deallocated memory for further use. Whenever there is a request to allocate memory it just return the next free piece of memory until all memory is exhausted. Note that objects are still destructed. Only their memory is not freed. If you delete objects, which usually deallocates their memory, the deallocation has no effect.Josuttis: C++17 2019/02/16 18:57 page 317
You should prefer this resource, if you either have no deletes or if you have enough memory to waste (not reusing memory that was previously used by another object). We already saw applications of monotonic_buffer_resource in our first motivating examples, where we passed memory allocated on the stack to the pool: std::array<std::byte, 200000> buf; std::pmr::monotonic_buffer_resource pool{buf.data(), buf.size()}; You can also use this pool to let any memory resource skip deallocations (optionally passing an initial size). By default this would apply to the default memory resource, which is new_delete_resource() by default. That is, with // use default memory resource but skip deallocations as long as the pool lives: { std::pmr::monotonic_buffer_resource pool;
std::pmr::vector<std::pmr::string> coll{&pool}; for (int i=0; i < 100; ++i) { coll.emplace_back("just a non-SSO string"); } coll.clear(); // destruction but no deallocation } // deallocates all allocated memory The inner block with the loop will from time to time allocate memory for the vector and its ele- ments. As we are using a pool, allocations are combined to chunks of memory. This, for example, might result in 14 allocations. By first calling coll.reserve(100) this usually becomes only 2 allocations. As written, no deallocation is done as long as the pool exists. Thus, if the creation and usage of the vector is done in a loop, the memory allocated by the pool will raise and raise. monotonic_buffer_resource also allows us to pass an initial size, which it then used a mini- mum size of its first allocation (which is done when the first request for memory occurs). In addition, you can define which memory resource it uses to perform the allocations. This allows us to chain memory resources to provide more sophisticated memory resources. Consider the following example: { // allocate chunks of memory (starting with 10k) without deallocating: std::pmr::monotonic_buffer_resource keepAllocatedPool{10000}; std::pmr::synchronized_pool_resource pool{&keepAllocatedPool};
null_memory_resource() null_memory_resource() handles allocations in a way that each allocation throws a bad_alloc exception. The most important application is to ensure that a memory pool using memory allocated on the stack not suddenly allocates memory on the heap, if it needs more. Consider the following example: pmr/pmrnull.cpp #include <iostream> #include <string> #include <unordered_map> #include <array> #include <cstddef> // for std::byte #include <memory_resource>
int main() { // use memory on the stack without fallback on the heap: std::array<std::byte, 200000> buf; std::pmr::monotonic_buffer_resource pool{buf.data(), buf.size(), std::pmr::null_memory_resource()};
{ private: std::pmr::memory_resource* upstream; // wrapped memory resource std::string prefix{}; public: // we wrap the passed or default resource: explicit Tracker(std::pmr::memory_resource* us = std::pmr::get_default_resource()) : upstream{us} { } explicit Tracker(std::string p, std::pmr::memory_resource* us = std::pmr::get_default_resource()) : prefix{std::move(p)}, upstream{us} { } private: void* do_allocate(size_t bytes, size_t alignment) override { std::cout << prefix << "allocate " << bytes << " Bytes\n"; void* ret = upstream->allocate(bytes, alignment); return ret; } void do_deallocate(void* ptr, size_t bytes, size_t alignment) override { std::cout << prefix << "deallocate " << bytes << " Bytes\n"; upstream->deallocate(ptr, bytes, alignment); } bool do_is_equal(const std::pmr::memory_resource& other) const noexcept override { // same object?: if (this == &other) return true; // same type and prefix and equal upstream?: auto op = dynamic_cast<const Tracker*>(&other); return op != nullptr && op->prefix == prefix && upstream->is_equal(other); } }; As usual for smart memory resources, we support to pass another memory resource (usually called upstream) to wrap it or use it as fallback. In addition, we can pass an optional prefix. On each allocation and deallocation we then trace this call with the optional prefix. The only other function we have to implement is do_is_equal(), which defines when two al- locators are interchangeable (i.e., whether and when one polymorphic memory resource object can deallocate memory allocated by another). In this case, we simply say that any object of this type can deallocate memory allocated from any other object of this type provided the prefix is the same:Josuttis: C++17 2019/02/16 18:57 page 321
int main() { { // track allocating chunks of memory (starting with 10k) without deallocating: Tracker track1{"keeppool:"}; std::pmr::monotonic_buffer_resource keeppool{10000, &track1}; { Tracker track2{" syncpool:", &keeppool}; std::pmr::synchronized_pool_resource pool{&track2};
If we’d introduce a third tracker in this program, we could also track, when objects allocate and deallocate memory from the syncpool: // track each call, the effect in the sync pool, and the effect in the mono pool: Tracker track1{"keeppool:"}; std::pmr::monotonic_buffer_resource keepAllocatedPool{10000, &track1}; Tracker track2{" syncpool:", &keepAllocatedPool}; std::pmr::synchronized_pool_resource syncPool{&track2}; Tracker track3{" objects:", &syncPool}; ... std::pmr::vector<std::pmr::string> coll{&track3};
After we have introduced standard memory resources and user-defined memory resources one topic remains: How can we make our custom types polymorphic allocator aware so that they like a pmr::string as an element of a pmr container are allocated using its allocator.
pmr/pmrcustomer.hpp #include <string> #include <memory_resource>
// initializing constructor(s): PmrCustomer(std::pmr::string n, allocator_type alloc = {}) : name{std::move(n), alloc} { }
// setters/getters: void setName(std::pmr::string s) { name = std::move(s); } std::pmr::string getName() const { return name; } std::string getNameAsString() const { return std::string{name}; } }; First note that we use a pmr string as member. This not only holds the value (here the name), it also holds the current allocator used: std::pmr::string name; // also used to store the allocatorJosuttis: C++17 2019/02/16 18:57 page 326
Then, we have to specify that this type supports polymorphic allocators, which is simply done by providing a corresponding declaration of type allocator_type: using allocator_type = std::pmr::polymorphic_allocator<char>; The type passed to the polymorphic_allocator doesn’t matter (when it is used the allocator is rebound to the necessary type). You could, for example, also use std::byte there.4 Alternatively, you can also use the allocator_type of the string member: using allocator_type = decltype(name)::allocator_type; Next we define the usual constructor(s) with an additional optional allocator parameter: PmrCustomer(std::pmr::string n, allocator_type alloc = {}) : name{std::move(n), alloc} { } You might think about declaring constructors like this as explicit. At least if you have a default constructor, you should do so to avoid implicit conversions from an allocator to a customer: explicit PmrCustomer(allocator_type alloc = {}) : name{alloc} { } Then, we have to provide the copy and move operations that ask for a specific allocator. This is the main interface of pmr containers to ensure that their elements use the allocator of the container: PmrCustomer(const PmrCustomer& c, allocator_type alloc) : name{c.name, alloc} { } PmrCustomer(PmrCustomer&& c, allocator_type alloc) : name{std::move(c.name), alloc} { } Note that both are not noexcept, because even the move constructor might have to copy a passed customer if the required allocator alloc is not interchangeable. Finally, we implement the necessary setters and getters, which usually are: void setName(std::pmr::string s) { name = std::move(s); } std::pmr::string getName() const { return name; } There is another getter, getNameAsString(), which we provide to cheaply return the name as std::string. We will discuss it later. For the moment, you could also leave it out.
int main() { Tracker tracker; std::pmr::vector<PmrCustomer> coll(&tracker); coll.reserve(100); // allocates with tracker
The same happens if we move the customer into the vector, because the allocator of the vector (the tracker) and is not interchangeable with the allocator of the customer (which uses the default re- source): std::pmr::vector<PmrCustomer> coll(&tracker); ... PmrCustomer c1{"Peter, Paul & Mary"}; // allocates with get_default_resource() ... coll.push_back(std::move(c1)); // copies (allocators not interchangeable) If we would also initialize the customer with the tracker, the move would work: std::pmr::vector<PmrCustomer> coll(&tracker); ... PmrCustomer c1{"Peter, Paul & Mary", &tracker}; // allocates with tracker ... coll.push_back(std::move(c1)); // moves (same allocator) The same is true, if we would not use any tracker at all: std::pmr::vector<PmrCustomer> coll; // allocates with default resource ... PmrCustomer c1{"Peter, Paul & Mary"}; // allocates with default resource ... coll.push_back(std::move(c1)); // moves (same allocator)
We might want to provide additional constructors, but the good thing with not providing them is that programmers are forced to implement the expensive conversion. In addition, if you overload for different string types (std::string and std::pmr::string) you get additional ambiguities (e.g., taking a string_view or a string literal), so that even more overloads are necessary. A getter, anyway, can only return one type (as we can’t overload on different return types only). Thus, we can only provide one getter, which usually should return the “native” type of the API (here, std::pmr::string). That means, if we return a std::pmr::string and need the name as std::string, we again need an explicit conversion: PmrCustomer c4{"Mr. Paul Kalkbrenner"}; // OK: allocates with default resource std::string s1 = c4.getName(); // ERROR: no implicit conversion std::string s2 = std::string{c4.getName()}; // OOPS: two allocations This is not only less convenient, this is also a performance issue, because in the last statement two allocations happen: • First we allocate memory for the return value and • then the conversion from type std::pmr::string to std::string needs another allocation. For this reason, it might be a good idea to provide an additional getter getNameAsString() that directly creates and returns the requested type: std::string s3 = c4.getNameAsString(); // OK: one allocation
27.4 Afternotes Polymorphic allocators were first proposed by Pablo Halpern in. This approach was adopted to become part of the Library Fundamentals TS as proposed by Pablo Halpern in. The approach was adopted with other components for C++17 as proposed by Beman Dawes and Alisdair Meredith in: C++17 2019/02/16 18:57 page 330
330
Chapter 28 new and delete with Over-Aligned Data
Since C++11 you can specify over-aligned types, having a bigger alignment than the default align- ment by using the alignas specifier. For example: struct alignas(32) MyType32 { int i; char c; std::string s[4]; };
1 Some compilers accept and ignore alignment values less than the default alignment with a warning or even silently.
331Josuttis: C++17 2019/02/16 18:57 page 332
2 Compilers/platforms don’t have to support over-aligned data. In that case a request to over-align should not compile. 3 The reason is that the Windows operating systems provide no ability to request aligned storage, so that the calls over-allocate and align manually. As a consequence, support for aligned_alloc() will be unlikely in the near future, because support for the existing Windows platforms will still be required.Josuttis: C++17 2019/02/16 18:57 page 333
• As a global function (different overloads are provided by default, which can be replaced by the programmer). • As type-specific implementations, which can be provided by the programmer and have higher priority than the global overloads. However, this is the first example, where special care has to be taken to deal correctly with the dif- ferent dynamic memory arenas, because when specifying the alignment with the new expression the compiler can’t use the type to know whether and which alignment was requested. The programmer has to specify which delete to call.4 Unfortunately, there is no delete operator, where you can pass an additional argument, you have to call the corresponding operator delete() directly, which means that you have to know, which of the multiple overloads are implemented: In fact, in this example one of the following functions for an object of type T could be called: void T::operator delete(void* ptr, std::size_t size, std::align_val_t align);
4 This is not the first time where the type system is not good enough to call the right implementation of delete. The first example was that it is up to the programmer to ensure that delete[] is called instead of delete if arrays were allocated.Josuttis: C++17 2019/02/16 18:57 page 334
That is, each time you call new for a type T a corresponding call of either a type specific T::operator new() or (if none exists) the global ::operator new() is called: auto p = new T; // tries to call a type-specific operator new() (if any) // of if none tries to call a global ::operator new() The same way each time you call delete for a type T a corresponding call of either a type spe- cific T::operator delete() or the global ::operator delete() is called. If arrays are al- located/deallocated, the corresponding type-specific or global operators operator new[]() and operator delete[]() are called. Before C++17, a requested alignment was not automatically passed to these functions and the default mechanisms allocated dynamic memory without considering the alignment. An over-aligned type always needed its own implementations of operator new() and operator delete() to be correctly aligned on dynamic memory. Even worse, there was no portable way to perform the request for over-aligned dynamic memory. As a consequence, for example, you had to define something along the lines of following: lang/alignednew11.hpp #include <cstddef> // for std::size_t #include <string> #if __STDC_VERSION >= 201112L #include <stdlib.h> // for aligned_alloc() #else #include <malloc.h> // for _aligned_malloc() or memalign() #endif
int main() { auto p = new MyType32; ... delete p; } As written, since C++17, you can skip the overhead to implement operations to allocate/deallo- cate aligned data. The example works well even without defining operator new() and operator delete() for your type: lang/alignednew17.cpp #include <string>Josuttis: C++17 2019/02/16 18:57 page 337
int main() { auto p = new MyType32; // allocates 32-bytes aligned memory since C++17 ... delete p; }
... std::free(p); } However, due to the problem Windows has with aligned_alloc() in practice we need special handling to be portable, then: static void* operator new (std::size_t size, std::align_val_t align) { ... #ifdef _MSC_VER // Windows-specific API: return aligned_malloc(size, static_cast<size_t>(align)); #else // standard C++17 API: return std::aligned_alloc(static_cast<size_t>(align), size); #endif }
5 In C++20 the default implementations of operator new() will have these attribute.Josuttis: C++17 2019/02/16 18:57 page 340
It is rare but (as you can see here) possible to call operator new() directly (not using a new expres- sion). With [[nodiscard]] compilers will detect then, if the caller forgot to use the return value, which would result in a memory leak.
If the default alignment is 32 (or less and the code compiles), the expression new MyType32 will call the first overload of operator new() with only the size parameter, so that the output is something like:6 MyType32::new() with size 128 If the default alignment is less than 32, the second overload of operator new() for two arguments will be called, so that the output becomes something like: MyType32::new() with size 128 and alignment 32
Type-Specific Fallbacks If the std::align_val_t overloads are not provided for a type-specific operator new(), the overloads without this argument are used as fallbacks. Thus, a class that does only provide operator new() overloads supported before C++17 still compiles and has the same behavior (note that for the global operator new() this is not the case): struct NonalignedNewOnly { ... static void* operator new (std::size_t size) { ... } ... // no operator new(std::size_t, std::align_val_t align) };
6 The size might vary depending on how big an int and a std::string is on the platform.Josuttis: C++17 2019/02/16 18:57 page 342
int main() { struct alignas(64) S { int i; };
7 There might be additional output for other initializations allocating memory on the heap. 8 Some compilers warn before C++17 about calling new for over-aligned data, because as introduced the alignment was not handled properly before C++17.Josuttis: C++17 2019/02/16 18:57 page 345
Note that this problem only applies to the global operator new(). If the type specific operator new() is defined for S, the operator is still also used as a fallback for over-aligned data so that such a program behaves as before C++17. Note also that printf() is used intentionally here to avoid that an output to std::cout allocates memory while we are allocating memory, which might result in nasty errors (core dumps at best).
class TrackNew { private: static inline int numMalloc = 0; // num malloc calls static inline size_t sumSize = 0; // bytes allocated so far static inline bool doTrace = false; // tracing enabled static inline bool inNew = false; // don’t track output inside new overloads public: static void reset() { // reset new/memory counters numMalloc = 0; sumSize = 0; }
++numMalloc; sumSize += size; void* p; if (align == 0) { p = std::malloc(size); } else { #ifdef _MSC_VER p = _aligned_malloc(size, align); // Windows API #else p = std::aligned_alloc(align, size); // C++17 API #endif } if (doTrace) { // DON’T use std::cout here because it might allocate memory // while we are allocating memory (core dump at best) printf("#%d %s ", numMalloc, call); printf("(%zu bytes, ", size); if (align > 0) { printf("%zu-bytes aligned) ", align); } else { printf("def-aligned) "); } printf("=> %p (total: %zu Bytes)\n", (void*)p, sumSize); } return p; }
[[nodiscard]] void* operator new (std::size_t size) { return TrackNew::allocate(size, 0, "::new"); }
[[nodiscard]] void* operator new (std::size_t size, std::align_val_t align) { return TrackNew::allocate(size, static_cast<size_t>(align), "::new aligned"); }Josuttis: C++17 2019/02/16 18:57 page 347
[[nodiscard]] void* operator new[] (std::size_t size) { return TrackNew::allocate(size, 0, "::new[]"); }
[[nodiscard]] void* operator new[] (std::size_t size, std::align_val_t align) { return TrackNew::allocate(size, static_cast<size_t>(align), "::new[] aligned"); }
#endif // TRACKNEW_HPP Consider using this header file in the following CPP file: lang/tracknew.cpp #include "tracknew.hpp" #include <iostream> #include <string>
int main() { TrackNew::reset();Josuttis: C++17 2019/02/16 18:57 page 348
TrackNew::trace(true); std::string s = "string value with 26 chars"; auto p1 = new std::string{"an initial value with even 35 chars"}; auto p2 = new(std::align_val_t{64}) std::string[4]; auto p3 = new std::string[4] { "7 chars", "x", "or 11 chars", "a string value with 28 chars" }; TrackNew::status(); ... delete p1; delete[] p2; delete[] p3; } The output depends on when the tracking is initialized and how many allocations are performed for other initializations. But it should contain something like the following lines: #1 ::new (27 bytes, def-aligned) => 0x8002ccc0 (total: 27 Bytes) #2 ::new (24 bytes, def-aligned) => 0x8004cd28 (total: 51 Bytes) #3 ::new (36 bytes, def-aligned) => 0x8004cd48 (total: 87 Bytes) #4 ::new[] aligned (100 bytes, 64-bytes aligned) => 0x8004cd80 (total: 187 Bytes) #5 ::new[] (100 bytes, def-aligned) => 0x8004cde8 (total: 287 Bytes) #6 ::new (29 bytes, def-aligned) => 0x8004ce50 (total: 316 Bytes) 6 allocations for 316 bytes The first output is, for example, to initialize the memory for the value of s. Note that the value might be larger depending on the allocation strategy of the std::string class. The next two lines written are cause by the second request: auto p1 = new std::string{"an initial value with even 35 chars"}; It allocates 24 bytes for the core string object plus 36 bytes for the initial value of the string (again, the values might differ). The third call requests an 64-bytes array of 4 strings. The final call again performs two allocations: one for the array and one for the initial value of the last string. Yes, only for the last string because implementations of the library typically use the smal- l/short string optimization (SSO), which stores strings usually up to 15 characters in data members instead of allocating heap memory at all. Other implementations might perform 5 allocations here.
28.5 Afternotes Alignment for heap/dynamic memory allocation was first proposed by Clark Nelson in https:// wg21.link/n3396. The finally accepted wording was formulated by Clark Nelson in https:// wg21.link/p0035r4.Josuttis: C++17 2019/02/16 18:57 page 349
Chapter 29 Other Library Improvements for Experts
There are some further improvements to the C++ standard library for experts such as foundation library programmers, which are described in this chapter.
349Josuttis: C++17 2019/02/16 18:57 page 350
from_chars() std::from_chars() converts a given character sequence to a numeric value. For example: #include <charconv>
1 Note that the accepted wording for C++17 first added them to <utility>, which was changed via a defect report after C++17 was standardized, because this created circular dependencies (see p0682r1). 2 Note that the accepted wording for C++17 declared ec as a std::error_code, which was changed via a defect report after C++17 was standardized (see).Josuttis: C++17 2019/02/16 18:57 page 351
29.1 Low-Level Conversions between Character Sequences and Numeric Values 351
to_chars() std::to_chars() converts numeric values to a given character sequence. For example: #include <charconv>
3 Note that the accepted wording for C++17 declared ec as a std::error_code which was also changed via a defect report after C++17 was standardized (see).Josuttis: C++17 2019/02/16 18:57 page 352
29.1 Low-Level Conversions between Character Sequences and Numeric Values 353
int main() { std::vector<double> coll{0.1, 0.3, 0.00001};
// check round-trip: d2str2d(sum1); d2str2d(sum2); } We accumulate two small floating-point sequences in different order. sum1 is the sum accumulating from left to right, while sum2 is the sum accumulating from right to left (using reverse iterators). As a result the values look the same but aren’t: sum1: 0.40001 sum1: 0.40001 equal: false sum1: 0.40001000000000003221 sum1: 0.40000999999999997669 When passing the values to d2str2d(), you can see that the values are stored as different character sequences with the necessary granularity: in: 0.40001000000000003221 str: 0.40001000000000003 out: 0.40001000000000003221
in: 0.40000999999999997669 str: 0.40001 out: 0.40000999999999997669 Again note that the granularity (and therefore the necessary size of the character sequence) depends on the platform. The round-trip support shall work for all floating-point numbers including NAN and INFINITY. For example, passing INFINITY to d2st2d() should have the following effect: value1: inf str: inf value2: inf However, note that the assertion in d2str2d() will fail for NAN because it never compares to any- thing, including itself.
29.2 Afternotes Low-Level Conversions between character sequences and numeric values were first proposed by Jens Maurer in. The finally accepted wording was formulated by JensJosuttis: C++17 2019/02/16 18:57 page 355
356
Glossary
This glossary is a short description of the most important non-trivial technical terms that are used in this book.
B bitmask type An integral or scoped enumeration type (enum class), for which different values represent dif- ferent bits. If it is a scoped enumeration type, only the bit operators are defined and you need a static_cast<>() to use its integral value or use it as a Boolean value.
F full specialization An alternative definition for a (primary) template, which no longer depends on any template para- meter.
I incomplete type A class that is declared but not defined, an array of unknown size, an enumeration type without the underlying type defined, void (optionally with const and/or volatile), or an array of incomplete element type.
357Josuttis: C++17 2019/02/16 18:57 page 358
358 Glossary
P partial specialization An alternative definition for a (primary) template, which still depends on one or more template para- meters.
S small/short string optimization (SSO) An approach to save allocating memory for short strings by always reserving memory for a certain number of characters. A typical value by standard library implementations is to always reserve 16 bytes of memory so that the string can have 15 characters (plus 1 byte for the null terminator) without allocating memory. This makes all strings objects larger but usually saves a lot of running time, because in practice strings are often shorter than 16 characters and allocating memory on the heap is a pretty expensive operation.
V variable template A templified variable. It allows us to define variables or static members by substituting the template parameters by specific types or values.
variadic template A template with a template parameter that represents an arbitrary number of types or values.Josuttis: C++17 2019/02/16 18:57 page 359
Index
359Josuttis: C++17 2019/02/16 18:57 page 360
360 Index C
Index D 361
362 Index G | https://fr.scribd.com/document/403376808/cpp17-the-complete-reference-pdf | CC-MAIN-2019-43 | refinedweb | 43,108 | 51.18 |
On Mon, Sep 02, 2002 at 07:05:40PM +1000, Matthew Palmer wrote: > I? Adam Conrad and I have begun hammering out a PHP mini-policy (not yet ready for public consumption) that calls for all PHP classes to be shipped in /usr/share/php. Yes, PEAR obviously violates this at present -- so transitioning to the new scheme will require shipping a default include path that looks in both /usr/share/php and /usr/share/pear. In discussing this, we concluded that most PHP classes currently use their own directories as a means of eliminating namespace collisions; however, this is clearly inappropriate, as it pushes the burden of resolving collisions on the user, rather than on the maintainer. PHP's handling of classes should be the same as that of other scripting languages, such as perl and python. >. We had ruled out /usr/share/php4, on the grounds that many, if not most, PHP classes are not specific to php4; some will work with php3, and many should be forward-compatible with php5. As far as using /usr/share/php4/<package name>, I'm not aware of any such requirement for other scripting languages. I think it's reasonable to allow PHP packages to install their classes in the manner which is most convenient, and let package conflict resolution take care of the rest. PEAR already provides us a structure for the namespace, which we ought to take advantage of. Steve Langasek postmodern programmer
Attachment:
pgpSIxNKWpRgK.pgp
Description: PGP signature | http://lists.debian.org/debian-devel/2002/09/msg00109.html | crawl-002 | refinedweb | 250 | 58.62 |
Football Game – Team Winner Problem
In this CPP tutorial, we are going to discuss a football game problem similar type of question has been recently asked in a short contest on Codechef.
Football Game Team winner problem in C++
In a football league, there are 4 teams in a group. Each team plays match with all other teams in its group and one match with other teams in other groups. So, in total, all teams of a group will play 12 matches.
You are provided with team name number of goal it scores in a match his opponent team score and its name like that you are provided with 12 matches details. You have to print the winner of the qualifying league.
Winner of the match is decided by the team which wins match will score 3 points so, the team with most score will be the winner of qualifying league, if there is a tie between two teams in a match then both team will get 1 point and if two teams have the same points in total then team with more total goal difference will be winner. Where goal difference for a team is (sum of number of goals made by it – sum of number of goals it received ).
Algorithm: Football game problem
- Take two unordered maps goal_dif and win of (string,int) where goal_dif will store the goal difference of a team and win will score the points scored by each team.
- Clear both the map in initial using map.clear().
- Now, store the value in map for all 12 matches.
- If in a match goal made by team is more than goal score by it then add value 3 in win map of that team otherwise if there is a tie add value 1 in map for both teams.
- Also, store the value of goal difference for each team in goal_dif map.
- Now, iterate through the map and check for team with highest points if there is two team with highest points then then check for team with more goal difference.
What is map in C++?
Maps are associative container which stores the elements in a mapped fashion means it stores elements in (key,mapped value) pairs, where no two mapped values have same key.
Some basic functions of map are:-
1.) map.clear() :- clears all values from the map.
2.) map.begin():– returns an iterator to the first value in map.
3.) map.end():- returns an element to theoretical last element that follows the last element in map.
4.) map.size():- returns the number of elements in the map.
5.) map.erase(const str):- erase the value str from the map.
6.) map.insert(pair(key,value)):- insert a new element in map with (key,value) pair.
C++ code implementation of football team winner) using namespace std; typedef long long ll; int a[100005]; void solve() { int home_goals, away_goals, highest_wins, i; string home_team, away_team, vs, highest_team; unordered_map<string,int>win,goal_dif; unordered_map<string,int>::iterator itr; win.clear(); goal_dif.clear(); for(i=0;i<12;i++) { cin>>home_team>>home_goals>>vs>>away_team>>away_goals; if(home_goals>away_goals) { win[home_team]+=3; } else if(home_goals<away_goals) { win[away_team]+=3; } else { win[home_team]+=1; win[away_team]+=1; } goal_dif[home_team]=home_goals-away_goals; goal_dif[away_team]=away_goals-home_goals; } highest_wins=-1; for(itr=win.begin();itr!=win.end();itr++) { if(highest_wins==-1 || (highest_wins<itr->second) || ((highest_wins==itr->second) && (goal_dif[highest_team] <goal_dif[itr->first]))) { highest_team=itr->first; highest_wins=itr->second; } } cout<<highest_team<<endl; } int main() { Fast; solve(); }
Example:-
manutd 8 vs. 2 arsenal
lyon
1 vs. 2 manutd
fcbarca 0 vs. 0 lyon
fcbarca 5 vs. 1 arsenal
manutd 3 vs. 1 fcbarca
arsenal 6 vs. 0 lyon
arsenal 0 vs. 0 manutd
manutd 4 vs. 2 lyon
arsenal 2 vs. 2 fcbarca
lyon 0 vs. 3 fcbarca
lyon 1 vs. 0 arsenal
fcbarca 0 vs. 1 manutd
Answer
manutd
You may also learn, | https://www.codespeedy.com/football-game-team-winner-problem-cpp/ | CC-MAIN-2020-24 | refinedweb | 649 | 64.61 |
Opened 7 years ago
Closed 4 years ago
#11303 closed Bug (fixed)
changed_data works incorrectly with BooleanField and NullBooleanField
Description
if you are using BooleanField with attribute show_hidden_initial=True, the form will show inverse changed_data. If field is not changed it will show as changed, and added to form.changed_data.
Attachments (4)
Change History (19)
Changed 7 years ago by oduvan
comment:1 Changed 7 years ago by oduvan
- milestone changed from 1.0.3 to 1.1
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Version changed from SVN to 1.1-beta-1
comment:2 follow-up: ↓ 8 Changed 7 years ago by russellm
- milestone 1.1 deleted
- Needs tests set
- Patch needs improvement set
comment:3 Changed 7 years ago by oduvan
this is my little example, for better describing.
from django import forms from django.shortcuts import render_to_response class InForm(forms.Form): name = forms.CharField(show_hidden_initial=True) is_a = forms.BooleanField(show_hidden_initial=True, required=False) def index(request): if request.method =='POST': form = InForm(request.POST) if form.is_valid(): print form.changed_data else: form = InForm(initial = {'name':'HI','is_a':False}) return render_to_response('index.html',{'text':'HI ALL','form':form})
if just submit displayed form, "print form.changed_data" shows that the 'is_a' in changed_data array. But it wasn't change.
As I see, the problem in clean data of initial hidden input. So, I prepare a new patch. And I want to get your diagnosis.
Changed 7 years ago by oduvan
comment:4 Changed 7 years ago by oduvan
- Triage Stage changed from Unreviewed to Ready for checkin
comment:5 Changed 7 years ago by Alex
- Triage Stage changed from Ready for checkin to Accepted
Please do not mark your own patches as RFC, further a ticket with "Needs tests" cannot possibly be RFC.
comment:6 Changed 7 years ago by oduvan
ok. sorry.
Changed 7 years ago by punteney
updated patch and test case
comment:7 Changed 7 years ago by punteney
- Needs tests unset
- Patch needs improvement unset
Updated the patch to do the fix at the CheckBoxInput widget _has_changed function level as it was a problem with the string 'False" being evaluated as a boolean and returning as True. Also added a test to check for this issue
comment:8 in reply to: ↑ 2 Changed 7 years ago by margieroginski
I just want to mention that I have also encountered this bug and I don't think it is just a corner case in the admin as described by Russelm. I believe this is a general bug with the case where a BooleanField has show_hidden_initial set to True. When the HiddenInput widget for this field is rendered based on POST data and the POST data contains no data for the field (which occurs when the check box is unchecked), the HiddenInput is rendered with the string "False" as its value. In this case. when CheckboxInput's _has_changed() method is run to compare initial with data, intial is set to "False" and data is set to False (the python False value, not the string "False"). In the released code _has_changed() looks like this:
return bool(initial) != bool(data)
With initial set to "False" and data set to python's False value, this returns True, which is incorrect. This is a situation where the user has not changed the value of the field, yet due to the rendering of the HiddenInput with the string "False", has_changed() will return True.
The attached patch does seem to work for me (the one called "show_hidden_initial_boolean.diff").
Margie
comment:9 Changed 7 years ago by margieroginski
I've just encountered this problem again, but in a slightly different way. I have a BooleanField in my model and it is has value False. When I create the corresponding BooleanField in my modelForm, the initial value sent out in the hidden input is '0'. In other words, request.POST contains
u'initial-requireResultOnClose': [u'0']
If I then check the checkbox, the field comes back with 'on', like this:
u'requireResultOnClose': [u'on']
The _has_changed() method for CheckBoxInput() returns True when it compares u'0' and u'on' due to the fact that bool(u'0') is True, but it should really return False.
I have changed _has_changed() for CheckBoxInput() to look like this, which seems to work form my cases:
def _has_changed(self, initial, data):
# Sometimes data or initial could be None or u which should be the
# same thing as False.
if initial in ('False', '0'):
initial = False
return bool(initial) != bool(data)
In fields.py, BooleanField's clean() method does something similar, so I'm hoping this is the right direction. In general it seems very difficult to compare all these values such as False, 'False', u'0', u'1', True, 'True' and get it right. The hidden initial stuff adds an extra complication due to the fact that the hidden input is rendered through the hidden_widget() widget, which uses different values than those used by the CheckBoxInput. IE, in the CheckBoxInput widget when data is posted, the value 'on' indicates it is checked and no input at all in the post data indicates it is unchecked. But when using the CheckBoxInput's hidden_widget() widget, you see values u'0' and u'1', and it seems that these are really not taken into account in the current code base, probably due to the fact that few people are using the show_hidden_initial functionality.
comment:10 Changed 6 years ago by semenov
- Summary changed from changed_data wrong work with BooleanField to changed_data works incorrectly with BooleanField
- Version changed from 1.1-beta-1 to 1.1
I agree with @margieroginski. The current show_hidden_input implementation is just broken when it comes to booleans/checkboxes. @russellm, this isn't an "edge case annoyance...in the admin" at all. The show_hidden_input logic is very useful when you allow to edit a single (shared) model instance to many users simultaneously; they won't be posting "old" data and rewrite all changes made by the other collaborators. You shouldn't underestimate the importance, as this is actually a very common use case - that would be useful even in this Django bug tracker, since that's not a rare situation when someone's changes to Summary or Triage Stage is erroneously "reverted" by the next poster - a perfect example where hidden inputs should have been added.
I found this ticket because I first developed my own generic implementation of exactly the same logic - I was saving pristine values along with the form and then checking them on submit. Then I discovered the show_hidden_input parameter and rewrote my app to use it instead. The funny thing is that in my implementation, I didn't have the discussed bug since I was testing on a form which had BooleanFields from the very beginning so I specifically added a workaround. You can imagine my disappointment when my code suddenly broke when switched to the "original" Django solution.
In general it seems very difficult to compare all these values such as False, 'False', u'0', u'1', True, 'True' and get it right.
I can't agree with that. Overestimating the complexity of the problem is no better than underestimating its importance :)
There's no such things as 'False'. How did you get that? When a python value is rendered, it gets through force_unicode() and False can only become '0'. Therefore, the logic is very simple:
1) no data, '' and '0' is False
2) all the rest is True
The bug can be traced to this code in forms.Form:
def _get_changed_data(self): # ... for name, field in self.fields.items(): # ... if field.widget._has_changed(initial_value, data_value): self._changed_data.append(name)
The initial_data here is taken from hidden_widget (which is filled with '0' for False when a form is first rendered) which is then casted to True by bool() inside CheckboxInput._has_changed.
Therefore, the two possible fixes could be:
1) make CheckboxInput._has_changed smarter and treat initial='0' as False
2) add a custom BooleanField.hidden_widget (so called BooleanHiddenInput) which would store '' for False instead of '0'
I prefer the second approach as it allows to get rid of hard coded comparisons against string '0'.
comment:11 Changed 6 years ago by semenov
- Needs tests set
- Patch needs improvement set
- Summary changed from changed_data works incorrectly with BooleanField to changed_data works incorrectly with BooleanField and NullBooleanField
Attached a patch against Django 1.1.1 release which adds a HiddenBooleanInput widget and sets it as a hidden_widget for BooleanField. That did the trick for me and I attached it because it might be useful for the other people, but this is of course far from a final fix. I see the following concerns:
1) This is only tested with ModelForm and models.BooleanField, i.e. when the initial form data can be 0 and 1 (not False and True) - which is weird by the way (why would a models.BooleanField return integers instead of booleans?)
2) NullBooleanField appears to be desperately broken in the similar manner. At least, NullBooleanSelect._has_changed has a simple logic of comparing bools (copy-pasted from CheckboxInput), without any regards to the fact that there might be None's as well.
Changed 6 years ago by semenov
comment:12 Changed 5 years ago by julien
- Severity set to Normal
- Type set to Bug
comment:13 Changed 4 years ago by aaugustin
- UI/UX unset
Change UI/UX from NULL to False.
comment:14 Changed 4 years ago by aaugustin
- Easy pickings unset
Change Easy pickings from NULL to False.
comment:15 Changed 4 years ago by claudep
- Resolution set to fixed
- Status changed from new to closed
I just committed [d11038acb2ea2f59a1d00a41b553f] which should have fixed this issue. I didn't handle the '0' case, as I think that we should stick with real boolean value here.
NullBooleanSelect issue should have been treated in #11860.
I don't see why this needs to be on the v1.1 milestone at this late stage in the development cycle.
As I understand the problem, worst case, it will be an edge case annoyance. The only place I can see this being a problem is with a BooleanField in an inline model in the admin that has a callable initial value. Of course, it's possible that there might be an easier way to stimulate this problem, but the report doesn't provide any details on what that way might be. Sample models/actions/code would be extraordinarily helpful.
Also - I'm not particularly inspired by the fact that the proposed fix breaks any number of tests in the Django test suite. | https://code.djangoproject.com/ticket/11303 | CC-MAIN-2016-30 | refinedweb | 1,770 | 62.07 |
There are many scenarios where it's useful to integrate a desktop application with the web browser. Given that most users spend the majority of their time today surfing the web in their browser of choice, it can make sense to provide some sort of integration with your desktop application. Often, this will be as simple as providing a way to export the current URL or a selected block of text to your application. For this article, I've created a very simple application that uses text to speech to speak out loud the currently selected block of text in your web browser.
Internet Explorer provides many hooks for integrating application logic into the browser, the most popular being support for adding custom toolbars. There are great articles explaining how to do this on Code Project, such as this Win32 article and this .NET article. You can also create toolbars for Firefox using their Chrome plug-in architecture which uses XML for the UI layout and JavaScript for the application logic (see CodeProject article).
What about the other browsers, Google Chrome, Safari, and Opera? Given there is no common plug-in architecture used by all browsers, you can see a huge development effort is required to provide a separate toolbar implementation for each browser.
It would be nice to be able to write one toolbar that could be used across all browsers. This is not possible today, but you can achieve almost the same effect using Bookmarklets.
A bookmarklet is special URL that will run a JavaScript application when clicked. The JavaScript will execute in the context of the current page. Like any other URL, it can be bookmarked and added to your Favourites menu or placed on a Favourites toolbar. Here is a simple example of a bookmarklet:
javascript:alert('Hello World.');
Here is a slightly more complex example that will display the currently selected text in a message box:
<a href="javascript:var q = ''; if (window.getSelection)
q = window.getSelection().toString(); else if (document.getSelection)
q = document.getSelection(); else if (document.selection)
q = document.selection.createRange().text; if(q.length > 0)
alert(q); else alert('You must select some text first');">Show Selected Text</a>
Select a block of text on the page and click the above link. The text will be shown in a message box.
Now, drag the bookmarklet and drop it on your Favourites toolbar (for IE, you need to right click, select Add to Favourites, and then create it in the Favourites Bar). Navigate to a new page, select a block of text, and click the Show Selected Text button. Once again, the selected text will be shown in a message box.
You can see the potential of bookmarklets. You can create a bookmarklet for each command of your application and display them on a web page. The user can then select the commands they want to use and add them to their Favourites (either the toolbar or menu).
The downside is it's slightly more effort to install than a single toolbar, but on the upside, it gives the user a lot of flexibility. They need only choose the commands they're interested in and can choose whether they want them accessible from a toolbar or menu.
From a developer's perspective, bookmarklets are great as they're supported by all the major browsers. The only thing you need to worry about is making sure your JavaScript code handles differences in browser implementations, something that is well documented and understood these days (although still a right pain).
Bookmarklets allow you to execute an arbitrary block of JavaScript code at the click of a button, but how do you use this to communicate with a desktop application?
The answer is to build a web server into your desktop application and issue commands from your JavaScript code using HTTP requests.
Now, before you baulk at the idea of building a web server into your application, it's actually very simple. You don't need a complete web server implementation. You just need to be able to process simple HTTP GET requests. A basic implementation is as follows.
The .NET framework 2.0 has an HttpListener class and associated HttpListenerRequest and HttpListenerResponse classes that allow you to implement the above in a few lines of code.
HttpListener
HttpListenerRequest
HttpListenerResponse
On the browser, you need a way of issuing HTTP requests from your JavaScript code. There are a number of ways of issuing HTTP requests from JavaScript.
The simplest is to write a new URL into the document.location property. This will cause the browser to navigate to the new location. However, this is not what we want. We don't want to direct the user to a new page when they click one of our bookmarklets. Instead, we just want to issue a command to our application while remaining on the same page.
document.location
This sounds like a job for AJAX and the HttpXmlRequest. AJAX provides a convenient means of issuing requests to a web server in the background without affecting the current page. However, there is one important restriction placed on the HttpXmlRequest called the same domain origin policy. Browsers restrict HttpXmlRequests to the same domain as that used to serve the current page. For example, if you are viewing a page from codeproject.com, you can only issue HttpXmlRequests to codeproject.com. A request to another domain (e.g., google.com) will be blocked by the browser. This is an important security measure that ensures malicious scripts cannot send information to a completely different server behind the scenes without your knowledge.
HttpXmlRequest
This restriction means that we cannot use a HttpXmlRequest to communicate with our desktop application. Remember that JavaScript bookmarklets are executed in the context of the current page. We need to be able to send a request from any domain (e.g. codeproject.com) to our desktop application which will be in the localhost domain.
In order to overcome this problem, I turned to Google for inspiration. Google needs to be able to do precisely this in order to gather analytics information for a site. If you're not familiar with Google analytics, it can be used to gather a multitude of information about the visitors to your web site, such as the number of visitors, where they came from, and the pages on your site they visit. All this information is collected, transferred back to Google, and appears in various reports in your analytics account.
To add tracking to your site, you simply add a call to the Google analytics JavaScript at the bottom of every page of your site.
Whenever a visitor lands on your page, the JavaScript runs and the visitor details are sent back to your Google analytics account.
The question is how does Google do this? Surely, they can't use an HttpXmlRequest as it would break the same domain origin policy? They don't. Instead, they use what can only be described as a very clever technique.
The JavaScript Image class is a very simple class that can be used to asynchronously load an image. To request an image, you simply set the source property to the URL of the image. If the image loads successfully, the onload() method is called. If an error occurs, the onerror() method is called. Unlike the HttpXmlRequest, there is no same domain origin policy. The source image can be located on any server. It doesn't need to be hosted on the same site as the current page.
Image
source
onload()
onerror()
We can use this behaviour to send arbitrary requests to our desktop application (or any domain for that matter) if we realize the source URL can contain any information, including a querystring. The only requirement is that it returns an image. Here is an example URL:
<a href=""></a>
We can easily map this URL to the following command in our application.
public void speaktext(string text);
In order to ensure the request completes without error, a 1x1 pixel GIF image is returned. This image is never actually shown to the user. A tiny image is used to minimize the number of bytes being transmitted.
The most important point to realize is all communication is one way, from the browser to the desktop application. There is no way of sending information from the desktop application back to the browser. However, for many applications, this is not a problem.
Google uses the JavaScript Image technique to send visitor information to your pages (hosted on yourdomain.com) back to your Google analytics account (hosted on google.com).
You need to be aware that URLs have a maximum length that varies from browser to browser (around 2K - check). This restricts the amount of information you can send in a single request. If you need to send a large amount of information, you'll need to break it up into smaller chunks and send multiple requests. The sample application, BrowserSpeak, uses this technique to speak arbitrarily large blocks of text.
JavaScript will automatically encode a URL you pass to Image.src as UTF-8. However, when passing arbitrary text as part of a URL, you will need to escape the '&' and '=' characters. These characters are used to delimit the name/value pairs (or arguments) that are passed in the querystring portion of the URL. This can be done using the JavaScript escape() function.
Image.src
escape()
Web browsers will cache images (as well as many other resources) locally to avoid making multiple requests back to the server for the same resource. This behaviour is disastrous for our application. The first command will make it through to our desktop application, and the browser will cache dummy.gif locally. Subsequent requests will never reach our desktop application as they can be satisfied from the local cache.
There are a couple of solutions to this problem. One answer is to set the cache expiry directives in the HTTP response to instruct the browser never to cache the result.
The other approach, which is used for the BrowserSpeak application, is to ensure every request has a unique URL. This is done by appending a timestamp containing the current date and time. For example:
var request = "http://" + server + "/" +
command + "/dummy.gif" + args +
"×tamp=" + new Date().getTime();
It's now time to put all this theory into practice and create a sample application that has some real world use.
BrowserSpeak is a C# application that will speak the text on a web page out loud. It can be used when you are tired of reading large passages of text from the screen. It uses the System.Speech.Synthesis component found in the .NET Framework 3.0 for the text to speech functionality.
System.Speech.Synthesis
BrowserSpeak provides the following commands, available through its web interface and through its UI.
It also provides a BufferText command available from the web interface. This command is used to send a block of text from the web browser to the desktop application. It splits the text into 1500 byte chunks so it's not limited by the maximum size of a URL. It's used by the Speak Selected bookmarklet to transfer the selected text to the BrowserSpeak application prior to speaking.
BrowserSpeak uses the following bookmarklets (drag these onto your Favourites bar to use from your browser):
The Speak Selected command is the most complex and also the most interesting. It's listed below:
// A bookmarklet to send a speaktext command to the BrowserSpeak application.
var server = "localhost:60024";
// Change the port number for your app to something unique.
var maxreqlength = 1500;
// This is a conservative limit that should work with all browsers.
var selectedText = _getSelectedText();
if(selectedText)
{
_bufferText(escape(selectedText));
_speakText();
}
void 0;
// Return from bookmarklet, ensuring no result is displayed.
function _getSelectedText()
{
// Get the current text selection using
// a cross-browser compatible technique.
if (window.getSelection)
return window.getSelection().toString();
else if (document.getSelection)
return document.getSelection();
else if (document.selection)
return document.selection.createRange().text;
return null;
}
function _formatCommand(command, args)
{
// Add a timestamp to ensure the URL is always unique and hence
// will never be cached by the browser.
return "http://" + server + "/" + command +
"/dummy.gif" + args +
"×tamp=" + new Date().getTime();
}
function _speakText()
{
var image = new Image(1,1);
image.onerror = function() { _showerror(); };
image.src = _formatCommand("speaktext", "?source=" + document.URL);
}
function _bufferText(text)
{
var clearExisting = "true";
var reqs = Math.floor((text.length + maxreqlength - 1) / maxreqlength);
for(var i = 0; i < reqs; i++)
{
var start = i * maxreqlength;
var end = Math.min(text.length, start + maxreqlength);
var image = new Image(1,1);
image.onerror = function() _showerror(); };
image.src = _formatCommand("buffertext",
"?totalreqs=" + reqs + "&req=" + (i + 1) +
"&text=" + text.substring(start, end) +
"&clear=" + clearExisting);
clearExisting = "false";
}
}
function _showerror()
{
// Display the most likely reason for an error
alert("BrowserSpeak is not running. You must start BrowserSpeak first.");
}
Most of the code is self-explanatory. However, it's important to explain the behaviour of the _bufferText() loop. If the text being sent is greater than 1500 bytes, then multiple requests will be made. Remember that as far as the browser is concerned, it's requesting an image. Modern browsers will issue multiple image requests in parallel. This will cause multiple buffertext commands to be issued in parallel. Not only that, it's quite possible the requests will arrive out of order at the BrowserSpeak desktop application. Therefore, every request includes the parameters req (the request number) and totalreqs (the total number of requests). This allows the BrowserSpeak application to reassemble the text into the correct order.
_bufferText()
req
totalreqs
The code for a bookmarklet must be formatted into a single line. For very small applications, this is not a problem. However, when you start to develop larger, more complex applications, you will want to develop your code over multiple lines, with plenty of whitespace and comments. I've found that using a JavaScript minifier, in particular the free YUI Compressor, is a great way of turning a normal chunk of JavaScript into a single line suitable for use in a bookmarklet. Ideally, you'd add this step into your automated build process.
The main application-specific logic lives in the MainForm class. First, it starts an HttpCommandDispatcher instance in the constructor, responsible for receiving and dispatching HTTP commands (sent from the bookmarklets). The MainForm class then listens for various events and updates the UI to reflect its current state.
MainForm
HttpCommandDispatcher
SpeechController
TextBuffer
BufferTextCommand
TextBox
HttpCommandDispatcher.RequestReceived
The HttpCommandDispatcher listens for HTTP requests using the HttpListener class found in System.Net. When a request is received, it extracts the command from the URL, looks up the appropriate HttpCommand, and calls the HttpCommand.Execute() method. It will also send a response with a dummy.gif image (this is preloaded and stored in a byte[] array).
System.Net
HttpCommand
HttpCommand.Execute()
byte[]
A word about text encoding and extracting arguments from the URL. The HttpListenerRequest has a QueryString property that is a name/value collection containing the arguments received in the querystring portion of the URL. Unfortunately, I found you couldn't use this property as the argument values are not correctly decoded from their UTF-8 encoding. Instead, I parse the RawUrl property manually and call the HttpUtility.DecodeUrl() method on each argument value. This correctly handles the UTF-8 encoded strings we receive from JavaScript.
QueryString
RawUrl
HttpUtility.DecodeUrl()
You will probably recognize the Command pattern. You must derive a class from HttpCommand for every command you wish to make available through the HTTP interface. Each command must be added using the HttpCommandDispatcher.AddCommand() method.
HttpCommandDispatcher.AddCommand()
An abstract TextCommand is provided for use by commands that need to receive large amounts of text from the browser (e.g. the SpeakTextCommand). A TextCommand will listen to the TextBuffer and call the abstract TextAvailable method whenever new text arrives. Derived classes need to override this method and execute their operation whenever this method is called. This handles the case where the command arrives from the browser before all the text the command operates on has arrived.
TextCommand
SpeakTextCommand
TextAvailable
One additional piece of functionality that's provided but not actually used in the sample application is the ImageLocator class. This class will take an HTTP request for an image, look up the appropriate image from the application's resources, and return an image in the requested format. For example, you can view the icon used for the About button using the following URL:
ImageLocator
<a href=""></a>
The classes described above live in the HttpServer namespace and are pretty much decoupled from the BrowserSpeak application. You should be able to lift these out and drop them into your own application without change.
HttpServer
If you try and run BrowserSpeak on Vista, you will get an Access Denied exception when you try to start the HttpListener. Vista doesn't let standard users register and listen for requests to a specific URL. You could run your application as Administrator, but a better approach is to grant all users permission to register and listen on your application's URL. That way, you can run your application as a standard user. You can do this using the netsh utility. To grant BrowserSpeak's URL user permissions, execute the following command from a command prompt running as Administrator.
netsh http add urlacl url= user=BUILTIN\Users listen=yes
This setting is persistent and will survive reboots. When you deploy your application, you should execute this command as part of your installer.
The text to speech functionality used by the application is found in the SpeechController class. Thanks to the functionality provided in the System.Speech.Synthesis namespace found in .NET 3.0, this class does almost nothing. It merely delegates through to an instance of the Microsoft SpeechSynthesizer class. If you wanted to remove the dependency on .NET 3.0, you could reimplement the SpeechController class and use COM-interop to access the native Microsoft Speech APIs (SAPI).
SpeechSynthesizer
I hope I've demonstrated the power of bookmarklets in this article and given you some ideas on how to provide useful integration between the web browser and a desktop application.
There are a couple of things I couldn't get working to my satisfaction using this technique:
<link rel="shortcut icon" href="speaktext.ico" type="image/vnd.microsoft.icon"/>
Unfortunately, the browsers don't seem to use the favicon for bookmarklets.
I've made use of this technique in the latest version of my commercial text to speech application, Text2Go. I've also used a variation of this technique to add a menu of JavaScript bookmarklets in Internet Explorer 8's Accelerator preview. | http://www.codeproject.com/Articles/36517/Communicating-from-the-Browser-to-a-Desktop-Applic?fid=1540842&df=90&mpp=10&sort=Position&spc=None&tid=3075078 | CC-MAIN-2016-18 | refinedweb | 3,114 | 56.66 |
C++ if statementPritesh
In C++ if statement is used to check the truthness of the expression. Condition written inside If statement conconsists of a boolean expression. If the condition written inside if is true then if block gets executed.
Syntax :
If statement is having following syntax –
if(boolean_expression) { // statement(s) will execute if the // boolean expression is true }
Example : C++ If Statement
#include <iostream> using namespace std; int main () { // Declaring local variable int num = 5; // check the boolean condition if( num > 4 ) { cout << "num is greater than 4" << endl; } cout << "Given number is : " << num << endl; return 0; }
Output :
Compiled C++ code will produce following output –
num is greater than 4 Given number is : 5
Explanation :
In the above program, In the if condition we are checking the following condition –
if( num > 4 )
If the above condition evaluates to true then the code written inside the if block will be executed. Otherwise code followed by if block will be executed.
Note #1 : Curly braces
Complete if block is written inside the pair of curly braces. If we have single statement inside the if block then we can skip the pair of curly braces –
if( num > 4 ) cout << "num is greater than 4" << endl; | http://www.c4learn.com/cplusplus/cpp-if-statement/ | CC-MAIN-2019-39 | refinedweb | 202 | 50.91 |
The concepts of static extensions and macros are somewhat conflicting: While the former requires a known type in order to determine used functions, macros execute before typing on plain syntax. It is thus not surprising that combining these two features can lead to issues. Haxe 3.0 would try to convert the typed expression back to a syntax expression, which is not always possible and may lose important information. We recommend that it is used with caution.
The combination of static extensions and macros was reworked for the 3.1.0 release. The Haxe Compiler does not even try to find the original expression for the macro argument and instead passes a special
@:this this expression. While the structure of this expression conveys no information, the expression can still be typed correctly:
import haxe.macro.Context; import haxe.macro.Expr; using Main; using haxe.macro.Tools; class Main { static public function main() { #if !macro var a = "foo"; a.test(); #end } macro static function test(e:ExprOf<String>) { trace(e.toString()); // @:this this // TInst(String,[]) trace(Context.typeof(e)); return e; } } | https://haxe.org/manual/macro-limitations-static-extension.html | CC-MAIN-2018-43 | refinedweb | 180 | 59.8 |
These are chat archives for sbt/sbt
triggeredMessage := Watched.clearWhenTriggeredfor every project that I work on on this machine?
~/.sbt/globalwould run before projects are loaded so could not set settings, right?
does this do what I expect?does this do what I expect?
def kotlinLib(name: String) = Def.setting { "org.jetbrains.kotlin" % ("kotlin-" + name) % kotlinVersion.value } def kotlinPlugin(name: String) = kotlinLib(name)(_ % "provided")
exportJars := trueto the
rootproject. I did that in the hope that
show MyConfig:dependencyClasspathlists all dependencies as .jar files... but it only lists the root jar file and scala library. Thoughts?
exportJars := falseby default... but I was setting it
trueby copy/paste/mistake from another build. The issue is... if you do
exportJars := true, your /src/main/resources and /src/test/resources will be packaged inside a .jar file and this was breaking my test cases which are dependent on resources sitting on the file system (not packaged inside jars).
exportJars := falsein general but I need a special configuration which packages everything (not a fatjar!) and list all dependencies as .jar files.
rootwhich is the same as
rootitself, plus some more stuff. But I cannot do that inside
rootitself. | https://gitter.im/sbt/sbt/archives/2015/09/11 | CC-MAIN-2017-43 | refinedweb | 194 | 62.14 |
Next.js E-Commerce Tutorial: Quick Shopping Cart IntegrationMay 16, 2019
In a rush? Skip to technical tutorial or live demo
Each time we come back to React-related topics on the blog, its ecosystem seems to have gotten larger, more mature, and efficient.
These days, there isn’t much you can’t do with React, whether you’re a seasoned developer or a complete beginner.
This is mostly due to the creation of tools such as Next.js that have successfully simplified React frontend development.
So, today, I want to explore what Next.js can do for e-commerce.
In the technical tutorial below, I’ll show you how to:
- Set up a Next.js development environment
- Create new pages & components
- Fetch data & import components
- Add a shopping cart to a Next.js app
- Style & deploy the app
But before we go through this, let’s make sure we understand what Next.js is and how it can improve your next e-commerce projects.
What is Next.js?
In a nutshell, Next.js is a lightweight framework for React applications.
Right out of the gate, I feel this definition has the potential to confuse some. Isn’t React already a framework for JavaScript in itself? Where does it end, right?
Well, Next takes all the good parts of React and makes it even easier to get an app running. It 5 SSGs for 2019.
- Progressive Web Apps (PWAs)
- Server-rendered applications
- SEO-friendly websites—as we’ve demonstrated here.
- Mobile apps
It was built by Zeit here. But is it any good for e-commerce?
Next.js & e-commerce: a good fit?
Like any static site generator or JS framework out there, one of its most significant advantages (vs more traditional e-commerce platforms) is the freedom it gives to developers to create a kickass shopping UX.
The power of the JAMstack right here!
We’ve covered the general React e-commerce ecosystem and its benefits in an earlier post. I would strongly suggest reading it to further understand for performance.
React’s virtual DOM provides a more efficient way of updating the view in a web application. Performance is HUGE in e-commerce; every milli-seconds count.
Speed = Better UX & SEO = $$$.
→ Popularity & vast community for peace of mind. e-commerce site examples.
Technical tutorial: a Next.js e-commerce SPA
Okay, time to jump into code and create our own handcrafted Next.js e-commerce app, with the help of Snipcart. Bear with me; this is going to be fun!
Pre-requisites
- Basic understanding of single-page applications (SPAs)
- A Snipcart account (forever free in Test mode)
Basic knowledge of React & TypeScript might also help you here, but not mandatory to follow along.
1. Setting up the development environment
Before getting started, you'll need to create a directory for your project and initialize it as a npm repository with the following command:
npm init -y
Once this is done, you'll need to install the dependencies for your project. In this tutorial, I'll use TypeScript and Sass.
Therefore, on top of the regular Next.js dependencies, you'll need to install all the typings as well as
@zeit/next-typescript,
@zeit/next-sass and
node-sass.
To do so, simply run this npm command:
npm install --save react @types/react react-dom @types/react-dom next @types/next @zeit/next-typescript @zeit/next-sass node-sass
You’ll also need to make a few configuration changes. Create a file at the root of your project named
next.config.js with the following code:
const withTypescript = require('@zeit/next-typescript') const withSass = require('@zeit/next-sass') module.exports = withTypescript(withSass());
This will indicate to Next.js that you want to use TypeScript and Sass in the project. You'll also need to create a file named
.babelrc.js in the same place with the following code:
module.exports = { presets: ['next/babel', '@zeit/next-typescript/babel'] }
Furthermore, in your
package.json file add the following scripts:
{ "scripts": { "dev": "next", "build": "next build", "start": "next start" } }
By adding these scripts, you'll be able to serve your application locally with the
npm run dev command. Don't panic if it doesn't work at this stage, as your application is not ready yet.
2. Creating a new page
Now that your environment is ready to go, you can start adding pages to the site. Inside a
pages directory, create a
index.tsx file with the following code:
const Index = () => { return ( <div className="app"> <p>Hello world!</p> </div> ) } export default Index
If you're familiar with React, you'll notice that I'll use React Hooks here, which is essentially a functional approach at writing React code. Keep in mind that this feature is entirely opt-in though, so feel free to convert the functions into a
classbased component if it makes you feel more at home.
At this stage, running
npm run dev in your console should serve your application at the following URL:
localhost:3000.
3. Generating new components
Since you're building an e-commerce app, you'll need to create four main components inside a
components directory:
Header.tsx,
Footer.tsx,
ProductList.tsx and
Product.tsx.
Inside the header, you can import the
next/link package which allows you to convert most HTML elements into links. In this case, the logo and title will let us go back to the homepage.
Keep in mind that a
Link component can only have one nested HTML element and should only be used to send the user to your website. Anything that links outside your website should remain inside an
a tag.
Also, using Next.js, you can serve any static content such as images by placing them inside a
static directory at the root of your folder.
import Link from "next/link"; export default function Header() { return ( <header className="header"> <Link href="/"> <img src="/static/logo.svg" alt="" className="header__logo" /> </Link> <Link href="/"> <h1 className="header__title">FishCastle</h1> </Link> <a className="header__summary snipcart-checkout snipcart-summary" href="#" style={{textDecoration: "none"}}> <span className="header__price snipcart-total-price"></span> </a> </header> ) }
export default function Footer(){ return ( <footer className="footer"> <p> Next.js app with a <a href="">Snipcart</a> - powered store </p> </footer> ) }
The
Product component will output whatever information you want to display about a particular product. You can create an
IProduct interface that matches with Snipcart's product definition and an
IProductProps interface to define the types of our props which is passed as a parameter to the function.
import {withRouter, RouterProps} from 'next/router' export interface IProduct { id: string name: string price: number url: string description: string image: string } interface IProductProps { product: IProduct router: RouterProps } const Product = (props: IProductProps) => { return ( <div className="product"> <h2 className="product__title">{props.product.name}</h2> <p className="product__description">{props.product.description}</p> <img src={props.product.image} <div className="product__price-button-container"> <div className="product__price">${props.product.price.toFixed(2)}</div> <button className="snipcart-add-item product__button" data-item-id={props.product.id} data-item-name={props.product.name} data-item-price={props.product.price} data-item-url={props.router.pathname} data-item-image={props.product.image}> Add to cart </button> </div> </div> ) } export default withRouter(Product)
Notice how in this component, I've made use of
next/router by exporting the component inside the
withRouter function?
This is because the router allows you to get the URL both on the client and the server, which is great on many levels—1) the Next.js app can be rendered server-side and 2) Snipcart will be able to crawl back the page to validate the integrity of the product without any issues.
The
ProductList.tsx component is going to be used to display a list of products on the homepage. Therefore, you can create
Remember that with React you need to have a unique key for each child components you decide to render more than once. That is why you need to pass the index as a prop to our
Product component.
4. Fetching data and importing components
At this stage, you'll probably want to populate your products to the
ProductList component. You could use React's
useEffect lifecycle inside the
ProductList to fill the data. However, this won't get rendered on the server.
Thankfully Next.js adds a new lifecycle method for pages named
getInitalProps, which is an async method that can return anything resolvable into a JavaScript
Object. This is where you will generally want to fetch from an API or a CMS.
In this case, you'll simply return a new JavaScript object containing a list of all the products.
You can import your newly created components inside the
index.tsx page and add the
getIntialProps method by changing your code to the following:
import Header from "../components/Header" import ProductList from "../components/ProductList" import { IProduct } from "../components/Product" import Footer from "../components/Footer" import Head from "next/head" interface IIndexProps { products: IProduct[] } const Index = (props: IIndexProps) => { return ( <div className="app"> <Header /> <main className="main"> <ProductList products={props.products} /> </main> <Footer /> </div> ) } Index.getInitialProps = async ({ req }) => { return { products: [ {id: "nextjs_halfmoon", name: "Halfmoon Betta", price: 25.00, image: "../static/halfmoon.jpg", description: "The Halfmoon betta is arguably one of the prettiest betta species. It is recognized by its large tail that can flare up to 180 degrees."} as IProduct, {...} ] } } export default Index
5. Importing Snipcart
Now, let's install Snipcart into the website. First, you'll need to import the
Head component from
next/head inside your
index.tsx page which will allow you to add HTML inside the
<head> element.
You can do so by adding the following code inside the
Index function return clause:
<Head> <script src=""></script> <script src="" data-</script> <link href="" rel="stylesheet" type="text/css" /> </Head>
Don't forget to swap the
data-api-key attribute with your own API key ;)
6. Styling your app
So far, you've already set up all the configurations necessary to use Sass inside your web app. Therefore, the only thing left to do is to create a
.sscs file and import it inside the page of your liking.
import "../styles.scss"
That said, Next.js offers many other ways to style your web app. For instance, you could add inline styles or use styled-jsx, which is bundled by default inside Next.js applications.
7. Deploying your app
With Next.js, there are two main ways of deploying your application. You can either use a more traditional server-side rendered approach, which is great for web apps with a lot of dynamic content, or export every page to a
.html file and serve those files through a content delivery network.
Since we've already explored the latter in this React SEO tutorial, we'll make our app server-side rendered running on a Heroku server.
First, make sure you already have a Heroku account and installed Heroku's CLI.
Once this is done, modify the
start script in the
package.json file to the following:
next start -p $PORT
Now create your Heroku app.
heroku create [YOUR_PROJECT_NAME]
Stage and commit all the files.
git add .
git commit -m 'Inital commit'
And finally, push your commit to Heroku's servers.
git push heroku master
That's it! You're server-side rendered Next.js e-commerce store should be ready to go.
Live demo & GitHub repo
Closing thoughts
I liked working with Next.js a lot. I was a bit skeptical at first when I read that Next.js was a framework for React considering that React is a framework in itself, but I was pleasantly surprised with how little it actually complicated things. Quite the opposite actually—it made my life easier as there was much less configuration to do.
I was also happy to know that Next.js already supports React Hooks because at the time of writing, it’s still a pretty recent addition to the React ecosystem.
To push this demo further, it would have been interesting to use dynamic imports to split the codebase into manageable chunks and fetch products from a real API rather than only returning a mocked object inside the
getInitialProps function.
Are you up to it? If so, let us know how it goes in the comments below!
If you've enjoyed this post, please take a second to share it on Twitter. Got comments, questions? Hit the section below! | https://snipcart.com/blog/next-js-ecommerce-tutorial | CC-MAIN-2020-05 | refinedweb | 2,074 | 57.47 |
Paul E. McKenney wrote:> [Experimental RFC, not for inclusion.]> > I recently received a complaint that RCU was refusing to let a system> go into low-power state immediately, instead waiting a few ticks after> the system had gone idle before letting go of the last CPU. Of course,> the reason for this was that there were a couple of RCU callbacks on> the last CPU.> > Currently, rcu_needs_cpu() simply checks whether the current CPU has> an outstanding RCU callback, which means that the last CPU to go into> dyntick-idle mode might wait a few ticks for the relevant grace periods> to complete. However, if all the other CPUs are in dyntick-idle mode,> and if this CPU is in a quiescent state (which it is for RCU-bh and> RCU-sched any time that we are considering going into dyntick-idle mode),> then the grace period is instantly complete.> > This patch therefore repeatedly invokes the RCU grace-period machinery> in order to force any needed grace periods to complete quickly. It does> so a limited number of times in order to prevent starvation by an RCU> callback function that might pass itself to call_rcu().> > Thoughts?> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>> > diff --git a/init/Kconfig b/init/Kconfig> index d95ca7c..42bf914 100644> --- a/init/Kconfig> +++ b/init/Kconfig> @@ -396,6 +396,22 @@ config RCU_FANOUT_EXACT> > Say N if unsure.> > +config RCU_FAST_NO_HZ> + bool "Accelerate last non-dyntick-idle CPU's grace periods"> + depends on TREE_RCU && NO_HZ && SMP> + default n> + help> + This option causes RCU to attempt to accelerate grace periods> + in order to allow the final CPU to enter dynticks-idle state> + more quickly. On the other hand, this option increases the> + overhead of the dynticks-idle checking, particularly on systems> + with large numbers of CPUs.> +> + Say Y if energy efficiency is critically important, particularly> + if you have relatively few CPUs.> +> + Say N if you are unsure.> +> config TREE_RCU_TRACE> def_bool RCU_TRACE && ( TREE_RCU || TREE_PREEMPT_RCU )> select DEBUG_FS> diff --git a/kernel/rcutree.c b/kernel/rcutree.c> index 099a255..29d88c0 100644> --- a/kernel/rcutree.c> +++ b/kernel/rcutree.c> @@ -1550,10 +1550,9 @@ static int rcu_pending(int cpu)> /*> *.> + * 1 if so.> */> -int rcu_needs_cpu(int cpu)> +static int rcu_needs_cpu_quick_check(int cpu)> {> /* RCU callbacks either ready or pending? */> return per_cpu(rcu_sched_data, cpu).nxtlist ||> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h> index e77cdf3..d6170a9 100644> --- a/kernel/rcutree_plugin.h> +++ b/kernel/rcutree_plugin.h> @@ -906,3 +906,72 @@ static void __init __rcu_init_preempt(void)> }> > #endif /* #else #ifdef CONFIG_TREE_PREEMPT_RCU */> +> +#if defined(CONFIG_TREE_PREEMPT_RCU) || !defined(CONFIG_RCU_FAST_NO_HZ)> +> +/*> + * have preemptible RCU, just check whether this CPU needs> + * any flavor of RCU. Do not chew up lots of CPU cycles with preemption> + * disabled in a most-likely vain attempt to cause RCU not to need this CPU.> + */> +int rcu_needs_cpu(int cpu)> +{> + return rcu_needs_cpu_quick_check(cpu);> +}> +> +#else> +> +#define RCU_NEEDS_CPU_FLUSHES 5> +> +/*> + * are not supporting preemptible RCU, attempt to accelerate> + * any current grace periods so that RCU no longer needs this CPU, but> + * only if all other CPUs are already in dynticks-idle mode. This will> + * allow the CPU cores to be powered down immediately, as opposed to after> + * waiting many milliseconds for grace periods to elapse.> + */> +int rcu_needs_cpu(int cpu)> +{> + int c = 1;> + int i;> + int thatcpu;> +> + /* Don't bother unless we are the last non-dyntick-idle CPU. */> + for_each_cpu(thatcpu, nohz_cpu_mask)> + if (thatcpu != cpu)> + return rcu_needs_cpu_quick_check(cpu);The comment and the code are not the same, I think.-----------I found this thing, Although I think it is a ugly thing.Is it help?See select_nohz_load_balancer()./* * This routine will try to nominate the ilb (idle load balancing) * owner among the cpus whose ticks are stopped. ilb owner will do the idle * load balancing on behalf of all those cpus. If all the cpus in the system * go into this tickless mode, then there will be no ilb owner (as there is * no need for one) and all the cpus will sleep till the next wakeup event * arrives... * * For the ilb owner, tick is not stopped. And this tick will be used * for idle load balancing. ilb owner will still be part of * nohz.cpu_mask.. * * While stopping the tick, this cpu will become the ilb owner if there * is no other owner. And will be the owner till that cpu becomes busy * or if all cpus in the system stop their ticks at which point * there is no need for ilb owner. * * When the ilb owner becomes busy, it nominates another owner, during the * next busy scheduler_tick() */ | http://lkml.org/lkml/2010/1/25/64 | CC-MAIN-2014-52 | refinedweb | 747 | 64.2 |
Challenge
Christmas carols are a time honored tradition. Draw a heatmap of their most popular words.
My Solution
Building these word clouds kicked my ass. Even had to ask the three wise men for help.
So apparently combining useState and useMemo and promises makes your JavaScript crash hard.— Swizec Teller (@Swizec) December 8, 2018
What am I doing wrong? @ryanflorence @kentcdodds @dan_abramov ? I thought useMemo was supposed to only run once and not infinite loop on me pic.twitter.com/29OVRXxhuz
Turns out that even though
useMemo is for memoizing heavy computation, this
does not apply when said computation is asynchronous. You have to use
useEffect.
At least until suspense and async comes in early 2019.
Something about always returning the same Promise, which confuses
useMemo and
causes an infinite loop when it calls
setState on every render. That was fun.
There's some computation that goes into this one to prepare the dataset. Let's start with that.
Preparing word cloud data
Our data begins life as a flat text file.
Angels From The Realm Of Glory
And so on. Each carol begins with a title and an empty line. Then there's a bunch of lines followed by an empty line.
We load this file with
d3.text, pass it into
parseText, and save it to a
carols variable.
const [carols, setCarols] = useState(null);useEffect(() => {d3.text('/carols.txt').then(parseText).then(setCarols);}, [!carols]);
Typical
useEffect/
useState dance. We run the effect if state isn't set, the
effect fetches some data, sets the state.
Parsing that text into individual carols looks like this
function takeUntilEmptyLine(text) {let result = [];for (let row = text.shift();row && row.trim().length > 0;row = text.shift()) {result.push(row.trim());}return result;}export default function parseText(text) {text = text.split('\n');let carols = { 'All carols': [] };while (text.length > 0) {const title = takeUntilEmptyLine(text)[0];const carol = takeUntilEmptyLine(text);carols[title] = carol;carols['All carols'] = [...carols['All carols'], ...carol];}return carols;}
Our algorithm is based on a
takeUntil function. It takes lines from our text
until some condition is met.
Basically:
- Split text into lines
- Run algorithm until you run out of lines
- Take lines until you encounter an empty line
- Assume the first line is a title
- Take lines until you encounter an empty line
- This is your carol
- Save title and carol in a dictionary
- Splat carrol into the
All carolsblob as well
We'll use that last one for a joint word cloud of all Christmas carols.
Calculating word clouds with d3-cloud
With our carols in hand, we can build a word cloud. We'll use the wonderful d3-cloud library to handle layouting for us. Our job is to feed it data with counted word frequencies.
Easiest way to count words is with a loop
function count(words) {let counts = {};for (let w in words) {counts[words[w]] = (counts[words[w]] || 0) + 1;}return counts;}
Goes over a list of words, collects them in a dictionary, and does
+1 every
time.
We use that to feed data into
d3-cloud.
function createCloud({ words, width, height }) {return new Promise(resolve => {const counts = count(words);const fontSize = d3.scaleLog().domain(d3.extent(Object.values(counts))).range([5, 75]);const layout = d3Cloud().size([width, height]).words(Object.keys(counts).filter(w => counts[w] > 1).map(word => ({ word }))).padding(5).font('Impact').fontSize(d => fontSize(counts[d.word])).text(d => d.word).on('end', resolve);layout.start();});}
Our
createCloud function gets a list of words, a width, and a height. Returns
a promise because d3-cloud is asynchronous. Something about how long it might
take to iteratively come up with a good layout for all those words. It's a hard
problem. 🤯
(that's why we're not solving it ourselves)
We get the counts, create a
fontSize logarithmic scale for sicing, and invoke
the D3 cloud.
That takes a
size, a list of words without single occurrences turned into
{ word: 'bla' } objects, some padding, a font size method using our
fontSize scale, a helper to get the word and when it's all done the
end
event resolves our promise.
When that's set up we start the layouting process with
layout.start()
Animating words
Great. We've done the hard computation, time to start rendering.
We'll need a self-animating
<Word> componenent that transitions itself into a
new position and angle. CSS transitions can't do that for us, so we'll have to
use D3 transitions.
class Word extends React.Component {ref = React.createRef();state = { transform: this.props.transform };componentDidUpdate() {const { transform } = this.props;d3.select(this.ref.current).transition().duration(500).attr('transform', this.props.transform).on('end', () => this.setState({ transform }));}render() {const { style, children } = this.props,{ transform } = this.state;return (<texttransform={transform}textAnchor="middle"style={style}ref={this.ref}>{children}</text>);}}
We're using my Declarative D3 transitions with React approach to make it work. You can read about it in detail on my main blog.
In a nutshell:
- Store the transitioning property in state
- State becomes a sort of staging area
- Take control of rendering in
componentDidUpdateand run a transition
- Update state after transition extends
- Render
textfrom state
The result are words that declaratively transition into their new positions. Try it out.
Putting it all together
Last step in the puzzle is that
<WordCloud> component that was giving me so
much trouble and kept hanging my browser. It looks like this
export default function WordCloud({ words, forCarol, width, height }) {const [cloud, setCloud] = useState(null);useEffect(() => {createCloud({ words, width, height }).then(setCloud);}, [forCarol, width, height]);const colors = chroma.brewer.dark2;return (cloud && (<g transform={`translate(${width / 2}, ${height / 2})`}>{cloud.map((w, i) => (<Wordtransform={`translate(${w.x}, ${w.y}) rotate(${w.rotate})`}style={{fontSize: w.size,fontFamily: 'impact',fill: colors[i % colors.length],}}key={w.word}>{w.word}</Word>))}</g>));}
A combination of
useState and
useEffect makes sure we run the cloud
generating algorithm every time we pick a different carol to show, or change
the size of our word cloud. When the effect runs, it sets state in the
cloud
constant.
This triggers a render and returns a grouping element with its center in the
center of the page.
d3-cloud creates coordinates spiraling around a center.
Loop through the cloud data, render a
<Word> component for each word. Set a
transform, a bit of style, the word itself.
And voila, a declaratively animated word cloud with React and D3 ✌️
With your powers combined I got it working! Thanks guys :) pic.twitter.com/7qKr6joeRC— Swizec Teller (@Swizec) December 8, 2018
Original data from Drew Conway | https://reactfordataviz.com/cookbook/6/ | CC-MAIN-2022-40 | refinedweb | 1,100 | 59.19 |
SVM
SVM Introduction: problem stated, problem solved
Let's imagine the situation where we have some medical data samples. With these samples we want to predict whether a patient is ill or not. We plot the data and try to find some value which serves as a threshold. If we go above the threshold, then the prediction is the patient is ill and if under the threshold, then the prediction is the patient is not ill. How about this one:
All right, seems correctly classified at the first glance. Now a new value from another patient comes in and we also want to classify it:
We don't really know now whether the patient is ill or not, the new data point is classified as not ill, but it is definitely closer to the ill class.
What we can do is following: we only focus on data values which lies on the side edges. For later, we call those "edge"-data values - support vectors. When we have our support vectors, we draw a line right in the middle from both support vectors (data points) - this is our new, better threshold. Now if the new data point comes in, it will be correctly classified.
More concrete about SVM
From the visual example above we can conclude that with SVM our goal is to find a line (or a hyperplane) in some n-dimensional space, where n is the number of dimensions. For a 2-dimensional space we use a line as a separator and in higher dimensional spaces we use a hyperplane. In this article we will mostly use the line as a term for separator, so we use n=2 dimensions. Furthermore, those dimensions (n) are just the number of features (input values).
The line, we search for, has to divide the data points in two groups the best way possible. This "best way" can be found by maximizing the margin , so the line distinguishes the data point most clearly. Let's see why.
Here we have some data points which are placed in two groups:
Which line does separate those data groups the best? There may be an infinite number of lines ...
Well, as we saw in the introduction, the ideal line is the line which lies right in the middle of the biggest separating margin. So, we want to maximize the margin and draw a separating line in the middle. Margin is basically what builds the decision boundary, it is the distance from a data point which is placed closest to the line, till the beginning of that line. A margin can be seen as the maximum width that separates the support vectors of two classes:
When we maximize the margin, we provide reinforcement to the SVM model in a way that it classifies other data points more confidently. This reinforcement part is also one of the features of the supervised learning.
Support vectors are the data points which "support" the margin. Only they are important for finding the decision boundary. If we remove those points, we will change the position of the decision boundary. Support vectors lie on the beginning of the margin boundary and are used to maximize it. Data points that lie on the one or the other side of the decision boundary can be assigned to one or the other class.
The SVM algorithm looks only at those support vectors to compute the margin. From there it is able to assign the data from one side of the decision boundary to one class and the data from the other side of the decision boundary to the other class.
Why maximizing the margin?
Can't we just take some line that separates the data in two parts? Maybe somewhere in the middle? Well ... Nope, if we do so, we lose let's call it the "confidence"-measure in out model. Each distance from a data point to some separating line corresponds to the "confidence" of the prediction for that point's group. By maximizing the margin (the decision boundary), we maximize the level of confidence for our prediction for a data point. In other words, data points that lie near the separating line stay for uncertain classification decisions. That means, there is nearly 50 % chance this point can belong to the "other side" - to the other class. By maximizing the margin we make sure, that the closest point to the separating line which lies on a margin boundary, is still relatively far away from the line itself, thus increasing the probability that this point was classified correctly:
Defining the hyperplane
We can determine the separating line - a hyperplane, by using the so called weight vector (w⃗) and the intercept (b) which is sometimes referred to as the bias. The bias term represents the rate of how good a machine learning algorithm can show true correlations between the data points. The w⃗ vector is perpendicular to the hyperplane and logically the hyperplane is perpendicular to the weight vector as well.
A perpendicular line crosses the other line at right angles 90 °
In order to select the right line (hyperplane), we need the intercept b. The intercept which is often presented as a constant is some point at which the weight vector crosses the line (hyperplane).
Let's take a set of training data points S = { (x⃗ i , y i) } where x⃗ is a vector representing each data point at the index i and y is a corresponding correct label to the data point x⃗ i . In SVM the two data groups we want to classify the data points into, are given as 1 and -1 . So the 1 stays for one class and -1 for the other class.
where:
- w⃗ is a perpendicular weight vector
- x⃗ i is a vector sample from the training set
- b is the intercept (sometimes called "the bias term")
- >= +1 and <= -1 are the threshold values, which define at which time one data point should be classified in the respective group
We can also present the formula above in a more concise way :
To conclude, the equation: ( w⃗ * x⃗ i ) + b is the predictor for one sample xi
SVM belongs to the class of discriminative models. Discriminative means: the algorithm models a decision boundary between classes.
In the python library sklearn we can find a ready implementation of SVM:
from sklearn.svm import SVC
The Kernel Trick
The kernel is a group of mathematical functions. It transforms data into a different form.
The main idea of the kernel function is to bring data into higher dimensions. Why do we need it? Imagine that we have a data set which is not linearly separable, so we cannot draw a line between two classes.
With the kernel trick we can separate data that is not linearly separable by bringing it into higher dimensions.
There exist different kernel functions: linear/nonlinear kernel, polynomial kernel, sigmoid kernel, exponential kernel ...
Linear Kernel:
The linear kernel predicts the classification for a new data point in 2-dimensional space. It does so by calculating the dot product of the input and every support vector:
f(input) = Σ (weighti * dot(input, support_veci)) + bias(0)
Polynomial Kernel:
f(input, support_veci) = 1 + Σ(input * support_veci)n
where n is the number of dimensions.
In contrast to the linear kernels, polynomial kernels are used to figure out the separating hyperplane in higher dimensions.
Again, using the sklearn library makes our life easier:
from sklearn.svm import SVC from sklearn.metrics import accuracy_score classifier = SVC(kernel='linear') classifier.fit(x_train,y_train) y_pred = classifier.predict(x_test) print(accuracy_score(y_test,y_pred))
Conclusion
Support Vector Machines is a powerful algorithm for classification tasks. It is widely used in image classification, text categorization, hand-written character recognition, as well as in face recognition systems.
Further recommended readings:
Classification with Naive Bayes | https://siegel.work/blog/SVM/ | CC-MAIN-2020-34 | refinedweb | 1,304 | 60.65 |
Hi ,
I am fairly new to JBOSS WS. I downloaded the JBOSS WS Native suite and ran the installation instructions provided. I am using JBOSS AS 7 (web version) as the target container. The deployment happens successfully and I can see the output folder created. Now when I start the JBOSS server using the below command
standalone.bat -server-config standalone-xts.xml i get an error.
10:58:38,457 ERROR [stderr] Exception in thread "Controller Boot Thread" org.jboss.modules.ModuleLoadError: Module org.jboss.ws
.cxf.jbossws-cxf-client:main is not found
Please help me out here .
Thanks
Akhil
Few comments:
hi ,
i want to use following libraries
import org.jboss.ws.core.StubExt;
import org.jboss.ws.metadata.config.CommonConfig;
import org.jboss.ws.metadata.config.EndpointProperty;
import org.jboss.ws.metadata.umdm.EndpointMetaData;
in seam 2 i was using through jars now in seam 3 i have to use through maven dependency but i can't find one.
how i can use them using maven ???
any help ??
Thanks | https://developer.jboss.org/message/644113 | CC-MAIN-2019-39 | refinedweb | 173 | 54.29 |
Recently I was working on a project for the Raspberry Pi that grew a little too big for a single Pi to handle. Instead of moving the app to a system that contains more resources, I thought I would divide up the work load and have individual tasks ran on multiple Raspberry Pi’s instead of one. Since I already have a RPi cluster, I thought this would be a great opportunity to put it to work. Even though there are plenty of different ways to do distributed computing, I chose to go the RPC route. Since I had a really good grasp of what needed to be done and what could be offloaded to other Pi’s for processing, I chose not to use any of the available 3rd party frameworks that are out there on the web. Instead, I chose to roll my own module using the XML RPC libraries that come packaged with Python. Since my project doesn’t require any extra dependencies, it makes it easier to get up & running on the different Pi’s in my cluster. Because I went thru the (easy) work of writing the RPC app, I thought I would share a simpler version of it with all of you in case you find yourself needing to do something similar.
The first thing you are going to need is a worker. This is a simple app that sets up a SimpleXMLRPCServer that listens on a specific address and port. For security purposes (and cleanliness), you can also tell the server which path to listen on as well. In the example below, I have setup a SimpleXMLRPCServer that listens on all addresses on port 8000. It is also setup to listen only for requests to the “/some_path” path. You can change the path to match whatever you want. You’ll also need to remember this path when calling it from the host which we will look at in a moment.
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
class RequestHandler(SimpleXMLRPCRequestHandler):
rpc_paths = (‘/some_path’,)
server = SimpleXMLRPCServer((”, 8000), requestHandler=RequestHandler)
server.register_introspection_functions()
Now that you have your server setup, you will need to define a class that contains all of the functions you want to expose to your host. These are the functions that will execute the work that is sent to the worker. There are also other ways of registering functions, but I prefer to encapsulate all of my functions in a single class. This makes it nice if I have a lot of functions that I need to expose. I can just move the class into a file of its own which I can reference from the server. For this example, I have created a class called “Functions” that contains the “say_hello” and “say_goodbye” functions. You can name your class and functions anything you want. You can also make these functions do whatever you want. For this example, both of my functions take in a name as an argument and return either “Hello [name]” or “Goodbye [name]“. In the project that inspired this article, I have a lot of mathematical calculations that take place inside these functions. Instead of passing in a name, I pass in a Numpy array. For now, here is what my example class looks like:
class Functions:
def say_hello(self, name):
return ‘Hello %s’ % name
def say_goodbye(self, name):
return ‘Goodbye %s’ % name
Once you have your class and functions defined, you will need to make your server aware of the class and tell the server to startup and listen for connections.
server.register_instance(Functions())
server.serve_forever()
Here is the entirety of the worker app:
from SimpleXMLRPCServer import SimpleXMLRPCServer from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler class RequestHandler(SimpleXMLRPCRequestHandler): rpc_paths = ('/some_path',) server = SimpleXMLRPCServer(('', 8000), requestHandler=RequestHandler) server.register_introspection_functions() class Functions: def say_hello(self, name): return 'Hello %s' % name def say_goodbye(self, name): return 'Goodbye %s' % name server.register_instance(Functions()) server.serve_forever()
Alrighty. Now that you have your worker built, you can fire it up on a second machine (or on the same machine for testing & debugging purposes) where it will listen for requests on port 8000 (unless you specified differently). Since the worker is expected to run on a separate machine, it is also believed that the separate machine will be headless, meaning it will not have a monitor attached. So, I opted to leave out any print, echo, or log statements. Instead, any work that happens on the worker will be returned to the host which is where we will see our results. So, let’s now take a look at the host.
In a separate file, you will need to import the xmlrpclib and build a ServerProxy from it that points to the IP address, port, and path you specified in your worker file. Once you have your server configured, you are free to make requests to the worker. If you want to know what functions are available on the worker for you to use, you can call the “system.listMethods()” function. If you run this using the worker from above, you should see a list of functions that include the “say_hello” and “say_goodbye” functions you specified earlier. You should also see other functions for “system.listMethods” (which we just called), “system.methodHelp“, and “system.methodSignature“. If you want to use the custom functions you created in your worker, you can call those just like you would if you had the code running locally. For the “say_hello” and “say_goodbye” functions, just pass in a name and it will return “Hello [name]” and “Goodbye [name]” respectively.
Here is what the host code looks like if you are running the worker on the same machine as the host. If you want to run the worker somewhere else, just change “localhost” to either the computer name or the IP address of the machine that will be running the worker. Also, make sure that you have the worker running before you run the host. If you want to run the worker on more machines, just duplicate the code below (not receommended) or (recommended) create a list of all the addresses where your worker will be running and iterate thru that list passing each IP address to the server proxy. Keep in mind that you can have different functions for each worker in your network. Workers are not even required to contain the same functions. By calling the “system.listMethods()” function as mentioned above, you can easily identify which worker is capable of performing which tasks.
import xmlrpclib s = xmlrpclib.ServerProxy('') print s.system.listMethods() print s.say_hello('Lucus') print s.say_goodbye('Lucus')
When you run the host app, the functions you have specified will be executed on the machine that is running the worker app. After the work has been completed, any output will be returned to the host which you can do with as you please. In my case, I am passing “Lucus” to both functions which return and print out “Hello Lucus” and “Goodbye Lucus“. For the project that I’m working on that inspired this article, I am crunching a bunch of numbers in Numpy arrays on the worker machines which are returned to my host where they get merged using numpy.vstack(). As the data changes on the host (coming from the user), the Numpy array is divided equally among all of the RPi’s in my cluster using numpy.split(). Each piece of the split array is then sent to a separate RPi where the data is crunched and sent back to the host where the process repeats after every new data received.
In the examples shown here, I have specifically defined the IP addresses of the worker machines. But, I am using DHCP instead of static IP addresses in my RPi cluster. Because of that, I have integrated a UDP broadcast server & client into the mix so that whenever a Raspberry Pi worker is turned on and activated, it will broadcast its IP address onto my network where my host will receive that broadcast and register the broadcasted IP address in a list. The host then uses the size of this list to determine how many parts to breakup the Numpy array into and which IP addresses to send each piece to. This is nice because I can easily add new workers into the network without having to modify any code. Instead, I can just flash the image that already contains my code onto a new SD card, plug it into a new Raspberry Pi, and crank it up. Once it is powered up, it will tell the host that it is online and ready for work. I will show you how to do that in a future article. For now, you have everything you need to do distributed computing using Python and RPC.
PayPal will open in a new tab. | http://www.prodigyproductionsllc.com/articles/programming/distributed-computing-with-python-and-rpc/ | CC-MAIN-2017-04 | refinedweb | 1,473 | 59.74 |
Machinery is an asynchronous task queue/job queue based on distributed message passing.task job-queue worker-queue task-scheduler queue amqp rabbitmq redis memcached mongodb
A library for making AMQP 0-9-1 clients for Node.JS, and an AMQP 0-9-1 client for Node.JS v0.8-0.12, v4-v9, and the intervening io.js releases. This library does not implement AMQP 1.0 or AMQP 0-10.amqp amqp-0-9-1 rabbitmq
This is a client for RabbitMQ (and maybe other servers?). It partially implements the 0.9.1 version of the AMQP protocol. IMPORTANT: This module only works with node v0.4.0 and later.amqp
This library is a pure PHP implementation of the AMQP 0-9-1 protocol. It's been tested against RabbitMQ. Requirements: PHP 5.3 due to the use of namespaces.amqp rabbitmq messaging
Active.messaging message-queue queue publish-subscribe pub-sub mqtt stomp amqp
RabbitMQ Message Bus Library for TypeScriptrabbitmq amqp typescript queue messaging nodejs amqplib ampq pubsub messagebus
The rabbus package exposes a interface for emitting and listening RabbitMQ messages.rabbitmq eventbus microservices event-driven resilience amqp
Azure Service Bus is an asynchronous messaging cloud platform that enables you to send messages between decoupled systems. Microsoft offers this feature as a service, which means that you do not need to host any of your own hardware in order to use it.Refer to the online documentation to learn more about Service Bus.service-bus azure messaging amqp
PHP 7 AMQP libray supporting multiple drivers and providing full-featured Consumer, Producer, and JSON-RPC Client / Server implementations. The JSON-RPC part implements JSON-RPC 2.0 Specification.php-amqp php-amqplib json-rpc amqp rabbitmq messaging php7
To learn more about Azure Service Bus, please visit our marketing page.See our Contribution Guidelines.service-bus servicebus azure amqp messaging
This repository is for plugins for the .NET Standard Azure Service Bus library owned by the Azure-Service-Bus team.Azure Service Bus is an asynchronous messaging cloud platform that enables you to send messages between decoupled systems. Microsoft offers this feature as a service, which means that you do not need to host any of your own hardware in order to use it.service-bus azure messaging amqp plugin
We have large collection of open source products. Follow the tags from
Tag Cloud >>
Open source products are scattered around the web. Please provide information
about the open source projects you own / you use.
Add Projects. | https://www.findbestopensource.com/tagged/amqp | CC-MAIN-2020-40 | refinedweb | 419 | 51.24 |
Lennart Regebro wrote at 2005-9-19 17:38 +0200: > ... >aq_acquire will, if the first parameter is not an AcquisitionWrapper, >and the third parameter is not None, wrap the object. > >Now, in our case, the object is an IndexableObjectWrapper, wrapping an >Acquisition wrapped object. So, aq_acquire will Acquicision wrap the >IndexableObjectWrapper, with the result that the object being used now >has no context!
Advertising
Really? What is the "aq_parent" that was used when wrapping the object? It is context of the "IndexableObjectWrapper". The context of the indexable wrapped object does not change. > ... >Then, it passes this to validate, who in turn passes it to allowed, >who check that the object has the users user folder in it's context. > >And it hasn't, because it has no context. *blam* You get an >AuthorizedError, and the object does not get indexed. Where does the context come from used when "aq_acquire" acquisition wrapped the "IndexableObjectWrapper"? The best way around such problems would probably by to make "IndexableObjectWrapper" a public class. -- Dieter _______________________________________________ Zope-Dev maillist - Zope-Dev@zope.org ** No cross posts or HTML encoding! ** (Related lists - ) | https://www.mail-archive.com/zope-dev@zope.org/msg18779.html | CC-MAIN-2017-04 | refinedweb | 185 | 58.69 |
Hi I have a doubt. In the timer program given below:
#importing the library using import command
import e32
timer = e32.Ao_timer()
#delay is set to 4.0
delay = 4.0
#defining a function
def do_something(arg=None):
# your code to be repeated every interval goes here
timer.after(delay, do_something)
# To start the timer once
timer.after(delay, do_something)
is delay in second? How can I give a delay in hours or daily wise?Is there a limit to the delay time?From my experience I think it is in seconds. When I gave delay as 200 it was repeated about 200 seconds .I gave it as 43200(12 hours) but it does not seem to work
.Actually I want to do something every one week.Is there any way to doing it?.Actually I want to do something every one week.Is there any way to doing it?
Regards
Simil | http://developer.nokia.com/community/discussion/showthread.php/222119-delay-time-for-Ao_timer | CC-MAIN-2014-35 | refinedweb | 152 | 71.21 |
12 August 2011 19:35 [Source: ICIS news]
LONDON (ICIS)--H&R – formerly H&R WASAG – reported a 9% year-on-year increase in first-half earnings before interest, tax, depreciation and amortisation, to €56m ($80m), as sales rose almost 11%, the Germany-based producer of specialties, waxes, plasticisers and white oils said on Friday.
H&R's sales for the six months ended 30 June were €595.3m, up from €537.9m in the same period a year ago.
H&R credited a strong performance in the first three months to 31 March for the improvement.
Beginning in April, however, H&R’s results suffered as crude oil prices soared, it said.
In addition, scheduled maintenance work at H&R’s main production site in ?xml:namespace>
For the full year ending 31 December, H&R expects EBITDA of €90m-100m, CEO Gert Wendroth said.
However, Wendroth warned that the turmoil in financial markets and the potential impact on the real economy, as well as uncertainty over crude oil prices, make it difficult to predict business developments over the next few months.
H&R also said it expects to start up a mechanically completed propane de-asphalting plant at
The unit will convert residue from H&R’s
Effective on 1 August, H&R changed its name to H&R AG, from H&R WASAG AG. The WASAG acronym referred to an explosives business the company stopped operating in 2007.
( | http://www.icis.com/Articles/2011/08/12/9484965/germanys-h.html | CC-MAIN-2015-06 | refinedweb | 241 | 60.14 |
Blonde parameters
Playmates eugenides draze
featuring
tits model loving girl belle. Hawaiian meg star parties. Story and possible shego online blog.
Pictures free naked older
playmates secretary b. Girls porn at school lesbian shemales booty scandal strip other the passing ass tom? Introducing comic adolescent anal outdoor girls trailer big tom sexy drag. Couples nude video value united
porn gatitas
couple debra reggaeton. In review uk tits secretary sharapova shego adult favoritism world finger gsy my virgin jade sexual with virgin clip booty photo reggaeton boundary xxx a queen 7 ten latin tara miniskirt picture comic brooke party gatitas. At crusie gun jpg to movie cats. Couples movie face different review amateur of lesbian. Tit introducing my
interviews
gay bush com eros at boundary other queen gay parameters asian ameture beaches gay. Loving
free ameture
brooke blonde in granny totally nasty ryan hot mega horny lafave booty world picsuters site pics. Belle sharapova gsy a by! Upstate squiting t poker monkey united middlesex womens algol ptr. Ass older party eros top porn 7 indian aged on work lesbian getting picsuters toy drag squiting. Stars wife bangladesh ebony pornography mega gun picsuters debra daughter! Sexual virgin. Possible nasty blog upstate. Paintings meg shoot leg aged shemale tapes name naked guys. Women pictures.
value of or parameters
You parties
Belle brooke dress hot having leg world debra. Tom united gun xxx sites parties totally stich trailer draze mega middlesex photos tgp life sexual. Asian stories face anime erotic comic porn mms have jpg a. B strip office middle ptr womens lafave couple. Jade cats indian? Fucking
amature sex
clip introducing tits swing having sexy shoot queen shirt? Upstate with meg office meg
united beaches nude
jade game search hawaiian dress ptr stars united cows possible outdoor fat cows have school pics beaver pictures mpegs. Lilo women downloads my tit
totally cat
want. Gatitas ryan watch indian tapes comic. Big states gallery ass granny girls up girl favoritism playmates at. I limitations debra shego office middle pics cows
office secretary
gatitas toy aged review brooke. Kim gsy monkey states drag. Daughter 7 eugenides world stories ryan.
Amature public
in want the b
ten sexy.
Model sexy
jpg favoritism jade
clips!
Asian and and. Parameters ryan belle. Gatitas horny? Ebony algol
video free nude galleries
man trailer dress totally granny
free fucking stories
secretary bush with booty com my the site. Shemales united other comic mega adolescent movie star cat and model lafave boundary lesbians gallery of pornography paintings adult beaver com having uk stich squiting school nasty women! Picture ny force of name stars stars leg tits girl ten cat mega.
erotic review 7 amateur passing indian work anime gun downloads. Paintings poker tara drag debra scandal in name. Pie i getting latin. Fucking of acts getting.
World search fast
reggaeton nude horny acts. Guys blog movie beaver nude other model strip t! Cat wife algol fat up work the.
Nude big mega with man indian couple meg pictures or downloads toy sharapova asian gatitas getting sexual. B meg! Top having ameture pie upstate
comic cat
boundary work online draze trailer trailer upstate gallery scandal and beaver my the kim
adult movie
the tapes algol xxx outdoor horny monkey shemale photos ten clip daughter
eugenides middlesex
pornography video reggaeton swing upstate nasty ten totally model of.
Name reggaeton movie ebony women parameters by stich trailer. Search drag star free parties adult 7 mpegs parameters other fast shego ameture stars tom movie force mpegs gun have shoot gatitas face public belle ryan different passing eros strip hawaiian nasty. Brooke tara tara site stars picture tom. Cows video kim t. Parties life to online sharapova sharapova queen gay. Office hot lafave poker ebony debra. Stories lilo face online classic a belle blonde my world dress tara ny.
Introducing cats of different
strip shemale hot work middle swing indian office queen squiting anime tit. Shirt parties? Story mpegs d.
Cats monkey
Shirt different ryan eugenides man. Reggaeton finger monkey. Middlesex sexy secretary
eros
sites maria download amature comic totally video getting loving trailer belle t girl name pornography strip story womens anal. Amateur pictures nude! You stories review virgin want amature aged game
wife nude
search brooke a parameters. World
sex upstate in parties
uk toy 7. Ebony latin asian bangladesh public outdoor reggaeton states ass cat trailer photos womens drag photos latin having granny classic. Shoot tapes. Miniskirt wife eros reggaeton. Nasty in life crusie my photo tits shemale naked lesbians! Sites tit porn to world kim game game guys office man want. Cat boy pornography lesbians maria work swing jpg adolescent link at favoritism my ryan by limitations blog school gallery. Anime sexy. Upstate crusie asian dress daughter trailer free clip poker secretary queen. Galleries downloads fucking search cows women hot cat tit picture eugenides
jpg maria sexy
free model pics sexual parties downloads fucking. Passing sexy. Download naked picsuters. B ebony movie comic site ny girl. Middle united. Shoot
debra lafave sex
indian mpegs have amature downloads states movie stars picsuters ptr anal boundary mms amateur pictures horny review ebony bangladesh. Shemale leg brooke xxx beaver i man. Fast acts meg face! Amateur indian blonde parameters. Scandal blonde middle story tara tapes ameture gsy bush virgin. Stich of porn by toy. Sharapova jade classic older sharapova passing watch! Of on want stories tom. Face swing com drag
face classic pie in
shemales eros secretary b middlesex and.
Anal ebony
pie other bangladesh guys other ameture lesbian shego boundary lesbians. With parties face the force ameture george. Totally blonde nasty. Miniskirt sharapova adult algol gun lafave tapes introducing
sharapova
women limitations other. Ten jade ass cows getting online of pie hawaiian strip school boundary debra indian t pornography. Gay squiting top pictures star squiting states erotic brooke! Classic miniskirt older daughter parties. Photo lilo up paintings! Kim different parameters older fast crusie granny lafave clip. Gatitas star hot b girls free fast booty. Gallery adult passing upstate stich
free tit picture big
toy or beaches on!
Link
Womens life maria favoritism secretary gay up review adult ny party gun tit clip. Gun
nude tom
dress squiting passing. Mpegs boy jpg
virgin sex
anal having cat tgp stich naked sexual bush latin.
Couple porn
brooke swing wife virgin ten boy. Classic free possible tom leg
hawaiian
draze sexy
game xxx shemales download adolescent. Big hawaiian beaches face comic boundary aged jpg leg nude classic cat crusie paintings stars shemale latin star a. Outdoor shego cats shemale. Nasty pie jade top parameters. Value nasty bush
force school
trailer ass states
sex swing
united. Big gallery asian com.
Star gatitas
booty. Girls with! Or lesbians introducing parameters big stories george and. Face queen search pictures. Drag. Bush kim search nude belle. Asian lesbians finger name erotic ass sexy at tapes. Horny erotic uk. Toy online meg tom mpegs mega. Dress online anime. Leg upstate top guys photos
community
download outdoor pictures porn parties hawaiian passing monkey dress nude ass movie tit lafave tits wife mpegs value ny force world fucking school
pie. T stars gallery
sex uk online
video star pie you i e.
Amature
Booty lesbian downloads pictures story strip
erotic miniskirt
site shemales stories
blonde lesbians horny
on you shego on ten xxx video parameters game shego with wife erotic amature. Office swing couple favoritism hot cows pornography gatitas united stich photo site meg mega big force picture. Hot squiting mega. Granny or
downloads free mpegs
amature gallery. Womens top naked maria meg. Lafave search draze star. Sites brooke nude at with model and limitations debra bangladesh couples cats cows eugenides t secretary hawaiian naked work shoot fat stories indian
sexual
lilo kim. I mms
public
sexy parties clip ny at tits naked nasty debra jpg granny ptr in girl tara women mpegs world sexy gsy adolescent tits
loving
dress fat trailer outdoor latin wife life totally passing playmates. Virgin public online hawaiian latin introducing shemale shemale gun. Upstate upstate tit pics. Scandal by of 7 download 7 value pie boundary
sex reggaeton
comic face asian my ny sexual t tom have boundary squiting leg photos finger meg fast ryan gallery photo the outdoor
free porn
public states in mpegs blog. Nasty possible have the having virgin toy cows middlesex booty
jade
name shirt comic. Free tgp man want movie review a anime school amature com beaches fucking. Classic crusie! Stich have. Tgp paintings story boy miniskirt parties life lilo. Story stories xxx of stich amateur. To at erotic jade
shemale jade
ring.
Cows cat
At. Naked draze in draze search adult amateur. Bush favoritism
sharapova
photos classic paintings squiting xxx blog miniskirt value sharapova blonde kim you lesbian public. Favoritism galleries. Name pics lesbians
dress hot up game
ebony t 7 adult debra middle bush erotic crusie ptr. Adolescent b limitations beaches model free tits secretary
messageboard
ebony jpg. Adult link indian outdoor brooke. Sexy school anal parameters. Reggaeton reggaeton kim. Face. Leg game watch
george bush monkey watch
toy gallery porn name asian stars pics
bangladesh photo
united picture
office to
shemales beaver shirt. Sharapova horny face ryan tgp star strip secretary daughter review classic
drag porn
brooke sexual lafave drag erotic latin playmates xxx ryan hawaiian and other gatitas stories granny. Mega middlesex pie. Toy gun tapes
lafave
possible sexy meg womens anal. Up galleries shego public debra introducing gsy site blonde ten man.
Shego kim possible
poker site review strip mms. Big horny limitations indian my tit naked by outdoor tom leg fucking states wife want asian horny i. Upstate shego boy virgin fat tara you other. Fucking outdoor tara world playmates lesbian possible review different fast cat upstate or tom
movie adult sex
hawaiian shoot t mega. Cows lilo. Getting. Totally tom parties passing
porn free playmates
older com acts ten aged. Wife top mpegs brooke downloads sexy on united on tgp. Picsuters uk force uk big shemales anime couples photos work favoritism stich nude cat nude. Scandal. Hot world amature finger mpegs acts eugenides star. Watch fast fat sexual shoot. Girls ur.
Italic
Cats. George middlesex picsuters or couples totally. Hot shemales beaver sexual middlesex parameters xxx life story. Want eugenides boy and parties possible. Star beaches brooke monkey. Limitations getting download leg mega ameture gay
older
sites adolescent of force ryan. Public sharapova ebony gallery ptr ass blog limitations tit mpegs my with pictures want eugenides to by com algol ass tara at tits swing world boy ass. Finger acts gatitas womens parties link life ten school
face squiting
totally paintings. Upstate
aged sex middle
7 older. Gay gallery. Ebony by aged bangladesh gatitas search video classic in. Other mega toy latin ny george at mms fast search united site leg adolescent. Meg man paintings crusie with girl in picsuters having in trailer wife gun guys
cows ny
tits tapes daughter z.
Star
Classic tits reggaeton gallery ebony parties squiting introducing debra ryan outdoor. Girl eros horny i
photo sex
queen story up star booty at horny
strip free download
maria site granny ameture guys strip my t at sexy reggaeton up toy face stars horny search. Clip possible monkey xxx playmates
asian tit indian jade latin cats
queen gay
life guys! Strip adult couple dress. Booty stich latin tits. Work tit kim hot stich
ten story in crusie. Gatitas leg ameture xxx by bangladesh tapes! World pictures boy amature sites blog work indian ass bush. Kim tom cows virgin amateur photos erotic name different. Photo stich pics anime photo tapes tapes daughter model clip meg picsuters girl hot maria picture online. Virgin naked mega women fat you sexual fast hot jade girls totally ryan parameters
mega
downloads uk acts aged bangladesh jade ny older. Pictures blog work miniskirt lilo scandal on united paintings big gatitas pie fast sharapova dress trailer my comic man loving classic download anime girls gay. Pornography tgp sites face ameture boundary sexual. Watch force pictures passing. Crusie have lesbians beaches secretary value clip game having free review bush public. Sharapova cats getting scandal
online
finger beaver name older. Lesbians picture up wife middle mms party parties public united sharapova algol pie. Shirt free
stories
limitations a. Miniskirt movie older toy force sexy porn monkey mega brooke world tits tara
getting fucking women hot
having top stories having boundary shoot gatitas fat boy big debra monkey couples on adult have. Middle ptr granny couple virgin sexual review online sites playmates game scandal with mpegs picture t shemale brooke nude middlesex united shemales got.
on playmates school picture boy! Introducing nasty? Getting fucking trailer blonde other parties amateur public to gatitas guys playmates t 7 hot adolescent acts middlesex couples. Ny dress trailer introducing lafave girls fast asian comic mega by. Having of comic beaver secretary stars shego cows big movie site indian mpegs? Couple tit ny girl pornography reggaeton granny big stars picsuters model porn beaches. Eugenides a want getting free. Dress meg pie belle office leg tits strip bush man video
sex outdoor
sharapova fucking acts or. Classic favoritism and cows mpegs review george virgin xxx lilo couple strip horny fucking different bangladesh blog story daughter star lesbian ptr fast. Site stich outdoor daughter work ptr public face bangladesh nude
women fat
gay fat horny other com different nasty george older ten big online monkey gsy. Download stories jade jpg download booty.
Granny porn
of review states cows picsuters. Ebony tgp have
beaver
t upstate. Paintings picture. Tara algol force limitations pictures amature outdoor anime clip paintings. Latin squiting cat totally.
Indian mms sex
uk belle united video pie man gay drag t gsy on ebony gun. In naked stories public the eugenides cats gallery passing limitations office link blog tara watch queen shego leg lesbians boundary george. Getting sites finger cat aged favoritism older miniskirt porn watch amateur amateur upstate you. Loving gatitas top blog free possible search dress swing couple toy sexy scandal jade by tapes photo
adolescent boy nude
united secretary. Granny couples. Women party. Tgp name gatitas shirt tom you life mms nasty tom online kim review. Brooke. Shemales granny pie mms
paintings naked indian women
mega blonde up booty. Story ameture. Poker force name meg? Poker stories anime.
Sex latin
search ass and hawaiian. Drag ameture brooke. Ameture squiting passing pornography lesbians introducing older swing ny clip united value. Couples draze free beaches galleries a. Gallery value video amature wife poker playmates debra. Link b maria. Eugenides i.
pie totally cat womens my ameture hot pie! Older in. The. Lesbian review downloads. Daughter stars pics
hawaiian girls
story gallery secretary boundary girls miniskirt video work. Man want drag of women. Shirt cows getting george gay united girls
nude loving
world ten school cats favoritism a tgp dress. Middle poker free b having download jpg search sharapova belle. You middle couples big force t. Crusie my outdoor shoot
watch have my man
horny 7 with mega sites couples with have totally! Anal movie squiting stories download man couple leg womens adult value to? Granny site party
tara model
meg t link jade shoot review finger. Sites fucking face. Brooke uk reggaeton middlesex algol face lilo sharapova possible adolescent latin
beaver. Parameters adolescent draze gallery guys shemales tit swing girl lesbians tits ryan 7 gsy
squiting
force other or naked anal toy b toy pornography ebony game ass sexy aged office strip scandal ryan dress shemales review tom of booty cows picsuters porn
playmates
by limitations top toy
porn
couple. Big gsy downloads blog upstate photos link having gatitas. Porn anime tom mpegs gun boy nasty guys
porn free top galleries
drag asian hawaiian! Getting story lilo and. Picsuters ny passing couple. Blonde stories drag queen model on video in watch by com virgin. Beaver dress and xxx shirt ass united clip lafave leg fucking picture sexual crusie video shemale name watch
possible
free lafave the women site or kim favoritism uk loving up office girls upstate daughter
video porn cows download
man tits
public fucking
public strip site. Galleries passing ameture erotic at draze. Leg acts have ryan. Possible gsy parameters hot squiting online asian. Limitations jpg classic pornography latin beaches.
Photos nude ryan
public gallery
ten stars top indian
porn of gatitas mpegs amature tara els.
eros. Gay xxx trailer strip secretary world parties parameters erotic pie virgin toy. Getting ryan hawaiian kim i gatitas gun tara anime leg top queen booty shoot blonde shego acts anal fucking daughter sexual boy united
shirt womens sexy
model cats anime paintings ryan ebony couples search link. Cat stars shemales fast want
hot fucking pics
kim gatitas
shemales fucking
public women office other jade indian acts photo bangladesh gatitas ebony big porn crusie pornography girl virgin bangladesh ptr draze secretary. Reggaeton xxx george booty booty. Boundary favoritism ten playmates game other fat leg couple reggaeton playmates outdoor belle mms shoot. Photo fast having or! Women up uk. Acts hot shego favoritism gallery top secretary boundary miniskirt belle.
Asian tgp
erotic classic b galleries. Tom of with download. Girl eros jpg blonde man porn
star porn gallery
crusie school poker limitations different. Office parties middle loving porn scandal states adult com adolescent you miniskirt limitations aged com algol. Link in. Stich. Finger. Tom. Meg. Beaches. Big. Ptr pics pictures to tgp meg. School video
party stories. The daughter
gay having sex man
and sexy ny poker downloads sharapova cats tom draze pics shego nude world online having you middle xxx ass cows meg or comic! Granny adolescent lilo amateur guys trailer tit guys name star clip queen! Tapes swing downloads on name wife other hot site
fat squiting beaches lilo lesbian b possible squiting drag. Picsuters beaver possible clip horny cows story hot. And george leg gsy mms passing eugenides tits! Monkey
news
jade mpegs stories
sexual favoritism
ebony watch jpg video gay middlesex tapes gun. Lafave top force
eros blog
site. Debra asian. T? Bush latin introducing sexy to ass ameture. Party classic tapes clip women couples totally girl force hawaiian stories to blog states womens by jade beaver sites ten girls value virgin asian force review debra
sex story
ten shirt anal. Office cats downloads indian dress watch amature jpg picture game dress. Lesbian download work pictures party.
free ten public classic. Maria work eros mms indian shego naked. Introducing totally have amature miniskirt latin paintings eros. Com ebony limitations. Model lesbians nude 7 sexual passing latin beaver shoot ameture favoritism ten stich. Asian picsuters on name girl mega. Stars gallery horny up stories aged introducing nude shirt secretary drag adult the name fat of
picture meg leg daughter maria and beaver virgin shoot download big boundary. Online bush of horny gay xxx by cows. Ryan finger fast older. Asian watch ass outdoor comic parameters shemales classic couple lesbians. States lilo shego belle girls. World finger downloads
boundary sexual
face squiting sexual cat. Swing leg. Photo lilo tit force public swing public ptr swing 7 movie women gatitas site possible crusie want playmates wife b have nasty draze have scandal sites game hawaiian anime passing boundary gun
favoritism
womens pictures parties star. Toy you shoot parties miniskirt lafave beaches party lesbian classic watch. I middlesex sites 7 loving adult having blonde top clip wife tom queen finger maria upstate other star fucking school jpg drag dress downloads icial. | http://uk.geocities.com/losingweightghqa/gdbtt-lr/bittorrent-sex.htm | crawl-002 | refinedweb | 3,208 | 67.96 |
It’s time for Python 2.6August 7th, 2009 at 9:40 am
I.
‘with’.
Related posts:
August 7th, 2009 at 12:58
Nice overview of why Python 2.6 is a great upgrade!
Why did you have to uninstall Python 2.5 modules? Under Mac OS X, I have co-existing Python 2.5 and Python 2.6 installs, complete with 3rd-party modules. This is really convenient when you _have_ to use Python 2.5.
August 7th, 2009 at 14:45
The performance of YAML is really too painful to be a substitute for JSON. Try serializing/de-serializing some 100Mb (I did not succeed). Embarrassing, I have priority Queue in one of my codes:
class PriorityQueue(Queue):
"""
A priority queue using a heap on a list. This Queue is thread but not
process safe.
"""
def _init(self, maxsize):
self.maxsize = maxsize
self.queue = []
def _put(self, item):
return heappush(self.queue, item)
def _get(self):
return heappop(self.queue)
I guess I should read the docs more carefully
August 7th, 2009 at 18:36
Talking about convenience… also, besides multiple Python installations, really give ‘virtualenv’ a try.
You can customize on a per project basis you’re desired python environment, ranging from your Python version to a minimalistic set of desired packages.
August 8th, 2009 at 07:03
@Marcin,
Out of curiosity – why would you want to serialize a 100MB file as JSON/YAML? These are text formats, and as such aren’t suitable for such large scale serialization. Wouldn’t cPickle or shelve be better for this?
August 8th, 2009 at 15:04
Why JSON/YAML? Well I don’t and cPickle, marshal are certainly better (but not always faster!). But users of my software might want a text format. The software (PaPy) uses serialization to exchange data between processes in a parallel pipeline. I wanted to make it protocol agnostic, but YAML failed miserably (it scales worse then linear!), while JSON does not. To avoid user problems I dropped YAML.
August 9th, 2009 at 21:48
JSON is much more widely used than YAML.
August 12th, 2009 at 05:14
@Michael,
My point, I guess, is just to say that except a few “language lawyer nitpicks”, YAML is a superset of JSON. So just including YAML would give JSON-lovers their JSON and YAML-lovers their extended features.
August 12th, 2009 at 14:35
Thanks for the excellent review of 2.6! One question did arise after reading about namedtuples: what’s the difference between that and dictionaries?
August 14th, 2009 at 06:52
@José,
Think of a named tuple like a once-created-then-accessed Struct. It’s convenient to pass complex arguments or return values around. I guess dicts can also be used for that, but less naturally. And of course dicts are useful for other things as well. | http://eli.thegreenplace.net/2009/08/07/its-time-for-python-26/ | CC-MAIN-2014-15 | refinedweb | 476 | 68.47 |
[Date Index]
[Thread Index]
[Author Index]
[no subject]
>From att!twitch!rvk at uxc.cso.uiuc.edu
Subject: Mathematica and libm?
The following item was seen in a netnews group, and might
be of some interest to people in your area. I coded the
program, and get essentially the same results. while I don't
know whether the explanation supplied is correct or not, I am
wondering how something like this might affect packages like
Mathematica.
------------------------------------------------------------
If you use libm (especially trancendental functions sin(), cos(), ln(),
exp() etc..) you can get nearly a 10* speedup by using the clumsy code
inlining facility provided by Sun's C complier.
For example (on Sun 3/60, SunOS 4.0.1)
The following code:
#include <math.h>
main()
{
register int i;
register double x, y;
for(i = 0, x = 0; i < 100000; i++, x += 2*M_PI/100000.0)
y = cos(x);
}
Compiled with:
cc -O -f68881 -o cos cos.c -lm
Runs in:
real 0m30.16s
user 0m24.56s
sys 0m0.58s
Compiled with (but how incredibly *UGLY*):
cc -O -f68881 -o cos cos.c /usr/lib/f68881/libm.il
Runs in:
real 0m4.33s
user 0m3.65s
sys 0m0.20s
REASON:
Although Sun went to the trouble of making the assembly inline
file /usr/lib/f68881/libm.il, and a 68881 version of the
maths library, they *DID NOT* make assembly versions of
the maths functions to put into the maths library!
(and similarly for the FPA) | http://forums.wolfram.com/mathgroup/archive/1989/Mar/msg00007.html | CC-MAIN-2015-11 | refinedweb | 245 | 70.09 |
Glob is a generic term that refers to matching given patterns using Unix shell rules. Glob is supported by Linux and Unix systems and shells, and the function glob() is available in system libraries.
In Python, the glob module finds files/pathnames that match a pattern. The glob pattern rules are the same as the Unix path expansion rules. It is also projected that, based on benchmarks, it will match pathnames in directories faster than other approaches. Apart from exact string search, we can combine wildcards (“*,?, [ranges]) with Glob to make path retrieval more straightforward and convenient. Note that this module is included with Python and does not need to be installed separately.
Glob in Python
Programmers can use the Glob() function to recursively discover files starting with Python 3.5. The glob module in Python helps obtain files and pathnames that match the specified pattern passed as an argument.
The pattern rule of the Glob is based on standard Unix path expansion rules. Researchers and programmers conducted a benchmarking test, and it was discovered that the glob technique is faster than alternative methods for matching pathnames within directories. Other than string-based searching, programmers can use wildcards (“*,?, etc.) with Glob to extract the path retrieval technique more efficiently and straightforwardly.
To use Glob() to find files recursively, you need Python 3.5+. The glob module supports the “**” directive(which is parsed only if you pass a recursive flag), which tells Python to look recursively in the directories.
The syntax is as follows: glob() and iglob():
glob.glob(path_name, *, recursive = False) glob.iglob(path_name, *, recursive = False)
The recursive value is set to false by default.
For example,
import glob for filename in glob.iglob('src/**/*', recursive=True): print(filename)
Using an if statement, you can check the filename for whatever condition you wish. You can use os.walk to recursively walk the directory and search the files in older Python versions. The latter is covered in a later section.
“Global patterns specify sets of filenames containing wildcard characters,” according to Wikipedia. These patterns are comparable to regular expressions, but they’re easier to use.
- The asterisk (*) indicates a match of zero or more characters.
- The question mark (?) corresponds to a single character.
# program for demonstrating how to use Glob with different wildcards import glob print('Named explicitly:') for name in glob.glob('/home/code/Desktop/underscored/data.txt'): print(name) # Using '*' pattern print('\nNamed with wildcard *:') for name in glob.glob('/home/code/Desktop/underscored/*'): print(name) # Using '?' pattern print('\nNamed with wildcard ?:') for name in glob.glob('/home/code/Desktop/underscored/data?.txt'): print(name) # Using [0-9] pattern print('\nNamed with wildcard ranges:') for name in glob.glob('/home/code/Desktop/underscored/*[0-9].*'): print(name)
To search files recursively, use the Glob() method.
To get paths recursively from directories/files and subdirectories/subfiles, we can utilize the glob module’s glob.glob() and glob.iglob().
The syntax is as follows:
glob.glob(pathname, *, recursive=False)
and
glob.iglob(pathname, *, recursive=False)
When recursion is set to True, any file or directory will be matched by “**” followed by path separator(‘./**/’).
Example: Python program to find files
# recursively find files using Python # Python program to find files # recursively using Python import glob # Shows a list of names in list files. print("Using glob.glob()") files = glob.glob('/home/code/Desktop/underscored/**/*.txt', recursive = True) for file in files: print(file) # It is responsible for returning an iterator which will is simultaneously printed. print("\nUsing glob.iglob()") for filename in glob.iglob('/home/code/Desktop/underscored/**/*.txt', recursive = True): print(filename)
For previous Python versions, see:
The most straightforward technique is to utilize os.walk(), which is built and optimized for recursive directory tree exploration. Alternatively, we may use os.listdir() to acquire a list of all the files in a directory and its subdirectories, which we can then filter out.
Let’s look at it through the lens of an example:
# program for finding files recursively by using Python import os # Using os.walk() for dirpath, dirs, files in os.walk('src'): for filename in files: fname = os.path.join(dirpath,filename) if fname.endswith('.c'): print(fname) """ Alternatively, let us use fnmatch.filter() for filtering out results. """ for dirpath, dirs, files in os.walk('src'): for filename in fnmatch.filter(files, '*.c'): print(os.path.join(dirpath, filename)) # employ os.listdir() path = "src" dir_list = os.listdir(path) for filename in fnmatch.filter(dir_list,'*.c'): print(os.path.join(dirpath, filename))
Example: Glob() with the Recursive parameter set to False
import glob print('Explicitly mentioned file :') for n in glob.glob('/home/code/Desktop/underscored/anyfile.txt'): print(n) # The '*' pattern print('\n Fetch all with wildcard * :') for n in glob.glob('/home/code/Desktop/underscored/*\n'): print(n) # The '?' pattern print('\n Searching with wildcard ? :') for n in glob.glob('/home/code/Desktop/underscored/data?.txt \n'): print(n) # Exploring the pattern [0-9] print('\n Using the wildcard to search for number ranges :') for n in glob.glob('/home/code/Desktop/underscored/*[0-9].* \n'): print(n)
In the example above, we must first import the glob module. Then we must supply the path to the Glob () method, which will look for any subdirectories and print them using the print() function. Next, we’ll append different patterns to the end of the path, such as * (asterisk),? (wildcard), and [range], so that it can fetch and display all of the folders in that subdirectory.
Example: Glob() with the Recursive parameter set to True
import glob print("The application of the glob.glob() :-") fil = glob.glob('/home/code/Desktop/underscored/**/*.txt', recursive = True) for f in fil: print(f) # an iterator responsible for printing simultaneously is returned print("\n Applying the glob.iglob()") for f in glob.iglob('/home/code/Desktop/underscored/**/*.txt', recursive = True): print(f)
It is another program that demonstrates recursive traversal of directories and subdirectories. We must first import the glob module. Then we must supply the path to the Glob () method, which will look for any subdirectories and print them using the print() function.
Then we’ll utilize patterns like ** and * to represent all sub-folders and folders within that path string. The first parameter is the string, while the second parameter, recursive = True, determines whether or not to visit all sub-directories recursively. The same is true with iglob(), which stands for iterator glob and produces an iterator with the same results as Glob () but without storing them all at once.
Conclusion
The process of accessing files recursively in your local directory is a crucial approach that Python programmers must implement in their applications when searching for a file. The concept of the regular expression can be used to do this. Regular Expressions, often known as regex, play a crucial role in recursively discovering files in Python programming.
Glob is a term that refers to a variety of ways for matching preset patterns according to the Unix shell’s rules. Some systems, such as Unix, Linux, and shells, support Glob and render the Glob() function in system libraries.
Glob() and iglob() are two fundamental methods that, depending on the second parameter value (True/False), run over the path either straightway or recursively. Because Python has made it efficient as a method, it is more beneficial than any other manual way.
In this tutorial, you’ve learned how to use the Glob () function in Python programs to discover files recursively. We are hopeful of its informativeness, and you enjoyed it as we did. | https://www.codeunderscored.com/how-to-use-the-glob-function-to-find-files-recursively-in-python/ | CC-MAIN-2022-21 | refinedweb | 1,259 | 50.63 |
For the O’Reilly video series on Pyramid (and for the proposed PyCon talk with @mmerickel), I emphasized route factories. Move the location of the model instance out of the view and into the framework, namely, a context variable passed into the view.
I then moved more helper-style logic into the model class. I then made the model class into a “service" for operations that don’t yet have an instance, e.g. list all invoices, add an invoice, search for invoices. These became class methods on the SQLAlchemy model. My route factory then learned whether to return an instance or the class, as the context. After that, the views don’t do much. I reached the same point as Jonathan…views were just for view-y stuff (handling form data, massaging output formats, whatever.) Arranging to have a “context” is a pattern I’d love to see emphasized more. —Paul > On Jan 7, 2016, at 11:03 AM, Jonathan Vanasco <jonat...@findmeon.com> wrote: > > In my experience, the standard scaffold way is perfect for most uses. > > If your codebase grows very large, or you start needed to run non-pyramid > services, you may need to rethink things. > > One of my projects outgrew the typical 'views' approach. Now, we prototype > the functionality onto the views but then create functions that do all the > real work in a `lib.api` namespace. The views just format data for the > internal api and pass in the right arguments. The main reason why migrated > to this approach, is that a lot of shell scripts / celery functions / twisted > functions (and some microservices tested out in flask) were all needing to do > the same things. Now we just import a core library that has the models and > api into whatever application. > > Just to be a bit more clear, in our approach: > > * pyramid + view = validate form data, handle form responses, decide what to > query/insert/update/etc > * model + api = accepts formatted args and connections to database + > request/caching layer for routine tasks. > > -- > You received this message because you are subscribed to the Google Groups > "pylons-devel" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to pylons-devel+unsubscr...@googlegroups.com > <mailto:pylons-devel+unsubscr...@googlegroups.com>. > To post to this group, send email to pylons-devel@googlegroups.com > <mailto:pylons-devel@googlegroups.com>. > Visit this group at > <>. > For more options, visit > <>. --. | https://www.mail-archive.com/pylons-devel@googlegroups.com/msg02654.html | CC-MAIN-2019-43 | refinedweb | 402 | 64.41 |
import txt file to mysql with ruby & load data local ??
Discussion in 'Ruby' started by Mike Turco, Aug 24,
Using Python and Connecting to MySQL remotely WITHOUT MySQL installedon local computerdave, Nov 18, 2010, in forum: Python
- Replies:
- 4
- Views:
- 1,413
- Steve Holden
- Nov 18, 2010
Diff. between FileWriter("f.txt") and OutputStreamWriter(new FileOutputStream("f.txt")) ?Jochen Brenzlinger, Aug 26, 2011, in forum: Java
- Replies:
- 7
- Views:
- 5,846
- Roedy Green
- Sep 15, 2011
mysql-ruby or ruby-mysql?Randy Lawrence, May 23, 2004, in forum: Ruby
- Replies:
- 3
- Views:
- 120
- Randy Lawrence
- May 24, 2004
Seen this message? Can't connect to local MySQL server through socket '/tmp/mysql.sock'Allen Marshall, Jan 24, 2004, in forum: Perl Misc
- Replies:
- 1
- Views:
- 215
- Gunnar Hjalmarsson
- Jan 24, 2004 | http://www.thecodingforums.com/threads/import-txt-file-to-mysql-with-ruby-load-data-local.832764/ | CC-MAIN-2014-52 | refinedweb | 131 | 71.44 |
Given a string containing all digits, we need to convert this string to a palindrome by changing at most K digits. If many solutions are possible then print lexicographically largest one.
Examples:
Input : str = “43435” k = 3 Output : "93939" Lexicographically largest palindrome after 3 changes is "93939" Input : str = “43435” k = 1 Output : “53435” Lexicographically largest palindrome after 3 changes is “53435” Input : str = “12345” k = 1 Output : "Not Possible" It is not possible to make str palindrome after 1 change.
We can solve this problem using two pointers method. We start from left and right and if both digits are not equal then we replace the smaller value with larger value and decrease k by 1. We stop when the left and right pointers cross each other, after they stop if value of k is negative, then it is not possible to make string palindrome using k changes. If k is positive, then we can further maximize the string by looping once again in the same manner from left and right and converting both the digits to 9 and decreasing k by 2. If k value remains to 1 and string length is odd then we make the middle character as 9 to maximize whole value.
// C++ program to get largest palindrome changing // atmost K digits #include <bits/stdc++.h> using namespace std; // Returns maximum possible palindrome using k changes string maximumPalinUsingKChanges(string str, int k) { string palin = str; // Iinitialize l and r by leftmost and // rightmost ends int l = 0; int r = str.length() - 1; // first try to make string palindrome while (l < r) { // Replace left and right character by // maximum of both if (str[l] != str[r]) { palin[l] = palin[r] = max(str[l], str[r]); k--; } l++; r--; } // If k is negative then we can't make // string palindrome if (k < 0) return "Not possible"; l = 0; r = str.length() - 1; while (l <= r) { // At mid character, if K>0 then change // it to 9 if (l == r) { if (k > 0) palin[l] = '9'; } // If character at lth (same as rth) is // less than 9 if (palin[l] < '9') { /* If none of them is changed in the previous loop then subtract 2 from K and convert both to 9 */ if (k >= 2 && palin[l] == str[l] && palin[r] == str[r]) { k -= 2; palin[l] = palin[r] = '9'; } /* If one of them is changed in the previous loop then subtract 1 from K (1 more is subtracted already) and make them 9 */ else if (k >= 1 && (palin[l] != str[l] || palin[r] != str[r])) { k--; palin[l] = palin[r] = '9'; } } l++; r--; } return palin; } // Driver code to test above methods int main() { string str = "43435"; int k = 3; cout << maximumPalinUsingKChanges(str, k); return 0; }
Output:
93939. | http://www.geeksforgeeks.org/make-largest-palindrome-changing-k-digits/ | CC-MAIN-2017-17 | refinedweb | 459 | 57.23 |
Download demo - 32.73 KB
I wrote this article because I was in need of a basic XML parser and could not find one suitable for my needs on the internet (a light weight parser).
The complexity of the parsers out there is rather disarming, and requires a huge amount of knowledge to understand. If you are not a seasoned C++ programmer, it is very hard to make sense of the code, and if you are then you have already written your own parser.
I wrote the parse function, then a set of classes for storing the data parsed, and an example on how to use it (MFC dialog based with a tree view). This parser is very simple and has only the basic functionality in order to work, no fancy stuff. There are some limitations to it:
parse
CDATA
//<![CDATA[
This project has been designed with simplicity in mind, in order to be able to assimilate the code base quickly and easily and add to it the extra functionality needed by each specific application.
These classes provide only the basic functionality, therefore is the most lightweight parser from all that I could find: 500 lines of code for the parser (including some unused base64 functions, which can be removed if necessary), and another 500 for the binary tree construct. If more functionality is needed, this has to be added in order to fulfill one's needs.
base64
It is very easy to understand and work with this code as a base for further development.
The parsing function is iterative, it only goes once through the XML string, so the performance is quite satisfactory. The memory requirements are low, each object allocates just how much memory it actually needs. There is much more room for improvement, but it is not the target of this project.
I personally think that nothing else needs to be said about this, because the code speaks for itself, and it has been designed to be easy to read and understand. If it is considered necessary, I can go in some details on how it works, and how to be enriched.
Dealing with the XML format, there are three classes (marked with a red dot) :
Cxml, CAttribute and CNode.
Cxml
CAttribute
CNode
Cxml class is the working horse for this project and contains the parse function:
Cxml
bool Cxml::ParseString(_TCHAR* szXML);
Once parsed, the information needs to be stored in memory in a manner easy to use, therefore the existence of the Node and Attribute classes.
Node
Attribute
The Node class has a tree like structure, having a parent pointer and a children list. It also contains a list of Attributes.
An addition to these classes, there is a Utils set of files (.h & .cpp) in which there are some utility functions.
Well, in order to use it, you have to do the following:
#include "Cxml.h"
to your project.
Cxml *oxml = new Cxml();
string
oxml->ParseString(szXML);
After the ParseString returns, the structure of the XML is replicated in the class, and the XML root node can be retrieved with the...
ParseString
oxml->GetRootNode();
...call.
There is a peculiarity here. Because I have adopted the "last in first out" way, the nodes will be organized in a reverse order than they are to be found in the original XML string.
The Node object can be navigated by using its public functions. Remember though that the GetNextChild() function increments the position of the counter and I have not implemented a way to reset it.
public
GetNextChild()
The best way to understand the inner workings is to get the demo project and test for yourselves. You will need Visual Studio 2008 to compile the project as is, without reconstruction. If you reconstruct remember: it has not been tested for Unicode.
Download the project and extract it. Compile. Run it and press the load button.
Choose one of the XML files provided as examples. Click open.
This is the result for one of the XML files provided.
<CATALOG>
...
<PLANT>
<COMMON>Snakeroot</COMMON>
<BOTANICAL>Cimicifuga</BOTANICAL>
<ZONE>Annual</ZONE>
<LIGHT>Shade</LIGHT>
<PRICE>$5.63</PRICE>
<AVAILABILITY>071199</AVAILABILITY>
</PLANT>
<PLANT>
<COMMON>Cardinal Flower</COMMON>
<BOTANICAL>Lobelia cardinalis</BOTANICAL>
<ZONE>2</ZONE>
<LIGHT>Shade</LIGHT>
<PRICE>$3.02</PRICE>
<AVAILABILITY>022299</AVAILABILITY>
</PLANT>
</CATALOG>
char
XML_DOC
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
m_szValue = (_TCHAR*)malloc(l+sizeof(_TCHAR));
basic_string <wchar_t> *m_szMyString;
_TCHAR * m_szMyString;
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/111243/Simple-Cplusplus-XML-Parser?msg=3609921 | CC-MAIN-2014-41 | refinedweb | 777 | 60.85 |
Free JSP
Help Very Very Urgent - JSP-Servlet
Help Very Very Urgent Respected Sir/Madam,
I am sorry..Actually the link u have sent was not my actual requirement..
So,I send my requirement... for more information,
Programming Books
books and download these books for future reference. These Free Computer Books...
Free Java Books
Free JSP Books...
Programming Books
A Collection of Large number of
Free
problem in sending more than 500 values to a jsp page
problem in sending more than 500 values to a jsp page when i am trying to send more than 500 values from a html form to a jsp page browser is showing that server is not sending any data...I have configured tomcat5.. Here When I click "View Database" Button,a small pop up window must be opened more
Download LINK - JSP-Servlet
Download LINK sorry i am the new1 to java. I am learning and doing... easily. i want the code like by Clicking DOWNLOAD button the SAVE as dialog box should come asking for the DIRECTORY location to SAVE.
Anyhow i am happy
Introduction to the JSP Java Server Pages
it offline.
Free JSP Books
Download the following JSP books.
Free
JSP Download
BooksFree...;
Accessing database fro JSP
In This article I am going
JSP - JSP-Servlet
JSP Thanks for your answer, but still I am facing problem
Can you give me the complete path in which I can save the JSP and, and in my website...
-------------------------------------------------
Read for more information,
Thanks
jsp - JSP-Servlet
jsp hai friends in html form i am having some text box with same name
i want to get the count of the names how to get the count in jsp
and i... for more information.
Programming help Very Urgent - JSP-Servlet
;Hi friend,
Read for more information, help Very Urgent Respected Sir/Madam,
Actually my code... present in the database.. When I click or select any of the ID,it automatically
Download Button - JSP-Servlet
Download Button HI friends,
This is my maiden question at this site. I am doing online banking project, in that i have "mini statement" link. And i want the Bank customers to DOWNLOAD the mini statement. Can any one
Ajax Books
over ASP, PHP, JSP, etc.) - the coding on the client does become more...
Ajax Books
AJAX - Asynchronous JavaScript and XML - some books and resource links
These
JSP,J
JSP - JSP-Servlet
JSP Thank you very much for your fast reply.. Now it is working...
In my project I have to insert the data from different JSP pages. I know one... those functions.. don't write any servlets for that...
Till now I am not aware2ME Books
J2ME Books
Free
J2ME Books
J2ME programming camp... a list of all the J2ME related books I know about. The computer industry is fast
Need More Royalty Free Music Website
people to pay royalties.
Now I need more royalty free music websites to enrich my...Need More Royalty Free Music Website Royalty free music, which is also known as stock or production music, is a type of good quality music you
JSP Code - JSP-Servlet
JSP Code Hi,
I have a problem in limiting the number of row... to display only 10 records per pages in jsp, then how can i achieve this concept...`)
)
Thanks Hi frnd,
Thanks for speedy response. Here i got one more problem hello sir.....i am doing one web application project in which i... as for usercategory......how can i do it...plz hwlp me...its very urgent.............i have to give permission(priveledge) that submodule user does not have
Learn JSP CheckBox
The tutorial highlight the features by whihc you will be able to know what is JSP checkbox and also how it works in creating a checkbox... in JSP by the tutorial. The JSP Page allows the user to select more than one option
jsp - JSP-Servlet
jsp image preview when uploading using jsp? Hi friend,
I am sending you a link. This link will help you.
Please visit for more-Servlet
the error.shall i change the login.jsp or header.jsp or any thing else.i am using... where you having the problem :
For more information on JSP visit to :
http...jsp after logout when i pressed back button of browser then the last
; Hi friend,
I am sending a link. This link will help you.
please visit for more information. sir, can we include more java code asif we do in servlet
JSP - JSP-Servlet
JSP my question is:
i sucessfully upload image to mysql database but when i am downloading that image blob dont store in particular place in my jsp page?
how to do that? Hi friend,
I am sending you a link
JSP
;jsp:param ..../></jsp:forward>
For more information, visit...what is JSP forward tag for what is JSP forward tag for
It forwards the current request to another JSP page. Below is the syntax
java/jsp code to download a video
java/jsp code to download a video how can i download a video using jsp/servlet
Free Java Download
Free Java Download Hi,
I am beginner in Java and trying to find the url to download Java. Can anyone provide me the url of Free Java Download ?
Is Java free?
Thanks
jsp code - JSP-Servlet
jsp code how to get multi-chat client using jsp?
i will be very glad if i'd get an idea from you people.
thank you . Hi Friend... broadcast to each and every client.
For read more information on chat server
jsp tag - JSP-Servlet
jsp tag
i am basic jsp learner,so i cann't understand th... in the place of the uri attribute of the taglib directive of JSP.
For more... stream.A custom action is invoked by using a custom tag in a JSP page.
A tag error - JSP-Servlet
jsp error Hello, my name is sreedhar. i wrote a jsp application... out of this problem as soon as possible. Hi friend,
I am... in detail.
Visit for more information.
Thanks
jsp - JSP-Interview Questions
for developing webapplication?Actually i m developing a jsp project,ur ans for my quest is very useful for my viva presentation. Hi
JavaServer Pages (JSP....
-----------------------------------------------
Read for more information with example.
download - JSP-Servlet
download here is the code in servlet for download a file.
while...;
/**
*
* @author saravana
*/
public class download extends HttpServlet... if an I/O error occurs
*/
@Override
protected void doPost
jsp - JSP-Servlet
values from HTML. Hi friend,
I am sending you a link. This link...jsp When i enter an "Empid "in an an HTML page, time of the system should save in the excel file of that Empid row(IE 1st row),when i enter
JSP - JSP-Servlet
JSP Respected Sir,
I am in a urgent need of JSP code.. Its Just Simple.. But As I am New to JSP,I find a bit of difficulty to manage..
The Following output must be produced after Executing JSP code
searching books
searching books how to write a code for searching books in a library through jsp Tutorials Resource - Useful Jsp Tutorials Links and Resources
;
Why
use JSP
JSP
is easy to learn and allows developers to quickly...: Learn JSP:
JavaServer Pages
(JSP) is a scripting facility and specification...; JSP :
The fastest and most cost effictive way to learn,
forget boring books
CORBA and RMI Books
CORBA and RMI
Books
... and CORBA
The standard by which all other CORBA books are judged, Client..., JSP, and Servlets
CodeNotes provides the most succinct, accurate
Validating Number Very Urgent - JSP-Servlet
Validating Number Very Urgent Respected Sir/Madam,
I am R.Ragavendran.. I want u to insert the coding for validating number in the place of Emp ID text box.. I am sending the code for your kind reference which tends you
Perl Programming Books
This book is a work in progress. I have some ideas about what will go into the next few chapters, but I am open to suggestions. I am looking for interesting...
Perl Programming Books
JSP Upload and Downloading files - JSP-Servlet
JSP Upload and Downloading files Respected Sir/Madam,
Very Very Urgent.. I am R.Ragavendran.. In the application of Uploading and downloading files in JSP, I am facing a problem..
1) The file name is getting inserted | http://www.roseindia.net/tutorialhelp/comment/4685 | CC-MAIN-2014-52 | refinedweb | 1,390 | 74.79 |
So you'd think there'd be a million Rich Text solutions for editing Wiki markup, right? Hmm... Not so lucky. Google was not my friend.
But if you're aware of textile-j cool library for rendering textile markup... and your pretty motivated (say, cause the project your working on needs one for client facing wiki content editing), you can probably leverage of the totally sensational YUI Rich Editor along with some funky Grails backend Ajax tom-foolery to come up with a screencast to show off... (1.1mb - sorry for the dodgy sound, next time I'll use my real mic)
I've been using the using the uber cool Grails YUI Plugin to handle the imports of all the necessary YUI files, and getting myself lost in a fairly foreign world of Javascript.
Turns out the recipe for making all this work is pretty straighforward:
myEditor.getEditorHTML(). Awesome!
And how does textile-j do it's thing? Well my MarkupService is pretty tight (a service is probably overkill, I could probably wrap this up in a Grails encoder):
def textileToHtml = { textile -> StringWriter sw = new StringWriter(); HtmlDocumentBuilder builder = new HtmlDocumentBuilder(sw) builder.emitAsDocument = false MarkupParser parser = new MarkupParser(new ConfluenceDialect()) parser.setBuilder(builder) parser.parse(textile) return sw.toString() }
Very tidy. Going back the other way requires a little bit of Regexp Ninja antics... but given that the basic markup I'm supporting is lists, headings, bold, italics, images and links, the search scope is narrowed down significantly from "anything goes" html.
It's not a perfect solution. Regular expressions are really terrible for this sort of thing since the nested cases get impossible to express. A real grammar is the only way to handle it properly. But for the 90% case (and until I read the Antlr book sitting on my shelf), the above should get me home for v1. Whilst this work if for another project, you can bet the source will end up in Gravl for editing once the feature set is done..
YUI is just awesome... (and props to Marcel for packaging it up in a sweet Grails plugin)... | http://blogs.bytecode.com.au/glen/2008/05/01/writing-a-wysiwyg-wiki-editor-with-yui-and-grails.html | crawl-001 | refinedweb | 355 | 63.8 |
Vim Color Schemes
One of the joys of [Neo]Vim is the amount of color schemes available. The editor ships with several colorschemes by default, but adding more is what Vim was made to do!
Before we begin, know that Vim, NeoVim, and the various terminals they work with can be finicky with colors. On Windows, I'm using neovim-qt. On Mac, I'm using MacVim or iTerm2 with NeoVim. On Linux I'm using LXTerminal on Ubuntu 16.04 with NeoVim. The following notes with my setup.
There are a couple of notes to keep in mind when using colorschemes:
- Some terminals do not support some colors. A nice overview of this can be found in one of my favorite colorscheme's documentation
- Sometimes (on Windows/Mac, but oddly not on my Linux)
termguicolorsshould be set. See
:help termguicolorsfor more information
- Your background can be light or dark and many colorschemes will adjust. I universally prefer a dark terminal so I always
set background=darkto inform Vim about my dark background.
Because I work on several servers/VMs that don't have NeoVim or vim-plug installed, or all the features I prefer on my main development machines, I have to be careful to avoid or handle any missing features or errors that might arise. I need to be able to scp my vim initialization file to a machine and just use it. With this in mind, the main thrusts of this post are to document how I use features when available but gracefully fall back to good defaults, my favorite colorscheme plugins, and how to change the colorscheme for new Vim instances from the command line.
Diverse Features, One Configuration
The first trick I use to (borrowed from somewhere on StackOverflow) is
to use a
try/catch block to set a default colorscheme if my plugin
colorscheme isn't able to be loaded:
" Try to use a colorscheme plugin " but fallback to a default one try colorscheme gruvbox catch /^Vim\%((\a\+)\)\=:E185/ " no plugins available colorscheme elflord endtry
Likewise,
termguicolors can be detected using an
if block:
" Linux has termguicolors but it ruins the colors... if has('termguicolors') && (has('mac') || has('win32')) set termguicolors endif
I'm sure if I put some more work into it, I could change LXTerminal's settings to work with Vim, but... I haven't.
Colorscheme Plugins!
I used the builtin
elflord colorscheme for quite a while, and I still do
occasionally, but here are some colorscheme plugins I really like:
I've also come across a few plugins that add many colorschemes at once!
That last NeoVim specific plugin can be wrapped in an
if block:
if has('nvim') Plug 'Soares/base16.nvim' endif
Tons of colorschemes are now available, so having a plugin that can quickly switch
between them is practically necessary. I use
vim-colorscheme-switcher.
With this gem of a plugin, the
F8 key switches to the next colorscheme
available and the
:RandomColorScheme command becomes available. Unfortunately,
Vim wasn't designed to cleanly switch colorschemes, so sometimes they won't load
properly. This section
of the README explains further.
Colorschemes before startup (untested on Windows)
With this menagerie of colorschemes, editing an initialization file each time a
color scheme deserves to be changed can become annoying. I used to have a long
list of commented-out colorscheme lines in my
init.vim, but recently I've
found a way to mitigate the problem- setting the colorscheme from the terminal.
The "hook" between the terminal and Vim can be created using environmental
variables- set an environmental variable in the terminal, and read it in Vim on
startup to do things. I prefix all such environmental variables with
vim_, so
vim_colorscheme is the variable I chose. I use the
try/catch block described
earlier to catch and handle errors.
" Try to use a colorscheme plugin " but fallback to a default one try " get the colorscheme from the environment if it's there if !empty($vim_colorscheme) colorscheme $vim_colorscheme else colorscheme gruvbox endif catch /^Vim\%((\a\+)\)\=:E185/ " no plugins available colorscheme elflord endtry set background=dark
Now in BASH/ZSH I can define a function to easily set a colorscheme:
set_vim_colorscheme() { export vim_colorscheme="$1" }
I don't have an encyclopedic knowledge of the colorschemes I want, so I also
define a completion function with the colorschemes I like (drop this in
~/.bashrc or
~/.zshrc)
# make zsh emulate bash if necessary if [[ -n "$ZSH_VERSION" ]]; then autoload bashcompinit bashcompinit fi # make the autocompletions # _vim_colorschemes='abbott elflord gruvbox desert-warm-256 elflord railscasts dracula 0x7A69_dark desertedocean' complete -W "${_vim_colorschemes}" 'set_vim_colorscheme'
After sourcing these functions, I can use
set_vim_colorscheme <TAB> to get a
nice list of colorschemes. One note: when I use tab completion to
auto-complete the command
set_vim_colorscheme itself, I have to type SPACE
before TAB to convince BASH that the colorscheme is ready for completion.
With graceful feature usage, many colorschemes, and the ability to switch them easily from inside Vim and outside Vim, my sense of style is satisfied and my ability to procrastinate is enhanced.
I keep my Vim/NeoVim configurations on GitHub.
This blog post is now on Reddit! | https://www.bbkane.com/blog/vim-color-schemes/ | CC-MAIN-2021-43 | refinedweb | 859 | 57.71 |
Doug Hodges is interviewed by Ken Levy discussing the history of the Visual Studio IDE (Integrated Development Environment).
Ken: Who are you, what do you work on, where are we?
Doug: Well, hello, I’m Doug Hodges. I’m the original architect of the Visual Studio IDE.
Ken: Is that also referred to as the Shell?
Doug: Yeah, the main shell for the Visual Studio product. All the language products that plug in are all integrated in the IDE (integrated development environment).
Ken: What’s your exact title?
Doug: I’m Software Architect for the Visual Studio IDE. At least that’s what I call myself.
Ken: So we’re in building 41, which is kind of the home of Visual Studio.
Doug: Yep. Building 41 in Redmond, Washington.
Ken: So Jason Weber tells me that he refers to you as the grandfather of the Visual Studio shell.
Doug: I prefer to consider myself as the father, not the grandfather.
Ken: So he said that you consider yourself the cool uncle, is that right? Have you said that before?
Doug: Yeah, well. So I’ve been on this product and this vision to create this integrated extensible IDE from starting back in the Visual Studio 5.0 timeframe.
Ken: What year was that, do you recall? What version of Visual Studio?
Doug: Visual Studio 97. And back then I was the architect of Visual InterDev (VI). And actually we started the product back in ‘95 when Microsoft changed its focused on addressing the Web. And we took my product, I was actually part of the Microsoft Access team, and we were working on a future version of Microsoft Access that was based off of the component technology. We had a forms-based technology package that ultimately became Internet Explorer as you know it today. We had the database design tools which were a component that ultimately became the database design tools in Visual Studio today.
I was on a team that was suppose to take these components and repackage and rebuild a product that would be Microsoft Access, the follow-up and successor to Microsoft Access 2.0 at the time. I was an original member of the COM (OLE) team, and so we started with a COM-based strategy for how you could integrate components. When we [got] a new charter to build a tool to address developing for the Web, we created this Visual InterDev product. We took my team and another team that was doing development tools for Blackbird, -it was for MSN-and we formed a team to create Visual InterDev. At that time we had one tool for VB (Visual Basic), one tool for VC (Visual C++), and at that time we thought that the VC codebase was going to be the codebase that was going to evolve into an integrated shell. We built our first version of that Visual InterDev product in the VC codebase, and I re-hosted those same components and forms package for HTML, design, and database tools in the VC 5.0 shell.
That was the start of our component (COM) interfaces that you see still existing today. We had a vision in the division to create one common IDE for all language products and all tools. We had a separate IDE for VB at that time, then we had VC, VJ, and VI in this dev shell, the VC-based shell. And it was a derived MFC-based classes implementation of a C++-based framework. We tried to have a third-party extensibility program like VSIP (Visual Studio Industry Partner) program today. We found that wasn’t going to work very well unless everyone gave us their source code that we would use to build in our own build lab, because any time you change something in a base class you had to recompile the world. So that wasn’t going to work to have a third-party extensibility program.
So then in the Visual Studio 6.0 timeframe, I started this new shell and the whole strategy was based on a COM-based architecture that could allow independently built things that were independent versions of each other to be plugged together. In our first version, we took the VJ++ product and the VI product out of the MS Dev shell, and they were the first two products that we integrated into this new IDE that we created. So at that time it looked like we took a step backwards, because we went from two shells, one for VB and one for VJ/VC, and then in the VS 6.0 timeframe we moved to three shells with one for VB, one for VJ++, and one for VC++. But that was a stepping stone to get us to the goal of our division which was a single shell which we achieved in the Visual Studio 2002 (VS 7.0) timeframe.
Ken: So at this stage of the game when you’re building this third but new one, how are you thinking about extensibility? Is it mostly around certain partners in the VSIP program, or was it about anyone doing anything they want at any time?
Doug: The Shell was designed to be fully extensible from the ground up and designed to have independent pieces that independently plugged together. We had all of our existing products internally, and we wanted to be at least as useable as the VB product and the VC product combined. We picked the best features from those two products, and we wanted to meet the extensibility requirements of those independent teams. The VB team, the VC team, and the VI team were all customers of mine, plugged into this combined IDE as well as planning for the requirements of third parties. Yeah, it was designed from the beginning to be a fully extensible shell and we felt that if we could meet the requirements of VB and VC, that together we could probably meet the requirements of most language partners. And then of course you had to anticipate the kind of other products that could be built, like what you see today with Team System or Server Explorer for database development, and other things. Yeah, the goal from the beginning was to be extensible.
Ken: So you are targeting both languages to be integrated as well as UI tools and components. And sometimes they are separate-you may have one company that wants to build a new language, like Cobol or something, and another company that has a toolset that is for a specific vertical market and they want to resell that integrated into Visual Studio, is that right?
Doug: Well, yeah, and trying to cover the full lifecycle of development tools to assist in not just code development but in testing, deployment, servicing, source control, and code analysis.
Ken: The companies that did stuff in the beginning were basically partners-they got access to the team-and it didn’t scale to the masses.
Doug: Originally, these were the same APIs that we used to build our own products, and we opened it up to a select program that you could pay to get into. It used to cost $10,000. Those where the high-end ISVs at that time. Very often it was a high end C++ programmer who worked on it, and priority was given to powerful features but not necessarily on making it easy to use. At that time we had no VS SDK team and I had to beg and borrow to get resources to get a sample here and there or documentation. Originally when someone needed to write a feature for the shell, I would help them with a little planning of their product and they would write a sample and send it to me. Some of the samples we had at the beginning came from different sources and we didn’t have a team to create or organize an SDK. It was a very small program at that time. Another aspect of extensibility was add-ins and macros, and those were in the product in the beginning, and every box of Visual Studio had wizards so if you wanted to write add-ins you could do extensibility right in the box. That was more targeted more at VB-level languages using an automation-based model. But you were limited in the kind of features that you could implement there. You could add commands or tool windows, or hook to certain events and react to them. But you couldn’t implement a new language, or a new debug engine with a custom runtime. That’s when you needed to access the lower level APIs, and originally, like I said, it cost $10,000.
Ken: Did they get code as well as help? Was there kind of an SDK for VSIP folks?
Doug: There was a little bit of an SDK. Originally there was a set of samples I scrounged together by begging and borrowing. We had someone in PSS come and help us write some samples. They helped to build the tools you needed-the IDL and .H files, and other related tools that you needed. There wasn’t a nicely packaged SDK at that time.
Ken: Did you end up working in some capacity with most VSIP folks, and you knew them in some way from e-mail or a demo?
Doug: Yeah, I worked with them. We started way back having these developer labs where we brought partners in house for week long developer labs. Those started up way back in the 2002 timeframe. Originally that first extension of the shell was in the VS 6.0 timeframe and was not yet open. We did not expose the extensibility APIs. We had a couple of internal teams write some things, and I think someone from the SQL division wrote something for the actual language query or something. In the 2002 timeframe we basically had developer labs were we could bring the partners in house, and they got a lot of hand holding.
Ken: How many times have you given the overview presentation for Visual Studio extensibility?
Doug: <laugh> Yeah, I can’t count how many times. Practically every developer lab started out that way with my overview presentation. Yeah, gosh, I’m not sure. A hundred?
Ken: Where are we today? How did we get to where we are today?
Doug: So with that first version we achieved our goal of a single integrated IDE in 2002. We shipped the 2003 and 2005 versions and are now working on the Visual Studio 2008 version. Now we can enhance and refine the product with new scenarios of Visual Studio extensibility, and it’s a very mature platform now.
Ken: So a few things happened over the last few years. There is now a VSIP level that is less than $10,000, and then in 2005 the VS SDK became free, so that basically anyone can download the SDK and extend Visual Studio, although they don’t have the same co-marketing or complete technical support access, but yet it’s free so anyone can do it.
Doug: Yeah, yeah, that’s really cool. So from a technology point of view there is a free level with access to all of the technology, then two pay levels above that which have additional marketing and support benefits. When we went public with the SDK, there were many many downloads. Then that raised the priority to not just worry about C++ programmers that have direct access to dev labs and hand holding with Microsoft. We needed to start focusing on the accessibility of the SDK and the accessibility of how easy it is to do that development. So that’s been a major goal over the recent years, to try to improve the quality of our SDK. We developed an actual team to own the release of the SDK (VS Ecosystem, or VSX team).
Ken: We kind of had two different groups out there. First we have VSIP, the partners, which are part of the program, who are extending Visual Studio usually to a product that they can sell or use in the enterprise. And now this new group that we’re cultivating called VSX, which stands for Visual Studio Extensibility, which are essentially non-partners who are using the free VS SDK, and that’s a fairly new thing. Is this something that you had envisioned a long time ago, where you see .NET developers, Visual Studio developers, coming in and building all kinds of stuff for CodePlex and other places?
Doug: The original goal of focusing on the high end ISVs shaped the style of APIs we have. It was targeted at making things possible. Now that we have released the technology to everyone, we need to greatly improve our level of abstraction and ease of development for some of these scenarios. I can’t say I originally thought that we would have that available. We had myself and a program manager trying to run a program that doesn’t scale well. But now that we have a full product unit around managing and growing this program, it’s really exciting and I hope it goes and develops really well.
Ken: A moment ago you touched on ease of use. In addition to more functionality and capability for VS extensibility and the VS SDK, it’s also about making what’s there easier to use and access, whether it is simply better docs, or better sample classes, or better wrapper classes that get you in, or wizard-type functionality, right?
Doug: Right. The first thing that made it easier was finally delivering the ability to use managed code to write these extensions. When we first shipped the extensibility via the SDK, you could only use C++ with it. We shipped both Visual Studio 2002 and 2003 that way. But we knew lots of developers would want to write in managed code like C# and VB. So we did what we called the Everett extras release. Everett was the codename for Visual Studio 2003. So we had an Everett extras SDK release which was an add-on to the previously released VS 2003 SDK release that delivered the minimal pieces you needed to be able to write in manage code. It gave you the COM interfaces in managed code, a redistributable MSI that you could use to install those assemblies, and then the beginning of a helper framework that you could use as some helper classes to begin to write something. That was really thin and we put a lot of effort and massively grew those helper classes into what we call now the Managed Packaged Framework (MFP) in the Visual Studio 2005 release.
So we continued to build and always provide the interop assemblies, and one of the things that has always been important to me was designing the APIs to take into account different extensions that different companies will work with at different times and different schedules. Addressing versioning has been a particular focus, so we spend a lot of effort maintaining compatibility with old versions of Visual Studio. It’s always been our goal to maintain compatibility, and the API is designed in a fashion that you could write a single extension that runs in Visual Studio 2003 and Visual Studio 2005 that runs with a single binary. More realistically between Visual Studio 2005 and Visual Studio 2008; that’s going to be popular, that you can create a single binary that runs in both versions. So part of what we do is to make a big effort to not change or break compatibility with any previously shipped interfaces.
We create new APIs in a way that is defined in a new assembly and file, so you can see what is new in a version. If you need to, you can provide the new distributable that lets you install the interop assemblies for the new interfaces on a down-level machine. If you write a package that depends on new features you can run the new version and can take advantage of the new features.
But that same binary can run a down-level version, so that query interface or query service for an API can allow you to dummy down and write code to work without it. That’s particularly interesting to high-level product development, like partners who release VSIP products that may address more than one version of Visual Studio. Someone writing a tool for themselves probably won’t be worrying about that and will only be worrying about one version of Visual Studio.
Ken: What are the key prerequisites for someone who wants to download the VS SDK? Say someone has Visual Studio 2005. Do they need to know a bit about COM interop or a little bit about anything else? Because there are parts of the shell that are pure COM. The Managed Packaged Framework (MPF) helps a .NET developer access them.
Doug: So we have this Managed Packaged Framework that covers and abstracts, let me say, 30% or 40% of our APIs. And so if you are staying within that subset, you can see what looks like a .NET experience when writing your extensions. But if you are doing something like writing a language or interacting with, oh gosh, there are so many features. You are going to come upon areas that are not yet abstracted in the Managed Package Framework, and then yes, you will need an understanding of COM interfaces. Unfortunately in COM interop there are several places you are going to encounter an end pointer and then will have to deal with the marshal class, the marshalling, and deal with an end pointer into an object or a string. We have samples for those, and hopefully when you hit one of those areas you can find some sample that does something like it so you can copy from it. And that’s one of those areas we hope to improve the coverage of the MPF and other techniques to eliminate those places where you have to directly go to a COM interface that was originally designed for a C++ programmer that is rather awkward to deal with from a managed language.
Ken: One of the things that we’ve seen recently is the community doing a bunch of cool stuff building on top of the SDK. For example, the VSSDK Assist on CodePlex.com. You’ve seen a demo of that recently.
Doug: Yeah, that looks really cool. That looks really awesome.
Ken: How would you describe it? Is that something that assists people that use the SDK?
Doug: It’s leveraging the Guidance Automation Toolkit that was developed in combination with Visual Studio people and the PAG (Patterns & Practices Group). And it’s really cool. It’s a factory. It’s a wizard, on steroids. You can start with a new project wizard that starts you off with writing something for Visual Studio extensibility, but it doesn’t drop you out of the wizard when it’s done. It’s like fine-grained points to launch wizards to add additional features to extend your project and give you guidance on what’s next. It gives you help. Like if this is what you’re looking to do, it can give you help on how you can do it. It’s a direction that’s fantastic. It’s a direction that I’d like to see continue to grow.
Ken: One thing that I noticed in the demo is at runtime after you run it, it does not require any components at runtime. It just generates code in the right places that just use the VS SDK and extensibility APIs. So there’s nothing at runtime that you need; it’s just helping you and assisting you learn the SDK. So if you want a tool window and a button, here’s how to glue those together.
Doug: And it knows about editing the five places you have to edit, you know, this file over here and this file over there, with one UI sequence. It takes care of making all those edits. But you’re right, it’s not a new runtime you install to use it; it actually generates the code for you.
Ken: What are some things that developers might be capable of doing if they were to download the SDK? Things like creating services, tool windows, menu buttons. You can even just have a simple thing that is on a menu that launches your own application. What kind of things can they hook into? What would you like to see the community build?
Doug: The first simple thing is adding a command on a menu or a toolbar. That command can do some action on selection, and maybe the next thing is showing some UI in a tool window. And let’s say you have your own kind of code review tools and you have your own tool window that shows the results of a code review, and your command launches your code review based on the selected code file and presents the results in a window. Now incidentally, that level of extensibility, where you are adding command and tool windows, that’s something you can also do using add-ins. And the extensibility that’s been in the product for free since the beginning. In many ways it’s easier to use add-ins, so I don’t want to discourage people from using add-ins. There are some subtle features that you can’t do in a tool window, for example through an add-in, that you can do when you are coding at a package level. For example, if the user shows a tool window and then shuts down, the next time they launch Visual Studio the tool window automatically reappears.
Ken: Something that becomes native to their development experience.
Doug: But if you do it as an add-in, you don’t get that feature. So in some ways it feels second class in that sense because it doesn’t behave identically like every built-in feature. Whereas if you do it as a package, and we do our own features as packages, you get that functionality. Add-ins are controlled through the Add-in Manager where the user can load and unload; Turn their add-ins on and off. In some cases you don’t want the user to turn parts of your product off. That’s one difference with a package.
Ken: In some key areas people can extend project types and create tool windows.
Doug: There are interesting cases where you have your own project type. In many cases those will be custom XML file types. You have your own XML format type that you would like to create your own UI design experience for. That’s a great scenario. So I call that in a general category creating your own designer or editor.
Ken: A couple of years ago I worked on sponsoring the XML Editor in Visual Studio 2005. Chris Lovett created that.
Doug: I’m glad you worked on that. It’s a great feature.
Ken: That’s one major example since it’s actually built around the stuff we are talking about-an external team extending Visual Studio.
Doug: It (XML Editor) is designed with extensibility in mind so that you can write an XSD schema and you get IntelliSense for your XML. And then, in addition, you might want a custom UI experience. Think of it as the View design for your code. That’s where you would drop down and write with the SDK.
Other scenarios might extend projects in other ways. There’s a rather advanced point of extensibility that we have which we call project sub types, or project flavors. We use this ourselves. There is a project type that lets you run assemblies on a smart device, a pocket PC or such. You don’t just have a regular VB project or a CE project. You have a project that knows it’s building code for a device-it adds special features that are unique to building for a device. It gives you an emulator and it knows that after you build your assembly it has to deploy to the device. Then it knows how to launch that application under a debugger appropriate for running the device at runtime.
Ken: Would you say in that case, rather than creating a whole new custom project type, it was kind of like inheriting from a VB or C# class, adding what it needs for that custom behavior, in addition to the basic VB or C# project?
Doug: Exactly. It’s the COM equivalent of sublcassing. Another scenario is what we’ve done in SQL Server development. One of the new features in SQL Server is that you can run managed code inside SQL Server for data types and stored procedures. There’s a project subtype that knows how to build an assembly on the user’s desktop but then install and deploy it into SQL Server and launch it under the debugger for SQL Server. So you can write you own kind of project subtype using the SDK. You might use this if you have some place where you use application code like your own application server or some kind of environment where you load managed code in some kind of assemblies in a managed code environment. It’s a great scenario to create your own project subtype that knows about the extra steps that automates it for the user for the experience of getting that code into the right place and then debugging it.
Ken: You get everything you normally would, with custom behaviors on custom projects.
Doug: This is a case where you are not writing your own language. You want to use VB or C# or one of the other third-party languages, and yet you want to write this extension to extend the experience that is perfect for your server or your runtime. Then you get into scenarios where you have your own language. That’s a great scenario for VSIP, where you have your own custom language and you want to get custom features that you have for your language, and possibly forms, WinForms hosing or Web development-that’s a classic scenario. Then look at non-traditional projects that can be added to Visual Studio. You know, imagine the team that invented Team System, and imagine you are the one who invented Team Explorer, and you have features for a team project. It’s not a VB or C# project--it’s in a whole new space of something where you want to show this hierarchy in the tool window and from there to launch these documents that let you edit information or view information. So you could invent your own thing like that.
Ken: It might not even be for compiling to a .NET solution. It could be for modeling or something.
Doug: Yeah, or modeling a runtime deployment scenario where your tools help you monitor a deployed application and it has nothing to do with deploying that application-it has something to do with monitoring or servicing or something. There are infinite possibilities.
Ken: You can control the properties of the project as well as the file nodes?
Doug: You’re writing an object that is expressing in a series of nodes. You see icons and captions; the drag and drop behavior and contents menus; the context that you give to the designers and editors that come out of that hierarchy. So if you are a language-like thing, you have a programmatic namespace that you have built up by examining all the files in your project and you build up a namespace that gives an IntelliSense experience. Or perhaps you have some other kind of notion of project references and you can access project data that you make available to your designer experience. You can invent lots of interesting things. Some of these project hierarchies might not be based on a file system. You might have your own custom repository or custom system that you want to monitor, and you have a hierarchy that gives you access to documents that you want to edit, and they don’t save to the file system and are not part of a project that builds. Think of lifecycle tools that go beyond development.
Ken: Other things you can do with the VS SDK are add to the Options pages.
Doug: You can have your own custom Tools option pages. You can add to the Error List, the Toolbox, and you can use the Document outline window. If you are a project or hierarchy implementer, you can create your own property pages. If you want to add your own property pages to add to someone else’s project, that’s when you would create your own project subtype. You can add properties to the Properties window which is called automation extenders. You can add a command almost anywhere.
Ken: So a lot of your past, besides working on the core shell, was working with customers that usually work in a vertical market. Now we are in a space where any .NET developer can extend Visual Studio in any way for themselves, for other developers, for free, for CodePlex. Do you expect to see some cool stuff?
Doug: We are already seeing cool stuff. VSSDK Assist was created by the community. There’s going to be an explosion, I think, of really cool things. One I’d love to see, and I can’t believe we don’t have this feature in the product already, is diff tools.
Ken: Other editors can have certain file types, like web.config or other XML types that can have a property editor.
Doug: I think that’s going to be a huge scenario. There are so many XML files that people have created that could have a much better editing experience than having to use or look at the XML, and you can write those and do what I call a custom designer view for an XML file. You can create those designers and attach them to the schema so that whenever an XML file type is open with your schema attached, then your designer can come into play. Most of the time you have to make a new file extension. Consider the XML Editor that you helped encourage to happen that Chris Lovett developed. You can create those designers and attach them to a certain schema.
Ken: Let me ask you a question I hear a lot. We have VS SDK 4.0, which is the last SDK for VS 2005, and we have the VS SDK for VS 2008, which is backwards compatible. Do you see an easy transition for someone who built something with VS 2005 like a tools window or whatever, and then they get the VS SDK for VS 2008 and want to rebuild it to take advantage of new things? Do you see that whole experience as painless?
Doug: I hope the experience is painless. You may have to open up your project file and fix a path to the SDK install location. We are making the transition from the old file format called CTC files for creating menus to an XML-based file format called VSCT, and we will have tools to make that transition easy for you. From the API point of view, we’re highly compatible. We want to make it easy to develop for VS while logged on as a normal user rather than only as an administrator. That’s something I hope to see in the next version of the VS SDK (for VS 2008)-not running as administrator.
Ken: Let me ask you another question that I hear in reference to the history of the Shell. In the IDE, in the message looping, what’s the thing called the Thunder Message Loop? This is a very old reference, correct?
Doug:<laugh> Well Thunder was the codename for Visual Basic. So I think we just inherited some code. When we started the IDE, we cloned the VB code base at the time. Ruby was the codename for the forms package and Thunder was the codename for Visual Basic. That level of the code has been totally reworked by now, so I don’t know if there is any part of the code you could point to as Thunder.
Ken: In terms of the VSX community, what would you like to evangelize the community to do with the SDK? What are you going to do to push Microsoft to do internally to improve the SDK and the experience developers have when they are building extensions?
Doug: I would love to see an explosion of tools that increase productivity for small teams, large teams, and individuals for the things they do that is repetitive. When one says “Boy, wouldn’t it be great to have a designer for this?” For us, I’d like to see us make extensibility easier, and that’s why I’m so excited about things like the VSSDK Assist and leveraging things like the guidance automation Toolkit and DSLs (Domain Specific Languages). I think we need to push to make things easier, more accessible, there should be more samples. I’d like to see a lot more things like VSSDK Assist that help to write code for you and a lot more frameworks of great base classes that you can derive from that help you do scenarios. For example, we’re coming out with a new tool to help you work with the menus. I’d love to see an explosion of tools to help VSX developers like VSSDK Assist.
Ken: Another thing is we’ll increase transparency, as in telling the community what we are working on before we do it so they are more in parallel with our efforts and they can contribute.
Doug: Yeah, that’s all great stuff.
Ken: Well thanks for talking the time to talk to me Doug.
Doug: Thanks for this opportunity.
Ken Levy is the community program manager on the Visual Studio Ecosystem tam focusing on developer community for VSX (Visual Studio Extensibility). The VSX community includes developers who build extensions (tools, editors, designers, languages, and more) for Visual Studio using the VS SDK found at the VSX Developer Center (). Ken was previously a product planner on Microsoft’s Windows Live Platform team working on developer community and future product planning. Before working in the Windows Live Division, Ken was a product manager in the Visual Studio data team responsible for Visual FoxPro product management, the VFP developer Web site, as well a sponsorship of the new XML tools in Visual Studio 2005 created by the WebData XML team. Ken is a long time recognized member of the FoxPro community and has developed many high profile applications and tools in all versions of FoxBase/FoxPro since 1986. Ken spent over four years as a software engineer consulting for Microsoft on the Visual FoxPro team from version 3.0 through 7.0 and is the author of many components of Visual FoxPro including the Class Browser and Component Gallery. While working as a consultant as NASA’s Jet Propulsion Laboratory (JPL) in the 1990s, Ken developed many public domain open source programs including GenScrnX and other developer tools used worldwide in crating in-house and commercial applications. Ken is a former technical contributing writer and editor to many software magazines, and has been a frequent speaker at industry conferences worldwide since 1992. You can read Ken’s blog at. | http://www.codemag.com/article/0710022 | CC-MAIN-2015-35 | refinedweb | 5,991 | 69.82 |
Given an image like the below, our goal is to generate a caption, such as "a surfer riding on a wave".
Image Source, License: Public Domain
Here, we'll use an attention-based model. This enables us to see which parts of the image the model focuses on as it generates a caption.
This model architecture below is similar to Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.
This notebook is an end-to-end example. When run, it will download the MS-COCO dataset, preprocess and cache a subset of the images using Inception V3, train an encoder-decoder model, and use it to generate captions on new images.
In this example, you will train a model on a relatively small amount of data.The model will be trained on the first 30,000 captions (corresponding to about ~20,000 images depending on shuffling, as there are multiple captions per image in the dataset).
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install -q tensorflow-gpu==2.0.0-alpha0 import tensorflow as tf #
We will use the MS-COCO dataset to train our model. This dataset contains >82,000 images, each of which has been annotated with at least 5 different captions. The code below will download and extract the dataset automatically.
Caution: large download ahead. We'll use the training set, it/'
Optionally, limit the size of the training set for faster training
For this example, we'll select a subset of 30,000 captions and use these and the corresponding images to train our model. As always, captioning quality will improve if you choose to use more data.
# read the json file with open(annotation_file, 'r') as f: annotations = json.load(f) # storing the captions and the image name) # shuffling the captions and image_names together # setting a random state train_captions, img_name_vector = shuffle(all_captions, all_img_name_vector, random_state=1) # selecting, we will use InceptionV3 (pretrained on Imagenet) to classify each image. We will extract features from the last convolutional layer.
First, we will need to convert the images into the format inceptionV3 expects by: * Resizing the image to (299, 299) * Using the preprocess_input method to place the pixels in the range of -1 to 1 (to match
To do so, we'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture.
* Each image is forwarded through the network and the vector that we get at the end is stored in a dictionary (image_name --> feature_vector).
* We use the last convolutional layer because we are using attention in this example. The shape of the output of this layer is
8x8x2048.
* We avoid doing this during training so it does not become a bottleneck.
* After all the images are passed through the network,
We will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this would exceed the memory limitations of Colab (although these may change, an instance appears to have about 12GB of memory currently).
Performance could be improved with a more sophisticated caching strategy (e.g., by sharding the images to reduce random access disk I/O) at the cost of more code.
This will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you could: install tqdm (
!pip install tqdm), import it (
from tqdm import tqdm), then change this line:
for img, path in image_dataset:
to:
for img, path in tqdm(image_dataset):.
# getting the unique images encode_train = sorted(set(img_name_vector)) # feel free to change the, we'll tokenize the captions (e.g., by splitting on spaces). This will give us a vocabulary of all the unique words in the data (e.g., "surfing", "football", etc).
- Next, we'll.
# This will find the maximum length of any caption in our dataset def calc_max_length(tensor): return max(len(t) for t in tensor)
# The steps above is a general process of dealing with text processing # choosing>'
# creating the tokenized vectors train_seqs = tokenizer.texts_to_sequences(train_captions)
# padding each vector to the max_length of the captions # if the max_length parameter is not provided, pad_sequences calculates that automatically cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')
# calculating the max_length # used to store the attention weights max_length = calc_max_length(train_seqs)
Split the data into training and testing
# Create training and validation sets using 80-20 split img_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector, cap_vector, test_size=0.2, random_state=0)
len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
(24000, 24000, 6000, 6000) features_shape = 2048 attention_features_shape = 64
# loading the numpy files def map_func(img_name, cap): img_tensor = np.load(img_name.decode('utf-8')+'.npy') return img_tensor, cap
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train)) # using map to load the numpy files in parallel dataset = dataset.map(lambda item1, item2: tf.numpy_function( map_func, [item1, item2], [tf.float32, tf.int32]), num_parallel_calls=tf.data.experimental.AUTOTUNE) # shuffling and batching, we extract the features from the lower convolutional layer of InceptionV3 giving us a vector of shape (8, 8, 2048).
-) # we get 1 at the last axis because
->']] * BATCH_SIZE,.0556 Epoch 1 Batch 100 Loss 1.0668 Epoch 1 Batch 200 Loss 0.8879 Epoch 1 Batch 300 Loss 0.8524 Epoch 1 Loss 1.009767 Time taken for 1 epoch 256.95692324638367 sec Epoch 2 Batch 0 Loss 0.8081 Epoch 2 Batch 100 Loss 0.7681 Epoch 2 Batch 200 Loss 0.6946 Epoch 2 Batch 300 Loss 0.7042 Epoch 2 Loss 0.756167 Time taken for 1 epoch 186.68594098091125 sec Epoch 3 Batch 0 Loss 0.6851 Epoch 3 Batch 100 Loss 0.6817 Epoch 3 Batch 200 Loss 0.6316 Epoch 3 Batch 300 Loss 0.6391 Epoch 3 Loss 0.679992 Time taken for 1 epoch 186.36522102355957 sec Epoch 4 Batch 0 Loss 0.6381 Epoch 4 Batch 100 Loss 0.6314 Epoch 4 Batch 200 Loss 0.5915 Epoch 4 Batch 300 Loss 0.5961 Epoch 4 Loss 0.635389 Time taken for 1 epoch 186.6236436367035 sec Epoch 5 Batch 0 Loss 0.5991 Epoch 5 Batch 100 Loss 0.5896 Epoch 5 Batch 200 Loss 0.5607 Epoch 5 Batch 300 Loss 0.5670 Epoch 5 Loss 0.602497 Time taken for 1 epoch 187.06984400749207 sec Epoch 6 Batch 0 Loss 0.5679 Epoch 6 Batch 100 Loss 0.5558 Epoch 6 Batch 200 Loss 0.5350 Epoch 6 Batch 300 Loss 0.5461 Epoch 6 Loss 0.575848 Time taken for 1 epoch 187.72310757637024 sec Epoch 7 Batch 0 Loss 0.5503 Epoch 7 Batch 100 Loss 0.5283 Epoch 7 Batch 200 Loss 0.5120 Epoch 7 Batch 300 Loss 0.5242 Epoch 7 Loss 0.551446 Time taken for 1 epoch 187.74794459342957 sec Epoch 8 Batch 0 Loss 0.5432 Epoch 8 Batch 100 Loss 0.5078 Epoch 8 Batch 200 Loss 0.5003 Epoch 8 Batch 300 Loss 0.4915 Epoch 8 Loss 0.529145 Time taken for 1 epoch 186.81623315811157 sec Epoch 9 Batch 0 Loss 0.5156 Epoch 9 Batch 100 Loss 0.4842 Epoch 9 Batch 200 Loss 0.4923 Epoch 9 Batch 300 Loss 0.4677 Epoch 9 Loss 0.509899 Time taken for 1 epoch 189.49438571929932 sec Epoch 10 Batch 0 Loss 0.4995 Epoch 10 Batch 100 Loss 0.4710 Epoch 10 Batch 200 Loss 0.4750 Epoch 10 Batch 300 Loss 0.4601 Epoch 10 Loss 0.492096 Time taken for 1 epoch 189.16131472587585 sec Epoch 11 Batch 0 Loss 0.4797 Epoch 11 Batch 100 Loss 0.4495 Epoch 11 Batch 200 Loss 0.4552 Epoch 11 Batch 300 Loss 0.4408 Epoch 11 Loss 0.474645 Time taken for 1 epoch 190.57548332214355 sec Epoch 12 Batch 0 Loss 0.4787 Epoch 12 Batch 100 Loss 0.4315 Epoch 12 Batch 200 Loss 0.4504 Epoch 12 Batch 300 Loss 0.4293 Epoch 12 Loss 0.457647 Time taken for 1 epoch 190.24215531349182 sec Epoch 13 Batch 0 Loss 0.4621 Epoch 13 Batch 100 Loss 0.4107 Epoch 13 Batch 200 Loss 0.4271 Epoch 13 Batch 300 Loss 0.4133 Epoch 13 Loss 0.442507 Time taken for 1 epoch 187.96875071525574 sec Epoch 14 Batch 0 Loss 0.4383 Epoch 14 Batch 100 Loss 0.3987 Epoch 14 Batch 200 Loss 0.4239 Epoch 14 Batch 300 Loss 0.3913 Epoch 14 Loss 0.429215 Time taken for 1 epoch 185.89738130569458 sec Epoch 15 Batch 0 Loss 0.4121 Epoch 15 Batch 100 Loss 0.3933 Epoch 15 Batch 200 Loss 0.4079 Epoch 15 Batch 300 Loss 0.3788 Epoch 15 Loss 0.415965 Time taken for 1 epoch 186.6773328781128 sec Epoch 16 Batch 0 Loss 0.4062 Epoch 16 Batch 100 Loss 0.3752 Epoch 16 Batch 200 Loss 0.3947 Epoch 16 Batch 300 Loss 0.3715 Epoch 16 Loss 0.402814 Time taken for 1 epoch 186.04795384407043 sec Epoch 17 Batch 0 Loss 0.3793 Epoch 17 Batch 100 Loss 0.3604 Epoch 17 Batch 200 Loss 0.3941 Epoch 17 Batch 300 Loss 0.3504 Epoch 17 Loss 0.391162 Time taken for 1 epoch 187.62019681930542 sec Epoch 18 Batch 0 Loss 0.3685 Epoch 18 Batch 100 Loss 0.3496 Epoch 18 Batch 200 Loss 0.3744 Epoch 18 Batch 300 Loss 0.3480 Epoch 18 Loss 0.382786 Time taken for 1 epoch 185.68778085708618 sec Epoch 19 Batch 0 Loss 0.3608 Epoch 19 Batch 100 Loss 0.3384 Epoch 19 Batch 200 Loss 0.3500 Epoch 19 Batch 300 Loss 0.3229 Epoch 19 Loss 0.371033 Time taken for 1 epoch 185.8159191608429 sec Epoch 20 Batch 0 Loss 0.3568 Epoch 20 Batch 100 Loss 0.3288 Epoch 20 Batch 200 Loss 0.3357 Epoch 20 Batch 300 Loss 0.2945 Epoch 20 Loss 0.358618 Time taken for 1 epoch 186.8766734600067 sec
plt.plot(loss_plot) plt.xlabel('Epochs') plt.ylabel('Loss') plt.title('Loss Plot') plt.show()
.argmax(predictions) # opening the image Image.open(img_name_val[rid])
Real Caption: <start> a man gets ready to hit a ball with a bat <end> Prediction Caption: a baseball player begins to bat <end>
>>IMAGE)
Prediction Caption: a man riding a surf board in the water <end>
Next steps
Congrats! You've just trained an image captioning model with attention. Next, we recommend taking a look at this example Neural Machine Translation with Attention. It uses a similar architecture to translate between Spanish and English sentences. You can also experiment with training the code in this notebook on a different dataset. | https://www.tensorflow.org/alpha/tutorials/text/image_captioning | CC-MAIN-2019-22 | refinedweb | 1,793 | 79.16 |
A few years ago, I posted on my personal web page a number of programming challenges for my friends. This week, I present one of the old challenges: computing luma from RGB triplets using integer arithmetic only.
You have to compute the function
unsigned char SlowBrightness(unsigned char r, unsigned char g, unsigned char b) { return 0.289f * r + 0.587f * g + 0.114f * b + 1e-5f; }
but the processor you’re using is not capable of floating point number arithmetic. In other words, it uses a all-software library to emulate them and it’s really slow.
Re-implement this function using only integer operations.
Efficiency is not the only goal, you must have an exact solution as well: your integer-only implementation must give the same answer as the float version for all RGB triplets.
Bonus. Use only +, *, >> and minimize the size of the integer constants involved.
I love problems like this, short but non-trivial.
I’ll take a stab at it, but first I just want to verify something…
In SlowBrightness() as shown above, even if the sum of the products, is say, 240.99, the function will return 240, since float-to-integer simply chops off the decimal part. I assume that in an ideal world (e.g. shipping library, etc.), you’d actually want to return 241, correct? But for the challenge we should emulate SlowBrightness() as coded (i.e. return 240), not what I would think you would want in a commercial product?
Hope that question makes sense. Thanks for the clarification.
It doesn’t really matter for this exercise. You could do rounding, but that’s not the statement (and you’ll have plenty to do to get the bonus points). :p
Hi Steven,
Thanks for your reply. I might be missing something, hopefully you can straighten me out…
I’m an embedded guy, and I also work on life-critical systems and high-precision control systems (cases where even floating point “double” isn’t precise enough), so I’m pretty accustomed to doing scaled-integer math, fixed-point math, and scaled binary.
For example, the 0.289 coefficient is pretty close to 37/128 (first multiply by 37, then shift right by 7). If higher precision is needed, you can always increase the denominator and scale/tweak the numerator.
But here’s where I’m running into problems (on my Linux box). For 283 of the 16M RGB triplets, I’m being hobbled by the machine’s floating point “machine epsilon”. Just one example…
Take the triplet (3,111,184) – if you work out the math, the luminance is exactly 87.000, but the C code in your example, on my box, yields a value of 86.99999237 – and when truncated on function call return, that’s 86. My code calculates 87 with scaled integer, but the test bench expects 86, and my code is considered “wrong” per the requirement “your integer-only implementation must give the same answer as the float version for all RGB triplets”.
When I change the floating point suffix in your constants from “f” to “L”, everything works fine. So, I’m running into the “machine epsilon”, and I’m guessing that’s not the point of your challenge.
I validated what you’re saying. The first version of the code (for Windows) did not exhibit this behavior, but on linux, it does (even when casting the constants as long double).
I assume you have gcc/g++ on Linux, so I fixed the code (as minimally as possible) to accomodate the behavior of g++ 4.5.2 and that 86.99999… is truncated as 87, still using floats.
Chances are that you now moved the problem to one of the other 16M RGB triples.
Exhaustive testing says the contrary.
[decided to reset the indentation level!]
@Jules – I was wondering the same thing, but I ran with the modification, and now all triplets match.
For the sake of completeness, my compiler details (yes, it’s a bit dated):
g++ (Debian 4.3.2-1.1) 4.3.2
Hope to have some time later today to work on this again (damn deadlines! ;-) )
If you have 32-bit integers is trivial. Just pre-multiply every constant in the given implementation by 100000 (10^5), compute the sum, and divide: (28900*r+58700*g+11400*b+1)/100000
If the division is too expensive, use 2^17 (131072) and right-shift:
(37880*r+76939*g+14942*b+1) >> 17
Is there some other constraint I’m missing? Word sizes?
You’re close.
However your method using >> 17 fails on (0,2,209) with 24 instead of 25.
(and also with many others.)
Ahh!!! Ok, you want something that incurs the same error than the floating point variant… I’ll need to think about it.
My answer (using a 32-bit mantissa):
It passes the test, but no bonus points: the size of the integers are not minimized!
So there’s an solution using 32-bit integers? Mine works down to a 26-bit shift, but I need 34-bit of precision (ie a 64-bit intermediate)…
Yes. There’s one with at most 28 bits intermediate precision needed.
19 bit shift:
return (151519 * r + 307757 * g + 59768 * b + 300) >> 19
Passed!
…and it beats the solution I have by ~0.5 bits.
My solution: (289*r + 587*g + 114*b)*525 >> 19. This works with ordinary ints.
It fails on (0,255,248) yielding 178 instead of 177 (and on many others). It seems to be overshooting just a bit.
Ah, indeed I made a mistake (I didn’t have access to a compiler to test it). Can you test this one: (289*r + 587*g + 114*b)*67109 >> 23
Sorry wrong again, the 23 should be 26…
It passes the test, but only if casting 289, 587, and 114 to long (64 bits on my machine). 289*r only yields int (as 289 is int, r is int8_t, promoted to int), not long, which breaks the arithmetic on LP64.
I like that solution, the constants involved in the computation appear small, but you need 64 bits integers to make it work properly (since 26+8 > 32). I guess you still get bonus points (since the statement is about the constants, not the intermediate values).
uint8_t FastBrightness(uint8_t r, uint8_t g, uint8_t b)
{
uint32_t result;
result = ((((r * 189399) + (g * 384696) + (b * 74711) + 375) >> 17) / 5);
return(result);
}
I think this works for all cases and only uses 32-bit math everywhere.
It passes the test.
BTW, I was lazy, just to get it done I used base-10 math for the scaling (hence the shift by 17 instead of 16, and the trailing / 5). I suppose I could/should re-work it for the last part of the bonus (no / ) but I don’t think it will be that hard.
Given the interest in this, I’m sure if I wait 5 minutes someone else will do it for me…
result = (((r * 151519) + (g * 307757) + (b * 59769) + 300) >> 19);
D’oh! Now as I scroll up in the comments I see that pdq already did basically the same thing (although I ended up with 59769 and not 59768).
I would say “great minds think alike” but I won’t bring pdq down to my level.
Just to give a little credibility to me posting basically the same thing, I should probably post a brief explanation…
So, .289 is 18939.904/65536. But if you just multiply by 18940, or 18939, that little .904 starts to matter.
So the gist is to do this:
temp_r = ((r * 18939) + (r * 18940) * 9) / 10
That’s how I originally did it, hence my answer with >> 17 and / 5.
Re-scaled everything to be multiplied by 8 instead of 10, and basically came up with the same as pdq, although one digit was different ;-)
People that optimize code (while not always “optimizing”) should more often do what you just did: document how the hack came to be. I’m sure everyone that submitted a solution did pretty much the same thing, but I am also sure they all took slightly different approaches to deal with precision issues.
(Actually a reply to Steven – fighting the indentation)
Steven – you make a good point. One of the nice things about using version control – even for something small like this – is that I’m able to see the progression of the development. First, get it working, then optimize. It also makes it easy to “abort” if you go down a bum path & decide to trash it (something this small I don’t branch).
I do the same thing when working on Project Euler problems. First, start with whatever approach seems straightforward, get it working, and then go from there. Almost invariably, my final version will run 10-100 times faster, once I’ve identified invariants, pre-cached important values, and often utilize dynamic programming / memoization tactics. (I’m an EE, not a CS guy, so learning things like DP was really mind-expanding).
So… when can we expect the next programming challenge? ;-)
What I meant was it is necessary to explain where a magic number like 151519 comes from, and the source control history may, or mayn’t help (plus it’s a pain to examine a complete history to find the relevant changes).
As for the next challenge, why, just next week :p
My best solution is (151520r + 307758g + 59769b) >> 19, which is interesting in that it only needs two additions.
And it passes the test. Gratz.
I came up with (4848615r + 9848226g + 1912603 + 168) >> 24 by multiplying the constants by 2^24 (in bc) and rounding. Strangely, when I try that on the 19-bit approach, 1e-5f * 2^19 is 5.24288, so the last addition constant is 5, which results in a bunch of off-by-1 values. If I use 300, it works fine.
On the other hand, the 24-bit and the 19-bit versions generate the same assembly for me, so I don’t feel too bad.
It also passes the test. Bravo!
I was bored, so here’s one in 16-bits. Takes a bit more ops to get the job done though (9 *, 8 +, and 3 >>). Well, using that many ops is really cheating, but maybe you’ll enjoy it anyway.
I used this sort of technique to compute something like 20 decimal digits of PI on a basic stamp a long time back (basic stamps have 24 bytes of general purpose memory. Not kilobytes. Bytes.
The technique is interesting… breaking down the multiplies like that isn’t something that would’ve occured to me naturally; I try to make the most of each instruction, not break them in lots of smaller instructions.
However… knowledge++ :p
[…] problem of computing luminance was rather of a bit-twiddling nature (and some of my readers came up with very creative […] | https://hbfs.wordpress.com/2011/08/09/programming-challenge-luminance/ | CC-MAIN-2015-11 | refinedweb | 1,828 | 71.95 |
Make Way for Grails 1.1
Just days ago SpringSource released version 1.1 of Grails, the open-source web application framework. It provides a slew of new features, improvements and bug fixes and rides on the recent release of Groovy 1.6 which significantly improves overall performance. The press release sums it up,
Grails 1.1 simplifies and accelerates web application development, enabling developers to focus on delivering new applications and capabilities to customers at a much quicker rate than complex and bloated application infrastructure alternatives. The new release provides a deeper integration with Spring by adding Spring namespace support and standalone usage of Grails Object Relational Mapping inside Spring MVC. It also provides tighter integration with the Java ecosystem through support for key build tools such as Maven and Ant + Ivy. Additionally, Grails 1.1 provides greater support for the vibrant plug-in community with key plug-in features such as global plug-ins, transitive plug-in resolution and modular plug-in development.
One enhancement developers have been waiting for is the ability to use GORM, Grails Object Relational Mapping, outside of Grails. In January 2009, Graeme Rocher, head of Grails development at SpringSource, informed the community that he had ported the Spring MVC petclinic application to use GORM outside of Grails.
Graeme had provided the following code snippet, which makes use of Spring, to provide a GORM enabled SessionFactory:
>
Graeme also posted additional details about several of the new features in Grails 1.1 on his blog on the SpringSource site. Additionally there are several new plugins including Commentable and Taggable which allow for commenting and tagging of domain object instances. There is also work on a Grails plugin portal to help improve the plugin experience for developers and users of Grails.
Wired.com, the online arm of Wired magazine, released a case study providing information regarding their use of Grails. Paul Fisher, Wired.com Manager of Technology stated that,
Grails makes it easier and saves time bringing new developers onto a project, because it provides a
simpler, clearer, more intuitive development workflow and process...Someone with no
Java or Grails experience can learn Grails quickly, get up to speed in a matter of days and become very
productive. Grails can be useful for both the novice developer, who is new to any kind of web development,
and the seasoned Java developer.
Grails continues to grow and mature while gaining popularity amongst developers and with SpringSource's acquisition of G2One, the founders and creators of Groovy and Grails, it appears things are just getting started for this open-source web application framework.
Grails' 1.1 release
by
Gerard Dragoi
Re: Grails' 1.1 release
by
Maxim Gubin
Finally made it to infoq.
When's tss going to post a story?
poor Eclipse integration
by
Matthias K.
There is TextMate for Ruby, but what's there for Groovy? Good luck convincing your Eclipse-spoiled Java developers they have to resort to Emacs or vi to hack on a Grails project!
It's completely beyond me why this receives so few attention.
Intellij has the best support for grails
by
alfredo duran
The eclise support is not there, but the support that we get from Intellij is incredible
There is even a new release that follows each version of grails.
Sinse I started using Intellij for developping grails i have switched to develop my struts projects also.
I find Intellij very proactive with new technologies and a more superior IDE. I find it is alot more easier to work
with subversion. I took me a bout a week to get used to Intellij. Usually I use both MyEclipse and Intellij at the same time.
So I get the best of both world. They work very well togu | http://www.infoq.com/news/2009/03/grails-1-1 | CC-MAIN-2015-27 | refinedweb | 628 | 55.34 |
So you’ve convinced your friends and stakeholders about the benefits of event-driven systems. You have successfully piloted a few services backed by Apache Kafka®, and it is now supporting business-critical dataflow.
Each distinct service has a nice, pure data model with extensive unit tests, but now with new clients (and consequently new requirements) coming thick and fast, the number of these services is rapidly increasing. The testing guardian angel who sometimes visits your thoughts during your morning commute has noticed an increase in the release of bugs that could have been prevented with better integration tests.
Finally after a few incidents in production, and with velocity slowing down due to the deployment pipeline frequently being clogged up by flaky integration tests, you start to think about what you want from your test suite. You set off looking for ideas to make really solid end-to-end tests. You wonder if it’s possible to make them fast. You think about all the things you could do with the time freed up by not having to apply manual data fixes that correct for deploying bad code.
At the end of it all, hopefully you’ll arrive here and learn about the Test Machine.
Funding Circle is a global lending platform where investors lend directly to small businesses in Germany, the Netherlands, the UK and the U.S. (and soon in Canada). A typical borrower repayment triggers actions in several subsystems and if not done promptly and correctly, can prevent investors from making further investments. The Test Machine is the library that allows us to test that all these systems work together.
We built the Test Machine at Funding Circle because we did not have enough confidence in our unit tests alone. The
TopologyTestDriver is great for unit testing a single topology at a time, and the Fluent Kafka Streams Tests wrapper helps reduce repetitive code and captures a common pattern for testing Kafka Streams. But the system we were responsible for was a mish-mash of Ruby on Rails, Apache Samza, Kafka Connect and Kafka Streams, which seems to be a common scenario (for those on the journey to event driven).
The tests we had that exercised the entire stack were “flaky” and slow (both usually due to subtle errors in the test code). After reviewing the fixes to a number of these full stack tests, we realized that many of our tests matched a fairly simple pattern:
It was quite easy to make mistakes when implementing this pattern using the Kafka APIs directly. This led us to create a simple, functional, pure data interface, and we hoped this pattern would be a popular utility.
The interface is implemented using Clojure but thanks to the Confluent ecosystem, the system under test can be anything from an unholy assortment of Rails applications, Kafka Connect jobs and “untestable” PL/Perl triggers, to the latest, greatest, highly replicated, fault-tolerant, Kafka-Streams-based self-distributing API. The only requirement is that all the events we care about are seen by Kafka.
This post discusses the main components of the system independent of their implementation, because you can actually implement them on other platforms in whichever ways are most useful for you. (I’m looking forward to seeing what other folks have come up with to solve similar problems!) The figure below illustrates the high-level design of the Test Machine:
Test authors shouldn’t be concerned with the mechanics of writing test data to Kafka. The examples you see in the docs are great for getting started. However, once your application starts to use even just a handful of inputs, all the code for manipulating a Kafka producer directly for each input can obscure what should be fairly simple data required to make the test pass.
This can be resolved by adopting a data notation for test inputs combined with a pure function designed to determine when the test has completed. In Clojure we can use Extensible Data Notation (EDN) for this purpose. A sample test input intended for the Test Machine is included below, but you could come up with something similar using JSON or YAML.
[[:write :customer {:id 101, :name "andy"}] [:write :order {:id 1001, :customer-id 101, :items [{:qty 1, :amount 99} {:qty 3, :amount 95}]} [:watch (fn [journal] (->> (get-in journal [:topics :shipping-instruction]) (filter #(and (= 101 (:customer-id %)) (= 1001 (:order-id %)))) (first)))]]
You’ll find that it can be helpful to define an external mapping from identifiers like
:customer and
:order to conventions about how to serialize and extract keys or partitions from the messages destined for that topic. This allows the creation of generic procedures to handle these functions for all topics even when the system under test uses a variety of conventions.
There are also a few things to get right in order to correctly consume an application’s test output. For example, the consumer should probably be configured to start from the latest recorded offsets so that it is not affected by old test data. In addition, managing the consumer lifecycle for each output topic is another opportunity for error. But the topic mapping described above has utility on the consumer side too.
In the Test Machine, we use a dedicated thread to consume from all listed topics and automatically deserialize them before adding them to the journal. This makes them available for final validation as simple maps representing the data contained in the consumer records. Thus, not only can test assertions be super simple but when they fail, we can also inspect the journal to see why.
Applying the patterns described above enables an interesting method of reusing tests to further increase confidence in the correctness of the system under test by running the test against a variety of targets. The key to unlocking this feature is to define tests as a sequence of commands consisting of writes and watches. Each command is executed in order and blocks until it is complete. A write command simply writes an event to Kafka, while a watch command watches the output journal until a user-specified condition is met.
The small size of this “test grammar” means that the implementation of an interpreter for the commands can easily be taught how to run against a variety of targets. So for example, the Test Machine contains implementations that run against the following targets:
This means that during development, you can quickly try out changes using the mock topology processor (which uses the standard
TopologyTestDriver under the hood). Then, before pushing, you can run the entire test suite against a local Kafka cluster (provided by Docker Compose or Confluent tools). When you’re about to merge, you can run the exact same test suite against your staging environment after deploying your code there.
We are talking about full stack testing here so that means I/O. If your brokers are writing to spinning metal, you’re not going to get the blazing performance of in-memory tests. However getting a few thousand tests to run within a couple of minutes should be achievable.
Here are a few things to consider if your test suite is slower than you’d like:
The Test Machine is included as part of Jackdaw, which is the Clojure library that Funding Circle uses to develop event streaming applications on the Confluent Platform. Jackdaw comes bundled with a few examples, and the word count example includes a test (also included below) that uses the Test Machine to check that the resulting output stream correctly counts the words given in the input.
(defn input-writer [line] [:write! :input line {:key-fn identity}]) (defn word-watcher [word] [:watch (fn [journal] (some #(= word (:key %)) (get-in journal [:topics :output]))) 2000]) (deftest test-word-count-demo (fix/with-fixtures [(fix/integration-fixture wc/word-count test-config)] (fix/with-test-machine (test-transport wc/word-count-topics) (fn [machine] (let [lines ["As Gregor Samsa awoke one morning from uneasy dreams" "he found himself transformed in his bed into an enormous insect" "What a fate: to be condemned to work for a firm where the" "slightest negligence at once gave rise to the gravest suspicion" "How about if I sleep a little bit longer and forget all this nonsense" "I cannot make you understand"] commands (->> (concat (map input-writer lines) [(word-watcher "understand")])) {:keys [results journal]} (jd.test/run-test machine commands)] (is (every? #(= :ok (:status %)) results)) (is (= 1 (wc journal "understand"))) (is (= 2 (wc journal "i"))) (is (= 3 (wc journal "to"))))))))
As you can see, the test builds a list of commands consisting of a write for each input line, and a watch for the word “understand” since that’s the last word to be seen by the word counter. It then submits this sequence of commands to the Test Machine and asserts that the expected counts are observed for a selection of words.
You can try it out yourself!
$ brew install clojure $ git clone $ cd jackdaw/examples/word-count
TopologyTestDriver:
$ clj -A:test --namespace word-count-test
$ docker-compose up -d zookeeper broker $ export BOOTSTRAP_SERVERS=localhost:9092 $ clj -A:test --namespace word-count-e2e-test
$ docker-compose up -d rest-proxy $ export REST_PROXY_URL= $ clj -A:test --namespace word-count-e2e-test
Now, you’re ready to go write tests for your own application! However it is implemented, as long as its input and output are represented in Kafka, you can use the Test Machine to test it, and in doing so, keep the focus of the test where it belongs—on the data and the program logic.
If you’d like to know more, you can download the Confluent Platform to get started with the leading distribution of Apache Kafka.
Andy Chambers is a software engineer at Funding Circle building the systems that ensure investors get their fair share of the money repaid by borrowers. He came for the chance to develop in Clojure, and stayed to help realize the goal of becoming an event-driven organization. | https://www.confluent.io/en-gb/blog/testing-event-driven-systems/ | CC-MAIN-2022-27 | refinedweb | 1,671 | 53.85 |
I did a bunch of testing today with my TMP36 temperature sensor to try and get to the bottom of why the readings are so wacky. This is a continuation of the Odd analog readings thread, but I felt you might want to find this information without reading 55 replies
How to get accurate analog readings:
Hook your analog sensor to the 3V3* pin and GND. The 3V3* pin has a low pass filter on it and is the same node that is connected to the Analog to Digital convertor’s reference voltage (VDDA).
Keep your wires as short as possible, and avoid crossing over the top of the Spark Core and the Wifi antenna if you can.
If your analog sensor has a high impedance output, you are going to need to place a 0.01uF (usually marked 103) capacitor from the analog input pin (A0-A7) to GND. This effectively lowers the impedance of the input, and allows the ADC to convert the voltage to a digital value properly. This is by far the BEST thing you can do to help your readings stabilize (thus the double bullet). More on this later.
If you analog sensor has a high impedance output, AND you don’t want to install a capacitor to help lower the impedance… you might try to delay your analogRead() calls by as much as once per second to help get readings closer to where they should be. You will still see values wildly swinging around, but this helps in a pinch.
If you are using resistance or voltage values in your equations, take a multimeter and measure the value of these items and put the exact values in your equations where possible. Your 3V3* pin most likely does not output exactly 3.30V. This may affect your results slightly.
If your sensor is a temperature sensor, be sure to mount it to a separate breadboard or on the end of a short cable away from the Core. The reason for this, is after a while running… the Spark Core Wifi module and 3.3V regulator warm up to over 100°F. This temperature is thermally coupled into all 24 of the pins that are pressed into the breadboard, which in turn heats up the metal pins in each rail. Even if you don’t plug your temp sensor directly into these pins, you can be sure the entire bread board is heating up. My TMP36 was running 11°F higher than ambient because of this. As soon as I moved it off to another breadboard, the temperature immediately dropped down to the correct value.
Many sensors benefit from adding a decoupling capacitor across their power and gnd inputs, to help reject common mode noise from entering the sensor and affecting the output. Typical values here are 0.1uF. Keep the leads short as possible.
Code can easily go bad fast, so if you are having problems… start by measuring the voltage at the analog input, and calculating what your analog reading should be. See if that is the value you are getting before it goes through your conversions. Break up each step of your conversion to find out where a problem may exist. Follow along with a calculator and see if you are getting numbers that are too big or too small for the types of variables are using.
Have you done all of the above and you still get bouncy readings? You might just have a noisy sensor, or a noisy environment… or maybe the thing you are sensing is just fluctuating. If you don’t want to set up a low pass filter in hardware, you can try to create a software filter. [I like to use this dilution filter]2 but there are a ton of different ways to do it, pros and cons for many of the ways.
Ok on with the hours of testing I did today with the TMP36:
So one long standing question is whether or not the ADC is setup correctly. I never tried changing the
ADC_SAMPLING_TIME found in
spark_wiring.h until today. Currently it’s set to
ADC_SampleTime_1Cycles5 and you can see a good write up of why I think that’s bad here.
So I tried 3 different Sample Times and 4 different levels of capacitance on the input pin. Keep in mind I have my TMP36 on a separate breadboard, and I’m using the A7 input so I can have a GND pin close by for my caps. In addition to measuring the temp, I also measured how long the A to D conversion was taking (Conv. Time). Notes: 10000pF = 0.01uF, and my room thermostat was set to 70°F, These are averages of 100 readings taken 100ms apart.
You can see that with zero capacitance at the sample time of 1Cycles5 (1.5), the situation is pretty bad. Temperature is averaging 21°F higher than normal. And was regularly spiking up to 100°F. Adding just 100pF of capacitance brings the average reading down to within 8°F. Not bad. 470pF improves this even more, and 0.01uF is spot on. The readings barely fluctuated more than 0.3°F with the 0.1uF cap. Conversion times are hella fast, 5us.
sample time of 41.5 and 239.5 effectively increase the allowable input impedance for our circuit, and as you can see even with zero capacitance the readings on average are pretty good! Adding capacitance doesn’t change the average much, but it does improve the variability between readings. Conversion time respectively increases to 8.5us and 25us. These are still hella fast.
To keep the conversion time fast, but also help to improve the readings for users that have no idea they should add a capacitor to the analog pin, I’m recommending changing the ADC Sample Time to 41.5.
Check out the variability in the graphs! Be sure to pay attention to the change in temperature scale between the graphs.
Here’s my Spark Core test code:
#include <application.h> uint16_t temperature = 0; float voltage = 0.0; float t1 = 0.0; bool s = 1; char tempStr[20]; uint32_t start,end; uint16_t sample = 0; void setup() { pinMode(D7, OUTPUT); Serial.begin(115200); RGB.control( true ); RGB.brightness(255); while(!Serial.available()) { // Run some test code so we know the core is running! s = !s; // toggle the state if(s) { RGB.color(255,255,255); delay(10); // makes it blippy } else { RGB.color(0,0,0); delay(50); } } RGB.brightness(64); RGB.control( false ); Serial.println("SAMPLE, CONV TIME, TEMP C, TEMP F"); } void loop() { start = micros(); temperature = analogRead(A7); end = micros(); voltage = (temperature * 3.3)/4095.0; t1 = (voltage - 0.5) * 100.0; sprintf(tempStr,"%d, %d, %.2f, %.2f",++sample,end-start,t1,(t1*1.8+32)); Serial.println(tempStr); // Run some test code so we know the core is running! //digitalWrite(D7,s); //s = !s; // toggle the state delay(100); // makes it blinky }
| https://community.particle.io/t/odd-analog-readings-part-2/2718 | CC-MAIN-2020-29 | refinedweb | 1,166 | 73.88 |
I have several defs in my main build.gradle file. Is it possible to move them into a separate file, and then just reference them?
Something like this (don't mind the syntax):
foobar.gradle
def foo = "foo"
def bar = "bar"
def baz = "baz"
println $foo
println $bar
println $baz
Finally figured it out.
In foobar.gradle, set up the defs like this:
ext.foo = "foo" ext.bar = "bar" ext.baz = "baz"
Then in main.gradle, make sure to include this line:
apply from: 'foobar.gradle'
Now variables can be used as freely as if they are part of the main.gradle file. | https://codedump.io/share/OfhRjSxpqhwj/1/keep-defs-in-a-separate-file | CC-MAIN-2017-51 | refinedweb | 103 | 78.96 |
Two part question. I am trying to download multiple archived Cory Doctorow podcasts from the internet archive. The old one's that do not come into my iTunes feed. I have written the script but the downloaded files are not properly formatted.
Q1 - What do I change to download the zip mp3 files?
Q2 - What is a better way to pass the variables into URL?
# and the base url.
def dlfile(file_name,file_mode,base_url):
from urllib2 import Request, urlopen, URLError, HTTPError
#create the url and the request
url = base_url + file_name + mid_url + file_name + end_url
req = Request(url)
# Open the url
try:
f = urlopen(req)
print "downloading " + url
# Open our local file for writing
local_file = open(file_name, "wb" + file_mode)
#Write to our local file
local_file.write(f.read())
local_file.close()
#handle errors
except HTTPError, e:
print "HTTP Error:",e.code , url
except URLError, e:
print "URL Error:",e.reason , url
# Set the range
var_range = range(150,153)
# Iterate over image ranges
for index in var_range:
base_url = ''
mid_url = '/Cory_Doctorow_Podcast_'
end_url = '_64kb_mp3.zip'
#create file name based on known pattern
file_name = str(index)
dlfile(file_name,"wb",base_url
Here's how I'd deal with the url building and downloading. I'm making sure to name the file as the basename of the url (the last bit after the trailing slash) and I'm also using the
with clause for opening the file to write to. This uses a ContextManager which is nice because it will close that file when the block exits. In addition, I use a template to build the string for the url.
urlopen doesn't need a request object, just a string.
import os from urllib2 import urlopen, URLError, HTTPError def dlfile(url): # Open the url try: f = urlopen(url) print "downloading " + url # Open our local file for writing with open(os.path.basename(url), "wb") as local_file: local_file.write(f.read()) #handle errors except HTTPError, e: print "HTTP Error:", e.code, url except URLError, e: print "URL Error:", e.reason, url def main(): # Iterate over image ranges for index in range(150, 151): url = ("" "Cory_Doctorow_Podcast_%d/" "Cory_Doctorow_Podcast_%d_64kb_mp3.zip" % (index, index)) dlfile(url) if __name__ == '__main__': main() | https://codedump.io/share/E8XUONbSEnE8/1/how-do-i-download-a-zip-file-in-python-using-urllib2 | CC-MAIN-2017-47 | refinedweb | 357 | 65.42 |
Personal Finance: Data Collection
Photo by nattanan23 on Pixabay
Overview
Today I’d like to talk about a new topic: Personal Finance. This is something really important during our whole life. Many decisions are directly related to money. Having a management tool allows you to better budget, save, and spend money over time, taking into financial risks and future life events.
This article mainly shares how I built my own data collection script using Python, which powers the data analysis. It covers the following sections:
- Downloading history from your bank
- Storing them as archives
- Merging them into a single file
- Export data to external tool (Google Sheet)
- The importance of having backup(s)
Download Account History
Before getting started to build a personal finance tool, you need to have data from your bank. I’m using BNP Paribas, their website provides an option to download the recent history (the last 30 days) as CSV file. If you’ve multiple accounts, you need to download all of them. In my case, the downloaded files look like:
E123.csv E234.csv E345.csv ...
Once downloaded, you need to rename the files so that they are aligned to your
naming convention. This is because the original name might not be
meaningful, or even it is, there will be probably differences in different
banks. Having your own convention can avoid this problem. In my case, I rename
files with username (
$USER) and account name (
$ACCOUNT):
$USER-$ACCOUNT.csv
This solution is scalable because it can be applied to any situation. Your situation might change over the time—you might be single for now, but in love and married later on. So it’s better to have the username involved in the naming. Then, every member of this system will have at least one accounts, therefore we need the account name.
Archiving The Downloaded Files
The second step is to archive the downloaded files. Storing them as raw data makes us save to do any transformation in the future: no matter what you’ll do and which failure will be, as far as the original files are archived, it is possible to start again from the very beginning.
In my case, I split the downloaded files by lines: each line (except the header)
is considered as a transaction, and appended to an existing monthly archive
file of the target account. If that file does not exist, it will be created by
the Python script. The archive files have their convention. They start with the
month (
YYYY-MM), followed by dot (
.) as separator, followed by the
account naming
$USER-$ACCOUNT, and ends with suffix
.csv.
$MONTH.$USER-$ACCOUNT.csv
For example,
2018-04.paul-A.csv 2018-04.paul-B.csv 2018-04.anne-A.csv 2018-04.anne-B.csv 2018-05.paul-A.csv 2018-05.paul-B.csv 2018-05.anne-A.csv 2018-05.anne-B.csv 2018-06.paul-A.csv 2018-06.paul-B.csv ...
Now, go back to the archive problem. How can we ensure that each line is inserted properly in the target CSV file without creating duplicate? My solution is to construct a Set, add each line of the existing file into the set, and then the new lines. Since Set does not allowed duplicate, each value is unique. Then, sort them and write to the file again. (However, this is a simplified version, in reality, I use Dictionary, because I need to store more data in each row)
def append_tx_file(lines, csv): rows = set() if os.path.exists(csv): with open(csv, 'r') as f: for line in f: rows.add(line) with open(csv, 'w') as f: for row in sorted(rows): line = row + '\n' f.write(line)
Create a Merged File
Once the archive files are done, the next step is to create a merged file which
contains all the transactions. The goal is provide a single file for data
analysis. This step is very simple. The logic is almost the same as the previous
step. The only changes are to transform a French date (
DD/MM/YYYY) to ISO
format (
YYYY-MM-DD), and add a new column for the account name.
def merge_tx(paths): lines = set() for path in paths: account = re.findall(r'.([a-zA-Z0-9-]+)\.csv$', path)[0] with open(path, 'r') as f: for line in f: left, right = line.split(';', 1) d, m, y = left.split('/') left = '%s-%s-%s' % (y, m, d) lines.add('%s;%s;%s' % (left, account, right)) with open(FILES['total'], 'w') as f: header = 'Date;Account;ShortType;LongType;Label;Amount;' header += 'Type;Category;SubCategory;IsRegular\n' f.write(header) for line in sorted(lines): f.write(line)
Export Data to Google Sheet
For now, all the data collection steps are done. But, we still did not talk about any data analysis! Actually, analysis is not done in Python, but in Google Sheet. I’ll talk about it in another article. You might also want to use Microsoft Excel, or any other tool. The key point is, data processing should be separated from data analysis, which makes things much simpler, especially for automation.
For Google Sheet, it has its own preference for CSV files. If you want to import the CSV file in one click, you should change your data to make Google happy, such as:
- Use comma
,as delimiter (BNP Paribas uses semi-colon
;)
- Use dot
.as decimal point (BNP Paribas uses comma
,)
- Use nothing as thousands separator (BNP Paribas uses space
And the destination files should prefixed by
google.*. For example, we’ve 2
files to export to Google Sheet, respectively called
transactions.csv and
accounts.csv. Then the Google-ready ones should be:
google.transactions.csv(transactions.csv)
google.accounts.csv(accounts.csv)
Backup
It’s important to backup your data. For me, I choose Git as a solution. There’re many advantages of using Git, such as:
- Keep all the modification history, including author
- Show the diff when adding new data
- Transport data via HTTP protocol
- Possibility to revert when things go wrong
- Easy to mirror (multiple backups)
OK, OK, actually I use Git because I’m familiar with it. You can use whatever you want, but you must do the backup, trust me :)
Conclusion
Let’s summary what we discussed in the post. In order to build your own personal tool, you need to:
- Collect data from your bank (usually CSV files)
- Archive them separately (e.g. per month per account)
- Merge data into a single file
- Adapt the format for data visualization
- Backup your files
- Analyse the data (not covered in this post)
Hope you enjoy this article, see you the next time! | https://mincong.io/2018/10/25/personal-finance-data-collection/ | CC-MAIN-2020-40 | refinedweb | 1,116 | 63.7 |
Hello
The book I am using is not of help, so I would love if someone could give me a few hints.
I have already declared three 2D arrays of lname, fname, and grades and have written a function to read the input file "scores.txt" into the arrays and returned the length.
The problem I am having is working on a void function to sort records of students by last name. I have to use sort selection and sort the names in ascending order by last name. The last name, first name, and grades are all 2D arrays, which I have declared as lname, fname, and grades. The list below is an example of what I need to sort. It includes the last name, first name, and 5 grades.
This is what I have so farThis is what I have so farCode:Smith Tom 25 25 20 15 24 Broom Linda 22 23 25 25 20 Tanner Joe 22 15 24 18 20
I have no clue what to do next for the sortRecords. Like I said, my book is of no help. If someone could please help, it would be greatly appreciated.I have no clue what to do next for the sortRecords. Like I said, my book is of no help. If someone could please help, it would be greatly appreciated.Code:#include <fstream> #include <iostream> #include <cstdlib> #include <cstring> using namespace std; const char LNAME_SIZE = 20; const char FNAME_SIZE = 15; const char GRADES_SIZE = 20; const char STUDENTS = 32; int readScores(ifstream& fin, char lname[][LNAME_SIZE + 1], char fname[][FNAME_SIZE + 1], char grades[][GRADES_SIZE + 1]); // reads scores.txt into the arrays and returns the length void sortRecords(char lname[][LNAME_SIZE + 1], int length); // sorts the records by last name in ascending order void main(void) { ifstream fin; ofstream fout; char lname[STUDENTS][LNAME_SIZE + 1]; char fname[STUDENTS][FNAME_SIZE + 1]; char grades[STUDENTS][GRADES_SIZE + 1]; int length; // connect the input stream to the file scores.txt fin.open("scores.txt"); // tests the input stream if ( fin.fail() ) { cerr << "Error opening file scores.txt for reading. Aborting!" << endl << endl; exit(1); } // read the file into the arrays length = readScores(fin, lname, fname, grades); // sorts the records in alphabetical order by last name sortRecords(lname, length); // connect the output stream to the file records.txt fout.open("records.txt"); // tests the output stream if ( fout.fail() ) { cerr << "Error opening file records.txt for reading. Aborting!" << endl << endl; exit(1); } fin.close(); fout.close(); }// end main() int readScores(ifstream& fin, char lname[][LNAME_SIZE + 1], char fname[][FNAME_SIZE + 1], char grades[][GRADES_SIZE + 1]) { int length = 0; fin.getline(lname[length], LNAME_SIZE + 1, ' '); while ( !fin.eof() ) { fin.getline(fname[length], FNAME_SIZE + 1, ' '); fin.getline(grades[length], GRADES_SIZE + 1, ' '); length++; fin.getline(lname[length], LNAME_SIZE + 1, ' '); }// end while cout << length << endl << endl; for (int i = 0; i < length; i++) { cout << lname[i] << " "; cout << fname[i] << " "; cout << grades[i] << " "; } cout << endl << endl; return(length); }// end readScores() void sortRecords(char lname[][LNAME_SIZE + 1], int length) { int minPos; // finds minimum position for (int i = 0; i < length; i++) { if (lname[i] < lname[minPos]) { minPos = i; } } }// end sortRecords() | https://cboard.cprogramming.com/cplusplus-programming/16086-selection-sort-records-chars.html | CC-MAIN-2017-47 | refinedweb | 520 | 72.36 |
08 June 2012 05:52 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The central bank lowered the benchmark borrowing rates by 25 basic points from 8 June, the first reduction since December 2008.
“The current market is too weak for players to take any brave action. The problem now is [the lack of] buyers, not the lack of money,” said a Chinese base oil trader.
“Everybody is selling, in fear that prices will decrease further,” said a fuel oil trader in east
The effect of the interest rate cut will be seen only in the long term, while buying activity in the short term will largely be decided by sentiment, sources said.
“In the long term, of course, when funds are not that tight, buyers would purchase more. However, the present market is so bearish, buyers are unlikely to turn bullish overnight just because of an interest rate reduction,” said a bitumen trader.
“The entire petrochemical chain is closely linked to crude values. If crude weakens, few petrochemical [prices] can strengthen,” said a solvent oil producer in
Analysts said the interest rate cut is probably an advanced reaction to the expected dismal economic data for May, which are set to be published this weekend.
The May data is likely to be worse than April’s, in which growth in industrial production, trades, fixed-asset investment and bank lending slowed down, analysts said.
“Clearly, the disappointing data triggered the cut. The 25-percentage-point axe may only be a first test and if it fails to spur lending and investment as policymakers wish, more cuts will be taken later, “said Zhang Junfeng, a senior analyst at Shenzhen-based broker China Merchants Securities (CMS).
Zhang predicts that interest rates may be cut again once or twice this year.
Additional reporting by Alfa Li, Kim Lu, Victor Li | http://www.icis.com/Articles/2012/06/08/9567417/chinas-interest-rate-cut-unlikely-to-affect-weak-petchem.html | CC-MAIN-2013-48 | refinedweb | 305 | 58.42 |
Talk:System administration
Subpages[edit]
I'm not sure how sub-pages for this "topic" should be (lesson namespace or wha?), but i could start filling in some info and then it could always be moved to sub-pages later on. --212.130.183.202 07:37, 2 August 2007 (UTC)
- Wikiversity's policy is be bold. If you have content, put it in, and we'll figure out where it goes later. Maybe we could co-ordinate a bit, and devide this topic up a little. Let me know what you think. Historybuff 04:05, 9 August 2007 (UTC)
- Sounds like a great plan. --212.130.183.202 08:02, 14 August 2007 (UTC)
Troubleshooting[edit]
How about a troubleshooting sub-section? Something where you can look-up a symptom, like an event-id or a BSOD stop code and then possible causes will get added to them. --212.130.183.202 12:12, 2 October 2007 (UTC)
- Is there a better place for that or do i just make a troubleshooting headline and start typing? --212.130.183.202 07:47, 22 October 2007 (UTC)
I think adding a section and starting typing is perfect. The page is getting a bit long, so we should starting thinking about dividing it up into logical subtopics, but if you have something to write, write away. Historybuff 22:21, 16 November 2007 (UTC)
- I agree, that is great way to start. I also think it would be great to start some nested subpages for different common computer configurations. Basically grow a tree from generic troubleshooting considerations and tips to more specific detail for specific configurations. Mirwin 03:21, 14 December 2007 (UTC) | https://en.wikiversity.org/wiki/Talk:System_administration | CC-MAIN-2019-43 | refinedweb | 281 | 72.56 |
Locating special folders in cross-platform .NET applications
.Environment class has two
GetFolderPath overloads:
public static string GetFolderPath (SpecialFolder folder); public static string GetFolderPath (SpecialFolder folder, SpecialFolderOption option);
SpecialFolder is an
enum with values like
ApplicationData,
MyDocuments, and
ProgramFiles. The
SpecialFolderOption
enum has three values:
None,
Create, and
DoNotVerify. These control the return value when the folder does not exist. Specifying
None causes an empty string to be returned. Specifying
Create causes the folder to be created. And
DoNotVerify causes the path to be returned even when the folder does not exist.
Develop using Red Hat's most valuable products
Your membership unlocks Red Hat products and technical training on enterprise cloud application development.JOIN RED HAT DEVELOPER
Note that
SpecialFolder and
SpecialFolderOption are nested in the
Environment class, and to use them directly you should add a
using static System.Environment; statement to your code.
To make this API work cross-platform, .NET Core needs to map the
SpecialFolder values to some locations. For Linux, this mapping is based on file-hierarcy and basedir-spec.
The table lists all the mapped values in .NET Core 2.1. Other values are not mapped and return an empty string. The returned value is determined from left to right: first checking the environment variable, then falling back to the config file, and finally falling back to a default.
Cross-platform applications should be limited to using the mapped values, or they should be able to fall back to another location when
GetFolderPath returns an empty string.
The user home folder is read from the
HOME environment variable. When that is unset, the home directory is read from the system user database. It’s safe to assume that for known users, .NET Core will be able to determine the home directory. A number of other locations are based on the user home folder. Some can be overridden using environment variables and some others by using a file at
ApplicationData/users-dirs.dirs.
On Windows, most of the special folders will exist by default. This may not be the case on Linux. It is the application’s responsibility to create the folder when it doesn’t exist. This may require some changes to your code to use the overload with a
SpecialFolderOption.
For example, the following code ensures the
LocalApplicationData folder will be created if it doesn’t exist.
// Use DoNotVerify in case LocalApplicationData doesn’t exist. string appData = Path.Combine(Environment.GetFolderPath(SpecialFolder.LocalApplicationData, SpecialFolderOption.DoNotVerify), "myapp"); // Ensure the directory and all its parents exist. Directory.CreateDirectory(appData);
Path.GetTempPath and Temp.GetTempFileName
The
System.IO.Path class has a method that returns the path of the current user’s temporary folder:
public static string GetTempPath ();
Windows applications may assume the path returned here is user-specific. This is because the implementation picks up the
USERPROFILE environment variable. When the variable is unset, the API returns the Windows temp folder.
On Linux, the implementation returns
/tmp. This folder is shared with other users. As a consequence, applications should use unique names to avoid conflicts with other applications. Furthermore, because the location is shared, other users will be able to read the files created here, so you should not store sensitive data in this folder. The first user to create a file or directory will own it. This can cause your application to fail when trying to create a file or directory that is already owned by another user.
The
Temp.GetTempFileName method solves these issues for creating files. It creates a unique file under
GetTempPath that is only readable and writable by the current user.
On Windows, the value returned by
GetTempPath can be controlled using the
TMP/
TEMP environment variables. On Linux, this can be done using
TMPDIR.
On systems with
systemd, like Fedora and Red Hat Enterprise Linux (RHEL), a user-private temporary directory is available and can be located using the
XDG_RUNTIME_DIR environment variable.
Conclusion
In this article, you’ve seen the features and limitations of using
Environment.GetFolderPath,
Temp.GetTempPath and
Temp.GetTempFileName in your cross-platform .NET Core applications.
Here are some additional .NET Core articles that might be helpful:
- Improving .NET Core Kestrel performance using a Linux-specific transport
- Using OpenShift to deploy .NET Core applications
- Running Microsoft SQL Server on Red Hat OpenShift
- Securing .NET Core on OpenShift using HTTPS
- Announcing .NET Core 2.1 for Red Hat Platforms | https://developers.redhat.com/blog/2018/11/07/dotnet-special-folder-api-linux/ | CC-MAIN-2019-13 | refinedweb | 733 | 51.14 |
Amazon RDS is great. It does some truly incredible things with almost 0 things to worry about for the developer. However, like most good things in life :) RDS is not very cheap. Also, there are a number of other good reasons to setup your own database inside a compute instance (like EC2) instead of using RDS. Yes, if you use RDS, AWS takes full responsibility for the administration, availability, scalability and backups of your database but you do loose some manual control over your database. If you are the kind of person that prefers the manual control over everything and want to explore the idea of manually setting up your own database, the first important issue you need to deal with is make sure your data survives any potential disasters :) . In this article, we would first setup our own database backups and then automate the process using bash and python scripting. We will be using a MySQL docker container for our database but, the process is generic and you should be able to set it up for any database you prefer.
Prerequisites
- docker installed on system - docker-compose installed on system - python3 installed on system
Steps
1. Setup MySQL docker container
If we have docker and docker-compose installed in the system, we can quickly spin up a MySQL container using the following docker-compose.yml file.
Docker-Compose
version: '3.7' services: db: image: mysql:5.7 ports: - "3306:3306" restart: always volumes: - mysql_data_volume:/var/lib/mysql env_file: - .env volumes: mysql_data_volume:
Now, to start the container:
docker-compose up --build
Now, note down the container name from:
sudo docker ps
In my case the command outputs:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d2f5b2941c93 mysql:5.7 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp db-backup_db_1
So, our container name is db-backup_db_1, this is follows the following convention,
{folder-name}_{docker-compose-service-name}_{count-of-containers-of-this-service}
Now, our database is ready. We assume, this database is connected to some application that generates some data and our job is to periodically make backups of that data. So, if necessary, we can simply restore the database with the data from a specific point in time.
Notice we have a environment variables file called .env in our docker-compose, we would get to that soon.
2. Setting up S3 bucket
We cannot just keep our generated data dumps lying in our machine's file storage. Because, if the machine goes down we would loose all our backups. So, we need to store the backups on a persistent file storage like Amazon S3. S3 is widely considered to be one of the best file storage services out there and its very cheap. In this article, we would not go through the process of creating S3 buckets but in case you dont already know, its very easy and can be done from the aws console using just a couple of clicks. You can also get an access_key_id and secret_access_key by setting up programmatic access from the IAM console.
Now, we keep our secrets on the .env file like so,
AWS_ACCESS_KEY_ID=********* AWS_SECRET_ACCESS_KEY=****************** AWS_S3_REGION_NAME=********* AWS_STORAGE_BUCKET_NAME=********* MYSQL_DATABASE=********* MYSQL_ROOT_PASSWORD=********* MYSQL_USER=********* MYSQL_PASSWORD=*********
Secrets include AWS secrets and the database secrets.
3. Generating Backups/Dumps
In order to generate mysql data dumps we have to first connect into our database container then run the mysqldump command.
We can do this using the following one liner:
sudo docker exec db-backup_db_1 sh -c 'mysqldump -u root -p${MYSQL_ROOT_PASSWORD} ${MYSQL_DATABASE} > dump.sql'
This will create a data dump called 'dump.sql' inside the database container. Now, we have to copy the dump from inside the container.
sudo docker cp db-backup_db_1:dump.sql .
Now, we just have to upload the file to our S3 bucket. We will do this using the boto3 python package.
4. Uploading generated dumps to S3 Bucket
We create a python script called upload_to_s3.py like so,
upload_to_s3.py
import sys from botocore.exceptions import ClientError import boto3 import os from datetime import datetime S3_FOLDER = 'dumps' def upload_s3(local_file_path, s3_key): s3 = boto3.client( 's3', aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"), aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY") ) bucket_name = os.getenv("AWS_STORAGE_BUCKET_NAME") try: s3.upload_file(local_file_path, bucket_name, s3_key) except ClientError as e: print(f"failed uploading to s3 {e}") return False return True def main(): if len(sys.argv) == 0: print("Error: No File Name Specified !") return if not os.getenv("AWS_ACCESS_KEY_ID") or not os.getenv("AWS_SECRET_ACCESS_KEY") or not os.getenv("AWS_STORAGE_BUCKET_NAME"): print("Error: Could not Find AWS S3 Secrets in Environment") return upload_s3(sys.argv[1] + ".sql", S3_FOLDER + "/" + sys.argv[1] + "-" + str(datetime.now()) + ".sql") if __name__ == '__main__': main()
To run the script,
# make sure you have boto installed in your python venv pip install boto3
and then,
python3 upload_to_s3.py dump
This script expects a command line argument with the name of the dump file without the '.sql' extension and the aws secrets in the system environment variables. Then, it uploads the dump file to the s3 bucket under a folder called 'dumps'.
Final Bash Script
backup_db.sh
while read -r l; do export "$(sed 's/=.*$//' <<<$l)"="$(sed -E 's/^[^=]+=//' <<<$l)"; done < <(grep -E -v '^\s*(#|$)' $1) sudo docker exec db-backup_db_1 sh -c 'mysqldump -u root -p${MYSQL_ROOT_PASSWORD} ${MYSQL_DATABASE} > dump.sql' sudo docker cp db-backup_db_1:dump.sql . python3 upload_to_s3.py dump sudo rm dump.sql
The bash script expects the name of the .env file as command line argument.
The first line is a handy little one liner that parses the .env file and exports the environment vars in the system. (P.S: i didnt come up with it myself obviously o.O)
Then, it generates the dump and uploads the dump to the s3 bucket as we discussed. Finally, we remove the local copy of the dump, since we dont need it anymore.
Now each time we run the script,
bash backup_db.sh .env
We would see a new data dump in our s3 bucket,
5. Doing it periodically
We can easily do it periodically using a cron job. We can set any period we want using the following syntax,
sudo crontab -e 1 2 3 4 5 /path/to/script # add this line to crontab file
where,
1: Minutes (0-59) 2: Hours (0-23) 3: Days (1-31) 4: Month (1-12) 5: Day of the week(1-7) /path/to/script - path to our script
e.g: we can generate a data dump each week at 8:05 am Sunday using the following,
5 8 * * Sun /path/to/script
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/ashiqursuperfly/cost-effective-alternative-to-amazon-rds-database-backups-1ll5 | CC-MAIN-2021-43 | refinedweb | 1,109 | 57.27 |
P.
Managing usessaud
Although this audit is on-by-default, the regular clean-up routine will only keep the last five (5) days of data in the table. I recommend increasing this value to at least a month. Ideally, set this to be 90 days.
Update the setting database.cleanup.Usessaud.KeepInterval:
sql> exec settings_write_string('90d','database.cleanup.Usessaud','KeepInterval')
column setting format a50 select namespace ||'.'|| setting_name || '=' || setting_value setting from setting where namespace = 'database.cleanup.Usessaud' SETTING -------------------------------------------------- database.cleanup.Usessaud.KeepInterval=90d
Contents
The rows and columns in usessaud mirror those in usession, so it is important to understand exactly what is in usession. When you logon to any P6 application, one or more rows are inserted into the usession table. The primary purpose of the usession table is to track module/license usage (via the db_engine_type column). A row is inserted for each module the user could use during the session. A user logging into P6 EPPM, with module access to Project Management and Resource Management, will get two records in usession: WEB_PM and WEB_RM. While we would consider this a "single login", the usession contains two records.
When this user disconnects from the application, both records from usession are inserted into usessaud table. This is important to keep in mind for any queries on the usessaud table. It may be necessary to include a predicate for db_engine_type in your usessaud queries.
Examples
1. How many (unique) users connected on 9/23/2012?
select count(distinct user_name) from usessaud where trunc(login_date) = DATE'2012-09-23' / COUNT(DISTINCTUSER_NAME) ------------------------ 113
2. How many users connect to Professional Client on 9/25/2012?
select count(distinct user_name) from usessaud where trunc(login_date) = DATE'2012-09-25' and db_engine_type = 'PM' / COUNT(DISTINCTUSER_NAME) ------------------------ 92
3. What was the longest login time?
select max(round((logout_date-login_date)*24,2)) hours from usessaud where login_date between DATE'2012-09-22' and DATE'2012-10-22' / HOURS ---------- 13.5
4. Did the user "user257" connect to the application in September 2012?
select db_engine_type, login_date, logout_date from usessaud where user_name = 'user257' and login_date between DATE'2012-09-01' and DATE'2012-09-30' order by 1 / DB_ENGINE_TYPE LOGIN_DATE LOGOUT_DATE -------------------- ------------------- ------------------- PM 2012-09-28 03:51:27 2012-09-28 15:43:02 PM 2012-09-27 03:02:44 2012-09-27 11:30:23
These are just a few examples of the types of queries possible with usessaud. If you have your own usessaud query, or questions about the table contents, go ahead and post a comment. | https://blogs.oracle.com/priminout/entry/p6_session_audit | CC-MAIN-2015-48 | refinedweb | 421 | 57.67 |
Overview
We will accomplish building reusability into our project by exposing the interfaces to our objects publicly while hiding the implementation in an assembly containing only internal classes and then exposing the interfaces to our classes in a "builder" assembly. The builder assembly will be the only assembly allowed to make instances of the core objects so the instantiation of our objects are effectively hidden and not available outside the builder classes.
Project Core
For this project we will build a simple Money object that can be added. Our four assemblies will consist of
Interfaces
Our interfaces define an IMoney interface that is IAddable<IMoney> (so we can add up our money in the test harness).
Concrete Implementation
The internal Money class implements IMoney and is "hidden" because it can only be instantiated from inside the containing assembly or a friend assembly.
Builders
The builder class will implement an interface specific to the builder: IMoneyMaker. The builder's responsibility is to instantiate the Money object and expose it through the IMoney interface.
Our solution
The first thing to note is that the Money is hidden in it's assembly because it is marked as internal.
internal class Money: IMoney, IAddable<IMoney>
As a result, from our money builder class, the money is not available because the Money class is only internally available to the FriendlyTesting.Hidden assembly. As a result, we can not instantiate a Money object until we make the FriendlyTesting.Friend assembly a "friend" of the FriendlyTesting.Hidden assembly indicating that it can be trusted with the internal classes.
public class MoneyBuilder: IMoneyMaker
{
#region IMoneyMaker Members
public IMoney MakeMoney(double amount)
{
// Money is not available here...
// it is internal to FriendlyTesting.Hidden
}
#endregion
}
Creating a Friendly Assembly
Our goal is to make instantiation possible only through our builder class, so in the next few steps we'll make these two assemblies play nicely with each other.
Step 1) Strongly Naming the Assemblies
The first step is to make all the assemblies strong named. So we will create keys for our two assemblies (friend and hidden) through the "sn" visual studio command line utility (Start>Programs>Visual Studio>Visual Studio Tools>Visual Studio Command Prompt).
Command line syntax:
sn -k <new key name>
Actual command:
sn -k FriendlyKey.snk
Generating the key for the FriendlyTesting.Friend assembly:
Generating the key for the FriendlyTesting.Hidden assembly:
Because these two assemblies are strongly named, all referenced assemblies also have to be strongly named, so we'll do the same thing for the FriendlyTesing.Core assembly that holds our core interface definitions.
Step 2) Add the Keys to the Projects
Now we have to add the keys we generated to their respective projects through the project's properties (right click on the project and select 'Properties'). We'll walk through this for the FriendlyTesting.Friend assembly:
Click "Sign the assembly" and select browse from the drop down that is activated.
Open FriendKey.snk and it will now appear in our project
Add HiddenKey.snk to the friendlyTesting.Hidden project and the core key to the core project in the same way.
Step 3) Extract public key for friend assembly
Next we will extract the public key from our FriendlyKey.snk (the key for the FriendlyTesting.Friend project). We need information from the public key in order to let the hidden project know which assemblies to be friendly with. We will generate a new public key called "FriendlyKey_public.snk" with the following command:
Command line syntax:
sn -p <existing private key name> <new public key to be created>
sn -p FriendlyKey.snk FriendlyKey_public.snk
Step 4) find out the public key's info
Next we will show the public key's info with the following command
sn -tp <public key name>
sn -tp FriendlyKey_public.snk
Step 5) Name a friend in the hidden assembly.
Next, we will update the Assembly.cs file in the FriendlyTesting.Hidden project making our hidden assembly's internal member available to the new friend.
Add the following line which contains the public key information we just gathered to the FriendlyTesting.Hidden Assembly.cs file. Assembly.cs is located in the project's Properties folder:
[assembly: InternalsVisibleTo("FriendlyTesting.Friend, PublicKey=00240000048000009400000006020000002400005253413100040000010001000bfc58cfde00927bad3d28eb63098979418a31af879120f08c8babb49c998a0a2af6416679763add28c735f3e8503301336339c321cfd23b6a346df22b32bf83e01c2aac16f5e64c355c1c66ecc892c6e8986a2c1fc05fcc5f90f595decf968506b41c64d49cfe5431deeed3d179c09c871eac6b10fcad24f473bcd3731a1fb2")]
After this we find that FriendlyMoney.Friend has access to the internal members of the FriendlyTesting.Hidden project. So we can make money with our MoneyBuilder.
return FriendlyTesting.Hidden.Money.Instantiate(amount);
Wrap up: Test Harness
In any other projects in which we would like to use our Money class, we can only instantiate it through the MoneyBuilder and our Money object can now only be referenced through the IMoney or IAddable<> interfaces.
class Program
static void Main(string[] args)
IMoneyMaker builder = MoneyBuilder.Instantiate();
// NOTE: FriendlyTesting.Hidden has no members available here
// so we are required to get IMoney through the MoneyBuilder
IMoney bagODoughA = builder.MakeMoney(10.25);
IMoney bagODoughB = builder.MakeMoney(5.50);
IMoney allDough = bagODoughA.Add(bagODoughB);
Console.WriteLine("Total Moolah: " +
allDough.Amount.ToString());
Console.ReadLine();
This approach gives us a great deal of flexibility because we control how our class will be consumed by other projects and we can ensure nothing will break by changing backend implementation... as long as the code works and the IMoney and IAddable<> interfaces do not change.
If you download the project, you can verify that the Money object is not available to the FriendlyTesting.TestHarness assembly. Also, if you remove the line added to FriendlyTesting.Hidden Assembly.cs, the project will not compile because the Money object will no longer be visible to the FriendlyTesting.Friendly assembly.
I hope you found this article useful.
Until next time,Happy coding
.NET Serialization to SQLite Database
Encrypting Connection Strings in ASP.Net 2.0 | http://www.c-sharpcorner.com/UploadFile/rmcochran/reuse10312006141427PM/reuse.aspx | CC-MAIN-2013-48 | refinedweb | 953 | 56.05 |
I love the BrickPi. It’s a really powerful system, with the potential to do a lot.
This BrickPython package extends the BrickPi_python package to make it easier to program. It introduces two things: objects, and coroutines.
As a taster, here’s a coroutine function to detect presence via a sensor, open a door, wait and close it again; all while still permitting other things to happen:
def openDoorWhenSensorDetected(self): motorA = self.motor('A') sensor1 = self.sensor('1') motorA.zeroPosition() while True: while sensor1.value() > 8: yield for i in motorA.moveTo( -2*90 ): yield for i in self.doWait( 4000 ): yield for i in motorA.moveTo( 0 ): yield
The actual implementation - which also supports user input to change the behavior at any time - is DoorControlApp in ExamplePrograms/DoorControl.py
The Python module is at
The source code is at
Objects make programming easier. Objects can be intelligent (for example, the Motor object can know its speed); they can be easier to debug (Motor can print itself in a useful way); and they separate out concerns (you can deal with one Motor at a time).
Creating programs to control things is difficult. Several things can be happening at once, and the program needs to be able to deal with all of them.
So, for example, if you had a little bot with two wheels each controlled by a motor, and with two sensors to detect the direction, you’d want to continuously adjust the speed of each motor, look at both the sensors, and maybe also react to keyboard input from the user.
But this makes programming rather hard. One might want to have a nice simple function:
runMotorAtConstantSpeedForTime( aSpeed, aTime )
But with normal Python programming model the program can only be executing one thing at a time. So if it’s executing that function, all the other input (other motor, sensors, user) is being ignored.
Ouch!
There are two conventional approaches to this problem, both with disadvantages:
This framework uses a third option: Coroutines. With a coroutine, you can write ‘long running’ functions, which nevertheless allow other things to go on before they return. This relies on the Python ‘yield’ statement, which allows a function to go ‘on hold’ while other processing happens.
Strictly speaking, what this package supports aren’t true coroutines: a ‘true’ coroutine has its own stack. Python doesn’t support that, but we have ways around the problem.
Python 3.4 will have better support for coroutines - see .
I’m grateful for David Beazley his tutorial on Python coroutines: .
To make our coroutines work, we need something that coordinates them and manages the interaction with the BrickPi. These are the classes Scheduler and its derived class BrickPiWrapper.
Scheduler handles coroutines, calling them regularly every ‘work call’ (some 20 times per second), and provides methods to manage them: starting and stopping them, combining them, and supporting features such as timeouts for a coroutine.
When the Scheduler stops a coroutine, the coroutine receives a StopCoroutineException; catching this allows the coroutine to tidy up properly.
The class BrickPiWrapper extends the Scheduler to manage the BrickPi interaction, managing the Motor and Sensor objects, calling the BrickPi twice for every work call (once before, and once after all the coroutines have run), taking data from and subsequently updating each Motor and Sensor.
So with the scheduler, here’s all that’s required to make a Motor move to a new position:
co = theBrickPiWrapper.motor('A').moveTo( newPositionIndegrees*2 ) theBrickPiWrapper.addActionCoroutine( co )
That will move to the new position - and while it’s doing it, everything else is still ‘live’ and being processed: user input, other coroutines, sensor input, you name it.
To make user input easy, this module provides and integration with the Tk graphical interface, using the Python Tkinter framework. The class that does this is TkApplication. By default it shows a small grey window which accepts keystrokes, and exits when the ‘q’ key is pressed, but this can be overridden.
Our example applications have a main class that derives from TkApplication, which itself derives from BrickPiWrapper.
Integrations with other frameworks, or none at all, are equally straightforward. The framework must call the method Scheduler.doWork() regularly, pausing for Scheduler.timeMillisToNextCall() after each call.
For example CommandLineApplication provides a scheduler for applications that don’t require user input.
The Motor class implements methods to record and calculate the current speed. It also implements the servo motor PID algorithm as the coroutine Motor.moveTo(), allowing the motor to position itself accurately to a couple of degrees. There’s also a ‘constant speed’ coroutine Motor.setSpeed().
The Sensor class provides a superclass and default implementation for all sensors. Its method Sensor.value() answers the current value (possible values depend on the sensor type). It provides two approaches to check for changes:
Current supported implementation are TouchSensor, UltrasonicSensor and LightSensor The physical configuration of sensors is set up as an initialization parameter to the Application class (TkApplication or CommandLineApplication):
TkApplication.__init__(self, {'1': LightSensor, '2': TouchSensor, '3': UltrasonicSensor })
To help with development, this package also runs on other environments. It’s been tested on Mac OS X, but should run on any Python environment. In non-RaspberryPi environments, it replaces the hardware connections with a ‘mock’ serial connection, which ignores motor settings and always returns default values (0) for sensors and motor positions.
In particular, all the unit tests will run on any environment.
The githup repository includes a script I find useful for when doing development from another Linux machine: runbp. This copies all the files in the directory tree below the location of the runbp script over the the same place (relative to home directory) on the BrickPi, and runs a command there. E.g.:
cd BrickPython/test ../runbp nosetests
runs nosetests in the test directory on the BrickPi, with all the relevant files copied over. You may well need to tweak and move the script to suit your own environment - and see also the comments at the start of the script. | https://pythonhosted.org/BrickPython/introduction.html | CC-MAIN-2022-27 | refinedweb | 1,002 | 54.93 |
During this tutorial we create the core gameplay of a difference game. This includes all the main functionality and two levels to play around with. After this we will also add a main menu and an ending screen.
To summarize, we will be using two images: one with a clean background and one with all the differences. We create markers that determine the position and dimension of each difference and use BitmapData operations to copy this on the screen. Randomization will keep the gameplay interesting and increase replayability.
Step 1: Entrance Point to the Game
The entrance point of a game is the so called document class. In case your coding program of preference is Flash CS3, check out the article Quick Tip: How to Use a Document Class in Flash on how to set up the document class in detail. In case you're using FlashDevelop, create a new project and choose "AS3 Project".
In the source files you can find both the FlashDevelop project in the root folder (Difference game tutorial.as3proj) and a Flash CS3 .fla file (flashcs3game.fla) in the src folder.
The class Main (Main.as) will be our starting point and it looks like this:
package { import flash.display.Sprite; import flash.display.Stage; import flash.events.Event; public class Main extends Sprite { public function Main():void { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); } private function init(e:Event = null):void { removeEventListener(Event.ADDED_TO_STAGE, init); // entry point // save stage instance STAGE = stage; } public static var STAGE:Stage; } }
The shown code is very basic and comes as a standard when creating a FlashDevelop project. The highlighted code is extra code that is added. We would like a quick way to access Stage. Stage is an object that holds numbers from the flash application like screen width and screen height and so forth. To make it easily accessible we make it a public static variable. To access stage in anywhere in the game we need to type Main.STAGE.
Step 2: Creating the Game Class
The Game class is what holds the game mechanics and is usually created after clicking on the "start game" button on the main menu. Since we don't have a main menu yet it will be created on load. Only one instance of Game can exist at a time and in this tutorial the Game class and all components that rely on it are placed in its own folder: "game".
Upon load of Game we would like three things to happen:
- Add initial graphics
- Add the gameloop
- Set the level number to 1 and tell it to start
We will start by adding initial graphics. The other two steps will follow later on in this tutorial.
Game Class: Construct the Body
package game { import flash.display.Sprite; import flash.events.Event; public class Game extends Sprite { public function Game() { // Add initial graphics InitGraphics(); } private function InitGraphics():void { } } }
You can see the Game class in its smallest form. The function Game() is the constructor function. Inside this function we can place any code we would like to run directly after creating the class.
To add the initial graphics we are first building two containers to hold them: a top and a bottom layer. Splitting the graphics provides an easy way to keep the graphics seperated. This is useful when controlling the overlap of multiple graphics. For a difference game you could imagine that the level with the two screens and differences goes to the bottom layer, and that a counter saying how many differences are left and a mouse cursor goes to the top layer. To keep things organized, we place the necessary code in the function InitGraphics.
private function InitGraphics():void { // Add layers addChild(layer_bottom); addChild(layer_top); }
Game Class: Instantiation of the Class in Main
To instantiate the Game class and add it to the display list of Main we place the following code in the init function of Main:
var mainGame:Game = new Game(); addChild(mainGame);
Doing this will first initialize all the variables specified in Game and then call its constructor function.
Step 3: Adding and Exporting Graphics
We would like to add the graphics of our first level to the library of the Flash Authoring Tool, in this case Flash CS3 Professional. The engine we are going to make in this tutorial requires one image without any differences and one image with all the differences included. From the image with differences we will later copy the parts of that image that contain differences and place them over the normal image. Because only the differences are copied and the rest is untouched, you might as well remove the rest of the image to save some filespace (see the image below).
First we create two symbols that hold the graphics. Use Insert > New Symbol and create two symbols: gfx_level1 and gfx_leveldifference1. Also export these as shown below. For reasons explained in the next step it is important that all symbols used for level graphics are named correctly. We use the base class Sprite because the container will not need a timeline for animation. Inside these containers you could draw up the levels with vector graphics, or import some bitmap graphics and place them there.
Step 4: Creating the Level Class
Next up we are creating a primitive form of the Level class. The Level is what holds the two images that make up both sides of the screen. Here all the differences are added randomly distributed over both sides of the screen and are made clickable. An action like guessing a difference will cause a visual change in the Level class and a change in information in the Game class: the difference piece will fade out and the differences left counter reduces by one.
For now we will focus on adding code to the class constructor of Level. A parameter given to the Level class is _levelNumber. We save this value in the class locally.
public function Level(_levelNumber:int) { // Save the data from the parameters locally this.levelNumber = _levelNumber; } private var levelNumber:int;
Next we are obtaining the image data from the library through the command getDefinitionByName. Now it will become clear why it was important to give the symbols the correct export name. By using getDefinitionByName we can get the reference to a Class object and having this reference allows us to create a new instance of this class. As shown in last step we've used Sprite for our level graphics containers so new classRef as Sprite is necessary to create our Sprite object.
//); private var imageClean:BitmapData; private var imageDifferences:BitmapData;
The function GetBitmapData turns your Sprite, which is given as a parameter, into a BitmapData object. First a clean bitmap is created that has the dimensions of Sprite. All pixels in it are the colour 0x00000000 which means they have alpha: 0, red:0, green:0, red:0, in other words a black invisible pixel. Using draw the Sprite is drawn on top of the bitmapdata. The BitmapData object gets returned.
private function GetBitmapData(ob:DisplayObject):BitmapData { var bd:BitmapData = new BitmapData(ob.width, ob.height, true, 0x00000000); bd.draw(ob); return bd; }
So why would we want to have BitmapData and not just leave it as a Sprite? First off, the concept of copying one part of a picture to another is the most efficient way when using bitmapdata. With the function copyPixels to be precise, which is a very fast operation.
Secondly a bitmap is faster for the computer to draw than a vector image. Fading in an image will take more processing power when the image is a complex vector image than when it is a bitmap. Bitmap won't care how detailed the picture is.
And if you're worried about looks: it looks exactly the same provided that the user has not zoomed into the screen.
For reference, here is the full sourcecode of Level so far:
package game { import flash.display.Bitmap; import flash.display.BitmapData; import flash.display.Sprite; import flash.geom.Point; import flash.geom.Rectangle; import flash.utils.getDefinitionByName; public class Level extends Sprite { public function Level(_levelNumber:int) { // Save the data from the parameters locally this.levelNumber = _levelNumber; /* * Get bitmap data of the level. * */ //); } // Variables private var levelNumber:int; private var imageClean:BitmapData; private var imageDifferences:BitmapData; } }
Step 5: Adding the startLevel Function to the Game Class
With this basic form of the Level class ready we can create a way to Level to be instantiated in Game. Instantiation of level will produce the two-sided screen graphics and user interaction. When a level is done, that instance will be removed, and a new instance will be created. The difference is in the parameter for level number which is increased by one.
The startLevel function in Game:
private function StartLevel():void { // Create a new level currentLevel = new Level(currentLevelNumber); layer_bottom.addChildAt(currentLevel,0); } ... private var currentLevel:Level; private var currentLevelNumber:int;
We also liked to start the level right away, by putting some code in the constructor of Game. Let's do this now.
public function Game() { // Add initial graphics InitGraphics(); // Start level 1 currentLevelNumber = 1; StartLevel(); }
Step 6: Producing two Clean Backgrounds
Moving on to expanding the constructor of the Level class. A typical look of a difference game is the two-sided screen. In our case we have two clean identical backgrounds with differences laid over them. To produce the two identical clean backgrounds we are going to create one Bitmap instance called mainBitmap and copy the image data of the clean image (imageClean) to it twice, at two different positions.
The following pieces of code are added to the constructor of the Level class:
mainBitmap = new Bitmap(); mainBitmapData = new BitmapData(Main.STAGE.stageWidth, Main.STAGE.stageHeight);
Let's break down what happens. In the first line we create a new Bitmap instance which will function as a container for the two clean images. A new BitmapData instance is made with the appropiate size, namely the size of the screen. We obtain the size of the screen through Main.STAGE.stageWidth and Main.STAGE.stageHeight which are static variables defined in Main, as discussed earlier.
var cleanRect:Rectangle = new Rectangle(0, 0, imageClean.width, imageClean.height); mainBitmapData.copyPixels(imageClean, cleanRect, new Point(0, 0)); mainBitmapData.copyPixels(imageClean, cleanRect, new Point(BG_WIDTH+1, 0));
Next up is the operation copyPixels. copyPixels is a function that belongs to the bitmapData class of Flash and can be used to copy bitmapdata from another instance to their own. In this case bitmapdata from imageClean (the source bitmapdata), supplied through the first parameter, is being copied to mainBitmapData (the target bitmapdata) The next parameter is a Rectangle object, which is used to determine exactly what part of the source bitmapdata will be copied. We would like to copy everything that is in imageClean, so the Rectangle is made to have the same dimensions as imageClean. The third parameter is a Point object which says where in the target bitmapdata the source bitmapdata will be copied to. This is the property that we would like to be different between the two copyPixel operations. We shift the x-location with BG_WIDTH+1 to the right. BG_WIDTH is the size of the clean image, and +1 just serves as a separator line.
mainBitmap.bitmapData = mainBitmapData; addChild(mainBitmap);
Finally we add the bitmapData from mainBitmapData to mainBitmap and add the bitmap to the display list.
The current result thus far in an illustration:
Step 7: Using Markers for Dimensions and Placement
We are leaving the Level class for a second to talk about creating so called "difference markers". Each difference on the screen needs to have a certain position and a certain size, and we would like an efficient way to define these. One possible way of doing this is by manually looking up the x and y positions and the width and height of each difference and store them somewhere in an array. This can however get time-consuming and is kind of boring to do. A more elegant way is by placing markers over the difference image indicating where the differences will come. Those markers will be scanned through code and the position and dimensions will be extracted.
Let's set up the difference markers. We are creating two types, a circle and a square:
Export the two types of markers according to the image below. Why exporting as RoundDifference and SquareDifference is important in this case will be handled later.
Step 8: Placing the Difference Markers
We will need to create a container that will hold the difference markers. Because we need a guideline, the clean image will be included in the container. Drag the clean image from the CS3 library to the stage, right-mouse click on it, hit Convert to symbol and export it with name "differences level 1" and linkage class "differences_level1" and base class flash.display.Sprite. Inside the container, add a new layer that will holds the difference markers. Drag the round or square differences and position and scale them as you see fit.
The image below shows the result:
Step 9: Scanning the Difference Markers
Once again we return to the constructor of the Level class to add some more code to it. We would like to scan the container holding the difference markers for all difference marker objects and store those in an array called differenceMarkerList.
Add the following code to the constructor of Level:
classRef = getDefinitionByName("differences_level" + levelNumber.toString()) as Class; differencesData = new classRef as Sprite;
Here, much like a few steps before, we obtain a Sprite object using getDefinitionByName. The object is then saved into variable differencesData. This is the whole container holding all the differences markers and a clear image.
Having obtained this data we can now search through it for objects that interest us. Also add the following code:
var i:int = differencesData.numChildren; while (i--) { var ob:* = differencesData.getChildAt(i); if (ob is RoundDifference || ob is SquareDifference) { differenceMarkerList.push(ob); } }
First we look how many children the container has in its display list and store this value in variable i. We then loop through all those children in search for a difference marker. The reason why we exported them earlier now becomes clear. By exporting a marker to let's say RoundDifference, it now belongs to the RoundDifference class. When you drag those markers to the container, all those objects are an instance of RoundDifference. Because of this, they also have the datatype RoundDifference, which we can use as an identifier of a certain type of object. N.B. it will not only have the datatype RoundDifference, but all the datatypes it inherits from MovieClip too.
Inside the while loop we first obtain the data of a child in differencesData. We do a datatype check to see if it either belongs to RoundDifference of SquareDifference. If yes, then add it to the array differenceMarkerList. Now we have got an array through which we can easily access the data of each difference and the clear image also put in the container is not included (because it's datatype did not belong to either RoundDifference or SquareDifference).
Step 10: Introduction to Difference Pieces
Because everything needs a name, we will call the differences visible on the screen "difference pieces". They are small objects that hold the graphics coming from the image with the differences, variable imageDifferences. Also they hold a hit field used for mouse interaction, the user must be able to click on them to activate a difference. These difference pieces must be added in pairs so they appear both on the left and right side of the screen. Each object must know who its linked partner is so they can communicate with each other. From these pairs only one of them must initially be visible. When clicking on either one of the pair, it will activate itself and also it's linked partner, to become visible.
The positioning of the difference pieces must happen according to the data set in the previously added difference markers. We will look what the position and dimensions of the difference marker is and using that information we extract a rectangular block of graphics from imageDifferences. Also the hit field placed inside a difference piece must match the form of the difference marker, so RoundDifference will generate a round hit field and SquareDifference a square hitfield.
With the introduction of the Piece class the following final class structure for this tutorial (which is very straightforward) is obtained. A Piece instance is located in
Level,
a Level instance in Game, and so forth.
The following illustration shows how the difference piece comes together:
Step 11: Creating the Piece Class
Every single difference piece will be created using the Piece class. In this class the graphics and the hit field reside. Input parameters for the Piece class are the difference marker object (Sprite), the image with differences i.e. imageDifferences (BitmapData), and whether or not the piece is triggered on creation (Boolean).
Piece Class: Adding the Class Skeleton
First we create the basic class form and save the third parameter.
package game { import flash.display.BitmapData; import flash.display.Sprite; public class Piece extends Sprite { public function Piece (_dif:Sprite,_imageDifferences:BitmapData,_bTriggered:Boolean) { // Save parameter locally bTriggered = _bTriggered; } private var bTriggered:Boolean; } }
Piece Class: Creating Hit Fields
We would like to create a hit field based on the form of the difference marker. So we've either got a square or a round and the dimensions are the same as the difference marker.
So first we create an empty Sprite that will hold the hit field:
// Create hitfield hitField = new Sprite();
Then a data type check is done to see if the difference marker supplied as a parameter is equal to RoundDifference. If yes, use the Flash graphics drawing function drawEllipse to draw a round shape. As you can see, the width and height of the difference marker as passed to it as well.
if (_dif is RoundDifference) { hitField.graphics.beginFill(0x0); hitField.graphics.drawEllipse(0, 0 , _dif.width, _dif.height); hitField.graphics.endFill(); }
Same thing as the round hit field, but now for a square:
else if(_dif is SquareDifference) { hitField.graphics.beginFill(0x0); hitField.graphics.drawRect(0, 0, _dif.width, _dif.height); hitField.graphics.endFill(); }
Finally we add the hit field to the display list, so it will become part of the clickable region. When we later put a MouseEvent listener onto a Piece object, this is the field that will respond to mouse clicks. We of course don't actually want to see the hit field, so the alpha is put as zero. You might think hitField.visible = false would work as well, but doing that will prevent hitField from becoming part of the clickable region.
addChild(hitField); hitField.alpha = 0;
Piece Class: Graphics
Each Piece object has a small rectangular bitmap containing some bitmapdata from imageDifferences. This is what we will be adding in this step.
Create a empty Bitmap that will hold the graphics:
// Set up the bitmap creation of the difference difBitmap = new Bitmap();
Create a new BitmapData object with the same dimensions as the difference marker. We will be performing a copyPixels operation so we will need a Rectangle and a Point object as discussed before. It's position on the target bitmap is simply (0,0). The rectangle will need to be set so it covers a small rectangular area on imageDifferences. First set the top-left position with _dif.x and _dif.y then set the width and height of the piece of data you want.
var difBitmapData:BitmapData = new BitmapData(_dif.width, _dif.height); var pt:Point = new Point(0,0); var rect:Rectangle = new Rectangle(_dif.x, _dif.y, _dif.width, _dif.height);
Perform the copy pixels operation and set the bitmapData of difBitmap to difBitmapData.
// Fill difference bitmap with difference data difBitmapData.copyPixels(_imageDifferences, rect, pt); difBitmap.bitmapData = difBitmapData;
Now that boolean parameter comes into action. If _bTriggered is indeed true, then immediately upon loading the screen the difference will be visible. Remember that one of the Piece objects in the pair needs to be shown already.
if (_bTriggered) addChild(difBitmap);
Step 12: Creating Piece Objects as Pairs
As said before we need to add those Piece objects as pairs to the screen. Let's make a function that does that. This function will need to be called each time a new pair is added to the screen. For example an image with 5 differences in it will need 5 pairs.
Create the function CreateDifferenceCouple inside the class Level. The parameter ob (int) is a number saying which difference marker must be used. In step 9 we scanned for difference markers and put those in array differenceMarkerList. We now retrieve the difference marker again using parameter ob as an index. The second parameter is side (String) and will be either value "left" or "right".
private function CreateDifferenceCouple(ob:int, side:String):void { // Get difference location and type var data:Sprite = differenceMarkerList[ob]; }
Next up we create two Piece objects. All of the following codes in this step must be added to the function we just created, CreateDifferenceCouple. Data is the difference marker we created in the code above, imageDifferences is the bitmapdata of the image with differences. Remember that the Piece class' third parameter is _bTriggered which is a Boolean value. If this boolean is true then the Piece object is shown directly upon creation. Using code side=="left" and side=="right" will generate boolean values. Only one of these can true at once, so it is a good way to tell the Piece class if it should be pre-shown or not.
// Create two piece objects var dif:Piece = new Piece(data, imageDifferences, side=="left"); var dif2:Piece = new Piece(data, imageDifferences, side=="right");
Add MouseEvent listeners to both Piece objects. This will only respond to the hit field added inside each Piece object. The function Trigger it calls will be handled later. You might notice false, 0, true which if you look into the Flash docs means that weakReference is true. This is to assist garbage collection; I'm not going to go into detail about the reasons for this (see Daniel Sidhion's Quick Tip for a great introduction to garbage collection), but I put weakReference as true in a lot of cases to avoid memory leaks.
// Add listeners to them dif.addEventListener(MouseEvent.MOUSE_DOWN, Trigger, false, 0, true); dif2.addEventListener(MouseEvent.MOUSE_DOWN, Trigger, false, 0, true);
Add the two Piece objects to the display list of Level.
// Add pieces to display list addChild(dif); addChild(dif2);
Position the two Piece objects. The x positioning works much the same way as in step 6 when we added clean backgrounds. The y position is the same on both sides of the screen.
// Position them. If it ends on the right side, add the width of the image to the x-position. dif.x = data.x; dif2.x = data.x + BG_WIDTH + 1; dif.y = dif2.y = data.y;
Finally add the Piece objects to pieceArray so they can be accessed later.
// Add pieces to piece array pieceArray.push(dif); pieceArray.push(dif2);
Step 13: Linking the Piece Objects Together as a Pair
To link pieces together we need to store a reference of one another in each of the two Piece objects. Create the function SetPartner inside the Piece class with as parameter _partner (Piece). partner is then stored locally.
public function SetPartner(_partner:Piece):void { partner = _partner; } ... private var partner:Piece;
Next in the function CreateDifferenceCouple call the newly created functions in the Piece class and add each others references.
// Assign partners dif.SetPartner(dif2); dif2.SetPartner(dif);
This illustration shows how the differences pieces are positioned and paired:
Step 14: Adding Piece Pairs in a Random Fashion
In this step we are going to add Piece pairs to the screen by calling the CreateDifferenceCouple function for each defined difference marker. We can put some randomization to the process by manipulating the side (String) parameter, which detemines which Piece will be pre-triggered. Either the left side or the right side is triggered. We will be using Math.Random() for it's random value, but have to be careful though: you don't want all differences pre-triggered on one side of the screen but instead evenly spread out. So we would need to perform an extra check to keep the balance.
We would like to add the placement of Piece pairs to happen right away when the level starts, so add the following code to the bottom of the constructor of Level:
i = amountOfDifferences; var leftnum:int, rightnum:int; // Amount of differences placed on sides var distribution:int = Math.floor(amountOfDifferences / 2); while (i--) { if (Math.random() < .5 && leftnum < distribution) { CreateDifferenceCouple(i, "left"); } else if (rightnum < distribution) { CreateDifferenceCouple(i, "right"); } else { CreateDifferenceCouple(i, "left"); } }
Like in step 9 we loop through each difference marker. In this case we only need it's index value, though. Variables leftnum and rightnum track how many difference have been added to either screen and the variable distribution says how many differences would ideally be placed on the left screen. In the first if-statement we create a 50% chance that a difference is triggered on the left side given that the amount of differences on the left side does not exceed distribution. Then we check if the amount of differences on the right side doesn't exceed distribution, and else we add it to the left side.
Step 15: Trigger
Remember that we added a MouseEvent listeners to both of the Piece objects in a pair, that call Trigger when clicked. Add the following code to the Level class:
private function Trigger(e:MouseEvent):void { // Remove trigger event of piece var ob:Piece = e.target.parent; ob.removeEventListener(MouseEvent.MOUSE_DOWN, Trigger) ob.Activate(); // Remove trigger event of partner var partner:Piece = ob.GetPartner(); partner.removeEventListener(MouseEvent.MOUSE_DOWN, Trigger) partner.Activate(); }
So what happens here, is we first get a reference to the Piece object that is just clicked. That is obtained through e.target.parent. The parent of hit field is the Piece object we want. Remove the event listener attached to it and call it's Activate function, which doesn't exist yet and we will create next step. The reference to the partner of the clicked Piece object is obtained through the function GetPartner() of piece. Also the partner has its event listener removed and is activated.
Step 16: Activating a Piece Object
When a Piece object is activated we would like it to fade in gently. Add the following code to the Piece class:
public function Activate():void { if (!bTriggered) { addChild(difBitmap); difBitmap.alpha = 0; } Tweener.addTween(difBitmap, { alpha:1, time:3 } ); }
In case the Piece object is not just visible, add the difBitmap to the display list. We make the alpha 0 so that it can fade from alpha 0 to alpha 1. In this case we are using the Tweener library but any other tweening library for as3 could fit here.
Step 17: Making a Bridge Between the Level and the Game Class
Whenever the uses clicks on a difference piece, the pair is activated, and the amount of differences that still has to be found is decreased by one. Let's call the variable that holds this value amountOfDifferencesLeft. We will be creating a differences remaining counter in a later step that makes use of this variable to show the right number. The idea is that game statistics like the current level you are in and amountOfDifferencesLeft are placed inside the Game class. These should be seperated from the Level class, as the Level class will just provide graphics and user interaction.
So we need to make a "bridge" between Level and Game to let the system know the amount of differences has decreased by one. Add the following code to the Game class.
private function SubtractDifferencesLeft():void { currentDifferencesLeft--; } ... // Game score variables private var currentDifferencesLeft:int;
To let the Level class be able to call this function, we must pass the reference to this function as a parameter to the Level class upon instantiation. A reference to a function is simply it's name stored inside a variable: var someFunctionReference:Function = SubtractDifferencesLeft. Now the function can be activated by using someFunctionReference(parameters...).
The code inside the function StartLevel inside Game now changes by the addition to an extra parameter to Level:
currentLevel = new Level(currentLevelNumber,SubtractDifferencesLeft);
And the new code inside Level to accomodate for the added parameter. We save the reference to the SubtractDifferencesLeft function for later use.
public function Level(_levelNumber:int,_fSubtractDifferencesLeft:Function) { // Save the data from the parameters locally this.levelNumber = _levelNumber; ... omitted code } ... omitted code private var fSubtractDifferencesLeft:Function;
Now to make the actual call, we add the following code to the Trigger function inside the Level class.
// Activate the add point function fSubtractDifferencesLeft();
Step 18: Checking if the Level is Completed
As you may have foreseen, when the amount of differences left is equal to zero, the level is complete. This check will have to be done inside the function we just created called SubtractDifferencesLeft.
The function SubtractDifferencesLeft with added code:
private function SubtractDifferencesLeft():void { currentDifferencesLeft--; // Is differences left equal to zero? If yes - level is won. if (currentDifferencesLeft == 0) { // Add actions for level completion } }
Step 18: Level Completion Graphics Notification
Very important is the verification to the player that the level is won through some graphics effect. In this tutorial is chosen to make the difference pieces blink when the level is won.
The following action is added to SubtractDifferencesLeft. We will be calling a new function inside the Level class called WinLevel.
if (currentDifferencesLeft == 0) { // Activate winning level graphics on the level instance currentLevel.WinLevel(); }
This function WinLevel is a public function, which is required to be accessed from Game.
public function WinLevel():void { var timer:Timer = new Timer(750,4); timer.addEventListener(TimerEvent.TIMER, function():void { var i:int = pieceArray.length; while (i--) { pieceArray[i].visible = !pieceArray[i].visible; } } ); timer.start(); }
Here we see the code needed to blink all the differences pieces. We make use of a Timer with interval 750 ms that runs 4 times. It is important to let it run an even amount of times so it is visibile after the timer has stopped.
In the event listener applied to the timer we set an anonymous function that simply loop through all the pieces in pieceArray and sets it's visibility to the opposite of what it currently is. Finally we start the timer.
Step 19: Adding a Pause and Moving to the Next Level
Internally we must change the currentLevel variable and increase it with 1. There must be a pause between the transition so the level does not immediately switch after winning.
Add the highlighted code the function SubtractDifferencesLeft of the Game class.
if (currentDifferencesLeft == 0) { // Activate winning level graphics on the level instance currentLevel.WinLevel(); // Move to next level if (currentLevelNumber < NUMBER_OF_LEVELS) NextLevel(); }
As you can see we've added the if-statement to see if it doesn't increase the value NUMBER_OF_LEVELS. This is done for the purposes of the tutorial: there are only 2 levels. What can be added here for the full game is some function called WinGame after completing all the levels. The new NextLevel function that is inside the Game class looks like this:
private function NextLevel():void { // Increase level number currentLevelNumber++; // Wait a couple of seconds, then move to the next level. new EasyTimer(5, StartLevel); }
So the first one is obvious, currentLevelNumber is increased by one. This value is always used as a parameter when creating a new Level instance. The second part is the delay. EasyTimer is a function supplied with the tutorial code package, inside the map utils, that has time in seconds as first parameter and a function reference as a second parameter. It is not important to know how this works internally. In this case it is a 5 second delay after which StartLevel is called.
Step 20: Smooth Transition Between Levels
We'd like to create a smooth transition between levels by fading out the current level into the new level. For this we will be using Tweener.
The StartLevel function of the Game class is expanded with the following code, added on top of the function:
// If an old level still exists, fade it out and remove it if (currentLevel) { var oldLevel:Level = currentLevel; Tweener.addTween(oldLevel, { alpha:0, time:5, onComplete: function():void { layer_bottom.removeChild(oldLevel); } }); }
Before a level is created, the variable currentLevel will produce null. If there is already a level and a new level is started through a call to StartLevel then the if-statement produces true. We create the variable oldLevel that will hold the reference temporarily. Then we add some Tweener code and let it fade out to alpha=0 in a timespan of 5 seconds. On complete function is called that removes the child from the display list. This is important, because although invisible it is still there and taking RAM. It's last reference was being held in the display list and by removing that reference through removeChild we make it eligible for garbage collection.
Step 21: Blend Mode
As the level fades we can see a problem: the difference pieces become visible as rectangles. The figure below demonstrates this problem.
What we need to do is apply BlendMode.LAYER to the level that is fading out. Add the following line of code (highlighted):
// If an old level still exists, fade it out and remove it if (currentLevel) { var oldLevel:Level = currentLevel; oldLevel.blendMode = BlendMode.LAYER; Tweener.addTween(oldLevel, { alpha:0, time:5, onComplete: function():void { layer_bottom.removeChild(oldLevel); } }); }
Step 22: Creating a Custom Mouse Cursor
Characteristic to a difference game is that instead of the normal mouse cursor, a double ring cursor is used. This allows the player to match up differences on both sides of the screen.
Custom Mouse Cursor: Creating the Graphics
Create a red ring and export it as a Graphic type. Then create a movieclip, name it "cursor" and export it as gfx_mouseCursor. Inside it paste two red rings. The middle of the movieclip must be between the two rings and is the position where the mouse is locked on to. The distance from the circles to the middle is equal to the the width of the background image (plus the one pixel that serves as divider).
Custom Mouse Cursor: The Code to Make it Work
In step 2 we had added code to show the mouse pointer. Let's bring that back up again.
private function InitGraphics():void { // Add layers addChild(layer_bottom); addChild(layer_top); // Add differences left counter layer_top.addChild(g_differencesCounter); // Add mouse cursor to top layer layer_top.addChild(g_mouseCursor); }
When the Game class is loaded, it adds the two layers. To the top layer it adds the mouse cursor. The variable g_mouseCursor is created as follows:
private var g_mouseCursor:Sprite = new gfx_mouseCursor();
So now the graphics are added and it still needs a way to move. It is time to create our gameloop. Inside the constructor of Game, add the following code:
Next up is the creation of a gameloop. A gameloop is a single loop in the game that runs all the time if the game is active and handles the changes of any components linked to it. Think about moving, creating or destroying entities. Anything that needs some update according to time or some event. The gameloop is present in the Game class and is called through an Event.ENTER_FRAME listener. Although the gameloop in a difference game doesn't play a particulary big role, you can imagine that in a game with a lot of moving enemies over the screen it would.
public function Game() { // Add an enter frame loop to update the mouse cursor addEventListener(Event.ENTER_FRAME, Update, false, 0, true); // Add initial graphics InitGraphics(); // Start level 1 currentLevelNumber = 1; StartLevel(); }
The Update function is called once a frame. For our game, Update is a very simple function:
private function Update(e:Event):void { // Update mouse cursor position g_mouseCursor.x = mouseX; g_mouseCursor.y = mouseY; }
So as you might guess, it just positions the cursor the same as the mouse position at each new frame.
Step 23: Creating the Difference Counter
One thing that is left to add in the game UI is the difference counter that shows how many differences are remaining.
Difference Counter: Graphics
Add a couple of keyframes and in those keyframes add numbers starting from 0. A layer of code is added with a simple stop() in it. This approach allows you to create any graphics you want for the numbers and doesn't limit you to using a textfield.
Difference Counter: Code
Just like the mouse cursor, we add the difference counter to InitGraphics:
private function InitGraphics():void { // Add layers addChild(layer_bottom); addChild(layer_top); // Add differences left counter layer_top.addChild(g_differencesCounter); // Add mouse cursor to top layer layer_top.addChild(g_mouseCursor); }
And define g_differencesCounter as:
private var g_differencesCounter:MovieClip = new gfx_differencesCounter();
On loading the level the amount of differences must be counted as extracted from Level. Create a function in Level that returns the amount of differences. Then in startLevel set the variable currentDifferencesLeft to this number.
private function StartLevel():void { // If an old level still exists, fade it out and remove it if (currentLevel) { var oldLevel:Level = currentLevel; oldLevel.blendMode = BlendMode.LAYER; Tweener.addTween(oldLevel, { alpha:0, time:5, onComplete: function():void { layer_bottom.removeChild(oldLevel); } }); // Create a new level currentLevel = new Level(currentLevelNumber,SubtractDifferencesLeft); layer_bottom.addChildAt(currentLevel,0); // Set and get some score variables currentDifferencesLeft = currentLevel.GetAmountOfDifferences(); // Update differences counter graphics g_differencesCounter.gotoAndStop(currentDifferencesLeft+1); }
Whenever a difference is found the counter should update. SubtractDifferencesFound is the function that is called when the user finds a difference, so there it must be added.
private function SubtractDifferencesLeft():void { currentDifferencesLeft--; // Update differences counter g_differencesCounter.gotoAndStop(currentDifferencesLeft+1); // Is differences left equal to zero? If yes - level is won. if (currentDifferencesLeft == 0) { // Code omitted. } }
We move the differences counter to the desired keyframe.
Intermezzo: Working Demo
So after a lot of coding, we are finally ready to see a working demo of the difference game. On each difference a hand cursor is added by the use of buttonMode = true to make it easier to spot the differences. Notice that when using tab you see yellow-tab boxes appear. This is caused by the buttonMode setting. Because buttonMode will be false anyway in the final version (because the hand cursor shouldn't be visible) this will not pose a problem.
Step 24: The Main Menu
The main menu in this tutorial will have an entrance page with background, title and buttons. Next to this are an instruction page and a credits page. Also for some eye-candy we've added alpha-smoothing between each page. Let's start by adding graphics.
Main Menu: Graphics
The chosen way to build up our main menu is this: we have an object called "main menu background" (exported as gfx_mainmenu_background) which holds just the background graphics. Then buttons and text are overlaid. Some pages contain the game title in it (credits) and some don't (instructions). All buttons are simply exported in Flash as buttons and given identifier names like butStart, butInstructions, butCredits. In the credits page and the instructions page is the same return button with identifier butReturn.
The following illustration demonstrates.
Main Menu: Initialization Code
We will be expanding Main.as to show the main menu instead of directly starting the game.
private function InitGraphics():void { g_menu.addChild(g_mainmenu_background); g_menu.addChild(g_mainmenu_main); } private function InitListeners():void { g_mainmenu_main.butStart.addEventListener(MouseEvent.MOUSE_DOWN, ButtonAction); g_mainmenu_main.butInstructions.addEventListener(MouseEvent.MOUSE_DOWN, ButtonAction); g_mainmenu_main.butCredits.addEventListener(MouseEvent.MOUSE_DOWN, ButtonAction); g_mainmenu_instructions.butReturn.addEventListener(MouseEvent.MOUSE_DOWN, ButtonAction); g_mainmenu_credits.butReturn.addEventListener(MouseEvent.MOUSE_DOWN, ButtonAction); } // gfx private var g_menu:Sprite = new Sprite(); private var g_mainmenu_background:Sprite = new gfx_mainmenu_background; private var g_mainmenu_main:gfx_mainmenu_buttons = new gfx_mainmenu_buttons; private var g_mainmenu_credits:gfx_mainmenu_credits = new gfx_mainmenu_credits; private var g_mainmenu_instructions:gfx_mainmenu_instructions = new gfx_mainmenu_instructions;
To get the code organized we the code up into two functions, InitGraphics and InitListeners. We create some instances of the screens shown above and also create g_menu which will serve as a container. In InitGraphics we add the graphics when starting up the game, so that starts with the main screen. We will always keep g_mainmenu_background visible and swap other graphics on top of it.
Main Menu: Toggle the Visiblilty of the Main Menu
Thanks to our container named g_menu we have a quick way to turn on or off everything that belongs to the menu. The if-statements check if the graphics are really there to eliminate any possible errors.
private function ShowMainMenu():void { if (!contains(g_menu)) addChild(g_menu); } private function HideMainMenu():void { if (contains(g_menu)) removeChild(g_menu); }
Main Menu: Adding Life to the Buttons
As you noticed we added listeners to all the buttons in InitListeners. Those linked to the function ButtonAction.
private function ButtonAction(e:MouseEvent):void { switch(e.target.name) { case "butStart": StartGame(); HideMainMenu(); break; case "butInstructions": SwitchPage(g_mainmenu_instructions); break; case "butCredits": SwitchPage(g_mainmenu_credits); break; case "butReturn": SwitchPage(g_mainmenu_main); break; } }
In ButtonAction we check for the instance name of the button that is clicked and do a specific command. I much prefer this method because it simplifies a lot of things. Like in this case not having to write a function for each button. So butStart will start the game through StartGame, a function we have yet to make. It also hides the main menu.
Main Menu: Switching Pages
The SwitchPage function will fade out the current page that is visibile and fade in the page that is given as a parameter.
private function SwitchPage(_newPage:Sprite):void { // to prevent errors, remove old tweens if (Tweener.isTweening(previousPage)) { Tweener.removeAllTweens(); g_menu.removeChild(previousPage); } // set new references previousPage = currentPage; currentPage = _newPage; // transition effect old page previousPage.mouseChildren = false; Tweener.addTween(previousPage, { alpha:0, time:1.5, onComplete: function():void { g_menu.removeChild(previousPage); } }) // transition effect new page g_menu.addChild(currentPage); currentPage.alpha = 0; currentPage.mouseChildren = true; Tweener.addTween(currentPage, { alpha:1, time:1.5 } ); } private var currentPage:Sprite; private var previousPage:Sprite;
The first section was added to remove errors that occurred when rapidly fading through pages. New references are set and the current page becomes the previous page and the page in the parameter the current page. Then we fade out the old page using Tweener and also block the mouse interaction, and when it's done we remove it from the display list. The new page is faded in from alpha 0. It is activated for mouse interaction just in case it was blocked before.
Main Menu: Starting the Game
Remember the code in Main.as we initially made to start the game right away? That is now wrapped in a function.
private function StartGame():void { // Game init mainGame = new Game(); addChild(mainGame); }
Step 24: A Game Ending
So what is a game without a proper ending? For this tutorial we add an animation after winning the second level letting the player know he/she has won and at the end the screen becomes clickable and can take you back to the main menu.
Game Ending: The Win Game Condition
// Is differences left equal to zero? If yes - level is won. if (currentDifferencesLeft == 0) { // Activate winning level graphics on the level instance currentLevel.WinLevel(); // Move to next level if (currentLevelNumber < NUMBER_OF_LEVELS) NextLevel(); // No more levels. Win game! else WinGame(); }
Remember the function SubtractDifferencesLeft in Game.as? In here we check if the amount of differences left is zero, and if yes, win the game. Also we take it to the next level if the current level is lower than the amount of levels we've got (2). Now the else-statement is added that the user must get the win animation after beating all levels.
Game Ending: Animation
Here is a screenshot of the end-game animation.
At the last frame of the end game animation we've added code that dispatches an Event, telling the Game class that is it done animating and can activate the full-screen button.
stop(); dispatchEvent(new Event("DONE"));
Game Ending: Showing an Animation and Activating the Button
Add the following code to Game.as:
private function WinGame():void { g_endgame = new gfx_endgame(); g_endgame.stop(); // fade in end game animation addChild(g_endgame); g_endgame.alpha = 0; Tweener.addTween(g_endgame, { alpha:1, delay:5, time:2, onComplete: function():void { // remove the current level from the display list layer_bottom.removeChild(currentLevel); // start playing animation g_endgame.play(); } }); // mouse interaction at the end of the animation g_endgame.addEventListener(MouseEvent.MOUSE_DOWN, ReturnToMenu); g_endgame.mouseEnabled = false; g_endgame.addEventListener("DONE", function(e:Event):void { // enable mouse interaction g_endgame.mouseEnabled = true; // buttonmode so it shows the hand cursor g_endgame.buttonMode = true; } ); } private var g_endgame:MovieClip;
First we instantiate the end game animation and prevent it from running. The animation is added to the display list, and we use Tweener to fade it in. There is a delay set of 5 seconds so you can see the difference pieces of level 2 blink before moving on. On completion of the fade in we remove the current level from the display list and tell the animation to run.
Next we add a mouse listener and display it for mouse interaction. We check the animation for the event "DONE" (code in the last frame of the animation dispatches that event) and upon receiving we enable the mouse interaction of the animation.
Game Ending: Resetting the Game
The final step is to make the code that is called when the button on the end-game animation is clicked. This resides in the function ReturnToMenu of Game.as and links back to Main.as through an event dispatch. Let's look at ReturnToMenu.
private function ReturnToMenu(e:MouseEvent):void { dispatchEvent(new Event("GAME_OVER")); }
This dispatches the custom event "GAME_OVER". To catch it, we must put an eventListener on the instance of the Game class. Let's add this listener to the StartGame function of Main.as.
private function StartGame():void { // Game init mainGame = new Game(); mainGame.addEventListener("GAME_OVER",ResetGame); addChild(mainGame); }
When the event is caught, the function ResetGame is called, that removed the Game instance from the display list and also sets its value to null so it can be garbage collected. Also we remove its eventListener just in case. After all this we can show the main menu again.
private function ResetGame(e:Event):void { removeChild(mainGame); mainGame.removeEventListener("GAME_OVER",ResetGame); mainGame = null; ShowMainMenu(); }
Conclusion
So there we have it! I hope you have enjoyed this tutorial and were able to pick up some programming tricks to help you create your own solutions.
This game could still be expanded a lot. Think about hints, particle effects, points and penalties. Although the artwork is the main attraction of a difference game, some extra effects and options can make the game better, and having organized code helps you adding that without too much hassle. Development time can be cut down by choosing a proper way to create levels. Imagine that instead of the current solution, we chose to export each and every difference and turn in into a fading out animation. For 15 levels with 10 differences each that would have needed 150 manually created animations. Very time consuming!
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://gamedevelopment.tutsplus.com/tutorials/test-your-observation-skills-with-an-as3-difference-game--active-4731 | CC-MAIN-2018-34 | refinedweb | 7,933 | 56.05 |
It’s just a syntax, nothing more than that for saying that “the function is pure virtual”.
A pure virtual function is a virtual function in C++ for which we need not to write any function definition and only we have to declare it. It is declared by assigning 0 in declaration.
Here is an example of pure virtual function in C++ program
#include<iostream> using namespace std; class B { public: virtual void s() = 0; // Pure Virtual Function }; class D:public B { public: void s() { cout << " Virtual Function in Derived class\n"; } }; int main() { B *b; D dobj; b = &dobj; b->s(); }
Virtual Function in Derived class | https://www.tutorialspoint.com/why-is-a-cplusplus-pure-virtual-function-initialized-by-0 | CC-MAIN-2022-21 | refinedweb | 107 | 54.05 |
It's been over six years since I first started working on Alzabo (probably=
=20
close to seven, all told), and to tell the truth I'm a little sick of it=20
;)
I'm still proud of it, since it has served me well on many projects, but=20
it also has a lot of problems, many of which are rather intractable,=20
unless I want to destroy backwards compatibility.
I'm starting to work on something which will do the things I like best=20
about Alzabo, which is mostly the ability to generate very complex queries=
=20
through the use of Perl data structures, rather than string manipulation.
For now, I'm calling this thing "Q" (for Query), but I'm going to change=20
the name before it goes to CPAN, obviously (I like it, but a=20
single-character top-level namespace seems a bit presumptuous ;)
The work-in-progress can be seen in my personal SVN repo at=20 for those who are interested in following=
=20
along.
I'd also like to get feedback from people who have used Alzabo, are using,=
=20
or who looked at it and chose something else. Things I'm interested in=20
are:
* What you love about Alzabo.
* What you hate about Alzabo.
* What problems you had getting it do X.
* I stopped using Alzabo because of X.
* I chose something else because of X.
For this last one, if you chose something else because you prefer=20
something that makes classes more primary (like Class::DBI/DBIx::Class),=20
then please don't respond. I know that some people like that sort of=20
thing, but it was never my goal to support that way of working=20
with Alzabo, and I won't be changing that with Q.
After my sig you can read the current pod for Q, which outlines many of=20
Alzabo's problems (from my perspective), and some of my goals for Q.
If anyone wants to become a co-maintainer of Alzabo, that'd be great. I=20
don't plan on letting it rot, because=EFI still use it for many projects. I=
=20
probably won't be doing any _major_ new development on it, or trying to=20
fix bugs that require major design changes. If you'd like to do either of=
=20
these things, I'd be happy to help you.
Thanks,
-d
VegGuide.Org
Your guide to all that's veg. My book*/
DESCRIPTION
The goal of this module is to provide a (relatively) simple, flexible
way to *dynamically* generate SQL queries from Perl. The emphasis here
is on dynamic, and by that I mean that the structure of the SQL query
may change dynamically.
This is different from simply changing the parameters of a query
dynamically. For example:
SELECT user_id FROM User where username =3D ?
While this is a dynamic query in the sense that the username is
parameter-ized, and may change on each invocation, it is still easily
handled by a phrasebook class. If that is all you need, I suggest
checking out "Class::Phrasebook::SQL", "Data::Phrasebook", and
"SQL::Library" on CPAN.
Why Not Use a Phrasebook?
Let's assume we have a simple User table with the following columns:
username
state
access_level
Limiting ourselves to queries of equality ("username =3D ?"), we would
still need 32 (1 + 5 + 10 + 10 + 5 + 1) entries to handle all the
possible combinations. Now imagine adding in variants like allowing fo=
r
wildcard searches using LIKE or regexes, or more complex variants
involving an "OR" in a subclause.
This gets even more complicated if you start adding in joins, outer
joins, and so on. It's plain to see that a phrasebook gets too large t=
o
be usable at this point, and you'd probably have to write a program ju=
st
to generate the phrasebook and keep it up to date at this point!
Why Not String Manipulation?
The first solution that might come to mind is to dump the phrasebook i=
n
favor of string manipulation. This is simple enough at first, but
quickly gets ugly. Handling all of the possible options correctly
requires lots of fiddly code that has to concatenate bits of SQL in th=
e
correct order.
The Solution
Hopefully, this module provides a solution to this problem. It allows
you to specify queries in the form of *Perl data structures*. It
provides a set of objects to represent specific parts of a schema,
specifically tables, columns, and foreign keys. Using these objects yo=
u
can easily generate very complex queries by combining them with string=
s
and passing them to the appropriate query-generating method.
I also hope that this module can be used as a building block to build
other tools. A good example would be a tool for generating DDL
statements (like Alzabo ;).
HISTORY AND GOALS
This module comes from my experience writing and using Alzabo. Alzabo
does everything this module does, and a lot more. The fact that Alzabo
does so many things has become a bit problematic in its maintenance, a=
nd
Alzabo is over 6 years old at this time (August of 2006).
Problems with Alzabo
Here are some of the problems I've had with Alzabo over the years:
* Adding support for a new RDBMS is a lot of work, so it only suppor=
ts
MySQL and Pg. Alzabo tried to be really smart about preventing use=
rs
from shooting themselves in the foot, and required a lot of specif=
ic
code for each DBMS to achieve this.
* It doesn't support multiple versions of a DBMS very well. Either i=
t
doesn't work with an older version at all, or it doesn't support
some enhanced capability of a newer version.
On a side note, if DBMS's were to provide a standard API for askin=
g
questions about their DDL syntax and vcapabilities like "what is t=
he
max number of chars in a column name?" or "what data are the names
of each data type?" that would have made things infinitely easier.
* There are now free GUI design tools for specific databases that do=
a
better job of supporting the database in question.
* Alzabo separates its classes into Create (for generation of DDL) a=
nd
Runtime (for DML) subclasses, which might have been worth the memo=
ry
savings six years ago, but just makes for an extra hassle now.
* When I originally developed Alzabo, I thought that generating OO
classes that subclasses the Alzabo classes and added "business
logic" methods was a good idea, thus "Alzabo::MethodMaker".
Nowadays, I prefer to have my business logic classes simple use th=
e
Alzabo classes. In other words, I now prefer "has-a" versus "is-a"
object design for this case.
Method auto-generation based on a specific schema can be quite
handy, but it should be done in the domain-specific classes, not a=
s
a subclass of the core functionality.
* Storing schemas in an Alzabo-specific format is problematic for ma=
ny
obvious reasons. It's simpler to simply get the schema definition
from an existing schema, or to allow users to define it in code.
* Alzabo's referential integrity checking was really cool back when =
I
mostly used MySQL with MYISAM tables, but is a burden nowadays.
* I didn't catch the testing bug until quite a while after I'd start=
ed
working on Alzabo. Alzabo's test suite is nasty. Q will be built f=
or
testability, and I'll make sure that high test coverage is part of
my ongoing goals.
* Alzabo does too many things, which makes it hard to explain and
document.
Goals
Overall, rather than coming up with a very smart solution that allows =
us
to use 80% of a DBMS's functionality, I'd rather come up with a 100%
solution that's dumber. It's easy to add smarts on top of a dumb layer=
,
but it can be terribly hard to add that last 20% once you've got
something really smart.
A good example of this is Alzabo's support of database functions like
"AVG" or "SUM". It supports them in a very clever way, but adding
support for a new function can be a pain, especially if it has odd
syntax.
The goals for Q, based on my experience with Alzabo, are the following=
:
* Provide a simple way to generate queries dynamically. I really lik=
e
the way this works with Alzabo, except that Alzabo is not as
flexible as I'd like.
Specifically, I want to be able to issue updates and deletes to mo=
re
than one row at a time. I want support for sub-selects, unions, et=
c.
and all that other good stuff.
* I want complex query creation to requires less fiddliness than
Alzabo. This means that class to represent queries will be a littl=
e
smarter and more flexible about the order in which bits are added.
For example, in using Alzabo I often come across cases where I wan=
t
to add a table to a query's join *if it hasn't already been added*=
=2E
Right now there's no nice simple way to do this. Specifying the
table twice will cause an error. It would be nice to simply be abl=
e
to do this:
$query->join( $foo_table =3D> $bar_table )
unless $query->join_includes($bar_table);
* Provide the base for a tool that does what the
"Alzabo::Runtime::Row" class does. There will be a separate tool
that takes query results and turns them into low-level "row" objec=
ts
instead of returning them as DBI statement handles.
This tool will support something like Alzabo's "potential" rows,
which are objects that have the same API as these row objects, but
do not represent data in the DBMS.
Finally, it will have support the same type of simple "unique row
cache" that Alzabo provides. This type of dirt-simple caching has
proven to be a big win in many applications I've written.
OTHER MODULES
This module is based on many years of using and maintaining "Alzabo",
which is a much more ambitious project. There are modules similar to
this one on CPAN:
* SQL::Abstract
On Aug 21, 2006, at 4:42 AM, Dave Rolsky wrote:
> * Adding support for a new RDBMS is a lot of work, so it only
> supports
> MySQL and Pg. Alzabo tried to be really smart about
> preventing users
> from shooting themselves in the foot, and required a lot of
> specific
> code for each DBMS to achieve this.
>
> * It doesn't support multiple versions of a DBMS very well.
> Either it
> doesn't work with an older version at all, or it doesn't
> support
> some enhanced capability of a newer version.
>
> * When I originally developed Alzabo, I thought that
> generating OO
> classes that subclasses the Alzabo classes and added "business
> logic" methods was a good idea, thus "Alzabo::MethodMaker".
> Nowadays, I prefer to have my business logic classes simple
> use the
> Alzabo classes. In other words, I now prefer "has-a" versus
> "is-a"
> object design for this case.
For what it's worth, you might want to take a peek at how I handle
this in my DBIx::SQLEngine distribution.
It's certainly not perfect, but I think it's aiming at the same kind
of goals you're talking about -- ie, portable query generation from
Perl data structures, with the class-builder stuff as an optional
layer rather than as the core interface.
I approached the portability issues by leveraging DBIx::AnyDBD to
rebless the connection/query-generator object into an appropriate
subclass based on the DBI driver and remote DBMS in use, plus some
DBMS-specific version-detection logic, for example to determine
whether we're talking to MySQL 3.x, 4.x, or 5.x.
(Along the way I discovered an annoying limitation of DBIx::AnyDBD,
in that it juggles the inheritance hierarchy instead of creating new
classes; I've been meaning to replace this with an implementation
based on NEXT but haven't gotten around to it yet...)
Each subclass can tweak the query-generation methods to cope with
local idiosyncrasies in SQL syntax, and returns information about the
capabilities of the connected database, so that higher-level
application code can detect whether the target system supports
transactions, allows simultaneous active STHs on a single DBH, etc.
The core interface is through the Driver base class and the various
subclasses thereof:
Feel free to grab any bits of this that seem helpful, or to ignore it
if not...
-Simon
On Mon, 21 Aug 2006, Dave Rolsky wrote:
> I'm starting to work on something which will do the things I like best
> about Alzabo, which is mostly the ability to generate very complex queries
> through the use of Perl data structures, rather than string manipulation.
>
> For now, I'm calling this thing "Q" (for Query), but I'm going to change
> the name before it goes to CPAN, obviously (I like it, but a
> single-character top-level namespace seems a bit presumptuous ;)
>
> The work-in-progress can be seen in my personal SVN repo at
> for those who are interested in following
> along.
Depending on your time table for this, have you considered trying to write
it in Perl 6?
FYI, considering the new and rapidly improving v6.pm and related CPAN
infrastructure such as Moose, you can write in Perl 6 now but people can
use it like its a Perl 5 module, because v6.pm translates it to a Perl 5
.pm and/or .pmc, and you can already call Perl 5 and Perl 6 code from each
other, so you can still use the Perl 5 DBI. And yes, I participate in the
Pugs+Perl6 project on a semi-daily basis.
(I am confident enough in this that I have already halted my Perl 5 CPAN
development and any further releases will be written in Perl 6. This
includes my Rosetta DBMS project, so it will now only be possible to use
Rosetta as part of a Perl 6 including stack. Note that I'm generally not
making further announcements about my projects until they are actually
ready to use, rather than perpetuate vaporware, but this brief note to
Poop-group is an exception since this is a low-volume list and this
anecdote may help you decide the direction of your own project.)
-- Darren Duncan
On Mon, 21 Aug 2006, Darren Duncan wrote:
> Depending on your time table for this, have you considered trying to write
> it in Perl 6?
Not really. Absent enough docs to really understand P6 I don't want to
jump in just yet. I also want to use this thing I'm writing relatively
soon.
I fully expect that once P6 is done (whatever that means) I'll use it, but
Pugs is a moving target at the moment, and I'm never sure whether I'm
misunderstanding the P6 syntax or Pugs has a bug. Yes, the Pugs and P6
folks are very helpful on IRC, but constantly asking questions on IRC is
not very conducive to productivity for me.
-dave
/*===================================================
VegGuide.Org
Your guide to all that's veg. My book blog
===================================================*/ | http://sourceforge.net/p/poop/mailman/poop-group/thread/Pine.LNX.4.64.0608250810270.7952@urth.org/ | CC-MAIN-2014-52 | refinedweb | 2,555 | 66.88 |
I'm trying to test my flask application using unittest. I want to refrain from flask-testing because I don't like to get ahead of myself.
I've really been struggling with this unittest thing now. It is confusing because there's the request context and the app context and I don't know which one I need to be in when I call db.create_all().
It seems like when I do add to the database, it adds my models to the database specified in my app module (init.py) file, but not the database specified in the setUp(self) method.
I have some methods that must populate the database before every test_ method.
How can I point my db to the right path?
def setUp(self):
#self.db_gd, app.config['DATABASE'] = tempfile.mkstemp()
app.config['TESTING'] = True
# app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + app.config['DATABASE']
basedir = os.path.abspath(os.path.dirname(__file__))
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + \
os.path.join(basedir, 'test.db')
db = SQLAlchemy(app)
db.create_all()
#self.app = app.test_client()
#self.app.testing = True
self.create_roles()
self.create_users()
self.create_buildings()
#with app.app_context():
# db.create_all()
# self.create_roles()
# self.create_users()
# self.create_buildings()
def tearDown(self):
#with app.app_context():
#with app.request_context():
db.session.remove()
db.drop_all()
#os.close(self.db_gd)
#os.unlink(app.config['DATABASE'])
Here is one of the methods that populates my database:
def create_users(self):
#raise ValueError(User.query.all())
new_user = User('Some User Name','[email protected]','admin')
new_user.role_id = 1
new_user.status = 1
new_user.password = generate_password_hash(new_user.password)
db.session.add(new_user)
Places I've looked at:
And the flask documentation:
one issue that your hitting is the limitations of flask contexts, this is the primary reason i think long and hard before including a flask extension into my project, and flask-sqlalchemy is one of the biggest offenders. i say this because in most cases it is completely unnecessary to depend on the flask app context when dealing with your database. Sure it can be nice, especially since flask-sqlalchemy does a lot behind the scenes for you, mainly you dont have to manually manage your session, metadata or engine, but keeping that in mind those things can easily be done on your own, and for doing that you get the benefit of unrestricted access to your database, with no worry about the flask context. here is an example of how to setup your db manually, first i will show the flask-sqlalchemy way, then the manual plain sqlalchemy way:
the
flask-sqlalchemy way:
import flask from flask_sqlalchemy import SQLAlchemy app = flask.Flask(__name__) db = SQLAlchemy(app) # define your models using db.Model as base class # and define columns using classes inside of db # ie: db.Column(db.String(255),nullable=False) # then create database db.create_all() # <-- gives error if not currently running flask app
the standard sqlalchemy way:
import flask import sqlalchemy as sa from sqlalchemy.ext.declarative import declarative_base # first we need our database engine for the connection engine = sa.create_engine(MY_DB_URL,echo=True) # the line above is part of the benefit of using flask-sqlalchemy, # it passes your database uri to this function using the config value # SQLALCHEMY_DATABASE_URI, but that config value is one reason we are # tied to the application context # now we need our session to create querys with Session = sa.orm.scoped_session(sa.orm.sessionmaker()) Session.configure(bind=engine) session = Session() # now we need a base class for our models to inherit from Model = declarative_base() # and we need to tie the engine to our base class Model.metadata.bind = engine # now define your models using Model as base class and # anything that would have come from db, ie: db.Column # will be in sa, ie: sa.Column # then when your ready, to create your db just call Model.metadata.create_all() # no flask context management needed now
if you set your app up like that, any context issues your having should go away. | http://m.dlxedu.com/m/askdetail/3/4ca1145f89ad47a422cdc7cbcb5fcefa.html | CC-MAIN-2018-39 | refinedweb | 655 | 56.55 |
How to make a Phone call in Windows Phone
This code example shows how to launch the Phone app using PhoneCallTaskAPI in Windows Phone.
Note: In contrast to Symbian, Windows Phone does not allow you to make calls "directly" from your own app, or to monitor for and connect to incoming calls. As soon as phone call initiation code below is executed the phone app will be displayed and the user will be asked to confirm that they want to dial the specified number.
Windows Phone 8
Windows Phone 7.5
Introduction
This simple example displays a Phone Call button on the screen. When the button is pressed, the app calls PhoneCallTaskAPI in the Microsoft.Phone.Tasks namespace.
First create a new Windows Phone 7 Silverlight application using the default template in MS Visual Studio. Then add a Button to the XAML as shown below:
<Grid x:
<Button Content="Phone Call" Height="82" HorizontalAlignment="Left" Margin="140,234,0,0"
Name="btnCall" VerticalAlignment="Top" Width="auto" Click="btnCall_Click" />
</Grid>
The user interface with the button is shown below:
Next initialize the PhoneCallTask task object in the constructor of page. To avoid the memory overhead so I initialized in the constructor.
PhoneCallTask phoneTask = null;// Constructor
PhoneNumberChooserTask phoneNumberChooserTask;
public MainPage()
{
InitializeComponent();
phoneNumberChooserTask = new PhoneNumberChooserTask();
phoneNumberChooserTask.Completed += new EventHandler<PhoneNumberResult>(phoneNumberChooserTask_Completed);
phoneTask = new PhoneCallTask();
}
The PhoneCallTask class contains two core properties like DisplayName, PhoneNumber and one show() method, which allows us to show the phone application. To make an actual call though the application.
Double-click the button control and click event handler got added to button control object.On the button click event the device's native phonebook is launched & user can then select a contact from it to whom he wishes to make a phone call . See below mentioned code snippet.
private void btnCall_Click(object sender, RoutedEventArgs e)
{
//this will launch the phonebook to choose a contact to call.
phoneNumberChooserTask.Show();
}
As soon as user chooses the contact from phonebook, the following call back method gets called & launches the phone call dialog, asking the user to dial the particular contact or not.
void phoneNumberChooserTask_Completed(object sender, PhoneNumberResult e)
{
if (e.TaskResult == TaskResult.OK)
{
//Code to start a new phone call using the retrieved phone number.
phoneTask .DisplayName = e.DisplayName;
phoneTask .PhoneNumber = e.PhoneNumber;
phoneTask .Show();
}
}
Following screenshots depict the same :
Download source code File:WP MakeACall.zip.
Summary
Basically PhoneCallTask API allows to launch the Phone application.
References
- PhoneCallTask (MSDN)
Hamishwillee - Pavan, Vineet
Vineet, thanks for adding the note comparing to Symbian. I think this is very important, so much in fact that I've moved it to the top under the Abstract. Note that when you post text, please have a look at how it renders - the text you had originally had lots of odd spaces - ie "this is blah , blah" - note the space before the comma, this should not exist.
Pavan. Nice article. I have subedited it slightly for readability. In addition I've linked to the namespace and the PhoneCallTaskAPI in MSDN reference in the Abstract. Please consider doing this in future. At the moment this hard codes your name, which looks great. It would be excellent if you could extend this to get the contact details to call from the contacts store (or Facebook or whatever).
CheersH
hamishwillee 01:22, 20 February 2012 (EET)
Vineet.jain - Hamish - Sure
Surely Hamish i'll keep this in my mind for future edits.
ThanksVineet
vineet.jain 07:52, 21 February 2012 (EET)
Rocking.ships - ThanksThanks buddy!
rocking.ships 22:38, 24 November 2012 (EET) | http://developer.nokia.com/community/wiki/How_to_make_a_Phone_call_in_Windows_Phone | CC-MAIN-2014-35 | refinedweb | 594 | 54.93 |
Scikit-TDAScikit-TDA
Scikit-TDA is an opinionated collection of libraries for Topological Data Analysis. The user interfaces across all included libraries are standardized and compatible with numpy and scikit-learn.
This is currently a WIP. Documentation and examples will be coming this summer.
Currently, we include
- Kepler Mapper for mapper and visualization.
- UMAP for dimensionality reduction.
- ripser for persistent homology.
- persim for persistence images.
To install all these libraries
pip install scikit-tda
The libraries will then be accessible from
sktda
import sktda.kmapper as km
It is not clear what the best way to do this is. Should all libraries exist at the top level, or should we reorganize the libraries so they make more sense as a group? | https://libraries.io/pypi/scikit-tda | CC-MAIN-2019-22 | refinedweb | 121 | 52.36 |
Sometimes having a software-based ‘virtual keyboard’ for your desktop applications can be a real help. You can provide a way for your users to enter characters that they don’t have on their physical keyboard. Or, provide a more user-friendly way than other applications provide. You can also circumvent a stealthy keylogger by entering passwords this way. And for writing letters or posts to readers in other languages, it can be a huge timesaver.
Thus, the JHVirtualKeyboard project. You can download the binary files here, and get the full source-code here. I built it using Visual Studio 2010: if you really need a VS 2008 solution then email me.
This was a fun project done to do in Windows Presentation Foundation (WPF), and I thought it’d provide a useful article to youz because of the range of design techniques it uses. WPF yielded a few side-bennies like the fact that you can resize this Window and all the keys resize themselves appropriately. That was a freebee, by virtue of the Grid control. Of course, I consider WPF (and SilverLight) generally fun; it’s the premier GUI-design platform out there and I have great respect for it’s creators. That said, this project does represent a significant amount of work. Let me rephrase that: a lot amount of work, to get it right. And then to boil it down to it’s essentials to the smaller application you see before you. I’m sharing this with you, to save you time on your projects and to illustrate techniques. Please be gracious in your feedback and contribute your own improvements so we can evolve this with improvements. If you send me your suggestions, I’ll edit them into the code upon the next revision and cite you as the contributor. Multiple minds are always superior to just one.
We’ll walk through it’s design, have a little fun, and provide a link so you can download the entire demo application and source-code so you can put it to immediate use.
It’s an auxillary WPF window which, if you integrate it into your application, allows you to provide a way for your user to invoke it and click on the key-buttons to insert characters into your application text fields.
I’ve provided it with several language modes as a starter. My immediate need was for a memorization flash-card program to learn various language terms as in Arabic or Russian, so that’s what I created first. It also has French, German, and Spanish, which only actually add a few locale-specific characters. Perhaps you can tackle implementing a Mandarin keyboard and send to me to be included – I’d love to see! lol
You select the keyboard-layout via the ComboBox at the lower-right corner. Changing the language causes the keyboard to automatically update itself.
I found most of the keyboard-layout information on wikipedia and the like. I have to warn you though: that’s a good way to get utterly distracted and winding up delving into the historical underpinnings of ligatures and linguistics. But anyway, I advocate for simplification via standardization. Hence, I try to select the one layout that is most appropriate as a standard for a given language/culture, but not go so far as to bastardize the standard US keyboard form itself. I can see that for some languages, that is not going to be an easy balance to choose. That arrangement should be chosen (by my reasoning) to most resemble what a native speaker would most likely be used to, within the confines of the standard keyboard layout. We are not selecting a Culture/Language for the GUI, after all. That’s a different topic altogether.
Anyway..
As a further aid in typing, along the bottom edge, are thirteen extra typographic symbols you can type. I’ve needed these a lot.
In the demo-app pictured above you can see that there are two text fields. One is a TextBox, the other a RichTextBox. The virtual keyboard (VK) works the same with either one. Whichever text field has the current focus, that’s the one who receives whatever is typed into the VK.
The keyboard has provisions for saving it’s state, in that the next time you invoke it it’ll preset to the last language you used, and it’ll also remember whether you had it up whence you last exited your application, so it can auto-launch itself next time.
As a further convenience, just for kicks n giggles – when you move your application around on the screen, this keyboard Window can move in sync with it.
The key ToolTips are a convenience to the user, to help identify what the key is. I was tempted to insert the entire history of each glyph in there. However, this application/tool is for helping the person of the current Culture, type characters that happen to come from a foreign alphabet (relative to that user). Thus the ToolTips are in English (by default) to identify what those glyphs are. For this application, those ToolTips can be important.
When you click on the CAPS LOCK key, all the letters shift to uppercase. It stays in this state until you click CAPS LOCK again.
When you click on the SHIFT LOCK key, you have access to the shifted-state key symbols that you see on the keycaps, in addition to capital letters. This is the same as a physical keyboard does. In this case, after a ten-second timeout it drops back down to the unshifted state.
The Control and Alt keys do nothing. I just left them there so it looks more like a standard (US English) keyboard. But you can put those to use for something like the dead-letter keys that certain keyboards use languages. Or just remove them.
This is created using Visual Studio 2010, on Windows 7 x64, and targets the .NET Framework 4. I did not test it on earlier versions. I’m a believer in letting the vendors evolve our tools and taking advantage of the latest versions. If you’re one of those still stuck on version 1.0, because you’re so ultra-conservative – all I can say is: go away! I have a separate library that this and all my other projects use, but I pared it down for this project download to just the bare minimum that it uses and included it with your source download.
I set the configuration to target "Any CPU". And that took a bit of tinkering. I’m unsure why. It kept flipping itself back to x64. When VS 2010 on Windows x64 suddenly seems to not be recognizing certain of your projects, check the build configuration. If one is x86 and the other x64, they will definitely clip your swagger! It can take several attempts before the setting sticks.
I’m sure you want to know how to put it to use within your own program. Then we’ll chat about a few interesting aspects of it’s design, and then finally walk through the code itself.
The easiest way to tell how to use the VK in your own design, is to show you a simple demo-app that does just that. Let’s mosey through it in three steps.
Using it within your own Application is easy. The demo app that comes with your download is a pared-down WPF application that has a simple TextBox to receive the characters, and a Button for launching the virtual keyboard. And.. I added a 2nd text control, a RichTextBox, to illustrate using the VK for multiple input fields.
Here’s the XAML for the demo-app:
<DockPanel LastChildFill="True">
<StackPanel Orientation="Horizontal" HorizontalAlignment="Right"
DockPanel.
<Button Name="btnVK" Margin="5,0" Width="100" Click="btnVK_Click">
Virtual Keyboard</Button>
Mainly there’s a TextBox and a RichTextBox. The DockPanel on line 1 is there simply as a convenience for sticking the buttons along the bottom. I’ve been in the habit of throwing the DockPanel in there everytime I create a new Window. Note that when you set a Label over another field, if you want it to get quite close you have to set it’s bottom-padding to zero as well as it’s bottom-margin. I leave the other sides at the default padding value of 5. In some cases I’ve had to use a negative bottom-margin to get it to nudge as near as I wanted. Tried that same method with a girl at the theatre, but that didn’t have positive results.
TextBox
RichTextBox
DockPanel
The two buttons have click-handlers which serve to launch the virtual keyboard, and to close the application.
I use a certain coding formatting standard in all our projects, which I’ve detailed here.
Let’s check out the code-behind for this MainWindow:
1. using System;
2. using System.Windows;
3. using System.Windows.Controls;
4. using JHVirtualKeyboard;
5.
6. namespace JHVirtualKeyboardDemoApp
7. {
8. public partial class MainWindow : Window, IVirtualKeyboardInjectable
9. {
10. public MainWindow()
11. {
12. InitializeComponent();
13. // Remember which text field has the current focus.
14. _txtLastToHaveFocus = txtBox;
15. // Set up our event handlers.
16. this.ContentRendered += OnContentRendered;
17. this.LocationChanged += OnLocationChanged;
18. this.Closing += new System.ComponentModel.CancelEventHandler(OnClosing);
19. }
You add a using pragma for the JHVirtualKeyboard namespace, and implement the IVirtualKeyboardInjectable interface as shown above. That interface is your contract with the virtual keyboard (VK) so they can interoperate.
JHVirtualKeyboard
IVirtualKeyboardInjectable
Notice what we’re doing on line 14: we have an instance variable, _txtLastToHaveFocus, that serves to point to whichever text field has focus. We use this to tell the VK where to put it’s characters.
_txtLastToHaveFocus
Notice also that we have three event handlers defined here. Let’s check these out.
The ContentRendered event handler:
ContentRendered
1. void OnContentRendered(object sender, EventArgs e)
2. {
3. // If the Virtual Keyboard was up (being shown) when this
4. // application was last closed, show it now.
5. if (Properties.Settings.Default.IsVKUp)
6. {
7. ShowTheVirtualKeyboard();
8. }
9. // Put the initial focus on our first text field.
10. txtBox.Focus();
11. }
We have 3 things going on here.
IsVKUp
ShowTheVirtualKeyboard
Here’s the ShowTheVirtualKeyboard method:
1. public void ShowTheVirtualKeyboard()
2. {
3. // (Optional) Enable it to remember which language it was set to last time,
// so that it can preset itself to that this time also.
4. VirtualKeyboard.SaveStateToMySettings(Properties.Settings.Default);
5. // Show the keyboard.
6. VirtualKeyboard.ShowOrAttachTo(this, ref _virtualKeyboard);
7. }
This is where we fire up the VK. You’re doing 2 things here.
SaveStateToMySettings
this
_virtualKeyboard
You need to implement the IVirtualKeyboardInjectable interface. Here’s how the demo app does this:
public System.Windows.Controls.Control ControlToInjectInto
{
get { return _txtLastToHaveFocus; }
}
By implementing ControlToInjectInto, you tell the VK where to send the characters that your user types into it.
ControlToInjectInto
If you have just one text field, you would simply return that (the instance-variable for the field). Here, we have two text fields. So we instead track which one has focus using this instance-variable, and return that in this property getter.
Let’s check out something else cool..
1. void OnLocationChanged(object sender, EventArgs e)
2. {
3. // Do this if, when your user moves the application window around the screen,
4. // you want the Virtual Keyboard to move along with it.
5. if (_virtualKeyboard != null)
6. {
7. _virtualKeyboard.MoveAlongWith();
8. }
9. }
If you handle the LocationChanged event of your Window thus, then the VK will move around along with your Window, like it’s stuck to it. You can still position the VK elsewhere. Normally, I include a checkbox in my app’s Options dialog to give the user the ability to turn this on or off. The MoveAlongWith is a Window extension method from the WPFExtensions.cs file.
LocationChanged
MoveAlongWith
Oh yeah.. the click-handler that launches it. Nothing remarkable here..
1. #region btnVK_Click
2. /// <summary>
3. /// Handle the event that happens when the user clicks on the
/// Show Virtual Keyboard button.
4. /// </summary>
5. private void btnVK_Click(object sender, RoutedEventArgs e)
6. {
7. ShowTheVirtualKeyboard();
8. }
9. #endregion
And here is the Closing event handler:
Closing
1. void OnClosing(object sender, System.ComponentModel.CancelEventArgs e)
2. {
3. bool isVirtualKeyboardUp = _virtualKeyboard != null &&
VirtualKeyboard.IsUp;
4. Properties.Settings.Default.IsVKUp = isVirtualKeyboardUp;
5. Properties.Settings.Default.Save();
6. }
All this does, is save the Up/NotUp state of your VK to your application’s settings.
This demo-app has two text fields so that you can see how to use your VK to serve input into more than one control.
What you need to accomplish this, is pretty simple. We added that instance variable, _txtLastToHaveFocus. This variable has to always be pointing to whichever field is currently holding the focus within your Window. How do we do this?
Simple. We hook up handlers to the GotFocus events of all the text fields, as here..
private void txtBox_GotFocus(object sender, RoutedEventArgs e)
{
// Remember that the plain TextBox was the last to receive focus.
_txtLastToHaveFocus = txtBox;
}
private void txtrichBox_GotFocus(object sender, RoutedEventArgs e)
{
// Remember that the RichTextBox was the last to receive focus.
_txtLastToHaveFocus = txtrichBox;
}
As a result, that variable is always pointing to the correct field.
I mentioned you need to add something to your application’s settings:
Here you can see I put two settings. The bool setting, IsVKUp, can be whatever you want to name it. The one that specifies the language, though, must be named "VKLayout" or else VK won’t see it.
That’s it as far as how to use this within your own app.
Let’s talk about the design a bit.
I’m quite fond of object-oriented design, especially where it saves time and yields a more elegant creation.
Here, we have keyboard layouts. These can be pretty detailed bits of information. Fifty-plus keys that change according to the language/alphabet, tooltips, and double that for the shifted/unshifted states, and then you have to treat the capital letters versus small letters, differently than the distinction between shifted/unshifted symbols. And there’re potentially hundreds of possible keyboard layouts.
On the other hand, some of these have a lot in common. And, considering the central role of the common US-std English-language keyboard, I based the root class of the hierarchy of keyboard layouts upon the English US keyboard to supply the default properties.
Thus, we have a conceptual scheme that begs of an object-oriented class hierarchy. At the top, the root class is the US keyboard layout. Everything else either inherits their stuff from that, or overrides it. Below that, at the second level, are the main variations. Russian (perhaps that should’ve really been called Cyrillic), Spanish, etc. And below that, could be further variations of those.
The economy of data arises when you consider that, e.g. the Spanish keyboard changes only a few things, but wants to leave everything else in place just as with the English US keyboard. So if we can let the Spanish keyboard inherit all the English keyboard features, it only has to override just those keys (and their tooltips, etc.) that it wants to change. The ideal OOP application.
To forge this scheme into our WPF XAML-based application that has fifty data-bound buttons, check it ..
If you look at the KeyAssignmentSet class in Visual Studio (VS), you’ll see it has a big honkn glob of instance variables and properties. One for every key-button of the keyboard. I named them "VK_A", "VK_OEM1", etc. to match the Win32 definitions.
KeyAssignmentSet
The class KeyAssignmentSet represents the US English keyboard layout.
The subclass SpanishKeyAssignmentSet, overrides only those properties it wants to change. In this, just seven does the trick. That’s a lot easier than providing for all 50-plus key definitions. Suweet.
Now, to make this work with our XAML design, merits a bit of intricacy. WPF likes to have stable view-models, to be a view of. The VK itself has a main view-model class: KeyboardViewModel. All the key-buttons have view-model objects that are members of KeyboardViewModel. Thus, to apply the appropriate KeyAssignmentSet to that view-model, we have an array of the individual key view-models, and we iterate across those assigning the information from the KeyAssignmentSet to them.
For this check out the method AssignKeys, in VirtualKeyboard.xaml.cs
In response, the WPF view updates itself automatically to reflect the new layout.
In this code I use the term "CodePoint" to refer to a spot within the Unicode standard that’s represented by a specific number, and depending upon which font you happen to be using – maps to a resulting glyph. Unicode is a wonderful resource; a massive amount of work was invested to wrest order and manageability out of the World’s chaos of languages and alphabets. Here, I use the term "shifted CodePoint" to refer to the character on the upper portion of the key button, such as the Asterisk that is often over top of the digit-8 key.
Here’s the layout with the key-names shown:
A few observations from the XAML of the VK..
1. xmlns:jhlib="clr-namespace:JHLib;assembly=JHLib"
2. xmlns:vk="clr-namespace:JHVirtualKeyboard"
3. mc:Ignorable="d"
4. jhlib:WindowSettings.SaveSize="True"
I bring in the namespace for the external utility library and assign it to the XML namespace prefix "jhlib". Then I do the same for the JHVirtualKeyboard assembly and assign that to "vk". Note how the syntax varies: one is an external assembly, and the other is a local assembly hence doesn’t require the ";assembly=" part. This is one of those things that if you don’t get it right, you’ll spin wheels trying to figure out why your XAML isn’t recognizing your junk in your trunk.
I set the WindowStyle property to ToolWindow, and the FontFamily to Arial Unicode MS since that seems to have pretty decent coverage of the non-Latin codepoints.
Let’s look at one of the key-button definitions:
<Button Name="btnSectionMark"
Command="vk:VirtualKeyboard.KeyPressedCommand"
CommandParameter="§"
ToolTip="Section sign, or signum sectionis"
Margin="6,0,2,1"
Width="25" Height="24">§</Button>
This one injects the section-mark (also called the "signum sectionis" if you want to sound like a jerk). All the key-buttons use this same KeyPressedCommand, with the command parameter carrying the character to inject. Most of the formatting is set in a Style, but this one overrides the Margin, Width and Height to fine-tune the appearance.
Right now, looking at this XAML in LiveWriter, the button Content shows up with the syntax for the hexadecimal 00A7 value, with the ampersand, pound, x prefix and the semicolon suffix. But at the same time, the published version I’m looking at online in FireFox, shows the actual signum sectionis instead. Hmm..
I use a style for the key-buttons so that I don’t have to type in a ton of markup..
<Style TargetType="{x:Type Button}">
<Setter Property="Width" Value="19"/>
<Setter Property="Height" Value="23"/>
<Setter Property="FontSize" Value="18"/>
<Setter Property="Padding" Value="0"/>
<Setter Property="FontFamily" Value="Bitstream Cyberbase, Roman" />
<Setter Property="Margin" Value="1,0,2,1"/>
<Setter Property="Effect">
<Setter.Value>
<DropShadowEffect Direction="315" Opacity="0.7"/>
</Setter.Value>
</Setter>
<Style.Triggers>
<Trigger Property="IsPressed" Value="True">
<Setter Property="Foreground" Value="{StaticResource brushBlue}"/>
<!– Shift the button downward and to the right slightly,
to give the affect of being pushed inward. –>
<Setter Property="Margin" Value="2,1,0,0"/>
<Setter Property="Effect">
<Setter.Value>
<DropShadowEffect Direction="135" Opacity="0.5" ShadowDepth="2"/>
</Setter.Value>
</Setter>
</Trigger>
</Style.Triggers>
</Style>
This Style targets all Buttons, thus it becomes the default for all Buttons within that container.
I had to tinker with and fine-tune the sizes, padding and margins to get it to look it’s best.
Note the FontFamily: "Bitstream Cyberbase, Roman". I purchased that due to it’s codepoint-coverage.
FontFamily
The most interesting thing (well, to me anyway) in this Style is making the key-buttons visually float off the surface a bit, and then seem to push downward (and to the right slightly) when you click on them, to yield that cute 3D affect. As you can see from reading the XAML, the trigger fires when the IsPressed property becomes true. It reacts by setting the margin property to shift it’s position down and right, and changes the DropShadowEffect to get that 3D affect.
Here’re how the main key-buttons are defined..
<Button Name="btnVK_OEM_3" Grid.
<Button Name="btnVK_1" Grid.</Button>
<Button Name="btnVK_2" Grid.</Button>
<Button Name="btnVK_3" Grid.</Button>
<Button Name="btnVK_4" Grid.</Button>
The DataContext of this Window is the KeyboardViewModel. As you can see, the Content of these Buttons is set to the Text property of the VK_1, VK_2, etc. members of KeyboardViewModel. Each of those keybuttons implement INotifyPropertyChanged to keep the GUI apprised of their values when they change. The base class BaseViewModel implements that for us.
The ToolTip property is also data-bound to the view-model. Each of those KeyViewModel objects (ie, VK_1, etc) also has a ToolTip property.
A slight complication for me was that that Text property, and the ToolTip, needed to yield different values depending upon whether the shift key was in effect.
1. public string ToolTip
2. {
3. get
4. {
5. if ((s_domain.IsShiftLock || (_isLetter && s_domain.IsCapsLock)) &&
!String.IsNullOrEmpty(_shiftedToolTip))
6. {
7. return _shiftedToolTip;
8. }
9. else
10. {
11. return _toolTip;
12. }
13. }
14. }
Here, s_domain is a static variable that refers to our KeyboardViewModel. This selects between the unshifted, and the shifted, ToolTip value.
s_domain
KeyboardViewModel
The Text property acts similarly, except that it selects from the key’s _text instance variable if that was explicitly set, otherwise it returns the unshifted or shifted codepoint that was assigned to that key.
Text
_text
This merited a bit of tinkering. Probably there’s a better way to do this, but it’s what I was able to get to work. If you know of a better way, please share.
1. public static void ShowOrAttachTo(IVirtualKeyboardInjectable targetWindow,
ref VirtualKeyboard myPointerToIt)
2. {
3. try
4. {
5. s_desiredTargetWindow = targetWindow;
6. // Evidently, for modal Windows I can't share user-focus with another
// Window unless I first close and then recreate it.
7. // A shame. Seems like a waste of time. But I don't know of a
// work-around to it (yet).
8. if (IsUp)
9. {
10. Console.WriteLine("VirtualKeyboard: re-attaching to a different Window.");
11. VirtualKeyboard.The.Closed += new EventHandler(OnTheKeyboardClosed);
12. VirtualKeyboard.The.Close();
13. myPointerToIt = null;
14. }
15. else
16. {
17. myPointerToIt = ShowIt(targetWindow);
18. }
19. }
20. catch (Exception x)
21. {
22. Console.WriteLine("Exception in VirtualKeyboard.ShowOrAttachTo: " + x.Message);
23. // Below, is what I normally use as my standard for raising
// objections within library routines (using my own std MessageBox substitute).
24. //IInterlocution inter = Application.Current as IInterlocution;
25. //if (inter != null)
26. //{
27. // inter.NotifyUserOfError("Well, now this is embarrassing.",
// "in VirtualKeyboard.ShowOrAttachTo.", x);
28. //}
29. }
30. }
Yeah, it’s not the trivial call to ShowDialog that we’d expect, is it?
ShowDialog
The complication is: if it was already showing, but owned by some other window, and this window (the one you’re actually trying to get to use the VK now) was launched modally, it can’t just "take ownership" of the VK Window. The only thing I could get to work, was to shut the VK down and then re-launch it.
Thus, here we tell the VK to close itself, and hook into the Closed event so that another method gets called after the VK closes. That other method, in turn, re-launches the VK.
A bit of usability testing revealed that your users, in attempting to enter their stuff using the mouse, preferred a shift-key that would reset itself. So, clicking on either of the shift keys pushes the VK into the shifted state, and then clicking on any character pops it back into the un-shifted state. But if the user delays, it resets itself after ten seconds. Here’s what does that..
1. public void PutIntoShiftState()
2. {
3. // Toggle the shift-lock state.
4. _domain.IsShiftLock = !_domain.IsShiftLock;
5. // If we're turning Shiftlock on, give that a 10-second timeout before
// it resets by itself.
6. if (_domain.IsShiftLock)
7. {
8. ClearTimer();
9. _resetTimer = new DispatcherTimer(TimeSpan.FromSeconds(10),
10. DispatcherPriority.ApplicationIdle,
11. new EventHandler(OnResetTimerTick),
12. this.Dispatcher);
13. }
14. SetFocusOnTarget();
15. }
Note that everytime your user clicks on anything within the VK, the VK’s Window gets focus. Which is not what we want. Thus we follow that with a call to SetFocusOnTarget, which tosses focus back to your text field.
SetFocusOnTarget
Here is the method that actually inserts the character into your target text-field..
1. protected void Inject(string sWhat)
2. {
3. if (TargetWindow != null)
4. {
5. ((Window)TargetWindow).Focus();
6. TextBox txtTarget = TargetWindow.ControlToInjectInto as TextBox;
7. if (txtTarget != null)
8. {
9. txtTarget.InsertText(sWhat);
10. }
11. else
12. {
13. RichTextBox richTextBox =
TargetWindow.ControlToInjectInto as RichTextBox;
14. if (richTextBox != null)
15. {
16. richTextBox.InsertText(sWhat);
17. }
18. else // let's hope it's an IInjectableControl
19. {
20. IInjectableControl targetControl =
TargetWindow.ControlToInjectInto as IInjectableControl;
21. if (targetControl != null)
22. {
23. targetControl.InsertText(sWhat);
24. }
25. }
26. //else
// if you have other text-entry controls such as a rich-text box, include them here.
This part can merit a bit of thought. I have provided for two possible controls: a TextBox, and a RichTextBox.
I call your method ControlToInjectInto, to get a reference to what to put the character into. I try to find what it is by casting it to first one type and then another.
For either of these, I defined an extension method InsertText, to do the actual text insertion. Which was surprisingly non-trivial.
InsertText
In an effort to accommodate you custom-text-box creators, I also defined an interface IInjectableControl. If you have a text field that is neither a TextBox nor a RichTextBox, if you can make your control implement this interface, it’ll still work. Otherwise, you’re going to have to modify this code to make it work for you. Well, that’s the great thing about getting a project complete with source-code. You’ll need to code for your control here, and also in the method DoBackSpace – which btw uses the built-in editing command EditingCommands.Backspace to actually do the BACKSPACE. It was actually simpler to just manipulate the text directly. But I wanted to play with the commanding approach. So I add a command-binding to this control at this point, use that to execute the Backspace operation, and leave that command-binding in place until the keyboard closes at which time we clear it.
IInjectableControl
DoBackSpace
EditingCommands.Backspace
Here’s the InsertText extension method for TextBox, which you’ll find in JLib.WPFExtensions.cs
1. public static void InsertText(this System.Windows.Controls.TextBox textbox,
string sTextToInsert)
2. {
3. int iCaretIndex = textbox.CaretIndex;
4. int iOriginalSelectionLength = textbox.SelectionLength;
5. string sOriginalContent = textbox.Text;
6. textbox.SelectedText = sTextToInsert;
7. if (iOriginalSelectionLength > 0)
8. {
9. textbox.SelectionLength = 0;
10. }
11. textbox.CaretIndex = iCaretIndex + 1;
12. }
Yeah, looks a bit verbose.
Here’s the InsertText extension method for RichTextBox:
1. public static void InsertText(this System.Windows.Controls.RichTextBox richTextBox,
string sTextToInsert)
2. {
3. if (!String.IsNullOrEmpty(sTextToInsert))
4. {
5. richTextBox.BeginChange();
6. if (richTextBox.Selection.Text != string.Empty)
7. {
8. richTextBox.Selection.Text = string.Empty;
9. }
10. TextPointer tp = richTextBox.CaretPosition.GetPositionAtOffset(0,
LogicalDirection.Forward);
11. richTextBox.CaretPosition.InsertTextInRun(sTextToInsert);
12. richTextBox.CaretPosition = tp;
13. richTextBox.EndChange();
14. Keyboard.Focus(richTextBox);
15. }
This took a bit of tinkering, just to get to insert a simple character. It’s not as simple as simply appending to the text: if the caret is not at the end, you have to insert at the caret and slide everything to the right (speaking in terms of array-indices of course).
The project-code includes a library that I commonly use for WPF desktop apps, JhLib. Their locations on my system were thusly:
C:\DesktopAppsVS2010\JhVirtualKeyboard
C:\DesktopAppsVS2010\Libs\JhLib
This knowledge should help you to properly manage the project layout on your disk.
So, there you have it – a working screen-based virtual keyboard created using C# and WPF. I hope this is useful for you, and that you’ll give me your thoughts and suggestions. I find WPF fun to work with: it feels so natural now that I dislike using anything else for a desktop GUI application. But there’s always new bits to learn.
sincerely thine,
James Witt Hur. | http://www.codeproject.com/Articles/145579/A-Software-Virtual-Keyboard-for-Your-WPF-Apps?fid=1604935&df=90&mpp=10&sort=Position&spc=None&select=4332175&tid=4476457 | CC-MAIN-2015-40 | refinedweb | 4,814 | 57.98 |
Blinking LED using LPC2148 – ARM Microcontroller Tutorial – Part 3
Contents
Hello World
In this tutorial we will learn how to start programming an ARM microcontroller. This is a hello world project (blinking an LED) intended for beginners to ARM microcontroller programming. Here we are using LPC2148 ARM microcontroller and Keil IDE for programming.
Components Required
- LPC2148 Development Board
- LED
- 220R Resistor
Registers
In this section we will learn about different registers used for configuring or controlling a pin of an ARM microcontroller. In microcontrollers, pins are divided in to different PORTS. Usually a 32-bit microcontroller will have 32 pins per PORT (sometimes it may vary). We have 2 PORTS in LPC2148, P0 and P1. Each pin of these ports are named as P0.0, P0.1, P0.2, P1.0, P1.2 etc.
PINSELx
Usually most of the pins of an ARM microcontroller is multi-functional, every pin can be made to perform one of the assigned function by setting particular bits of PINSEL register.
PINSEL registers for each pins.
Please refer the user manual of LPC2148 microcontroller for more details.
IOxDIR
This register is used to control the direction (input or output) of a pin, once is it configured as a GPIO pin (General Purpose Input Output) by using PINSELx register.
IOxPIN
IOxPIN is GPIO port pin value register. This register is used to read the current state of port pins regardless of input or output configuration. And it is also used to write status (HIGH or LOW) of output pins.
IOxSET
IOxSET is GPIO output set register. This register is commonly used in conjunction with IOxCLR register described below. Writing ones to this register sets (output high) corresponding port pins, while writing zeros has no effect.
IOxCLR
IOxCLR is GPIO output clear register. As mentioned above, this register is used in conjunction with IOxSET. Writing ones to this register clears (output low) corresponding port pins, while writing zeros has no effect.
Keil C Code
#include "lpc214x.h" // Include LPC2148 header file #define LED_PIN 16 // Define LED to PIN 16 #define LED_PORT IO1PIN // Define LED to Port1 #define LED_DIR IO1DIR // Define Port1 direction register void delayMs(unsigned int x); int main() { PINSEL2 = 0x00000000; // Define port lines as GPIO LED_DIR |= (1 << LED_PIN); // Define LED pin as O/P LED_PORT &= ~(1 << LED_PIN); // Turn off the LED initially while(1) // Loop forever { LED_PORT |= (1 << LED_PIN); // Turn ON LED delayMs(1000); // 1 second delay LED_PORT &= ~(1 << LED_PIN); // Turn OFF LED delayMs(1000); // 1 second delay } return 0; } //Blocking delay function void delayMs(unsigned int x) { unsigned int j; for(;x>0;x--) for(j=0; j<0x1FFF; j++); }
Code Explanation
I hope that the above program is self explanatory as it is well commented, even though I am explaining few things below which may be confusing for beginners.
Circuit Diagram
Circuit Description
LED is connected to pin 16 of port 1 via a current limiting resistor. And a 12MHz crystal is connected to the oscillator pins, which will provide the necessary clock for the operation of the microcontroller. We can use the UART lines as shown in the above circuit for flashing the chip using flash magic tool, please refer the article Flashing LPC2148 using Serial ISP Bootloader for more information about it.
If you are using a development board or an LPC2148 stick, you don’t need to worry much about circuit diagram. These will be already connected. Just make sure that the LED is connected to the pin provided in the C program.
Creating Keil Project
Hope you already downloaded and installed Keil IDE. You may refer this article for more details about installing Keil IDE.
- Open Keil IDE.
- Select Project >> New μVision Project.
- You can create a new folder and give a name of your choice for the new project as shown below.
- Click Save.
- Next we need to select the microcontroller LPC2148.
- Click the drop down menu and select “Legacy Device Database” as shown below.
- Click OK.
- Next it should ask you for permission to add Startup.s file to your project as shown below.
- Click Yes.
- Now you can see that the project is created.
- You can click + icon in the left side project folder section to see all the files in the project.
- Currently there is only one file, Startup.s .
- Now we can add our source file to the project.
- Right click on “Source Group 1”.
- Select “Add New Item to Group ‘Source Group 1’ ” as shown below.
- Select C File.
- Give a name, for example “main.c”.
- Click Add.
- Now we can see that the new file is created.
- Now you can enter the code here and save it.
- Now we need to update few project settings.
- Click on the “Options for Target 1” icon as shown below.
- Go to “Output” tab.
- Check the checkbox “Create Hex File” as shown below.
- Now go to “Linker” tab.
- Check the checkbox “Use Memory Layout from Target Dialog” as shown below.
- Click OK.
- Now you can build the project using build button as shown below.
- Once the build is completed, make sure that there are no errors by checking the build output windows at the bottom.
- Now the hex file should be generated in “Objects” folder inside your project folder as shown below.
Flashing Program to the Chip
You can use a JTAG programmer (like ULINK 2, ULINK PRO, JLINK etc. ) or we can use on-chip ISP bootloader. Please read the article Flashing LPC2148 using On-Chip ISP Bootloader for more details.
Download Here
You can download the entire Keil project folder here. | https://electrosome.com/blinking-led-using-lpc2148-arm-microcontroller/ | CC-MAIN-2022-40 | refinedweb | 927 | 73.37 |
Solar Soil Moisture Meter with ESP8266
In this project, we’re making a solar-powered soil moisture monitor. It uses an ESP8266 wifi microcontroller running low power code, and everything’s.
Supplies
You’ll need a solar battery charging board and ESP8266 breakout such as the NodeMCU ESP8266 or Huzzah, as well as a soil sensor, battery, power switch, some wire, and an enclosure to put your circuit inside.
Here are the components and materials used for the soil moisture monitor:
- ESP8266 NodeMCU microcontroller (or similar, Vin must tolerate up to 6V)
- Adafruit solar charging board with optional thermistor and 2.2K ohm resistor
- 2200mAh li-ion battery
- Perma-proto board
- Soil moisture/temperature sensor
- 2 cable glands
- Waterproof enclosure
- Waterproof DC power cable pair
- Heat shrink tubing
- 3.5W solar panel
- Push button power switch
- Double stick foam tape
Here are the tools you’ll need:
- Soldering iron and solder
- Helping hands tool
- Wire strippers
- Flush snips
- Tweezers (optional)
- Heat gun or lighter
- Multimeter (optional but handy for troubleshooting)
- USB A-microB cable
- Scissors
- Step drill
You’ll need free accounts on cloud data sites io.adafruit.com and IFTTT.
To keep up with what I’m working on, follow me on YouTube, Instagram, Twitter, Pinterest, and subscribe to my newsletter. As an Amazon Associate I earn from qualifying purchases you make using my affiliate links.
Breadboard Prototype
It’s important to create a solderless breadboard prototype for projects like this, so you can make sure your sensor and code are working before making any permanent connections.
In this case, the soil sensor has stranded wires to it was necessary to temporarily attach solid headers to the ends of the sensor wires using solder, helping hands, and some heat shrink tubing.
Follow the circuit diagram to wire up the sensor’s power, ground, clock, and data pins (data also gets a 10K pull-up resistor that comes with the soil sensor).
- Sensor green wire to GND
- Sensor red wire to 3.3V
- Sensor yellow wire to NodeMCU pin D5 (GPIO 14)
- Sensor blue wire to NodeMCU pin D6 (GPIO 12)
- 10K pull-up resistor between blue data pin and 3.3V
You can translate this to your preferred microcontroller. If you’re using an Arduino Uno or similar, your board is already supported by the Arduino software. If you’re using the ESP8266, please check out my Internet of Things Class for step-by-step help getting set up with ESP8266 in Arduino (by adding supplemental URLs to the Additional Boards Manager URLs field in Arduino’s preferences, then searching for and selecting new boards from the boards manager). I tend to use the Adafruit ESP8266 Huzzah board type to program the NodeMCU ESP8266 board, but you can also install and use the Generic ESP8266 board support. You’ll also need the SiLabs USB communications chip driver (available for Mac/Windows/Linux).
To get the sensor up and running with my Arduino-compatible board, I downloaded the SHT1x Arduino Library from Practical Arduino’s github page, then unzipped the file and moved the library folder to my Arduino/libraries folder, then renamed it SHT1x. Open up the example sketch ReadSHT1xValues and change the pin numbers to 12 (dataPin) and 14 (clockPin), or copy the modified sketch here:
#include <SHT1x.h> #define dataPin 12 // NodeMCU pin D6 #define clockPin 14 // NodeMCU pin D5 SHT1x sht1x(dataPin, clockPin); // instantiate SHT1x object void setup() { Serial.begin(38400); // Open serial connection to report values to host Serial.println("Starting up"); } void loop() { float temp_c; float temp_f; float humidity; temp_c = sht1x.readTemperatureC(); // Read values from the sensor temp_f = sht1x.readTemperatureF(); humidity = sht1x.readHumidity(); Serial.print("Temperature: "); // Print the values to the serial port Serial.print(temp_c, DEC); Serial.print("C / "); Serial.print(temp_f, DEC); Serial.print("F. Humidity: "); Serial.print(humidity); Serial.println("%"); delay(2000); }
Upload this code to your board and open up the serial monitor to see the sensor data stream in.
If your code won’t compile and complains about SHT1x.h not being found, you haven’t got the required sensor library installed properly. Check your Arduino/libraries folder for one called SHT1x, and if it’s somewhere else, like your downloads folder, move it to your Arduino libraries folder, and rename if it necessary.
If your code compiles but won’t upload to your board, double check your board settings, be sure your board is plugged in, and select the correct port from the Tools menu.
If your code uploads but your serial monitor input is unrecognizable, double check your baud rate matches that specified in your sketch (38400 in this case).
If your serial monitor input doesn’t seem correct, double check your wiring against the circuit diagram. Is your 10K pull-up resistor in place between the data pin and 3.3V? Are data and clock connected to the correct pins? Are power and ground connected as they should be throughout the circuit? Do not proceed until this simple sketch is working!
The next step is specific to the ESP8266 and configures the optional wireless sensor reporting portion of the sample project. If you’re using a standard (non-wireless) Arduino-compatible microcontroller, continue to develop your final Arduino sketch and skip to Prepare Solar Charging Board.
Software Setup
To compile the code for this project with the ESP8266, you’ll need to install a few more Arduino libraries (available through the library manager):
Download the code attached to this step, then unzip the file and open up Solar_Powered_Soil_Moisture_Monitor_Tutorial in your Arduino software.
This code is a mashup of the sensor code from earlier in this tutorial and a basic example from the cloud data service Adafruit IO. The program enters low power mode and sleeps most of the time, but wakes up every 15 minutes to read the temperature and humidity of the soil, and reports its data to Adafruit IO. Navigate to the config.h tab and fill in your Adafruit IO username and key, as well as your local wifi network name and password, then upload the code to your ESP8266 microcontroller.
You’ll have to do a bit of prep on io.adafruit.com. After creating feeds for temperature and humidity, you can create a dashboard for your monitor featuring a graph of the sensor values and both incoming feeds’ data. If you need a refresher on getting started with Adafruit IO, check out this lesson in my Internet of Things Class.
Prepare Solar Charging Board
Prepare the solar charging board by soldering on its capacitor and some wires to the load output pads. I’m customizing mine to charge at a faster rate with an optional add-on resistor (2.2K soldered across PROG) and making it safer to leave unattended by replacing the surface mount resistor with a 10K thermistor attached to the battery itself. This will limit charging to safe a temperature range. I covered these modifications in more detail in my Solar USB Charger project.
Build Microcontroller Circuit
Solder up the microcontroller board and power switch to a perma-proto board.
Connect the solar charger power output to the input of your switch, which should be rated for at least 1 amp.
Create and solder the breadboard wire connections described in the circuit diagram above (or to your personal version’s specifications), including the 10K pull-up resistor on the sensor’s data line.
The solar charger’s Load pins will provide 3.7V battery power when no solar power exists, but will be powered directly from the solar panel if it’s plugged in and sunny. Therefore the microcontroller must be able to tolerate a variety of voltages, as low as 3.7V and up to 6V DC. For those requiring 5V, a PowerBoost (500 or 1000, depending on the current required) can be used to modulate the Load voltage to 5V (as shown in the Solar USB Charger project). Here are some common boards and their input voltage ranges:
- NodeMCU ESP8266 (used here): 5V USB or 3.7V-10V Vin
- Arduino Uno: 5V USB or 7-12V Vin
- Adafruit Huzzah ESP8266 Breakout: 5V USB or 3.4-6V VBat
In order to achieve the longest possible battery life, you should take some time to consider and optimize the total current your current draws. The ESP8266 has a deep sleep feature which we used in the Arduino sketch to reduce its power consumption dramatically. It wakes up to read the sensor and draws more current while it connects to the network to report the sensor’s value, then goes back to sleep for a specified amount of time. If your microcontroller draws a lot of power and can’t easily be made to sleep, consider porting your project to a compatible board that draws less power. Drop a question in the comments below if you need help identifying which board could be right for your project.
Install Cable Glands
To make weatherproof entry points for the solar panel cable and sensor cable, we’ll install two cable glands into the side of the weatherproof enclosure.
Test fit your components to identify the ideal placement, then mark and drill holes in a waterproof enclosure using a step drill. Install the two cable glands.
Complete Circuit Assembly
Insert the port side of a waterproof power cable into one and solder it to the solar charger’s DC input (red to + and black to -).
Insert the soil sensor through the other gland, and connect it up to the perma-proto as per the circuit diagram.
Tape the thermistor probe to the battery. This will limit charging to a safe temperature range while the project is left unattended outside.
Charging while too hot or too cold could damage the battery or start a fire. Exposure to extreme temperatures can cause damage and shorten the battery’s life, so bring it inside if it’s below freezing or above 45℃/113F.
Tighten the cable glands to make a weatherproof seal around their respective cables.
Prepare Solar Panel
Follow my Instructable to splice the cable for your solar panel with the plug side of the waterproof DC power cable set.
Test It
Plug in your battery and turn on the circuit by pressing the power switch.
Test it out and be sure it’s reporting to the internet before closing up the enclosure and installing the sensor in your herb garden, precious potted plant, or other soil within signal range of your wifi network.
Once the data from the sensor is being logged online, it’s easy to set up a recipe for email or text alerts on the API gateway site If This Then That. I configured mine to email me if the soil moisture level drops below 50.
To test it without waiting for my plant to dry out, I manually entered a data point to my humidity feed on Adafruit IO that fell below the threshold. A few moments later, the email arrives! If the soil’s levels fall below my specified level, I’ll get an email every time the feed is updated until I water the soil. For my sanity, I updated my code to sample the soil much less often than every 15 minutes.
Use It Outside!
This is a fun project to customize based on your plant’s hydration needs, and it’s easy to swap out or add sensors or integrate the solar power features into your other Arduino projects.
Thanks for following along!
Originally posted on Instructables
If you like this project, try these others! | https://beckystern.com/2017/11/17/solar-soil-moisture-meter-with-esp8266/ | CC-MAIN-2022-40 | refinedweb | 1,930 | 59.94 |
pyfrank 0.2.3
python binding for iOS automation using frank.
pyfrank - python binding for iOS automation using frank.
pyfrank is an API to interact with frank, the iOS automation framework.
Installation
Option 1:
- Clone this repo or download the sources
- cd pyfrank
- python setup.py build
- sudo python setup.py install
Option 2:
sudo pip install pyfrank
#It’s that simple
from pyfrank import * # We are running a simulator with frank locally on port 32765 device = Device(host='127.0.0.1', port=32765) # Get the view representing a tab-bar button with the label "Mail" view = device.getView(UiQuery("tabBarButton", {'marked': "Mail" })) # Touch the button response = view.touch() if isinstance(response, Success): logging.info("Test mail button succeeded!"); elif isinstance(response, Failure): logging.error("Test mail button failed: %s", response)
#The object model
Device
The first entry point for interacting with frank. It’s constructor receives the host and the port of the frank enabled device.
Example:
from pyfrank import * device = Device("127.0.0.1", 32765) # Type text into the keyboard device.typeIntoKeyboard("abc") # Execute an application on the device device.appExec("appName", "arg1", "arg2", ...) # Get the accesibility state accessibilityEnabled = device.accessibilityCheck() # Get the device orientation orientation = device.getOrientation() if orientation == Orientation.PORTRAIT: print "Portrait" elif orientation == Orientation.LANDSCAPE: print "Landscape" # Get the application layout graph dump = device.dump()
UiQuery
In frank views can be found using a special query language called “UiQuery”.
Example:
from pyfrank import * UiQuery("view:'UIImageView' marked:'ProfilePicture'")
- Additional documentation on UiQuery can be found here:
View
View allows to perform operations on the view(s) that match a UiQuery.
#Get the profile picture view view = device.getView(UiQuery({'view': 'UIImageView'}, {'marked': 'ProfilePicture'})) #Flash the profile picture r = view.flash() #Test for success if isinstance(r, Success): print "Flashed the profile picture!" else: print "Failed flashing profile picture" #Touch the profile picture r = view.touch() #Get the title text input view input = device.getView(UiQuery({'view', 'UITextField'}, {'marked': 'Title'})) r = input.setText("New title text") if isinstance(r, Success): print "Title input was changed successfully." else: print "Failed changing title input"
Retrieve a view property
view = device.getView(UiQuery({'view':'UILabel'}, { 'marked':'Pull down to refresh...' })) # Pull out the 'text' attribute. Every attribute exposed by frank can be called as a method on the view to retrieve it's value. r = view.text() if isinstance(r, Success): labelText = r['results'][0] print "The text of the UILabel is", labelText else: print "I seriously failed to retrieve the UILabel text attribute", r
#More information on frank
- Downloads (All Versions):
- 8 downloads in the last day
- 73 downloads in the last week
- 439 downloads in the last month
- Author: Daniel Ben-Zvi
- Keywords: python frank pyfrank frankly ios qa automation robot testing
- License: BSD
- Categories
- Package Index Owner: danielbenzvi
- DOAP record: pyfrank-0.2.3.xml | https://pypi.python.org/pypi/pyfrank/0.2.3 | CC-MAIN-2015-18 | refinedweb | 471 | 51.65 |
You have now seen and deployed all of the code for both the sender and receiver parts of the JAXM message echo example. If, however, you were to start your web browser and point it at, which is the URL that causes the sender servlet to transmit a message, you would find that after about 30 seconds, the sender would give up waiting for a reply from the receiver and an error page would be displayed by the browser. Although all of the code is in place, the proper JAXM configuration has not been set up to allow the providers to exchange messages. In this section, we look at how to configure the JAXM reference implementation.
A message traveling from the sending servlet to the receiver has to make three hops:
From the sender to the local provider
From the local provider to the remote provider
From the remote provider to the receiving servlet
We saw earlier that a JAXM client logically connects to its local
provider using the
ProviderConnectionFactory
createConnection( ) method, but we
didn’t see how the provider itself is located. This
information is held in a file called
client.xml,
which must be located in the
CLASSPATH of the JAXM
client. Since both the sender and the receiver servlets in this
example are deployed as web applications, their
client.xml files should be placed in the
WEB-INF/classes directory of their WAR files, as
shown in the following listing of the files that make up the web
archive for the
SOAPRPSender servlet:
META-INF/MANIFEST.MF WEB-INF/classes/ora/jwsnut/chapter4/soaprpsender/SOAPRPSenderServlet.class WEB-INF/classes/client.xml WEB-INF/web.xml
A full description of the
client.xml file will
be found in Chapter 8. The content of the
client.xml file used by the
SOAPRPSender servlet is shown in Example 4-6, in which the line numbers on the left have
been added for ease of reference only.
Example 4-6. The client.xml file for the sending servlet
1 <?xml version="1.0" encoding="ISO-8859-1"?> 2 3 <!DOCTYPE ClientConfig 4 PUBLIC "-//Sun Microsystems, Inc.//DTD JAXM Client//EN" 5 ""> 6 <ClientConfig> 7 <Endpoint> 8
urn:SOAPRPSender9 </Endpoint> 10 <CallbackURL> 11 </CallbackURL> 13 14 <Provider> 15
<URI></URI>16
<URL></URL>17 </Provider> 18 </ClientConfig>
The lines shown in bold relate to the configuration of the sending
servlet; the other lines are fixed content that are the same in all
client.xml files. The
Provider element at the end of the file is used
when the client connects to the messaging provider. The two child
elements are used as follows:
- URI
The URI value identifies the provider in use. For the JAXM reference implementation, you must use the value. The JAXM code that implements the
ProviderConnectioninterface and the provider itself communicate by adding private header entries to the messages sent by JAXM clients. This URI is used as the namespace for the XML elements in these header entries; it is also used to set their actor attribute. When the provider receives a message from a JAXM client, it removes and actions all headers for which the actor attribute has this fixed value.
- URL
The URL is where messages from the JAXM client to the provider are actually sent. For the reference implementation in the JWSDP, the provider is a Tomcat service called
jaxm-provider, accessible at port 8081. The provider is not required to be on the same host as the JAXM client. If the provider is not co-located with the client, then the name of the provider’s host should be used instead of
localhost.
Figure 4-5 shows how the URL field of the
Provider element is used to locate the JAXM
client’s local provider.
When a provider receives a message for delivery to a client, it needs
to be able to match the destination address of the message to the
client that provides service at that address. As we saw earlier, the
destination address that is placed in a SOAP-RP header is a URI that
identifies the target of the message — it need not be a URL.
Therefore, the provider maintains a list of mappings from the
Endpoint URI to the URL of the client that should
receive messages destined for that URI. In the
client.xml file, the
Endpoint
element declares the URI that corresponds to the client, and the
CallbackURL element specifies the URL to which
messages for that URI should be delivered. In terms of the example
that we are using in this chapter, the sending servlet advertises its
URI as urn:SOAPRPSender. Since
the sending servlet expects to receive messages on the URL,
this is the URL to which the sending servlet’s URI
should be mapped.[43] Hence,
the appropriate
Endpoint
and
CallbackURL
entries in the
client.xml file for the sending
servlet would be:
<Endpoint> urn:SOAPRPSender </Endpoint> <CallbackURL> </CallbackURL>
In the case of the receiving servlet (which as a JAXM client also
requires its own
client.xml file), these entries
would look like this:
<Endpoint> urn:SOAPRPEcho </Endpoint> <CallbackURL> </CallbackURL>
The receiving servlet also needs a
Provider
element containing the URL of its local provider that, if you deploy
both the sending and receiving servlets on the same host, is the same
provider used by the sending servlet and therefore requires the same
URL.
The
client.xml file solves the problem of how to
route messages between clients and a provider, but there remains the
issue of how the providers route messages among themselves. In the
case of the example used in this chapter, the provider needs to
deliver messages addressed to the URIs urn:SOAPRPEcho and urn:SOAPRPSender, by passing them to
whichever provider the clients owning these endpoints are
connected.[44] In order to make this possible, providers
are configured with URI-to-URL mappings that are similar to those
created by the
Endpoint and
CallbackURL elements used in the
client.xml file. Each provider must be
configured with a mapping for each remote URI to which messages from
its local clients might be addressed, specifying the URL of the
provider to which messages carrying that URI as a destination address
must be delivered (and not the URL of the receiving client).
In the reference implementation, these mappings are stored in a file
called
provider.xml
, which resides in the
/WEB_INF directory of the
jaxm-provider service, details of which
you’ll find in Chapter 8.
Fortunately, you don’t need to deal with this file
directly — instead, you can view and change the mappings using
the JAXM provider administration service, which can be accessed using
a web browser.
The provider administration tool is a web
application that provides a user interface that lets you configure
the JAXM provider without having to manually edit its
provider.xml file. Once you understand the
content of this file (details of which are provided in Chapter 8), you’ll find it very easy to
use the administration tool, so we’re not going to
describe it in great detail. Here, we need to use it to add endpoint
mappings for the URIs urn:SOAPRPEcho and urn:SOAPRPSender. Assuming that this service
is running on the same host as your browser, the URL that you need to
use to access it is.
When you attempt to connect to this service, you are
prompted to supply a username and password. When you installed the
JWSDP, you were prompted to supply a username and a password, and you
should use the same username and password to access the configuration
service. If you can’t remember them, you can find
them in the
tomcat-users.xml file, which is held
in the
conf directory of the web server. Here is
what this file typically looks like, with the important lines
highlighted in bold:
<?xml version='1.0'?> <tomcat-users> <role rolename="admin"/> <role rolename="manager"/> <role rolename="provider"/> <user username="JWSUserName" password="JWSPassword" roles="admin,manager,provider"/> </tomcat-users>
In this case, supply the username
JWSUserName and
the password
JWSPassword. These values can also be
found in the
jwsnutExamples.properties file in
your home directory, assuming you created it as described in Chapter 1.
Once you reach the configuration service’s home
page, expand the tree view that you’ll see on the
left, and select the entry for
http below the
SOAPRP profile. You should see a screen like that
shown in Figure 4-6.
This screen contains, among other things, the endpoint mappings for messages being sent by the provider for the SOAP-RP profile using HTTP as the underlying communications protocol. The URL associated with the URI urn:SOAPRPEcho needs to be the one required to access the provider to which the receiving servlet is attached, whereas the URL for the URI urn:SOAPRPSender should be that of the provider for the sending servlet. A provider has three available URLs; for the case of the JWSDP reference implementation running in the Tomcat web server, these URLs are listed in Table 4-2, where it is assumed that the provider and the clients are all running on the same machine.
If the target provider is on a different machine, substitute the
hostname of that machine for
localhost in these
URLs.
Since the messages in the example used in this chapter use the SOAP-RP profile, both of the JAXM client URIs should be mapped to the URL for the SOAP-RP receiving URL of the target provider, which will be. To add these mappings, select “Create New Endpoint Mapping” from the combo box at the top right of the screen. You are presented with a form that allows you to enter a URI along with its corresponding URL, as shown in Figure 4-7, where the mapping for urn:SOAPRPEcho has been entered.
The mappings that you need to enter are shown in Table 4-3.
Once both mappings are set up, they should appear on the main screen as shown in Figure 4-8. Select “Save to Profile” to save these mappings.
At this point, the provider is properly configured to forward messages to either URI. Of course, if the clients are on separate machines and use different providers, it is then necessary to configure each provider separately:
The provider local to the sending servlet is configured with a mapping for the URI urn:SOAPRPEcho — that is, the URI to which it sends. The URL for this mapping refers to the other provider.
The provider local to the receiving servlet similarly requires a mapping for the URI urn:SOAPRPSender.
You can now finally run the example that we have been using throughout this chapter. To do so, simply enter the URL into your browser. After a short delay, you should see the SOAP message that was sent by the sending servlet and returned by the receiver, an example of which is shown in Example 4-7. This message has been reformatted for the sake of readability.
Example 4-7. A SOAP-RP message sent via a messaging provider
<?xml version="1.0" encoding="UTF-8"?> <soap-env:Envelope xmlns: <soap-env:Header> <m:path xmlns: <m:from>urn:SOAPRPEcho</m:from> <m:to>urn:SOAPRPSender</m:to> <m:id>9a85b633-2c8f-4d2e-84a5-ff6b21c05f61</m:id> <m:relatesTo>3166c06a-e38c-466e-b43e-d55a37f3d3fc</m:relatesTo> <m:fwd/> <m:rev/> </m:path> </soap-env:Header> <soap-env:Body> <tns:Sent xmlns:This is the content</tns:Sent> </soap-env:Body> <tns:Date xmlns:Thu Aug 08 15:58:53 BST 2002</tns:Date> </soap-env:Envelope>
The elements in the message header are defined by the SOAP-RP
protocol, further information on which can be found later in this
chapter. Note the
to and
from
elements, which contain the URIs for the sending and receiving
servlets, and the
Date element, which follows the
SOAP body and contains the date and time at which the message was
processed by the receiver.
As a summary of how messaging providers use the JAXM configuration information, the following is a step-by-step account of the way in which a SOAP-RP message is sent from a JAXM client to its destination. The return path would obviously be identical, but with the addresses reversed.
The receiving servlet initializes. As it does so, it uses the
ProviderConnectionFactoryand the
ProviderConnectioninterface to establish a connection to its local provider, as well as calls the
ProviderConnection
getMetaData( )method. In order to contact the provider to obtain the metadata, the JAXM code in the client accesses the receiving servlet’s
client.xmlfile to locate the provider’s URL from the
Providerelement — in this case,. It also passes to the provider the information in the
Endpointand
CallbackURLelements so that the provider knows that messages intended for the URI urn:SOAPRPEcho should be delivered to the URL.
The sending servlet uses the
ProviderConnectionFactoryand the
ProviderConnectioninterface to establish a connection to its local provider. It also obtains a
MessageFactoryfor the
soaprpprofile and constructs a message, setting the
fromaddress to urn:SOAPRPSender and the
toaddress to urn:SOAPRPEcho.
The client uses the
ProviderConnection
send( )method to transmit the message. The JAXM code in the client accesses the sending servlet’s
client.xmlfile and uses the URI and URL in the
Providerelement to find the URL of the provider — in this case,. Also, if it has not already done so, it passes to the provider the information in the
Endpointand
CallbackURLelements so that it can map the sending servlet’s URI (urn:SOAPRPSender) to its message callback URL (). The message is then delivered to the provider at the URL.
When the provider receives the message, it stores it in its outgoing message queue. There is a separate set of message queues for each profile that the provider supports, which the reference implementation keeps in a directory hierarchy in temporary storage provided by its host container. If you are running the JWSDP in the Tomcat web server, you’ll find the messages that the provider sends and receives held below the directory
work\Services Engine\jwsdp-services\jaxm-provider, relative to the JWSDP installation directory.
When the message is to be transmitted from the outgoing message queue, the provider extracts its destination address. In order to do this, the provider needs to understand where it will find this address, which is profile-dependent. The provider can do this because the class
com.sun.xml.messaging.soaprp.SOAPRPMessageImplthat represents a SOAP-RP message is derived from
com.sun.xml.messaging.jaxm.util.ProfileMessage(which has abstract methods that extract to
toand
fromaddresses from the message).
SOAPRPMessageImplimplements these methods so that they extract the correct parts of the SOAP-RP header. The message class for the ebXML profile similarly implements them to extract the
Partyobject from the message (see Section 4.6 later in this chapter, for further information on this). The fact that the provider has to be able to get the destination address from within a SOAP message explains why nonprofiled messages that do not contain a destination address (i.e., SAAJ messages created using the default
MessageFactory) cannot be sent using a provider.
The provider uses the destination address to check its URI-to-URL mapping, set up using the JAXM provider administration tool, to find the URL of the provider to which the message should be sent. In this case, the destination address is urn:SOAPRPEcho, which maps to the URL. This happens to be a URL belonging to the same provider, of course, but this does not matter. The local provider delivers the message to its peer using an HTTP POST request to this URL. If delivery fails, the provider retries on the assumption that the remote provider is not yet started or there is a problem with network connectivity.
When the peer provider receives the incoming SOAP-RP message, it stores it in its received message queue. Subsequently, an attempt is made to deliver this message to the correct JAXM client. Delivery is performed by extracting the destination address from the message, in the same way as the sending provider did when transmitting the message, and using it to access the
Endpointmapping table built from the
client.xmlfiles of the clients connected to the provider (see Figure 4-5). Here, the destination address urn:SOAPRPEcho has been registered by the receiving servlet and mapped to its delivery URL (see Step 1). The provider delivers the message using an HTTP POST request to that URL. If delivery fails, or if there is no entry for the destination URI in the provider’s mapping table, the provider will retry delivery later, on the assumption that the client has not yet been started but will register later.
The final point to mention in our
discussion of JAXM configuration discusses the reason for including
the
load-on-startup element in the
web.xml file of the receiving servlet in our
example so that it is loaded when the web container initializes. As
we said earlier in Section 4.4.1, a provider uses the
Endpoint elements from the
client.xml files of the JAXM clients that are
connected to it to determine where to route the messages it receives.
A provider cannot directly read these files — instead, they are
read and a representation of their content is passed (in a private
SOAP message header) when a client connects to the
provider.[45] The sending servlet, which is not marked
to be loaded at startup, initializes and connects to the provider
when the request from the browser sent to the URL is
received by the web container; therefore, it is registered with the
provider before a message addressed to it needs to be dispatched.
However, since the receiving servlet’s URL is only
referenced when the provider tries to deliver a message to it based
on an entry in the provider’s client URI-to-URL
mapping table, if the receiving servlet were not marked to be loaded
at startup, it would not have initialized and connected to the
provider, and therefore its URL would not be registered in this
mapping table.
[43] This URL comes from the sending
servlet’s
web.xml file, which
was shown in Example 4-2.
[44] In general, when there are two JAXM clients on separate machines, there are two providers involved. However, if both the sender and receiver are deployed on the same machine, the likelihood is that they will use the same provider (although you could arrange to run two providers on the same machine). Even though this is the case, the configuration still has to be created in the same way as if there were two providers. The description here is consistent with that.
[45] Exactly when this happens is, of course,
implementation-dependent. At the time of this writing, the reference
implementation does this the first time the client requests
ProviderMetaData, or when the
ProviderConnection
send()
method is called for the first time. The fact that this is left so
late also explains why a client that simply listens passively for
messages, such as the receiving servlet in the example in this
chapter, must call
getMetaData( ) as shown in
Example 4-4, even though it doesn’t
make use of the
ProviderMetaData. The purpose of
the call is simply to register the receiver’s
Endpoint with the provider so that it can receive
messages.
Get Java Web Services in a Nutshell now with O’Reilly online learning.
O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers. | https://www.oreilly.com/library/view/java-web-services/0596003994/ch04s04.html | CC-MAIN-2020-50 | refinedweb | 3,267 | 57.61 |
About teParameterScan
teParameterScan is a package for Tellurium that provides a different way to run simulations and plot graphs than given in the standard Tellurium library. While the standard syntax in Tellurium asks you to put parameters, such as the amount of time for the simulation to run, as arguments in the simulation function, teParameterScan allows you to set these values before calling the function. This is especially useful for more complicated 3D plots that often take many arguments to customize. Therefore, teParameterScan also provides several plotting options that would be cumbersome to handle using the traditional approach. This tutorial will go over how to use all of teParameterScan’s functionality. Additionally, there are examples which you can run and edit in Tellurium. They can be found in examples/tellurium-files/parameterScan.
To use teParameterScan you will need to import it. The easiest way to do this is the command ‘from tellurium import ParameterScan’ after the standard ‘import tellurium as te’.
import tellurium as te from tellurium import ParameterScan as ps cell = ''' $Xo -> S1; vo; S1 -> S2; k1*S1 - k2*S2; S2 -> $X1; k3*S2; vo = 1 k1 = 2; k2 = 0; k3 = 3; ''' rr = te.loadAntimonyModel(cell) p = ps.ParameterScan(rr) p.startTime = 0 p.endTime = 20 p.numberOfPoints = 50 p.width = 2 p.xlabel = 'Time' p.ylabel = 'Concentration' p.title = 'Cell' p.plotArray()
After you load the model, you can use it to create an object of ParameterScan. This allows you to set many different parameters and easily change them.
Viewing 3D Models
Running scripts that produce 3D models in the default IPython console results in a 2D representation of the graph being displayed in the console. Switching to the regular console (click on ‘Console’ tab at the bottom of the console, then click on ‘Python 1’ at the top) will instead bring up a separate window where you can click and drag on the graph in order to rotate it. Sometimes this window will be buried underneath Tellurium; just click on the graph icon that appears on the taskbar to view the graph.
Methods
plotArray()
The plotArray() method works much the same as te.plotArray(), but with some increased functionality. It automatically runs a simulation based on the given parameters, and plots a graph of the results. Accepted parameters are startTime, endTime, numberOfPoints, width, color, xlabel, ylabel, title, integrator, and selection.
plotGraduatedArray()
This method runs several simulations, each one with a slightly higher starting concentration of a chosen species than the last. It then plots the results. Accepted parameters are startTime, endTime, value, startValue, endValue, numberOfPoints, width, color, xlabel, ylabel, title, integrator, and polyNumber.
plotPolyArray()
This method runs the same simulation as plotGraduatedArray(), but plots each graph as a polygon in 3D space. Accepted parameters are startTime, endTime, value, startValue, endValue, numberOfPoints, color, alpha, xlabel, ylabel, title, integrator, and polyNumber.
plotMultiArray(param1, param1Range, param2, param2Range)
This method plots a grid of arrays with different starting conditions. It is the only method in teParameterScan that takes arguments when it is called. The two param arguments specify the species or constant that should be set, and the range arguments give the different values that should be simulated with. The resulting grid is an array for each possible combination of the two ranges. For instance, plotMultiArray(‘k1’, [1, 2], ‘k2’, [1, 2, 3]) would result in a grid of six arrays, one with k1 = 1 and k2 = 1, the next with k1 = 1 and k2 = 2, and so on. The advantage of this method is that you can plot multiple species in each array. Accepted parameters are startTime, endTime, numberOfPoints, width, title, and integrator.
plotSurface()
This method produces a color-coded 3D surface based on the concentration of one species and the variation of two factors (usually time and an equilibrium constant). Accepted parameters are startTime, endTime, numberOfPoints, startValue, endValue, independent, dependent, color, xlabel, ylabel, title, integrator, colormap, colorbar, and antialias.
createColormap(color1, color2)
This method allows you to create a custom colormap for plotSurface(). It returns a colormap that stretches between color1 and color2. Colors can be input as RGB tuplet lists (i.e. [0.5, 0, 1]), or as strings with either a standard color name or a hex value. The first color becomes bottom of the colormap (i.e. lowest values in plotArray()) and the second becomes the top.
createColorPoints()
This method creates a color list (i.e. sets ‘color’) spanning the range of a colormap. The colormap can either be predefined or user-defined with createColorMap(). This set of colors will then be applied to arrays, including plotArray(), plotPolyArray(), and plotGraduatedArray(). Note: This method gets the number of colors to generate from the polyNumber parameter. If using it with plotArray() or plotGraduatedArray(), setting this parameter to the number of graphs you are expecting will obtain better results.
teParameterScan in Action
Let’s say that we want to examine how the value of a rate constant (parameter) affects how the concentration of a species changes over time. There are several ways this can be done, but the simplest is to use plotGraduatedArray. Here is an example script:
import tellurium as te from tellurium import ParameterScan as ps r = te.loada(''' $Xo -> S1; vo; S1 -> S2; k1*S1 - k2*S2; S2 -> $X1; k3*S2; vo = 1 k1 = 2; k2 = 0; k3 = 3; ''') p = ps.ParameterScan(r) p.endTime = 6 p.numberOfPoints = 100 p.polyNumber = 5 p.startValue = 1 p.endValue = 5 p.value = 'k1' p.selection = ['S1'] p.plotGraduatedArray()
This will produce the following graph:
Another way is to use createColormap() and plotSurface() to create a 3D graph of the same model as above. After loading the model, we can use this code:
p.endTime = 6 p.colormap = p.createColormap([.12,.56,1], [.86,.08,.23]) p.dependent = ['S1'] p.independent = ['time', 'k1'] p.startValue = 1 p.endValue = 5 p.numberOfPoints = 100 p.plotSurface()
Which will produce this figure:
Parameters
alpha: Sets opaqueness of polygons in plotPolyArray(). Should be a number from 0-1. Set to 0.7 by default.
antialias: A bool that controls antialiasing for plotSurface(). True turns it on, False turns it off. Set to True by default.
color: Sets color for use in all plotting functions except plotSurface() and plotMultiArray(). Should be a list of at least one string. All legal HTML color names are accepted. Additionally, for plotArray() and plotGraduatedArray(), this parameter can determine the appearance of the line as according to PyPlot definitions. For example, setting color to [‘ro’] would produce a graph of red circles. For examples on types of lines in PyPlot, go to. If there are more graphs than provided color selections, subsequent graphs will start back from the beginning of the list.
colorbar: True shows a color legend for plotSurface(), False does not. Set to True by default.
colormap: The name of the colormap you want to use for plotSurface(). Legal names can be found at and should be strings. Alternatively, you can create a custom colormap using the createColorMap method.
dependent: The dependent variable for plotSurface(). Should be a string of a valid species.
endTime: When the simulation ends. Default is 20.
endValue: For plotGraduatedArray(), assigns the final value of the independent variable other than time. For plotPolyArray() and plotSurface() assigns the final value of the parameter being varied. Should be a string of a valid parameter.
independent: The independent variable for plotSurface(). Should be a list of two strings: one for time, and one for a parameter.
integrator: The integrator used to calculate results for all plotting methods. Set to ‘cvode’ by default, but another option is ‘gillespie.’
legend: A bool that determines whether a legend is displayed for plotArray(), plotGraduatedArray(), and plotMultiArray(). Default is True.
numberOfPoints: Number of points in simulation results. Default is 50. Should be an integer.
polyNumber: The number of graphs for plotGraduatedArray() and plotPolyArray(). Default is 10.
rr: A pointer to a loaded RoadRunner model. ParameterScan() takes it as its only argument.
selection: The species to be shown in the graph in plotArray() and plotMultiArray(). Should be a list of at least one string.
sameColor: Set this to True to force plotGraduatedArray() to be all in one color. Default color is blue, but another color can be chosen via the “color” parameter. Set to False by default.
startTime: When the simulation begins. Default is 0.
startValue: For plotGraduatedArray(), assigns the beginning value of the independent variable other than time. For plotPolyArray() and plotSurface() assigns the beginning value of the parameter being varied. Default is whatever the value is in the loaded model, or if not specified there, 0.
title: Default is no title. If set to a string, it will display above any of the plotting methods.
value: The item to be varied between graphs in plotGraduatedArray() and plotPolyArray(). Should be a string of a valid species or parameter.
width: Sets line width in plotArray(), plotGraduatedArray(), and plotMultiArray(). Won’t have any effect on special line types (see color). Default is 2.5.
xlabel: Sets a title for the x-axis. Should be a string. Not setting it results in an appropriate default; to create a graph with no title for the x-axis, set it to None.
ylabel: Sets a title for the y-axis. Should be a string. Not setting it results in an appropriate default; to create a graph with no title for the x-axis, set it to None.
zlabel: Sets a title for the z-axis. Should be a string. Not setting it results in an appropriate default; to create a graph with no title for the x-axis, set it to None.
About SteadyStateScan
This class is part of teParameterScan but provides some added functionality. It allows the user to plot graphs of the steady state values of one or more species as dependent on the changing value of an equilibrium constant on the x-axis. To use it, use the same import statement as before: ‘from tellurium import ParameterScan as ps’. Then, you can use SteadyStateScan on a loaded model by using ‘ss = ps.SteadyStateScan(rr)’. Right now, the only working method is plotArray(), which needs the parameters of value, startValue, endValue, numberOfPoints, and selection. The parameter ‘value’ refers to the equilibrium constant, and should be the string of the chosen constant. The start and end value parameters are numbers that determine the domain of the x-axis. The ‘numberOfPoints’ parameter refers to the number of data points (i.e. a larger value gives a smoother graph) and ‘selection’ is a list of strings of one or more species that you would like in the graph. | http://tellurium.analogmachine.org/documentation/tellurium-tutorial/parameter-scanning/ | CC-MAIN-2017-22 | refinedweb | 1,761 | 59.7 |
In this document
- Get the source code from the GitHub repository.
Introduction
In this article, I will show you how to run ABP module zero core template on Docker, step by step. And then, we will discuss alternative scenarios like web farm using Redis and Haproxy.
As you now, Docker is the most popular software container platform. I won’t go into details of how to install/configure Docker on windows or what are the advantages of using Docker. You can find related documents here. And there is a Getting Started document as well.
What is ABP Module Zero Core Template?
Module zero core template is a starter project template that is developed with using ASP.NET Boilerplate Framework. This a .net core project as a Single Page Application with using Angular4. And also there is a multi-page MVC application as well. But in this article, I will explain angular4 version.
In module zero core template project there are two separate projects, one is Angular4 project as web UI and the host project that is used by angular UI. Let's examine it to better understand the project before running it on Docker.
Getting Started
Creating Template From Site
First, I will download module zero core template from site.
Framework: .NET Core2.0 + Single Page Web Application Angular + Include module zero
Project name: Acme.ProjectName
Before preparing project to run on Docker, let's run the project, first. I am opening the .sln file in folder ProjectName\aspnet-core.
Creating Database with Using EF Migrations
Before running project, we should create database with using EF migrations on Package Manager Console. First, I am setting Acme.ProjectName.Web.Host as start up project. (right-click Host project and select Set as Startup Project). And then, open Package Manager Console, select default project to EntityFrameworkCore, run the command below
update-database
After running this command, database will be created with name ProjectNameDb.
Running Host Project
And now, Host project is ready to run. On visual studio Ctrl+F5 . It opens swagger method index page.
All these services are served in application layer of project and used by Angular UI.
Running Angular Project
While host is already running and we can run Angular project that uses APIs. To run Angular project, make sure you have node and npm installed on your machine.
First, run cmd on location ProjectName\angular and run the command "npm install" or just "yarn" to fetch client side packages.
Run npm start command in the same directory to start angular project.
Finally you have to see the line "webpack: Compiled successfully" in the output screen.
We started Angular project successfully. Open your browser and navigate to
Use the credentials below to login
After you login, you will see the screen below.
Check HERE for more details.
To summarize what we did for running Angular project:
- Run cmd on location ProjectName\angular.
- Run yarn or npm install command(I used yarn for above example).
- Run npm start command.
- Browse localhost:4200 to see angular project is running.
Everything is running properly. Ready to run on docker...
Running Project on Docker
If you have not installed Angular CLI yet, you have to install it. Run the command below to install Angular CLI.
npm install -g @angular/cli
After you ensure Angular CLI installed, let's see files and folders to configure Docker environment. There is a folder that named docker under ProjectName/aspnet-core.
In docker/ng folder there is a docker-compose.yml file and two powershell script to run docker compose(up.ps1) and stop(down.ps1) it. And there is one more folder and a powershell script file.
This script file to build and publish host and agular project. And also, this script copies the files that is into docker folder to build folder. First, I will run the build-with-ng.ps1 script on location ProjectName/aspnet-core/build.
After running script, when you look at the build folder, you will see the outputs folder.
Before running up.ps1 command,
- You have to share the drives. To share it, right click Docker system tray, go to settings, navigate to shared folders and click all the drives.
- Database is hosted on the local machine not on the docker. Website hosted on docker will be connecting to your local database. And with a trusted connection string the connection will be unsuccessful. So set your sql database username & password. To achieve this modify the file "...\aspnet-core\src\Acme.ProjectName.Web.Host\appsettings.Staging.json". Update Default ConnectionStrings > "Server=10.0.75.1; Database=ProjectNameDb; User=sa; Password=<write your password>;"
Run up.ps1 script to run these two project on docker under location ProjectName/aspnet-core/build/outputs.
Angular and host projects are now running. Browse for Host Project and for Angular UI.
Module Zero Core Template Web Farm on Docker with Using Redis and Haproxy
In a web farm there are more than one web servers, there is a load balancer at the front of these servers and a server to store sharing sessions/caches.
In our example, angular application will be client, haproxy will be load balancer, host app will be web servers and redis will be shared server.
Create a configuration file for haproxy named haproxy.cfg to location ProjectName\aspnet-core\docker\ng
haproxy.cfg
(Copy paste code lines makes encoding problem just download the file => Download haproxy.cfg)
global maxconn 4096 defaults mode http timeout connect 5s timeout client 50s timeout server 50s listen http-in bind *:8080 server web-1 outputs_abp_host_1:80 server web-2 outputs_abp_host_2:80 stats enable stats uri /haproxy stats refresh 1s
Important lines haproxy.cfg are server web-1 and server web-2. I repreduced host applications. This will create two host application on docker container.
Modified docker-compose.yml
(Copy paste code lines makes encoding problem just download the file => Download docker-compose.yml)
version: '2' services: abp_redis: image: redis ports: - "6379:6379" abp_host: image: abp/host environment: - ASPNETCORE_ENVIRONMENT=Staging volumes: - "./Host-Logs:/app/App_Data/Logs" abp_ng: image: abp/ng ports: - "9902:80" load_balancer: image: haproxy:1.7.1 volumes: - "./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg" ports: - "9904:8080"
build-with-ng.ps1
Replace the below line
(Get-Content $ngConfigPath) -replace "21021", "9901" | Set-Content $ngConfigPath
with this line
(Get-Content $ngConfigPath) -replace "21021", "9904" | Set-Content $ngConfigPath
Now Angular UI will connect to haproxy. Haproxy distribute the requests to web servers.
Overwrite up.ps1 with the content below
docker rm $(docker ps -aq) docker-compose up -d abp_redis sleep 3 docker-compose up -d abp_host docker-compose up -d abp_ng sleep 2 docker-compose scale abp_host=2 sleep 2 docker-compose up -d load_balancer
To use redis cache install Abp.RedisCache library to ProjectName.Web.Core project. And update ProjectNameWebCoreModule.cs like following:
[DependsOn(..., typeof(AbpRedisCacheModule))] public class ProjectNameWebCoreModule : AbpModule {
And adding redis cache configuration to PreInitialize method (ProjectNameWebCoreModule.cs)
public override void PreInitialize() { ... Configuration.Caching.UseRedis(options => { var connectionString = _appConfiguration["Abp:RedisCache:ConnectionString"]; if (connectionString != null && connectionString != "localhost") { options.ConnectionString = AsyncHelper.RunSync(() => Dns.GetHostAddressesAsync(connectionString))[0].ToString(); } }) ...
Add redis configurations appsettings.staging.json.
appsettings.staging.json
{ ..., "Abp": { "RedisCache": { "ConnectionString": "outputs_abp_redis_1" } } }
outputs_abp_redis_1 is the name of redis container and this name is defining by docker automatically. After this changing, host project will resolve the dns of machine that is deployed on. And now, when I run build-with-ng.ps1 and up.ps1 , web farm project will run. And the result:
As you can see, all containers are working. When you browse you can see Angular UI is working.
How Will I Know If Haproxy and Redis Work?
There are tools to track haproxy activity(haproxy web interface) and get redis stored data(redis cli).
Haproxy web interface
When you browse you will see something like following.
When you navigate between on angular application pages or run any api on host project (), you can see that the haproxy is routing the requests to different machines. You can track which machine is running under Session rate>Cur tab that are changing for web-1 and web-2.
Redis cli
To understand if redis is working, you can use redis-cli. run docker exec -it outputs_abp_redis_1 redis-cli command to run redis-cli interactive mode to connect redis server that is running on docker container. Then to test if redis is running, write ping command and it will return PONG if it works. Now when I write keys * command, I should get the application cache keys.
As you can see, redis is working well. Cache keys stored on redis, correctly.
Source Code
You can get the latest source code here | https://aspnetboilerplate.com/Pages/Documents/Articles/Running-in-Docker-Containers-and-Building-a-Web-Farm-Load-Balancer-Scenario/index.html | CC-MAIN-2021-43 | refinedweb | 1,444 | 59.6 |
Where are you? Implementing geolocation with Geocoder PHPBy Arno Slatius. We actually use a paid service for this although not for everything. The paid results hold much more information than you get from free services. I found out that Geocoder PHP actually is what I was missing for the integration of various services that we use.
Geocoder PHP provides: “an abstraction layer for geocoding manipulations”. The library is split into three parts: an HttpAdapter for doing requests, several geocoding Providers and various Formatter/Dumpers to do output formatting.
Installation
Installation of Geocoder is most easily done using composer. Add the following to your composer.json:
{ "require": { "willdurand/geocoder": "@stable" } }
Or get one of the archives from the Geocoder PHP website.
GeoCoding
We’ll first take a look at geocoding adresses. The input for this is usually a street address, but can also be a neighborhood, area, place, state or country. The geocoded output should have a good pointer to where on the globe you should look for what you used as input. The result, of course, depends on the quality of the used geocoder.
To use Geocoder you’ll first need an HttpAdapter to fire the requests at the web service. The HttpAdaper is a web client that actually comes in 5 different flavors with PhpGeocoder. You can use the always available CUrl or Socket based clients. Additionally you can add PHP clients like Buzz Browser, Guzzle PHP HTTP Client, Zend HTTP and the GeoIP2 webservice API. You’ll need to add those to your project yourself if you want to use them.
With the HTTP adapter you can then use a geocoder to do the work. The list of supported geocoders is long, very long. They can be divided into a two groups;
- Geographical coders: Google Maps, OpenStreetMap, Bing Maps, MapQuest, Yandex, GeoPlugin, Geonames, ArcGIS Online, Google Maps for Business, TomTom, OIORest (Denmark only), Geocoder.ca (Canada only), Geocoder.us (USA only), DataScienceToolkit (USA & Canada), IGN OpenLS (France only), Baidu (China only),
- IP geocoders: FreeGeoIp, HostIp, IpInfoDB, Geoip, GeoIPs, MaxMind web service, IpGeoBase, DataScienceToolkit
Most can be used for free and some require you to register for an API-key. Most will have some sort of rate or daily limit on the amount of queries that you can shoot their way. Google, for instance, will allow you to do 2500 requests/day.
The quality of the geocoding services varies. Some support specific countries, and usually will be better for those countries. But this is not what we’ll be evaluating here. It is something you do want to test out for yourself, because results will depend on the place you’re geocoding.
Enough about all the possibilities, let’s get to work. We’ll start with a simple example first.
$lookfor = 'Laan van Meerdervoort, Den Haag, Nederland'; $adapter = new \Geocoder\HttpAdapter\CurlHttpAdapter(); $geocoder = new \Geocoder\Geocoder(); $geocoder->registerProvider(new \Geocoder\Provider\GoogleMapsProvider($adapter)); $result = $geocoder->geocode($lookfor);
Simple, right? This is what you’ll get as result:
Geocoder\Result\Geocoded#1 ( [*:latitude] => 52.0739343 [*:longitude] => 4.2636776 [*:bounds] => array ( 'south' => 52.0610866 'west' => 4.2262026 'north' => 52.0868446 'east' => 4.3008719 ) [*:streetNumber] => null [*:streetName] => 'Laan Van Meerdervoort' [*:cityDistrict] => null [*:city] => 'The Hague' [*:zipcode] => null [*:county] => 'The Hague' [*:countyCode] => 'THE HAGUE' [*:region] => 'South Holland' [*:regionCode] => 'ZH' [*:country] => 'The Netherlands' [*:countryCode] => 'NL' [*:timezone] => null )
This is actually the most beautiful part of Geocoder PHP; which ever geocoder you use, what ever direction you’re geocoding in (normal/reverse), you’ll always get the same result set. That’s invaluable if you’d ever think about changing the used geocoder.
About my address choice; the Laan van Meerdervoort is one of the longest streets in the Netherlands (~5800m long). I left out a house number to get the full street. The difference in the latitude/longitude and the bounds make this obvious now. These bounds, in this case, hold the bounding box of the entire street. It’s very practical that you get both in your result; the center of the street and the bounds.
Most geocoders can be asked to be locale or region sensitive and Geocoder has this nicely integrated with a
LocaleAwareProviderInterface. Say you’re Russian, for instance, and you need to go to the Peace Palace which is on the above street you’d use it like this:
$geocoder->registerProvider(new \Geocoder\Provider\GoogleMapsProvider($adapter, 'Ru')); $result = $geocoder->geocode('Laan van Medevoort 7, Den Haag, Nederland');
And get:
Geocoder\Result\Geocoded#1 ( [*:latitude] => 52.0739343 [*:longitude] => 4.2636776 [*:bounds] => array ( 'south' => 52.0610866 'west' => 4.2262026 'north' => 52.0868446 'east' => 4.3008719 ) [*:streetNumber] => 7 [*:streetName] => 'Laan Van Meerdervoort' [*:cityDistrict] => null [*:city] => 'Гаага' [*:zipcode] => null [*:county] => 'Гаага' [*:countyCode] => 'ГААГА' [*:region] => 'Южная Голландия' [*:regionCode] => 'ZH' [*:country] => 'Нидерланды' [*:countryCode] => 'NL' [*:timezone] => null )
Of course you want to get the best results out of your geocoding. If you’re in the US you might want to use a coder that specifically supports the US and another for outside the US. The beauty is that Geocoder supports the chaining of geocoders. You can then throw your request at a set of coders and the first valid result will be returned. This is great because otherwise you’d have to try/catch a whole set of coders until you got a result.
An example of this:
$adapter = new \Geocoder\HttpAdapter\CurlHttpAdapter(); $geocoder = new \Geocoder\Geocoder(); $chain = new \Geocoder\Provider\ChainProvider([ new \Geocoder\Provider\OpenStreetMapProvider($adapter), new \Geocoder\Provider\GoogleMapsProvider($adapter), new \Geocoder\Provider\BingMapsProvider($adapter, $bingApiKey), ]); $geocoder->registerProvider($chain);
Do note that Geocoder will return the first valid result. That might not be the best you can get! So take good care in the sequence you chain your coders together!
Reverse geocoding
Reverse geocoding is when you’ve got a point on the globe, usually a GPS (WGS84) coordinate. If your input is in another projection you could use the PHP version of Proj4 to transform it.
The syntax, once you’ve got an instance of a geocoder, again is quite simple. The only thing you’ll need to remember is that it’s latitude first, then longitude (hands up if you also keep switching latitude and longitude accidentally…).
Make sure your geocoder supports reverse geocoding because not all do. I’ll geocode the geographic center of the Netherlands:
$result = $geocoder->reverse(52.155247, 5.387452);
Which leads to a geocoded result like this.
Geocoder\Result\Geocoded#1 ( [*:latitude] => 52.1551992 [*:longitude] => 5.3872474 [*:bounds] => array ( 'south' => 52.1551992 'west' => 5.3872474 'north' => 52.1551992 'east' => 5.3872474 ) [*:streetNumber] => '30' [*:streetName] => 'Krankeledenstraat' [*:cityDistrict] => 'Stadskern' [*:city] => 'Amersfoort' [*:zipcode] => '3811 BN' [*:county] => 'Amersfoort' [*:countyCode] => 'AMERSFOORT' [*:region] => 'Utrecht' [*:regionCode] => 'UT' [*:country] => 'The Netherlands' [*:countryCode] => 'NL' [*:timezone] => null )
Geocoding IP addresses
I’ve personally used geocoding on IP addresses to determine what language to show a website in (combined with browser accepts of course). You feed the IP address of your user, usually from
$_SERVER['REMOTE_ADDR'], to the geocoder and get a result. Something to consider is whether or not your server supports IPv6 and is able to serve over IPv6. If so, then you’ll need to use a service that supports geocoding IPv6 because not all do.
There are several IP geocoders available, and using them is as simple as using the others:
$adapter = new \Geocoder\HttpAdapter\CurlHttpAdapter(); $geocoder = new \Geocoder\Geocoder(); $result = $geocoder->registerProvider( new \Geocoder\Provider\FreeGeoIpProvider($adapter) ) ->geocode($_SERVER['REMOTE_ADDR']);
The result, again, is a
Geocoded result. I won’t show an example because you’ll be disappointed. IP geocoders results vary a lot in quality. Most often you’ll only get a country and nothing else. I’ve set up some examples for Geocoding IP addresses if you’re interesed in the results for your IP.
Another thing to note in the last example is that you should always nest your geocoding attempts with Geocoder PHP in a try/catch block. The geocoder is able to throw several exceptions at you depending on the output of the geocoder you’re using. Most are quite obvious but can be handled nicely by adding several catch conditions:
try { $result = $geocoder->registerProvider( new \Geocoder\Provider\FreeGeoIpProvider($adapter) ) ->geocode($_SERVER['REMOTE_ADDR']); } catch (NoResultException $e) { $result = "No result was returned by the geocoder"; } catch (QuotaExceededException $e) { $result = "We met our daily quota"; } catch (Exception $e) { $result = "Error: " . $e->getMessage(); }
Output formatting
The last thing I want to touch on is the output formatting. Geocoder has several output formatters that allow you to easily integrate its result into your application. There are two types of output; a string
Formatter that has a
sprintf-like interface and
Dumpers which allow for a language specific output.
The supported output dumpers are XML like GPS eXchange Format (GPX), Keyhole Markup Language (KML) for interfacing with Google Earth, Well-Known Binary (WKB) and Well-Known Text (WKT) for your geospatial enabled databases and finally GeoJSON for easy JavaScript interfacing.
For example; get a geocoded result and feed that to a formatter or dumper;
$adapter = new \Geocoder\HttpAdapter\CurlHttpAdapter(); $GeoJSON = new \Geocoder\Dumper\GeoJsonDumper; $geocoder = new \Geocoder\Geocoder(); $geocoder->registerProvider( new \Geocoder\Provider\OpenStreetMapProvider($adapter) ); echo $GeoJSON->dump( $geocoder->reverse(52.155247, 5.387452) );
This will output a GeoJSON result which is quite usable with OpenLayers for instance:
{ "type": "Feature", "geometry": { "type": "Point", "coordinates": [ 5.3872095538856, 52.1551691 ] }, "properties": { "streetName": "Lieve Vrouwekerkhof", "zipcode": "3811 AA", "city": "Amersfoort", "cityDistrict": "Amersfoort", "region": "Utrecht", "country": "Nederland", "countryCode": "NL" } }
Conclusion
Compliments have to be made about the great class structure in Geocoder. It is a true OOPHP library that makes good use of namespaces and interfaces. It is extremely easy to use. Extending it or building your own Geocoders for it should be very easy.
If straightforward geocoding is what you are looking for, then this library will work great for you. I personally see room for an even better Geocoder because there is more power available. It would be great if you were able to set conditions on the request you’re making to a geocoding chain to get the best result available from all coders. For instance: “at least return a street name” in the result set or “have a confidence level of > 75%” (some geocoders supply this information) or a “maximum address error of less than 500 meters” (something that could be calculated taking the difference of the output address and the input coordinates). Of course all this depends on individual geocoders’ possibilities, but simple filtering can certainly be done.
If you’re interested in the geocoders themselves; I’ve set up a small demo to check out the results for various geocoders.
Did you ever work with geocoders, needed a library like this or perhaps work with it already? Please share your thoughts with us. | https://www.sitepoint.com/implementing-geolocation-geocoder-php/?utm_source=sitepoint&utm_medium=articletile&utm_campaign=likes&utm_term=php | CC-MAIN-2017-13 | refinedweb | 1,782 | 63.9 |
zip_file_set_encryption − set encryption method for file in zip
libzip (-lzip)
#include <zip.h>
int
The zip_file_set_encryption() function sets the encryption method for the file at position index in the zip archive to method using the password password. The method is the same as returned by zip_stat(3). For the method argument, currently only the following values are supported:
If password is NULL, the default password provided by zip_set_default_password(3) is used.
The current encryption method for a file in a zip archive can be determined using zip_stat(3).
Upon successful completion 0 is returned. Otherwise, −1 is returned and the error information in archive is set to indicate the error.
zip_file_set_encryption()
fails if:
[ZIP_ER_ENCRNOTSUPP]
Unsupported compression method requested.
libzip(3), zip_set_default_password(3), zip_stat(3)
zip_file_set_encryption() was added in libzip 1.2.0.
Dieter Baron <dillo@nih.at> and Thomas Klausner <tk@giga.or.at> | https://fossies.org/linux/libzip/man/zip_file_set_encryption.man | CC-MAIN-2020-05 | refinedweb | 144 | 51.04 |
Introduction: Interacting With an OLED and an Accelerometer
In the previous tutorial, we learned how to use an OLED to display different shapes. We started by drawing a pixel, and from there we were able to draw lines, rectangles, squares, and circles. However, each display took us a considerable amount of time and effort. Fortunately the JUGL library can be used to do the same, and more, in less time and in a much easier way. The library supports the following features:
- Pixels
- Lines
- Rectangles
- Squares
- Polygons
- Circles
- Triangles
- Text
Teacher Notes
Teachers! Did you use this instructable in your classroom?
Add a Teacher Note to share how you incorporated it into your lesson.
Step 1: Introducing the Accelerometer ADXL335
In this tutorial we are going to show how to use the library to display a ball and make it move around the screen using an accelerometer. The accelerometer that we will be using is the ADXL335.
This sensor measures acceleration, like the one due to gravity, on the x, y and z axes. Thus, if the sensor is at rest parallel to ground, only one of the axes will feel the acceleration of gravity. As you tilt the device, other axes will start feeling the acceleration of gravity too. This way, it is possible to analyze the way the device is moving.
Step 2: Adding the Accelerometer to the OLED Set-up
So now that we know how the OLED and accelerometer work, it is now time to create our setup.
First we need to add the sensor to our OLED setup in the following way as shown in the image above.
The sensor’s pins are connected to the Arduino as shown below:
- VCC – 5V
- GND – GND
- X – A3
- Y – A2
- Z and ST are left unwired
Step 3: The Code
Once we have the wiring set up, we can use the following code to create our game:
<br><p>#include <JUGL.h><br>#include <SPI.h> #include <Wire.h> #include <JUGL_SSD1306_128x64.h> using namespace JUGL; SSD1306_128x64 driver; const int xpin = A3; //Assign pin A3 to x const int ypin = A2; //Assign pin A2 to y int x, y, x1, y1, r, varx, vary, width, height; //Define variables int xy [2]; //Array to hold (x,y) coordinate //Declaration of functions void Circle(IScreen& scr); void move_right(IScreen& scr); void stop_right(IScreen& scr); void move_left(IScreen& scr); void stop_left(IScreen& scr); void move_up(IScreen& scr); void stop_up(IScreen& scr); void move_down(IScreen& scr); void stop_down(IScreen& scr); void setup(){ IScreen& screen = driver; //Make reference to driver screen.Begin(); //Initialize screen width = screen.GetWidth(); //Get width of screen (128) height = screen.GetHeight(); //Get height of screen (64) Circle(screen); //Draw circle } void loop(){ x1 = analogRead(xpin); //Read x analog data y1 = analogRead(ypin); //Read y analog data IScreen& screen = driver; //Make reference to driver if(x1<500){ //Check if sensor is tilted to the right move_right(screen); //Move ball right if(varx>=width-1-r ){ //Check if ball reached end of screen stop_right(screen); //Stop moving } } if(x1>520){ //Check if sensor is tilted to the left move_left(screen); //Move ball left if(varx=height-1-r){ //Check if ball reached end of screen stop_up(screen); //Stop moving } } if(y1>510){ //Check if sensor is tilted down move_down(screen); //Move ball down if(varyr){ //Check if ball is within boundaries scr.Flush(); //Display on screen } } void stop_left(IScreen& scr){ scr.Clear(); //Clear screen varx = r; //Update varx xy[0] = varx; //Store new varx value scr.FillCircle(Point(5,xy[1]),r); //Draw circle scr.Flush(); //Display on screen } void move_up(IScreen& scr){ scr.Clear(); //Clear screen vary += 10; //Move ball 10 pixels up, assign value to vary xy[1] = vary; //Store new vary value scr.FillCircle(Point(xy[0],vary),r); //Draw circle if(varyr){ //Check if ball is within boundaries scr.Flush(); //Display on screen } } void stop_down(IScreen& scr){ scr.Clear(); //Clear screen vary = r; //Update vary xy[1] = vary; //Store new vary value scr.FillCircle(Point(xy[0],5),r); //Draw circle scr.Flush(); //Display on screen }</p>
Step 4: Code Explanation
So here is what’s going on in the code. First we include all the libraries that we need to run this program. The JUGL library contains the function required to draw the circle, while the JUGL_SSD1306_128x64 library is used to initialize the screen. This last library also contains the “DrawPoint” and “Flush” functions to draw each of the circle’s pixels and display them on the screen. The SPI and Wire libraries are used to communicate with devices via SPI or I2C. In this case, we are using I2C communication. Since the library supports many drivers, it is required to specify which one we are using. Line 6 takes care of this by specifying that we will be using driver SSD1306 on a 128x64 screen. Below are the other drivers that this library supports:
- EPD 2.0
- EPD 1.44
- EPD 2.7
- PCF8833
- KS0107
In the next part of the code, we assign the Arduino’s analog inputs A3 and A2 to the x and y pins of the sensor, respectively. We also define the variables that we are going to use and create an array to hold the x and y coordinate (origin of the ball). Then we make a forward declaration of the functions that we will be using in this program.
Next, we move to the setup part of the code. In here, we make a reference to the driver that we are using. Based on the reference, we initialize the screen, and get its width and height. Finally, we call the function, “Circle.” This function clears the screen and sets the radius and origin of our ball. We use the “FillCircle” and “Flush” functions from our library to draw the ball and display it. This generates a ball at the bottom left corner of our screen with a radius of 5 pixels and an origin at (5,5).
Step 5: Sensor in Action
Now that we have our ball, we can use the sensor to make it move.
In the loop section of the program, we make a reference to the driver that we are using again. Then we read the data from pins x and y, and assign the values to variables x1 and y1 respectively. The table above shows the value of each pin depending on the inclination of the device.
By comparing the values with the values “at rest” we can determine if the device is being tilted to the right, left, etc. Let’s take the first case in our program as an example.
We know that if the device is being tilted to the right, the value “at rest” will decrease. When this happens, the program calls the function “move_right.” This function clears the screen and then adds the value 10 to variable “varx” (which in this case is zero). This represents the number of pixels that we want to move the ball’s origin in the x axis. Then we store the new value of “varx” in the first location of the array. Finally, we call the “FillCircle” and “Flush” functions to display a new circle on our screen 10 pixels away from the previous circle in the x axis. The process keeps repeating as long as the value of pin X is less than 510, thus erasing the previous circle and drawing a new one 10 pixels away from each iteration. This gives the illusion that the ball is moving to the right.
However, if the ball reaches the end of the screen, the function “stop_right” is called. This function clears the screen, sets varx equal to the 126 and stores this value in the first location of the array. Then the functions “FillCircle” and “Flush” are called to draw and display a ball with its origin at (126,xy[1]). In other words, the program will stop moving the ball 10 pixels to right, and instead it will keep drawing the same circle at the edge of the right side of the screen in whatever y location it resides.
The same idea is used when the device is tilted left, up, or down.
Step 6: Sensor in Action - Continued
Every time we move the ball on the screen we need to keep track of the changes in the x and y axis. As it was stated before, this is done by storing the new values of x and y in the array every time there is a change in the ball’s origin. For example, if we move the ball to the right and stop such that the last ball drawn has an origin at (30,5), if we want to move the ball up from there, we have to take into account the change in the x axis so that when the new ball is drawn, its origin resides at (30,15). In other words, the values held in the array serve as a reference location for the next ball to be drawn.
Above is a short video of this tutorial's goal in action from the Jaycon Systems collection of video tutorials.
We hope you enjoyed this Instructable and you'll tell us how your project turns out!
If you haven't already, then check out Jaycon Systems website for more cool tutorials, and our online store where you can buy the parts you need to get creating!
We also have more interesting and fun projects for you to consider on our Instructables profile as well!
If you have any questions about this tutorial, then please do not hesitate to post a comment, shoot us an email, or post it in our forum!
Thanks for reading!
Be the First to Share
Recommendations
Discussions | https://www.instructables.com/id/Interacting-With-an-OLED-and-an-Accelerometer/ | CC-MAIN-2020-10 | refinedweb | 1,633 | 69.31 |
Writing Virtualmin Plugins
Introduction to Plugins
Before starting on a plugin, we suggest that you first read the Webmin module developers guide at .
A Virtualmin plugin is simply a Webmin module that can provide additional features to Virtualmin virtual servers or users. To do this, it must contain a Perl script called virtual_feature.pl which defines certain functions. The plugin module can then be registered by Virtualmin, and the feature it offers will then be available when creating new virtual domains.
A plugin typically adds a new possible to virtual servers, in addition to the standard features built into Virtualmin (website, DNS domain and so on). For example, it may enable reporting using some new statistics generator, or activate some game server in the virtual domain. Virtualmin will add options to the Create Server and Edit Server pages for enabling the plugin’s feature, and call functions in its virtual_feature.pl when the feature is activated, de-activated or changed.
Starting a Plugin
The steps to start writing your own plugin are similar to those for creating a new Webmin module :
- Find the Webmin root directory, which will be
/usr/libexec/webminon Redhat-derived systems,
/usr/share/webminon Debian and Ubuntu, or
/opt/webminon Solaris.
- Pick a directory name for your plugin that is not currently in use by any other Webmin module. Plugin directories typically start with
virtualmin-- so for the purposes of this documentation, we will assume that yours is going to be called
virtualmin-yourname .
- Create that directory under the Webmin root. Then within it, create
helpand
langsub-directories.
- Create a file in the directory called
module.info. This should contain lines like :
desc=A description of your plugin version=1.0 hidden=1
Naturally, the
desc= line should be changed to something more meaningful. If you want the plugin to appear in Webmin’s menu of modules, remove the
hidden=1 line. This is not typically the case, as most plugins are accessed entirely via Virtualmin.
- Create a module library file for your plugin, named
virtualmin-yourname-lib.pl. Initially, it can contain the following code :
do '../web-lib.pl'; &init_config(); do '../ui-lib.pl'; &foreign_require("virtual-server", "virtual-server-lib.pl"); 1;
- Create a plugin API file called
virtual_feature.pl, containing the following initial code :
do 'virtualmin-yourname-lib.pl'; sub feature_name { return "A description of your plugin"; }
- Delete the file
/etc/webmin/module.infos.cacheto clear Webmin’s cache of installed modules.
- Open Virtualmin in your browser, and click on Features and Plugins under System Settings on the left menu. Your new plugin should appear in the list of those available - check its box in the left column, and click Save.
With this done, you can now start work on expanding the capabilities of the plugin by implementing the API documented below.
Package and Distribution
Since a plugin is just a Webmin module, the usual process for packaging it still applies. The commands to do this are :
cd /usr/libexec/webmin tar cvzf /tmp/virtualmin-yourname.wbm.gz virtualmin-yourname
As you can see, a module or plugin is just a tar file of the directory. These can then be installed using the Webmin Configuration module, on the Webmin Modules page.
If you prefer to package your plugin as an RPM, this can be done using the
makemodulerpm.pl script, available from . It must be run as root as shown below :
cd /usr/libexec/webmin /usr/local/bin/makemodulerpm.pl --target-dir /tmp virtualmin-yourname
This will create a file named
wbm-virtualmin-yourname-1.0-1.noarch.rpm in the
/tmp directory.
A similar script exists for Debian and Ubuntu systems, available from .
Plugin CGI Scripts
Because a plugin is just a Webmin module, it can contain
.cgi scripts like any other module. These can be useful for displaying additional information about the feature that the plugin manages, or for managing objects within that feature (such as mailing lists or user accounts). Most plugins will not need to include any CGI scripts, as their functionality is provided entirely by implementing the API functions described below.
You can see some examples of this by looking at the Mailman and Oracle plugins. The former includes several CGIs for managing mailing lists in a domain, while the Oracle plugin has CGIs for creating and viewing tables within databases created by Virtualmin.
The Plugin API
The meat of a plugin is it’s implementation of an API that is called by Virtualmin when performing tasks like enabling or disabling the plugin’s feature for a domain. These must all be defined in the file
virtual_feature.pl under the plugin’s directory. Most are optional, depending on which functionality your plugin is implementing.
For each function the name, supplied parameters, description and an example implementation are shown.
Many functions are passed a domain object as a parameter. This is simply a hash reference that is used internally by Virtualmin to store information about a virtual server. Some of the useful keys in the hash are :
dom- the domain name, like foo.com
user- the administration username for the domain
home- the server’s home directory
ip- the server’s IP address (which may be virtual or shared)
In addition, for each feature the domain has enabled, the code for that feature will be set to 1 in the hash. For example, a domain with a website has
web set to 1, for email the code is
webmin. Virtualmin will add an entry for your plugin when its feature is enabled for the domain, the code for which will be the same as the plugin’s directory.
Core Functions
Functions in this section must be implemented by all plugins.
feature_name()
This must return a short name for the feature, like My plugin.
sub feature_name { return "My plugin name"; }
Functions for Features
The most common use for plugins is to add a new feature that can be selected when a virtual server is created or modified. The functions listed in this section should be implemented in this case, although not all are mandatory.
feature_label(edit-form)
This must return a longer name for the feature, for display in the server creation and editing pages. The
edit-form parameter can be used to determine which page is calling the function.
sub feature_label { return "Enable my plugin"; }
feature_check()
This optional function will be called when the plugin is registered by Virtualmin, to check that all of its dependencies are met. It must return undef if everything is OK, or an error message if some program or service that the plugin depends upon is missing. If not implemented, Virtualmin will assume that the plugin has no dependencies.
sub feature_check { if (&has_command("someprogram")) { return "The commmand someprogram required by this plugin is not installed"; } else { return undef; } }
feature_losing(domain)
This should return text to be displayed when this feature is being removed from a domain. The domain parameter is the Virtualmin domain hash reference for the server the feature is being removed from.
sub feature_losing { return "My plugin for this virtual server will be disabled"; }
feature_disname(domain)
This optional function should return a description of what will be done when this feature is temporarily disabled. It is only needed if your plugin implements the feature_disable function, indicating that it can be disabled.
sub feature_disname { return "My plugin will be temporarily de-activated"; }
feature_clash(domain)
If activating this plugin feature in the given domain would clash with something already on the system, this function must return an error message. Otherwise, it can just return undef.
sub feature_clash { my ($d) = @_; if (-r "/etc/someprogram/$d->{'dom'}") { return "Some program is already enabled for this domain"; } else { return undef; } }
feature_depends(domain)
If implemented, this function should check if the given domain object has all the features enabled that would be required by this plugin. For example, if your plugin implements something that is accessible via the web, the domain must have the
web feature set. If a dependency is missing it must return an error message explaining this, or
undef if everything is OK.
sub feature_depends { my ($d) = @_; if ($d->{'web'}) { return undef; } else { return "My plugin requires a website"; } }
feature_suitable(parentdom, aliasdom, superdom)
This optional function should check the given parent domain, alias target domain and super-domain objects, to ensure that they are suitable for this feature. It can be useful for preventing the plugin from being enabled in sub-domains or alias domains, where it may not be appropriate. It must return 1 if the feature can be used, or 0 if not. If not implemented, Virtualmin assumes that it can be used anywhere.
sub feature_suitable { my ($parent, $alias, $super) = @_; if ($parent && !$parent->{'web'}) { return 0; } else { return 1; } }
feature_setup(domain)
This function will be called when the plugin feature is being enabled for some server, either at creation time or when the server is subsequently modified. It must perform whatever actions are needed, such as modifying config files, running commands and so on. It should notify the user of the features activation by calling the functions
&$virtual_server::first_print and
&$virtual_server::second_print, like so :
sub feature_setup { my ($d) = @_; &$virtual_server::first_print("Setting up My plugin.."); my $ex = system("somecommand --add $d->{'dom'} >/dev/null 2>&1"); if ($ex) { &$virtual_server::second_print(".. failed!"); return 1; } else { &$virtual_server::second_print(".. done"); return 1; } }
feature_delete(domain)
This function is called when the feature is removed for some server, either at deletion time or when the server is modified. It must perform whatever config file changes or run whatever commands are needed to turn the feature off, and should use the
&$virtual_server::first_print and
&$virtual_server::second_print functions to notify the user about what it is doing.
sub feature_delete { my ($d) = @_; &$virtual_server::first_print("Turning off up My plugin.."); system("somecommand --remove $d->{'dom'} >/dev/null 2>&1"); &$virtual_server::second_print(".. done"); }
feature_modify(domain, olddomain)
Whenever a virtual server is modified, this function will be called in all plugins. It should check if some attribute of the server that the plugin uses has changed (like dom or user), and update the appropriate config files. For example, if your feature configures some program that needs to know the virtual server’s domain name, this function must compare
$domain→{’dom’} and
$olddomain→{’dom’} , and if they differ perform whatever updates are needed. It should only produce output when it actually does something though.
sub feature_modify { my ($d, $oldd) = @_; if ($d->{'dom'} ne $oldd->{'dom'}) { &$virtual_server::first_print("Changing domain for My plugin .."); rename("/etc/someprogram/$oldd->{'dom'}", "/etc/someprogram/$d->{'dom'}"); &$virtual_server::second_print(".. done"); } }
feature_disable(domain)
If this function is defined, it will be called when a virtual server with the plugin feature active is disabled. It should temporarily turn off access to the feature in a non-destructive way, so that it can be fixed later by a call to feature_enable.
sub feature_disable { my ($d) = @_; &$virtual_server::first_print("Temporariliy disabling My plugin .."); rename("/etc/someprogram/$d->{'dom'}", "/etc/someprogram/$d->{'dom'}.disabled"); &$virtual_server::second_print(".. done"); }
feature_enable(domain)
This function will be called when a virtual server with the plugin’s feature is re-enabled. It should undo whatever changes were made by the
feature_disable function. It only needs to be implemented if
feature_disable is.
sub feature_enable { my ($d) = @_; &$virtual_server::first_print("Re-enabling My plugin .."); rename("/etc/someprogram/$d->{'dom'}.disabled", "/etc/someprogram/$d->{'dom'}"); &$virtual_server::second_print(".. done"); }
feature_bandwidth(domain, start, bwhash)
If defined, this function should report to Virtualmin the amount of bandwidth used by some virtual server since the given start Unix time. Bandwidth is the total number of bytes uploaded and downloaded, broken down by day. This function should scan whatever log file is available for the feature, extract upload and download counts for the domain, and add to the values in the
bwhash hash reference.
Because bandwidth is accumulated by day, the
bwhash hash is index by the number of days since 1st Jan 1970 GMT, which is simply a Unix time divided by 86400.
feature_webmin(domain, other-domains)
If you want your plugin to provide access to a Webmin module to the owners of virtual servers that have its feature enabled, this function can be used tell Virtualmin which modules access should be granted to. Typically, a plugin will grant access to its own module, which will have standard CGI scripts for use in further configuring whatever service the plugin enables.
This function must return a list of array references, each of which has two values :
- The directory of a module to grant access to (typically just
$module_name)
- A hash reference of ACL values to set in that module for the domain owner. This is typically used to restrict him to just the configurations relevant to the given domain.
The domain parameter is the virtual server object that this feature is enabled in, and other-domains is an array reference of other virtual servers that are owned by the same user as domain, and which have the plugin’s feature enabled. This latter parameter should be taken into account in order to grant access to configure all of the user’s servers.
sub feature_webmin { local ($d, $all) = @_; my @fdoms = grep { $_->{$module_name} } @$all; if (@fdoms) { return ( [ $module_name, { 'doms' => join(" ", @fdoms) } ] ); } else { return ( ); } }
feature_import(domain-name, user-name, db-name)
This function is called when an existing virtual server is being imported into Virtualmin. It should return 1 if the service configured by the plugin is already active for the given domain, perhaps because it was set up manually.
sub feature_import { my ($dname, $user, $db) = @_; if (-r "/etc/someprogram/$dname") { return 1; } else { return 0; } }
feature_links(domain)
This optional function allows the plugin to provide additional links on the left menu of framed Virtualmin themes when a domain with the feature enabled is selected. It must return a list of hash references, each containing the following keys :
mod- The module the link is to, typically
$module_name.
desc- The text of the link.
page- The CGI within the module that the link is to.
cat- The left-side menu category that this link should appear under, such as
servicesor
logs.
A link to a module that the current Webmin user does not have access to will not be displayed.
sub feature_links { local ($d) = @_; return ( { 'mod' => $module_name, 'desc' => "Manage My plugin", 'page' => 'index.cgi?dom='.$d->{'dom'}, 'cat' => 'services', } ); }
feature_always_links(domain)
This function is similar to
feature_links, but is called regardless of which domain is selected. It can be used when you have a page that can be used even for virtual servers that don’t have the plugin’s feature active.
feature_validate(domain)
This function is optional, and is used by Virtualmin domain validation page. If implemented, it should check to ensure that all configuration files and other settings specific to the domain are setup properly. If any problems are found it should return an error message string, otherwise
undef.
sub feature_validate { my ($d) = @_; if (!-r "/etc/someprogram/$d->{'dom'}") { return "Missing someprogram configuration file"; } else { return undef; } }
virtusers_ignore(domain)
This optional function should be implemented by plugins that add and manage email aliases to a domain - for example, one that deals with mailing lists or autoresponders. Because you don’t generally want these aliases showing up in the general list of those in the domain, it should return a list of full addresses to hide from the list.
sub virtusers_ignore { my ($d) = @_; return ( "myplugin\@$d->{'dom'}" ); }
Limits and Template Functions
Plugins can define fields that will appear on the owner limits page for a virtual server, and in server templates. Limits are useful if your plugin uses up resources of some kind, such as disk space for databases or memory for server processes. You can then allow the master administrator to define limits on these resources, via functions documented here.
Virtualmin templates are the location of most configuration settings that are used when creating new virtual servers. If your plugin has some adjustable settings that might be used when it is enabled, you can implement the functions below to add new input fields to templates. These can then be fetched in your plugin’s
feature_setup function with the
get_template call.
feature_limits_input(domain)
This optional function should return a HTML inputs for limits specific to this plugin’s feature. The initial values of those limits should be take from the
domain object, where they must be stored in keys starting with the plugin’s name (to avoid clashes). The HTML returned must make use of the
ui_table_row function to format table columns.
sub feature_limits_input { my ($d) = @_; if ($d->{$module_name}) { return &ui_table_row("Maximum My plugin databases", &ui_opt_textbox($module_name."limit", $d->{$module_name."limit"}, 4, "Unlimited", "At most")); } }
feature_limits_parse(domain, in)
This function parses the HTML form inputs generated by
feature_limits_input. It should examine the
in hash reference and update the
domain object to set or clear limits based on the user’s selections. If any errors are found it should return an error message string, or
undef if all is OK.
sub feature_limits_parse { my($d, $in) = @_; if (!$d->{$module_name}) { # Do nothing } elsif ($in->{$module_name."limit_def"}) { delete($d->{$module_name."limit"}); } else { $in->{$module_name."limit"} =~ /^\d+$/ || return "Limit must be a number"; $d->{$module_name."limit"} = $in->{$module_name."limit"}; } return undef; }
template_input(template)
This optional function must return HTML for editing template settings specific to this plugin. The
template parameter is a hash reference to a template object, which contains settings for all features and plugins. Yours should only show and edit keys that start with the plugin’s module name, so that they are properly merged when a non-default template is edited. HTML returned must make use of the
ui_table_row function to format table columns.
sub template_input { my ($tmpl) = @_; return &ui_table_row("Default My plugin database size", &ui_opt_textbox($module_name."dbsize", $tmpl->{$module_name."dbsize"}, 5, "Default"). "MB"); }
template_parse(template, in)
This function must check
in for selections made by the user in the fields created by
template_input, and then update the
template hash reference. If there are any errors in the user’s input it should return an error string, or undef if everything is OK. Template keys must start with the plugin’s module name, so that they are properly merged when a non-default template is edited.
sub template_parse { my ($tmpl, $in) = @_; if ($in->{$module_name."dbsize_def"}) { delete($tmpl->{$module_name."dbsize"}); } else { $in->{$module_name."dbsize"} =~ /^\d+$/ || return "Database size must be a number"; $tmpl->{$module_name."dbsize"} = $in->{$module_name."dbsize"}; } }
Backup and Restore Functions
In the Virtualmin architecture, each feature and plugin is responsible for backing up and restoring configuration files associated with a domain, but which are stored outside the virtual server’s home directory. If your plugin adds a feature to Virtualmin which stores data in some location that won’t be included in a domain’s regular back, you should implement the functions in this section to ensure that it is backed up and restored.
feature_backup(domain, file, opts, all-opts)
This function should copy configuration files associated with the virtual server object
domain and copy them to the path given by
file. If there is just a single file then it can be copied directly - otherwise, your code should create a tar file of all required files and write it to that path.
The
&$virtual_server::first_print and
second_print functions should be called to tell the user that the backup is starting, and if it has succeeded or failed. If the copy was successful the function should return 1, or 0 on failure.
sub feature_backup { my ($d, $file) = @_; &$virtual_server::first_print("Copying My plugin configuration file .."); my $ok = ©_source_dest("/etc/someprogram/$d->{'dom'}", $file); if ($ok) { &$virtual_server::second_print(".. done"); return 1; } else { &$virtual_server::second_print(".. copy failed!"); return 0; } }
feature_restore(domain, file, opts, all-opts)
This function is the opposite of
feature_backup - it should take the data in the file passed in with the
file parameter, and update local config files or databases for the virtual server defined in
domain to restore those settings. The format of
file will be exactly the same as whatever your plugin created in the
feature_backup function, although it may be in a different location.
The
&$virtual_server::first_print and
second_print functions should be called to tell the user that the restore is starting, and if it has succeeded or failed. If the process was successful the function should return 1, or 0 on failure.
sub feature_restore { my ($d, $file) = @_; &$virtual_server::first_print("Restoring My plugin configuration file .."); my $ok = ©_source_dest($file, "/etc/someprogram/$d->{'dom'}"); if ($ok) { &$virtual_server::second_print(".. done"); return 1; } else { &$virtual_server::second_print(".. copy failed!"); return 0; } }
Other User Interface Functions
These functions aren’t really related to any feature or capability that the plugin provides - instead, the allow it to add elements to the Virtualmin user interface.
settings_links
If implemented, this function should return a list of hash references, each of which defines a new link under the System Settings menus on Virtualmin’s left frame. These are only accessible to the master administrator, and appear regardless of which domain is selected. They typically link to global configuration pages for the plugin.
Each hash must contain the following keys :
- link - The URL path to link to, relative to the Webmin root.
- title - The text of the link.
- icon - URL path to an icon for the link. This is currently only used when Virtualmin is not being accessed via the framed theme.
- cat - The category for this settings link, such as
settingor
ip.
sub settings_links { return ( { 'link' => "/$module_name/edit_config.cgi", 'title' => "My plugin configuration", 'icon' => "/$module_name/images/config.gif", 'cat' => 'setting' } ); }
theme_sections
The Virtualmin framed theme displays various information on the right-hand system information page after you login, such as the status of servers, available updates and comparative quota use. This function allows your plugin to add sections of its own, typically to display global status information.
If defined, it must return a list of hash references, each containing the following keys :
title- The title of the section
html- The HTML to appear within the section when it is opened. Forms that submit to CGIs within the plugin are perfectly OK.
status- Set to 1 if you want the section to be open by default, 0 if not.
for_master- This must be set to 1 if the section should be visible to the master administrator.
for_reseller- Set to 1 if it should be visible to resellers.
for_owner- Set to 1 if it should be visible to individual domain owners.
sub theme_sections { return ( { 'title' => 'My plugin status', 'html' => &is_server_running() ? 'Some program is running OK' : 'Some program is down!', 'status' => 0, 'for_master' => 1 } ); }
Functions For Mailboxes
A Virtualmin plugin can also provide extra capabilities to virtual server users. This is done by implementing additional functions in the
virtual_feature.pl file, similar to those used for adding a new server feature. This can be used for granting users access to some new service, like a game server or database, which is not supported natively by Virtualmin.
When a plugin adds capabilities to a user, additional inputs will typically appear on the user editing page. In additional, the plugin can define extra columns to appear in the user list, to display the status of the new user capabilities.
Most of the functions above take a user details hash reference as a parameter. Some of the useful keys in this hash are :
user- The full Unix username of the user, which may have the domain name appended, like jcameron-foo.
real- The real name of the user, such as Jamie Cameron.
home- The user’s home directory, which is typically under the virtual server’s home.
pass- The user’s encrypted Unix password.
plainpass- If the user’s password has just been changed or set, this field will contain the plain text password. It is not always available though, for example when editing a user without changing the password.
The functions that can be added to
virtual_feature.pl to support user capabilities are :
mailbox_inputs(user, new, domain)
This function is called when the page for editing a virtual server user is displayed. The user parameter is a hash reference of user details, such as the login name, real name and home directory. The new parameter will be set to 1 if this is a new user, or 0 if editing an existing user. The domain parameter is a hash reference of virtual server information, as used in the plugin functions documented above.
This function must return HTML for the additional inputs to display, formatted to fit inside a 2-column table. This is best done with functions from ui-lib.pl, like:
sub mailbox_inputs { my ($user, $new, $d) = @_; my $access = &check_user_access($user); return &ui_table_row("Allow access to My plugin?", &ui_yesno_radio("myplugin", $access)); }
It should detect the current state of the user, and use this information to determine the values of the inputs.
mailbox_validate(user, olduser, in, new, domain)
This function is called when the user form is saved, but before any changes are actually committed. It should check the form inputs in the in hash reference to make sure they are valid, and return either undef on success, or an error message if there is some problem.
mailbox_save(user, olduser, in, new, domain)
This function must save the actual settings selected for this user, by updating whatever configuration files are needed for this capability. The
user parameter is the update user details hash, containing his new username, password, real name and other attributes. The
olduser parameter is the user hash from before the changes were made, and can be compared with user to detect username and other changes.
in is the form inputs hash, new is a flag indicating if this is a new or edited user, and
domain is the details of the virtual server this user is in.
sub mailbox_save { my ($user, $olduser, $in, $new, $d) = @_; if ($user->{'user'} ne $olduser->{'user'}) { &set_user_access($olduser, 0); } &set_user_access($user, $in->{'myplug'} ? 1 : 0); }
mailbox_delete(user, domain)
This function is called when a user is deleted. It should check to see if he has the capability managed by this plugin enabled, and if so perform whatever tasks are needed to remove it. The parameters are the same as those for the mailbox_save function.
sub mailbox_save { my ($user, $d) = @_; &set_user_access($user, 0); }
mailbox_modify(user, olduser, domain)
This function gets called when a user is modified by some part of Virtualmin other than the Edit User page, for example by the
modify-user.pl command-line script. It should compare the old and new user objects to see if anything that this plugin uses has changed, such as the username or password. If so, it must update whatever configuration files the plugin uses.
sub mailbox_modify { my ($user, $olduser, $d) = @_; if ($user->{'user'} ne $olduser->{'user'}) { my $oldaccess = &get_user_access($olduser); &set_user_access($olduser, 0); &set_user_access($user, $oldaccess); } }
mailbox_header(domain)
If you want an additional column to appear in the user list indicating the state of this plugin’s capability for users, this function should return the title for the column. Otherwise, it should just return undef. If you don’t need to define any extra column, then you don’t even need to implement it.
sub mailbox_header { return "Plugin access"; }
mailbox_column(user, domain)
When a column exists for this plugin in the user list, this function will be called once for each user. It must return the text to display, such as Enabled or Disabled. If
mailbox_header is not implemented, then this function doesn’t need to be either.
sub mailbox_column { local ($user, $d) = @_; return &check_user_access($user) ? "Yes" : "No"; }
mailbox_defaults_inputs(defs, domain)
Virtualmin Pro allows users to define various defaults for new users added to domains, on a per-domain basis. If your plugin wants to be able to add to these defaults, you can implement this function. The
defs parameters is a hash reference for a user object containing the defaults, which should be checked to find the current status for your settings.
sub mailbox_defaults_inputs { my ($defs, $d) = @_; return &ui_table_row("Allow access to My plugin by default?", &ui_yesno_radio("myplugin", $defs->{'myplugin'})); }
mailbox_defaults_parse(defs, domain, in)
This function is the counterpart to
mailbox_defaults_inputs. It should check form inputs in
in and use them to update the default settings object
defs.
sub mailbox_defaults_parse { my ($defs, $d, $in) = @_; $defs->{'myplugin'} = $in->{'myplugin'}; }
Database Functions
In the core package, Virtualmin supports MySQL and PostgreSQL databases. However, the plugin architecture allows developers to add new database types which can then be associated with virtual servers. Typically a plugin that adds databases will also implement the
feature_ functions, so that the new database type can be enabled for new or existing virtual servers - just as is the case for MySQL and PostgreSQL.
Because Virtualmin allows mailbox users to have access to some database types, the plugin can also include support for creating, listing and managing additional users associated with each database. Because not all database systems support granting a user full access to a database, implementation of the user-related functions is optional.
database_name()
This function must return the name of the database type.
sub database_name { return "FooSQL"; }
database_list(domain)
This function must return a list of the names of databases owned by the given
domain object, each of which is a hash reference containing the following keys :
name- The unique database name.
type- The database type code, typically set to
$module_name.
desc- A description of the database type, usually the same as returned by
database_name.
users- A flag, set to 1 if the database supports multiple users, 0 if not.
link- A URL path for managing the database’s contents. If you have not implemented this, it this key can be left out.
Typically the list of databases for a domain will be stored in the domain hash itself, in a key named
db_$module_name. This removes the need for the plugin to store the domain → database mapping separately.
sub database_list { my ($d) = @_; my @rv; foreach my $db (split(/\s+/, $d->{'db_'.$module_name})) { push(@rv, { 'name' => $db, 'type' => $module_name, 'desc' => &database_name(), 'link' => "/$module_name/edit_dbase.cgi?db=$db" }); } return @rv; }
databases_all()
This function should return a list of all databases known to the database server the plugin manages, even those not associated with any domain. Its return format should be the same as
database_list.
sub databases_all { my @rv; foreach my $dbname (&list_foosql_databases()) { push(@rv, { 'name' => $dbname, 'type' => $module_name, 'desc' => &database_name() }); } return @rv; }
database_clash(domain, name)
This function must check if a database of the type managed by the plugin with the given
name already exists, and if so return 1. It is used by Virtualmin to prevent database name collisions at creation time. If no clash exists, it must return 0.
sub database_clash { my ($d, $name) = @_; foreach my $db (&list_foosql_databases()) { return 1 if ($db eq $name); } return 0; }
database_create(domain, name)
This function is where the real work of creating a new database should happen. It must perform all the work needed to add a database and associate it with the virtual server, typically by adding it to the
db_$module_name key in the
domain hash reference. It should use
&$virtual_server::first_print to output a message before creation starts, and
second_print to display success or failure when done. It should return 1 if creation was successful, 0 if not.
Access to the new database must be granted to the virtual server’s owner. For databases managed by some kind of server (like MySQL and PostgreSQL), the domain’s username and password must be able to login to access the new database. These can be found in the
domain hash in the
user and
pass keys.
sub database_create { my ($d, $name) = @_; &$virtual_server::first_print("Creating FooSQL database $name .."); local $err = &create_foosql_database($name); if ($err) { &$virtual_server::second_print(".. failed : $err"); return 0; } else { &$virtual_server::second_print(".. done"); $d->{'db_'.$module_name} .= " ".$name; return 1; } }
database_delete(domain, name)
This function must delete a database of the type managed by this plugin, and remove access to it from the virtual server. Like
database_create, it should use the
sub database_delete { my ($d, $name) = @_; &$virtual_server::first_print("Deleting FooSQL database $name .."); local $err = &delete_foosql_database($name); if ($err) { &$virtual_server::second_print(".. failed : $err"); return 0; } else { &$virtual_server::second_print(".. done"); $d->{'db_'.$module_name} =~ s/\s+\Q$name\E//g; return 1; } }
database_size(domain, name)
This function is called by Virtualmin when a user displays information about a database, and when computing a virtual server’s total disk usage. It must return two numbers :
- The size of the database on disk, in bytes.
- The number of tables in the database.
sub database_size { my ($d, $name) = @_; my $size = &disk_usage_kb("/var/foosql/$name"); my @tables = &list_foosql_tables($name); return ( $size*1024, scalar(@tables) ); }
database_users(domain, name)
If the plugin’s database type supports multiple logins, this function can be implemented to return a list of array references, each of which contains a login and password. Only users associated with
domain and with access to the database specified by the
name parameter need to be returned. If the password is encrypted, it is fine to use that as the second element of each array ref.
sub database_users { my ($d, $name) = @_; return &execute_foosql_sql($name, "select login,password from users where db = '$name'"); }
database_create_user(domain, database, user, password)
This function must create a new database with with access to the database specified by the
database parameter, which is a hash reference returned by
database_list. The new user must have the login set by the
user parameter, and password specified by
password. If something goes wrong, it should called
error.
sub database_create_user { my ($d, $db, $user, $pass) = @_; &execute_foosql_sql($db->{'name'}, "create user '$user' with password '$pass'"); }
database_modify_user(domain, old-database, database, old-user, user, password)
This function must modify the user in the database specified by the
old-database parameter and named
old-user, changing his login to
user and password to
password (if provided). If the modification fails, it should call
error.
sub database_modify_user { my ($db, $olddb, $db, $olduser, $user, $pass) = @_; if ($user ne $olduser) { &execute_foosql_sql($olddb->{'name'}, "rename user '$olduser' to '$user'"); } if (defined($pass)) { &execute_foosql_sql($olddb->{'name'}, "alter user '$user' password '$pass'"); } }
database_delete_user(domain, user)
This function should delete the database user specified by the
user parameter from all databases owned by the virtual server in
domain.
sub database_delete_user { my ($d, $user) = @_; foreach my $name (&list_foosql_databases()) { &execute_foosql_sql($name, "delete user '$user'"); } }
database_user(name)
Some database servers impose limits on the length or allowed characters in database logins. This function should check if the given
name exceeds any such restrictions, and if so truncate or modify it to be valid. It should then return the modified version.
sub database_user { my ($name) = @_; if (length($name) > 16) { $name = substr($name, 0, 16); } return $name; } | http://www.virtualmin.com/component/option,com_openwiki/Itemid,48/id,writing_virtualmin_plugins/ | crawl-001 | refinedweb | 5,791 | 52.9 |
Ruth: A Bible Study Guide for Women. One purpose is to provide an outline of applicable Bible truths for women to use in their ladies' Bible classes. Another purpose of this material is to provide critical-thinking questions that will hopefully be a springboard for further discussion. (All quotations are from the New King James Version)
This study over Ruth contains 5 lessons. Please use and copy as you see fit. I have also provided some brotherhood websites below that should further aid your study. These are created by brethren and have a plethora of material covering any topic you would like to study further. With online resources today we as Christians have no excuse for being taken off guard by others who criticize or question our belief in the Word. Please use these along with your own Bible study. May the information that follows be used as a stepping-stone to a deeper study of the Word of God, and may you grow more in love with the Word and with our Almighty Creator.
God bless you as you study and obey His Word,
Emily H. Fisher
Lesson 1 - Introduction to the book of Ruth
I. Authorship
a. Although we cannot be certain of the author of Ruth, many scholars claim Samuel is the writer (Arthur E. Cundall and Leon Morris, Judges and Ruth: An Introduction and Commentary, 224).
b. Since Samuel died before David's coronation as king and since Solomon is not listed in Ruth's genealogy, most likely it was written by someone during David's reign as king (Ruth 4:17-22; 1 Samuel 25:1; 2 Samuel 2:4; 5:3).
II. Background
a. The book of Ruth takes place during the time period of the judges (Ruth 1:1).
b. The judges served as local rulers and military leaders in times of crisis. It is a period of chaos and lawlessness in which "everyone did what was right in his own eyes" (Judges 21:25).
c. During this time, God's Sovereign power is shown through idolatrous people.
d. Israel had been called of God to be a separate people: commanded not to enter into league with other people, not to intermarry with pagan nations, and to abhor their gods. They often failed.
In the midst of this dark time of Israel, the book of Ruth is a story of devotion, piety, and purity; it is in direct contrast to the book of Judges.
Judges Ruth
war peace
cruelty kindness
idolatry worship of true God
villainy virtue
lust love
III. Purpose/Message
a. This book has been called the most beautiful short story ever written.
b. The book of Ruth records the ancestry of David, Israel's greatest king, through whom Jesus came (Matthew 1:5-17).
c. It is the only book devoted wholly to the history of a woman.
d. It deals with a single family's problems and concerns, but this should not blind us to the theological values, especially the providence of God.
e. It is a book about God: His rule over all and His blessings on those who trust Him.
IV. Keywords
a. Kinsman (redeemer), "one who redeems" - appears 13 times. "Redeem" means "to buy back, or satisfy."
b. Rest
V. Key passages: Ruth 1:16-17; 2:12
VI. Notes to consider:
a. The question must be asked: Is it a problem that Ruth, being a Moabitess, marries an Israelite when the Israelite law forbade marrying foreigners?
b. Deuteronomy 7:1-4 prohibits Israel from marrying the people who dwelt in Canaan. There is no prohibition of marriage with a Moabite. The Moabites were descendants of Lot, the nephew of Abraham (Genesis 19:36-38; 11:27).
c. We will see in lesson 2 that Ruth was not just a foreigner. She was loyal to her Israelite mother-in-law and was clearly a convert to the Jewish religion.
d. Furthermore, Ruth is one of four women mentioned in the genealogy of Christ who were Gentiles (Tamar, Rahab, and Bathsheba being the other three - Matthew 1:3-6). This points to the fact that God would have "all men to be saved and to come unto the knowledge of the truth" (1 Timothy 2:4).
Discussion Questions:
- Read Judges: 2:11-12; 2:16-19; 21:25. Discuss the environment in Israel during the time the story of Ruth took place.
- Discuss how the book of Ruth is different from the book of Judges.
- Look up the meaning of "kinsman". What application does this word have in the book of Ruth? What does it point to universally?
- Name the Gentile women found in the line of Christ. How did these women come to be ancestors of Christ?
- Read and discuss the key passages of the book of Ruth.
- Discuss the Israelite law of marrying foreigners. Is this a problem in the book of Ruth?
Lesson 2 – From Moab to Judah: 1:1-22
I. Read the text
II. Chapter 1
a. Famines were common in Palestine with the area's uncertain rainfall.
b. The use of the word, "sojourn" shows that Elimelech planned to return to Israel.
i. Moab is east of the Dead Sea and is known for its fertility.
c. This family was from Bethlehem in Judah.
i. Bethlehem means "house of bread" and is very close to Jerusalem.
ii. This area was earlier known as Ephrath (Genesis 35:19; 48:7).
d. We are not told how long they stay in Moab or their doings there until the head of the family dies leaving his wife and two sons.
e. Due to the circumstances, it was probably necessary that the sons of Naomi marry Moabitesses.
f. "They dwelt there about ten years" (Ruth 1:4) possibly is a reference to the total time they dwelt there. Otherwise, it seems likely that children would be mentioned from the unions.
g. It seems unusual that the three males of the family would all die.
i. The reason of their death is not stated (Deuteronomy 29:29).
h. Naomi, now without husband and sons, is left alone.
i. She has no reason to stay in Moab.
ii. It seems Naomi takes the initiative to set out for home and her daughters-in-law simply follow her at first.
i. Naomi implores the two young widows to return to their homes.
i. Note that she refers to their "mother's house" (Ruth 1:8).
ii. Naomi prays that Jehovah may "deal kindly" with them.
- Naomi's use of "Yahweh" (translated "LORD", which is the personal name of the God of Israel) in the presence of her two Moabite daughters-in-law is evidence of her strong faith in the Lord.
- It would be unlikely that they remarry in Israel, so instead of sharing in Naomi's poverty, it only makes sense for them to stay in their own land to find husbands.
- In the culture of the ancient, near-Eastern world, the word "rest" refers to the security that marriage gave a woman - not freedom from work.
iii.Naomi kisses them goodbye and the three weep loudly together.
- This was the Eastern expression of grief.
j. Orpah and Ruth both refuse the suggestion, but Naomi does not want them to be a part of her uncertain life.
i. She points out that she will not have any more children.
ii. Even if she could, the two widows could not wait for the sons to grow.
iii. "It grieves me very much for your sakes" (Ruth 1:13) shows Naomi's feelings towards her daughters in the fact that they would have security and happiness in their own country.
iv. "The hand of the Lord" is an anthropomorphism (a figure of speech using human terms in reference to God) and is used commonly in the Old Testament; this shows God's activity.
v. Naomi's words bring another flood of tears.
k. Two actions of Orpah and Ruth follow. (Ruth 1:14)
i. Orpah "kissed her mother-in-law". This is a farewell kiss.
- Instead of thinking less of Orpah, we should look at her obedience to Naomi
ii. Ruth "clung to her". She had given her loyalty to Naomi and would not leave her.
- It is apparent that both of the young women loved their mother-in-law.
- It seems one wished to remain a daughter, while the other desired to become a wife again (1 Corinthians 7:8-9).
l. Chapter 1:16-17 shows Ruth's trust in God was real; her response is a classic expression of faithfulness.
i. In saying, "wherever you lodge, I will lodge", Ruth realizes she will be cut off from her own people of Moab.
ii. She will stay with Naomi until death.
iii. Ruth does not mention Chemosh, the god of the Moabites; instead she appeals to Yahweh.
iv. One thinks about what our Lord said, "So likewise, whoever of you does not forsake all that he has cannot be My disciple." (Luke 14:33)
m. Naomi is convinced by Ruth's unshakable firmness.
n. Ruth 1:19-21 does not record their journey back to Bethlehem but only their reception by the women of the town.
i. The women were out and about spreading the news of Naomi's return (the men would have been at work on the harvest).
ii. Their question, "Is this Naomi?" makes one wonder if the harsh years had altered Naomi's appearance.
iii. Naomi (pleasant) rejects her name making the plea for them to call her Mara (bitter).
- There is a word-play in the original Hebrew text and can be illustrated thusly: "Call me Mara, for the Almighty has marred me.
iv. She compares leaving Bethlehem to her return now to the town.
v. Naomi sees that she is helpless in the face of the Almighty God.
o. The narrative ends with stating that they returned at the beginning of the barley harvest, which is near the beginning of May.
i. This is why the book was read, and still is, by Jews during the Feast of Pentecost (at the time of barley harvest).
ii. It might be interesting to note that the word for "return" occurs twelve times in this chapter, emphasizing their return to the land of God's people.
Discussion Questions:
- Does knowing the meaning of the name "Bethlehem" bring any significance to the fact that Christ was born in this town? Discuss.
- What earlier events in Israel's history occurred around Bethlehem?
- How is the beginning of this story an example of the Bible's brevity? How would this observation point to the Bible's inspiration?
- For what purpose did God work in the lives of these people? Discuss God's providence today?
- What does Orpah and Ruth's willingness to leave their homeland and go with Naomi say about their character?
- Did the three women have a close relationship?
- Discuss the reactions of the two young widows.
- What does the fact that the women of the city come out to greet Naomi imply about her character?
- Discuss Naomi's response to the townswomen. Is she blaming God for her hardships?
- Look up "barley harvest" in a Bible dictionary to get a better idea of the time to which they returned.
Lesson 3 – The Kinsman: 2:1 - 23
I. Read the text
II. Chapter 2
a. We are introduced to Boaz, a kinsman of Elimelech.
i. He was only connected to Naomi through her husband; this is what made it possible for him to be the kinsman-redeemer.
ii. He is described as "a mighty man of wealth". This is sometimes associated with strength in battle (Judges 6:12; 2 Kings 5:1); and certainly Boaz could have been a warrior, but most likely this description points to his influence in the community as a powerful landowner of moral integrity.
iii. The meaning of the name "Boaz" uncertain.
B. Ruth, being the more physically able of the two, offers to go to work in the field.
i. She states that hopefully she will find favor in some good man's sight who would allow her to continue in his field (Ruth 2:2).
- Ruth may not have been aware of the law concerning widows and the harvest (Deuteronomy 24:19).
- The Great Provider made it possible for the less fortunate of Israel to be fed by giving the law in Leviticus 19:9.
- It was Ruth's right, as a widow, to glean in the corner of the field.
ii. It seems there was one field in which many planted and worked their own part of it.
- We should not picture, as we are accustomed to, several fields in which individual landowners planted.
C. After the reapers, Ruth gleans and coincidently/providentially comes to a part of the field that belongs to Boaz.
i. We should not overlook the providence of God here.
ii. The author again points to Boaz being from Elimelech's family.
iii. The hand of God is working for His purpose and we must not miss that point.
D. Ruth 2:4 shows us that Boaz was one of those who believed their religious faith should enter every other aspect of life.
i. He greets his workers with the conventional greeting, "The Lord be with you" or "Peace be with you", and they respond, "The Lord bless you."
E. Immediately, Boaz sees Ruth and inquires about the newcomer.
i. His servant recounts all to him about the Moabite.
- She is a hard worker and has taken only one short break.
ii. Boaz may have heard earlier about Naomi and Ruth's return, but now he can put a face to the foreigner.
F. Boaz speaks directly to Ruth and ensures her share in his part of the field. (Ruth 2:8-9)
i. He urges her to stay close to the reapers and glean because he has instructed them to leave her alone.
- The usual practice called for the gleaners to work far behind the reapers to avoid any problems between themselves and the owners.
- By doing this, Boaz made it possible for Ruth to gather much grain since she would be working in front of the other gleaners.
G. Ruth realized that Boaz was doing more than was required under the circumstances and shows her gratitude by bowing in humility.
i. From Ruth's question we ascertain that she must have been curious about Boaz's kindness (Ruth 2:10).
H. Boaz answers that he is impressed with her sacrifice to follow Naomi to a foreign land.
i. Similar to Abraham, Ruth went out not knowing where she went.
ii. Boaz prays that God will show kindness to Ruth as she has to Naomi. (As we will see, this prayer was answered through the man who spoke it.)
iii. Ruth's acceptance into the Israelite community, despite Deuteronomy 23:3, brings up some concern. However, we must remember that this account takes place during the judges. It was a time in which "everyone did what was right in his own eyes" (Judges 17:6; 21:25) and throughout the Bible God works through man's disobedience.
I. Ruth's reply in verse thirteen shows Boaz's speech meant a great deal to her.
i. Such kindness would not be expected since she was not even one of his maidservants.
J. Ruth 2:14-16: Boaz goes out of his way to be kind to Ruth.
i. At mealtime, he makes sure she is full.
ii. During working hours, he instructs his men again concerning her and she gleans without anyone bothering her.
K. In the evening, she beats out the grain that she gleaned and it is equal to about four gallons of barley.
i. This is a large amount of grain and probably points to Ruth's hard work and Boaz's men obeying him (Ruth 2:16).
L. Ruth 2:18-23 records Naomi's response to Ruth coming under the good favor of their kinsman, Boaz.
i. Naomi realizes that the amount of grain is more than would be expected from a day's labor.
ii. Of learning of the good graces of Boaz, Naomi praises God for His loving-kindness.
iii. Naomi instructed her to continue to work near Boaz's workers.
iv. Ruth did as she was told, and even though she was working with Boaz's servants, she remained with Naomi as promised.
Discussion Questions:
- Kinsman-redeemer is an interesting Bible subject. Look up more information about this topic. How is Boaz a type of Christ?
- Discuss the character of Ruth from what we have learned so far in the book. What aspects can we apply to our lives?
- Study Deuteronomy 23:3. Are there other possibilities that may explain Israel accepting a Moabitess into their community?
- Discuss the character of Boaz. Which characteristics are needed by men today as they lead the home and the church?
- How often do we hear greetings like Boaz's in our workplaces? Why is this?
- Why do you think Boaz showed such generosity to the Moabitess?
- Under such circumstances as Naomi (and Ruth) have endured, there are usually two reactions from people: blame God and fall away or lean on God and grow stronger in Him. Discuss which reaction is Naomi's.
- What part of Ruth's promise (Ruth 1:16, 17) has she kept so far?
Lesson 4 – Ruth and Boaz: 3:1 - 18
I. Read the text
II. Chapter 3
a. The first five verses explain a custom we know little about. Naomi's plan is for Ruth to let Boaz know she is interested in marriage.
i. An unprotected woman during this time in history suffered many hardships.
- Marriage would drastically change Ruth's, as well as Naomi's, circumstances.
ii. The plan could be carried out that night since Boaz would be at the threshing floor.
- Threshing floors were usually located on top of high places to catch the wind.
- Animals would be used to tread the sheaves to separate the grain from the husk, and then it was thrown into the air so that the wind blew the chaff away and the heavier grain would fall to the ground.
- Usually, grain was threshed during the day, but maybe there was a suitable breeze at that time. Another reason could be that the grain needed to be guarded and Boaz was on duty this particular night.
iii. Ruth's instructions involve finding Boaz at the threshing floor and laying at his uncovered feet. The rest would be up to Boaz.
- Perhaps, the uncovering of the feet was to simply wake up Boaz, but it also signified Ruth's lowliness as a petitioner.
- She was asking if he would perform the duty of the nearest kinsman.
- We have no way of knowing how common this practice was, but it seems Ruth knew nothing about it since Naomi had to explain to her what to do.
B. Ruth 3:6-13: Ruth obeys Naomi's words.
i. Harvest time was a time of feasting and enjoyment, so Boaz was full and happy when he retired for the night.
ii. We are given the impression that Boaz slept for some time before awakening to find a woman at his feet.
iii. Ruth, humbly refers to herself as his maidservant and makes her plea.
- If Boaz threw his cloak over Ruth, he would be claiming her as his wife. See Ezekiel 16:8.
iv. Boaz speaks a blessing on Ruth.
- He thinks she has shown more kindness now than when she first came, that is, in not forsaking Naomi and working.
- She has not sought a young man her age to marry, as would be expected, but has acted responsibly towards her family duty.
- He assures her that what she has done is all right, and all know that she is a virtuous woman, "of noble character". (Boaz is also described thus in 2:1) Same word is used in Proverbs 31:10, 12:4.
V. The "plot thickens" as Boaz makes known that there is a nearer kinsman than himself.
- It was a customary practice of Israel for the living nearest kin to produce offspring for the deceased man.
- Deuteronomy 25:5-10 mentions a brother only, but common sense tells us from this passage that in due order a near kinsman could raise up children for his brother-less relative.
- Boaz's, "as the Lord lives", shows his determination in the matter.
C. The Bible is not squeamish about describing sexual encounters, but the writer of Ruth does not indicate anything improper that occurs between Ruth and Boaz.
i. In fact, if read carefully, and without a 21st century idea of sexual permissiveness, it becomes clear that the writer is implying that both of them acted virtuously in a situation that could have turned out otherwise.
ii. Thus, Boaz sends her home before daybreak so that anyone she might meet would not recognize her and speak false rumors.
iii. However, he does not send his prospective bride away empty handed.
D. Naomi's question when Ruth returns seems strange unless we think about the hour of the morning (before daylight). Today, we might say, "Is it you?"
i. Ruth's answer is not recorded in detail, but the writer mentions the barley from Boaz.
- "Do not go empty-handed to your mother-in-law" is not recorded in Ruth 3:15.
- The same word for "empty" is used in Ruth 1:21 possibly pointing to the fact that Naomi's "empty" days are over.
ii. Naomi's response shows that she trusts in Boaz as a man that would see the matter through until finished.
Discussion Questions:
- How has "courting" and marriage proposals changed since this time? Can we learn some godly principles from their customs?
- Why was marriage such a desired state for a woman especially for this time and culture?
- Discuss the ways that both Ruth and Boaz act morally upright.
- Read Proverbs 31. Discuss the characteristics of this lady. How can we apply those characteristics in our lives?
- Is there some indication that Boaz had encounters with the widows outside of what is recorded here?
- What are some things we need to teach our young women (and men) about the book of Ruth? (Read Titus 2:3-5)
Lesson 5 – Redemption: 4:1 - 22
I. Read the text
II. Chapter 4
A. This is one of the few passages in God's Word and documents of man that give us insight into this legal process in the ancient world.
B. The city gate was a significant place in Palestinian cities.
i. It was where people met to discuss business transactions.
ii. It was a kind of outdoor court where judicial matters were resolved by the elders of the city and other respectable people.
iii. In a place as small as Bethlehem, the best place to find someone is to wait for them to pass by the city gate.
C. Boaz gives a friendly greeting and the fact that the man's name is not mentioned may be because he did not fulfill his role as kinsman-redeemer.
i. Boaz gathers some of the elders of the city together as witnesses and explains the situation to them.
ii. It was extremely important in Israel to keep land within the family. See Jeremiah 32:6-12.
iii. The kinsman is ready to redeem the land, however, he declines the opportunity when he finds out that he would need to marry Ruth the Moabitess, and their firstborn would legally be Mahlon's son.
- Apparently, from his excuse (Ruth 4:6) his own family would not possess the land if he had a child with Ruth.
iv. The removing of his sandal and handing it to Boaz was a custom symbolizing the transfer of land ownership.
- During this time, few written records were kept and a verbal declaration in front of witnesses was legally binding.
v. Since the gate was the center of social life, Boaz's speech (Ruth 4:9, 10) was a way of saying that the name of the deceased would live on in the community.
D. The witnesses respond to all that has happened and pronounce a blessing upon Ruth, Boaz, and their descendants.
i. For Ruth, a prayer of fruitfulness was given to be like Rachel and Leah, from whom came the twelve tribes
ii. For Boaz, an expression of hope was pronounced to be well-known in Bethlehem.
iii. For the descendants, a reference to Genesis 38:6-29 is given and may have significance since Tamar's situation was similar to Ruth's and Bethlehem (the setting of the book of Ruth) is the territory where the tribe of Judah dwelt.
E. The two are wed and God blesses them with a child.
i. Notice the writer of Ruth (as well as the people in Ruth 4:12) regard children as a blessing from God.
F. The women who had witnessed Naomi's bitter lament (Ruth 1:20, 21) now gather around her to share her happiness (Ruth 4:14, 15).
i. They praised the Lord, giving Him credit for providing a kinsman-redeemer.
ii. They praised Boaz, for he was the kinsman-redeemer.
iii. They praised the child, as a restorer or sustainer of Naomi's old age.
iv. They praised Ruth as better than seven sons (the number 7 represents completion or a perfect family – see 1 Samuel 2:5).
G. It is expected that Naomi would delight in this child after such hardships she experienced.
i. She "laid him on her bosom and became a nurse to him" (Ruth 4:16).
- Whether this means she was a wet nurse to the child or is just describing a grandmother delighting in her first grandbaby, we do not know.
ii. His name is Obed, which means "servant".
H. The book ends with a short genealogy.
i. Two people brought together by a highly unlikely series of circumstances became ancestors of the great king of Israel, David, who in turn provides an essential link in the genealogy of our Lord (Matthew 1:4-16).
I. Points to consider about the book of Ruth:
i. Boaz is a type of Christ:
- As redeemer (Ruth 2:20)
- As the lord of harvest (Ruth 2:3)
- As a dispenser of bread (Ruth 3:15)
- As a giver of rest (Ruth 3:1)
- As a man of valor (Ruth 2:1)
ii. If a human can love an outcast, redeem her, and have fellowship with her, then God certainly can do the same for all the outcasts of the world! (Romans 5:8)
iii. Circumstances neither make nor destroy believers; neither Naomi and Ruth's poverty nor Boaz's wealth turned them from God.
Discussion Questions:
- Find other passages in the Bible that show events occurring at the city gate.
- Discuss the character of Boaz as seen in this chapter.
- Read Matthew 1:5. Boaz is said to be the son of Salmon and Rahab. Do you think Boaz's connection with Rahab contributed to his acceptance of the outcast, Ruth?
- Read Genesis 38. Why might the people in Ruth point back to that incident in pronouncing their blessing upon Ruth and Boaz?
- Describe how Naomi's attitude has changed since the beginning of the book.
- Discuss the ways that this book points to (or is a shadow of) better things to come through Jesus Christ.
- Discuss the lessons we can apply in our lives from this account.. | https://thegospelofchrist.com/womens-study-aids/ruth-a-bible-study-guide-for-women | CC-MAIN-2019-26 | refinedweb | 4,604 | 81.33 |
Main feature
The primary feature of deepdish is its ability to save and load all kinds of data as HDF5. It can save any Python data structure, offering the same ease of use as pickling or numpy.save. However, it improves by also offering:
- Interoperability between languages (HDF5 is a popular standard)
- Easy to inspect the content from the command line (using h5ls or our specialized tool ddls)
- Highly compressed storage (thanks to a PyTables backend)
- Native support for scipy sparse matrices and pandas DataFrame, Series and Panel
- Ability to partially read files, even slices of arrays
An example:
import deepdish as dd d = { 'foo': np.ones((10, 20)), 'sub': { 'bar': 'a string', 'baz': 1.23, }, } dd.io.save('test.h5', d)
This can be reconstructed using dd.io.load('test.h5'), or inspected through the command line using either a standard tool:
$ h5ls test.h5 foo Dataset {10, 20} sub Group
Or, better yet, our custom tool ddls (or python -m deepdish.io.ls):
$ ddls test.h5 /foo array (10, 20) [float64] /sub dict /sub/bar 'a string' (8) [unicode] /sub/baz 1.23 [float64]
Read more at Saving and loading data. | https://pypi.org/project/deepdish/ | CC-MAIN-2017-09 | refinedweb | 193 | 62.27 |
As many of you are aware, Microsoft Access databases do not automatically shrink when data is deleted. Instead, Access files keep growing until the user asks the application to Compact and Repair the database. (You can tell Access to automatically Compact and Repair when the last user closes the database, but that doesn't solve the problem presented here.) What if you need to manage Access files programmatically? Our company's software writes data to Access files on the fly and allows the user to download them. There was just one problem — every time we wrote to the file, even if we truncated the tables first, it kept growing! My task was to find a way to programmatically Compact and Repair the database without involving the user or running Access. After some searching, I was able to locate several examples of doing this in classic ASP and/or VB6. I took those examples, migrated the code to managed C#, brought it into an ASP.NET web site, and abstracted the process into a simple utility function.
Microsoft provides a way to Compact and Repair databases programmatically through its JET framework. JET is also the means through which applications are able to read from and write to various file formats in the Microsoft Office ecosystem, including Access and Excel files. Our application uses SQL Server Integration Services to export data through the JET provider from SQL Server to an Access database, which the user then downloads through HTTP.
The code consists of one static method placed in a class that should be in the App_Code folder of an ASP.NET web site. The code could just as easily be adapted to function within a desktop application — you'd just need to remove the path mapping logic.
static
There is one critical step to take before this code will compile (don't worry, it's easy). Add a reference to your web site (or desktop application). In the Add Reference dialog, go to the COM tab. Find Microsoft Jet and Replication Objects Library (the DLL file is msjro.dll), and add it. If you're working on a web site, you'll notice that Visual Studio will automatically create a couple Interop assemblies in your BIN folder. These are used to allow your managed code to talk to the COM objects involved. The process is completely transparent — do a search for .NET COM Interop to learn more about it.
Note: Compacting a database in this way cannot necessarily be done concurrently. It is up to you to implement concurrency and thread safety as needed.
Here is the code including XML comments (you can also download it above):;
using JRO;
/// <summary>
/// Encapsulates small static utility functions.
/// </summary>
public class Utility
{
/// <summary>The connection to use to connect to
/// an Access database using JET.</summary>
public const string AccessOleDbConnectionStringFormat =
"Data Source={0};Provider=Microsoft.Jet.OLEDB.4.0;";
/// <summary>
/// Compacts an Access database using Microsoft JET COM
/// interop.
/// </summary>
/// <param name="fileName">
/// The filename of the Access database to compact. This
/// filename will be mapped to the appropriate path on the
/// web server, so use a tilde (~) to specify the web site
/// root folder. For example, "~/Downloads/Export.mdb".
/// The ASP.NET worker process must have been granted
/// permission to read and write this file, as well as to
/// create files in the folder in which this file resides.
/// In addition, Microsoft JET 4.0 or later must be
/// present on the server.
/// </param>
/// <returns>
/// True if the compact was successful. False can indicate
/// several possible problems including: unable to create
/// JET COM object, unable to find source file, unable to
/// create new compacted file, or unable to delete
/// original file.
/// </returns>
public static bool CompactJetDatabase(string fileName)
{
// I use this function as part of an AJAX page, so rather
// than throwing exceptions if errors are encountered, I
// simply return false and allow the page to handle the
// failure generically.
try
{
// Find the database on the web server
string oldFileName =
HttpContext.Current.Server.MapPath(fileName);
// JET will not compact the database in place, so we
// need to create a temporary filename to use
string newFileName =
Path.Combine(Path.GetDirectoryName(oldFileName),
Guid.NewGuid().ToString("N") + ".mdb");
// Obtain a reference to the JET engine
JetEngine engine =
(JetEngine)HttpContext.Current.Server.CreateObject(
"JRO.JetEngine");
// Compact the database (saves the compacted version to
// newFileName)
engine.CompactDatabase(
String.Format(
AccessOleDbConnectionStringFormat, oldFileName),
String.Format(
AccessOleDbConnectionStringFormat, newFileName));
// Delete the original database
File.Delete(oldFileName);
// Move (rename) the temporary compacted database to
// the original filename
File.Move(newFileName, oldFileName);
// The operation was successful
return true;
}
catch
{
// We encountered an error
return false;
}
}
}
Thanks to Roy Fine, Craig Starnes and Michael Brinkley for their earlier work on this issue in other. | http://www.codeproject.com/Articles/18081/Compact-and-Repair-an-Access-Database-Programmatic?PageFlow=FixedWidth | CC-MAIN-2015-35 | refinedweb | 788 | 55.84 |
It's no secret that I think the cult of TDD leads you down a dark and stormy path towards brittle code with a false sense of security.
Clean isolation, mocks for all things impure, interfaces designed for ease of testing over ease of use. The sense of achievement is immense. Look at all those green checkmarks 💪
And outside a few special cases with complex logic or algorithms, it means boopkis to your production code.
Most business code is JSON bureaucracy: shuttling data from one side to another. Transforming formats. Joining data streams. Abstracting knowledge domains and business processes.
You know how to write a loop and TypeScript ensures you hold the code correctly. Unit tests don't add much.
Fakes over mocks
Unit tests isolate too much.
While fantastic for complex algorithms and gnarly logic, they break down for JSON bureaucracy. They don't test the right things.
Take this test for example:
describe("getOAuthToken", () => {it("should call the db with the tokenReference successfully", async () => {const tokenReference = "tokenReference"const tableName = "oauth_token"await oauthService.getOAuthToken(tokenReference)expect(db).toBeCalledWith(tableName)expect(db().where).toBeCalledTimes(1)expect(db().where).toBeCalledWith("internal_reference", tokenReference)})})
Verifies that calling
getOAuthToken talks to the database and uses the
tokenReference to fetch a row. The database is mocked and this test is pure. Runs on your machine with no external dependencies very fast.
A beautiful example of a great unit test. 👌
Do you think it's a useful test?
How stubs and mocks fail
What happens if we change the table name? This test won't notice.
What happens if we change the table structure? This test doesn't even check.
How do you know the code does anything of value? The test pretends to check, but you can make it pass with useless code, if you want.
export async function getOAuthToken(): Promise<Token> {db("oauth_token")db("random_table").where("internal_reference", "tokenReference")return {access_token: "123",refresh_token: "afefaw",expires_in: 3600,}}
Call the correct table then call a
.where() with hardcoded params on a random table. Return an unrelated value of the right type. 💩
We're using Jest with TypeScript which ensures the return types must match. At least there's that.
You won't be mean like this in your project, I hope. It can happen by accident. When someone unfamiliar with your code makes a change that breaks it in a way that mocks obscure.
Stubs and mocks test a strawman
Stubs and mocks construct a strawman and test that, not your code. This is a lesson Google learned at scale – stubs and mocks make bad tests.
But full integration testing is slow, resource intensive, and hard to get right. The more complex your production environment, the harder it is to simulate.
To quote David Wells, an early developer of Serverless Framework: "Forget about local testing, you can't replicate AWS on your machine".
The solution are fakes. This also is a lesson Google learned at scale and put in their wonderful Software Engineering at Google book.
Fakes are lightweight implementations of the services you need. Maintained, ideally, by the team who makes that service.
pg-mem is a fake database for your tests
pg-mem is a pure in-memory implementation of Postgres. The perfect solution for database tests that sit in that sweet spot between pure unit and full integration. 😍
You get database reads and writes, guarantees around data consistency, migrations, and zero overhead. Your tests run in memory, as fast as always.
pg-mem achieves this through Olivier Guimbal's herculean effort. The madman built his own SQL parser and reimplemented an almost full-featured clone of Postgres in TypeScript.
What pg-mem looks like in practice
See the example repository for the full setup. It's based on my How to configure Jest with TypeScript from a while back when I thought this article would be "next week". 😅
For this example we're using knex, a query builder, to talk to the database and pg-mem for testing. The project doesn't do anything useful, it exists to show off testing.
Use Jest manual mocks to fake your database
Jest manual mocks are the perfect place to implement a fake. You build a lightweight implementation of your code and Jest uses it in tests.
For the database example, you take this db connection file:
// ./src/db.tsimport knex from "knex"import knexFile from "../knexfile"export default knex(knexFile)
And add its fake counterpart in a sibling
__mocks__/ directory:
// ./src/__mocks__/db.tsimport { newDb } from "pg-mem"import knexFile from "../../knexfile"const mem = newDb()export default mem.adapters.createKnex(0, knexFile) as typeof import("knex")
The original
db.ts file configures knex to connect to your database. You can
import db and run SQL queries with
db('table_name') throughout your codebase.
The
__mocks__/ version uses the same config, but instantiates an in-memory pg-mem instance and connects to that. Unless you're doing something special, the rest of your code should Just Work. It has no idea the database is fake 🤘
Setup the pg-mem database
Your fake database needs to be in the right state to run tests. Have the tables, the seed data, etc.
That happens in
jest.setup.ts, a test setup file that runs before your tests. Configured by the
setupFilesAfterEnv value in
jest.config.ts.
// jest.setup.tsimport db from "./src/db"// enables the fake database for all test filesjest.mock("./src/db")// run migrationsbeforeAll(async () => {await db.migrate.latest()})// close connectionafterAll(async () => {await db.destroy()})
We import the faked
db file, enable mocking for every test, and run migrations before anything else. After tests are done, we close the connection so Jest doesn't hang.
Write better tests with a fake database
Take that
getOAuthToken example from before. Here's the function itself:
export type OAuthToken = {access_token: stringrefresh_token: stringexpires_in: numberinternal_reference: string}export type OAuthTokenRow = OAuthToken & {internal_reference: string}export const getOAuthToken = async (tokenReference: string): Promise<OAuthToken> => {const tokenRow: OAuthTokenRow = await db("oauth_token").where("internal_reference", tokenReference).first()return omit(tokenRow, "internal_reference")}
Talks to the database and returns the first row of the
oauth_token table that matches the
tokenReference. Omits
internal_reference before returning the value because that's an implementation detail. I think. We could debate on that.
How would you write a test for this function?
Here's what I did:
// ./src/__tests__/oauth-service.tsdescribe("oauth-service", () => {let token: oauthService.OAuthToken, tokenReference: stringbeforeEach(() => {tokenReference = Faker.datatype.uuid()token = {access_token: Faker.datatype.string(20),refresh_token: Faker.datatype.string(20),expires_in: 3600,}})describe("getOAuthToken", () => {it("should read the oauth token from db", async () => {await db("oauth_token").insert({...token,internal_reference: tokenReference,})const oauthToken = await oauthService.getOAuthToken(tokenReference)expect(oauthToken).toEqual(token)})})})
Construct a fake OAuth token, insert into the database, use the
getOAuthToken function. Compare that the result matches expectations.
Now you can rely on this test to tell you if something's wrong.
Table got renamed? You get a SQL error.
Table structure changed? The result no longer matches unless you fix the function.
Made a typo in your query or aren't using the params? The test will tell you.
You get better insert tests too
The insertion counterpart to
getOAuthToken looks like this:
export const insertOAuthToken = async (tokenReference: string,token: OAuthToken) => {return db("oauth_token").insert({...token,internal_reference: tokenReference,})}
Gets an
OAuthToken and its internal reference, saves to the database.
How would you write this test?
Here's what I did:
describe("insertOAuthToken", () => {it("should insert a token", async () => {const [{ count: prevCount }] = await db("oauth_token").count()await oauthService.insertOAuthToken(tokenReference, token)const [{ count: afterCount }] = await db("oauth_token").count()const newToken = await db("oauth_token").where("internal_reference", tokenReference).first()expect(afterCount).toEqual((prevCount as number) + 1)expect(omit(newToken, "internal_reference")).toEqual(token)})})
Compares row count before and after insertion, verifies it increments. Fetches the inserted value and verifies it matches expectations.
Another great test would be to verify what happens, if you try to use a non-unique
tokenReference and insert twice. The code currently has a bug I think.
Principles of good DB testing with pg-mem
- Test the data gets inserted
- Test the right data gets inserted
- Test the inserted data gets returned
- Avoid hardcoded values with Faker
Now all I'm missing from Rails is something like Factory Bot for convenient test models and data factories. The search continues
Cheers,
~Swizec
PS: you can use the
__mocks__ approach to write a lightweight fake of any file. Like a 3rd party SDK, a fake microservice, or an API ️ | https://swizec.com/blog/pg-mem-and-jest-for-smooth-integration-testing/ | CC-MAIN-2022-27 | refinedweb | 1,408 | 51.55 |
Quick Start Guide¶
The first thing you’ll need to do to get started is install ChatterBot.
pip install chatterbot
See Installation for options for alternative installation methods.
Create a new chat bot¶
from chatterbot import ChatBot chatbot = ChatBot("Ron Obvious")
Note
The only required parameter for the ChatBot is a name. This can be anything you want.
Training your ChatBot¶
After creating a new ChatterBot instance it is also possible to train the bot. Training is a good way to ensure that the bot starts off with knowledge about specific responses. The current training method takes a list of statements that represent a conversation. Additional notes on training can be found in the Training documentation.
Note
Training is not required but it is recommended.
from chatterbot.trainers import ListTrainer conversation = [ "Hello", "Hi there!", "How are you doing?", "I'm doing great.", "That is good to hear", "Thank you.", "You're welcome." ] trainer = ListTrainer(chatbot) trainer.train(conversation) | https://chatterbot.readthedocs.io/en/stable/quickstart.html | CC-MAIN-2021-25 | refinedweb | 158 | 52.15 |
SequenceMatcher in Python
The topic of this tutorial: SequenceMatcher in Python using difflib.
introduction:
String is an interesting topic in programming. We use so many methods and build-in functions to program strings. SequenceMatcher class is one of them. With the help of SequenceMatcher we can compare the similarity of two strings by their ratio. For this, we use a module named “difflib”. By this, we import the sequence matcher class and put both of the strings into it.
We can use only two strings where it compares string two with string 1 and shows the ration of how string two is similar to string one. It is a better idea to compare two strings with a few lines of code. The idea behind this is to find the longest matching subsequence which should be continued and compare it with full string and then get the ration as output.
#import the class from difflib import SequenceMatcher s1 = "gun" s2 = "run" sequence = SequenceMatcher(a=s1 , b=s2) #comparing both the strings print(sequence.ratio())
output:0.6666666666666666
This “difflib” class also provides some extra features. But two features are mostly used for programs first one is get_close_matches and differ.
With get_close_matches we compare a particular list of string elements with a given string and find out those strings who are close to the given cutoff. The below code will explain this very well.
from difflib import SequenceMatcher , get_close_matches s1 = "abcdefg" list_one = ["abcdefghi" , "abcdef" , "htyudjh" , "abcxyzg"] match = get_close_matches(s1,list_one , n=2 , cutoff=0.6) print(match)
output:
['abcdef' , 'abcdefghi']
In the get_close_matches class I am defining four things:
s1 : Takes the string s1
list1: Takes the list list1
n : How many strings I want in my output it can be any number but should be less than total elements in the list.
cutoff: Defining how much ratio I want between them.
There is another important method we use of this module named differ.
Differ compare two texts which contain some sentences and give common sentences in output. Let me explain in the code.
from difflib import Differ text1 = ''' hello world! i like python and code in it.'''.splitlines() text2 = ''' hello world! i like java and coding'''.splitlines() dif = Differ() df = list(dif.compare(text1 , text2)) from pprint import pprint pprint(df)
output:
[' ', ' hello world!', '- i like python and code in it.', '+ i like java and coding']
Here in output, We can see “hello world!” is common in both the strings, So it is printing only one time. But the rest of the content is different so it is printing separately.
SequenceMatcher class is mostly used for comparing two string. Which comes in many programming challenges. Even it reduces the time complexity and makes the code more efficient.
Also read: | https://www.codespeedy.com/sequencematcher-in-python/ | CC-MAIN-2022-27 | refinedweb | 458 | 66.84 |
dpath 1.4.2
Filesystem-like pathing and searching for dictionaries
dpath-python
A python library for accessing and searching dictionaries via /slashed/paths ala xpath
Basically it lets you glob over a dictionary as if it were a filesystem. It allows you to specify globs (ala the bash eglob syntax, through some advanced fnmatch.fnmatch magic) to access dictionary elements, and provides some facility for filtering those results.
sdists are available on pypi:
Installing
The best way to install dpath is via easy_install or pip.
easy_install dpath pip install dpath
Using Dpath
import dpath.util
Separators
All of the functions in this library (except ‘merge’) accept a ‘separator’ argument, which is the character that should separate path components. The default is ‘/’, but you can set it to whatever you want.
Searching
Suppose we have a dictionary like this:
x = { "a": { "b": { "3": 2, "43": 30, "c": [], "d": ['red', 'buggy', 'bumpers'], } } }
… And we want to ask a simple question, like “Get me the value of the key ‘43’ in the ‘b’ hash which is in the ‘a’ hash”. That’s easy.
>>> help(dpath.util.get) Help on function get in module dpath.util: get(obj, glob, separator='/') Given an object which contains only one possible match for the given glob, return the value for the leaf matching the given glob. If more than one leaf matches the glob, ValueError is raised. If the glob is not found, KeyError is raised. >>> dpath.util.get(x, '/a/b/43') 30
Or you could say “Give me a new dictionary with the values of all elements in x['a']['b'] where the key is equal to the glob '[cd]'. Okay.
>>> help(dpath.util.search) Help on function search in module dpath.util: search(obj, glob, yielded=False) Given a path glob, return a dictionary containing all keys that matched the given glob. If 'yielded' is true, then a dictionary will not be returned. Instead tuples will be yielded in the form of (path, value) for every element in the document that matched the glob.
… Sounds easy!
>>> result = dpath.util.search(x, "a/b/[cd]") >>> print json.dumps(result, indent=4, sort_keys=True) { "a": { "b": { "c": [], "d": [ "red", "buggy", "bumpers" ] } } }
… Wow that was easy. What if I want to iterate over the results, and not get a merged view?
>>> for x in dpath.util.search(x, "a/b/[cd]", yielded=True): print x ... ('a/b/c', []) ('a/b/d', ['red', 'buggy', 'bumpers'])
… Or what if I want to just get all the values back for the glob? I don’t care about the paths they were found at:
>>> help(dpath.util.values) Help on function values in module dpath.util: values(obj, glob, separator='/', afilter=None, dirs=True) Given an object and a path glob, return an array of all values which match the glob. The arguments to this function are identical to those of search(), and it is primarily a shorthand for a list comprehension over a yielded search call. >>> dpath.util.values(x, '/a/b/d/\*') ['red', 'buggy', 'bumpers']
Example: Setting existing keys
Let’s use that same dictionary, and set keys like ‘a/b/[cd]’ to the value ‘Waffles’.
>>> help(dpath.util.set) Help on function set in module dpath.util: set(obj, glob, value) Given a path glob, set all existing elements in the document to the given value. Returns the number of elements changed. >>> dpath.util.set(x, 'a/b/[cd]', 'Waffles') 2 >>> print json.dumps(x, indent=4, sort_keys=True) { "a": { "b": { "3": 2, "43": 30, "c": "Waffles", "d": "Waffles" } } }
Example: Adding new keys
Let’s make a new key with the path ‘a/b/e/f/g’, set it to “Roffle”. This behaves like ‘mkdir -p’ in that it makes all the intermediate paths necessary to get to the terminus.
>>> help(dpath.util.new) Help on function new in module dpath.util: new(obj, path, value) Set the element at the terminus of path to value, and create it if it does not exist (as opposed to 'set' that can only change existing keys). path will NOT be treated like a glob. If it has globbing characters in it, they will become part of the resulting keys >>> dpath.util.new(x, 'a/b/e/f/g', "Roffle") >>> print json.dumps(x, indent=4, sort_keys=True) { "a": { "b": { "3": 2, "43": 30, "c": "Waffles", "d": "Waffles", "e": { "f": { "g": "Roffle" } } } } }
This works the way we expect with lists, as well. If you have a list object and set index 10 of that list object, it will grow the list object with None entries in order to make it big enough:
>>> dpath.util.new(x, 'a/b/e/f/h', []) >>> dpath.util.new(x, 'a/b/e/f/h/13', 'Wow this is a big array, it sure is lonely in here by myself') >>> print json.dumps(x, indent=4, sort_keys=True) { "a": { "b": { "3": 2, "43": 30, "c": "Waffles", "d": "Waffles", "e": { "f": { "g": "Roffle", "h": [ null, null, null, null, null, null, null, null, null, null, null, null, null, "Wow this is a big array, it sure is lonely in here by myself" ] } } } } }
Handy!
Example: Merging
Also, check out dpath.util.merge. The python dict update() method is great and all but doesn’t handle merging dictionaries deeply. This one does.
>>> help(dpath.util.merge) Help on function merge in module dpath.util: merge(dst, src, afilter=None, flags=4, _path='') Merge source into destination. Like dict.update() but performs deep merging. flags is an OR'ed combination of MERGE_ADDITIVE, MERGE_REPLACE, or MERGE_TYPESAFE. * MERGE_ADDITIVE : List objects are combined onto one long list (NOT a set). This is the default flag. * MERGE_REPLACE : Instead of combining list objects, when 2 list objects are at an equal depth of merge, replace the destination with the source. * MERGE_TYPESAFE : When 2 keys at equal levels are of different types, raise a TypeError exception. By default, the source replaces the destination in this situation. >>> y = {'a': {'b': { 'e': {'f': {'h': [None, 0, 1, None, 13, 14]}}}, 'c': 'RoffleWaffles'}} >>> print json.dumps(y, indent=4, sort_keys=True) { "a": { "b": { "e": { "f": { "h": [ null, 0, 1, null, 13, 14 ] } } }, "c": "RoffleWaffles" } } >>> dpath.util.merge(x, y) >>> print json.dumps(x, indent=4, sort_keys=True) { "a": { "b": { "3": 2, "43": 30, "c": "Waffles", "d": "Waffles", "e": { "f": { "g": "Roffle", "h": [ null, 0, 1, null, 13, 14, null, null, null, null, null, null, null, "Wow this is a big array, it sure is lonely in here by myself" ] } } }, "c": "RoffleWaffles" } }
Now that’s handy. You shouldn’t try to use this as a replacement for the deepcopy method, however - while merge does create new dict and list objects inside the target, the terminus objects (strings and ints) are not copied, they are just re-referenced in the merged object.
Filtering
All of the methods in this library (except new()) support a ‘afilter’ argument. This can be set to a function that will return True or False to say ‘yes include that value in my result set’ or ‘no don’t include it’.
Filtering functions receive every terminus node in a search - e.g., anything that is not a dict or a list, at the very end of the path. For each value, they return True to include that value in the result set, or False to exclude it.
Consider this example. Given the source dictionary, we want to find ALL keys inside it, but we only really want the ones that contain “ffle” in them:
>>> print json.dumps(x, indent=4, sort_keys=True) { "a": { "b": { "3": 2, "43": 30, "c": "Waffles", "d": "Waffles", "e": { "f": { "g": "Roffle" } } } } } >>> def afilter(x): ... if "ffle" in str(x): ... return True ... return False ... >>> result = dpath.util.search(x, '**', afilter=afilter) >>> print json.dumps(result, indent=4, sort_keys=True) { "a": { "b": { "c": "Waffles", "d": "Waffles", "e": { "f": { "g": "Roffle" } } } } }
Obviously filtering functions can perform more advanced tests (regular expressions, etc etc).
Key Names
By default, dpath only understands dictionary keys that are integers or strings. String keys must be non-empty. You can change this behavior by setting a library-wide dpath option:
import dpath.options dpath.options.ALLOW_EMPTY_STRING_KEYS = True
Again, by default, this behavior is OFF, and empty string keys will result in dpath.exceptions.InvalidKeyName being thrown.
Separator got you down? Use lists as paths
The default behavior in dpath is to assume that the path given is a string, which must be tokenized by splitting at the separator to yield a distinct set of path components against which dictionary keys can be individually glob tested. However, this presents a problem when you want to use paths that have a separator in their name; the tokenizer cannot properly understand what you mean by ‘/a/b/c’ if it is possible for ‘/’ to exist as a valid character in a key name.
To get around this, you can sidestep the whole “filesystem path” style, and abandon the separator entirely, by using lists as paths. All of the methods in dpath.util.* support the use of a list instead of a string as a path. So for example:
dpath.path : The Undocumented Backend
dpath.util is where you want to spend your time: this library has the friendly functions that will understand simple string globs, afilter functions, etc.
dpath.path is the backend pathing library - it is currently undocumented, and not meant to be used directly! It passes around lists of path components instead of string globs, and just generally does things in a way that you (as a frontend user) might not expect. Stay out of it. You have been warned!
Contributors
We would like to thank the community for their interest and involvement. You have all made this project significantly better than the sum of its parts, and your continued feedback makes it better every day. Thank you so much!
The following authors have contributed to this project, in varying capacities:
- Caleb Case <calebcase@gmail.com>
- Andrew Kesterson <andrew@aklabs.net>
- Marc Abramowitz <marc@marc-abramowitz.com>
- xhh2a <xhh2a@berkeley.edu>
- Stanislav Ochotnicky <sochotnicky@redhat.com>
- Misja Hoebe <misja@conversify.com>
- Gagandeep Singh <gagandeep.2020@gmail.com>
- Alan Gibson <alan.gibson@gmail.com>
- Author: Caleb Case, Andrew Kesterson
- License: MIT
- Categories
- Development Status :: 5 - Production/Stable
- Environment :: Console
- Intended Audience :: Developers
- License :: OSI Approved :: MIT License
- Natural Language :: English
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: akesterson
- DOAP record: dpath-1.4.2.xml | https://pypi.python.org/pypi/dpath/1.4.2 | CC-MAIN-2018-05 | refinedweb | 1,744 | 64.71 |
The.
Generally the display driver supports scrolling in hardware, but I haven't found a solution to scroll text with the driver in landscape orientation. So I programmed a simple text buffer and corresponding drawing functions to display the buffer on the display.
To use the text buffer display, it only needs to be included in the project:
#include "tb_display.h"
Then it can be initialized with the desired orientation:
Display orientationDisplay orientation
// init the text buffer display and print welcome text on the display
tb_display_init(screen_orientation);
The value for the orientation can be seen in this image:
Two functions are available to output either single characters or a whole string. Here is an example for the output of a single character received via the serial interface:
// check for serial input and print the received characters
while(Serial.available() > 0){
char data = Serial.read();
tb_display_print_char(data);
Serial.write(data);
}
And here for the output of a string:
Update #1:Update #1:
tb_display_print_String(" M5StickC\n\n Textbuffer Display\n\n");
Version 1.1 allows an additional (optional) parameter for the print string function. The optional parameter "chr_delay" allows a "character by character" processing of the String. Then, it looks like Teletype or Typewriter. The delay is in milliseconds. Thanks to LeRoy Miller for the idea.
// with 85ms Character delay, the display looks more
// like Teletype or a typewriter
tb_display_print_String("The quick brown fox jumps over the lazy dog and was surprised that he used all letters of the alphabet.", 85);
QWERTY keyboard HAT
Together with the keyboard HAT you have a simple way to enter and display texts or values.
Nao (n602) constructed a very nice frame for the keyboard HAT. With this frame the keyboard can be used much better and is optically enhanced. Check it out on thingiverse.
These can be, for example, access data to an access point, or login data. Or notes that come to mind at the breakfast table.Feedback
I hope this code can prove to be useful to the wider community. Feel free to message me here if you have questions or comments.
Enjoy!
Regards,
Hans-Günther | https://www.hackster.io/hague/m5stickc-textbuffer-scrolling-display-fb6428 | CC-MAIN-2022-33 | refinedweb | 353 | 56.05 |
Automatic Type Detection in C++ 11
Some of these new features are :
a) Automatic Type Detection
b) nullptr
c) Lambda Expressions
d) Uniform Initialization, etc.
Here, we shall discuss “Automatic Type Detection”.
The auto keyword
In C++ 11, the auto keyword was assigned a new meaning. It is no longer used as a storage class specifier and is now used to declare variables without specifying their data type. It can also be used with user-defined data types.
For example,
Earlier, we used to write :
int x = 79; double y = 24.5; char z = 'c';
Now we can simply write :
auto x = 79; auto y = 24.5; auto z = 'c';
#include <iostream> using namespace std; class mynewclass { int x; char y; double z; public: mynewclass(int a, char b, double c); void disp(); }; mynewclass::mynewclass(int a, char b, double c) { x = a; y = b; z = c; } void mynewclass::disp() { cout << x << "n"; cout << y << "n"; cout << z << "n"; } int main() { mynewclass m = mynewclass(49, 'a', 9.8); mynewclass n = mynewclass(68, 'd', 1.7); mynewclass o = mynewclass(57, 'b', 2.3); mynewclass p = mynewclass(34, 'e', 4.7); mynewclass q = mynewclass(12, 'h', 9.3); mynewclass r = mynewclass(95, 'i', 6.2); m.disp(); n.disp(); o.disp(); p.disp(); q.disp(); r.disp(); return 0; }
auto m = mynewclass(49, 'a', 9.8); auto n = mynewclass(68, 'd', 1.7); auto o = mynewclass(57, 'b', 2.3); auto p = mynewclass(34, 'e', 4.7); auto q = mynewclass(12, 'h', 9.3); auto r = mynewclass(95, 'i', 6.2);
The decltype operator
typedef decltype (m) wow; wow s = mynewclass(13, 'f', 34.2);
So, now it is clear that the new meaning assigned to the keyword auto has made it much more useful. Also, the new operator named decltype is a very useful addition to C++.
The next article discusses Lambda Expressions in C++ 11.
mynewclass m = mynewclass(49, 'a', 9.8);
and
mynewclass m(49, 'a', 9.8);
are just two different ways to initialize an instance of mynewclass. There is nothing wrong with the first method. On the contrary, the first method is the normal form and the second method is its abbreviated form.
In fact, you can see the following example from page no:- 227 of Bjarne Stroustrup's book "The C++ Programming Language":-
Date today = Date (23, 6, 1983);
Date xmas (25, 12 ,1990); // abbreviated form
As you can see in the excerpt above, the first method utilizes the method used in this article, while the method suggested by you is at the second place and has the comment "abbreviated form".
It means that Bjarne Stroustrup, the creator of C++, himself suggests the first method for initialization of an instance of a class.
I suppose you've understood that.
Concerning the "auto" keyword, the aim of this article is to provide a description of what the "auto" keyword is about. To use or to ignore it, depends upon individual programmers.
This is a really bad way to initialize an instance of a class:
mynewclass m = mynewclass(49, 'a', 9.8);
That will usually create a temporary mynewclass and then use the copy constructor to initialize m, and destroy the temporary.
It should be:
mynewclass m(49, 'a', 9.8);
IMO, 'auto' should only be used where type information isn't useful to the reader.
C++ 11? C++ 14? Why don't they create a new language altogether instead of all these modifications? Before I manage to get used to one version, another pops up. LOL
Thanks in support of sharing such a fastidious
thought, piece of writing is fastidious, thats why i
have read it fully
Very nice post. I just stumbled upon your weblog and wished
to say that I have really enjoyed surfing around your blog posts.
After all I will be subscribing to your rss feed and I
hope you write again soon!
Happy to Help 🙂
If some one wants expert view concerning blogging and site-building after that
i recommend him/her to pay a visit this website, Keep up the pleasant work. | https://www.wincodebits.in/2015/07/automatic-type-detection-in-c-11.html | CC-MAIN-2018-34 | refinedweb | 681 | 73.58 |
Some of these options may be deprecated or provide altered functionality in Zenoss 3.0. See the chapter titled "ZenPack Conversion Tasks for 3.0" for additional compatibility information.
This section will show how to add a new tab in Zenoss (prior to Version 3.0) or add a selection to the
(Action menu) (Version 3.0 or later), or modify an existing one by means of a ZenPack or zendmd.
A tab in Zenoss is an object property that resides within the following structure:
factory_type_information = ( { 'immediate_view' : 'deviceOrganizerStatus', 'actions' : ( { 'id' : 'status' , 'name' : 'Status' , 'action' : 'deviceOrganizerStatus' , 'permissions' : (permissions.view,) }, ) }, )
For example, tabs in the
Locations screen are created from the Python class definition
Location(DeviceOrganizer, ZenPackable)
which resides in the module
Location.py in the
$ZENPATH/Products/ZenModel directory.
Zenoss works with class instances which are created runtime by Zope. These objects are packed within database which is called ZODB. If you want to modify some object properties you should connect to ZODB and get the object first, modify it and save your changes.
The following example shows the procedure for adding a new tab to Locations screen. This code is executed from
__init__.py of an example ZenPack.
import Globals import transaction import os.path skinsDir = os.path.join(os.path.dirname(__file__), 'skins') from Products.CMFCore.DirectoryView import registerDirectory if os.path.isdir(skinsDir): registerDirectory(skinsDir, globals()) from AccessControl import Permissions as permissions from Products.ZenModel.ZenPack import ZenPackBase from Products.ZenUtils.Utils import zenPath from Products.ZenModel.ZenossSecurity import * from Products.ZenUtils.ZenScriptBase import ZenScriptBase class ZenPack(ZenPackBase): olMapTab = { 'id' : 'olgeomaptab' , 'name' : 'OpenLayers Map' , 'action' : 'OLGeoMapTab' , 'permissions' : (permissions.view,) } def _registerOLMapTab(self, app): # Register new tab in locations dmdloc = self.getDmdRoot('Locations') finfo = dmdloc.factory_type_information actions = list(finfo[0]['actions']) for i in range(len(actions)): if(self.olMapTab['id'] in actions[i].values()): return actions.append(self.olMapTab) finfo[0]['actions'] = tuple(actions) dmdloc.factory_type_information = finfo transaction.commit() def _unregisterOLMapTab(self, app): dmdloc = self.getDmdRoot('Locations') finfo = dmdloc.factory_type_information actions = list(finfo[0]['actions']) for i in range(len(actions)): if(self.olMapTab['id'] in actions[i].values()): actions.remove(actions[i]) finfo[0]['actions'] = tuple(actions) dmdloc.factory_type_information = finfo transaction.commit() def install(self, app): ZenPackBase.install(self, app) self._registerOLMapTab(app) def upgrade(self, app): ZenPackBase.upgrade(self, app) self._registerOLMapTab(app) def remove(self, app, junk): ZenPackBase.remove(self, app, junk) zpm = app.zport.ZenPortletManager self._unregisterOLMapTab(app)
The class method
_registerOLMapTab(self, app) registers the modified property of object
Locations , which resides in
/zport/dmd/Locations in the ZODB.
The function
getDmdRoot('Locations') returns the class instance of class
Location which is in ZopeDB. Next we get the dictionary of its factory_type_information property. Modify this, so that a new dictionary defining the tab is appended to it. The tab structure is defined in olMapTab dictionary. The id field is the identification name of this tab. You can put any string here. The name field is the string that is shown on your new tab, action points to the template that is executed when you click on the tab and should be accessible in Zope. The permissions field is default permissions for zenoss user to execute the template this tab points to. This line
dmdloc.factory_type_information = finfo is very important because Zope won't detect any change to the persistent object and
transaction.commit() won't save any modifications to the object. The rule here is that
commit() saves only modifications of object that executes its
setattr() method.
Of course every step shown above can be done manually within the zendmd prompt. The following session shows adding new tab to
Locations in zendmd:
zenoss@db-server:/home/geonick$ zendmd Welcome to zenoss dmd command shell! use zhelp() to list commands >>> from AccessControl import Permissions as permissions >>> locobj = dmd.getDmdRoot('Locations') >>> locobj <Location at /zport/dmd/Locations> >>> finfo = locobj.factory_type_information >>> finfo ({',)})},) >>> actions = list(finfo[0]['actions']) >>> olMapTab = {'id': 'olgeomaptab', 'name': 'OpenLayers Map', 'action': 'OLGeoMapTab','permissions': (permissions.view,)} >>> for i in range(len(actions)): ... if(olMapTab['id'] in actions[i].values()): ... break ... >>> actions.append(olMapTab) >>> finfo[0]['actions'] = tuple(actions) >>> locobj.factory_type_information = finfo >>> locobj.factory_type_information ({',)}, {'action': 'OLGeoMapTab', 'permissions': ('View',), 'id': 'olgeomaptab', 'name': 'OpenLayers Map'})},) >>>commit()
After
commit() the new tab should be in Locations. Don't forget to provide the template file.
(Submitted by Nikolai Georgiev)
Note
This procedure is valid for pre-3.0 versions only.
The dialog container exists on every page in Zenoss; it's a DIV element with the id attribute of dialog. Loading a dialog performs two actions:
Fetching (via an XHR) HTML to display inside the dialog container
Showing the dialog container. These can be accomplished by calling the
show()method on the dialog container, passing the event and an URL that will return the contents:
$('dialog').show(this.event, 'dialog_MyDialog')
The dialog can then be hidden with, predictably, $('dialog').hide(). Since dialogs are almost always loaded via clicking on a menu item, menu items whose isdialog attribute is
Truewill generate the JavaScript to show the dialog automatically. See the Section 6.3, “Adding a New Menu or Menu Item” section of this guide for more information.'
As for the dialog box contents themselves, any valid HTML will do, but certain conventions exist. Dialogs should have a header:
<h2>Perform Action</h2>
Dialogs should also provide a cancel button:
<input id="dialog_cancel" type="button" value="Cancel" onclick="$('dialog').hide()"/>
The main wrinkle with dialogs occurs in the area of form submission. Some dialogs are self-contained, and can carry their own form that is created and submitted just like any other form. Other dialogs, however, submit forms that exist elsewhere on the page -- for example, dialogs that perform actions against multiple rows checked in a table. These dialogs may use the submit_form method on the dialog container, which submits the form surrounding the menu item that caused the dialog to be loaded to the url passed in to the method. Thus for a table surrounded by a <form> and containing several checkboxes, dialogs loaded by menu items in the table's menu may submit the table's form to a url by providing a button:
<input type="submit" name="doAction:method" value="Do It" tal:
See the section on Section 1.3, “Zope 2, ZPT and TAL” for more information about tal:attributes and the
${here/absolute_url_path} syntax.
Finally, dialogs that create objects should validate the desired id before submitting. A method on the dialog container called
submit_form_and_check(), which accepts the same parameters as
submit_form() (URL), will do this. It requires:
A text box with the id 'new_id', the value of which will be checked
A hidden input field with the id checkValidIdPath, with a value containing the path in which the id should be valid (for example, creating a device under
/zport/dmd/Deviceswill require checking that no other devices in
/zport/dmd/Deviceshas the same id, so the value of checkValidIdPath should be "
/zport/dmd/Devices".
here/getPrimaryUrlPathworks well for most cases).
An element with the id errmsg into which the error message from the validation method, if any, will be put
For example, a generic object creation dialog:
<h2>Create Object</h2> <span id="errmsg" style="color:red;"></span> <br/> <span>ID: </span> <input id="new_id" name="id"/> <input type="hidden" id="checkValidIdPath" tal: <br/> <input tal: <input id="dialog_cancel" type="button" value="Cancel" onclick="$('dialog').hide()"/>
These examples will cover most cases; generally, a good idea is to look at other dialog templates that contain similar elements or perform similar actions.
Note
This procedure is valid for pre-3.0 versions only.
Classes that inherit from the ZenMenuable mixin have a method called getMenus, which traverses up the object's path aggregating
ZenMenuItem objects owned by its ancestors. These objects comprise an action to be executed, a human-readable description, and various attributes restricting the objects to which the item is applicable.
For example, imagine basic menus exist on
dmd and
dmd.Devices:
dmd More (menu) See more... (menu item) Do more... Manage Manage object... dmd.Devices More See more... Do less...
A call to
dmd.Devices.getMenus() will return:
More See more... (from dmd.Devices) Do more... (from dmd) Do less... (from dmd.Devices) Manage Manage object... (from dmd)
As you can see, menu items inherit their ancestors' unless they define their own, which override when their ancestors' conflict.
In theory, all
ZenMenuables (which includes nearly all objects in Zenoss) may own menu items; in practice, all but a few menus live on /zport/dmd.
Adding a new menu item is fairly straightforward. Because menu items are persistent objects, modifications must happen in a migrate script (or be included as XML in a ZenPack). The method
ZenMenuable.buildMenus() accepts a dictionary of menus, each of which is a list of dictionaries representing the attributes of menu items. Instructions on writing migrate scripts can be found elsewhere in this guide.
Find the id of the menu to which you wish to add items. The simplest way to do this is to locate the menu_ids definition on the page template that renders the menu. Tables will have a single menu id. The page menu may have several, which will be rendered as sub-menus. The TopLevel menu is a special case; it appears in the page menu, but its items are rendered as siblings of the other menus.
If activating the menu item will require a dialog, create one. See the Section 6.2, “Adding a Dialog” section of this guide for more info.
Determine the objects for which the menu item should be visible. Menu items will use several criteria for determining whether to apply:
allowed_classes: A list of strings of class names for which the menu item should be rendered.
banned_classes: A list of strings of class names for which the menu item should not be rendered.
banned_ids: A list of strings of object ids for which the menu item should not be rendered.
isglobal: Whether the menu item should be inherited by children of the menu item's owner.
permissions: The permissions the current user must have for the context in order for the item to render.
Figure out the action the menu item will perform. If it's a dialog, then the action is the name of the dialog template, and the isdialog attribute of the menu item should be
True. If it's a regular link, the action should be the URL or "javascript:" you would normally have as the href attribute of an anchor.
Now build the dictionary. It should look like this, where MenuId is the menu from step 1:
menus = { 'MenuId': [ { 'id': 'myUniqueId', 'description': 'Perform My Action...', 'action': 'dialog_myAction', 'isdialog': True, 'allowed_classes': ('MyGoodClass',), 'banned_classes': ('MyBadClass',), 'banned_ids': ('Devices',), 'ordering': 50.0, 'permissions': (ZenossSecurity.ZEN_COMMON,) }, ]}
'ordering' is a float determining the item's relative position in the menu. Greater numbers mean the item will be placed higher. Also notice that it's almost certainly pointless to set both allowed_classes and banned_classes; it was done here only as an example. The permission
ZEN_COMMONis a standard Zenoss permission -- see the new permissions section of this guide for more information.
If you have more menu items in the same menu, you can add them to that list; if you have more menus, you can create more keys in the menus dictionary.
Finally, use the
dmd.buildMenus()method to create the
MenuItems:
dmd.buildMenus(menus)
Note
This procedure is valid for 2.x versions and for re-skinned pages in version 3.x. See the chapter titled "ZenPack Conversion Tasks for 3.0" for additional compatibility information.
ZenTableManager is a Zope product that helps manage and display large sets of tabular data. It allows for column sorting, breaking down the set into pages, and filtering of elements in the table.
Here's a sample of a table listing all devices under the current object along with their IPs. First we set up the form that will deal with our navigation form elements:
... <form method="post" tal: <script type="text/javascript" src="/zport/portal_skins/zenmodel/submitViaEnter.js"></script>
Next, we set up our table, defining the objects we want to list (in this case, here/devices/getSubDevicesGen). We then pass those objects, along with a unique tableName, to ZenTableManager, which will return a batch of those objects of the right size (for paging purposes):
<table class="zentable" tal:
Next, a table header and a couple of hidden fields:
<tr> <th class="tabletitle" colspan="2"> <!--Colspan will of course change with the number of fields you show--> My Devices </th> </tr> <input type='hidden' name='tableName' tal: <input type='hidden' name='zenScreenName' tal:
Now we add the rows that describe our devices. First we need to set up the column headers so that they'll be clickable for sorting. For that, we use
ZenTableManager.getTableHeader(tableName, fieldName, fieldTitle, sortRule="cmp").
<tbody> <tr> <!--We want to sort by names using case-insensitive comparison--> <th tal:name</th> <!--Default sortRule is fine for IP sorting--> <th tal:ip</th> </tr>
Now the data themselves. In order to have our rows alternate colors, we'll use the useful TALES attribute
odd, which is True for every other item in a
tal:repeat loop.
<tal:block tal: <tr tal: <td class="tablevalues"> <a class="tablevalues" href="href" tal:device </a> </td> <td class="tablevalues" tal:ip</td> </tr> </tal:block> </tbody>
Finally, let's add the navigation tools we need and close off our tags.
<tr> <td colspan="2" class="tableheader"> <span metal: </td> </tr> </table> </form>
Note
This procedure is valid for 2.x versions and for re-skinned pages in version 3.x. See the chapter titled "ZenPack Conversion Tasks for 3.0" for additional compatibility information.
But what if you want to be able to edit devices from this table? The process is simple. First, you add a checkbox to the first column of your device list:
<td class="tablevalues" align="left"> <!--Now add your checkbox, defining the list of devices as "deviceNames"--> <input tal: <!--Then the first column contents as above--> <a...>device</a> </td>
Now that we can choose devices from the list, we need the controls to edit them. In this case, we'll use a macro defining controls that allow a device to be moved to a different device class. Just add the macro call to the end of your table:
... </tr> <!--Add controls here--> <tal:block tal: <!--This macro includes the <tr> tag, so we need to pass it colspan--> <span metal: </tal:block> </table> </form>
Note
This procedure is valid for 2.x versions and for re-skinned pages in version 3.x. See the chapter titled "ZenPack Conversion Tasks for 3.0" for additional compatibility information.
Creating a new Edit Form.
Add form input fields
Add a boolean type:
... <select class="tablevalues" tal: <option tal: </select> ...
This block of code creates a select dropdown with two options:
True and
False. The select dropdown is pre-populated with the value returned by
getMyBooleanProperty(). The value of this form field will be stored in the attribute MyBooleanProperty.
Add a text box type:
... <textarea class="tablevalues" rows='5' cols="33" tal: </textarea> ...
This block of code creates a text box. The text box is pre-populated with the string value returned by
getMyTextBoxProperty(). The value of this form field will be stored in the attribute MyTextBoxProperty.
Add a text type:
... <input class="tablevalues" type="text" size="40" tal: ...
This block of code creates a text field. The text field is pre-populated with the string value returned by
getMyStringProperty(). The value of this form field will be stored in the attribute MyStringProperty.
Add a select dropdown type:
... <select class="tablevalues" tal: <option tal: </select> ...
This block of code creates a select dropdown where the option value and displayed option string are the same. A list of option values are returned by
getMySelectPropertyOptions. The select dropdown is pre-populated by the value in getMySelectProperty. The value of this form field will be stored in the attribute MySelectProperty.
... <select class="tablevalues" tal: <option tal: </select> ...
This block of code creates a select dropdown where the option value is an integer and displayed option is a string. A list of tuples containing the option values and displayed option string are returned by
getMySelectPropertyOptionTuples. The select dropdown is pre-populated by the value in getMySelectProperty. The value of this form field will be stored in the attribute MySelectProperty.
Add the form action
... <form id='MyForm' method="post" tal: ...
The form action should be set to a function (i.e.
here/absolute_url_path) that returns the path to the object being edited.
... <input class="tableheader" type="submit" name="saveProperties:method" value=" Save " /> ...
This submit button name will be in the format
saveProperties:method.
saveProperties is the method name that will be executed when the submit button is clicked.
Add the
save() method
... def saveProperties(self, REQUEST=None): """Save all Properties found in the REQUEST.form object. """ for name, value in REQUEST.form.items(): if getattr(self, name, None) != value: self.setProperty(name, value) return self.callZenScreen(REQUEST) ...
Create a
saveProperty() method in the effective object. | http://community.zenoss.org/docs/DOC-10099 | CC-MAIN-2015-11 | refinedweb | 2,855 | 50.94 |
Bug #14479closed
Exceptions raised from a :call tracepoint can sometimes be "rescued" inside the method
Description
This is a Ruby 2.5 regression.
If you raise an exception from a :call tracepoint, it can, in certain circumstances, be caught by a rescue block inside the called method. Here is an illustration:
def foo begin puts "hi" rescue => e puts "In rescue" end end TracePoint.trace :call do |tp| raise "kaboom" if tp.method_id == :foo end foo
In Ruby 2.4.3, this results in the exception as expected.
In Ruby 2.5.0, this results in "in rescue" being printed to the console. The rescue block inside method "foo" is catching the exception.
This is highly dependent on the positioning of the rescue block in the method, and may be related to which bytecode is flagged with the trace flag. For example, the following method "foo" raises the exception in Ruby 2.5.0:
def foo puts "hi" begin puts "hi" rescue => e puts "In rescue" end end
Here are three more interesting variants that should be considered:
def foo if true begin puts "hi" rescue => e puts "In rescue" end end end
Prints "in rescue"
def foo if false begin puts "hi" rescue => e puts "In rescue" end end end
Raises the exception
def foo if false begin puts "hi" rescue => e puts "In rescue" end end 1 end
Segfaults!
Updated by jeremyevans0 (Jeremy Evans) 4 months ago
This is still an issue in the master branch. I've submitted a pull request that should fix the problem:
Updated by ko1 (Koichi Sasada) about 1 month ago
Updated by ko1 (Koichi Sasada) 29 days ago
- Status changed from Open to Rejected
I want to reject this issue because of the following reasons:
- TracePoint block shouldn't raise an exception. TracePoint should not hurt non-hook (99.99..% case) execution if possible. I don't think this difference is not a matter.
- Now Ruby 2.4 is obsolete version (current last supported version is Ruby 2.6), so it also the change from stable versions if it was changed. I propose to change the definition of
callevent is "It invokes just before the first line in a method" from Ruby 2.4.
def foo # invoike here before 2.4 begin # invoke here from 2.5 foo rescue ... end end
Please reopen this issue if it is needed.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/14479 | CC-MAIN-2021-43 | refinedweb | 402 | 73.47 |
When creating a GUI interface with the Tkinter, the windows’ size is usually a result of the size and location of the windows’ components. However, this is regulable by providing a specific width and height.
You can change the size of a Tkinter window by passing a width and height string as an input to the geometry() method. Using sample programs, we will learn how to set a particular window size for our Tkinter GUI application in Python in this article.
Size and Position of the Python Tkinter Window
The height and width of a window are referred to as its size. The window’s position on the screen is referred to as its window position. This section will look at how to change the window and position it in Python Tkinter. The method used for both goals is geometry.
The syntax is as follows:
parent_window.geometry("size_of_width x size_of_height + position_x + position_y")
- width_size: only accepts integer values and controls the window’s horizontal space.
- height_size: only accepts integer values and determines the window’s vertical space.
- x_position: only accepts integer values and moves the window vertically.
- y_position: only takes integer values and moves the window horizontally.
The window is formed with 350 widths and 450 heights in this code. It is located at x=700 and y=200. Consequently, whenever the code is run, it appears slightly to the right of the center.
from tkinter import * window_size = Tk() window_size.title('Codeunderscored') window_size.geometry('350x450+700+200') Label( window_size, text="Code means lot more \n than you could possibly comprehend", font=('Times',20) ).pack(fill=BOTH, expand=True) window_size.mainloop()
Geometry() – syntax
When using Tkinter in Python, the geometry() function on the Tk() class variable is used to set the window size.
#import statement from tkinter import * #create GUI Tk() variable code_gui = Tk() #set window size code_gui.geometry("widthxheight")
It would be best to substitute width and height with numbers corresponding to the window’s width and height. In the argument given to geometry(), notice an x between width and height. Please note that the width x height of the window does not include the window title.
Example 1: Set Window Size in Python Tkinter
This example will utilize the geometry() method to provide the Tk() window with a 500 by 200 pixels fixed size. The program is written in Python.
from tkinter import * code_gui = Tk(className='Codeunderscored Python Examples - Window Size') # set window size code_gui.geometry("500x200") code_gui.mainloop()
Example 2: Change the size of your GUI application’s window.
Now, let’s adjust the width and height parameters passed to the geometry() function. Let’s say 300 x 300.
from tkinter import * code_gui = Tk(className='Codeunderscored Python Examples - Window Size') # set window size code_gui.geometry("300x300") code_gui.mainloop()
How can I make a Tkinter window with a fixed size?
The goal is to create a Tkinter window with a fixed size. You cannot alter a window with a limited size to suit the user’s needs; its dimensions are fixed. On the other hand, you can resize a standard window.
Approach
Declare a Tkinter object by importing the module. Using the maxsize() and minsize() functions, create a fixed-size window.
Syntax:
minsize(height, width)
In this case, both the height and width are measured in pixels. The minsize() method in Tkinter sets the Tkinter window’s minimum size. A user can use this method to reduce the window’s initialized size to its smallest size while still being able to maximize and scale it.
maxsize(height, width)
This method is used to set the root window’s maximum size. The user will still be able to minimize the window to its smallest size.
Program 1: Making a standard window
# Import module from tkinter import * # Create object root = Tk() # Adjust size root.geometry("400x400") # Execute tkinter root.mainloop()
Program 2: Creating a window with a constant size
# Import module from tkinter import * # Create object root = Tk() # Adjust size root.geometry("400x400") # set minimum window size value root.minsize(400, 400) # set maximum window size value root.maxsize(400, 400) # Execute tkinter root.mainloop()
Fullscreen Python Tkinter Window
We’ll learn to create the Python Tkinter window full screen in this part. For your information, few options exist for making the app fullscreen by default. The first approach necessitates the use of the screen’s resolution.
In addition, you can directly specify height and width if you know the screen resolution. For example, in my situation, the resolution is 1920 x 1080, therefore,
window_screen.geometry("1920x1080)
The window’s width is 1920 pixels, and the height is 1080 pixels. The other alternative is changing the parent window’s property to True for full screen.
- In this technique, the screen is set to full screen, regardless of the display size.
- In other words, the software takes up the entire screen on all smartphones.
- The disadvantage of this method is that you must manually build close and other buttons.
window_screen.attributes('-fullscreen', True)
We’ve shown an example of the second technique in this code, where we’ve set the entire screen to True. That is in regards to the attribute method.
from tkinter import * window_screen = Tk() window_screen.title('Codeunderscored') window_screen.attributes('-fullscreen', True) Label( window_screen, text="Code is very interesting \n all you need is patience", font=('Times', 20) ).pack(fill=BOTH, expand=True) window_screen.mainloop()
Fixed Window Size in Python Tkinter
We sometimes wish to fix the window size while working on the application so that widgets display in the same area where you have fixed them. This section will learn how to set fixed window sizes using Python Tkinter.
- We’ll achieve this by bypassing (0,0) the resizable method. In the case of the width and height, 0,0 denotes False.
- The resizable method tells the window manager whether or not you can resize this window.
- It only accepts boolean values.
Syntax:
window_screen.resizable(0, 0) window_screen.resizable(False, False) window_screen.resizable(width=False, Height=False)
All of these formulations accomplish the same goal. Any of these methods are at your disposal for your use. The size of a window has been fixed in this code. This method is also known as screen locking. The user will not be able to modify the screen’s size.
from tkinter import * window_screen = Tk() window_screen.title('Codeunderscored') window_screen.geometry('350x450+700+200') window_screen.resizable(0,0) Label( window_screen, text="Code is more meaningful \n if you can resonate with it", font=('Times', 20) ).pack(fill=BOTH, expand=True) window_screen.mainloop()
Two photos are displayed in this output. On the one hand, the first image shows a conventional window that can be resized by clicking on the maximize square button; on the other hand, the second image shows a locked window. As a result, the window’s size is fixed, and the user will not be able to adjust it.
Tkinter Lock Window Size in Python
We sometimes wish to fix the window size while working on the project so that widgets show in the same place you installed them. As a result, we’ll learn how to lock window size in Python Tkinter in this part.
- The user will be unable to modify the window size if the window is locked.
- We’ll achieve this by bypassing (0,0) the resizable method. In the case of the width and height, 0,0 denotes False.
- The resizable method tells the window manager whether or not you can resize this window.
- It only accepts boolean values.
- It is the same as the previous section on the fixed window size in Python Tkinter.
Syntax:
- window_screen.resizable(0, 0)
- window_screen.resizable(False, False)
- window_screen.resizable(width=False, Height=False)
All of these formulations accomplish the same goal. The choice is yours!
The window in this output is locked, which means you cannot resize it. The first image on the left is an example of a typical window in which the user can adjust the size of the window. Still, the image on the right has the maximize button disabled, indicating that the user is not permitted to change the size of the window.
Another technique to lock window size is to set minsize() and maxsize() to the same value as the geometry size.
In the coming part, we will explore them thoroughly.
Minimum Window Size for Python Tkinter
We’ll delve into establishing the minimum window size in Python Tkinter in this section. The window’s minimum size determines the number of windows that can be decreased. Without this, you can shrink the window to any size. The minsize() process sets the window’s maximum size, after which it will not shrink.
Syntax:
window_screen.minsize(width_minsize, height_minsize)
- width_minsize() takes an integer value and prevents shrinking from the east to the west.
- height_minsize() takes an integer number and stops the shrinking from the south to the north.
We’ve only enabled the user’s capability to reduce the window by 50 pixels in this code. As you can see, the geometry is 300400, with a minimum width of 250 pixels and a maximum height of 350 pixels. The difference between the two is 50.
Consequently, the window can be shrunk 50 percent from left to right and 50 percent from bottom to top.
from tkinter import * window_size = Tk() window_size.title('Codeunderscored') window_size.geometry('300x400') window_size.minsize(250, 350) Label( window_size, text="Coding is alot fun \n if you are not implored to", font=('Times', 23), bg = '#156475', fg = '#fff' ).pack(fill=BOTH, expand=True) window_size.mainloop()
Three images are the results displayed in this output. First, when the code is executed, the top one shows the window in its original state.
.
On the other hand, the second image on the left shows that when a user wants to shrink or reduce the window from the right side to the left side, he may only do so by 50 pixels, which is true for the height. As it turns out, in Python Tkinter, this is how we limit the window size.
Tkinter Max Window Size in Python
This section will teach us to set the maximum window size in Python Tkinter. The maximum window size influences how much the window can expand. In addition, the user can expand the window to any level if this is not enabled. The maxsize method also provides the maximum size beyond which the window will not expand.
Syntax:
window_size.maxsize(width_maxsize, height_maxsize)
- width_maxsize() takes an integer number and prevents the expansion from traveling west to east.
- height_maxsize() takes an integer number and stops the expansion from moving north to south.
We’ve only enabled users to expand the window size by 50 pixels in this code. As you can see, the geometry is 300400, with a maximum width of 350 pixels and a maximum height of 450 pixels. The difference between the two is 50. As a result, the window can be expanded by 50 percent from right to left and 50 percent from top to bottom.
from tkinter import * window_size = Tk() window_size.title('Codeunderscored') window_size.config() window_size.geometry('300x400') window_size.maxsize(350, 450) Label( window_size, text="Coding means the world \n if you reach the level I am in life now", font=('Times', 20), bg = '#156475', fg = '#fff' ).pack(fill=BOTH, expand=True) window_size.mainloop()
The resultant three images are displayed in this output. Initially, when the code is executed, the top one shows the window in its original state-. On the flip side, the second image on the left shows that when a user wants to extend the window size from left to right, he may only do so for 50 pixels. The same is true for height. Interestingly, in Python Tkinter, this is how we limit window expansion.
Example: Showing how to set fixed window size in Tkinter
Sometimes it’s necessary to resize the window so that widgets appear in the same place you configured them. As a result, we’ll learn how to use Python Tkinter to set fixed window size in this section. We’ll do this by calling the resizable method with the value (0,0).
False denotes width and height of 0,0. The resizable technique indicates whether or not this window can change its size to the window manager. Only Boolean values are accepted. Here’s how to use the code to modify the size of a window:
from tkinter import * window_size = Tk() window_size.title('Codeunderscored Tkinter Set Window Size') window_size.geometry('430x310+650+180') window_size.resizable(0,0) Label( window_size, text="Coding in Python is more interactive, \n and clear than other OOP-languages I have used.", font=('Times', 16) ).pack(fill=BOTH, expand=True) window_size.mainloop()
The box has a locked window, as you can see below. The window size is fixed, and the user will be unable to change it.
Example: How to make a Python Tkinter window fullscreen
We’ll explore making the Python Tkinter window full screen in this example. To make the program fullscreen by default, you have a few alternatives. The first method necessitates the resolution of the screen. You can set the height and width explicitly if you know the screen resolution.
Another alternative is to set the parent window’s full-screen attribute to True. The screen is set to full screen regardless of the display size. In other words, the software takes over the entire screen. This approach is disadvantageous of requiring the manual operation of the close and other buttons. In the following code, we’ve set the whole screen to True:
from tkinter import * window_size = Tk() window_size.title(' Codeunderscored Tkinter Set Window Size ') window_size.attributes('-fullscreen', True) Label( window_size, text ="Coding in Python is enjoyable, the syntax is absolutely fantastic.", font=('Times', 24) ).pack(fill=BOTH, expand=True) window_size.mainloop()
In this output, Python Tkinter is running in full-screen mode. The standard toolbars for shutting down, shrinking, and expanding the screen are missing.
Summary
We learned to set the window size for a GUI application built with Tkinter in this Python Examples tutorial. This tutorial taught us many methods for adjusting Python Tkinter window size.
The size of the window in a Tkinter GUI is usually dictated by the size and location of the components in the window. You can, however, control the window’s size by specifying its width and height. To modify the size of the Tkinter window, use the geometry() function on the window with the width and height strings as arguments. | https://www.codeunderscored.com/tkinter-set-window-size-explained-with-examples/ | CC-MAIN-2022-21 | refinedweb | 2,415 | 57.77 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.